Sample records for applications benchmark exercise

  1. Application of Shape Similarity in Pose Selection and Virtual Screening in CSARdock2014 Exercise.

    PubMed

    Kumar, Ashutosh; Zhang, Kam Y J

    2016-06-27

    To evaluate the applicability of shape similarity in docking-based pose selection and virtual screening, we participated in the CSARdock2014 benchmark exercise for identifying the correct docking pose of inhibitors targeting factor Xa, spleen tyrosine kinase, and tRNA methyltransferase. This exercise provides a valuable opportunity for researchers to test their docking programs, methods, and protocols in a blind testing environment. In the CSARdock2014 benchmark exercise, we implemented an approach that uses ligand 3D shape similarity to facilitate docking-based pose selection and virtual screening. We showed that ligand 3D shape similarity between bound poses could be used to identify the native-like pose from an ensemble of docking-generated poses. Our method correctly identified the native pose as the top-ranking pose for 73% of test cases in a blind testing environment. Moreover, the pose selection results revealed an excellent correlation between ligand 3D shape similarity scores and RMSD to the X-ray crystal structure ligand. In the virtual screening exercise, the average RMSD for our pose prediction was found to be 1.02 Å, one of the top performances achieved in the CSARdock2014 benchmark exercise. Furthermore, the inclusion of shape similarity improved the virtual screening performance of docking-based scoring and ranking. The coefficient of determination (r²) between experimental activities and docking scores for 276 spleen tyrosine kinase inhibitors was found to be 0.365 but reached 0.614 when the ligand 3D shape similarity was included.
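    The rescoring idea in this record can be sketched in a few lines: compute the coefficient of determination between experimental activities and docking scores, then again after folding in shape similarity. The numbers, the linear combination, and its weight below are all illustrative assumptions; the abstract does not specify how the two scores were combined.

```python
from math import sqrt

def r_squared(x, y):
    """Coefficient of determination (squared Pearson r) between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return (cov / sqrt(vx * vy)) ** 2

# Toy data: experimental activities, docking scores, and 3D shape similarities
# to a reference ligand, for five hypothetical inhibitors.
activities  = [6.2, 7.1, 5.4, 8.0, 6.9]
dock_scores = [-7.0, -8.1, -6.2, -9.3, -7.6]
shape_sims  = [0.55, 0.71, 0.48, 0.83, 0.66]

# Hypothetical rescoring: linear combination of docking score and shape similarity.
combined = [d - 5.0 * s for d, s in zip(dock_scores, shape_sims)]

print(r_squared(activities, dock_scores))
print(r_squared(activities, combined))
```

    In practice the weight on the shape term would be fit on a training set rather than fixed by hand as here.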

  2. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
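    The competency definition used in this study (75% of the mean expert score) is simple to reproduce. The sketch below is a minimal illustration with invented scores, not the study's data:

```python
def competency_benchmark(expert_scores, fraction=0.75):
    """Benchmark score set at a fraction of the mean expert score (75% in the study)."""
    return fraction * (sum(expert_scores) / len(expert_scores))

# Hypothetical simulator metric scores for four expert surgeons on one exercise.
experts = [88.0, 92.0, 85.0, 95.0]
benchmark = competency_benchmark(experts)
print(benchmark)  # 67.5
```

    A trainee would be judged competent on this metric once their score meets or exceeds the benchmark.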

  3. Source-term development for a contaminant plume for use by multimedia risk assessment models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments for use at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  4. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)
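    A code-to-code comparison of the kind described here, which assesses how sensitive a result is to the method employed, often starts by expressing each participant's submission as a deviation from the all-participant mean. A minimal sketch with hypothetical k-eff submissions (the code names and values are invented):

```python
def deviations_from_mean(results):
    """Relative deviation (%) of each submitted result from the all-participant mean."""
    mean = sum(results.values()) / len(results)
    return {code: 100.0 * (v - mean) / mean for code, v in results.items()}

# Hypothetical k-eff submissions for one steady-state exercise.
submissions = {"codeA": 1.0012, "codeB": 0.9988, "codeC": 1.0000}
print(deviations_from_mean(submissions))
```

    Outliers in such a table point either to a genuine method sensitivity or to a user modeling error, which the first exercises of a benchmark are designed to flush out.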

  5. Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking

    ERIC Educational Resources Information Center

    Proulx, Roland

    2007-01-01

    The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…

  6. Benchmarking high performance computing architectures with CMS’ skeleton framework

    NASA Astrophysics Data System (ADS)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  7. Establishing objective benchmarks in robotic virtual reality simulation at the level of a competent surgeon using the RobotiX Mentor simulator.

    PubMed

    Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran

    2018-05-01

    To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator, suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with experience ranging from novice to expert completed the exercises. Competency was established as the 25th centile of the mean advanced-intermediate score. Three basic skill exercises and two advanced skill exercises were used. The study was conducted at King's College London. 84 novices, 26 beginner intermediates, 9 advanced intermediates and 4 experts took part in this retrospective observational study. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons are able to achieve the benchmarks across all exercises in the majority of metrics. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks at the standard of a competent robotic surgeon that are challenging yet attainable. They can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured and progressive manner through five exercises, and providing clearly defined targets that ensure a universal training standard across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
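    The benchmark definition in this record, the 25th centile of the advanced-intermediate score distribution, can be sketched with a small percentile helper. The scores below are invented for illustration, and the linear-interpolation rule is an assumption; the paper does not state which percentile convention was used.

```python
def percentile_benchmark(scores, pct=25.0):
    """Benchmark at the pct-th percentile of a score distribution (linear interpolation)."""
    s = sorted(scores)
    k = (len(s) - 1) * pct / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Hypothetical advanced-intermediate scores for one simulator metric.
scores = [70.0, 75.0, 80.0, 85.0, 90.0, 95.0, 60.0, 65.0, 72.0]
print(percentile_benchmark(scores))  # 70.0
```

    Setting the bar at the 25th centile of an experienced group, rather than at an expert mean, keeps the target attainable for trainees while still discriminating on the harder exercises.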

  8. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-11-23

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  9. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  10. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed-form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard), but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.
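    The role closed-form solutions play in this kind of benchmarking can be illustrated on the simplest possible case: checking a finite-difference diffusion solver against the known erfc solution for diffusion into a semi-infinite domain. The grid sizes and tolerance below are arbitrary; this is a toy verification, not one of the benchmark problems described above.

```python
from math import erfc, sqrt

def analytic(x, t, D):
    """Closed-form concentration for 1-D diffusion into a semi-infinite domain, C(0,t)=1."""
    return erfc(x / (2.0 * sqrt(D * t)))

def ftcs_diffusion(nx, dx, dt, steps, D):
    """Explicit finite-difference (FTCS) solution with C=1 held at the left boundary."""
    c = [0.0] * nx
    c[0] = 1.0
    for _ in range(steps):
        new = c[:]
        for i in range(1, nx - 1):
            new[i] = c[i] + D * dt / dx**2 * (c[i + 1] - 2 * c[i] + c[i - 1])
        c = new
    return c

D, dx, dt, nx, steps = 1.0, 0.1, 0.004, 60, 250   # stable: D*dt/dx^2 = 0.4 <= 0.5
num = ftcs_diffusion(nx, dx, dt, steps, D)
t = steps * dt  # final time 1.0
err = max(abs(num[i] - analytic(i * dx, t, D)) for i in range(nx))
print(err)
```

    A code-qualification exercise works the same way in principle: agreement within a stated tolerance against the accepted solution, except that the accepted solution is itself a numerical consensus when no closed form exists.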

  11. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
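    A minimal version of the waveform-misfit ranking described here: score each candidate source model by a normalized L2 misfit against the observed waveform and sort. The waveforms and team names are invented, and the SIV project uses a broader set of misfit criteria and statistical comparison tools than this single norm.

```python
from math import sqrt

def l2_misfit(observed, synthetic):
    """Normalized L2 waveform misfit between observed and synthetic seismograms."""
    num = sum((o - s) ** 2 for o, s in zip(observed, synthetic))
    den = sum(o ** 2 for o in observed)
    return sqrt(num / den)

def rank_models(observed, candidates):
    """Rank candidate source models (name -> synthetic waveform) by misfit, best first."""
    return sorted(candidates, key=lambda name: l2_misfit(observed, candidates[name]))

# Toy observed waveform and two hypothetical team submissions.
obs = [0.0, 1.0, 0.5, -0.5, 0.0]
models = {
    "team_A": [0.0, 0.9, 0.6, -0.4, 0.1],
    "team_B": [0.2, 0.5, 0.1, -0.9, 0.3],
}
print(rank_models(obs, models))  # ['team_A', 'team_B']
```

    Because the inversion is non-unique, a low waveform misfit alone does not certify a model; that is why the SIV comparisons also quantify model-to-model (dis)similarity.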

  12. A suite of exercises for verifying dynamic earthquake rupture codes

    USGS Publications Warehouse

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

    We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.

  13. Benchmarking of Decision-Support Tools Used for Tiered Sustainable Remediation Appraisal.

    PubMed

    Smith, Jonathan W N; Kerrison, Gavin

    2013-01-01

    Sustainable remediation comprises soil and groundwater risk-management actions that are selected, designed, and operated to maximize net environmental, social, and economic benefit (while assuring protection of human health and safety). This paper describes a benchmarking exercise to comparatively assess potential differences in environmental management decision making resulting from application of different sustainability appraisal tools ranging from simple (qualitative) to more quantitative (multi-criteria and fully monetized cost-benefit analysis), as outlined in the SuRF-UK framework. The appraisal tools were used to rank remedial options for risk management of a subsurface petroleum release that occurred at a petrol filling station in central England. The remediation options were benchmarked using a consistent set of soil and groundwater data for each tier of sustainability appraisal. The ranking of remedial options was very similar in all three tiers, and an environmental management decision to select the most sustainable options at tier 1 would have been the same decision at tiers 2 and 3. The exercise showed that, for relatively simple remediation projects, a simple sustainability appraisal led to the same remediation option selection as more complex appraisal, and can be used to reliably inform environmental management decisions on other relatively simple land contamination projects.
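    A tier-2 style multi-criteria appraisal of the kind compared in this study reduces to a weighted scoring exercise over environmental, social, and economic criteria. The options, criteria scores, and weights below are hypothetical, not the values from the petrol filling station case:

```python
def weighted_ranking(options, weights):
    """Rank remedial options (name -> {criterion: score}) by weighted sum, best first."""
    def total(name):
        return sum(weights[c] * options[name][c] for c in weights)
    return sorted(options, key=total, reverse=True)

# Hypothetical multi-criteria scores (higher is better) for three remedial options.
options = {
    "excavation":     {"env": 2, "social": 4, "economic": 1},
    "pump_and_treat": {"env": 3, "social": 3, "economic": 3},
    "mna":            {"env": 4, "social": 2, "economic": 5},  # monitored natural attenuation
}
weights = {"env": 0.4, "social": 0.3, "economic": 0.3}
print(weighted_ranking(options, weights))  # ['mna', 'pump_and_treat', 'excavation']
```

    The paper's finding was that, for a simple site, this kind of quantitative tier produced the same option ranking as a qualitative tier-1 appraisal, so the extra quantification effort changed no decision.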

  14. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The PHISICS modules currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU) and a cross-section interpolation module (MIXER). The INSTANT module is the most developed of the three. Basic functionalities are ready to use, but the code is still in continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal hydraulics system code RELAP5-3D, to enable full core and system modeling. This makes it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics than the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE) provides. In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics/thermal-hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.

  15. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking

    PubMed Central

    Kreibich, Heidi; Franco, Guillermo; Marechal, David

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first-order validations are difficult to accomplish, so model comparisons in terms of benchmarking are essential. Benchmarking checks whether the models are informed by existing data and knowledge and whether the assumptions made in the models align with that knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous, and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal.
    As an example, this paper presents an approach for a quantitative comparison of disparate models via the reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges that exist in model harmonization and in the application of the inventory in a benchmarking framework are discussed. PMID:27454604

  16. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    PubMed

    Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first-order validations are difficult to accomplish, so model comparisons in terms of benchmarking are essential. Benchmarking checks whether the models are informed by existing data and knowledge and whether the assumptions made in the models align with that knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous, and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal.
    As an example, this paper presents an approach for a quantitative comparison of disparate models via the reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges that exist in model harmonization and in the application of the inventory in a benchmarking framework are discussed.
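    The harmonization step described here, reduction to the joint input variables of all models, can be sketched for two depth-damage curves that share water depth as their only common input. Both curves and the depth grid are invented for illustration:

```python
def compare_on_common_input(model_a, model_b, depths):
    """Evaluate two depth-damage functions on a shared water-depth grid and
    return the mean absolute difference in predicted damage fraction."""
    diffs = [abs(model_a(d) - model_b(d)) for d in depths]
    return sum(diffs) / len(diffs)

# Hypothetical vulnerability curves reduced to their joint input: water depth (m).
def linear_curve(d):
    return min(1.0, 0.25 * d)       # e.g. a linear depth-damage model

def root_curve(d):
    return min(1.0, 0.5 * d ** 0.5) # e.g. a square-root depth-damage model

grid = [0.5, 1.0, 2.0, 3.0, 4.0]
print(compare_on_common_input(linear_curve, root_curve, grid))
```

    Real vulnerability functions depend on many more variables (building type, contamination, duration), which is exactly why the reduction to the shared inputs is needed before any fair comparison.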

  17. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…

  18. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used by services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. It yielded valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  19. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.

  20. RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, G.; Epiney, A. S.

    2012-07-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2. (authors)

  21. OECD-NEA Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentine, Timothy; Rohatgi, Upendra S.

    High-fidelity, multi-physics modeling and simulation (M&S) tools are being developed and utilized for a variety of applications in nuclear science and technology and show great promise in their abilities to reproduce observed phenomena for many applications. Even with the increasing fidelity and sophistication of coupled multi-physics M&S tools, the underpinning models and data still need to be validated against experiments that may require a more complex array of validation data because of the great breadth of the time, energy and spatial domains of the physical phenomena that are being simulated. The Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation (MPEBV) of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) was formed to address the challenges with the validation of such tools. The work of the MPEBV expert group is shared among three task forces to fulfill its mandate and specific exercises are being developed to demonstrate validation principles for common industrial challenges. This paper describes the overall mission of the group, the specific objectives of the task forces, the linkages among the task forces, and the development of a validation exercise that focuses on a specific reactor challenge problem.

  2. PHISICS/RELAP5-3D RESULTS FOR EXERCISES II-1 AND II-2 OF THE OECD/NEA MHTGR-350 BENCHMARK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard

    2016-03-01

    The Idaho National Laboratory (INL) Advanced Reactor Technologies (ART) High-Temperature Gas-Cooled Reactor (HTGR) Methods group currently leads the Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 benchmark. The benchmark consists of a set of lattice-depletion, steady-state, and transient problems that can be used by HTGR simulation groups to assess the performance of their code suites. The paper summarizes the results obtained for the first two transient exercises defined for Phase II of the benchmark. The Parallel and Highly Innovative Simulation for INL Code System (PHISICS), coupled with the INL system code RELAP5-3D, was used to generate the results for the Depressurized Conduction Cooldown (DCC) (exercise II-1a) and Pressurized Conduction Cooldown (PCC) (exercise II-2) transients. These exercises require the time-dependent simulation of coupled neutronics and thermal-hydraulics phenomena, and utilize the steady-state solution previously obtained for exercise I-3 of Phase I. This paper also includes a comparison of the benchmark results obtained with a traditional system code “ring” model against a more detailed “block” model that includes kinetics feedback on an individual block level and thermal feedbacks on a triangular sub-mesh. The higher spatial fidelity that can be obtained by the block model is illustrated with comparisons of the maximum fuel temperatures, especially in the case of natural convection conditions that dominate the DCC and PCC events. Differences of up to 125 K (or 10%) were observed between the ring and block model predictions of the DCC transient, mostly due to the block model’s capability of tracking individual block decay powers and more detailed helium flow distributions. In general, the block model only required DCC and PCC calculation times twice as long as the ring models, and it therefore seems that the additional development and calculation time required for the block model is worth the gain in spatial resolution.

  3. Staff confidence in dealing with aggressive patients: a benchmarking exercise.

    PubMed

    McGowan, S; Wynaden, D; Harding, N; Yassine, A; Parker, J

    1999-09-01

    Interacting with potentially aggressive patients is a common occurrence for nurses working in psychiatric intensive care units. Although the literature highlights the need to educate staff in the prevention and management of aggression, often little or no training is provided by employers. This article describes a benchmarking exercise conducted in psychiatric intensive care units at two Western Australian hospitals to assess staff confidence in coping with patient aggression. Results demonstrated that staff in the hospital where regular training was undertaken were significantly more confident in dealing with aggression. Following the completion of a safe physical restraint module at the other hospital, staff reported a significant increase in their level of confidence that either matched or bettered the results of their benchmark colleagues.

  4. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    NASA Astrophysics Data System (ADS)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulated and/or experimentally measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the nonlinear element in the problem with a priori knowledge about its position.

  5. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  6. Emerging Regulatory Regionalism in University Governance: A Comparative Study of China and Taiwan

    ERIC Educational Resources Information Center

    Mok, Ka Ho

    2010-01-01

    Well aware of the growing importance of the global university ranking exercises, many governments in East Asia have introduced different strategies to benchmark with leading universities in order to enhance the global competitiveness of their universities. With strong determination to do better in such global ranking exercises, universities…

  7. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  8. Detector Array Performance Estimates for Nuclear Resonance Fluorescence Applications

    NASA Astrophysics Data System (ADS)

    Johnson, Micah; Hall, J. M.; McNabb, D. P.

    2012-10-01

    There are a myriad of explorative efforts underway at several institutions to determine the feasibility of using photonuclear reactions to detect and assay materials of varying complexity and compositions. One photonuclear process that is being explored for several applications is nuclear resonance fluorescence (NRF). NRF is interesting because the resonant lines are unique to each isotope, and the widths are sufficiently narrow and the level densities sufficiently low so as not to cause interference. Therefore, NRF provides a means to isotopically map containers and materials. The choice of detector array is determined by the application and the source. We will present results from a variety of application studies of an assortment of detector arrays that may be useful. Our results stem from simulation and modeling exercises and benchmarking measurements. We will discuss the data requirements from basic scientific research that enables these application studies. We will discuss our results and the future outlook of this technology.

  9. The challenge of benchmarking health systems: is ICT innovation capacity more systemic than organizational dependent?

    PubMed

    Lapão, Luís Velez

    2015-01-01

    The article by Catan et al. presents a benchmarking exercise comparing Israel and Portugal on the implementation of Information and Communication Technologies in the healthcare sector. Special attention was given to e-Health and m-Health. The authors collected information via a set of interviews with key stakeholders. They compared two different cultures and societies, which have reached slightly different implementation outcomes. Although the comparison is very enlightening, it is also challenging. Benchmarking exercises present a set of challenges, such as the choice of methodologies and the assessment of the impact on organizational strategy. Precise benchmarking methodology is a valid tool for eliciting information about alternatives for improving health systems. However, many beneficial interventions, which benchmark as effective, fail to translate into meaningful healthcare outcomes across contexts. There is a relationship between results and the innovational and competitive environments. Differences in healthcare governance and financing models are well known; but little is known about their impact on Information and Communication Technology implementation. The article by Catan et al. provides interesting clues about this issue. Public systems (such as those of Portugal, the UK, Sweden, Spain, etc.) present specific advantages and disadvantages concerning Information and Communication Technology development and implementation. Meanwhile, private systems based fundamentally on insurance packages (such as those of Israel, Germany, the Netherlands, or the USA) present a different set of advantages and disadvantages, especially a more open context for innovation. Challenging issues from both the Portuguese and Israeli cases will be addressed. Clearly, more research is needed on both benchmarking methodologies and on ICT implementation strategies.

  10. Reactivity Insertion Accident (RIA) Capability Status in the BISON Fuel Performance Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Richard L.; Folsom, Charles Pearson; Pastore, Giovanni

    2016-05-01

    One of the Challenge Problems being considered within CASL relates to modelling and simulation of Light Water Reactor (LWR) fuel under Reactivity Insertion Accident (RIA) conditions. BISON is the fuel performance code used within CASL for LWR fuel under both normal operating and accident conditions, and thus must be capable of addressing the RIA challenge problem. This report outlines required BISON capabilities for RIAs and describes the current status of the code. Information on recent accident capability enhancements, application of BISON to an RIA benchmark exercise, and plans for validation to RIA behavior are included.

  11. LipidQC: Method Validation Tool for Visual Comparison to SRM 1950 Using NIST Interlaboratory Comparison Exercise Lipid Consensus Mean Estimate Values.

    PubMed

    Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A

    2017-12-19

    As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.

  12. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regimes of permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on provided test cases and/or to use controlled experiments for validation. Such inter-code comparisons can drive discussions on improving code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones. They range from simpler, purely thermal cases (benchmark T1) to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) is an interaction platform for the participants and hosts the test cases database at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented. We will mainly focus on the inter-comparison of participant results for the coupled cases (TH1, TH2 & TH3). Further perspectives of the exercise will also be presented. Extensions to more complex physical conditions (e.g. unsaturated conditions and geometrical deformations) are contemplated. In addition, 1D vertical cases of interest to the Climate Modeling community will be proposed. Keywords: Permafrost; Numerical modeling; River-soil interaction; Arctic systems; soil freeze-thaw

  13. Multidimensional Multiphysics Simulation of TRISO Particle Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Hales; R. L. Williamson; S. R. Novascone

    2013-11-01

    Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite-element based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries are straightforward. Additionally, the flexibility to easily include new physical and material models and the uncomplicated ability to couple to lower length scale simulations make BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.

  14. Research Assessment Exercise Results and Research Funding in the United Kingdom: A Comparative Analysis

    ERIC Educational Resources Information Center

    Chatterji, Monojit; Seaman, Paul

    2006-01-01

    A considerable sum of money is allocated to UK universities on the basis of Research Assessment Exercise performance. In this paper we analyse the two main funding models used in the United Kingdom and discuss their strengths and weaknesses. We suggest that the benchmarking used by the two main models has significant weaknesses, and propose an…

  15. Increasing Left and Right Brain Communication to Improve Learning for Tenth Grade Students in a Public School

    ERIC Educational Resources Information Center

    Richardson, Jennifer J.

    2011-01-01

    The purpose of this exploratory correlation research study was to determine if students who engaged in exercises designed to increase left and right brain hemisphere connections would score higher on identical tests than those who did not perform the exercises. Because the 2001 No Child Left Behind Act requires students to reach benchmarks of…

  16. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.

  17. PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Frederick, J. M.

    2016-12-01

    In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification ensures whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. 
Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
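    The kind of code-verification check described in this abstract, comparing a numerical solution against a closed-form analytical one under a pass/fail tolerance, can be sketched as follows. This is an illustrative assumption, not PFLOTRAN's actual QA harness: the 1D diffusion test case, the function names, and the 1% tolerance are all hypothetical.

    ```python
    import math

    def analytical_c(x, t, D):
        """Closed-form solution for 1D diffusion into a semi-infinite domain
        with a fixed unit concentration held at x = 0 (hypothetical test case)."""
        return math.erfc(x / (2.0 * math.sqrt(D * t)))

    def max_relative_error(numerical, xs, t, D):
        """Compare a numerical concentration profile against the analytical
        solution, as a verification suite might, point by point."""
        errs = []
        for c_num, x in zip(numerical, xs):
            c_ref = analytical_c(x, t, D)
            if c_ref > 1e-12:  # skip points where the reference is ~zero
                errs.append(abs(c_num - c_ref) / c_ref)
        return max(errs)

    # Stand-in "numerical" profile: the analytical solution with a 0.1% perturbation.
    D, t = 1e-9, 3600.0  # diffusivity (m^2/s) and elapsed time (s), illustrative
    xs = [i * 1e-4 for i in range(1, 20)]
    numerical = [analytical_c(x, t, D) * 1.001 for x in xs]
    assert max_relative_error(numerical, xs, t, D) < 0.01  # pass/fail criterion
    ```

    In a real suite the numerical profile would come from a simulation run, and each benchmark problem would document the governing equation and the analytical solution alongside the tolerance, per the Oberkampf and Trucano elements listed above.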

  18. The Earthquake‐Source Inversion Validation (SIV) Project

    USGS Publications Warehouse

    Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf

    2016-01-01

    Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.

  19. MoMaS reactive transport benchmark using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September of 2009; it is not taken from a real chemical system, but consists of realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests with easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results of the easy benchmark test case, which includes mixing of aqueous components and surface complexation. The surface complexations consist of monodentate and bidentate reactions, which introduce difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume. The selectivity coefficient becomes porosity-dependent for the bidentate reaction in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address the issue, and unit conversions were made properly to suit PFLOTRAN.

  20. Fingerprinting sea-level variations in response to continental ice loss: a benchmark exercise

    NASA Astrophysics Data System (ADS)

    Barletta, Valentina R.; Spada, Giorgio; Riva, Riccardo E. M.; James, Thomas S.; Simon, Karen M.; van der Wal, Wouter; Martinec, Zdenek; Klemann, Volker; Olsson, Per-Anders; Hagedoorn, Jan; Stocchi, Paolo; Vermeersen, Bert

    2013-04-01

    Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying Glacial Isostatic Adjustment (GIA) can be described by solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Here we present the results of a benchmark exercise of independently developed codes designed to solve the SLE. The study involves predictions of current sea level changes due to present-day ice mass loss. In spite of the differences in the methods employed, the comparison shows that a significant number of GIA modellers can reproduce their sea-level computations within 2% for well-defined, large-scale present-day ice mass changes. Smaller and more detailed loads require further dedicated benchmarking and high-resolution computation. This study shows how the details of the implementation and the input specifications are an important, and often underappreciated, aspect. Hence, this represents a step toward the assessment of the reliability of sea level projections obtained with benchmarked SLE codes.
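    An agreement figure like the 2% quoted above is, in essence, a statement about the spread of independent code predictions around a common value. A minimal sketch of such a metric, where the function name and the sample predictions are hypothetical rather than taken from the benchmark:

    ```python
    def relative_spread(values):
        """Maximum deviation of independent code predictions from their mean,
        expressed relative to that mean (one possible 'agreement' metric)."""
        mean = sum(values) / len(values)
        return max(abs(v - mean) for v in values) / abs(mean)

    # Hypothetical present-day sea-level change predictions (mm/yr)
    # from several independently developed SLE codes.
    predictions = [1.02, 1.00, 1.01, 0.99, 1.015]
    assert relative_spread(predictions) < 0.02  # "agreement within 2%"
    ```

    In practice such comparisons are made field by field (e.g. per grid point of the sea-level fingerprint), which is why implementation details and input specifications matter as much as the solver itself.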

  1. Biomechanical Modeling of the Deadlift Exercise to Improve the Efficacy of Resistive Exercise Microgravity Countermeasures

    NASA Technical Reports Server (NTRS)

    Jagodnik, K. M.; Thompson, W. K.; Gallo, C. A.; DeWitt, J. K.; Funk, J. H.; Funk, N. W.; Perusek, G. P.; Sheehan, C. C.; Lewandowski, B. E.

    2016-01-01

    During long-duration spaceflight missions, astronauts' exposure to microgravity without adequate countermeasures can result in losses of muscular strength and endurance, as well as loss of bone mass. As a countermeasure to this challenge, astronauts engage in resistive exercise during spaceflight to maintain their musculoskeletal function. The Hybrid Ultimate Lifting Kit (HULK) has been designed as a prototype exercise device for an exploration-class vehicle; the HULK features a much smaller footprint than previous devices such as the Advanced Resistive Exercise Device (ARED) on the International Space Station (ISS), which makes the HULK suitable for extended spaceflight missions in vehicles with limited volume. As current ISS exercise countermeasure equipment represents an improvement over previous generations of such devices, the ARED is being employed as a benchmark of functional performance. This project involves the development of a biomechanical model of the deadlift exercise, and is novel in that it is the first exercise analyzed in this context to include the upper limbs in the loading path, in contrast to the squat, single-leg squat, and heel raise exercises also being modeled by our team. OpenSim software is employed to develop these biomechanical models of humans performing resistive exercises to assess and improve the new exercise device designs. Analyses include determining differences in joint and muscle forces when using different loading strategies with the device, comparing and contrasting with the ARED benchmark, and determining whether the loading is sufficient to maintain musculoskeletal health. During data collection, the number of repetitions, load, cadence, stance, and grip width are controlled in order to facilitate comparisons between loading configurations. To date, data have been collected for two human subjects performing the deadlift exercise on the HULK device using two different loading conditions.
Recorded data include motion capture, electromyography (EMG), ground reaction forces, device load cell data, photos and videos, and anthropometric data. Work is ongoing to perform biomechanical analyses including inverse kinematics and inverse dynamics to compare different versions of the deadlift model in order to determine which provides an appropriate level of detail to study this exercise. This work is supported by the National Space Biomedical Research Institute through NCC 9-58.
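
    The inverse-dynamics step described above can be illustrated in miniature. The sketch below is not the team's OpenSim model; it is a minimal single-link example (hypothetical mass, length, and inertia values) showing how net joint torque follows from finite-difference angular accelerations and the Newton-Euler equation for a rigid link pinned at one end:

```python
import math

def inverse_dynamics_1dof(theta, dt, mass, length, inertia, g=9.81):
    """Net joint torque for a single rigid link pinned at one end.

    theta: joint angle samples (rad) taken at a fixed interval dt (s).
    Returns torques (N*m) at the interior samples, using central
    differences for the angular acceleration.
    """
    r = length / 2.0  # joint-to-center-of-mass distance (uniform link)
    torques = []
    for i in range(1, len(theta) - 1):
        # Central-difference angular acceleration
        alpha = (theta[i + 1] - 2.0 * theta[i] + theta[i - 1]) / dt ** 2
        # Newton-Euler: inertial term plus gravitational moment
        torques.append(inertia * alpha + mass * g * r * math.cos(theta[i]))
    return torques
```

    For a constant angle the inertial term vanishes and the torque reduces to the gravitational moment m·g·r·cos θ, a useful sanity check before moving to multi-segment models.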

  2. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  3. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  4. Interface COMSOL-PHREEQC (iCP), an efficient numerical framework for the solution of coupled multiphysics and geochemistry

    NASA Astrophysics Data System (ADS)

    Nardi, Albert; Idiart, Andrés; Trinchero, Paolo; de Vries, Luis Manuel; Molinero, Jorge

    2014-08-01

    This paper presents the development, verification and application of an efficient interface, denoted as iCP, which couples two standalone simulation programs: the general purpose Finite Element framework COMSOL Multiphysics® and the geochemical simulator PHREEQC. The main goal of the interface is to maximize the synergies between the aforementioned codes, providing a numerical platform that can efficiently simulate a wide range of multiphysics problems coupled with geochemistry. iCP is written in Java and uses the IPhreeqc C++ dynamic library and the COMSOL Java-API. Given the large computational requirements of the aforementioned coupled models, special emphasis has been placed on numerical robustness and efficiency. To this end, the geochemical reactions are solved in parallel by balancing the computational load over multiple threads. First, a benchmark exercise is used to test the reliability of iCP regarding flow and reactive transport. Then, a large-scale thermo-hydro-chemical (THC) problem is solved to show the code capabilities. The results of the verification exercise are successfully compared with those obtained using PHREEQC and the application case demonstrates the scalability of a large-scale model, at least up to 32 threads.
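
    The coupling strategy described above follows the familiar operator-splitting pattern: the transport step is solved globally, while the chemistry step is solved cell by cell and is therefore embarrassingly parallel. The sketch below illustrates only that pattern, not the iCP implementation; advect and react are hypothetical stand-ins for the COMSOL and PHREEQC solves, and all names and numbers are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def advect(conc, velocity):
    """Transport step stand-in for the COMSOL solve: upwind shift."""
    return [conc[0]] + conc[:-1] if velocity > 0 else conc

def react(c, rate=0.1):
    """Chemistry step stand-in for a per-cell PHREEQC call: first-order decay."""
    return c * (1.0 - rate)

def step(conc, velocity, n_threads=4):
    """One sequential non-iterative operator-splitting step:
    transport first, then cell-wise chemistry spread over threads,
    mirroring how the geochemical load can be balanced in parallel."""
    transported = advect(conc, velocity)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(react, transported))
```

    Because each cell's chemistry is independent within a time step, the thread count can grow with the mesh size, which is the property behind the reported scaling up to 32 threads.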

  5. RASSP signal processing architectures

    NASA Astrophysics Data System (ADS)

    Shirley, Fred; Bassett, Bob; Letellier, J. P.

    1995-06-01

    The rapid prototyping of application-specific signal processors (RASSP) program is an ARPA/tri-service effort to dramatically improve the process by which complex digital systems, particularly embedded signal processors, are specified, designed, documented, manufactured, and supported. The domain of embedded signal processing was chosen because it is important to a variety of military and commercial applications as well as for the challenge it presents in terms of complexity and performance demands. The principal effort is being performed by two major contractors, Lockheed Sanders (Nashua, NH) and Martin Marietta (Camden, NJ). For both, improvements in methodology are to be exercised and refined through the performance of individual 'Demonstration' efforts. The Lockheed Sanders' Demonstration effort is to develop an infrared search and track (IRST) processor. In addition, both contractors' results are being measured by a series of externally administered (by Lincoln Labs) six-month Benchmark programs that measure process improvement as a function of time. The first two Benchmark programs are designing and implementing a synthetic aperture radar (SAR) processor. Our demonstration team is using commercially available VME modules from Mercury Computer to assemble a multiprocessor system scalable from one to hundreds of Intel i860 microprocessors. Custom modules for the sensor interface and display driver are also being developed. This system implements either proprietary or Navy owned algorithms to perform the compute-intensive IRST function in real time in an avionics environment. Our Benchmark team is designing custom modules using commercially available processor chip sets, communication submodules, and reconfigurable logic devices. One of the modules contains multiple vector processors optimized for fast Fourier transform processing.
Another module is a fiberoptic interface that accepts high-rate input data from the sensors and provides video-rate output data to a display. This paper discusses the impact of simulation on choosing signal processing algorithms and architectures, drawing from the experiences of the Demonstration and Benchmark inter-company teams at Lockheed Sanders, Motorola, Hughes, and ISX.

  6. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouxelin, Pascal Nicolas; Strydom, Gerhard

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High-temperature gas-cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I-2c and the use of the cross section data in Exercise II-1a of the prismatic benchmark, which is defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best estimate results obtained for Exercise I-2a (fresh single-fuel block), Exercise I-2b (depleted single-fuel block), and Exercise I-2c (super cell) in addition to the first results of an investigation into the cross section generation effects for the super-cell problem. The two-dimensional deterministic code known as the New ESC-based Weighting Transport (NEWT) included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO VI.
The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise II-1a. The steady-state core calculations were simulated with the INL coupled-code system known as the Parallel and Highly Innovative Simulation for INL Code System (PHISICS) and the system thermal-hydraulics code known as the Reactor Excursion and Leak Analysis Program (RELAP5-3D), using the nuclear data libraries previously generated with NEWT. It was observed that significant differences in terms of multiplication factor and neutron flux exist between the various permutations of the Phase I super-cell lattice calculations. The use of these cross section libraries only leads to minor changes in the Phase II core simulation results for fresh fuel but shows significantly larger discrepancies for spent fuel cores. Furthermore, large incongruities were found between the SCALE NEWT and KENO VI results for the super cells, and while some trends could be identified, a final conclusion on this issue could not yet be reached. This report will be revised in mid-2016 with more detailed analyses of the super-cell problems and their effects on the core models, using the latest version of SCALE (6.2). The super-cell models seem to show substantial improvements in terms of neutron flux as compared to single-block models, particularly at thermal energies.

  8. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison phase results

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Rühaak, Wolfram

    2016-04-01

    Climate change impacts in permafrost regions have received considerable attention recently due to the pronounced warming trends experienced in recent decades, which are projected to continue into the future. Large portions of these permafrost regions are characterized by surface water bodies (lakes, rivers) that interact with the surrounding permafrost often generating taliks (unfrozen zones) within the permafrost that allow for hydrologic interactions between the surface water bodies and underlying aquifers and thus influence the hydrologic response of a landscape to climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model past and future evolution of such units (Kurylyk et al. 2014). However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. A benchmark exercise was initiated at the end of 2014. Participants from the USA, Canada, and Europe convened, representing 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones (Kurylyk et al. 2014; Grenier et al. in prep.; Rühaak et al. 2015). They range from simpler, purely thermal 1D cases to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) is an interaction platform for the participants and hosts the test case databases at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented. We will mainly focus on the inter-comparison of participant results for the coupled cases TH2 & TH3.
Both cases are essentially theoretical but include the full complexity of the coupled non-linear set of equations (heat transfer with conduction, advection, phase change and Darcian flow). The complete set of inter-comparison results shows that the participating codes all produce simulations which are quantitatively similar and correspond to physical intuition. From a quantitative perspective, they agree well over the whole set of performance measures. The differences among the simulation results will be discussed in more depth throughout the test cases especially for the identification of the threshold times for each system as these exhibited the least agreement. However, the results suggest that in spite of the difficulties associated with the resolution of the set of TH equations (coupled and non-linear structure with phase change providing steep slopes), the developed codes provide robust results with a qualitatively reasonable representation of the processes and offer a quantitatively realistic basis. Further perspectives of the exercise will also be presented.
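
    The coupled TH equations named above are stiff because latent heat is released over a narrow freezing interval. One standard numerical device for this is an apparent heat capacity that folds the latent heat into the conduction equation. The sketch below shows only that device on a 1D explicit grid; it is a hypothetical illustration with made-up parameter values, not any participating code:

```python
def step_temperature(T, dx, dt, k, c_app):
    """One explicit finite-difference step of 1D heat conduction.

    c_app(T) is the apparent volumetric heat capacity, which spikes over
    the freezing interval to absorb latent heat. End nodes are held fixed
    (Dirichlet boundaries).
    """
    new = list(T)
    for i in range(1, len(T) - 1):
        laplacian = (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dx ** 2
        new[i] = T[i] + dt * k * laplacian / c_app(T[i])
    return new

def c_apparent(temp, c_base=2.0e6, latent=3.0e8, t_freeze=0.0, width=0.5):
    """Apparent capacity: base value plus a latent-heat boost near freezing."""
    return c_base + (latent if abs(temp - t_freeze) < width else 0.0)
```

    The boosted capacity near 0 °C is what produces the steep slopes mentioned above: temperature barely moves there until the latent heat is exhausted, which is also why explicit schemes need small time steps in that band.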

  9. Application of the docking program SOL for CSAR benchmark.

    PubMed

    Sulimov, Alexey V; Kutov, Danil C; Oferkin, Igor V; Katkova, Ekaterina V; Sulimov, Vladimir B

    2013-08-26

    This paper is devoted to results obtained by the docking program SOL and the post-processing program DISCORE at the CSAR benchmark. SOL and DISCORE programs are described. SOL is the original docking program developed on the basis of the genetic algorithm, MMFF94 force field, rigid protein, precalculated energy grid including desolvation in the frame of a simplified GB model, vdW, and electrostatic interactions and taking into account the ligand internal strain energy. An important SOL feature is its single- or multi-processor performance, scaling up to hundreds of CPUs. DISCORE improves the binding energy scoring by the local energy optimization of the ligand docked pose and a simple linear regression on the basis of available experimental data. The docking program SOL has demonstrated a good ability for correct ligand positioning in the active sites of the tested proteins in most cases of CSAR exercises. SOL and DISCORE have not demonstrated very exciting results on the protein-ligand binding free energy estimation. Nevertheless, for some target proteins, SOL and DISCORE were among the first in prediction of inhibition activity. Ways to improve SOL and DISCORE are discussed.

  10. DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pernigotti, D.; Belis, C. A.

    2018-05-01

    DeltaSA is an R-package and a Java on-line tool developed at the EC-Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor analytical models (source identification) and the model performance evaluation. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profiles and time series similarity. In this study, a sensitivity analysis of the model performance criteria is accomplished using the results of a synthetic dataset where "a priori" references are available. The consensus-modulated standard deviation (punc) proves to be the best choice for model performance evaluation when a conservative approach is adopted.
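
    The model performance evaluation described above rests on comparing a participant's source contribution estimates with the ensemble reference through standardized scores. The sketch below is a simplified, hypothetical reading of that idea only; the actual DeltaSA indicators and acceptance thresholds are defined by the tool itself:

```python
def z_scores(estimates, reference, uncertainty):
    """Per-source z-scores of candidate estimates against a reference.

    estimates/reference: dicts mapping source name -> contribution estimate.
    uncertainty: dict mapping source name -> standard uncertainty of the
    reference (e.g. a consensus-modulated standard deviation).
    """
    return {s: (estimates[s] - reference[s]) / uncertainty[s]
            for s in reference}

def passes(scores, limit=2.0):
    """A candidate result is accepted when every |z| is within the limit."""
    return all(abs(z) <= limit for z in scores.values())
```

    The sensitivity analysis in the study amounts to asking how the pass/fail outcome changes as the uncertainty term in the denominator is defined differently.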

  11. Lessons Learned over Four Benchmark Exercises from the Community Structure-Activity Resource

    PubMed Central

    Carlson, Heather A.

    2016-01-01

    Preparing datasets and analyzing the results is difficult and time-consuming, and I hope the points raised here will help other scientists avoid some of the thorny issues we wrestled with. PMID:27345761

  12. Analysis of the influence of the heat transfer phenomena on the late phase of the ThAI Iod-12 test

    NASA Astrophysics Data System (ADS)

    Gonfiotti, B.; Paci, S.

    2014-11-01

    Iodine is one of the major contributors to the source term during a severe accident in a Nuclear Power Plant, due to its volatility and high radiological consequences. Therefore, large efforts have been made to describe the Iodine behaviour during an accident, especially in the containment system. Due to the lack of experimental data, in recent years many attempts have been made to fill the gaps in the knowledge of Iodine behaviour. In this framework, two tests (ThAI Iod-11 and Iod-12) were carried out inside a multi-compartment steel vessel. A quite complex transient characterizes these two tests; therefore they are also suitable for thermal-hydraulic benchmarks. The two tests were originally released for a benchmark exercise during the SARNET2 EU Project. At the end of this benchmark a report covering the main findings was issued, stating that the common codes employed in SA studies were able to simulate the tests, but with large discrepancies. The present work concerns the application of the new versions of the ASTEC and MELCOR codes, with the aim of carrying out a new code-to-code comparison against the ThAI Iod-12 experimental data, focusing on the influence of the heat exchanges with the outer environment, which seems to be one of the most challenging issues to cope with.

  13. Full Chain Benchmarking for Open Architecture Airborne ISR Systems: A Case Study for GMTI Radar Applications

    DTIC Science & Technology

    2015-09-15

    middleware implementations via a common object-oriented software hierarchy, with library-specific implementations of the five GMTI benchmark ...Full-Chain Benchmarking for Open Architecture Airborne ISR Systems A Case Study for GMTI Radar Applications Matthias Beebe, Matthew Alexander...time performance, effective benchmarks are necessary to ensure that an ARP system can meet the mission constraints and performance requirements of

  14. A performance comparison of the Cray-2 and the Cray X-MP

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald; Bailey, David H.

    1986-01-01

    A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly Computational Fluid Dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.

  15. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    NASA Technical Reports Server (NTRS)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
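
    One common submaximal prediction approach, offered here only as background and not necessarily the in-flight protocol, fits a line to heart rate versus VO2 across submaximal stages and extrapolates it to an age-predicted maximal heart rate (220 - age). A minimal sketch with hypothetical stage data:

```python
def predict_vo2pk(heart_rates, vo2_values, age):
    """Least-squares line through submaximal (HR, VO2) stage data,
    extrapolated to an age-predicted maximal heart rate (220 - age)."""
    n = len(heart_rates)
    mean_hr = sum(heart_rates) / n
    mean_vo2 = sum(vo2_values) / n
    sxy = sum((h - mean_hr) * (v - mean_vo2)
              for h, v in zip(heart_rates, vo2_values))
    sxx = sum((h - mean_hr) ** 2 for h in heart_rates)
    slope = sxy / sxx
    intercept = mean_vo2 - slope * mean_hr
    hr_max = 220 - age  # age-predicted maximal heart rate
    return slope * hr_max + intercept
```

    The abstract's point is precisely that such population-level relations (both the fitted line and the 220 - age rule) can miss badly for an individual, which is what motivates improved estimation methods.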

  16. Application of Benchmark Dose Methodology to a Variety of Endpoints and Exposures

    EPA Science Inventory

    This latest beta version (1.1b) of the U.S. Environmental Protection Agency (EPA) Benchmark Dose Software (BMDS) is being distributed for public comment. The BMDS system is being developed as a tool to facilitate the application of benchmark dose (BMD) methods to EPA hazardous p...

  17. OpenSim Model Improvements to Support High Joint Angle Resistive Exercising

    NASA Technical Reports Server (NTRS)

    Gallo, Christopher; Thompson, William; Lewandowski, Beth; Humphreys, Brad

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The Advanced Resistive Exercise Device (ARED) currently on the ISS is being used as a benchmark for the functional performance of these new devices. Rigorous testing of these proposed devices in space flight is difficult, so computational modeling provides an estimation of the muscle forces and joint loads during exercise to gain insight on the efficacy to protect the musculoskeletal health of astronauts. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project, and National Space Biomedical Research Institute (NSBRI) funded researchers by developing computational models of exercising with these new advanced exercise device concepts.

  18. Performance and Scalability of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  19. Ontology for Semantic Data Integration in the Domain of IT Benchmarking.

    PubMed

    Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut

    2018-01-01

    A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.

  20. Translational benchmark risk analysis

    PubMed Central

    Piegorsch, Walter W.

    2010-01-01

    Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283
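
    As background to the benchmark paradigm discussed above, the core computation is simple to state: fit a dose-response model and invert it at a chosen benchmark response (BMR). For the classic one-hit model, where extra risk over background is 1 - exp(-k*d), the benchmark dose has a closed form; the sketch below uses that model purely as an illustration, with k a hypothetical fitted parameter:

```python
import math

def bmd_one_hit(k, bmr=0.10):
    """Benchmark dose under a fitted one-hit dose-response model.

    Extra risk over background is ER(d) = 1 - exp(-k * d); the BMD solves
    ER(BMD) = BMR, giving the closed form BMD = -ln(1 - BMR) / k.
    """
    return -math.log(1.0 - bmr) / k
```

    For richer models the inversion is numerical rather than closed-form, and a lower confidence limit on the BMD (the BMDL) is usually reported alongside the point estimate.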

  1. Accreditation of University Undergraduate Programs in Nigeria from 2001-2012: Implications for Graduates Employability

    ERIC Educational Resources Information Center

    Dada, M. S.; Imam, Hauwa

    2015-01-01

    This study analysed accreditation exercises of universities' undergraduate programs in Nigeria from 2001-2013. Accreditation is a quality assurance mechanism to ensure that undergraduate programs offered in Nigeria satisfy benchmark minimum academic standards for producing graduates with requisite skills for employability. The study adopted the…

  2. Learning for Learning Providers

    ERIC Educational Resources Information Center

    Appleby, Alex; Robson, Andrew; Owen, Jane

    2003-01-01

    Presents the findings from a study of 48 Colleges of Further Education (FE) who have participated in a diagnostic benchmarking exercise using the learning probe methodology. Learning probe has been developed from the established service probe tool (developed originally by London Business School and IBM Consulting) to support colleges of FE in…

  3. Summer 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendoza, Paul Michael

    2016-08-31

    The project seeks to develop applications to automate MCNP criticality benchmark execution; create a dataset containing static benchmark information; combine MCNP output with benchmark information; and fit and visually represent data.
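
    The automation goal stated above amounts to running cases, pulling a multiplication factor out of each output listing, and comparing against the stored benchmark values. The sketch below illustrates only the parse-and-compare half with a deliberately simplified keff pattern; real MCNP output tables are far richer, and every name, pattern, and tolerance here is a hypothetical placeholder:

```python
import re

# Simplified pattern; an actual MCNP listing needs a more careful parser.
KEFF_RE = re.compile(r"keff\s*=\s*([0-9.]+)")

def parse_keff(output_text):
    """Pull the last keff value from an output listing (toy pattern)."""
    matches = KEFF_RE.findall(output_text)
    return float(matches[-1]) if matches else None

def compare(calculated, benchmark, tolerance=0.005):
    """Flag whether each case's keff falls within tolerance of its benchmark."""
    return {case: abs(calculated[case] - benchmark[case]) <= tolerance
            for case in benchmark}
```

    A runner would wrap these with subprocess calls per input deck and collect the boolean map into the dataset the project describes.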

  4. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Ivanova, Tatiana; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications. [1] Often benchmarks are taken from international Handbooks. [2, 3] Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results. [1] This work aims at quantifying the importance of benchmarks used in application dependent cross section validation. The approach is based on the well-known Generalized Linear Least Squares Method (GLLSM), extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
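
    The weighting idea described above can be caricatured in a few lines: assimilate one benchmark at a time with a GLLS-style update and rank benchmarks by how much each one alone shrinks the total prior variance. This is a hedged sketch of the concept, not the Subgroup 39 methodology; the covariance, sensitivity vectors, and variances below are hypothetical:

```python
def matvec(M, x):
    """Matrix-vector product for a list-of-lists matrix."""
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def benchmark_weights(M, sens, var):
    """Rank benchmarks by the prior-variance reduction each achieves alone.

    M: prior covariance of the data parameters (symmetric, list of lists).
    sens: one sensitivity vector per integral benchmark.
    var: experimental variance of each benchmark.
    Returns weights normalized to sum to 1.
    """
    reductions = []
    for s, v in zip(sens, var):
        Ms = matvec(M, s)
        # A single-benchmark GLLS update removes trace(M s sT M)/(sT M s + v)
        # from the total prior variance trace(M).
        reductions.append(dot(Ms, Ms) / (dot(s, Ms) + v))
    total = sum(reductions)
    return [r / total for r in reductions]
```

    A benchmark with large sensitivities to uncertain parameters and a small experimental variance earns a large weight, which matches the intuition that it adds the most validation value for the target application.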

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandor, Debra; Chung, Donald; Keyser, David

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  6. Advantages and applicability of commonly used homogenisation methods for climate data

    NASA Astrophysics Data System (ADS)

    Ribeiro, Sara; Caineta, Júlio; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina

    2014-05-01

    Homogenisation of climate data is a very relevant subject since these data are required as an input in a wide range of studies, such as atmospheric modelling, weather forecasting, climate change monitoring, or hydrological and environmental projects. Often, climate data series include non-natural irregularities which have to be detected and removed prior to their use, otherwise they would generate biased and erroneous results. Relocation of weather stations or changes in the measuring instruments are amongst the most relevant causes for these inhomogeneities. Depending on the climate variable, its temporal resolution and spatial continuity, homogenisation methods can be more or less effective. For example, due to its natural variability, precipitation is identified as a very challenging variable to be homogenised. During the last two decades, numerous methods have been proposed to homogenise climate data. In order to compare, evaluate and develop those methods, the European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), was launched in 2008. Existing homogenisation methods were improved based on the benchmark exercise issued by this project. A recent approach based on Direct Sequential Simulation (DSS), not yet evaluated by the benchmark exercise, is also presented as an innovative methodology for homogenising climate data series. DSS already proved to be a successful geostatistical method in environmental and hydrological studies, and it provides promising results for the homogenisation of climate data. Since DSS is a geostatistical stochastic approach, it accounts for the joint spatial and temporal dependence between observations, as well as the relative importance of stations both in terms of distance and correlation. This work presents a chronological review of the most commonly used homogenisation methods for climate data and available software packages.
A short description and classification is provided for each method. Their advantages and applicability are discussed based on literature review and on the results of the HOME project. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").

  7. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.
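The shape of such transients can be pictured with one-group point kinetics: a negative reactivity step depresses the power, which stabilises once the reactivity is withdrawn. A rough sketch, with illustrative parameters (the values of the delayed fraction, precursor decay constant, and prompt lifetime below are generic, not taken from the benchmark specification):

```python
from scipy.integrate import solve_ivp

# illustrative kinetics parameters: delayed fraction, precursor decay
# constant (1/s), prompt neutron generation time (s)
BETA, LAM, LAMBDA = 0.0065, 0.08, 1.0e-5

def kinetics(t, y, rho):
    """One-group point kinetics: neutron density n, precursor conc. c."""
    n, c = y
    r = rho(t)
    return [(r - BETA) / LAMBDA * n + LAM * c,
            BETA / LAMBDA * n - LAM * c]

def run_transient(rho, t_end=50.0):
    """Start critical at unit power and integrate through the transient
    with a stiff (BDF) solver."""
    y0 = [1.0, BETA / (LAM * LAMBDA)]   # equilibrium precursor level
    return solve_ivp(kinetics, (0.0, t_end), y0, args=(rho,),
                     method="BDF", rtol=1e-8, atol=1e-10,
                     dense_output=True)

# a -0.5*beta reactivity step inserted at t=0 and withdrawn at t=10 s
rho_step = lambda t: -0.5 * BETA if t < 10.0 else 0.0
```

The power drops promptly, decays on the delayed-neutron time scale, and levels off after the reactivity returns to zero, the qualitative behaviour the benchmark transients probe in full transport solvers.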

  8. 77 FR 70643 - Patient Protection and Affordable Care Act; Standards Related to Essential Health Benefits...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... coverage \\1\\ in the individual and small group markets, Medicaid benchmark and benchmark-equivalent plans...) Act extends the coverage of the EHB package to issuers of non-grandfathered individual and small group... small group markets, and not to Medicaid benchmark or benchmark-equivalent plans. EHB applicability to...

  9. Implementation of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  10. Dietary Interventions to Extend Life Span and Health Span Based on Calorie Restriction

    PubMed Central

    Minor, Robin K.; Allard, Joanne S.; Younts, Caitlin M.; Ward, Theresa M.

    2010-01-01

    The societal impact of obesity, diabetes, and other metabolic disorders continues to rise despite increasing evidence of their negative long-term consequences on health span, longevity, and aging. Unfortunately, dietary management and exercise frequently fail as remedies, underscoring the need for the development of alternative interventions to successfully treat metabolic disorders and enhance life span and health span. Using calorie restriction (CR)—which is well known to improve both health and longevity in controlled studies—as their benchmark, gerontologists are coming closer to identifying dietary and pharmacological therapies that may be applicable to aging humans. This review covers some of the more promising interventions targeted to affect pathways implicated in the aging process as well as variations on classical CR that may be better suited to human adaptation. PMID:20371545

  11. 77 FR 58512 - Corrosion-Resistant Carbon Steel Flat Products From the Republic of Korea: Preliminary Results of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-21

    ... 2006 Decision Memorandum) at ``Benchmarks for Short-Term Financing.'' B. Benchmark for Long-Term Loans.... Subsidies Valuation Information A. Benchmarks for Short-Term Financing For those programs requiring the application of a won-denominated, short-term interest rate benchmark, in accordance with 19 CFR 351.505(a)(2...

  12. Laboratory Instruction in the Service of Science Teaching and Learning: Reinventing and Reinvigorating the Laboratory Experience

    ERIC Educational Resources Information Center

    McComas, William

    2005-01-01

    The Benchmarks for Science Literacy and the National Science Education Standards strongly suggest that students should be engaged in hands-on learning. However, from many corners, the original "mental training" rationale for school labs has been criticized, the "cookbook" nature of laboratory exercises condemned, and the prevalence of using…

  13. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  14. Evaluation of CHO Benchmarks on the Arria 10 FPGA using Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field-programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow in favor of a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the needs for hybrid computing using CPUs and FPGAs are increasing. It can also significantly reduce the hardware development time, as users can evaluate different ideas in a high-level language without deep FPGA domain knowledge. Benchmarking an OpenCL-based framework is an effective way of analyzing system performance by studying the execution of the benchmark applications. CHO is a suite of benchmark applications that provides support for OpenCL [1]. The authors presented CHO as an OpenCL port of the CHStone benchmark. Using the Altera OpenCL (AOCL) compiler to synthesize the benchmark applications, they listed the resource usage and performance of each kernel that could be successfully synthesized by the compiler. In this report, we evaluate the resource usage and performance of the CHO benchmark applications using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board that features an Arria 10 FPGA device. The focus of the report is to develop a better understanding of the resource usage and performance of the kernel implementations on Arria 10 FPGA devices compared to Stratix V FPGA devices. In addition, we also gain knowledge about the limitations of the current compiler when it fails to synthesize a benchmark application.

  15. EPA's Benchmark Dose Modeling Software

    EPA Science Inventory

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
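The BMD concept itself is compact: fit a dose-response model to the data, then invert it at a chosen benchmark response (BMR). A minimal sketch using a quantal log-logistic model (illustrative only; BMDS supports many more models, model selection, and confidence limits such as the BMDL, none of which appear here):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(d, g, k, n):
    """Quantal log-logistic dose-response: background g, ED50 k, slope n."""
    return g + (1.0 - g) * d**n / (k**n + d**n)

def benchmark_dose(doses, responses, bmr=0.10):
    """Fit the model and invert it at the benchmark response, defined
    here as extra risk: (p(d) - p(0)) / (1 - p(0)) = BMR."""
    doses = np.asarray(doses, float)
    (g, k, n), _ = curve_fit(log_logistic, doses, responses,
                             p0=[0.05, np.median(doses[doses > 0]), 2.0],
                             bounds=([0.0, 1e-9, 0.1], [0.5, 1e6, 10.0]))
    # for this model extra risk reduces to d**n / (k**n + d**n) = BMR
    return k * (bmr / (1.0 - bmr)) ** (1.0 / n)
```

For example, with ED50 k = 10 and slope n = 2, a 10% extra-risk BMD comes out at k * (1/9)**0.5, about one third of the ED50.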

  16. Design of the MISMIP+, ISOMIP+, and MISOMIP ice-sheet, ocean, and coupled ice sheet-ocean intercomparison projects

    NASA Astrophysics Data System (ADS)

    Asay-Davis, Xylar; Cornford, Stephen; Martin, Daniel; Gudmundsson, Hilmar; Holland, David; Holland, Denise

    2015-04-01

    The MISMIP and MISMIP3D marine ice sheet model intercomparison exercises have become popular benchmarks, and several modeling groups have used them to show how their models compare to both analytical results and other models. Similarly, the ISOMIP (Ice Shelf-Ocean Model Intercomparison Project) experiments have acted as a proving ground for ocean models with sub-ice-shelf cavities. As coupled ice sheet-ocean models become available, an updated set of benchmark experiments is needed. To this end, we propose sequel experiments, MISMIP+ and ISOMIP+, with an end goal of coupling the two in a third intercomparison exercise, MISOMIP (the Marine Ice Sheet-Ocean Model Intercomparison Project). Like MISMIP3D, the MISMIP+ experiments take place in an idealized, three-dimensional setting and compare full 3D (Stokes) and reduced, hydrostatic models. Unlike the earlier exercises, the primary focus will be the response of models to sub-shelf melting. The chosen configuration features an ice shelf that experiences substantial lateral shear and buttresses the upstream ice, and so is well suited to melting experiments. Differences between the steady states of each model are minor compared to the response to melt-rate perturbations, reflecting typical real-world applications where parameters are chosen so that the initial states of all models tend to match observations. The three ISOMIP+ experiments have been designed to make use of the same bedrock topography as MISMIP+ and ice-shelf geometries from MISMIP+ results produced by the BISICLES ice-sheet model. The first two experiments use static ice-shelf geometries to simulate the evolution of ocean dynamics and resulting melt rates to a quasi-steady state as far-field forcing switches either from cold to warm or from warm to cold states. 
The third experiment prescribes 200 years of dynamic ice-shelf geometry (with both retreating and advancing ice) based on a BISICLES simulation along with similar flips between warm and cold states in the far-field ocean forcing. The MISOMIP experiment combines the MISMIP+ experiments with the third ISOMIP+ experiment. Changes in far-field ocean forcing lead to a rapid (over ~1-2 years) increase in sub-ice-shelf melting, which is allowed to drive ice-shelf retreat for ~100 years. Then, the far-field forcing is switched to a cold state, leading to a rapid decrease in melting and a subsequent advance over ~100 years. To illustrate, we present results from BISICLES and POP2x experiments for each of the three intercomparison exercises.

  17. Adding Fault Tolerance to NPB Benchmarks Using ULFM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchman, Zachary W; Vallee, Geoffroy R; Naughton III, Thomas J

    2016-01-01

    In the world of high-performance computing, fault tolerance and application resilience are becoming some of the primary concerns because of increasing hardware failures and memory corruptions. While the research community has been investigating various options, from system-level solutions to application-level solutions, standards such as the Message Passing Interface (MPI) are also starting to include such capabilities. The current proposal for MPI fault tolerance is centered around the User-Level Failure Mitigation (ULFM) concept, which provides means for fault detection and recovery of the MPI layer. This approach does not address application-level recovery, which is currently left to application developers. In this work, we present a modification of some of the benchmarks of the NAS Parallel Benchmarks (NPB) to include support for the ULFM capabilities as well as application-level strategies and mechanisms for application-level failure recovery. As such, we present: (i) an application-level library to checkpoint and restore data, (ii) extensions of NPB benchmarks for fault tolerance based on different strategies, (iii) a fault injection tool, and (iv) some preliminary results that show the impact of such fault-tolerance strategies on the application execution.
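The application-level checkpoint/restore idea can be sketched in serial form: periodically persist the solver state, and on failure simply re-enter the loop from the last checkpoint. This is a toy illustration only; the paper's library works with MPI and ULFM (failure notification, communicator revocation and shrinking), all of which are omitted here and the failure is simulated with an exception.

```python
import os
import pickle
import tempfile

class Checkpointer:
    """Minimal application-level checkpoint/restore helper (illustrative)."""
    def __init__(self, path):
        self.path = path

    def save(self, step, state):
        tmp = self.path + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump((step, state), f)
        os.replace(tmp, self.path)   # atomic rename: never a torn checkpoint

    def restore(self):
        if not os.path.exists(self.path):
            return 0, None           # cold start
        with open(self.path, "rb") as f:
            return pickle.load(f)

def run(total_steps, ckpt, fail_at=None):
    """Iterative kernel that checkpoints every 10 steps; after a simulated
    failure it is re-entered and resumes from the last checkpoint."""
    step, state = ckpt.restore()
    state = state if state is not None else 0.0
    while step < total_steps:
        if fail_at is not None and step == fail_at:
            raise RuntimeError("simulated process failure")
        state += step            # stand-in for one solver iteration
        step += 1
        if step % 10 == 0:
            ckpt.save(step, state)
    return state
```

Re-running `run` after a failure replays only the iterations since the last checkpoint, which is the trade-off (checkpoint frequency versus recomputation) the NPB extensions explore.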

  18. Processor Emulator with Benchmark Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  19. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models of the interacting processes are needed. Coupled reactive transport models are a typical example of such tools, mainly focusing on the hydrological-geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and the specific conceptual model can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues. Benchmarks defined for purely mathematical reasons were excluded. Another important feature is the tiered approach within a benchmark: a single principal problem is defined together with different sub-problems, which typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. 
Furthermore, it illustrates the use of these types of models for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.
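The tiered structure is easy to picture: a simplified sub-problem such as inert solute transport can be verified directly against an analytical solution. A sketch of that kind of tier, comparing an explicit finite-difference solution of 1-D diffusion with a constant-concentration inlet against the erfc solution (an illustrative setup, not one of the published SeSBench cases):

```python
import numpy as np
from math import erfc, sqrt

def diffuse_numerical(c0, D, L, nx, t_end):
    """Explicit finite-difference 1-D diffusion with a fixed-concentration
    inlet at x=0 and zero far boundary (simplified inert-transport tier)."""
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / D             # satisfies explicit stability limit
    steps = int(round(t_end / dt))
    c = np.zeros(nx)
    c[0] = c0                          # fixed inlet concentration
    for _ in range(steps):
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    return c

def diffuse_analytical(c0, D, L, nx, t):
    """Semi-infinite-column solution: C(x,t) = c0 * erfc(x / (2 sqrt(D t)))."""
    x = np.linspace(0.0, L, nx)
    return np.array([c0 * erfc(xi / (2.0 * sqrt(D * t))) for xi in x])
```

Agreement between the two profiles (within the scheme's truncation error, provided the domain is long enough that the far boundary stays at zero) is exactly the kind of verification evidence a benchmark tier is meant to produce.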

  20. Comparison of Origin 2000 and Origin 3000 Using NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Turney, Raymond D.

    2001-01-01

    This report describes results of benchmark tests on the Origin 3000 system currently being installed at the NASA Ames National Advanced Supercomputing facility. This machine will ultimately contain 1024 R14K processors. The first part of the system, installed in November 2000 and named mendel, is an Origin 3000 with 128 R12K processors. For comparison purposes, the tests were also run on lomax, an Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to determine system performance and measure the impact of changes on the machine as it evolves. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message passing (MPI) and shared-memory, compiler directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  1. Benchmarking on the evaluation of major accident-related risk assessment.

    PubMed

    Fabbri, Luciano; Contini, Sergio

    2009-03-15

    This paper summarises the main results of a European project BEQUAR (Benchmarking Exercise in Quantitative Area Risk Assessment in Central and Eastern European Countries). This project is among the first attempts to explore how independent evaluations of the same risk study associated with a certain chemical establishment could differ from each other, and the consequent effects on the resulting area risk estimate. The exercise specifically aimed at exploring the manner and degree to which independent experts may disagree on the interpretation of quantitative risk assessments for the same entity. The project first compared the results of a number of independent expert evaluations of a quantitative risk assessment study for the same reference chemical establishment. This effort was then followed by a study of the impact of the different interpretations on the estimate of the overall risk in the area concerned. In order to improve the inter-comparability of the results, this exercise was conducted using a single tool for area risk assessment based on the ARIPAR methodology. The results of this study are expected to contribute to an improved understanding of the inspection criteria and practices used by the different national authorities responsible for the implementation of the Seveso II Directive in their countries. The activity was funded under the Enlargement and Integration Action of the Joint Research Centre (JRC), which aims at providing scientific and technological support for promoting integration of the New Member States and assisting the Candidate Countries on their way towards accession to the European Union.

  2. Cross-industry benchmarking: is it applicable to the operating room?

    PubMed

    Marco, A P; Hart, S

    2001-01-01

    The use of benchmarking has been growing in nonmedical industries. This concept is being increasingly applied to medicine as the industry strives to improve quality and improve financial performance. Benchmarks can be either internal (set by the institution) or external (use other's performance as a goal). In some industries, benchmarking has crossed industry lines to identify breakthroughs in thinking. In this article, we examine whether the airline industry can be used as a source of external process benchmarking for the operating room.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40GB NAND Flash parallel disk array, the Fusion-io. 
The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  4. Engineering department physical plant staffing requirements.

    PubMed

    Cole, C

    1997-05-01

    There is a considerable effort in the health care arena to establish credible engineering manpower yardsticks that are universally applicable as benchmarks. This document presents one facility's own benchmark criteria, which can be used to develop either internal or competitive benchmarking comparisons.

  5. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared-memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
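The coarse-grain pattern is easy to sketch in serial form: zones advance independently for one step, then exchange boundary (halo) values. The toy below applies it to periodic 1-D diffusion, a stand-in for the multi-zone flow solves, and can be checked against a single-domain update (illustrative only; no MPI or OpenMP, and the real benchmarks use 3-D zones of unequal size):

```python
import numpy as np

def reference_step(u, alpha):
    """Single-domain explicit diffusion step on a periodic 1-D grid."""
    return u + alpha * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1))

def multizone_run(u0, nzones, alpha, nsteps):
    """Advance the same problem as loosely coupled zones: each zone holds
    its cells plus one-cell halos, is updated independently, and exchanges
    boundary values after every step. Assumes len(u0) % nzones == 0."""
    n, m = len(u0), len(u0) // nzones
    zones = [np.concatenate(([u0[(z * m - 1) % n]],       # left halo
                             u0[z * m:(z + 1) * m],       # owned cells
                             [u0[((z + 1) * m) % n]]))    # right halo
             for z in range(nzones)]
    for _ in range(nsteps):
        for z in range(nzones):        # independent per-zone updates
            zones[z][1:-1] += alpha * (zones[z][2:] - 2.0 * zones[z][1:-1]
                                       + zones[z][:-2])
        for z in range(nzones):        # halo exchange with right neighbor
            r = (z + 1) % nzones
            zones[z][-1] = zones[r][1]
            zones[r][0] = zones[z][-2]
    return np.concatenate([zz[1:-1] for zz in zones])
```

Because halos are refreshed only after each full step, the per-zone updates are embarrassingly parallel within a step, which is exactly the exploitable coarse-grain parallelism the suite measures.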

  6. Marking Closely or on the Bench?: An Australian's Benchmark Statement.

    ERIC Educational Resources Information Center

    Jones, Roy

    2000-01-01

    Reviews the benchmark statements of the Quality Assurance Agency for Higher Education in the United Kingdom. Examines the various sections within the benchmark. States that in terms of emphasizing the positive attributes of the geography discipline the statements have wide utility and applicability. (CMK)

  7. AN OVERVIEW OF THE DEVELOPMENT, STATUS, AND APPLICATION OF EQUILIBRIUM PARTITIONING SEDIMENT BENCHMARKS FOR PAH MIXTURES

    EPA Science Inventory

    This article provides an overview of the development, theoretical basis, regulatory status, and application of the U.S. Environmental Protection Agency's (USEPA's) Equilibrium Partitioning Sediment Benchmarks (ESBs) for PAH mixtures. ESBs are compared to other sediment quality g...

  8. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    PubMed

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range against a reference material. The benchmarking procedure is demonstrated using two different reference materials: uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day-to-day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.
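The three benchmarked quantities can be illustrated on a synthetic exposure ramp of a photostable reference: a detection threshold derived from dark noise, a saturation point where the response departs from a linear fit, and their ratio as the linear dynamic range. The criteria below (3-sigma threshold, 5% departure from linearity) are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def benchmark_detector(exposure, signal, dark_mean, dark_sd, tol=0.05):
    """Estimate detection threshold, saturation point, and linear dynamic
    range from intensities of a photostable reference imaged over a ramp
    of exposure times (illustrative criteria)."""
    exposure = np.asarray(exposure, float)
    signal = np.asarray(signal, float) - dark_mean     # dark-corrected
    threshold = 3.0 * dark_sd                          # limit of detection
    # slope of the linear regime, estimated from the lower half of the ramp
    low = signal < 0.5 * signal.max()
    slope = np.sum(exposure[low] * signal[low]) / np.sum(exposure[low] ** 2)
    # saturation: first point falling `tol` below the linear prediction
    dev = signal < (1.0 - tol) * slope * exposure
    sat = signal[dev][0] if dev.any() else signal[-1]
    return threshold, sat, sat / threshold             # linear dynamic range
```

A real protocol would also average replicate images and propagate the fit uncertainty, but the three figures of merit are computed in essentially this way.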

  9. Hospital benchmarking: are U.S. eye hospitals ready?

    PubMed

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. 
If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.

  10. Cbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogden, Jeffry B.

    2005-09-26

    Cbench is intended to be a relatively straightforward collection of tests, benchmarks, applications, utilities, and a framework with the goal of facilitating scalable testing and benchmarking of a Linux cluster.

  11. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    NASA Astrophysics Data System (ADS)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool that can put numbers to, i.e. quantify, future scenarios. This places a huge responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  12. Planning and executing complex large-scale exercises.

    PubMed

    McCormick, Lisa C; Hites, Lisle; Wakelee, Jessica F; Rucks, Andrew C; Ginter, Peter M

    2014-01-01

    Increasingly, public health departments are designing and engaging in complex operations-based full-scale exercises to test multiple public health preparedness response functions. The Department of Homeland Security's Homeland Security Exercise and Evaluation Program (HSEEP) supplies benchmark guidelines that provide a framework for both the design and the evaluation of drills and exercises; however, the HSEEP framework does not seem to have been designed to manage the development and evaluation of multiple, operations-based, parallel exercises combined into 1 complex large-scale event. Lessons learned from the planning of the Mississippi State Department of Health Emergency Support Function--8 involvement in National Level Exercise 2011 were used to develop an expanded exercise planning model that is HSEEP compliant but accounts for increased exercise complexity and is more functional for public health. The Expanded HSEEP (E-HSEEP) model was developed through changes in the HSEEP exercise planning process in areas of Exercise Plan, Controller/Evaluator Handbook, Evaluation Plan, and After Action Report and Improvement Plan development. The E-HSEEP model was tested and refined during the planning and evaluation of Mississippi's State-level Emergency Support Function-8 exercises in 2012 and 2013. As a result of using the E-HSEEP model, Mississippi State Department of Health was able to capture strengths, lessons learned, and areas for improvement, and identify microlevel issues that may have been missed using the traditional HSEEP framework. The South Central Preparedness and Emergency Response Learning Center is working to create an Excel-based E-HSEEP tool that will allow practice partners to build a database to track corrective actions and conduct many different types of analyses and comparisons.

  13. Technical Report: Benchmarking for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLoughlin, K.

    2016-01-22

    The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.

  14. Validation of Tendril TrueHome Using Software-to-Software Comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maguire, Jeffrey B; Horowitz, Scott G; Moore, Nathan

    This study performed comparative evaluation of EnergyPlus version 8.6 and Tendril TrueHome, two physics-based home energy simulation models, to identify differences in energy consumption predictions between the two programs and resolve discrepancies between them. EnergyPlus is considered a benchmark, best-in-class software tool for building energy simulation. This exercise sought to improve both software tools through additional evaluation/scrutiny.

  15. Comparison of the PHISICS/RELAP5-3D ring and block model results for phase I of the OECD/NEA MHTGR-350 benchmark

    DOE PAGES

    Strydom, G.; Epiney, A. S.; Alfonsi, Andrea; ...

    2015-12-02

    The PHISICS code system has been under development at INL since 2010. It consists of several modules providing improved coupled core simulation capability: INSTANT (3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and modules performing criticality searches, fuel shuffling and generalized perturbation. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D was finalized in 2013, and as part of the verification and validation effort the first phase of the OECD/NEA MHTGR-350 Benchmark has now been completed. The theoretical basis and latest development status of the coupled PHISICS/RELAP5-3D tool are described in more detail in a concurrent paper. This paper provides an overview of the OECD/NEA MHTGR-350 Benchmark and presents the results of Exercises 2 and 3 defined for Phase I. Exercise 2 required the modelling of a stand-alone thermal fluids solution at End of Equilibrium Cycle for the Modular High Temperature Reactor (MHTGR). The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 required a coupled neutronics and thermal fluids solution, and the PHISICS/RELAP5-3D code suite was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of results obtained with the traditional RELAP5-3D “ring” model approach against a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity that can be obtained by this “block” model is illustrated with comparison results for the temperature, power density and flux distributions. Furthermore, it is shown that the ring model leads to significantly lower fuel temperatures (up to 10%) when compared with the higher fidelity block model, and that the additional model development and run-time efforts are worth the gains obtained in the improved spatial temperature and flux distributions.

  16. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    USGS Publications Warehouse

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. 
Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics of sediment, and uncertainty in TEB values. Additional evaluations of benchmarks in relation to sediment chemistry and toxicity are ongoing.
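
    The mixture indicator described above is essentially a sum of concentration-to-benchmark ratios. A minimal sketch of that calculation, with hypothetical pesticide names and benchmark values (not taken from the study):

```python
# Hedged sketch of a summed benchmark-quotient indicator for a pesticide
# mixture in sediment. All pesticide names and benchmark values below are
# hypothetical illustrations, not values from the USGS study.

def summed_benchmark_quotient(concentrations, benchmarks):
    """Sum of concentration/benchmark ratios over detected pesticides.

    concentrations: mapping pesticide -> measured concentration
    benchmarks: mapping pesticide -> benchmark (e.g. TEB) in the same units
    Pesticides without an available benchmark are skipped.
    """
    return sum(
        conc / benchmarks[p]
        for p, conc in concentrations.items()
        if p in benchmarks
    )

# Hypothetical TEB values and a hypothetical sample:
teb = {"bifenthrin": 0.5, "chlorpyrifos": 1.0}
sample = {"bifenthrin": 0.25, "chlorpyrifos": 0.5}
q = summed_benchmark_quotient(sample, teb)  # 0.25/0.5 + 0.5/1.0 = 1.0
```

    A quotient above 1 against the TEB set would flag a sample as exceeding at least the combined threshold-effect level; how such sums map onto observed toxicity is exactly what the study evaluates.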

  17. Benchmarking reference services: an introduction.

    PubMed

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  18. Energy benchmarking in wastewater treatment plants: the importance of site operation and layout.

    PubMed

    Belloir, C; Stanford, C; Soares, A

    2015-01-01

    Energy benchmarking is a powerful tool in the optimization of wastewater treatment plants (WWTPs), helping to reduce costs and greenhouse gas emissions. Traditionally, energy benchmarking methods focused solely on reporting electricity consumption; however, recent developments in this area have led to the inclusion of other types of energy input, including electrical, manual, chemical and mechanical energy, all of which can be expressed in kWh/m3. In this study, two full-scale WWTPs were benchmarked; both incorporated preliminary, secondary (oxidation ditch) and tertiary treatment processes, and Site 1 also had an additional primary treatment step. The results indicated that Site 1 required 2.32 kWh/m3 against 0.98 kWh/m3 for Site 2. Aeration presented the highest energy consumption for both sites, with 2.08 kWh/m3 required for Site 1 and 0.91 kWh/m3 in Site 2. Mechanical energy represented the second biggest consumption for Site 1 (9%, 0.212 kWh/m3), and chemical input was significant in Site 2 (4.1%, 0.026 kWh/m3). The analysis of the results indicated that Site 2 could be optimized by constructing a primary settling tank that would reduce the biochemical oxygen demand, total suspended solids and NH4 loads to the oxidation ditch by 55%, 75% and 12%, respectively, and at the same time reduce the aeration requirements by 49%. This study demonstrated the effectiveness of the energy benchmarking exercise in identifying the highest energy-consuming assets; nevertheless, it points out the need to develop a holistic overview of the WWTP and to include parameters such as effluent quality, site operation and plant layout to allow adequate benchmarking.
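
    The benchmark metric in this record is a volume-normalized sum of energy inputs. A small sketch of that arithmetic, using component figures loosely patterned on the Site 1 numbers (the treated volume and the split of the residual inputs are invented for illustration):

```python
# Hedged sketch of a site-level energy benchmark in kWh per m3 treated,
# summing the per-component energy inputs. Component values and the
# treated volume are illustrative, not the study's raw data.

def energy_intensity(components_kwh, volume_m3):
    """Total energy benchmark (kWh/m3) from per-component energy inputs."""
    return sum(components_kwh.values()) / volume_m3

# Illustrative inputs for 1000 m3 treated, chosen so the total matches
# the 2.32 kWh/m3 reported for Site 1:
site1 = {"aeration": 2080.0, "mechanical": 212.0, "other": 28.0}  # kWh
print(round(energy_intensity(site1, 1000.0), 2))  # 2.32
```

    Normalizing every input (electrical, manual, chemical, mechanical) to kWh/m3 is what allows the two sites to be compared on a single axis.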

  19. Development of Constraint Force Equation Methodology for Application to Multi-Body Dynamics Including Launch Vehicle Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Toniolo, Matthew D.; Tartabini, Paul V.; Roithmayr, Carlos M.; Albertson, Cindy W.; Karlgaard, Christopher D.

    2016-01-01

    The objective of this report is to develop and implement a physics-based method for analysis and simulation of multi-body dynamics including launch vehicle stage separation. The constraint force equation (CFE) methodology discussed in this report provides such a framework for modeling constraint forces and moments acting at joints when the vehicles are still connected. Several stand-alone test cases involving various types of joints were developed to validate the CFE methodology. The results were compared with ADAMS® and Autolev, two different industry-standard benchmark codes for multi-body dynamic analysis and simulations. However, these two codes are not designed for aerospace flight trajectory simulations. After this validation exercise, the CFE algorithm was implemented in Program to Optimize Simulated Trajectories II (POST2) to provide a capability to simulate end-to-end trajectories of launch vehicles including stage separation. The POST2/CFE methodology was applied to the STS-1 Space Shuttle solid rocket booster (SRB) separation and Hyper-X Research Vehicle (HXRV) separation from the Pegasus booster as a further test and validation for its application to launch vehicle stage separation problems. Finally, to demonstrate end-to-end simulation capability, POST2/CFE was applied to the ascent, orbit insertion, and booster return of a reusable two-stage-to-orbit (TSTO) vehicle concept. With these validation exercises, POST2/CFE software can be used for performing conceptual-level end-to-end simulations, including launch vehicle stage separation, for problems similar to those discussed in this report.

  20. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.
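
    The core computation behind a benchmark dose is inverting a fitted dose-response model at a chosen benchmark response (BMR). A minimal sketch of that idea, with an invented model form and parameters (the guidance itself covers model fitting and confidence limits, which this omits):

```python
# Hedged sketch of the benchmark-dose idea: given a fitted, monotone
# dose-response model, the BMD is the dose at which the response exceeds
# background by the benchmark response (BMR). The model below is an
# illustrative toy, not one prescribed by the EPA guidance, and no
# lower confidence limit (BMDL) is computed here.

def bmd(dose_response, bmr, lo=0.0, hi=1000.0, tol=1e-8):
    """Invert a monotone dose-response curve by bisection."""
    target = dose_response(lo) + bmr  # background + benchmark response
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if dose_response(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Illustrative saturating model: response rises from 0 toward 1.
model = lambda d: d / (d + 50.0)
print(round(bmd(model, 0.10), 2))  # 5.56: dose giving 10% extra response
```

    In practice the POD is usually the BMDL, the statistical lower bound on this dose, rather than the central estimate computed here.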

  1. Benchmarking CRISPR on-target sgRNA design.

    PubMed

    Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi

    2017-02-15

    CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but few have been compared systematically. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA predicting efficacy was performed on nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools.
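
    A common way to score an sgRNA design tool against a benchmark data set is the rank correlation between predicted and measured guide efficacies; the sketch below uses Spearman correlation as an assumed metric (the paper's exact evaluation protocol may differ) and handles only the tie-free case:

```python
# Hedged sketch: Spearman rank correlation between predicted and measured
# sgRNA efficacies, a plausible per-dataset score for benchmarking design
# tools. No tie handling; values below are invented.

def _ranks(values):
    """0-based ranks of values (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for rank, i in enumerate(order):
        ranks[i] = float(rank)
    return ranks

def spearman(pred, measured):
    """Spearman correlation via Pearson correlation of the ranks."""
    rx, ry = _ranks(pred), _ranks(measured)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly monotone predictions give correlation 1:
print(round(spearman([0.1, 0.4, 0.9], [10, 20, 30]), 6))  # 1.0
```

    Computing this score per tool and per data set, then comparing across cell types, is one way to probe whether design rules transfer between libraries and organisms.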

  2. Algorithm and Architecture Independent Benchmarking with SEAK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation of those solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desjarlais, Andre Omer; Kriner, Scott; Miller, William A

    An alternative to white and cool-color roofs that meets prescriptive requirements for steep-slope (residential and non-residential) and low-slope (non-residential) roofing has been documented. Roofs fitted with an inclined air space above the sheathing (herein termed above-sheathing ventilation, or ASV) performed as well as if not better than high-reflectance, high-emittance roofs fastened directly to the deck. Field measurements demonstrated the benefit of roofs designed with ASV. A computer tool was benchmarked against the field data. Testing and benchmarks were conducted on roofs inclined at 18.34°; the roof span from soffit to ridge was 18.7 ft (5.7 m). The tool was then exercised to compute the solar reflectance needed by a roof equipped with ASV to exhibit the same annual cooling load as that for a direct-to-deck cool-color roof. A painted metal roof with an air space height of 0.75 in. (0.019 m) and spanning 18.7 ft (5.7 m) up the roof incline of 18.34° needed only a 0.10 solar reflectance to exhibit the same annual cooling load as a direct-to-deck cool-color metal roof (solar reflectance of 0.25). This held for all eight ASHRAE climate zones complying with ASHRAE 90.1 (2007a). A dark heat-absorbing roof fitted with a 1.5 in. (0.038 m) air space spanning 18.7 ft (5.7 m) and inclined at 18.34° was shown to have a seasonal cooling load equivalent to that of a conventional direct-to-deck cool-color metal roof. Computations for retrofit application based on ASHRAE 90.1 (1980) showed that ASV air spaces of either 0.75 or 1.5 in. (0.019 and 0.038 m) would permit black roofs to have annual cooling loads equivalent to the direct-to-deck cool roof. Results are encouraging, and a parametric study of roof slope and ASV aspect ratio is needed for developing guidelines applicable to all steep- and low-slope roof applications.

  4. PFLOTRAN-RepoTREND Source Term Comparison Summary.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frederick, Jennifer M.

    Code inter-comparison studies are useful exercises to verify and benchmark independently developed software to ensure proper function, especially when the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment. This summary describes the results of the first portion of the code inter-comparison between PFLOTRAN and RepoTREND, which compares the radionuclide source term used in a typical performance assessment.

  5. An Assessment of Current Fan Noise Prediction Capability

    NASA Technical Reports Server (NTRS)

    Envia, Edmane; Woodward, Richard P.; Elliott, David M.; Fite, E. Brian; Hughes, Christopher E.; Podboy, Gary G.; Sutliff, Daniel L.

    2008-01-01

    In this paper, the results of an extensive assessment exercise carried out to establish the current state of the art for predicting fan noise at NASA are presented. Representative codes in the empirical, analytical, and computational categories were exercised and assessed against a set of benchmark acoustic data obtained from wind tunnel tests of three model scale fans. The chosen codes were ANOPP, representing an empirical capability, RSI, representing an analytical capability, and LINFLUX, representing a computational aeroacoustics capability. The selected benchmark fans cover a wide range of fan pressure ratios and fan tip speeds, and are representative of modern turbofan engine designs. The assessment results indicate that the ANOPP code can predict fan noise spectrum to within 4 dB of the measurement uncertainty band on a third-octave basis for the low and moderate tip speed fans, except at extreme aft emission angles. The RSI code can predict fan broadband noise spectrum to within 1.5 dB of the experimental uncertainty band provided the rotor-only contribution is taken into account. The LINFLUX code can predict interaction tone power levels to within experimental uncertainties at low and moderate fan tip speeds, but could deviate by as much as 6.5 dB outside the experimental uncertainty band at the highest tip speeds in some cases.

  6. CLEAR: Cross-Layer Exploration for Architecting Resilience

    DTIC Science & Technology

    2017-03-01

    benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the...core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience...analysis of the effects of soft errors on application benchmarks, provides a highly effective soft error resilience approach. 3. The above

  7. Benchmarking in health care: using the Internet to identify resources.

    PubMed

    Lingle, V A

    1996-01-01

    Benchmarking is a quality improvement tool that is increasingly being applied to the health care field and to the libraries within that field. Using mostly resources accessible at no charge through the Internet, a collection of information was compiled on benchmarking and its applications. Sources were identified in several formats, including books, journals and articles, multi-media materials, and organizations.

  8. 8 CFR 212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 8 Aliens and Nationality 1 2011-01-01 2011-01-01 false Applications for exercise of discretion... INADMISSIBLE ALIENS; PAROLE § 212.16 Applications for exercise of discretion relating to T nonimmigrant status. (a) Filing the waiver application. An alien applying for the exercise of discretion under section 212...

  9. 8 CFR 212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 8 Aliens and Nationality 1 2014-01-01 2014-01-01 false Applications for exercise of discretion... INADMISSIBLE ALIENS; PAROLE § 212.16 Applications for exercise of discretion relating to T nonimmigrant status. (a) Filing the waiver application. An alien applying for the exercise of discretion under section 212...

  10. 8 CFR 212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 8 Aliens and Nationality 1 2012-01-01 2012-01-01 false Applications for exercise of discretion... INADMISSIBLE ALIENS; PAROLE § 212.16 Applications for exercise of discretion relating to T nonimmigrant status. (a) Filing the waiver application. An alien applying for the exercise of discretion under section 212...

  11. 8 CFR 212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 8 Aliens and Nationality 1 2013-01-01 2013-01-01 false Applications for exercise of discretion... INADMISSIBLE ALIENS; PAROLE § 212.16 Applications for exercise of discretion relating to T nonimmigrant status. (a) Filing the waiver application. An alien applying for the exercise of discretion under section 212...

  12. 8 CFR 212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Applications for exercise of discretion... INADMISSIBLE ALIENS; PAROLE § 212.16 Applications for exercise of discretion relating to T nonimmigrant status. (a) Filing the waiver application. An alien applying for the exercise of discretion under section 212...

  13. 8 CFR 212.3 - Application for the exercise of discretion under section 212(c).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 8 Aliens and Nationality 1 2014-01-01 2014-01-01 false Application for the exercise of discretion... ALIENS; PAROLE § 212.3 Application for the exercise of discretion under section 212(c). (a) Jurisdiction. An application for the exercise of discretion under section 212(c) of the Act must be submitted on...

  14. 8 CFR 212.3 - Application for the exercise of discretion under section 212(c).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 8 Aliens and Nationality 1 2012-01-01 2012-01-01 false Application for the exercise of discretion... ALIENS; PAROLE § 212.3 Application for the exercise of discretion under section 212(c). (a) Jurisdiction. An application for the exercise of discretion under section 212(c) of the Act must be submitted on...

  15. 8 CFR 212.3 - Application for the exercise of discretion under section 212(c).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 8 Aliens and Nationality 1 2011-01-01 2011-01-01 false Application for the exercise of discretion... ALIENS; PAROLE § 212.3 Application for the exercise of discretion under section 212(c). (a) Jurisdiction. An application for the exercise of discretion under section 212(c) of the Act must be submitted on...

  16. 8 CFR 212.3 - Application for the exercise of discretion under section 212(c).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 8 Aliens and Nationality 1 2013-01-01 2013-01-01 false Application for the exercise of discretion... ALIENS; PAROLE § 212.3 Application for the exercise of discretion under section 212(c). (a) Jurisdiction. An application for the exercise of discretion under section 212(c) of the Act must be submitted on...

  17. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.
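
    The Monte Carlo use of a benchmark described above can be sketched as a small harness: run an algorithm many times over randomized scenario inputs and report the spread of a performance score. The scenario generator and scoring below are invented toys, not any of the military benchmarks discussed:

```python
# Hedged sketch of a Monte Carlo benchmark harness: repeated runs over
# randomized inputs expose how an algorithm handles input variability.
# The "tracker" and scenario here are illustrative inventions.

import random
import statistics

def monte_carlo_benchmark(algorithm, make_scenario, runs=1000, seed=0):
    """Return (mean, stdev) of the algorithm's score over random scenarios."""
    rng = random.Random(seed)  # fixed seed for repeatable benchmark runs
    scores = [algorithm(make_scenario(rng)) for _ in range(runs)]
    return statistics.mean(scores), statistics.stdev(scores)

# Toy "algorithm": score is the absolute error of a noisy position estimate.
mean_err, spread = monte_carlo_benchmark(
    algorithm=lambda obs: abs(obs - 5.0),
    make_scenario=lambda rng: rng.gauss(5.0, 1.0),
    runs=2000,
)
print(round(mean_err, 1))  # ~0.8 (mean of |N(0,1)| is sqrt(2/pi) ≈ 0.80)
```

    Fixing the seed makes the benchmark repeatable across candidate algorithms, which is what lets Monte Carlo results be compared fairly.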

  18. Thermal Analysis of a TREAT Fuel Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papadias, Dionissios; Wright, Arthur E.

    2014-07-09

    The objective of this study was to explore options for reducing peak cladding temperatures despite an increase in peak fuel temperatures. A 3D thermal-hydraulic model for a single TREAT fuel assembly was benchmarked to reproduce results obtained with previous thermal models developed for a TREAT HEU fuel assembly. In exercising this model, and variants thereof depending on the scope of analysis, various options were explored to reduce the peak cladding temperatures.

  19. μπ: A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.

  20. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.

  1. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE PAGES

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.; ...

    2017-06-13

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.

  2. Integrative care for the management of low back pain: use of a clinical care pathway.

    PubMed

    Maiers, Michele J; Westrom, Kristine K; Legendre, Claire G; Bronfort, Gert

    2010-10-29

    For the treatment of chronic back pain, it has been theorized that integrative care plans can lead to better outcomes than those achieved by monodisciplinary care alone, especially when using a collaborative, interdisciplinary, and non-hierarchical team approach. This paper describes the use of a care pathway designed to guide treatment by an integrative group of providers within a randomized controlled trial. A clinical care pathway was used by a multidisciplinary group of providers, which included acupuncturists, chiropractors, cognitive behavioral therapists, exercise therapists, massage therapists and primary care physicians. Treatment recommendations were based on an evidence-informed practice model, and reached by group consensus. Research study participants were empowered to select one of the treatment recommendations proposed by the integrative group. Common principles and benchmarks were established to guide treatment management throughout the study. Thirteen providers representing 5 healthcare professions collaborated to provide integrative care to study participants. On average, 3 to 4 treatment plans, each consisting of 2 to 3 modalities, were recommended to study participants. Exercise, massage, and acupuncture were the modalities most commonly recommended by the team and selected by study participants. Changes to care commonly incorporated cognitive behavioral therapy into treatment plans. This clinical care pathway was a useful tool for the consistent application of evidence-based care for low back pain in the context of an integrative setting. ClinicalTrials.gov NCT00567333.

  3. Evaluation of the ACEC Benchmark Suite for Real-Time Applications

    DTIC Science & Technology

    1990-07-23

    1.0 benchmark suite was analyzed with respect to its measuring of Ada real-time features such as tasking, memory management, input/output, scheduling...and delay statement, Chapter 13 features, pragmas, interrupt handling, subprogram overhead, numeric computations, etc. For most of the features that...meant for programming real-time systems. The ACEC benchmarks have been analyzed extensively with respect to their measuring of Ada real-time features

  4. Seismo-acoustic ray model benchmarking against experimental tank data.

    PubMed

    Camargo Rodríguez, Orlando; Collis, Jon M; Simpson, Harry J; Ey, Emanuel; Schneiderwind, Joseph; Felisberto, Paulo

    2012-08-01

    Acoustic predictions of the recently developed traceo ray model, which accounts for bottom shear properties, are benchmarked against tank experimental data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic approximation model benchmarked in similar conditions. The results of benchmarking are important both as a preliminary experimental validation of the model and as a demonstration of the reliability of the ray approach for seismo-acoustic applications.

  5. Modification and benchmarking of MCNP for low-energy tungsten spectra.

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-12-01

    The MCNP Monte Carlo radiation transport code was modified for diagnostic medical physics applications. In particular, the modified code was thoroughly benchmarked for the production of polychromatic tungsten x-ray spectra in the 30-150 kV range. Validating the modified code for coupled electron-photon transport with benchmark spectra was supplemented with independent electron-only and photon-only transport benchmarks. Major revisions to the code included the proper treatment of characteristic K x-ray production and scoring, new impact ionization cross sections, and new bremsstrahlung cross sections. Minor revisions included updated photon cross sections, electron-electron bremsstrahlung production, and K x-ray yield. The modified MCNP code is benchmarked to electron backscatter factors, x-ray spectra production, and primary and scatter photon transport.

  6. Pynamic: the Python Dynamic Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, G L; Ahn, D H; de Supinski, B R

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  7. 8 CFR 212.17 - Applications for the exercise of discretion relating to U nonimmigrant status.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 8 Aliens and Nationality 1 2014-01-01 2014-01-01 false Applications for the exercise of discretion... INADMISSIBLE ALIENS; PAROLE § 212.17 Applications for the exercise of discretion relating to U nonimmigrant....C. 1182(d)(14), if it determines that it is in the public or national interest to exercise...

  8. 8 CFR 1212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 8 Aliens and Nationality 1 2012-01-01 2012-01-01 false Applications for exercise of discretion...; WAIVERS; ADMISSION OF CERTAIN INADMISSIBLE ALIENS; PAROLE § 1212.16 Applications for exercise of... exercise of discretion under section 212(d)(13) or (d)(3)(B) of the Act (waivers of inadmissibility) in...

  9. 8 CFR 212.17 - Applications for the exercise of discretion relating to U nonimmigrant status.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 8 Aliens and Nationality 1 2013-01-01 2013-01-01 false Applications for the exercise of discretion... INADMISSIBLE ALIENS; PAROLE § 212.17 Applications for the exercise of discretion relating to U nonimmigrant....C. 1182(d)(14), if it determines that it is in the public or national interest to exercise...

  10. 8 CFR 212.17 - Applications for the exercise of discretion relating to U nonimmigrant status.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 8 Aliens and Nationality 1 2012-01-01 2012-01-01 false Applications for the exercise of discretion... INADMISSIBLE ALIENS; PAROLE § 212.17 Applications for the exercise of discretion relating to U nonimmigrant....C. 1182(d)(14), if it determines that it is in the public or national interest to exercise...

  11. 8 CFR 1212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 8 Aliens and Nationality 1 2013-01-01 2013-01-01 false Applications for exercise of discretion...; WAIVERS; ADMISSION OF CERTAIN INADMISSIBLE ALIENS; PAROLE § 1212.16 Applications for exercise of... exercise of discretion under section 212(d)(13) or (d)(3)(B) of the Act (waivers of inadmissibility) in...

  12. 8 CFR 1212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 8 Aliens and Nationality 1 2014-01-01 2014-01-01 false Applications for exercise of discretion...; WAIVERS; ADMISSION OF CERTAIN INADMISSIBLE ALIENS; PAROLE § 1212.16 Applications for exercise of... exercise of discretion under section 212(d)(13) or (d)(3)(B) of the Act (waivers of inadmissibility) in...

  13. 8 CFR 1212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 8 Aliens and Nationality 1 2011-01-01 2011-01-01 false Applications for exercise of discretion...; WAIVERS; ADMISSION OF CERTAIN INADMISSIBLE ALIENS; PAROLE § 1212.16 Applications for exercise of... exercise of discretion under section 212(d)(13) or (d)(3)(B) of the Act (waivers of inadmissibility) in...

  14. 8 CFR 1212.16 - Applications for exercise of discretion relating to T nonimmigrant status.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Applications for exercise of discretion...; WAIVERS; ADMISSION OF CERTAIN INADMISSIBLE ALIENS; PAROLE § 1212.16 Applications for exercise of... exercise of discretion under section 212(d)(13) or (d)(3)(B) of the Act (waivers of inadmissibility) in...

  15. 26 CFR 11.401(d)(1)-1 - Nonbank trustees of trusts benefiting owner-employees.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... business of the applicant consists of exercising fiduciary powers similar to those he will exercise if his... personnel experienced in the administration of fiduciary powers similar to those he will exercise if his... directors of the applicant will be responsible for the proper exercise of fiduciary powers by the applicant...

  16. 26 CFR 11.401(d)(1)-1 - Nonbank trustees of trusts benefiting owner-employees.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... business of the applicant consists of exercising fiduciary powers similar to those he will exercise if his... personnel experienced in the administration of fiduciary powers similar to those he will exercise if his... directors of the applicant will be responsible for the proper exercise of fiduciary powers by the applicant...

  17. 26 CFR 11.401(d)(1)-1 - Nonbank trustees of trusts benefiting owner-employees.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... business of the applicant consists of exercising fiduciary powers similar to those he will exercise if his... personnel experienced in the administration of fiduciary powers similar to those he will exercise if his... directors of the applicant will be responsible for the proper exercise of fiduciary powers by the applicant...

  18. A comprehensive assessment of somatic mutation detection in cancer using whole-genome sequencing

    PubMed Central

    Alioto, Tyler S.; Buchhalter, Ivo; Derdak, Sophia; Hutter, Barbara; Eldridge, Matthew D.; Hovig, Eivind; Heisler, Lawrence E.; Beck, Timothy A.; Simpson, Jared T.; Tonon, Laurie; Sertier, Anne-Sophie; Patch, Ann-Marie; Jäger, Natalie; Ginsbach, Philip; Drews, Ruben; Paramasivam, Nagarajan; Kabbe, Rolf; Chotewutmontri, Sasithorn; Diessl, Nicolle; Previti, Christopher; Schmidt, Sabine; Brors, Benedikt; Feuerbach, Lars; Heinold, Michael; Gröbner, Susanne; Korshunov, Andrey; Tarpey, Patrick S.; Butler, Adam P.; Hinton, Jonathan; Jones, David; Menzies, Andrew; Raine, Keiran; Shepherd, Rebecca; Stebbings, Lucy; Teague, Jon W.; Ribeca, Paolo; Giner, Francesc Castro; Beltran, Sergi; Raineri, Emanuele; Dabad, Marc; Heath, Simon C.; Gut, Marta; Denroche, Robert E.; Harding, Nicholas J.; Yamaguchi, Takafumi N.; Fujimoto, Akihiro; Nakagawa, Hidewaki; Quesada, Víctor; Valdés-Mas, Rafael; Nakken, Sigve; Vodák, Daniel; Bower, Lawrence; Lynch, Andrew G.; Anderson, Charlotte L.; Waddell, Nicola; Pearson, John V.; Grimmond, Sean M.; Peto, Myron; Spellman, Paul; He, Minghui; Kandoth, Cyriac; Lee, Semin; Zhang, John; Létourneau, Louis; Ma, Singer; Seth, Sahil; Torrents, David; Xi, Liu; Wheeler, David A.; López-Otín, Carlos; Campo, Elías; Campbell, Peter J.; Boutros, Paul C.; Puente, Xose S.; Gerhard, Daniela S.; Pfister, Stefan M.; McPherson, John D.; Hudson, Thomas J.; Schlesner, Matthias; Lichter, Peter; Eils, Roland; Jones, David T. W.; Gut, Ivo G.

    2015-01-01

    As whole-genome sequencing for cancer genome analysis becomes a clinical tool, a full understanding of the variables affecting sequencing analysis output is required. Here using tumour-normal sample pairs from two different types of cancer, chronic lymphocytic leukaemia and medulloblastoma, we conduct a benchmarking exercise within the context of the International Cancer Genome Consortium. We compare sequencing methods, analysis pipelines and validation methods. We show that using PCR-free methods and increasing sequencing depth to ∼100× shows benefits, as long as the tumour:control coverage ratio remains balanced. We observe widely varying mutation call rates and low concordance among analysis pipelines, reflecting the artefact-prone nature of the raw data and lack of standards for dealing with the artefacts. However, we show that, using the benchmark mutation set we have created, many issues are in fact easy to remedy and have an immediate positive impact on mutation detection accuracy. PMID:26647970

  19. Benchmarking In-Flight Icing Detection Products for Future Upgrades

    NASA Technical Reports Server (NTRS)

    Politovich, M. K.; Minnis, P.; Johnson, D. B.; Wolff, C. A.; Chapman, M.; Heck, P. W.; Haggerty, J. A.

    2004-01-01

    This paper summarizes the results of a benchmarking exercise conducted as part of the NASA supported Advanced Satellite Aviation-Weather Products (ASAP) Program. The goal of ASAP is to increase and optimize the use of satellite data sets within the existing FAA Aviation Weather Research Program (AWRP) Product Development Team (PDT) structure and to transfer advanced satellite expertise to the PDTs. Currently, ASAP fosters collaborative efforts between NASA Laboratories, the University of Wisconsin Cooperative Institute for Meteorological Satellite Studies (UW-CIMSS), the University of Alabama in Huntsville (UAH), and the AWRP PDTs. This collaboration involves the testing and evaluation of existing satellite algorithms developed or proposed by AWRP teams, the introduction of new techniques and data sets to the PDTs from the satellite community, and enhanced access to new satellite data sets available through CIMSS and NASA Langley Research Center for evaluation and testing.

  20. Source-term development for a contaminant plume for use by multimedia risk assessment models

    NASA Astrophysics Data System (ADS)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    2000-02-01

    Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.

  1. Feedback on the Surveillance 8 challenge: Vibration-based diagnosis of a Safran aircraft engine

    NASA Astrophysics Data System (ADS)

    Antoni, Jérôme; Griffaton, Julien; André, Hugo; Avendaño-Valencia, Luis David; Bonnardot, Frédéric; Cardona-Morales, Oscar; Castellanos-Dominguez, German; Daga, Alessandro Paolo; Leclère, Quentin; Vicuña, Cristián Molina; Acuña, David Quezada; Ompusunggu, Agusmian Partogi; Sierra-Alonso, Edgar F.

    2017-12-01

    This paper presents the content and outcomes of the Safran contest organized during the International Conference Surveillance 8, October 20-21, 2015, at the Roanne Institute of Technology, France. The contest dealt with the diagnosis of a civil aircraft engine based on vibration data measured in a transient operating mode and provided by Safran. Based on two independent exercises, the contest offered the possibility to benchmark current diagnostic methods on real data supplemented with several challenges. Outcomes of seven competing teams are reported and discussed. The object of the paper is twofold. It first aims at giving a picture of the current state-of-the-art in vibration-based diagnosis of rolling-element bearings in nonstationary operating conditions. Second, it aims at providing the scientific community with a benchmark and some baseline solutions. In this respect, the data used in the contest are made available as supplementary material.

  2. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    PubMed

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    PubMed Central

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduced our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478
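
    The ROC/AUC yardstick these two records rely on can be sketched in a few lines; the scoring function below is a generic rank-based AUC, and the scores and labels are made-up toy values, not code or data from the study.

```python
# Rank-based (Mann-Whitney) ROC AUC: the probability that a randomly
# chosen active outscores a randomly chosen decoy. Pure-stdlib sketch.
def roc_auc(scores, labels):
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        # Group tied scores and give each member the average 1-based rank.
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = sum(r for r, l in zip(ranks, labels) if l == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# Toy benchmarking set: 1 = active ligand, 0 = decoy; higher score = better.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(roc_auc(scores, labels))  # ~0.89: actives mostly rank above decoys
```

    A ranking based only on simple molecular properties should score near 0.5 on an unbiased set; an AUC well above 0.5 for such a trivial ranking is a symptom of the "artificial enrichment" bias discussed above.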

  4. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... maintain a process for ensuring ongoing broad-based participation by community residents consistent with the approved application and planning process outlined in the strategic plan. (1) Continuous... benchmarks, the process it will use for reviewing goals and benchmarks and revising its strategic plan. (2...

  5. APPLICATION OF BENCHMARK DOSE METHODOLOGY TO DATA FROM PRENATAL DEVELOPMENTAL TOXICITY STUDIES

    EPA Science Inventory

    The benchmark dose (BMD) concept was applied to 246 conventional developmental toxicity datasets from government, industry and commercial laboratories. Five modeling approaches were used, two generic and three specific to developmental toxicity (DT models). BMDs for both quantal ...

  6. Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data

    PubMed Central

    2014-01-01

    Background The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition of a correctly mapped read that takes into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data for which such a comparison has not yet been established. Conclusions A benchmark procedure to compare HTS data mappers is introduced with a new definition for the mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. 
The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
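
    The stricter correctness criterion described in the Results section (start and end positions plus indel and substitution counts, not start position alone) can be sketched as follows; the `Mapping` record and `pos_tol` tolerance are illustrative assumptions, not CuReSimEval's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Mapping:
    start: int   # 1-based mapped start position on the reference
    end: int     # mapped end position
    indels: int  # number of insertions + deletions in the alignment
    subs: int    # number of substitutions

def correctly_mapped(found: Mapping, truth: Mapping, pos_tol: int = 0) -> bool:
    """A read counts as correctly mapped only if start AND end positions
    match the simulated truth (within pos_tol) and the edit composition
    (indel and substitution counts) agrees -- not the start position alone."""
    return (abs(found.start - truth.start) <= pos_tol
            and abs(found.end - truth.end) <= pos_tol
            and found.indels == truth.indels
            and found.subs == truth.subs)

truth = Mapping(start=100, end=250, indels=1, subs=2)
print(correctly_mapped(Mapping(100, 250, 1, 2), truth))  # matches the truth
print(correctly_mapped(Mapping(100, 250, 0, 3), truth))  # right locus, wrong edits
```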

  7. Global-local methodologies and their application to nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1989-01-01

    An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.

  8. OWL2 benchmarking for the evaluation of knowledge based systems.

    PubMed

    Khan, Sher Afgun; Qadir, Muhammad Abdul; Abbas, Muhammad Azeem; Afzal, Muhammad Tanvir

    2017-01-01

    OWL2 semantics are becoming increasingly popular for real-world domain applications such as gene engineering and health management information systems. The present work identifies a research gap: negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e. data schema, workload and performance metrics. The proposed benchmark is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of these results, end users (i.e. domain experts) can select a KBS appropriate for their domain.

  9. Performance Evaluation and Benchmarking of Intelligent Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  10. Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016

    PubMed Central

    Novak, Domen; Sigrist, Roland; Gerig, Nicolas J.; Wyss, Dario; Bauer, René; Götz, Ulrich; Riener, Robert

    2018-01-01

    This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public. PMID:29375294

  11. Application of Benchmark Examples to Assess the Single and Mixed-Mode Static Delamination Propagation Capabilities in ANSYS

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation in commercial finite element codes based on the virtual crack closure technique (VCCT). The examples selected are based on two-dimensional finite element models of Double Cantilever Beam (DCB), End-Notched Flexure (ENF), Mixed-Mode Bending (MMB) and Single Leg Bending (SLB) specimens. First, the quasi-static benchmark examples were recreated for each specimen using the current implementation of VCCT in ANSYS. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in the finite element software. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for three-dimensional solid models is required.

  12. Methods and Issues for the Combined Use of Integral Experiments and Covariance Data: Results of a NEA International Collaborative Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmiotti, Giuseppe; Salvatores, Massimo

    2014-04-01

    The Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD) established a Subgroup (called “Subgroup 33”) in 2009 on “Methods and issues for the combined use of integral experiments and covariance data.” The first stage was devoted to producing the description of different adjustment methodologies and assessing their merits. A detailed document related to this first stage has been issued. Nine leading organizations (often with a long and recognized expertise in the field) have contributed: ANL, CEA, INL, IPPE, JAEA, JSI, NRG, IRSN and ORNL. In the second stage a practical benchmark exercise was defined in order to test the reliability of the nuclear data adjustment methodology. A comparison of the results obtained by the participants and major lessons learned in the exercise are discussed in the present paper, which summarizes individual contributions that often include several original developments not reported separately. The paper provides the analysis of the most important results of the adjustment of the main nuclear data of 11 major isotopes in a 33-group energy structure. This benchmark exercise was based on a set of 20 well-defined integral parameters from 7 fast assembly experiments. The exercise showed that, using a common shared set of integral experiments but different starting evaluated libraries and/or different covariance matrices, there is a good convergence of trends for adjustments. Moreover, a significant reduction of the original uncertainties is often observed. Using the a-posteriori covariance data, there is a strong reduction of the uncertainties of integral parameters for reference reactor designs, mainly due to the new correlations in the a-posteriori covariance matrix. Furthermore, criteria have been proposed and applied to verify the consistency of differential and integral data used in the adjustment. Finally, recommendations are given for an appropriate use of sensitivity analysis methods, and indications for future work are provided.
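
    At its core, an adjustment of this kind is a generalized-least-squares (Bayesian) update of the nuclear data against the integral experiments. The toy sketch below, with made-up sensitivities and covariances (not Subgroup 33 data), shows the two effects reported above: posterior variances shrink, and off-diagonal correlations appear in the a-posteriori covariance matrix.

```python
import numpy as np

# Toy GLS adjustment: prior data covariance M, sensitivities S of the
# integral parameters to the data, experimental covariance V, and C/E - 1
# discrepancies d. All numbers are illustrative, not evaluated-library data.
M = np.diag([0.04, 0.09])        # prior relative covariance of 2 nuclear data
S = np.array([[1.0, 0.5],
              [0.2, 1.0]])       # sensitivities of 2 integral experiments
V = np.diag([0.01, 0.01])        # experimental + calculational covariance
d = np.array([0.03, -0.02])      # calculated-vs-experimental discrepancies

G = S @ M @ S.T + V              # covariance of the discrepancies
K = M @ S.T @ np.linalg.inv(G)   # gain matrix
delta = K @ d                    # adjustment applied to the nuclear data
M_post = M - K @ S @ M           # a-posteriori covariance

print(np.diag(M_post) < np.diag(M))  # posterior variances are reduced
print(M_post[0, 1])                  # off-diagonal correlation appears
```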

  13. Benchmarking Ada tasking on tightly coupled multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Collard, Philippe; Goforth, Andre; Marquardt, Matthew

    1989-01-01

    The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.

  14. Shallow water models as tool for tsunami current predictions in ports and harbors. Validation with Tohoku 2011 field data

    NASA Astrophysics Data System (ADS)

    Gonzalez Vida, J. M., Sr.; Macias Sanchez, J.; Castro, M. J.; Ortega, S.

    2015-12-01

    Model ability to compute and predict tsunami flow velocities is of importance in risk assessment and hazard mitigation. Substantial damage can be produced by high velocity flows, particularly in harbors and bays, even when the wave height is small. Besides, an accurate simulation of tsunami flow velocities and accelerations is fundamental for advancing the study of tsunami sediment transport. These considerations led the National Tsunami Hazard Mitigation Program (NTHMP) to propose a benchmark exercise focused on modeling and simulating tsunami currents. Until recently, few direct measurements of tsunami velocities were available for comparing and validating model results. After the Tohoku 2011 event, many current-meter measurements were made, mainly in harbors and channels. In this work we present part of the contribution made by the EDANYA group from the University of Malaga to the NTHMP workshop organized at Portland (USA), 9-10 February 2015. We selected three out of the five proposed benchmark problems. Two of them consist of real observed data from the Tohoku 2011 event, one at Hilo Harbor (Hawaii) and the other at Tauranga Bay (New Zealand). The third consists of laboratory experimental data for the inundation of Seaside City in Oregon. For this model validation the Tsunami-HySEA model, developed by the EDANYA group, was used. The overall conclusion we could extract from this validation exercise is that the Tsunami-HySEA model performed well in all the proposed benchmark problems. The greater spatial variability of tsunami velocity, compared with wave height, makes its precise numerical representation more difficult. This larger variability in velocities is likely a result of the behaviour of the flow as it is channelized and as it flows around bathymetric highs and structures. Wave height, on the other hand, does not respond as strongly to channelized flow as current velocity does.

  15. Does feedback matter? Practice-based learning for medical students after a multi-institutional clinical performance examination.

    PubMed

    Srinivasan, Malathi; Hauer, Karen E; Der-Martirosian, Claudia; Wilkes, Michael; Gesundheit, Neil

    2007-09-01

    Achieving competence in 'practice-based learning' implies that doctors can accurately self-assess their clinical skills to identify behaviours that need improvement. This study examines the impact of receiving feedback via performance benchmarks on medical students' self-assessment after a clinical performance examination (CPX). The authors developed a practice-based learning exercise at 3 institutions following a required 8-station CPX for medical students at the end of Year 3. Standardised patients (SPs) scored students after each station using checklists developed by experts. Students assessed their own performance immediately after the CPX (Phase 1). One month later, students watched their videotaped performance and reassessed (Phase 2). Some students received performance benchmarks (their scores, plus normative class data) before the video review. Pearson's correlations between self-ratings and SP ratings were calculated for overall performance and specific skill areas (history taking, physical examination, doctor-patient communication) for Phase 1 and Phase 2. The 2 correlations were then compared for each student group (i.e. those who received and those who did not receive feedback). A total of 280 students completed both study phases. Mean CPX scores ranged from 51% to 71% of items correct overall and for each skill area. Phase 1 self-assessment correlated weakly with SP ratings of student performance (r = 0.01-0.16). Without feedback, Phase 2 correlations remained weak (r = 0.13-0.18; n = 109). With feedback, Phase 2 correlations improved significantly (r = 0.26-0.47; n = 171). Low-performing students showed the greatest improvement after receiving feedback. The accuracy of student self-assessment was poor after a CPX, but improved significantly with performance feedback (scores and benchmarks). Videotape review alone (without feedback) did not improve self-assessment accuracy.
Practice-based learning exercises that incorporate feedback to medical students hold promise to improve self-assessment skills.
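    The comparison at the heart of this study is Pearson's correlation between student self-ratings and standardised-patient ratings, computed before and after feedback. A minimal pure-Python sketch of that calculation (the scores below are made-up illustrative numbers, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical percent-correct scores for five students:
sp_scores   = [51, 58, 63, 66, 71]   # standardised-patient checklist scores
self_before = [70, 55, 72, 60, 68]   # self-assessment before any feedback
self_after  = [55, 57, 66, 64, 70]   # self-assessment after scores + benchmarks

r_before = pearson_r(sp_scores, self_before)  # near zero: poor self-assessment
r_after = pearson_r(sp_scores, self_after)    # higher: feedback aligns ratings
```

    In the study's terms, an increase from r_before to r_after corresponds to the improvement in self-assessment accuracy observed in the feedback group.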

  16. Performance Analysis of the ARL Linux Networx Cluster

    DTIC Science & Technology

    2004-06-01

    Benchmark codes included OVERFLOW, GAMESS, COBALT, LSDYNA, and FLUENT, with processors selected by the SGE scheduler; benchmarks on the Origin 3800 were executed using IRIX cpusets. The CFD case used for these benchmarks defines a missile with grid fins consisting of seventeen million cells [3].

  17. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study their effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment of mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  18. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  19. Benchmarking FEniCS for mantle convection simulations

    NASA Astrophysics Data System (ADS)

    Vynnytska, L.; Rognes, M. E.; Clark, S. R.

    2013-01-01

    This paper evaluates the usability of the FEniCS Project for mantle convection simulations by numerical comparison to three established benchmarks. The benchmark problems all concern convection processes in an incompressible fluid induced by temperature or composition variations, and cover three cases: (i) steady-state convection with depth- and temperature-dependent viscosity, (ii) time-dependent convection with constant viscosity and internal heating, and (iii) a Rayleigh-Taylor instability. These problems are modeled by the Stokes equations for the fluid and advection-diffusion equations for the temperature and composition. The FEniCS Project provides a novel platform for the automated solution of differential equations by finite element methods. In particular, it offers a significant flexibility with regard to modeling and numerical discretization choices; we have here used a discontinuous Galerkin method for the numerical solution of the advection-diffusion equations. Our numerical results are in agreement with the benchmarks, and demonstrate the applicability of both the discontinuous Galerkin method and FEniCS for such applications.
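    The temperature and composition fields in these benchmarks obey advection-diffusion equations of the form ∂T/∂t + u·∇T = κ∇²T. As a toy illustration of that equation (a 1D explicit finite-difference sketch in pure Python, not FEniCS and not the discontinuous Galerkin discretization used in the paper; all parameter values are made up):

```python
def advect_diffuse_step(T, u, kappa, dx, dt):
    """One explicit time step of 1D advection-diffusion with periodic boundaries.

    Upwind differencing for advection (assumes u > 0), central differencing
    for diffusion; stable for u*dt/dx <= 1 and kappa*dt/dx**2 <= 1/2.
    """
    n = len(T)
    T_new = T[:]
    for i in range(n):
        adv = -u * (T[i] - T[i - 1]) / dx                              # upwind advection
        dif = kappa * (T[(i + 1) % n] - 2 * T[i] + T[i - 1]) / dx**2   # central diffusion
        T_new[i] = T[i] + dt * (adv + dif)
    return T_new

# Toy setup: a hot blob advected to the right while slowly diffusing.
n, dx = 50, 1.0 / 50
T = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]
for _ in range(100):
    T = advect_diffuse_step(T, u=0.5, kappa=1e-4, dx=dx, dt=0.01)
```

    The scheme conserves the total heat content under periodic boundaries, which is one of the sanity checks a benchmark comparison like the paper's relies on.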

  20. The Suite for Embedded Applications and Kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-05-10

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed SEAK, a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions, and (b) to facilitate rigorous, objective, end-user evaluation of those solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software and hardware, we use an end-user blackbox evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  1. Benchmarking multimedia performance

    NASA Astrophysics Data System (ADS)

    Zandi, Ahmad; Sudharsanan, Subramania I.

    1998-03-01

    With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured and the system is classified accordingly. At the next step the performance of the system is measured. In many multimedia applications, such as DVD playback, the application needs to run at a specific rate; in this case the measurement of the excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.
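    For a rate-constrained workload like DVD playback, the "excess processing power" the abstract refers to can be sketched as per-frame headroom against the playback deadline (an illustrative sketch only; the frame times and the 30 fps budget below are made up, and real benchmarks measure this with hardware counters and timers):

```python
def headroom(frame_times, frame_budget):
    """Fraction of the per-frame time budget left unused in the worst case.

    A positive value is excess processing power; a negative value means
    the system cannot sustain the required playback rate at all.
    """
    worst = max(frame_times)
    return 1.0 - worst / frame_budget

# Hypothetical per-frame decode times (seconds) against a 30 fps budget:
times = [0.021, 0.025, 0.019, 0.028]
h = headroom(times, frame_budget=1 / 30)   # ~0.16: 16% headroom remains
```

    Ranking systems by headroom, rather than by raw throughput, captures the abstract's point that meeting the rate with margin is what matters for such applications.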

  2. Revel8or: Model Driven Capacity Planning Tool Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Liming; Liu, Yan; Bui, Ngoc B.

    2007-05-31

    Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8or. The tool suite adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net, and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench, and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement conducted automatically. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.

  3. Clomp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gylenhaal, J.; Bronevetsky, G.

    2007-05-25

    CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other performance impacts due to threading (such as NUMA memory layouts, memory contention, and cache effects) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than is required by many of our applications for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other threading effects. The CLOMP benchmark is highly configurable, allowing a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia benchmark suite for the Sequoia procurement.

  4. Benchmarking and validation activities within JEFF project

    NASA Astrophysics Data System (ADS)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  5. Deterministic Modeling of the High Temperature Test Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281-group energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% in the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U-235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model the rod positions were fixed. In addition, this work includes a brief study of a cross-section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.

  6. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    DTIC Science & Technology

    2017-04-13

    Applications ported to OmpSs included a basic image-processing algorithm, a mini-application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. In addition, several improvements were made to the OmpSs model, the dynamic load balancing library was ported to OmpSs, and several updates to the tools infrastructure were accomplished.

  7. Global-local methodologies and their application to nonlinear analysis. [for structural postbuckling study

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1986-01-01

    An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.

  8. Optimized selection of benchmark test parameters for image watermark algorithms based on Taguchi methods and corresponding influence on design decisions for real-world applications

    NASA Astrophysics Data System (ADS)

    Rodriguez, Tony F.; Cushman, David A.

    2003-06-01

    With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. Quantifying the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion of the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance if they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design-of-experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi loss function is proposed for an application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
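    The Taguchi loss function mentioned above is conventionally the quadratic L(y) = k(y - m)², which is zero at the target m and grows with squared deviation; averaging it over a population of cover works gives the generalization step the abstract describes. A minimal sketch (target and scale values are hypothetical, not those of the paper):

```python
def taguchi_loss(y, target, k=1.0):
    """Quadratic Taguchi loss: zero at the target, growing with squared deviation."""
    return k * (y - target) ** 2

def mean_loss(measurements, target, k=1.0):
    """Average loss over a population of cover works."""
    return sum(taguchi_loss(y, target, k) for y in measurements) / len(measurements)

# Hypothetical watermark-detection scores, with 1.0 as the target (perfect detection):
scores = [0.92, 0.97, 1.0, 0.88, 0.95]
avg = mean_loss(scores, target=1.0)
```

    In a designed experiment, the factor levels minimizing this average loss across the orthogonal array would be selected as the optimal benchmark parameters.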

  9. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1994 Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Mabrey, J.B.

    1994-07-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of those data. It compares the benchmarks and discusses their relative conservatism and utility.
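    The screening logic the report recommends, comparing an ambient concentration to every alternative benchmark and treating a NAWQC exceedance as decisive, can be sketched as follows (the benchmark names follow the abstract, but all threshold values are hypothetical, not those of the report):

```python
def screen_chemical(concentration, benchmarks):
    """Compare an ambient concentration to alternative screening benchmarks.

    benchmarks: dict mapping benchmark name -> threshold concentration.
    Returns (exceeded, is_coc): the list of benchmarks exceeded, and whether
    the chemical is automatically a contaminant of concern because an
    exceeded benchmark is a NAWQC (an ARAR). Other exceedances inform a
    judgment call based on their number and conservatism.
    """
    exceeded = [name for name, limit in benchmarks.items() if concentration > limit]
    is_coc = any("NAWQC" in name for name in exceeded)
    return exceeded, is_coc

# Hypothetical thresholds (ug/L) for a single chemical:
benchmarks = {
    "acute NAWQC": 120.0,
    "chronic NAWQC": 14.0,
    "SCV": 9.0,
    "lowest chronic value (fish)": 20.0,
}
exceeded, is_coc = screen_chemical(16.0, benchmarks)
```

    Here a measured 16 ug/L exceeds the chronic NAWQC and the SCV, so the chemical would be flagged as a contaminant of concern under the report's decision rule.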

  10. Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W. (Editor); Hardin, J. C. (Editor)

    1997-01-01

    The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.

  11. Proposing application of results in sport and exercise research reports.

    PubMed

    Knudson, Duane; Elliott, Bruce; Hamill, Joseph

    2014-09-01

    The application of sport and exercise research findings to practice requires careful interpretation and integration of evidence. This paper reviews principles of evidence-based practice and the application of research in sport and exercise in order to provide recommendations on developing appropriate application sections in research reports for sport and exercise journals. The strength of a recommendation for application falls into one of four levels, with potential applications qualified as strong, limited, preliminary, or hypothesized. Specific limitations that should be discussed when framing recommendations for practice are also noted for each of these levels; these should be useful for authors, and for practitioners and clinicians in interpreting the recommendations.

  12. Multirate Flutter Suppression System Design for the Benchmark Active Controls Technology Wing. Part 2; Methodology Application Software Toolbox

    NASA Technical Reports Server (NTRS)

    Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek

    2002-01-01

    To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report describes the user's manual and software toolbox developed at the University of Washington to design a multirate flutter suppression control law for the BACT wing.

  13. The art and science of using routine outcome measurement in mental health benchmarking.

    PubMed

    McKay, Roderick; Coombs, Tim; Duerden, David

    2014-02-01

    To report and critique the application of routine outcome measurement data when benchmarking Australian mental health services. The experience of the authors as participants in and facilitators of benchmarking activities is augmented by a review of the literature regarding mental health benchmarking in Australia. Although the published literature is limited, in practice routine outcome measures, in particular the Health of the Nation Outcome Scales (HoNOS) family of measures, are used in a variety of benchmarking activities. Their use in exploring similarities and differences in consumers between services, and in the outcomes of care, is illustrated. This requires the rigour of science in data management and interpretation, supplemented by the art that comes from clinical experience, a desire to reflect on clinical practice, and the flexibility to use incomplete data to explore clinical practice. Routine outcome measurement data can be used in a variety of ways to support mental health benchmarking. With the increasing sophistication of information development in mental health, the opportunity to become involved in benchmarking will continue to increase. The techniques used during benchmarking and the insights gathered may prove useful to support reflection on practice by psychiatrists and other senior mental health clinicians.

  14. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    PubMed Central

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  15. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    PubMed

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.

  16. Qualification of CASMO5 / SIMULATE-3K against the SPERT-III E-core cold start-up experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandi, G.; Moberg, L.

    SIMULATE-3K is a three-dimensional kinetic code applicable to LWR Reactivity Initiated Accidents. S3K has been used to calculate several internationally recognized benchmarks. However, the feedback models in the benchmark exercises differ from the feedback models that SIMULATE-3K uses for LWR reactors. For this reason, it is worth comparing the SIMULATE-3K capabilities for Reactivity Initiated Accidents against kinetic experiments. The Special Power Excursion Reactor Test III (SPERT-III) was a pressurized-water nuclear research facility constructed to analyze reactor kinetic behavior under initial conditions similar to those of commercial LWRs. The SPERT III E-core resembles a PWR in terms of fuel type, moderator, coolant flow rate, and system pressure. The initial test conditions (power, core flow, system pressure, core inlet temperature) are representative of cold start-up, hot start-up, hot standby, and hot full power. The qualification of S3K against the SPERT III E-core measurements is ongoing work at Studsvik. In this paper, the results for the 30 cold start-up tests are presented. The results show good agreement with the experiments for the main reactivity-initiated-accident parameters: peak power, energy release, and compensated reactivity. Predicted and measured peak powers differ by at most 13%. Measured and predicted reactivity compensations at the time of the peak power differ by less than 0.01 $. Predicted and measured energy releases differ by at most 13%. All differences are within the experimental uncertainty. (authors)

  17. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons of selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Preliminary analyses of WL experiment No. 701, space environment effects on operating fiber optic systems

    NASA Technical Reports Server (NTRS)

    Taylor, E. W.; Berry, J. N.; Sanchez, A. D.; Padden, R. J.; Chapman, S. P.

    1992-01-01

    A brief overview of the analyses performed to date on WL Experiment-701 is presented. Four active digital fiber optic links were directly exposed to the space environment for a period of 2114 days. The links were situated aboard the Long Duration Exposure Facility (LDEF) with the cabled, single fiber windings atop an experimental tray containing instrumentation for exercising the experiment in orbit. Despite the unplanned and prolonged exposure to trapped and galactic radiation, wide temperature extremes, atomic oxygen interactions, and micro-meteorite and debris impacts, in most instances the optical data links performed well within the experimental limits. Analysis of the recorded orbital data clearly indicates that fiber optic applications in space will meet with success. Ongoing tests and analysis of the experiment at the Phillips Laboratory's Optoelectronics Laboratory will expand this premise, and establish the first known and extensive database of active fiber optic link performance during prolonged space exposure. WL Exp-701 was designed as a feasibility demonstration for fiber optic technology in space applications, and to study the performance of operating fiber systems exposed to space environmental factors such as galactic radiation, and wide temperature cycling. WL Exp-701 is widely acknowledged as a benchmark accomplishment that clearly demonstrates, for the first time, that fiber optic technology can be successfully used in a variety of space applications.

  19. 42 CFR 440.370 - Economy and efficiency.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Economy and efficiency. 440.370 Section 440.370...-Equivalent Coverage § 440.370 Economy and efficiency. Benchmark and benchmark-equivalent coverage and any... requirements and other economy and efficiency principles that would otherwise be applicable to the services or...

  20. 42 CFR 440.370 - Economy and efficiency.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Economy and efficiency. 440.370 Section 440.370...-Equivalent Coverage § 440.370 Economy and efficiency. Benchmark and benchmark-equivalent coverage and any... requirements and other economy and efficiency principles that would otherwise be applicable to the services or...

  1. 42 CFR 440.370 - Economy and efficiency.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Economy and efficiency. 440.370 Section 440.370...-Equivalent Coverage § 440.370 Economy and efficiency. Benchmark and benchmark-equivalent coverage and any... requirements and other economy and efficiency principles that would otherwise be applicable to the services or...

  2. 42 CFR 440.370 - Economy and efficiency.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Economy and efficiency. 440.370 Section 440.370...-Equivalent Coverage § 440.370 Economy and efficiency. Benchmark and benchmark-equivalent coverage and any... requirements and other economy and efficiency principles that would otherwise be applicable to the services or...

  3. Using Grid Benchmarks for Dynamic Scheduling of Grid Applications

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert

    2003-01-01

    Navigation or dynamic scheduling of applications on computational grids can be improved through the use of an application-specific characterization of grid resources. Current grid information systems provide a description of the resources, but do not contain any application-specific information. We define a GridScape as the dynamic state of the grid resources. We measure the dynamic performance of these resources using grid benchmarks, then use the GridScape for automatic assignment of the tasks of a grid application to grid resources. The scalability of the system is achieved by limiting the navigation overhead to a few percent of the application's resource requirements. Our task submission and assignment protocol guarantees that the navigation system does not cause grid congestion. On a synthetic data mining application we demonstrate that GridScape-based task assignment reduces the application turnaround time.
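
    A minimal, generic sketch of benchmark-informed task assignment, in the spirit of (but not identical to) the GridScape approach described above: machine "speeds" stand in for benchmark measurements, and each task is greedily placed on the machine that would finish it soonest. All names and numbers are illustrative assumptions.

    ```python
    # Benchmark-informed greedy scheduling sketch (illustration, not the authors'
    # protocol). machine_speeds would come from benchmark runs in practice.

    def assign_tasks(task_costs, machine_speeds):
        """Greedily map task index -> machine index using measured machine speeds."""
        free_at = [0.0] * len(machine_speeds)  # time at which each machine is free
        assignment = {}
        # Place the most expensive tasks first (classic LPT heuristic).
        for t, cost in sorted(enumerate(task_costs), key=lambda x: -x[1]):
            # Pick the machine that would finish this task soonest.
            m = min(range(len(machine_speeds)),
                    key=lambda i: free_at[i] + cost / machine_speeds[i])
            assignment[t] = m
            free_at[m] += cost / machine_speeds[m]
        return assignment

    # Three tasks on two machines, where machine 1 is twice as fast as machine 0.
    assignment = assign_tasks([4.0, 2.0, 2.0], [1.0, 2.0])
    print(assignment)
    ```

    A real navigation system would also bound the benchmark-probing overhead, as the abstract notes, so that measurement itself does not congest the grid.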

  4. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  5. Application Exercises Improve Transfer of Statistical Knowledge in Real-World Situations

    ERIC Educational Resources Information Center

    Daniel, Frances; Braasch, Jason L. G.

    2013-01-01

    The present research investigated whether real-world application exercises promoted students' abilities to spontaneously transfer statistical knowledge and to recognize the use of statistics in real-world contexts. Over the course of a semester of psychological statistics, two classes completed multiple application exercises designed to mimic…

  6. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.
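
    The last characterization mentioned, the ratio of computation to memory operations, is often called arithmetic intensity. A small sketch, with counts derived by hand for an AXPY-like loop (`y[i] += a * x[i]`) as an assumed example rather than one of the paper's benchmarks:

    ```python
    # Hedged sketch: arithmetic intensity = flops per byte of memory traffic.
    # The operation counts below are hand-derived for y[i] += a * x[i] on doubles.

    def arithmetic_intensity(flops, bytes_moved):
        """Floating-point operations per byte of memory traffic."""
        return flops / bytes_moved

    n = 1_000_000
    flops = 2 * n            # one multiply + one add per element
    bytes_moved = 3 * 8 * n  # read x[i], read y[i], write y[i] (8-byte doubles)

    print(f"AXPY arithmetic intensity: "
          f"{arithmetic_intensity(flops, bytes_moved):.4f} flops/byte")
    ```

    Kernels with low intensity like this one are exactly the memory-bound cases where a processor-in-memory design such as VIRAM would be expected to help.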

  7. Validating vignette and conjoint survey experiments against real-world behavior

    PubMed Central

    Hainmueller, Jens; Hangartner, Dominik; Yamamoto, Teppei

    2015-01-01

    Survey experiments, like vignette and conjoint analyses, are widely used in the social sciences to elicit stated preferences and study how humans make multidimensional choices. However, there is a paucity of research on the external validity of these methods that examines whether the determinants that explain hypothetical choices made by survey respondents match the determinants that explain what subjects actually do when making similar choices in real-world situations. This study compares results from conjoint and vignette analyses on which immigrant attributes generate support for naturalization with closely corresponding behavioral data from a natural experiment in Switzerland, where some municipalities used referendums to decide on the citizenship applications of foreign residents. Using a representative sample from the same population and the official descriptions of applicant characteristics that voters received before each referendum as a behavioral benchmark, we find that the effects of the applicant attributes estimated from the survey experiments perform remarkably well in recovering the effects of the same attributes in the behavioral benchmark. We also find important differences in the relative performances of the different designs. Overall, the paired conjoint design, where respondents evaluate two immigrants side by side, comes closest to the behavioral benchmark; on average, its estimates are within 2 percentage points of the effects in the behavioral benchmark. PMID:25646415
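
    The headline comparison, attribute effects estimated in a survey versus in the behavioral benchmark, reduces to a difference-in-means gap. A toy sketch with entirely invented records (not the study's data):

    ```python
    # Toy illustration of the survey-vs-behavior comparison described above.
    # Each record is (attribute present?, supported?); all values are invented.

    def effect(records):
        """Effect of the attribute on support, in percentage points."""
        with_attr = [s for a, s in records if a]
        without = [s for a, s in records if not a]
        return 100.0 * (sum(with_attr) / len(with_attr)
                        - sum(without) / len(without))

    survey = [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (0, 0)]
    behavior = [(1, 1), (1, 0), (1, 1), (0, 0), (0, 0), (0, 0)]

    gap = abs(effect(survey) - effect(behavior))
    print(f"survey vs. behavioral gap: {gap:.1f} percentage points")
    ```

    The study's "within 2 percentage points" finding is an average of gaps of this kind over many attributes, estimated with regression rather than raw means.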

  8. Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A

    2011-01-01

    The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provide unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.

  9. Benchmarking Big Data Systems and the BigData Top100 List.

    PubMed

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TPC), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  10. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  11. Benchmarking the MCNP code for Monte Carlo modelling of an in vivo neutron activation analysis system.

    PubMed

    Natto, S A; Lewis, D G; Ryde, S J

    1998-01-01

    The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.
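
    One of the benchmarking checks described above, comparing the depth of maximum thermal fluence between measurement (3.2 cm) and calculation (3.0 cm), can be sketched simply. The profile values below are invented placeholders, not the Swansea data:

    ```python
    # Illustrative sketch of locating the depth of peak thermal-neutron fluence
    # in measured and calculated profiles. All values are invented, not the
    # Swansea IVNAA data.

    def depth_of_max(depths_cm, fluence):
        """Depth at which the fluence profile peaks."""
        return max(zip(fluence, depths_cm))[1]

    depths = [1.0, 2.0, 3.0, 4.0, 5.0]
    measured = [0.55, 0.80, 0.95, 0.90, 0.60]    # invented profile, peaks near 3 cm
    calculated = [0.50, 0.85, 0.97, 0.88, 0.58]  # invented MCNP-style profile

    print(depth_of_max(depths, measured), depth_of_max(depths, calculated))
    ```

    In practice the measured points carry 5-10% statistical uncertainty, so agreement is judged against that band rather than by exact equality.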

  12. Application of Science and Medicine to Sport.

    ERIC Educational Resources Information Center

    Taylor, Albert W., Ed.

    Great progress has been made in recent years in the scientific study of exercise and application to sport. This book provides an analysis of the state of physiological and clinical knowledge related to exercise and sports. The three sections--medicine and physical activity, science and exercise, and practical application to sport--cover a variety…

  13. Squat Biomechanical Modeling Results from Exercising on the Hybrid Ultimate Lifting Kit

    NASA Technical Reports Server (NTRS)

    Gallo, Christopher A.; Thompson, William K.; Lewandowski, Beth E.; Jagodnik, Kathleen M.

    2016-01-01

    Long duration space travel will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited and therefore compact resistance exercise device prototypes are being developed. The Advanced Resistive Exercise Device (ARED) currently on the ISS is being used as a benchmark for the functional performance of these new devices. Biomechanical data collection and computational modeling aid the device design process by quantifying the joint torques and the musculoskeletal forces that occur during exercises performed on the prototype devices. The computational models currently under development utilize the OpenSim software, an open source code for musculoskeletal modeling, with biomechanical input data from test subjects for estimation of muscle and joint loads. The subjects are instrumented with reflective markers for motion capture data collection while exercising on the Hybrid Ultimate Lifting Kit (HULK) prototype device. Ground reaction force data is collected with force plates under the feet and device loading is recorded through load cells internal to the HULK. Test variables include applied device load, narrow or wide foot stance, slow or fast cadence and the harness or long bar interface between the test subject and the device. Data is also obtained using free weights for a comparison to the resistively loaded exercise device. This data is input into the OpenSim biomechanical model, which has been scaled to match the anthropometrics of the test subject, to calculate the body loads. 
The focus of this presentation is to summarize the results from the full squat exercises across the different test variables.

  14. Facial Emotion Recognition System – A Machine Learning Approach

    NASA Astrophysics Data System (ADS)

    Ramalingam, V. V.; Pandian, A.; Jayakumar, Lavanya

    2018-04-01

    Facial expression is a key medium of human communication and can be exercised in multiple real-world systems. One crucial stage in facial expression recognition is to accurately select emotional features. This paper proposes an expression recognition scheme applying evolutionary Particle Swarm Optimization (PSO)-based feature selection. The system first employs a modified local-pattern descriptor that encodes horizontal and vertical neighboring pixel contrasts, to achieve a discriminative initial expression representation. A PSO variant embedded with the concept of a micro Genetic Algorithm (mGA), called mGA-embedded PSO, is then applied for feature selection. The technique incorporates a nonreplaceable memory, a small-population secondary swarm, a new velocity-updating strategy, and a subdimension-based in-depth local facial-feature search. The cooperation of local exploitation and global exploration mitigates the premature-convergence problem of conventional PSO. Multiple classifiers are used to recognize different facial expressions. Based on extensive within- and cross-domain evaluation of images from the extended Cohn-Kanade and MMI benchmark databases, the proposed approach outperforms state-of-the-art PSO variants, conventional PSO, the classical GA, and other related facial expression recognition systems by a significant margin. The approach is also extended to a motion-based FER application that combines patch-based Gabor features with temporal information across multiple frames.
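
    The core idea, a swarm searching over binary feature-subset masks, can be illustrated with a deliberately simplified sketch. This is NOT the paper's mGA-embedded PSO; the fitness function, feature indices, and update rule below are all synthetic stand-ins.

    ```python
    # Minimal, generic binary-PSO feature-selection sketch (not the paper's
    # mGA-embedded PSO). The fitness function is a synthetic stand-in that
    # rewards a hypothetical set of "informative" features.
    import random

    random.seed(0)

    N_FEATURES = 8
    USEFUL = {1, 3, 6}  # hypothetical informative features the fitness rewards

    def fitness(mask):
        """Reward useful features; penalize subset size (sparsity pressure)."""
        hits = sum(1 for i in USEFUL if mask[i])
        return hits - 0.1 * sum(mask)

    def pso_select(n_particles=12, iters=40):
        particles = [[random.randint(0, 1) for _ in range(N_FEATURES)]
                     for _ in range(n_particles)]
        best = max(particles, key=fitness)[:]
        for _ in range(iters):
            for p in particles:
                for i in range(N_FEATURES):
                    # Velocity update collapsed to: drift toward the global
                    # best, with occasional random flips for exploration.
                    if random.random() < 0.3:
                        p[i] = best[i]
                    elif random.random() < 0.1:
                        p[i] = 1 - p[i]
            candidate = max(particles, key=fitness)
            if fitness(candidate) > fitness(best):
                best = candidate[:]
        return best

    mask = pso_select()
    print("selected features:", [i for i, bit in enumerate(mask) if bit])
    ```

    The paper's contribution lies precisely in what this sketch omits: the mGA-driven subswarm, the nonreplaceable memory, and the subdimension-based local search that counter premature convergence.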

  15. 76 FR 54209 - Corrosion-Resistant Carbon Steel Flat Products From the Republic of Korea: Preliminary Results of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    ... description of the merchandise is dispositive. Subsidies Valuation Information A. Benchmarks for Short-Term Financing For those programs requiring the application of a won-denominated, short-term interest rate... Issues and Decision Memorandum (CORE from Korea 2006 Decision Memorandum) at ``Benchmarks for Short-Term...

  16. A review of the current state-of-the-art methodology for handling bias and uncertainty in performing criticality safety evaluations. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disney, R.K.

    1994-10-01

    The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a "site" perception to a more uniform or "national" perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.

  17. Verification and benchmark testing of the NUFT computer code

    NASA Astrophysics Data System (ADS)

    Lee, K. H.; Nitao, J. J.; Kulshrestha, A.

    1993-10-01

    This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.

  18. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  19. [Do you mean benchmarking?].

    PubMed

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities against quality standards. The proposed methodology is illustrated by benchmarking case studies performed in healthcare facilities on items such as nosocomial infections or the organization of surgery facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-scorecard figures and mappings, so that comparison between different anesthesia and intensive care services that are willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate when detailed tariffs of activities are implemented.

  20. Navigation in Grid Space with the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a navigational tool for computational grids. The navigational process is based on measuring the grid characteristics with the NAS Grid Benchmarks (NGB) and using the measurements to assign tasks of a grid application to the grid machines. The tool allows the user to explore the grid space and to navigate the execution of a grid application to minimize its turnaround time. We introduce the notion of gridscape as a user view of the grid and show how it can be measured by NGB. Then we demonstrate how the gridscape can be used with two different schedulers to navigate a grid application through a rudimentary grid.

  1. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. Finally, it discusses opportunities and challenges for future developments in these fields.

  2. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE PAGES

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.; ...

    2016-03-07

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. Finally, it discusses opportunities and challenges for future developments in these fields.

  3. Benchmarking and the laboratory

    PubMed Central

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112
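
    The inter-laboratory comparisons described above, such as the Q-Probes turnaround-time study, boil down to ranking one site's indicator against its peers. A small sketch with invented numbers (not from either benchmarking scheme):

    ```python
    # Sketch of a simple laboratory benchmarking comparison: rank one lab's
    # median turnaround time (TAT) against peer labs. All numbers are invented.

    def percentile_rank(value, peers):
        """Percent of peer values this value beats (lower TAT is better)."""
        return 100.0 * sum(1 for p in peers if value < p) / len(peers)

    peer_medians_min = [42, 38, 55, 61, 47, 50, 39, 58]  # invented peer medians
    our_median_min = 44

    rank = percentile_rank(our_median_min, peer_medians_min)
    print(f"faster than {rank:.1f}% of peer laboratories")
    ```

    Schemes like the Pathology Report extend this one-indicator comparison across ten areas of performance, which is what makes the resulting profile useful for demonstrating a cost-effective service.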

  4. Incentive structure in team-based learning: graded versus ungraded Group Application exercises.

    PubMed

    Deardorff, Adam S; Moore, Jeremy A; McCormick, Colleen; Koles, Paul G; Borges, Nicole J

    2014-04-21

    Previous studies on team-based learning (TBL) in medical education demonstrated improved learner engagement, learner satisfaction, and academic performance; however, a paucity of information exists on modifications of the incentive structure of "traditional" TBL practices. The current study investigates the impact of modification to conventional Group Application exercises by examining student preference and student perceptions of TBL outcomes when Group Application exercises are excluded from TBL grades. During the 2009-2010 and 2010-2011 academic years, 175 students (95.6% response rate) completed a 22-item multiple choice survey followed by 3 open response questions at the end of their second year of medical school. These students had participated in a TBL supplemented preclinical curriculum with graded Group Application exercises during year one and ungraded Group Application exercises during year two of medical school. Chi-square analyses showed significant differences between grading categories for general assessment of TBL, participation and communication, intra-team discussion, inter-team discussion, student perceptions of their own effort and development of teamwork skills. Furthermore, 83.8% of students polled prefer ungraded Group Application exercises with only 7.2% preferring graded and 9.0% indicating no preference. The use of ungraded Group Application exercises appears to be a successful modification of TBL, making it more "student-friendly" while maintaining the goals of active learning and development of teamwork skills.
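
    The chi-square analyses reported above test whether response distributions differ between the graded and ungraded conditions. A self-contained sketch of the statistic on an invented 2x2 preference table (not the study's counts):

    ```python
    # Pearson chi-square statistic for a contingency table (list of rows).
    # The table below is invented, not the study's data.

    def chi_square(table):
        """Pearson chi-square statistic of independence."""
        row_tot = [sum(r) for r in table]
        col_tot = [sum(c) for c in zip(*table)]
        grand = sum(row_tot)
        stat = 0.0
        for i, row in enumerate(table):
            for j, obs in enumerate(row):
                exp = row_tot[i] * col_tot[j] / grand  # expected count
                stat += (obs - exp) ** 2 / exp
        return stat

    # Rows: graded vs. ungraded cohort; columns: prefer / do not prefer.
    table = [[30, 70], [80, 20]]
    print(f"chi-square = {chi_square(table):.2f}")
    ```

    In a real analysis the statistic would be compared against the chi-square distribution with (rows-1)(cols-1) degrees of freedom to obtain a p value.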

  5. Clinical utility of measures of breathlessness.

    PubMed

    Cullen, Deborah L; Rodak, Bernadette

    2002-09-01

    The clinical utility of measures of dyspnea has been debated in the health care community. Although breathlessness can be evaluated with various instruments, the most effective dyspnea measurement tool for patients with chronic lung disease, or for measuring treatment effectiveness, remains uncertain. Understanding the evidence for the validity and reliability of these instruments may provide a basis for appropriate clinical application. We evaluated instruments designed to measure breathlessness, either as single-symptom or multidimensional instruments, on psychometric foundations such as validity, reliability, and discriminative and evaluative properties. Each dyspnea measurement instrument was classified so as to recommend its clinical application in terms of exercise, patient benchmarking, activities of daily living, patient outcomes, clinical trials, and responsiveness to treatment. Eleven dyspnea measurement instruments were selected. Each instrument was assessed as discriminative or evaluative and then analyzed as to its psychometric properties and purpose of design. Descriptive data from all studies were summarized according to their primary patient application (ie, chronic obstructive pulmonary disease, asthma, or other patient populations). The Borg Scale and the Visual Analogue Scale are applicable to exertion and thus can be applied to any cardiopulmonary patient to determine dyspnea. All other measures were determined appropriate for chronic obstructive pulmonary disease, whereas the Shortness of Breath Questionnaire can also be applied to cystic fibrosis and lung transplant patients. The most appropriate use of all the instruments was measuring effects on activities of daily living and benchmarking patient progress. Instruments that quantify function and health-related quality of life have great utility for documenting outcomes but may be limited for documenting treatment responsiveness in terms of clinically important changes. The dyspnea measurement instruments we studied meet important standards of validity and reliability. Discriminative measures have limited clinical utility and, when used for populations or conditions for which they were not designed or validated, may yield data that are not clinically relevant. Evaluative measures have greater clinical utility and can be applied for outcome purposes. Measures should be applied to the populations and conditions for which they were designed. The relationship between clinical therapies and the measurement of dyspnea as an outcome can develop as respiratory therapists become more comfortable with implementing dyspnea measurement instruments and using the data to improve patient treatment. Dyspnea evaluation should be considered for all clinical practice guidelines and care pathways.

  6. Develop applications based on android: Teacher Engagement Control of Health (TECH)

    NASA Astrophysics Data System (ADS)

    Sasmoko; Manalu, S. R.; Widhoyoko, S. A.; Indrianti, Y.; Suparto

    2018-03-01

    The physical and psychological condition of teachers is very important because it helps determine a positive and productive school climate, allowing them to practice their profession optimally. This research extends earlier work on the design of the ITEI application, which profiles teacher engagement in Indonesia; to optimize teachers' condition, an application is needed that can monitor teachers' health both physically and psychologically. The research method used is the neuroresearch method combined with IT system design for TECH, covering the server design, the database, and the Android TECH application interface. The study yielded 1) mental health benchmarks, 2) physical health benchmarks, and 3) the design of an Android application for Teacher Engagement Control of Health (TECH).

  7. Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++

    NASA Technical Reports Server (NTRS)

    Krishnan, Sanjeev; Bhandarkar, Milind; Kale, Laxmikant V.

    1996-01-01

    This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to easily experiment with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object-orientation provided by the Charm++ parallel object-oriented language, to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally we conclude with an evaluation of the methodology used.
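
    The report's emphasis on easily experimenting with domain decomposition strategies can be illustrated with a plain-Python sketch (not Charm++; the function name is invented): a 1-D block decomposition that assigns contiguous ranges of cells to workers, analogous to distributing the elements of a parallel object array.

    ```python
    # Illustrative 1-D block domain decomposition (plain Python, not
    # Charm++). Each (start, end) range would be owned by one worker
    # object, in the spirit of the parallel object array abstraction.

    def decompose(n_cells, n_chunks):
        """Split n_cells into n_chunks contiguous blocks, as evenly as
        possible. Returns a list of (start, end) half-open index ranges."""
        base, extra = divmod(n_cells, n_chunks)
        ranges, start = [], 0
        for i in range(n_chunks):
            size = base + (1 if i < extra else 0)  # spread the remainder
            ranges.append((start, start + size))
            start += size
        return ranges

    # Example: 10 cells over 3 workers
    print(decompose(10, 3))  # [(0, 4), (4, 7), (7, 10)]
    ```

    Swapping in a different `decompose` (cyclic, 2-D blocks, graph-partitioned) is exactly the kind of experimentation the abstraction is meant to make cheap.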

  8. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer

    PubMed Central

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training neural networks remains a challenging exercise in the machine learning domain. Traditional training algorithms tend to become trapped in local optima, leading to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular due to their robustness in overcoming the drawbacks of traditional algorithms. Accordingly, this paper proposes a hybrid training procedure in which the differential search (DS) algorithm is functionally integrated with particle swarm optimization (PSO). To avoid local trapping of the search procedure, a new population initialization scheme based on the logistic chaotic sequence is proposed, which enhances population diversity and aids the search capability. To demonstrate the effectiveness of the proposed hybrid RBF training algorithm, experimental analyses were performed on 7 publicly available benchmark datasets. Subsequently, experiments were conducted on a practical wind speed prediction application to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy. PMID:29768463

  9. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer.

    PubMed

    Rani R, Hannah Jessie; Victoire T, Aruldoss Albert

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training neural networks remains a challenging exercise in the machine learning domain. Traditional training algorithms tend to become trapped in local optima, leading to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular due to their robustness in overcoming the drawbacks of traditional algorithms. Accordingly, this paper proposes a hybrid training procedure in which the differential search (DS) algorithm is functionally integrated with particle swarm optimization (PSO). To avoid local trapping of the search procedure, a new population initialization scheme based on the logistic chaotic sequence is proposed, which enhances population diversity and aids the search capability. To demonstrate the effectiveness of the proposed hybrid RBF training algorithm, experimental analyses were performed on 7 publicly available benchmark datasets. Subsequently, experiments were conducted on a practical wind speed prediction application to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy.
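
    Population initialization via a logistic chaotic sequence, as described above, can be sketched as follows. This is a minimal illustration under stated assumptions: the map parameter r = 4.0 and the seed value are common illustrative choices, not the paper's settings.

    ```python
    # Sketch of population initialization via a logistic chaotic sequence.
    # r = 4.0 and x0 = 0.7 are illustrative assumptions.

    def logistic_chaotic_population(pop_size, dim, lo, hi, x0=0.7, r=4.0):
        """Generate pop_size candidate vectors in [lo, hi]^dim using the
        logistic map x_{k+1} = r * x_k * (1 - x_k), which for r = 4 is
        chaotic on (0, 1) and tends to spread small populations more
        diversely than naive pseudo-random draws."""
        x = x0
        population = []
        for _ in range(pop_size):
            vec = []
            for _ in range(dim):
                x = r * x * (1.0 - x)
                vec.append(lo + (hi - lo) * x)  # rescale (0,1) -> [lo, hi]
            population.append(vec)
        return population

    pop = logistic_chaotic_population(pop_size=20, dim=5, lo=-1.0, hi=1.0)
    print(len(pop), len(pop[0]))  # 20 5
    ```

    In the hybrid trainer, these vectors would seed the DS/PSO search over the RBF network's parameters (centers, widths, weights).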

  10. Vodcasts and active-learning exercises in a "flipped classroom" model of a renal pharmacotherapy module.

    PubMed

    Pierce, Richard; Fox, Jeremy

    2012-12-12

    To implement a "flipped classroom" model for a renal pharmacotherapy topic module and assess the impact on pharmacy students' performance and attitudes. Students viewed vodcasts (video podcasts) of lectures prior to the scheduled class and then discussed interactive cases of patients with end-stage renal disease in class. A process-oriented guided inquiry learning (POGIL) activity was developed and implemented that complemented, summarized, and allowed for application of the material contained in the previously viewed lectures. Students' performance on the final examination improved significantly compared with that of students who had completed the same module in a traditional classroom setting the previous year. Students' opinions of the POGIL activity and the flipped classroom instructional model were mostly positive. Implementing a flipped classroom model to teach a renal pharmacotherapy module resulted in improved student performance and favorable student perceptions of the instructional approach. Factors that may have contributed to students' improved scores include student-mediated contact with the course material prior to classes, benchmark and formative assessments administered during the module, and the interactive class activities.

  11. Results of Three Ongoing Beneficiary Surveys

    DTIC Science & Technology

    2011-01-25

    Slide excerpts (partially recoverable): special topics include other health insurance, unhealthy behaviors (tobacco use, obesity, nutrition, exercise), and preventive services (flu shots). The Consumer Assessment of Healthcare Providers & Systems (HCAHPS) survey, for which national benchmarks are available, includes a Communication with Nurses composite (rating scale: Always) covering whether nurses treat you with courtesy/respect, listen carefully to you, and explain things in a way you can understand.

  12. Employing Nested OpenMP for the Parallelization of Multi-Zone Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele

    2004-01-01

    In this paper we describe the parallelization of the multi-zone code versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms, and discuss OpenMP implementation issues which affect the performance of multi-level parallel applications.

  13. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard; however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement and delamination-length versus applied load/displacement relationships from a propagation analysis were compared with the benchmark results, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  14. Benchmarking worker nodes using LHCb productions and comparing with HEPSpec06

    NASA Astrophysics Data System (ADS)

    Charpentier, P.

    2017-10-01

    In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know its “power” with rather good precision. This allows, for example, pilot jobs to match a task for which the required CPU work is known, or to define the number of events to be processed when the CPU work per event is known. Otherwise one always runs the risk that the task is aborted because it exceeds the CPU capabilities of the resource. It also allows better accounting of the consumed resources. The traditional way CPU power has been estimated in WLCG since 2007 is with the HEP-SPEC06 (HS06) benchmark suite, which was verified at the time to scale properly with a set of typical HEP applications. However, the hardware architecture of processors has evolved, all WLCG experiments have moved to 64-bit applications, and they use compilation flags different from those advertised for running HS06. It is therefore interesting to check the scaling of HS06 with the HEP applications. For this purpose, we have been using CPU-intensive massive simulation productions from the LHCb experiment and compared their event throughput to the HS06 rating of the worker nodes. We also compared it with a much faster benchmark script used by the DIRAC framework, which LHCb uses to evaluate worker node performance at run time. This contribution reports the findings of these comparisons: the main observation is that the scaling with HS06 is no longer fulfilled, while the fast benchmark scales better but is less precise. One can also clearly see that some hardware or software features, when enabled on the worker nodes, may enhance their performance beyond expectation from either benchmark, depending on external factors.
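
    The scaling check described above amounts to asking whether the per-node ratio of event throughput to HS06 rating is roughly constant. A minimal sketch, with invented node numbers (not LHCb data):

    ```python
    # If HS06 scaling held, throughput / HS06 would be roughly constant
    # across worker nodes; a large relative spread signals scaling
    # violation. All numbers below are invented for illustration.

    def scaling_spread(throughput, hs06):
        """Mean and relative spread of the per-node throughput/HS06 ratio.
        A small spread means HS06 predicts application performance well."""
        ratios = [t / h for t, h in zip(throughput, hs06)]
        mean = sum(ratios) / len(ratios)
        spread = (max(ratios) - min(ratios)) / mean
        return mean, spread

    # Hypothetical nodes: (events/s per core, HS06 per core)
    throughput = [1.20, 1.45, 0.95, 1.60]
    hs06 = [10.0, 11.5, 9.0, 12.0]

    mean, spread = scaling_spread(throughput, hs06)
    print(f"mean ratio {mean:.3f}, relative spread {spread:.1%}")
    ```

    A spread of a few percent would support HS06 as a power unit; tens of percent, as observed in the paper's comparisons, would not.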

  15. Anharmonic Vibrational Spectroscopy on Metal Transition Complexes

    NASA Astrophysics Data System (ADS)

    Latouche, Camille; Bloino, Julien; Barone, Vincenzo

    2014-06-01

    Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems, facilitating the systematic interpretation of experimental data and the full characterization of complex molecules. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been carried out on organic molecules. Nevertheless, benchmarks for organometallic or inorganic metal complexes at this level are sorely lacking, despite the interest in these systems arising from their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications to systems of direct technological or biological interest.

  16. Benchmarks of programming languages for special purposes in the space station

    NASA Technical Reports Server (NTRS)

    Knoebel, Arthur

    1986-01-01

    Although Ada is likely to be chosen as the principal programming language for the Space Station, certain needs, such as expert systems and robotics, may be better served by special-purpose languages. The languages LISP and Prolog are studied and some benchmarks derived. The mathematical foundations of these languages are reviewed. Areas of the space station where automation and robotics might be applicable are identified. Benchmarks are designed that are functional, mathematical, relational, and expert in nature. The coding will depend on the particular versions of the languages that become available for testing.

  17. A high-fidelity airbus benchmark for system fault detection and isolation and flight control law clearance

    NASA Astrophysics Data System (ADS)

    Goupil, Ph.; Puyou, G.

    2013-12-01

    This paper presents a high-fidelity generic twin engine civil aircraft model developed by Airbus for advanced flight control system research. The main features of this benchmark are described to make the reader aware of the model complexity and representativeness. It is a complete representation including the nonlinear rigid-body aircraft model with a full set of control surfaces, actuator models, sensor models, flight control laws (FCL), and pilot inputs. Two applications of this benchmark in the framework of European projects are presented: FCL clearance using optimization and advanced fault detection and diagnosis (FDD).

  18. Human Research Program Advanced Exercise Concepts (AEC) Overview

    NASA Technical Reports Server (NTRS)

    Perusek, Gail; Lewandowski, Beth; Nall, Marsha; Norsk, Peter; Linnehan, Rick; Baumann, David

    2015-01-01

    Exercise countermeasures provide benefits that are crucial for successful human spaceflight, to mitigate the spaceflight physiological deconditioning which occurs during exposure to microgravity. The NASA Human Research Program (HRP) within the Human Exploration and Operations Mission Directorate (HEOMD) is managing next generation Advanced Exercise Concepts (AEC) requirements development and candidate technology maturation to Technology Readiness Level (TRL) 7 (ground prototyping and flight demonstration) for all exploration mission profiles from Multi Purpose Crew Vehicle (MPCV) Exploration Missions (up to 21 day duration) to Mars Transit (up to 1000 day duration) missions. These validated and optimized exercise countermeasures systems will be provided to the ISS Program and MPCV Program for subsequent flight development and operations. The International Space Station (ISS) currently has three major pieces of operational exercise countermeasures hardware: the Advanced Resistive Exercise Device (ARED), the second-generation (T2) treadmill, and the cycle ergometer with vibration isolation system (CEVIS). This suite of exercise countermeasures hardware serves as a benchmark and is a vast improvement over previous generations of countermeasures hardware, providing both aerobic and resistive exercise for the crew. However, vehicle and resource constraints for future exploration missions beyond low Earth orbit will require that the exercise countermeasures hardware mass, volume, and power be minimized, while preserving the current ISS capabilities or even enhancing these exercise capabilities directed at mission specific physiological functional performance and medical standards requirements. 
Further, mission-specific considerations such as preservation of sensorimotor function, autonomous and adaptable operation, integration with medical data systems, rehabilitation, and in-flight monitoring and feedback are being developed for integration with the exercise countermeasures systems. Numerous technologies have been considered and evaluated against HRP-approved functional device requirements for these extreme mission profiles, and include wearable sensors, exoskeletons, flywheel, pneumatic, and closed-loop microprocessor controlled motor driven systems. Each technology has unique advantages and disadvantages. The Advanced Exercise Concepts project oversees development of candidate next generation exercise countermeasures hardware, performs trade studies of current and state of the art exercise technologies, manages and supports candidate systems physiological evaluations with human test subjects on the ground, in flight analogs and flight. The near term goal is evaluation of candidate systems in flight, culminating in an integrated candidate next generation exercise countermeasures suite on the ISS which coalesces research findings from HRP disciplines in the areas of exercise performance for muscle, bone, cardiovascular, sensorimotor, behavioral health, and nutrition for optimal benefit to the crew.

  19. Biomechanical Modeling of Split-leg Squat and Heel Raise on the Hybrid Ultimate Lifting Kit (HULK)

    NASA Technical Reports Server (NTRS)

    Thompson, William K.; Gallo, Christopher A.; Lewandowski, Beth E.; Jagodnik, Kathleen M.; Humphreys, Brad; Funk, Justin; Funk, Nathan; Dewitt, John K.

    2016-01-01

    Long duration space travel will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited and therefore compact resistance exercise device prototypes are being developed. The Advanced Resistive Exercise Device (ARED) currently on the ISS is being used as a benchmark for the functional performance of these new devices. Biomechanical data collection and computational modeling aid the device design process by quantifying the joint torques and musculoskeletal forces that occur during exercises performed on the prototype devices. Computational models currently use OpenSim software, an open source code for musculoskeletal modeling, with biomechanical input data from subjects for estimation of muscle and joint loads. Subjects are instrumented with reflective markers for motion capture data collection while exercising on the Hybrid Ultimate Lifting Kit (HULK) prototype device. Ground reaction force data is collected with force plates under the feet and device loading is recorded through load cells internal to the HULK. This data is input into the OpenSim biomechanical model, which has been scaled to match the anthropometrics of the test subject, to calculate the loads on the body. Multiple exercises are performed and evaluated during a test session such as a full squat, single leg squat, heel raise and dead lift. Variables for these exercises include applied device load, narrow or wide foot stance, slow or fast cadence and the harness or long bar interface between the test subject and the device. 
Data from free weights are compared to the resistively loaded exercise device. The focus of this presentation is to summarize the results from the single-leg squat and heel raise exercises performed during three sessions in 2015. Differences in loading configuration, cadence, and stance produce differences in kinematics, joint torques, joint forces, and muscle forces.
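
    The quantity at the heart of the pipeline described above — the torque at a joint produced by an external load — can be illustrated with a toy planar, static example. This stands in for a full OpenSim musculoskeletal inverse-dynamics solve and is not the OpenSim API; the numbers and names are invented.

    ```python
    # Toy static 2-D joint torque from an external load: the planar
    # cross product r x F. A full inverse-dynamics solve (OpenSim)
    # additionally accounts for segment inertia and multiple joints.

    def joint_torque_2d(r, f):
        """Torque (N*m) about a joint from force f = (fx, fy) in newtons
        applied at moment arm r = (rx, ry) in meters: rx*fy - ry*fx."""
        rx, ry = r
        fx, fy = f
        return rx * fy - ry * fx

    # Hypothetical: a 600 N device load acting straight down at a
    # 0.05 m horizontal offset from the knee joint center.
    tau = joint_torque_2d(r=(0.05, 0.0), f=(0.0, -600.0))
    print(f"net knee torque: {tau:.1f} N*m")  # -30.0 N*m
    ```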

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henzlova, Daniela; Kouzes, R.; McElroy, R.

    International safeguards inspectorates (e.g., the International Atomic Energy Agency [IAEA] or Euratom) rely heavily on neutron assay techniques, and in particular on coincidence counters, for the verification of declared nuclear materials under safeguards and for monitoring purposes. While 3He was readily available, the reliability, safety, ease of use, gamma-ray insensitivity, and high intrinsic thermal neutron detection efficiency of 3He-based detectors obviated the need for alternative detector technologies. However, the recent decline of the 3He gas supply has triggered international efforts to develop and field neutron detectors that make use of alternative materials. In response to this global effort, the U.S. Department of Energy's (DOE) National Nuclear Security Administration (NNSA) and Euratom launched a joint effort aimed at bringing together international experts, technology users, and developers in the field of nuclear safeguards to discuss and evaluate the proposed 3He alternative materials and technologies. The effort involved a series of two workshops focused on detailed overviews and viability assessments of various 3He alternative technologies for use in nuclear safeguards applications. The key objective was to provide a platform for collaborative discussions and technical presentations organized in a compact, workshop-like format to stimulate interactions among the participants. The meetings culminated in a benchmark exercise providing a unique opportunity for the first inter-comparison of several available alternative technologies. This report provides an overview of the alternative technology efforts presented during the two workshops, along with a summary of the benchmarking activities and results. The workshop recommendations and key consensus observations are discussed in the report and used to outline a proposed path forward and future needs foreseeable in the area of 3He-alternative technologies.

  1. Deterministic Modeling of the High Temperature Test Reactor with DRAGON-HEXPEDITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Ortensi; M.A. Pope; R.M. Ferrer

    2010-10-01

    The Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-fuel-column thin annular core, and the fully loaded core critical condition with 30 fuel columns. Special emphasis is devoted to physical phenomena and artifacts in the HTTR that are similar to phenomena and artifacts in the NGNP base design. The DRAGON code is used in this study since it offers significant ease and versatility in modeling prismatic designs. DRAGON can generate transport solutions via Collision Probability (CP), Method of Characteristics (MOC), and Discrete Ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. The results from this study show reasonable agreement with the Monte Carlo methods in the calculation of the core multiplication factor, but a consistent bias of 2-3% with the experimental values is obtained. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement partially stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model the rod positions were fixed. In addition, this work includes a brief study of a cross-section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
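
    The isothermal temperature coefficient comparison above rests on standard definitions, which can be sketched directly: reactivity rho = (k - 1)/k, and the coefficient is the change in reactivity per unit temperature. The eigenvalues below are invented, not HTTR results.

    ```python
    # Isothermal temperature coefficient from two eigenvalue calculations
    # at different uniform core temperatures. k values are hypothetical.

    def reactivity(k):
        """Reactivity rho = (k - 1) / k for multiplication factor k."""
        return (k - 1.0) / k

    def isothermal_temp_coeff(k1, t1, k2, t2):
        """Temperature coefficient alpha_T = (rho2 - rho1) / (T2 - T1),
        in units of delta-rho per kelvin."""
        return (reactivity(k2) - reactivity(k1)) / (t2 - t1)

    # Hypothetical eigenvalues at two isothermal core temperatures:
    alpha = isothermal_temp_coeff(k1=1.0100, t1=300.0, k2=1.0020, t2=400.0)
    print(f"alpha_T = {alpha:.2e} /K")  # negative: k falls as T rises
    ```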

  2. A benchmark system to optimize our defense against an attack on the US food supply using the Risk Reduction Effectiveness and Capabilities Assessment Program.

    PubMed

    Hodoh, Ofia; Dallas, Cham E; Williams, Paul; Jaine, Andrew M; Harris, Curt

    2015-01-01

    A predictive system was developed and tested in a series of exercises with the objective of evaluating the preparedness and effectiveness of the multiagency response to food terrorism attacks. A computerized simulation model, the Risk Reduction Effectiveness and Capabilities Assessment Program (RRECAP), was developed to identify the key factors that influence the outcomes of an attack and to quantify the relative reduction of such outcomes attributable to each factor. The model was evaluated in a set of tabletop and full-scale exercises simulating biological and chemical attacks on the food system, with more than 300 participants representing more than 60 federal, state, local, and private sector agencies and organizations. The exercises showed that agencies could use RRECAP to identify and prioritize their advance preparation to mitigate such attacks with minimal expense. RRECAP also demonstrated the relative utility and limitations of the ability of medical resources to treat patients if responders do not recognize and mitigate the attack rapidly, and the exercise results showed that proper advance preparation would reduce these deficiencies. Using computer simulation to predict the medical outcomes of food supply attacks, identify optimal remediation activities, and quantify the benefits of various measures provides a significant tool to agencies in both the public and private sectors as they prepare for such an attack.

  3. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass.
It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
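
    The two-tier screening logic described above reduces to a simple comparison of an ambient concentration against upper and lower benchmarks. A minimal sketch (function name, return strings, and thresholds are illustrative, not from the report):

    ```python
    # Two-tier contaminant screening: compare an ambient concentration
    # to upper and lower screening benchmarks. Names and example
    # thresholds are invented for illustration.

    def screen_contaminant(conc, lower, upper, data_adequate=True):
        """Classify a chemical per the upper/lower benchmark scheme."""
        if conc >= upper:
            return "contaminant of concern (remedial action likely needed)"
        if conc >= lower:
            return "of concern unless data unreliable or comparison inappropriate"
        return ("not of concern" if data_adequate
                else "cannot be ruled out (inadequate data)")

    # Example: measured 12 ug/L against a lower benchmark of 5 ug/L
    # and an upper benchmark of 50 ug/L.
    print(screen_contaminant(12.0, lower=5.0, upper=50.0))
    ```

    In practice a chemical would be compared against the full set of alternative benchmarks, with NAWQC exceedance taking precedence as an ARAR.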

  4. Effects of the application of ankle functional rehabilitation exercise on the ankle joint functional movement screen and isokinetic muscular function in patients with chronic ankle sprain.

    PubMed

    Ju, Sung-Bum; Park, Gi Duck

    2017-02-01

    [Purpose] This study was conducted to investigate the effects of ankle functional rehabilitation exercise on ankle joint functional movement screen results and isokinetic muscular function in patients with chronic ankle sprain. [Subjects and Methods] In this study, 16 patients with chronic ankle sprain were randomized to an ankle functional rehabilitation exercise group (n=8) and a control group (n=8). The ankle functional rehabilitation exercise centered on a proprioceptive sense exercise program, which was applied 12 times for 2 weeks. To verify changes after the application, ankle joint functional movement screen scores and isokinetic muscular function were measured and analyzed. [Results] The ankle functional rehabilitation exercise group showed significant improvements in all items of the ankle joint functional movement screen and in isokinetic muscular function after the exercise, whereas the control group showed no difference after the application. [Conclusion] The ankle functional rehabilitation exercise program can be effectively applied in patients with chronic ankle sprain for the improvement of ankle joint functional movement screen score and isokinetic muscular function.

  5. Effects of the application of ankle functional rehabilitation exercise on the ankle joint functional movement screen and isokinetic muscular function in patients with chronic ankle sprain

    PubMed Central

    Ju, Sung-Bum; Park, Gi Duck

    2017-01-01

    [Purpose] This study was conducted to investigate the effects of ankle functional rehabilitation exercise on ankle joint functional movement screen results and isokinetic muscular function in patients with chronic ankle sprain. [Subjects and Methods] In this study, 16 patients with chronic ankle sprain were randomized to an ankle functional rehabilitation exercise group (n=8) and a control group (n=8). The ankle functional rehabilitation exercise centered on a proprioceptive sense exercise program, which was applied 12 times for 2 weeks. To verify changes after the application, ankle joint functional movement screen scores and isokinetic muscular function were measured and analyzed. [Results] The ankle functional rehabilitation exercise group showed significant improvements in all items of the ankle joint functional movement screen and in isokinetic muscular function after the exercise, whereas the control group showed no difference after the application. [Conclusion] The ankle functional rehabilitation exercise program can be effectively applied in patients with chronic ankle sprain for the improvement of ankle joint functional movement screen score and isokinetic muscular function. PMID:28265157

  6. Theory and use of modern microscopical methods with applications to studies of wetlands microbial community dynamics. Final performance reports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-12-31

    Funds were granted to the University of Southwestern Louisiana to coordinate and offer a summer enhancement institute for science teachers. Following are highlights from that institute: (1) 20 teachers from Louisiana attended the institute as students; (2) institute faculty included staff members from USL's Departments of Biology, Mathematics, and Education and 3 principal scientists plus technicians from the Southern Science Center; (3) the institute began June 5, 1995 and ended June 30, 1995, and it featured daily lectures, laboratory exercises, examinations, and field trips. Assignments for students included journal keeping, lesson plan development, and presentations, and the students' journal entries proved valuable for evaluating institute activities. Students received copies of lesson plans developed at the institute, videos entitled "Pond Life Diversity" and "Chesapeake: The Twilight Estuary," a guide to "Free-Living Freshwater Protozoa," a graphing calculator, a 2 x 2 slide set of pond life, software or hardware (selected by the teacher to meet specific needs), a field manual for water quality monitoring laboratory exercises (Project Green), and a book on Benchmarks for Science Literacy; (4) follow-up measures included a newsletter disseminated by USL but written with teacher input; making equipment (such as a trinocular compound microscope and video monitor) and materials and supplies available to the teachers and their students in the classroom; and mentoring between USL and SSC staff and the teachers during the school year. Attached to this report are copies of the institute agenda and lesson plans developed in the institute.

  7. 8 CFR 212.4 - Applications for the exercise of discretion under section 212(d)(1) and 212(d)(3).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 8 Aliens and Nationality 1 2011-01-01 2011-01-01 false Applications for the exercise of discretion under section 212(d)(1) and 212(d)(3). 212.4 Section 212.4 Aliens and Nationality DEPARTMENT OF HOMELAND... INADMISSIBLE ALIENS; PAROLE § 212.4 Applications for the exercise of discretion under section 212(d)(1) and 212...

  8. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
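Benchmarking orthology predictions against a reference set turns on the precision-recall trade-off the abstract mentions. A hedged sketch of the two measures over unordered gene pairs (this is a generic illustration, not code from the benchmark service):

```python
def precision_recall(predicted, reference):
    """Precision and recall of predicted ortholog pairs against a
    reference set. Pairs are stored as frozensets so that the order
    of the two genes within a pair is ignored."""
    predicted = {frozenset(p) for p in predicted}
    reference = {frozenset(p) for p in reference}
    true_pos = len(predicted & reference)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(reference) if reference else 0.0
    return precision, recall

p, r = precision_recall([("g1", "h1"), ("g2", "h3")],
                        [("g1", "h1"), ("g2", "h2")])
print(p, r)  # 0.5 0.5
```

Different applications weight these differently, which is why the exercise reports a battery of benchmarks rather than one score.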

  9. Discordance between Lifestyle-Related Health Practices and Beliefs of People Living in Kuwait: A Community-Based Study

    PubMed Central

    Alfadhli, Suad; Al-Mazeedi, Sabriyah; Bodner, Michael E.; Dean, Elizabeth

    2017-01-01

    Objective To examine the concordance between lifestyle practices and beliefs of people living in Kuwait, and between their lifestyle practices and established evidence-informed recommendations for health. Subjects and Methods A cross-sectional interview questionnaire study was conducted using a convenience sample of 100 adults living in Kuwait (age range 19-75 years). The interview included sections on demographics, and lifestyle-related practices and beliefs related to smoking, diet/nutrition, physical activity/exercise, sleep, and stress. Diet/nutrition and physical activity/exercise benchmarks were based on international standards. Analyses included descriptive statistics and the χ2 test. Results Beliefs about the importance of nutrition in lifestyle-related conditions were limited, and this was apparent in participants' dietary habits, e.g., low consumption of fruit/vegetables and multigrains: 16 (16%) and 9 (9%) met the recommended guidelines, respectively. Ninety-nine (99%) believed physical activity/exercise affects health overall, and 44 (44%) exercised regularly. Of the sample of 100, 20 (20%) exercised in accordance with evidence-based recommendations for maximal health. Compared with beliefs about other lifestyle-related behaviors/attributes, respondents believed nutrition contributed more than stress to heart disease, cancer, and stroke, and stress contributed more than nutrition to hypertension and diabetes. Conclusion In this study, our findings showed a discrepancy between lifestyle-related practices and beliefs, and between each of these and evidence-based recommendations for maximal health, i.e., not smoking, several servings of fruit and vegetables and whole-grain foods daily, healthy weight, restorative sleep, and low-to-moderate stress levels. PMID:27764822

  10. 76 FR 39143 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-05

    ... Rule Change To Increase the Position and Exercise Limit for Options on the Standard & Poor's[supreg... exercise limit applicable to options on the Standard and Poor's[supreg] Depositary Receipts (``SPDRs[supreg... increase the position and exercise limit applicable to options on SPDRs[supreg], which are trading under...

  11. Space Weather Action Plan Ionizing Radiation Benchmarks: Phase 1 update and plans for Phase 2

    NASA Astrophysics Data System (ADS)

    Talaat, E. R.; Kozyra, J.; Onsager, T. G.; Posner, A.; Allen, J. E., Jr.; Black, C.; Christian, E. R.; Copeland, K.; Fry, D. J.; Johnston, W. R.; Kanekal, S. G.; Mertens, C. J.; Minow, J. I.; Pierson, J.; Rutledge, R.; Semones, E.; Sibeck, D. G.; St Cyr, O. C.; Xapsos, M.

    2017-12-01

    Changes in the near-Earth radiation environment can affect satellite operations, astronauts in space, commercial space activities, and the radiation environment on aircraft at relevant latitudes or altitudes. Understanding the diverse effects of increased radiation is challenging, but producing ionizing radiation benchmarks will help address these effects. The following areas have been considered in addressing the near-Earth radiation environment: the Earth's trapped radiation belts, the galactic cosmic ray background, and solar energetic-particle events. The radiation benchmarks attempt to account for any change in the near-Earth radiation environment, which, under extreme cases, could present a significant risk to critical infrastructure operations or human health. The goal is for these ionizing radiation benchmarks, with associated confidence levels, to define at least the radiation intensity as a function of time, particle type, and energy for an occurrence frequency of 1 in 100 years, as well as an intensity level at the theoretical maximum for the event. In this paper, we present the benchmarks that address radiation levels at all applicable altitudes and latitudes in the near-Earth environment, the assumptions made and the associated uncertainties, and the next steps planned for updating the benchmarks.
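As a small worked example of what a "1 in 100 years" occurrence frequency implies: under the common simplifying assumption of independent years, the chance of seeing at least one benchmark-level event over an N-year period is 1 - (1 - 1/100)^N. A sketch (the assumption of independence is mine, not stated in the record):

```python
def prob_at_least_one(annual_rate, years):
    """Probability of at least one benchmark-level event over `years`,
    assuming independent years, each with probability `annual_rate`."""
    return 1.0 - (1.0 - annual_rate) ** years

print(round(prob_at_least_one(0.01, 10), 4))  # -> 0.0956
```

So even a 1-in-100-year event has roughly a one-in-ten chance of occurring during a decade of operations, which is why such benchmarks matter for infrastructure planning.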

  12. Translating an AI application from Lisp to Ada: A case study

    NASA Technical Reports Server (NTRS)

    Davis, Gloria J.

    1991-01-01

    A set of benchmarks was developed to test the performance of a newly designed computer executing both Lisp and Ada. Among these was AutoClassII, a large Artificial Intelligence (AI) application written in Common Lisp. The extraction of a representative subset of this complex application was aided by a Lisp Code Analyzer (LCA). The LCA enabled rapid analysis of the code, putting it in a concise and functionally readable form. An equivalent benchmark was created in Ada through manual translation of the Lisp version. A comparison of the execution results of both programs across a variety of compiler-machine combinations indicates that line-by-line translation coupled with analysis of the initial code can produce relatively efficient and reusable target code.

  13. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  14. ICSBEP Benchmarks For Nuclear Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briggs, J. Blair

    2005-05-24

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm/shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  15. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, Luiz C; Ivanov, E.

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data and capture, elastic, inelastic, and double-differential elastic cross sections. SAMMY fits R-matrix resonance parameters to the data using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the evaluation's performance in benchmark calculations.
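The generalized least-squares (Bayes) fitting attributed here to SAMMY has a standard linear update form: given prior parameters with covariance, a sensitivity matrix, and data covariance, one step yields updated parameters and an updated parameter covariance. The sketch below is a generic textbook version of that update, not the SAMMY implementation, and all variable names are illustrative:

```python
import numpy as np

def bayes_update(p, M, G, V, residual):
    """One linear generalized-least-squares (Bayes) update.
    p: prior parameters; M: prior parameter covariance;
    G: sensitivity matrix d(theory)/d(p); V: data covariance;
    residual: data minus theory evaluated at the prior parameters."""
    Vinv = np.linalg.inv(V)
    # Posterior covariance combines prior information with data information.
    M_new = np.linalg.inv(np.linalg.inv(M) + G.T @ Vinv @ G)
    # Posterior parameters shift toward the data by the weighted residual.
    p_new = p + M_new @ G.T @ Vinv @ residual
    return p_new, M_new

# Toy example: one parameter, two direct measurements of it.
p = np.array([0.0])
M = np.array([[100.0]])          # weak prior
G = np.array([[1.0], [1.0]])     # theory = p for both data points
V = np.eye(2) * 0.5
residual = np.array([1.0, 1.0])  # both data points read 1.0
p_new, M_new = bayes_update(p, M, G, V, residual)
print(p_new)
```

With a weak prior and two consistent measurements, the parameter moves essentially to the data value and its variance shrinks, which is the behavior the evaluation's covariance matrix captures for uncertainty propagation.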

  16. Measuring human capital cost through benchmarking in health care environment.

    PubMed

    Kocakülâh, Mehmet C; Harris, Donna

    2002-01-01

    Each organization should seek to maximize its human capital investments, which ultimately lead to increased profits and asset efficiency. Service companies utilize less capital equipment and more human productivity, customer service, and/or delivery of service as the product. By measuring human capital, one can understand what is happening, exercise some degree of control, and make positive changes. Senior management lives or dies by the numbers, and if Human Resources (HR) really wants to be a strategic business partner, it must be judged by the same standards as everyone else in the health care organization.

  17. Dysfunctional breathing and reaching one’s physiological limit as causes of exercise-induced dyspnoea

    PubMed Central

    Everard, Mark L.

    2016-01-01

    Key points Excessive exercise-induced shortness of breath is a common complaint. For some, exercise-induced bronchoconstriction is the primary cause, and for a small minority there may be an alternative organic pathology. However, for many, the cause will be simply reaching their physiological limit or a functional form of dysfunctional breathing, neither of which requires drug therapy. The physiological limit category includes deconditioned individuals, such as those who have been through intensive care and require rehabilitation, as well as the unfit and the fit competitive athlete who has reached their limit, with both of these latter groups requiring explanation and advice. Dysfunctional breathing is an umbrella term for an alteration in the normal biomechanical patterns of breathing that results in intermittent or chronic symptoms, which may be respiratory and/or nonrespiratory. This alteration may be due to structural causes or, much more commonly, be functional, as exemplified by thoracic pattern disordered breathing (PDB) and extrathoracic paradoxical vocal fold motion disorder (pVFMD). Careful history and examination together with spirometry may identify those likely to have PDB and/or pVFMD. Where there is doubt about aetiology, cardiopulmonary exercise testing may be required to identify the deconditioned, unfit, or fit individual reaching their physiological limit and PDB, while continuous laryngoscopy during exercise is increasingly becoming the benchmark for assessing extrathoracic causes. Accurate assessment and diagnosis can prevent excessive use of drug therapy and result in effective management of the cause of the individual's complaint through cost-effective approaches such as reassurance, advice, breathing retraining, and vocal exercises.
This review provides an overview of the spectrum of conditions that can present as exercise-induced breathlessness experienced by young subjects participating in sport and aims to promote understanding of the need for accurate assessment of an individual's symptoms. We will highlight the high incidence of nonasthmatic causes, which simply require reassurance or simple interventions from respiratory physiotherapists or speech pathologists. PMID:27408630

  18. AECM-4; Proceedings of the 4th International Symposium on Acoustic Emission from Composite Materials, Seattle, WA, July 27-31, 1992

    NASA Astrophysics Data System (ADS)

    Various papers on AE from composite materials are presented. Among the individual topics addressed are: acoustic analysis of transverse lamina cracking in CFRP laminates under tensile loading, characterization of fiber failure in graphite-epoxy (G/E) composites, application of AE in the study of microfissure damage to composites used in the aeronautic and space industries, interfacial shear properties and AE behavior of model aluminum and titanium matrix composites, amplitude distribution modelling and ultimate strength prediction of ASTM D-3039 G/E tensile specimens, an AE prefailure warning system for composite structural tests, characterization of failure mechanisms in G/E tensile test specimens using AE data, development of a standard testing procedure to yield an AE vs. strain curve, and a benchmark exercise on AE measurements from carbon fiber-epoxy composites. Also discussed are: interpretation of optically detected AE signals, acoustic emission monitoring of the fracture process of SiC/Al composites under cyclic loading, application of pattern recognition techniques to acousto-ultrasonic testing of Kevlar composite panels, AE for high-temperature monitoring of the processing of carbon/carbon composites, monitoring the resistance welding of thermoplastic composites through AE, plate wave AE in composite materials, determination of the elastic properties of composite materials using simulated AE signals, AE source location in thin plates using cross-correlation, and propagation of flexural mode AE signals in Gr/Ep composite plates.

  19. The EJES-3D tool for personalized prescription of exercise in axial spondyloarthritis through multimedia animations: pilot study.

    PubMed

    Flórez, Mariano Tomás; Almodóvar, Raquel; García Pérez, Fernando; Rodríguez Cambrón, Ana Belén; Carmona, Loreto; Pérez Manzanero, María Ángeles; Aboitiz Cantalapiedra, Juan; Urruticoechea-Arana, Ana; Rodríguez Lozano, Carlos J; Castro, Carmen; Fernández-Carballido, Cristina; de Miguel, Eugenio; Galíndez, Eva; Álvarez Vega, José Luis; Torre Alonso, Juan Carlos; Linares, Luis F; Moreno, Mireia; Navarro-Compán, Victoria; Juanola, Xavier; Zarco, Pedro

    2018-05-21

    To develop and evaluate a web application based on multimedia animations, combined with a training program, to improve the prescription of exercises in spondyloarthritis (SpA). After a review of exercises included in the main clinical trials and recommendations of international societies, a multidisciplinary team (rehabilitators, rheumatologists, physiotherapists, computer scientists, and graphic designers) developed a web application for the prescription of exercises (EJES-3D). Once completed, this was presented to 12 pairs of rehabilitators and rheumatologists from the same hospital in a workshop. Knowledge about exercise was tested in rheumatologists before and 6 months after the workshop, when they also evaluated the application. The EJES-3D application includes 38 multimedia videos and allows prescribing predesigned programs or customizing them. A patient can consult the prescribed exercises at any time from a device with an internet connection (mobile, tablet, or computer). The vast majority of the evaluators (89%) were satisfied or very satisfied and considered that their expectations regarding the usefulness of the web application had been met. They highlighted the ability to tailor exercises adapted to the different stages of the disease and the quality and variety of the videos. They also indicated some limitations of the application and operational problems. The EJES-3D tool was positively evaluated by experts in SpA, potentially the most demanding group of users with the most critical capacity. This allows a preliminary validation of the contents, usefulness, and ease of use. Analyzing and correcting the errors and limitations detected is helping us to improve the EJES-3D tool.

  20. Effects of different cooling treatments on water diffusion, microcirculation, and water content within exercised muscles: evaluation by magnetic resonance T2-weighted and diffusion-weighted imaging.

    PubMed

    Yanagisawa, Osamu; Takahashi, Hideyuki; Fukubayashi, Toru

    2010-09-01

    In this study, we determined the effects of different cooling treatments on exercised muscles. Seven adults underwent four post-exercise treatments (20-min ice-bag application, 60-min gel-pack application at 10 degrees C and 17 degrees C, and non-cooling treatment) with at least 1 week between treatments. Magnetic resonance diffusion- and T2-weighted images were obtained to calculate the apparent diffusion coefficients (apparent diffusion coefficient 1, which reflects intramuscular water diffusion and microcirculation, and apparent diffusion coefficient 2, which is approximately equal to the true diffusion coefficient that excludes as much of the effect of intramuscular microcirculation as possible) and the T2 values (intramuscular water content level) of the ankle dorsiflexors, respectively, before and after ankle dorsiflexion exercise and after post-exercise treatment. The T2 values increased significantly after exercise and returned to pre-exercise values after each treatment; no significant differences were observed among the four post-exercise treatments. Both apparent diffusion coefficients also increased significantly after exercise and decreased significantly after the three cooling treatments; no significant difference was detected among the three cooling treatments. Local cooling suppresses both water diffusion and microcirculation within exercised muscles. Moreover, although the treatment time was longer, adequate cooling effects could be achieved using the gel-pack applications at relatively mild cooling temperatures.
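The apparent diffusion coefficients discussed in this record come from the standard mono-exponential diffusion-weighted signal model, S = S0 * exp(-b * ADC); with signal intensities at two b-values the ADC follows from a log-ratio. A sketch using that standard relation (the numbers are toy values, not the study's data):

```python
import math

def apparent_diffusion_coefficient(s_low, s_high, b_low, b_high):
    """ADC from signal intensities at two diffusion weightings, using
    the mono-exponential model S = S0 * exp(-b * ADC), so
    ADC = ln(S_low / S_high) / (b_high - b_low)."""
    return math.log(s_low / s_high) / (b_high - b_low)

# Toy values: the signal halves when b increases by 1000 s/mm^2,
# giving ADC = ln(2)/1000, about 6.9e-4 mm^2/s.
adc = apparent_diffusion_coefficient(200.0, 100.0, 0.0, 1000.0)
print(adc)
```

Real acquisitions fit more b-values and directions, and separating true diffusion from microcirculation (as the study's two ADC variants do) requires additional modeling beyond this two-point estimate.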

  1. CSAR 2014: A Benchmark Exercise Using Unpublished Data from Pharma.

    PubMed

    Carlson, Heather A; Smith, Richard D; Damm-Ganamet, Kelly L; Stuckey, Jeanne A; Ahmed, Aqeel; Convery, Maire A; Somers, Donald O; Kranz, Michael; Elkins, Patricia A; Cui, Guanglei; Peishoff, Catherine E; Lambert, Millard H; Dunbar, James B

    2016-06-27

    The 2014 CSAR Benchmark Exercise was the last community-wide exercise that was conducted by the group at the University of Michigan, Ann Arbor. For this event, GlaxoSmithKline (GSK) donated unpublished crystal structures and affinity data from in-house projects. Three targets were used: tRNA (m1G37) methyltransferase (TrmD), Spleen Tyrosine Kinase (SYK), and Factor Xa (FXa). A particularly strong feature of the GSK data is its large size, which lends greater statistical significance to comparisons between different methods. In Phase 1 of the CSAR 2014 Exercise, participants were given several protein-ligand complexes and asked to identify the one near-native pose from among 200 decoys provided by CSAR. Though decoys were requested by the community, we found that they complicated our analysis. We could not discern whether poor predictions were failures of the chosen method or an incompatibility between the participant's method and the setup protocol we used. This problem is inherent to decoys, and we strongly advise against their use. In Phase 2, participants had to dock and rank/score a set of small molecules given only the SMILES strings of the ligands and a protein structure with a different ligand bound. Overall, docking was a success for most participants, much better in Phase 2 than in Phase 1. However, scoring was a greater challenge. No particular approach to docking and scoring had an edge, and successful methods included empirical, knowledge-based, machine-learning, shape-fitting, and even those with solvation and entropy terms. Several groups were successful in ranking TrmD and/or SYK, but ranking FXa ligands was intractable for all participants. 
Methods that were able to dock well across all submitted systems include MDock [1], Glide-XP [2], PLANTS [3], Wilma [4], Gold [5], SMINA [6], Glide-XP/PELE [2,7], FlexX [8], and MedusaDock [9]. In fact, the submission based on Glide-XP/PELE [2,7] cross-docked all ligands to many crystal structures, and it was particularly impressive to see success across an ensemble of protein structures for multiple targets. For scoring/ranking, submissions that showed statistically significant achievement include MDock [1] using ITScore [1,10] with a flexible-ligand term [11], SMINA [6] using Autodock Vina [12,13], FlexX [8] using HYDE [14], and Glide-XP [2] using XP DockScore [2] with and without ROCS [15] shape similarity [16]. Of course, these results are for only three protein targets, and many more systems need to be investigated to truly identify which approaches are more successful than others. Furthermore, our exercise is not a competition.
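Pose-prediction quality in exercises like this is reported as RMSD to the crystal-structure ligand. A minimal sketch of a plain coordinate RMSD over matched atoms (no atom-symmetry correction, which real evaluations typically apply; the example coordinates are made up):

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equally ordered lists of
    3-D atom coordinates. Assumes the lists match atom-for-atom."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must match atom-for-atom")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Two-atom toy ligand displaced by 1 A along z relative to the crystal pose.
pose = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
xtal = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
print(rmsd(pose, xtal))  # 1.0
```

A common convention treats poses under about 2 A RMSD as "near-native," which is the standard the exercise's docking phase is judged against.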

  2. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  3. Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2x over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
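The core pattern described above, timing a small parameter space of candidate implementations and keeping the fastest per problem size, can be sketched in a few lines. The kernel variants below are toy stand-ins (a dot product written two ways), not the paper's tensor-contraction code:

```python
import timeit

# Toy stand-ins for two differently optimized kernel implementations;
# real variants would differ in tiling or memory-access pattern.
def variant_loop(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def variant_genexpr(a, b):
    return sum(x * y for x, y in zip(a, b))

def pick_best(variants, problem_sizes, number=10, repeat=3):
    """Micro-benchmark each variant on each problem size and keep the
    fastest, in the spirit of parameterized micro-benchmarking."""
    best = {}
    for n in problem_sizes:
        a = [float(i) for i in range(n)]
        b = [float(i) for i in range(n)]
        timings = {
            name: min(timeit.repeat(lambda f=fn: f(a, b),
                                    number=number, repeat=repeat))
            for name, fn in variants.items()
        }
        best[n] = min(timings, key=timings.get)
    return best

variants = {"loop": variant_loop, "genexpr": variant_genexpr}
print(pick_best(variants, [100, 10000]))
```

The `lambda f=fn` default argument pins each variant inside the comprehension; the winning variant may legitimately differ across problem sizes, which is exactly the observation the approach exploits.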

  4. Feasibility of a Mobile Application to Enhance Swallowing Therapy for Patients Undergoing Radiation-Based Treatment for Head and Neck Cancer.

    PubMed

    Starmer, Heather M; Abrams, Rina; Webster, Kimberly; Kizner, Jennifer; Beadle, Beth; Holsinger, F Christopher; Quon, Harry; Richmon, Jeremy

    2018-04-01

    Dysphagia following treatment for head and neck cancer is one of the most significant morbidities impacting quality of life. Despite the value of prophylactic exercises to mitigate the impact of radiation on long-term swallowing function, adherence to treatment is limited. The purpose of this investigation was to explore the feasibility of a mobile health application to support patient adherence to swallowing therapy during radiation-based treatment. 36 patients undergoing radiation therapy were provided with the Vibrent™ mobile application as an adjunct to standard swallowing therapy. The application included exercise videos, written instructions, reminders, exercise logging, and educational content. 80% of participants used the app during treatment and logged an average of 102 exercise sessions over the course of treatment. 25% of participants logged at least two exercise sessions per day over the 7-week treatment period, and 53% recorded at least one session per day. Exit interviews regarding the patient experience with the Vibrent™ mobile application were largely positive, but also provided actionable strategies to improve future versions of the application. The Vibrent™ mobile application appears to be a tool that can be feasibly integrated into existing patient care practices and may assist patients in adhering to treatment recommendations and facilitate communication between patients and providers between encounters.

  5. The relevance of applying exercise training principles when designing therapeutic interventions for patients with inflammatory myopathies: a systematic review.

    PubMed

    Baschung Pfister, Pierrette; de Bruin, Eling D; Tobler-Ammann, Bernadette C; Maurer, Britta; Knols, Ruud H

    2015-10-01

    Physical exercise seems to be a safe and effective intervention in patients with inflammatory myopathy (IM). However, the optimal training intervention is not clear. To achieve an optimum training effect, physical exercise training principles must be considered, and to replicate research findings, the FITT components (frequency, intensity, time, and type) of exercise training should be reported. This review aims to evaluate exercise interventions in studies with IM patients in relation to (1) the application of principles of exercise training, (2) the reporting of FITT components, (3) the adherence of participants to the intervention, and (4) the methodological quality of the included studies. The literature was searched for exercise studies in IM patients. Data were extracted to evaluate the application of the training principles and the reporting of, and adherence to, the exercise prescription. The Downs and Black checklist was used to assess the methodological quality of the included studies. Of the 14 included studies, four focused on resistance, two on endurance, and eight on combined training. In terms of principles of exercise training, 93% reported specificity, 50% progression and overload, and 79% initial values. Reversibility and diminishing returns were never reported. Six articles reported all FITT components in the prescription of the training, though no study described adherence to all of these components. Incomplete application of the exercise training principles and insufficient reporting of the exercise intervention prescribed and completed hamper the reproducibility of the intervention and the ability to determine the optimal dose of exercise.

  6. User and Performance Impacts from Franklin Upgrades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yun

    2009-05-10

    The NERSC flagship computer, the Cray XT4 system "Franklin", has gone through three major upgrades during the past year: a quad-core upgrade, a CLE 2.1 upgrade, and an I/O upgrade. In this paper, we discuss various aspects of the user impact of these upgrades, such as user access, user environment, and user issues. The performance impacts on the kernel benchmarks and selected application benchmarks will also be presented.

  7. Implementation of BT, SP, LU, and FT of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Schultz, Matthew; Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of Java features make it an attractive but debatable choice for High Performance Computing. We have implemented benchmarks working on a single structured grid (BT, SP, LU, and FT) in Java. The performance and scalability of the Java code show that significant improvements in Java compiler technology and in Java thread implementation are necessary for Java to compete with Fortran in HPC applications.

  8. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  9. Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring

    NASA Technical Reports Server (NTRS)

    Hun, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John

    2014-01-01

    Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic input-output structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moisture-NDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities and/or complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
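The benchmarking metric, a lagged rank cross-correlation between soil moisture and later NDVI, can be sketched in pure Python. This is a minimal Spearman-style implementation under the assumption of simple index shifting; the paper's exact windowing and normalization may differ:

```python
def rank(xs):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def lagged_rank_xcorr(soil_moisture, ndvi, lag):
    """Spearman correlation between soil moisture and NDVI `lag` steps later
    (Pearson correlation applied to the ranked series)."""
    sm = soil_moisture[:len(soil_moisture) - lag] if lag > 0 else soil_moisture
    nv = ndvi[lag:]
    return pearson(rank(sm), rank(nv))
```

Evaluating this correlation for each system component (retrieval, water balance, assimilated estimate) and for its linear analog gives the comparison the benchmarking rests on.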

  10. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1999-01-01

    Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but the task can be simplified by high-level languages and would be better automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data-parallel model) and OpenMP (based on the shared-memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. A comparison of their performance with the MPI implementation and the pros and cons of the different approaches will be discussed, along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of the techniques to realistic aerospace applications will be presented.

  11. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Hribar, M.; Waheed, A.; Yan, J.; Saini, Subhash (Technical Monitor)

    1999-01-01

    Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but this task can be simplified by high-level languages and would be better automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data-parallel model) and OpenMP (based on the shared-memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study, we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. A comparison of their performance with the MPI implementation and the pros and cons of the different approaches will be discussed, along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of the techniques to realistic aerospace applications will be presented.

  12. StirMark Benchmark: audio watermarking attacks based on lossy compression

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application-based evaluation, in this paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 Audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data, like spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms; (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.

  13. Bio-inspired benchmark generator for extracellular multi-unit recordings

    PubMed Central

    Mondragón-González, Sirenia Lizbeth; Burguière, Eric

    2017-01-01

    The analysis of multi-unit extracellular recordings of brain activity has led to the development of numerous tools, ranging from signal processing algorithms to electronic devices and applications. Currently, the evaluation and optimisation of these tools are hampered by the lack of ground-truth databases of neural signals. These databases must be parameterisable, easy to generate and bio-inspired, i.e. containing features encountered in real electrophysiological recording sessions. Towards that end, this article introduces an original computational approach to create fully annotated and parameterised benchmark datasets, generated from the summation of three components: neural signals from compartmental models and recorded extracellular spikes, non-stationary slow oscillations, and a variety of different types of artefacts. We present three application examples. (1) We reproduced in-vivo extracellular hippocampal multi-unit recordings from either tetrode or polytrode designs. (2) We simulated recordings in two different experimental conditions: anaesthetised and awake subjects. (3) Lastly, we conducted a series of simulations to study the impact of different levels of artefacts on extracellular recordings and their influence in the frequency domain. Beyond the results presented here, such a benchmark dataset generator has many applications, such as calibration, evaluation and development of both hardware and software architectures. PMID:28233819

  14. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has some substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  15. Revisiting Turbulence Model Validation for High-Mach Number Axisymmetric Compression Corner Flows

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Rumsey, Christopher L.; Huang, George P.

    2015-01-01

    Two axisymmetric shock-wave/boundary-layer interaction (SWBLI) cases are used to benchmark one- and two-equation Reynolds-averaged Navier-Stokes (RANS) turbulence models. This validation exercise was executed in the philosophy of the NASA Turbulence Modeling Resource and the AIAA Turbulence Model Benchmarking Working Group. Both SWBLI cases are from the experiments of Kussoy and Horstman for axisymmetric compression corner geometries with SWBLI-inducing flares of 20 and 30 degrees, respectively. The freestream Mach number was approximately 7. The RANS closures examined are the Spalart-Allmaras one-equation model and the Menter family of k-omega two-equation models, including the Baseline and Shear Stress Transport formulations. The Wind-US and CFL3D RANS solvers are employed to simulate the SWBLI cases. Comparisons of RANS solutions to experimental data are made for a boundary layer survey plane just upstream of the SWBLI region. In the SWBLI region, comparisons of surface pressure and heat transfer are made. The effects of inflow modeling strategy, grid resolution, grid orthogonality, turbulent Prandtl number, and code-to-code variations are also addressed.

  16. An approach to estimate body dimensions through constant body ratio benchmarks.

    PubMed

    Chao, Wei-Cheng; Wang, Eric Min-Yang

    2010-12-01

    Building a new anthropometric database is a difficult and costly job that requires considerable manpower and time. However, most designers and engineers do not know how to convert old anthropometric data into applicable new data with minimal errors and costs (Wang et al., 1999). To simplify the process of converting old anthropometric data into useful new data, this study analyzed the available data in paired body dimensions in an attempt to determine constant body ratio (CBR) benchmarks that are independent of gender and age. In total, 483 CBR benchmarks were identified and verified from 35,245 ratios analyzed. Additionally, 197 estimation formulae, taking as inputs 19 easily measured body dimensions, were built using 483 CBR benchmarks. Based on the results for 30 recruited participants, this study determined that the described approach is more accurate and cost-effective than alternative techniques. Copyright © 2010 Elsevier Ltd. All rights reserved.
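The estimation idea, deriving an unmeasured dimension from an easily measured one through a constant body ratio, can be sketched in a few lines. The ratio values and dimension names below are illustrative placeholders, not figures from the study:

```python
# Hypothetical constant-body-ratio (CBR) benchmarks: each entry maps a
# target dimension to (reference dimension, assumed constant ratio).
# Ratio values here are illustrative, not taken from the study.
CBR = {
    "arm_span": ("stature", 1.00),
    "knee_height": ("stature", 0.285),
}

def estimate(target, measured):
    """Estimate a target dimension (cm) from an easily measured one via a CBR."""
    ref, ratio = CBR[target]
    if ref not in measured:
        raise KeyError(f"need a measurement for {ref!r}")
    return ratio * measured[ref]

print(estimate("knee_height", {"stature": 170.0}))  # about 48.45 cm
```

With 483 such benchmarks over 19 easily measured dimensions, the same lookup-and-multiply step yields the study's 197 estimation formulae.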

  17. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    NASA Astrophysics Data System (ADS)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a fair measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Its main focus is to measure the adaptability of a database management system according to shifting workloads. We will give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.
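A toy generator for such shifting access patterns might look like the following; the phase count, hot-set size, and 80/20 split are illustrative assumptions, not the benchmark's actual workload model:

```python
import random

def shifting_workload(n_requests, n_items, phases=4, seed=0):
    """Generate a request stream whose hot set of items shifts between
    phases, a toy analog of shifting user access patterns."""
    rng = random.Random(seed)
    per_phase = n_requests // phases
    stream = []
    for _ in range(phases):
        # a new 10% of items becomes "hot" each phase
        hot = rng.sample(range(n_items), max(1, n_items // 10))
        for _ in range(per_phase):
            if rng.random() < 0.8:   # 80% of requests hit the hot set
                stream.append(rng.choice(hot))
            else:
                stream.append(rng.randrange(n_items))
    return stream
```

Replaying such a stream against a database and measuring how quickly throughput recovers after each phase boundary is one way to quantify adaptability.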

  18. Evaluation and optimization of virtual screening workflows with DEKOIS 2.0--a public library of challenging docking benchmark sets.

    PubMed

    Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M

    2013-06-24

    The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com.

  19. An Application-Based Performance Characterization of the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Djomehri, Jahed M.; Hood, Robert; Jin, Hoaqiang; Kiris, Cetin; Saini, Subhash

    2005-01-01

    Columbia is a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each, and currently ranked as the second-fastest computer in the world. In this paper, we present the performance characteristics of Columbia obtained on up to four computing nodes interconnected via the InfiniBand and/or NUMAlink4 communication fabrics. We evaluate floating-point performance, memory bandwidth, message passing communication speeds, and compilers using a subset of the HPC Challenge benchmarks, and some of the NAS Parallel Benchmarks including the multi-zone versions. We present detailed performance results for three scientific applications of interest to NASA, one from molecular dynamics, and two from computational fluid dynamics. Our results show that both the NUMAlink4 and the InfiniBand hold promise for application scaling to a large number of processors.

  20. A Machine-to-Machine protocol benchmark for eHealth applications - Use case: Respiratory rehabilitation.

    PubMed

    Talaminos-Barroso, Alejandro; Estudillo-Valderrama, Miguel A; Roa, Laura M; Reina-Tosina, Javier; Ortega-Ruiz, Francisco

    2016-06-01

    M2M (Machine-to-Machine) communications represent one of the main pillars of the new paradigm of the Internet of Things (IoT) and are making possible new opportunities for the eHealth business. Nevertheless, the large number of M2M protocols currently available hinders the selection of a suitable solution that satisfies the requirements that eHealth applications can demand. The first aim was to develop a tool that provides a benchmarking analysis in order to objectively select among the most relevant M2M protocols for eHealth solutions; the second, to validate the tool with a particular use case: respiratory rehabilitation. A software tool, called Distributed Computing Framework (DFC), has been designed and developed to execute the benchmarking tests and facilitate the deployment in environments with a large number of machines, with independence of the protocol and performance metrics selected. The DDS, MQTT, CoAP, JMS, AMQP and XMPP protocols were evaluated considering different specific performance metrics, including CPU usage, memory usage, bandwidth consumption, latency and jitter. The results obtained allowed us to validate a use case: respiratory rehabilitation of chronic obstructive pulmonary disease (COPD) patients in two scenarios with different types of requirements: Home-Based and Ambulatory. The results of the benchmark comparison can guide eHealth developers in the choice of M2M technologies. In this regard, the framework presented is a simple and powerful tool for the deployment of benchmark tests under specific environments and conditions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
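Two of the metrics named above, latency and jitter, can be computed from paired send/receive timestamps. The sketch below uses a common simplification (jitter as the mean absolute difference between consecutive latency samples); the DFC tool's actual metric definitions are not specified in the abstract:

```python
import statistics

def latency_stats(send_times, recv_times):
    """Mean latency and a simple jitter estimate (mean absolute difference
    of consecutive latency samples); timestamps in seconds."""
    latencies = [r - s for s, r in zip(send_times, recv_times)]
    diffs = [abs(b - a) for a, b in zip(latencies, latencies[1:])]
    return {
        "mean_latency": statistics.mean(latencies),
        "jitter": statistics.mean(diffs) if diffs else 0.0,
    }

print(latency_stats([0.0, 1.0, 2.0, 3.0], [0.10, 1.12, 2.10, 3.12]))
```

Running the same computation over message streams from each protocol under test (DDS, MQTT, CoAP, ...) gives directly comparable per-protocol figures.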

  1. Effects of C5/C6 Intervertebral Space Distraction Height on Pressure on the Adjacent Intervertebral Disks and Articular Processes and Cervical Vertebrae Range of Motion.

    PubMed

    Lu, Tingsheng; Luo, Chunshan; Ouyang, Beiping; Chen, Qiling; Deng, Zhongliang

    2018-04-25

    BACKGROUND This study aimed to investigate the association between range of motion of the cervical vertebrae and various C5/C6 intervertebral space distraction heights. MATERIAL AND METHODS The cervical vertebrae from 6 fresh adult human cadavers were used to prepare the models. Changes in C4/C5 and C6/C7 intervertebral disk pressures, articular process pressure, and range of motion of the cervical vertebrae before and after the distraction of the C5/C6 intervertebral space at benchmark heights of 100%, 120%, 140%, and 160% were tested under different exercise loads. RESULTS The pressure on the adjacent intervertebral disks was highest with the standing upright position before distraction, varied with different positions of the specimens and distraction heights after distraction, and was closest to that before distraction at a distraction height of 120% (P<0.05). The pressure of the adjacent articular processes was highest with left and right rotations before distraction, varied with different positions of the specimens and distraction heights after distraction, and was lowest under the same exercise load with different positions at a distraction height of 120% (P<0.05). The ranges of motion of the cervical vertebrae and intervertebral disks were largest without distraction and at a distraction height of 120% after distraction, respectively (P<0.05). CONCLUSIONS When removing the C5/C6 intervertebral disk and implanting an intervertebral bone graft, a benchmark height of 120% had little influence on the pressure of the adjacent intervertebral disks and articular processes and range of motion of the cervical vertebrae and is therefore an appropriate intervertebral space distraction height.

  2. Effects of C5/C6 Intervertebral Space Distraction Height on Pressure on the Adjacent Intervertebral Disks and Articular Processes and Cervical Vertebrae Range of Motion

    PubMed Central

    Lu, Tingsheng; Luo, Chunshan; Ouyang, Beiping; Chen, Qiling

    2018-01-01

    Background This study aimed to investigate the association between range of motion of the cervical vertebrae and various C5/C6 intervertebral space distraction heights. Material/Methods The cervical vertebrae from 6 fresh adult human cadavers were used to prepare the models. Changes in C4/C5 and C6/C7 intervertebral disk pressures, articular process pressure, and range of motion of the cervical vertebrae before and after the distraction of the C5/C6 intervertebral space at benchmark heights of 100%, 120%, 140%, and 160% were tested under different exercise loads. Results The pressure on the adjacent intervertebral disks was highest with the standing upright position before distraction, varied with different positions of the specimens and distraction heights after distraction, and was closest to that before distraction at a distraction height of 120% (P<0.05). The pressure of the adjacent articular processes was highest with left and right rotations before distraction, varied with different positions of the specimens and distraction heights after distraction, and was lowest under the same exercise load with different positions at a distraction height of 120% (P<0.05). The ranges of motion of the cervical vertebrae and intervertebral disks were largest without distraction and at a distraction height of 120% after distraction, respectively (P<0.05). Conclusions When removing the C5/C6 intervertebral disk and implanting an intervertebral bone graft, a benchmark height of 120% had little influence on the pressure of the adjacent intervertebral disks and articular processes and range of motion of the cervical vertebrae and is therefore an appropriate intervertebral space distraction height. PMID:29693646

  3. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  4. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  5. Bin packing problem solution through a deterministic weighted finite automaton

    NASA Astrophysics Data System (ADS)

    Zavala-Díaz, J. C.; Pérez-Ortega, J.; Martínez-Rebollar, A.; Almanza-Ortega, N. N.; Hidalgo-Reyes, M.

    2016-06-01

    In this article, the solution of the one-dimensional bin packing problem through a weighted finite automaton is presented. The construction of the automaton and its application to three different instances are presented: one synthetic data set and two benchmarks, N1C1W1_A.BPP belonging to data set Set_1 and BPP13.BPP belonging to hard28. The optimal solution of the synthetic data is obtained. In the first benchmark, the solution obtained is one more container than the ideal number of containers, and in the second benchmark the solution is two more containers than the ideal solution (approximately 2.5%). The runtime in all three cases was less than one second.
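For context, a classic baseline heuristic for one-dimensional bin packing is first-fit decreasing: sort items largest first and place each in the first bin that still has room. This is a reference point only, not the article's automaton method:

```python
def first_fit_decreasing(items, capacity):
    """First-fit-decreasing heuristic for one-dimensional bin packing.
    Returns the list of bins, each a list of item sizes."""
    free = []     # remaining capacity of each open bin
    packing = []  # items placed in each bin
    for item in sorted(items, reverse=True):
        for i, room in enumerate(free):
            if item <= room:
                free[i] -= item
                packing[i].append(item)
                break
        else:
            # no open bin fits: open a new one
            free.append(capacity - item)
            packing.append([item])
    return packing

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
# -> [[8, 2], [4, 4, 1, 1]]
```

Heuristics like this give quick upper bounds on the container count, against which an exact or automaton-based method can be compared.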

  6. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Good agreement between the results obtained from the automated propagation analysis and the benchmark results could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  7. Dedicated cardiac rehabilitation wearable sensor and its clinical potential.

    PubMed

    Lee, Hooseok; Chung, Heewon; Ko, Hoon; Jeong, Changwon; Noh, Se-Eung; Kim, Chul; Lee, Jinseok

    2017-01-01

    We describe a wearable sensor developed for cardiac rehabilitation (CR) exercise. To effectively guide CR exercise, the dedicated CR wearable sensor (DCRW) automatically recommends the exercise intensity to the patient by comparing the heart rate (HR) measured in real time with a predefined target heart rate zone (THZ) during exercise. The CR exercise includes three periods: pre-exercise, exercise with intensity guidance, and post-exercise. In the pre-exercise period, information such as the THZ, exercise type, exercise stage order, and duration of each stage is set up through a smartphone application we developed for iPhones and Android devices. The set-up information is transmitted to the DCRW via Bluetooth communication. In the period of exercise with intensity guidance, the DCRW continuously estimates HR using the pulse signal reflected from the wrist. To achieve accurate HR measurements, we used multichannel photo sensors to increase the chances of acquiring a clean signal, and subsequently applied singular value decomposition (SVD) for de-noising. The proposed DCRW method yielded a lower median and variance of HR RMSEs than both a single-channel method and a template-based multiple-channel method over the entire exercise stage. In the post-exercise period, the DCRW transmits all the measured HR data to the smartphone application via Bluetooth communication, and the patient can monitor his/her own exercise history.
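
    The SVD de-noising idea can be sketched in a few lines. This is a simplified illustration, not the authors' exact pipeline: four synthetic photo-sensor channels share one pulse waveform with channel-specific gains, and the rank-1 SVD approximation recovers the shared component while suppressing channel-independent noise.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, hr_hz = 100, 10, 1.2          # 100 Hz sampling, 72 bpm pulse
t = np.arange(fs * dur) / fs
pulse = np.sin(2 * np.pi * hr_hz * t)  # idealized common pulse waveform

# Four photo-sensor channels: same pulse, channel-specific gain + noise
gains = np.array([1.0, 0.8, 1.2, 0.9])
clean = np.outer(gains, pulse)
X = clean + 0.5 * rng.standard_normal(clean.shape)

# SVD de-noising: the rank-1 approximation keeps the component shared
# across channels and discards most channel-independent noise.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
denoised = s[0] * np.outer(U[:, 0], Vt[0])

noise_before = np.mean((X - clean) ** 2)
noise_after = np.mean((denoised - clean) ** 2)
print(noise_after < noise_before)  # rank-1 estimate is closer to truth
```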

  8. [Application of the 6-Minute Walking Test and Shuttle Walking Test in the Exercise Tests of Patients With COPD].

    PubMed

    Ho, Chiung-Fang; Maa, Suh-Hwa

    2016-08-01

    Exercise training improves the management of stable chronic obstructive pulmonary disease (COPD). COPD patients benefit from exercise training programs through improved peak VO2 values; decreased dyspnea, fatigue, hospital admissions, and mortality; and increased exercise capacity and health-related quality of life (HRQOL). COPD is often associated with impaired exercise tolerance. About 51% of patients have a limited capacity for normal activity, which often further degrades exercise capacity, creating a vicious circle. Exercise testing is highly recommended to assess a patient's individual functions and limitations in order to determine the optimal level of training intensity prior to initiating an exercise-training regimen. The outcomes of exercise testing provide a powerful indicator of prognosis in COPD patients. The six-minute walking test (6MWT) and the incremental shuttle-walking test (ISWT) are widely used in exercise testing to measure a patient's exercise ability by walking distance. While nursing-related articles published in Taiwan frequently cite and use the 6MWT to assess exercise capacity in COPD patients, the ISWT is rarely used. This paper introduces the testing method, strengths and weaknesses, and application of the two tests in order to provide clinical guidelines for assessing the current exercise capacity of COPD patients.

  9. Vodcasts and Active-Learning Exercises in a “Flipped Classroom” Model of a Renal Pharmacotherapy Module

    PubMed Central

    Fox, Jeremy

    2012-01-01

    Objective. To implement a “flipped classroom” model for a renal pharmacotherapy topic module and assess the impact on pharmacy students’ performance and attitudes. Design. Students viewed vodcasts (video podcasts) of lectures prior to the scheduled class and then discussed interactive cases of patients with end-stage renal disease in class. A process-oriented guided inquiry learning (POGIL) activity was developed and implemented that complemented, summarized, and allowed for application of the material contained in the previously viewed lectures. Assessment. Students’ performance on the final examination significantly improved compared to that of students the previous year who completed the same module in a traditional classroom setting. Students’ opinions of the POGIL activity and the flipped classroom instructional model were mostly positive. Conclusion. Implementing a flipped classroom model to teach a renal pharmacotherapy module resulted in improved student performance and favorable student perceptions about the instructional approach. Some of the factors that may have contributed to students’ improved scores included: student-mediated contact with the course material prior to classes, benchmark and formative assessments administered during the module, and the interactive class activities. PMID:23275661

  10. Using relative survival measures for cross-sectional and longitudinal benchmarks of countries, states, and districts: the BenchRelSurv- and BenchRelSurvPlot-macros

    PubMed Central

    2013-01-01

    Background The objective of screening programs is to discover life-threatening diseases in as many patients as early as possible and to increase the chance of survival. To be able to compare aspects of health care quality, methods are needed for benchmarking that allow comparisons on various health care levels (regional, national, and international). Objectives Applications and extensions of algorithms can be used to link the information on disease phases with relative survival rates and to consolidate them in composite measures. The application of the developed SAS-macros will give results for benchmarking of health care quality. Data examples for breast cancer care are given. Methods A reference scale (expected, E) must be defined at a time point at which all benchmark objects (observed, O) are measured. All indices are defined as O/E, whereby the extended standardized screening-index (eSSI), the standardized case-mix-index (SCI), the work-up-index (SWI), and the treatment-index (STI) address different health care aspects. The composite measures called overall-performance evaluation (OPE) and relative overall performance indices (ROPI) link the individual indices differently for cross-sectional or longitudinal analyses. Results The algorithms allow a time-point and time-interval associated comparison of the benchmark objects in the indices eSSI, SCI, SWI, STI, OPE, and ROPI. Comparisons between countries, states, and districts are possible. As an example, comparisons between two countries are made. The success of early detection and screening programs as well as clinical health care quality for breast cancer can be demonstrated while accounting for the population's background mortality. Conclusions If external quality assurance programs and benchmark objects are based on population-based and corresponding demographic data, information on disease phase and relative survival rates can be combined into indices which offer approaches for comparative analyses between benchmark objects. Conclusions on screening programs and health care quality are possible. The macros can be transferred to other diseases if a disease-specific phase scale of prognostic value (e.g., stage) exists. PMID:23316692
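
    The O/E index idea described in the Methods section can be sketched as follows. All figures below are hypothetical; the actual macros operate on population-based registry data.

```python
# Each index is observed / expected (O/E): values > 1 outperform the
# reference scale, values < 1 underperform.
def oe_index(observed, expected):
    if expected <= 0:
        raise ValueError("expected must be positive")
    return observed / expected

# Hypothetical 5-year relative survival by disease phase for one region
observed = {"early": 0.97, "intermediate": 0.85, "late": 0.40}
expected = {"early": 0.95, "intermediate": 0.80, "late": 0.45}

indices = {phase: oe_index(observed[phase], expected[phase])
           for phase in observed}

# A composite measure can weight the phase indices, e.g. by case share
weights = {"early": 0.5, "intermediate": 0.3, "late": 0.2}
composite = sum(weights[p] * indices[p] for p in indices)
print({p: round(v, 3) for p, v in indices.items()}, round(composite, 3))
```

The weighting scheme above is only one possible way to consolidate phase-specific indices; the paper's OPE and ROPI measures link the individual indices differently for cross-sectional and longitudinal analyses.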

  11. High-performance conjugate-gradient benchmark: A new metric for ranking high-performance computing systems

    DOE PAGES

    Dongarra, Jack; Heroux, Michael A.; Luszczek, Piotr

    2015-08-17

    Here, we describe a new high-performance conjugate-gradient (HPCG) benchmark. HPCG is composed of computations and data-access patterns commonly found in scientific applications. HPCG strives for a better correlation to existing codes from the computational science domain and to be representative of their performance. Furthermore, HPCG is meant to help drive computer system design and implementation in directions that will better support future performance improvement.
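
    The conjugate-gradient iteration at the heart of HPCG can be sketched as follows. This is the textbook dense CG, shown only to make the kernel concrete; HPCG itself benchmarks a preconditioned sparse implementation with prescribed data structures and problem setup.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Textbook CG for a symmetric positive-definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # converged on residual norm
            break
        p = r + (rs_new / rs) * p  # conjugate new direction
        rs = rs_new
    return x

# 1D Poisson matrix (tridiagonal, SPD), a typical CG test operator
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b) < 1e-6)
```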

  12. Determining personal talents and behavioral styles of applicants to surgical training: a new look at an old problem, part II.

    PubMed

    Bell, Richard M; Fann, Stephen A; Morrison, James E; Lisk, J Ryan

    2012-01-01

    The selection of applicants for training in any particular surgical program is an imprecise exercise. Despite the abundance of information on particular candidates, many of the fundamental qualities that are associated with success for the surgical trainee cannot be identified by review of the applicants' grades, scores, letters of recommendation, or personal statement, or even from the interview process. We sought a method to determine the behavior, motivation, and values possessed by applicants that coincided with traits of our current residents who had demonstrated success in our program. The methods have been described in detail in Part I.(1) Briefly, the individual applicant's personal talent report was used by the outside consultant to develop a rank-ordered list, which was compared with the rank list developed by the Department in the traditional fashion and with the newly developed job benchmark. Five hundred thirty-five applications were received and interviews were offered to 112 (21%) applicants. Seventy-five on-line surveys were completed by the 77 applicants who were interviewed. The consultant was able to identify important personal talents, elements of motivation, and behavioral styles that were not gleaned from the application or the interview process, some of which prompted a revision of our final ranking order.(1) This report discusses the results of the motivational analysis and of the Personal Talents Skills Inventory. A strong motivation for the theoretical (knowledge) and for social commitment (desire to help others) emerged as important applicant characteristics. Clear views of the external world and of self, as well as a sense of satisfaction with the applicant's vision of the future, are positively associated with success in our program. The ability to identify unique behavioral, motivational, and personal talents that applicants bring to the program, not identifiable from the traditional application and interview process, has allowed us to determine which applicants were a good match for the structure and culture of our program. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  13. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  14. Exercise-based cardiac rehabilitation in twelve European countries results of the European cardiac rehabilitation registry.

    PubMed

    Benzer, Werner; Rauch, Bernhard; Schmid, Jean-Paul; Zwisler, Ann Dorthe; Dendale, Paul; Davos, Constantinos H; Kouidi, Evangelia; Simon, Attila; Abreu, Ana; Pogosova, Nana; Gaita, Dan; Miletic, Bojan; Bönner, Gerd; Ouarrak, Taoufik; McGee, Hannah

    2017-02-01

    Results from the EuroCaReD study should serve as a benchmark to improve guideline adherence and treatment quality of cardiac rehabilitation (CR) in Europe. Data from 2,054 CR patients in 12 European countries were derived from 69 centres; 76% were male. Indications for CR differed between countries, being predominantly ACS in Switzerland (79%), Portugal (62%) and Germany (61%); elective PCI in Greece (37%), Austria (36%) and Spain (32%); and CABG in Croatia and Russia (36%). A minority of patients presented with chronic heart failure (4%). At the start of CR, most patients were already under medication according to current guidelines for the treatment of CV risk factors. A wide range of CR programme designs was found (duration 3 to 24 weeks; total number of sessions 30 to 196). Patient programme adherence after admission was high (85%). With the caveat that eCRF follow-up data exchange remained incomplete, patient CV risk profiles showed only small improvements. CR success, defined as an increase in exercise capacity of >25 W, was significantly higher in young patients and those who were employed. Results differed by country. After CR, only 9% of patients were admitted to a structured post-CR programme. Clinical characteristics of CR patients, indications and programmes in Europe differ, and guideline adherence is poor. Thus, patient selection and CR programme designs should become more evidence-based. Routine eCRF documentation of CR results throughout European countries was not sufficient in this first application because of incomplete data exchange. Therefore, better adherence of CR centres to minimal routine clinical standards is required. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Clinical Applications for Exercise.

    ERIC Educational Resources Information Center

    Goldstein, David

    1989-01-01

    Patients with chronic conditions such as coronary artery disease, hypertension, diabetes, and obesity might benefit from prescribed exercise. Although exercise does not reverse pathologic changes, it may play a role in disease management. (JD)

  16. Patients' Perspectives on and Experiences of Home Exercise Programmes Delivered with a Mobile Application.

    PubMed

    Abramsky, Hillary; Kaur, Puneet; Robitaille, Mikale; Taggio, Leanna; Kosemetzky, Paul K; Foster, Hillary; Gibson Bmr Pt MSc PhD, Barbara E; Bergeron, Maggie; Jachyra, Patrick

    2018-01-01

    Purpose: We explored patients' perspectives on home exercise programmes (HEPs) and their experiences using a mobile application designed to facilitate home exercise. Method: Data were generated using qualitative, semi-structured, face-to-face interviews with 10 participants who were receiving outpatient physiotherapy. Results: Establishing a therapeutic partnership between physiotherapists and patients enabled therapists to customize the HEPs to the patients' lifestyles and preferences. Analysis suggests that using the mobile application improved participants' ability to integrate the HEP into their daily life and was overwhelmingly preferred to traditional paper handouts. Conclusions: The results suggest that efforts to engage patients in HEPs need to take their daily lives into account. To move in this direction, sample exercise prescription questions are offered. Mobile applications do not replace the clinical encounter, but they can be an effective tool and an extension of delivering personalized HEPs in an existing therapeutic partnership.

  17. Half-Cell RF Gun Simulations with the Electromagnetic Particle-in-Cell Code VORPAL

    NASA Astrophysics Data System (ADS)

    Paul, K.; Dimitrov, D. A.; Busby, R.; Bruhwiler, D. L.; Smithe, D.; Cary, J. R.; Kewisch, J.; Kayran, D.; Calaga, R.; Ben-Zvi, I.

    2009-01-01

    We have simulated Brookhaven National Laboratory's half-cell superconducting RF gun design for a proposed high-current ERL using the three-dimensional, electromagnetic particle-in-cell code VORPAL. VORPAL computes the fully self-consistent electromagnetic fields produced by the electron bunches, meaning that it accurately models space-charge effects as well as bunch-to-bunch beam loading effects and the effects of higher-order cavity modes, though these are beyond the scope of this paper. We compare results from VORPAL to the well-established space-charge code PARMELA, using RF fields produced by SUPERFISH, as a benchmarking exercise in which the two codes should agree well.

  18. Evaluation of Inelastic Constitutive Models for Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1983-01-01

    The influence of inelastic material models on computed stress-strain states, and therefore predicted lives, was studied for thermomechanically loaded structures. Nonlinear structural analyses were performed on a fatigue specimen which was subjected to thermal cycling in fluidized beds and on a mechanically load cycled benchmark notch specimen. Four incremental plasticity creep models (isotropic, kinematic, combined isotropic-kinematic, combined plus transient creep) were exercised. Of the plasticity models, kinematic hardening gave results most consistent with experimental observations. Life predictions using the computed strain histories at the critical location with a Strainrange Partitioning approach considerably overpredicted the crack initiation life of the thermal fatigue specimen.

  19. Anisn-Dort Neutron-Gamma Flux Intercomparison Exercise for a Simple Testing Model

    NASA Astrophysics Data System (ADS)

    Boehmer, B.; Konheiser, J.; Borodkin, G.; Brodkin, E.; Egorov, A.; Kozhevnikov, A.; Zaritsky, S.; Manturov, G.; Voloschenko, A.

    2003-06-01

    The ability of transport codes ANISN, DORT, ROZ-6, MCNP and TRAMO, as well as nuclear data libraries BUGLE-96, ABBN-93, VITAMIN-B6 and ENDF/B-6 to deliver consistent gamma and neutron flux results was tested in the calculation of a one-dimensional cylindrical model consisting of a homogeneous core and an outer zone with a single material. Model variants with H2O, Fe, Cr and Ni in the outer zones were investigated. The results are compared with MCNP-ENDF/B-6 results. Discrepancies are discussed. The specified test model is proposed as a computational benchmark for testing calculation codes and data libraries.

  20. Bloomington Writing Assessment 1977; Student Exercise, Teacher Directions, Scoring.

    ERIC Educational Resources Information Center

    Bloomington Public Schools, MN.

    This booklet contains the 14 exercises that are used in the Bloomington, Minnesota, school system's writing assessment program. Depending on their applicability, the exercises may be used to assess the writing performance of fourth-, eighth-, or eleventh-grade students. Thirteen of the exercises are from the National Assessment of Educational…

  1. Benchmarking of neutron production of heavy-ion transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, I.; Ronningen, R. M.; Heilbronn, L.

    Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  2. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  3. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  4. The application of a Web-geographic information system for improving urban water cycle modelling.

    PubMed

    Mair, M; Mikovits, C; Sengthaler, M; Schöpf, M; Kinzel, H; Urich, C; Kleidorfer, M; Sitzenfrei, R; Rauch, W

    2014-01-01

    Research in urban water management has experienced a transition from traditional model applications to modelling water cycles as an integrated part of urban areas. This includes the interlinking of models from many research areas (e.g. urban development, socio-economy, urban water management). The integration and simulation are realized in newly developed frameworks (e.g. DynaMind and OpenMI) and often assume a high level of programming knowledge. This work presents a Web-based urban water management modelling platform which simplifies the setup and usage of complex integrated models. The platform is demonstrated with a small application example on a case study within the Alpine region. The model used is a DynaMind model benchmarking the impact of newly connected catchments on the flooding behaviour of an existing combined sewer system. As a result, the workflow of the user within a Web browser is demonstrated and benchmark results are shown. The presented platform hides implementation-specific aspects behind Web-service-based technologies so that users can focus on their main aim: urban water management modelling and benchmarking. Moreover, this platform offers centralized data management, automatic software updates, and access to high-performance computers from desktop computers and mobile devices.

  5. Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; anMey, Dieter; Hatay, Ferhat F.

    2003-01-01

    With the advent of parallel hardware and software technologies users are faced with the challenge to choose a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is the best will depend on the nature of the given problem, the hardware architecture, and the available software. In this study we will compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.

  6. Counseling through Physical Fitness and Exercise.

    ERIC Educational Resources Information Center

    Carlson, Jon

    1990-01-01

    Discusses health, emotional, cognitive, social, and behavioral benefits of physical exercise. Discusses applications of physical exercise and diet in counseling children. Concludes counselors need to develop physical fitness levels and diets for their clients to model. (ABL)

  7. Analysis of 2D Torus and Hub Topologies of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Pedretti, Kevin T.; Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    A variety of different network technologies and topologies are currently being evaluated as part of the Whitney Project. This paper reports on the implementation and performance of a Fast Ethernet network configured in a 4x4 2D torus topology in a testbed cluster of 'commodity' Pentium Pro PCs. Several benchmarks were used for performance evaluation: an MPI point to point message passing benchmark, an MPI collective communication benchmark, and the NAS Parallel Benchmarks version 2.2 (NPB2). Our results show that for point to point communication on an unloaded network, the hub and 1 hop routes on the torus have about the same bandwidth and latency. However, the bandwidth decreases and the latency increases on the torus for each additional route hop. Collective communication benchmarks show that the torus provides roughly four times more aggregate bandwidth and eight times faster MPI barrier synchronizations than a hub based network for 16 processor systems. Finally, the SOAPBOX benchmarks, which simulate real-world CFD applications, generally demonstrated substantially better performance on the torus than on the hub. In the few cases the hub was faster, the difference was negligible. In total, our experimental results lead to the conclusion that for Fast Ethernet networks, the torus topology has better performance and scales better than a hub based network.
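
    The hop-count argument behind the torus results can be reproduced with a short calculation: on an n x n wraparound torus, the minimum hop count between two nodes is the sum of the wrapped per-dimension distances, whereas on a hub every pair is effectively one logical hop apart. A sketch for the 4x4 case (illustrative only, not the paper's measurement code):

```python
from itertools import product

def torus_hops(src, dst, n=4):
    """Minimum hop count between two nodes on an n x n wraparound torus."""
    dx = abs(src[0] - dst[0])
    dy = abs(src[1] - dst[1])
    # Each dimension can route the short way around the ring
    return min(dx, n - dx) + min(dy, n - dy)

nodes = list(product(range(4), repeat=2))          # 16 nodes
pairs = [(a, b) for a in nodes for b in nodes if a != b]
avg = sum(torus_hops(a, b) for a, b in pairs) / len(pairs)
max_hops = max(torus_hops(a, b) for a, b in pairs)
print(avg, max_hops)  # average and worst-case hop counts on the torus
```

For the 4x4 torus the average distance is about 2.13 hops with a worst case of 4, which is consistent with the observation that bandwidth decreases and latency increases with each additional route hop, while aggregate bandwidth benefits from the many parallel links.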

  8. Interactive visual optimization and analysis for RFID benchmarking.

    PubMed

    Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C

    2009-01-01

    Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.

  9. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k-eff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for gaps in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of the methods described in a companion paper.
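
    The propagation step described above, in which sensitivity coefficients and cross-section covariance data combine to give a response uncertainty, follows the so-called sandwich rule. A sketch with illustrative numbers, not actual SCALE/TSUNAMI data:

```python
import numpy as np

# Hypothetical relative sensitivity coefficients of a response (e.g.
# k-eff) to three nuclide-reaction cross sections, and a relative
# covariance matrix for those cross sections. Values are illustrative.
S = np.array([0.30, -0.12, 0.05])
cov = np.array([
    [0.0025, 0.0005, 0.0],
    [0.0005, 0.0016, 0.0002],
    [0.0,    0.0002, 0.0049],
])

# Sandwich rule: relative variance of the response = S * cov * S^T
rel_var = S @ cov @ S
rel_std = np.sqrt(rel_var)  # relative standard deviation of the response
print(round(100 * rel_std, 2), "% relative uncertainty")
```

The same quadratic form, applied with experiment and application sensitivities together, is what produces the correlation coefficients used to judge how applicable a benchmark experiment is to a given safety application.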

  10. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations.
The prospective companies represent large-scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)

  11. Genetic testing for exercise prescription and injury prevention: AIS-Athlome consortium-FIMS joint statement.

    PubMed

    Vlahovich, Nicole; Hughes, David C; Griffiths, Lyn R; Wang, Guan; Pitsiladis, Yannis P; Pigozzi, Fabio; Bachl, Norbert; Eynon, Nir

    2017-11-14

    There has been considerable growth in basic knowledge and understanding of how genes influence the response to exercise training and predisposition to injuries and chronic diseases. On the basis of this knowledge, clinical genetic tests may in the future allow the personalisation and optimisation of physical activity, thus providing an avenue for increased efficiency of exercise prescription for health and disease. This review provides an overview of the current status of genetic testing for the purposes of exercise prescription and injury prevention. There are a variety of potential uses for genetic testing, including identification of risks associated with participation in sport and understanding individual response to particular types of exercise. However, many challenges remain before genetic testing has evidence-based practical applications, including the adoption of international standards for genomics research, as well as resistance against the agendas driven by direct-to-consumer genetic testing companies. Here we propose a way forward to develop an evidence-based approach to support genetic testing for exercise prescription and injury prevention. Based on current knowledge, there is no current clinical application for genetic testing in the area of exercise prescription and injury prevention; however, the necessary steps are outlined for the development of evidence-based clinical applications involving genetic testing.

  12. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research.
Making the results from routine performance tests readily accessible will help advance a more transparent model evaluation process.
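Routine benchmarking of the kind described usually reduces a model field to a small set of summary statistics computed against a reference dataset. As an illustration only (not code from any CMIP tool), an area-weighted RMSE over a latitude-longitude grid might look like the sketch below; the fields, grid, and function name are invented.

```python
import numpy as np

def area_weighted_rmse(model, ref, lat):
    """RMSE of a model field against a reference, weighted by cos(latitude)
    so that grid cells near the poles do not dominate the score."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model)
    err2 = (model - ref) ** 2
    return float(np.sqrt(np.sum(w * err2) / np.sum(w)))

# Toy 3x4 lat-lon fields (illustrative values, not real model output)
lat = np.array([-45.0, 0.0, 45.0])
ref = np.ones((3, 4)) * 288.0                     # "observed" climatology, K
model = ref + np.array([[1.0], [-0.5], [0.25]])   # latitude-dependent bias

print("area-weighted RMSE (K):", area_weighted_rmse(model, ref, lat))
```

Tracking a handful of such scalar scores across model versions is what makes the "routine monitoring" described above feasible.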

  13. State of practice and emerging application of analytical techniques of nuclear forensic analysis: highlights from the 4th Collaborative Materials Exercise of the Nuclear Forensics International Technical Working Group (ITWG)

    DOE PAGES

    Schwantes, Jon M.; Marsden, Oliva; Pellegrini, Kristi L.

    2016-09-16

    The Nuclear Forensics International Technical Working Group (ITWG) recently completed its fourth Collaborative Materials Exercise (CMX-4) in the 21 year history of the Group. This was also the largest materials exercise to date, with participating laboratories from 16 countries or international organizations. Moreover, exercise samples (including three separate samples of low enriched uranium oxide) were shipped as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. In all, over 30 analytical techniques were applied to characterize exercise materials, ten of which were applied to ITWG exercises for the first time. We provide an objective review of the state of practice and emerging application of analytical techniques of nuclear forensic analysis based upon the outcome of this most recent exercise.

  14. State of practice and emerging application of analytical techniques of nuclear forensic analysis: highlights from the 4th Collaborative Materials Exercise of the Nuclear Forensics International Technical Working Group (ITWG)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwantes, Jon M.; Marsden, Oliva; Pellegrini, Kristi L.

    The Nuclear Forensics International Technical Working Group (ITWG) recently completed its fourth Collaborative Materials Exercise (CMX-4) in the 21 year history of the Group. This was also the largest materials exercise to date, with participating laboratories from 16 countries or international organizations. Moreover, exercise samples (including three separate samples of low enriched uranium oxide) were shipped as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. In all, over 30 analytical techniques were applied to characterize exercise materials, ten of which were applied to ITWG exercises for the first time. We provide an objective review of the state of practice and emerging application of analytical techniques of nuclear forensic analysis based upon the outcome of this most recent exercise.

  15. Attacks, applications, and evaluation of known watermarking algorithms with Checkmark

    NASA Astrophysics Data System (ADS)

    Meerwald, Peter; Pereira, Shelby

    2002-04-01

    The Checkmark benchmarking tool was introduced to provide a framework for application-oriented evaluation of watermarking schemes. In this article we introduce new attacks and applications into the existing Checkmark framework. In addition to describing new attacks and applications, we also compare the performance of some well-known watermarking algorithms (proposed by Bruyndonckx, Cox, Fridrich, Dugad, Kim, Wang, Xia, Xie, Zhu and Pereira) with respect to the Checkmark benchmark. In particular, we consider the non-geometric application, which contains tests that do not change the geometry of the image. This attack constraint is artificial yet important for research purposes, since a number of algorithms may be interesting but would score poorly with respect to specific applications simply because geometric compensation has not been incorporated. We note, however, that with the help of image registration, even research algorithms that do not have counter-measures against geometric distortion -- such as a template or reference watermark -- can be evaluated. In the first version of the Checkmark benchmarking program, application-oriented evaluation was introduced, along with many new attacks not already considered in the literature. A second goal of this paper is to introduce new attacks and new applications into the Checkmark framework. In particular, we introduce the following new applications: video frame watermarking, medical imaging and watermarking of logos. Video frame watermarking includes low compression attacks and distortions which warp the edges of the video, as well as general projective transformations which may result from someone filming the screen at a cinema. With respect to medical imaging, only small distortions are considered, and furthermore it is essential that no distortions are present at embedding. Finally, for logos, we consider images of small size, and particularly compression, scaling, aspect ratio and other small distortions.
The challenge of watermarking logos is essentially that of watermarking a small and typically simple image. With respect to new attacks, we consider subsampling followed by interpolation, as well as dithering and thresholding, which both yield a binary image.
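Two of the new attacks named above, subsampling followed by interpolation and thresholding to a binary image, can be sketched on a grayscale array. This illustrates the attack idea only, not the Checkmark implementation; the image, factor, and threshold are invented for the example.

```python
import numpy as np

def subsample_interpolate(img, factor=2):
    """Subsample-then-interpolate attack: drop pixels, then rebuild the
    original resolution by nearest-neighbour interpolation. High-frequency
    watermark components are lost in the round trip."""
    small = img[::factor, ::factor]
    up = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return up[:img.shape[0], :img.shape[1]]

def threshold_attack(img, t=128):
    """Thresholding attack: reduce the image to a binary image, destroying
    low-amplitude (e.g. spread-spectrum) watermark modulation."""
    return (img >= t).astype(np.uint8) * 255

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(np.uint8)  # stand-in "watermarked" image
attacked = subsample_interpolate(img)
binary = threshold_attack(img)
print(attacked.shape, sorted(np.unique(binary).tolist()))
```

A detector is then run on `attacked` or `binary` to score how well the watermark survives each attack.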

  16. 7 CFR 1700.107 - Considerations relevant to the exercise of SUTA discretionary provisions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Considerations relevant to the exercise of SUTA... Trust Areas § 1700.107 Considerations relevant to the exercise of SUTA discretionary provisions. (a) In... applicants as a means to exercise a discretionary authority under this subpart. (2) The Administrator may...

  17. 7 CFR 1700.107 - Considerations relevant to the exercise of SUTA discretionary provisions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Considerations relevant to the exercise of SUTA... Trust Areas § 1700.107 Considerations relevant to the exercise of SUTA discretionary provisions. (a) In... applicants as a means to exercise a discretionary authority under this subpart. (2) The Administrator may...

  18. 78 FR 24225 - Exercise of Authority Under the Immigration and Nationality Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-24

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary Exercise of Authority Under the... the particular applicant meets each of the criteria set forth above. This exercise of authority may be... subject to it. Any determination made under this exercise of authority as set out above can inform but...

  19. The General Adaptation Syndrome: Potential misapplications to resistance exercise.

    PubMed

    Buckner, Samuel L; Mouser, J Grant; Dankel, Scott J; Jessee, Matthew B; Mattocks, Kevin T; Loenneke, Jeremy P

    2017-11-01

    Within the resistance training literature, one of the most commonly cited tenets with respect to exercise programming is the "General Adaptation Syndrome" (GAS). The GAS is cited as a central theory behind the periodization of resistance exercise. However, an examination of the original stress research by Hans Selye suggests that the application of the GAS to resistance exercise may not be appropriate. The purpose of this review was to examine the original work of Hans Selye, as well as the original papers through which the GAS was established as a central theory for periodized resistance exercise. We conducted a review of Selye's work on the GAS, as well as the foundational papers through which this concept was applied to resistance exercise. The work of Hans Selye focused on the universal physiological stress responses noted upon exposure to toxic levels of a variety of pharmacological agents and stimuli. The extrapolations that have been made to resistance exercise appear loosely based on this concept and may not be an appropriate basis for application of the GAS to resistance exercise. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  20. Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Gulliford, Jim

    2016-09-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the international nuclear data community at ND2013. Since ND2013, integral benchmark data that are available for nuclear data testing have continued to increase. The status of the international benchmark efforts and the latest contributions to integral nuclear data for testing are discussed. Select benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2013 are highlighted. The 2015 edition of the ICSBEP Handbook now contains 567 evaluations with benchmark specifications for 4,874 critical, near-critical, or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points apiece, and 207 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. The 2015 edition of the IRPhEP Handbook contains data from 143 different experimental series that were performed at 50 different nuclear facilities. Currently 139 of the 143 evaluations are published as approved benchmarks, with the remaining four evaluations published in draft format only. Measurements found in the IRPhEP Handbook include criticality, buckling and extrapolation length, spectral characteristics, reactivity effects, reactivity coefficients, kinetics, reaction-rate distributions, power distributions, isotopic compositions, and/or other miscellaneous types of measurements for various types of reactor systems. Annual technical review meetings for both projects were held in April 2016; additional approved benchmark evaluations will be included in the 2016 editions of these handbooks.

  1. Application of exercise ECG stress test in the current high cost modern-era healthcare system.

    PubMed

    Vaidya, Gaurang Nandkishor

    The exercise electrocardiogram (ECG) test boasts wide availability, low resource intensity, low cost and the absence of radiation. In the presence of a normal baseline ECG, an exercise ECG test is able to generate a reliable and reproducible result almost comparable to Technetium-99m sestamibi perfusion imaging. Exercise ECG changes, when combined with other clinical parameters obtained during the test, have the potential to allow effective redistribution of scarce resources by excluding low-risk patients with significant accuracy. As we look towards a future of rising healthcare costs, increased prevalence of cardiovascular disease and the need for proper allocation of limited resources, the exercise ECG test offers low-cost, vital and reliable disease interpretation. This article highlights the physiology of the exercise ECG test, patient selection and effective interpretation; describes previously reported scores; and discusses their clinical application in today's clinical practice. Copyright © 2017. Published by Elsevier B.V.
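Among the previously reported scores the article refers to, one widely used example is the Duke Treadmill Score, which combines exercise duration with ST-segment deviation and an angina index. The sketch below assumes the standard published formula (Bruce-protocol minutes minus 5 times ST deviation minus 4 times angina index) and its conventional risk bands; the function names are our own.

```python
def duke_treadmill_score(exercise_minutes, st_deviation_mm, angina_index):
    """Duke Treadmill Score: minutes on the Bruce protocol
    - 5 * maximal ST-segment deviation (mm)
    - 4 * angina index (0 = none, 1 = non-limiting, 2 = exercise-limiting)."""
    return exercise_minutes - 5 * st_deviation_mm - 4 * angina_index

def risk_category(score):
    """Conventional bands: >= +5 low risk, -10 to +4 moderate, <= -11 high."""
    if score >= 5:
        return "low"
    if score >= -10:
        return "moderate"
    return "high"

# Example: 9 minutes on the Bruce protocol, 1 mm ST depression, no angina
s = duke_treadmill_score(9, 1, 0)
print(s, risk_category(s))  # 4 moderate
```

Scores like this are how exercise ECG findings are combined with clinical parameters to triage low-risk patients, as the abstract describes.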

  2. IMAGESEER - IMAGEs for Education and Research

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara

    2012-01-01

    IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.

  3. Evaluating the Information Power Grid using the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Frumkin, Michael A.

    2004-01-01

    The NAS Grid Benchmarks (NGB) are a collection of synthetic distributed applications designed to rate the performance and functionality of computational grids. We compare several implementations of the NGB to determine the programmability and efficiency of NASA's Information Power Grid (IPG), whose services are mostly based on the Globus Toolkit. We report on the overheads involved in porting existing NGB reference implementations to the IPG. No changes were made to the component tasks of the NGB themselves, although the efficiency with which the IPG executes them can still be improved.

  4. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  5. Effects of Performance Versus Game-Based Mobile Applications on Response to Exercise.

    PubMed

    Gillman, Arielle S; Bryan, Angela D

    2016-02-01

    Given the popularity of mobile applications (apps) designed to increase exercise participation, it is important to understand their effects on psychological predictors of exercise behavior. This study tested a performance feedback-based app compared to a game-based app to examine their effects on aspects of immediate response to an exercise bout. Twenty-eight participants completed a 30-min treadmill run while using one of two randomly assigned mobile running apps: Nike + Running, a performance-monitoring app which theoretically induces an associative, goal-driven state, or Zombies Run!, an app which turns the experience of running into a virtual reality game, theoretically inducing dissociation from primary exercise goals. The two conditions did not differ on primary motivational state outcomes; however, participants reported more associative attentional focus in the performance-monitoring app condition compared to more dissociative focus in the game-based app condition. Game-based and performance-tracking running apps may not have differential effects on goal motivation during exercise. However, game-based apps may help recreational exercisers dissociate from exercise more readily. Increasing the enjoyment of an exercise bout through the development of new and innovative mobile technologies is an important avenue for future research.

  6. The definition and application of Pilates exercise to treat people with chronic low back pain: a Delphi survey of Australian physical therapists.

    PubMed

    Wells, Cherie; Kolt, Gregory S; Marshall, Paul; Bialocerkowski, Andrea

    2014-06-01

    Pilates exercise is recommended for people with chronic low back pain (CLBP). In the literature, however, Pilates exercise is described and applied differently to treat people with CLBP. These differences in the definition and application of Pilates exercise make it difficult to evaluate its effectiveness. The aim of this study was to establish consensus regarding the definition and application of Pilates exercise to treat people with CLBP. A panel of Australian physical therapists who are experienced in treating people with CLBP using Pilates exercise were surveyed using the Delphi technique. Three electronic questionnaires were used to collect the respondents' opinions. Answers to open-ended questions were analyzed thematically, combined with systematic literature review findings, and translated into statements about Pilates exercise for people with CLBP. Participants then rated their level of agreement with these statements using a 6-point Likert scale. Consensus was achieved when 70% of the panel members strongly agreed, agreed, or somewhat agreed (or strongly disagreed, disagreed, or somewhat disagreed) with an item. Thirty physical therapists completed all 3 questionnaires and reached consensus on the majority of items. Participants agreed that Pilates exercise requires body awareness, breathing, movement control, posture, and education. It was recommended that people with CLBP should undertake supervised sessions for 30 to 60 minutes, twice per week, for 3 to 6 months. Participants also suggested that people with CLBP would benefit from individualized assessment and exercise prescription, supervision and functional integration of exercises, and use of specialized equipment. Item consensus does not guarantee the accuracy of findings. This survey reflects the opinion of only 30 physical therapists and requires validation in future trials. 
These findings contribute to a better understanding of Pilates exercise and how it is utilized by physical therapists to treat people with CLBP. This information provides direction for future research into Pilates exercise, but findings need to be interpreted within the context of study limitations. © 2014 American Physical Therapy Association.
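The consensus rule described above (at least 70% of panel members on the same side of a 6-point Likert scale) is simple to state programmatically. A minimal sketch; the function name and the panel ratings are invented for illustration.

```python
def consensus_reached(ratings, threshold=0.70):
    """Consensus on a 6-point Likert item (1 = strongly disagree ... 6 =
    strongly agree): reached when >= 70% of panellists fall on the same
    side of the scale (1-3 = disagreeing side, 4-6 = agreeing side)."""
    n = len(ratings)
    agree = sum(1 for r in ratings if r >= 4) / n
    disagree = sum(1 for r in ratings if r <= 3) / n
    return max(agree, disagree) >= threshold

panel = [6, 5, 5, 4, 4, 6, 3, 5, 4, 2]   # hypothetical 10-member panel
print(consensus_reached(panel))           # 8/10 on the agreeing side
```

Items that fail the rule are typically reworded and re-rated in the next Delphi round.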

  7. Outcomes of hip resurfacing in a professional dancer: a case report.

    PubMed

    Dunleavy, Kim

    2012-02-01

    A new surgical option (hip resurfacing arthroplasty) is now available for younger patients with hip osteoarthritis. A more aggressive rehabilitation program than the typical total hip arthroplasty protocol is needed for active individuals. This case report describes interventions used to maximize function in a 46-year-old professional dancer after hip resurfacing with a progressive therapeutic exercise program. Exercise choices were selected to address dance-specific requirements while respecting healing of the posterior capsular incision. Strengthening focused on hip abduction, extension, and external rotation. Precautions included avoiding gluteal stretching until 6 months. Pelvic alignment and weight-bearing distribution were emphasized. The patient was able to return to rehearsal by 7 months, at which time strength was equivalent to the unaffected leg. Range of motion reached unaffected side values at week 8 for internal rotation, week 11 for extension, week 13 for adduction, and week 28 for flexion. External rotation and abduction were still limited at 1 year, which influenced pelvic alignment with resultant pain on the unaffected side. Functional and impairment outcomes are presented with timelines to provide a basis for postoperative benchmarks for active clients after hip resurfacing. Although this case report presents a dance-specific program, exercise progressions for other active individuals may benefit from similar exercise intensity and sports-specific focus. Future rehabilitation programs should take into account possible flexion and external rotation range limitations and the need for gluteal muscle strengthening along with symmetry and pelvic alignment correction. Long-term studies investigating intensity of rehabilitation are warranted for patients intending to participate in higher level athletic activity.

  8. Benchmarking a soil moisture data assimilation system for agricultural drought monitoring

    USDA-ARS?s Scientific Manuscript database

    Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this ...

  9. A framework for prescription in exercise-oncology research†

    PubMed Central

    Sasso, John P; Eves, Neil D; Christensen, Jesper F; Koelwyn, Graeme J; Scott, Jessica; Jones, Lee W

    2015-01-01

    The field of exercise-oncology has increased dramatically over the past two decades, with close to 100 published studies investigating the efficacy of structured exercise training interventions in patients with cancer. Of interest, despite considerable differences in study population and primary study end point, the vast majority of studies have tested the efficacy of an exercise prescription that adhered to traditional guidelines consisting of either supervised or home-based endurance (aerobic) training or endurance training combined with resistance training, prescribed at a moderate intensity (50–75% of a predetermined physiological parameter, typically age-predicted heart rate maximum or reserve), for two to three sessions per week, for 10 to 60 min per exercise session, for 12 to 15 weeks. The use of generic exercise prescriptions may, however, be masking the full therapeutic potential of exercise treatment in the oncology setting. Against this background, this opinion paper provides an overview of the fundamental tenets of human exercise physiology known as the principles of training, with specific application of these principles in the design and conduct of clinical trials in exercise-oncology research. We contend that the application of these guidelines will ensure continued progress in the field while optimizing the safety and efficacy of exercise treatment following a cancer diagnosis. PMID:26136187
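The moderate-intensity prescription described above (50-75% of a predetermined physiological parameter such as heart-rate reserve) can be turned into concrete training targets with the heart-rate-reserve (Karvonen) method, assuming the common age-predicted maximum of 220 minus age. A sketch only; the function name and the example patient are hypothetical.

```python
def target_heart_rate(age, resting_hr, intensity):
    """Heart-rate-reserve (Karvonen) prescription: resting HR plus a
    fraction of the reserve, with age-predicted maximum HR = 220 - age."""
    hr_max = 220 - age
    return resting_hr + intensity * (hr_max - resting_hr)

# Moderate-intensity band (50-75% of reserve) for a hypothetical
# 60-year-old patient with a resting HR of 70 beats/min
low = target_heart_rate(60, 70, 0.50)
high = target_heart_rate(60, 70, 0.75)
print(f"{low:.0f}-{high:.0f} beats/min")
```

Individualizing `age`, `resting_hr`, and `intensity` per patient is one concrete way the "principles of training" argued for above depart from a generic prescription.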

  10. Sintered Cathodes for All-Solid-State Structural Lithium-Ion Batteries

    NASA Technical Reports Server (NTRS)

    Huddleston, William; Dynys, Frederick; Sehirlioglu, Alp

    2017-01-01

    All-solid-state structural lithium ion batteries serve as both structural load-bearing components and as electrical energy storage devices to achieve system level weight savings in aerospace and other transportation applications. This multifunctional design goal is critical for the realization of next generation hybrid or all-electric propulsion systems. Additionally, transitioning to solid state technology improves upon battery safety from previous volatile architectures. This research established baseline solid state processing conditions and performance benchmarks for intercalation-type layered oxide materials for multifunctional application. Under consideration were lithium cobalt oxide and lithium nickel manganese cobalt oxide. Pertinent characteristics such as electrical conductivity, strength, chemical stability, and microstructure were characterized for future application in all-solid-state structural battery cathodes. The study includes characterization by XRD, ICP, SEM, ring-on-ring mechanical testing, and electrical impedance spectroscopy to elucidate optimal processing parameters, material characteristics, and multifunctional performance benchmarks. These findings provide initial conditions for implementing existing cathode materials in load bearing applications.

  11. Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2004-01-01

    This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with the comparisons of their solutions to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows: Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. 
The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a vortical gust interacting with an airfoil. Category 4: Sound Transmission and Radiation. Category 5: Sound Generation in Viscous Problems. Sound is generated under certain conditions by a viscous flow as the flow passes an object or a cavity.

  12. ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms

    NASA Astrophysics Data System (ADS)

    Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François

    2015-10-01

    Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.
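Because finding the optimal task distribution is NP-hard, global schedulers of the kind ComprehensiveBench evaluates rely on heuristics. As a point of reference (a classic textbook baseline, not one of the benchmark's or Charm++'s own algorithms), the greedy longest-processing-time rule can be sketched as:

```python
import heapq

def lpt_schedule(task_loads, n_resources):
    """Longest-Processing-Time greedy heuristic for the NP-hard
    load-balancing problem: assign each task, heaviest first, to the
    currently least-loaded resource (tracked with a min-heap)."""
    heap = [(0.0, r) for r in range(n_resources)]  # (load, resource id)
    heapq.heapify(heap)
    assignment = {}
    for task, load in sorted(enumerate(task_loads), key=lambda t: -t[1]):
        total, r = heapq.heappop(heap)
        assignment[task] = r
        heapq.heappush(heap, (total + load, r))
    makespan = max(total for total, _ in heap)
    return assignment, makespan

# Imbalanced task loads on 2 resources (illustrative values)
assignment, makespan = lpt_schedule([7, 3, 2, 2, 2], 2)
print(assignment, makespan)
```

A benchmark like ComprehensiveBench varies the load distribution and communication pattern to expose where simple heuristics like this one break down relative to more sophisticated schedulers.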

  13. Effects of exercise pressor reflex activation on carotid baroreflex function during exercise in humans

    NASA Technical Reports Server (NTRS)

    Gallagher, K. M.; Fadel, P. J.; Stromstad, M.; Ide, K.; Smith, S. A.; Querry, R. G.; Raven, P. B.; Secher, N. H.

    2001-01-01

    1. This investigation was designed to determine the contribution of the exercise pressor reflex to the resetting of the carotid baroreflex during exercise. 2. Ten subjects performed 3.5 min of static one-legged exercise (20 % maximal voluntary contraction) and 7 min dynamic cycling (20 % maximal oxygen uptake) under two conditions: control (no intervention) and with the application of medical anti-shock (MAS) trousers inflated to 100 mmHg (to activate the exercise pressor reflex). Carotid baroreflex function was determined at rest and during exercise using a rapid neck pressure/neck suction technique. 3. During exercise, the application of MAS trousers (MAS condition) increased mean arterial pressure (MAP), plasma noradrenaline concentration (dynamic exercise only) and perceived exertion (dynamic exercise only) when compared to control (P < 0.05). No effect of the MAS condition was evident at rest. The MAS condition had no effect on heart rate (HR), plasma lactate and adrenaline concentrations or oxygen uptake at rest and during exercise. The carotid baroreflex stimulus-response curve was reset upward on the response arm and rightward to a higher operating pressure by control exercise without alterations in gain. Activation of the exercise pressor reflex by MAS trousers further reset carotid baroreflex control of MAP, as indicated by the upward and rightward relocation of the curve. However, carotid baroreflex control of HR was only shifted rightward to higher operating pressures by MAS trousers. The sensitivity of the carotid baroreflex was unaltered by exercise pressor reflex activation. 4. These findings suggest that during dynamic and static exercise the exercise pressor reflex is capable of actively resetting carotid baroreflex control of mean arterial pressure; however, it would appear only to modulate carotid baroreflex control of heart rate.

  14. Defining Exercise Performance Metrics for Flight Hardware Development

    NASA Technical Reports Server (NTRS)

    Beyene, Nahon M.

    2004-01-01

    The space industry has prevailed over numerous design challenges in the spirit of exploration. Manned space flight entails creating products for use by humans, and the Johnson Space Center has pioneered this effort as NASA's center for manned space flight. NASA astronauts use a suite of flight exercise hardware to maintain strength for extravehicular activities and to minimize losses in muscle mass and bone mineral density. With a cycle ergometer, treadmill, and the Resistive Exercise Device available on the International Space Station (ISS), the Space Medicine community aspires to reproduce physical loading schemes that match exercise performance in Earth's gravity. The resistive exercise device presents the greatest challenge with the duty of accommodating 20 different exercises and many variations on the core set of exercises. This paper presents a methodology for capturing engineering parameters that can quantify proper resistive exercise performance techniques. For each specified exercise, the method provides engineering parameters on hand spacing, foot spacing, and positions of the point of load application at the starting point, midpoint, and end point of the exercise. As humans vary in height and fitness levels, the methodology presents values as ranges. In addition, this method shows engineers the proper load application regions on the human body. The methodology applies to resistive exercise in general and is in use for the current development of a Resistive Exercise Device. Exercise hardware systems must remain available for use and conducive to proper exercise performance as a contributor to mission success. The astronauts depend on exercise hardware to support extended stays aboard the ISS. Future plans towards exploration of Mars and beyond acknowledge the necessity of exercise.
Continuous improvement in technology and our understanding of human health maintenance in space will allow us to support the exploration of Mars and the future of space exploration.

  15. Identifying Continuous Quality Improvement Priorities in Maternal, Infant, and Early Childhood Home Visiting.

    PubMed

    Preskitt, Julie; Fifolt, Matthew; Ginter, Peter M; Rucks, Andrew; Wingate, Martha S

    2016-01-01

    The purpose of this article was to describe a methodology to identify continuous quality improvement (CQI) priorities for one state's Maternal, Infant, and Early Childhood Home Visiting program from among the 40 required constructs associated with 6 program benchmarks. The authors discuss how the methodology provided consensus on system CQI quality measure priorities and describe variation among the 3 service delivery models used within the state. Q-sort methodology was used by home visiting (HV) service delivery providers (home visitors) to prioritize HV quality measures for the overall state HV system as well as their service delivery model. There was general consensus overall and among the service delivery models on CQI quality measure priorities, although some variation was observed. Measures associated with Maternal, Infant, and Early Childhood Home Visiting benchmark 1, Improved Maternal and Newborn Health, and benchmark 3, Improvement in School Readiness and Achievement, were the highest ranked. The Q-sort exercise allowed home visitors an opportunity to examine priorities within their service delivery model as well as for the overall First Teacher HV system. Participants engaged in meaningful discussions regarding how and why they selected specific quality measures and developed a greater awareness and understanding of a systems approach to HV within the state. The Q-sort methodology presented in this article can easily be replicated by other states to identify CQI priorities at the local and state levels and can be used effectively in states that use a single HV service delivery model or those that implement multiple evidence-based models for HV service delivery.
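    A sketch of how such a prioritisation could be computed: Q-sorts from individual home visitors can be aggregated by averaging the column value each participant assigned to each quality measure. The item names and scoring scheme below are illustrative assumptions, not the study's actual instrument:

```python
from collections import defaultdict

def aggregate_qsort(sorts):
    """Aggregate forced-distribution Q-sorts: average the column value each
    participant assigned to each item; higher mean = higher priority."""
    totals, counts = defaultdict(float), defaultdict(int)
    for sort in sorts:                      # one {item: column value} dict per participant
        for item, value in sort.items():
            totals[item] += value
            counts[item] += 1
    means = {item: totals[item] / counts[item] for item in totals}
    # Return items from highest to lowest mean placement.
    return sorted(means, key=means.get, reverse=True)
```

Two hypothetical participants placing measures "a", "b", "c" on a -2..+2 grid would rank "a" first and "c" last; real Q-methodology additionally factor-analyses the sorts rather than simply averaging them.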

  16. The MPC&A Questionnaire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    The questionnaire is the instrument used for recording performance data on the nuclear material protection, control, and accountability (MPC&A) system at a nuclear facility. The performance information provides a basis for evaluating the effectiveness of the MPC&A system. The goal for the questionnaire is to provide an accurate representation of the performance of the MPC&A system as it currently exists in the facility. Performance grades for all basic MPC&A functions should realistically reflect the actual level of performance at the time the survey is conducted. The questionnaire was developed after testing and benchmarking the material control and accountability (MC&A) system effectiveness tool (MSET) in the United States. The benchmarking exercise at the Idaho National Laboratory (INL) proved extremely valuable for improving the content and quality of the early versions of the questionnaire. Members of the INL benchmark team identified many areas of the questionnaire where questions should be clarified and areas where additional questions should be incorporated. The questionnaire addresses all elements of the MC&A system. Specific parts pertain to the foundation for the facility's overall MPC&A system, and other parts pertain to the specific functions of the operational MPC&A system. The questionnaire includes performance metrics for each of the basic functions or tasks performed in the operational MPC&A system. All of those basic functions or tasks are represented as basic events in the MPC&A fault tree. Performance metrics are to be used during completion of the questionnaire to report what is actually being done in relation to what should be done in the performance of MPC&A functions.

  17. Results of the Australasian (Trans-Tasman Oncology Group) radiotherapy benchmarking exercise in preparation for participation in the PORTEC-3 trial.

    PubMed

    Jameson, Michael G; McNamara, Jo; Bailey, Michael; Metcalfe, Peter E; Holloway, Lois C; Foo, Kerwyn; Do, Viet; Mileshkin, Linda; Creutzberg, Carien L; Khaw, Pearly

    2016-08-01

    Protocol deviations in randomised controlled trials have been found to result in a significant decrease in survival and local control. In some cases, the magnitude of the detrimental effect can be larger than the anticipated benefits of the interventions involved. The implementation of appropriate radiotherapy quality assurance measures for clinical trials has been found to result in fewer deviations from protocol. This paper reports on a benchmarking study conducted in preparation for the PORTEC-3 trial in Australasia. A benchmarking CT dataset was sent to each of the Australasian investigators, who were asked to contour and plan the case according to the trial protocol using their local treatment planning systems. These data were then sent back to the Trans-Tasman Oncology Group for collation and analysis. Thirty-three investigators from eighteen institutions across Australia and New Zealand took part in the study. The mean clinical target volume (CTV) was 383.4 (228.5-497.8) cm³ and the mean dose to a reference gold-standard CTV was 48.8 (46.4-50.3) Gy. Although there were some large differences in the contouring of the CTV and its constituent parts, these did not translate into large variations in dosimetry. Where individual investigators deviated from the trial contouring protocol, feedback was provided. The results of this study will be used for comparison with the international QA study for the PORTEC-3 trial. © 2016 The Royal Australian and New Zealand College of Radiologists.

  18. Risk-based criteria to support validation of detection methods for drinking water and air.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonell, M.; Bhattacharyya, M.; Finster, M.

    2009-02-18

    This report was prepared to support the validation of analytical methods for threat contaminants under the U.S. Environmental Protection Agency (EPA) National Homeland Security Research Center (NHSRC) program. It is designed to serve as a resource for certain applications of benchmark and fate information for homeland security threat contaminants. The report identifies risk-based criteria from existing health benchmarks for drinking water and air for potential use as validation targets. The focus is on benchmarks for chronic public exposures. The priority sources are standard EPA concentration limits for drinking water and air, along with oral and inhalation toxicity values. Many contaminants identified as homeland security threats to drinking water or air would convert to other chemicals within minutes to hours of being released. For this reason, a fate analysis has been performed to identify potential transformation products and removal half-lives in air and water so appropriate forms can be targeted for detection over time. The risk-based criteria presented in this report to frame method validation are expected to be lower than actual operational targets based on realistic exposures following a release. Note that many target criteria provided in this report are taken from available benchmarks without assessing the underlying toxicological details. That is, although the relevance of the chemical form and analogues are evaluated, the toxicological interpretations and extrapolations conducted by the authoring organizations are not. It is also important to emphasize that such targets in the current analysis are not health-based advisory levels to guide homeland security responses. This integrated evaluation of chronic public benchmarks and contaminant fate has identified more than 200 risk-based criteria as method validation targets across numerous contaminants and fate products in drinking water and air combined.
The gap in directly applicable values is considerable across the full set of threat contaminants, so preliminary indicators were developed from other well-documented benchmarks to serve as a starting point for validation efforts. By this approach, at least preliminary context is available for water or air, and sometimes both, for all chemicals on the NHSRC list that was provided for this evaluation. This means that a number of concentrations presented in this report represent indirect measures derived from related benchmarks or surrogate chemicals, as described within the many results tables provided in this report.

  19. Performance benchmark of LHCb code on state-of-the-art x86 architectures

    NASA Astrophysics Data System (ADS)

    Campora Perez, D. H.; Neufeld, N.; Schwemmer, R.

    2015-12-01

    For Run 2 of the LHC, LHCb is replacing a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late forking, NUMA balancing and the optimal number of threads, i.e. it automatically optimises box-level performance. We have run this procedure on a wide range of Haswell-E CPUs and numerous other architectures from both Intel and AMD, including the latest Intel micro-blade servers. We present results in terms of performance, power consumption, overheads and relative cost.
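    The self-optimisation idea can be sketched as a sweep over candidate configurations that keeps the one with the highest measured throughput. The workload and function names below are illustrative stand-ins, not LHCb code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def busy_work(n=50_000):
    """Placeholder CPU-bound job standing in for a trigger event."""
    s = 0
    for i in range(n):
        s += i * i
    return s

def autotune_threads(workload, candidates, jobs_per_trial=16):
    """Sweep candidate thread counts and keep the configuration with the
    highest measured throughput (jobs per second)."""
    throughput = {}
    for n_threads in candidates:
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            # Run a fixed batch of jobs and time the whole batch.
            list(pool.map(lambda _: workload(), range(jobs_per_trial)))
        throughput[n_threads] = jobs_per_trial / (time.perf_counter() - start)
    best = max(throughput, key=throughput.get)
    return best, throughput
```

A real self-optimising benchmark would measure the actual application binary and also vary process forking strategy and NUMA placement, but the select-the-best-measured-configuration loop is the same.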

  20. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  1. Exercise as Punishment: An Application of the Theory of Planned Behavior

    ERIC Educational Resources Information Center

    Richardson, Karen; Rosenthal, Maura; Burak, Lydia

    2012-01-01

    Background: Lack of exercise and physical inactivity have been implicated as contributors to obesity and overweight in America. At a time when experts point to the need for increased exercise, many youth have experienced exercise as a form of punishment, which appears to be embedded in physical education and sport culture. Purpose: This study…

  2. Analysis of Photonic Networks for a Chip Multiprocessor Using Scientific Applications

    DTIC Science & Technology

    2009-05-01

    Analysis of Photonic Networks for a Chip Multiprocessor Using Scientific Applications Gilbert Hendry†, Shoaib Kamil‡, Aleksandr Biberman†, Johnnie...electronic networks-on-chip warrants investigating real application traces on functionally comparable photonic and electronic network designs. We... network can achieve 75× improvement in energy efficiency for synthetic benchmarks and up to 37× improvement for real scientific applications

  3. Musk as a Pheromone? Didactic Exercise.

    ERIC Educational Resources Information Center

    Bersted, Chris T.

    A classroom/laboratory exercise has been used to introduce college students to factorial research designs, differentiate between interpretations for experimental and quasi-experimental variables, and exemplify application of laboratory research methods to test practical questions (advertising claims). The exercise involves having randomly divided…

  4. Limnology. Student Fieldbook.

    ERIC Educational Resources Information Center

    Jones, Michael

    This student fieldbook provides exercises for a three-week course in limnology. Exercises emphasize applications of knowledge in chemistry, physics, and biology to understand the natural operation of freshwater systems. Fourteen field exercises include: (1) testing for water quality; (2) determination of water temperature, turbidity, dissolved…

  5. [Quality of mental health services: a self audit in the South Verona mental health service].

    PubMed

    Allevi, Liliana; Salvi, Giovanni; Ruggeri, Mirella

    2006-01-01

    To start a process of Continuous Quality Improvement (CQI) in an Italian Community Mental Health Service by using a quality assurance questionnaire in a self audit exercise. The questionnaire was administered to 14 key workers and clinical managers with different roles and seniority. One senior manager's evaluation was used as a benchmark for all the others. Changes were introduced in the service practice according to what emerged from the evaluation. Meetings were scheduled to monitor those changes and renew the CQI process. There was a wide difference in the key workers' answers. Overall, the senior manager's evaluation was on the 60th percentile of the distribution of the other evaluations. Those areas that required prompt intervention were risk management, personnel development, and CQI. The CQI process was followed up for one year: some interventions were carried out to change the practice of the service. A self audit exercise in Community Mental Health Services was both feasible and useful. The CQI process was easier to start than to carry on over the long term.

  6. 47 CFR 73.3555 - Multiple ownership.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... vertical ownership chain and application of the relevant attribution benchmark to the resulting product, except that wherever the ownership percentage for any link in the chain exceeds 50%, it shall not be... multiplication of the ownership percentages for each link in the vertical ownership chain and application of the...

  7. 47 CFR 73.3555 - Multiple ownership.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... vertical ownership chain and application of the relevant attribution benchmark to the resulting product, except that wherever the ownership percentage for any link in the chain exceeds 50%, it shall not be... multiplication of the ownership percentages for each link in the vertical ownership chain and application of the...

  8. 47 CFR 73.3555 - Multiple ownership.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... vertical ownership chain and application of the relevant attribution benchmark to the resulting product, except that wherever the ownership percentage for any link in the chain exceeds 50%, it shall not be... multiplication of the ownership percentages for each link in the vertical ownership chain and application of the...

  9. 47 CFR 73.3555 - Multiple ownership.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... vertical ownership chain and application of the relevant attribution benchmark to the resulting product, except that wherever the ownership percentage for any link in the chain exceeds 50%, it shall not be... multiplication of the ownership percentages for each link in the vertical ownership chain and application of the...

  10. 47 CFR 73.3555 - Multiple ownership.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... vertical ownership chain and application of the relevant attribution benchmark to the resulting product, except that wherever the ownership percentage for any link in the chain exceeds 50%, it shall not be... multiplication of the ownership percentages for each link in the vertical ownership chain and application of the...

  11. Ultrasound-Guided Application of Percutaneous Electrolysis as an Adjunct to Exercise and Manual Therapy for Subacromial Pain Syndrome: a Randomized Clinical Trial.

    PubMed

    de-Miguel-Valtierra, Lorena; Salom-Moreno, Jaime; Fernández-de-Las-Peñas, César; Cleland, Joshua A; Arias-Buría, José L

    2018-05-16

    This randomized clinical trial compared the effects of adding US-guided percutaneous electrolysis to a program of manual therapy and exercise on pain, related disability, function and pressure sensitivity in subacromial pain syndrome. Fifty patients with subacromial pain syndrome were randomized into a manual therapy and exercise group or a percutaneous electrolysis group. All patients received the same manual therapy and exercise program, one session per week for 5 consecutive weeks. Patients assigned to the electrolysis group also received the application of percutaneous electrolysis at each session. The primary outcome was Disabilities of the Arm, Shoulder and Hand (DASH). Secondary outcomes included pain, function (Shoulder Pain and Disability Index-SPADI), pressure pain thresholds (PPTs) and Global Rating of Change (GROC). They were assessed at baseline, post-treatment, and 3 and 6 months after treatment. Both groups showed similar improvements in the primary outcome (DASH) at all follow-ups (P=0.051). Subjects receiving manual therapy, exercise, and percutaneous electrolysis showed significantly greater changes in shoulder pain (P<0.001) and SPADI (P<0.001) than those receiving manual therapy and exercise alone at all follow-ups. Effect sizes were large (SMD>0.91) for shoulder pain and function at 3 and 6 months in favour of the percutaneous electrolysis group. No between-group differences in PPT were found. The current clinical trial found that the inclusion of US-guided percutaneous electrolysis in combination with manual therapy and exercise resulted in no significant differences in related disability (DASH) compared with the application of manual therapy and exercise alone in patients with subacromial pain syndrome. Nevertheless, differences were reported for some secondary outcomes such as shoulder pain and function (SPADI). 
Whether or not these effects are reliable should be addressed in future studies. Perspective: This study found that the inclusion of US-guided percutaneous electrolysis in a manual therapy and exercise program resulted in no significant differences in disability and pressure pain sensitivity compared with the application of manual therapy and exercise alone in patients with subacromial pain syndrome. Copyright © 2018. Published by Elsevier Inc.

  12. A method to improve the nutritional quality of foods and beverages based on dietary recommendations.

    PubMed

    Nijman, C A J; Zijp, I M; Sierksma, A; Roodenburg, A J C; Leenen, R; van den Kerkhoff, C; Weststrate, J A; Meijer, G W

    2007-04-01

    The increasing consumer interest in health prompted Unilever to develop a globally applicable method (Nutrition Score) to evaluate and improve the nutritional composition of its foods and beverages portfolio. Based on (inter)national dietary recommendations, generic benchmarks were developed to evaluate foods and beverages on their content of trans fatty acids, saturated fatty acids, sodium and sugars. High intakes of these key nutrients are associated with undesirable health effects. In principle, the developed generic benchmarks can be applied globally for any food and beverage product. Product category-specific benchmarks were developed when it was not feasible to meet generic benchmarks because of technological and/or taste factors. The whole Unilever global foods and beverages portfolio has been evaluated and actions have been taken to improve the nutritional quality. The advantages of this method over other initiatives to assess the nutritional quality of foods are its basis in the latest nutritional scientific insights and its global applicability. The Nutrition Score is the first simple, transparent and straightforward method that can be applied globally and across all food and beverage categories to evaluate the nutritional composition. It can help food manufacturers to improve the nutritional value of their products. In addition, the Nutrition Score can be a starting point for a powerful front-of-pack health indicator. This can have a significant positive impact on public health, especially when implemented by all food manufacturers.
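    A minimal sketch of benchmark-based product evaluation in this spirit follows; the nutrient keys and per-100-g cut-offs are hypothetical placeholders, not Unilever's published benchmarks:

```python
# Hypothetical per-100-g benchmarks for the four key nutrients named above.
BENCHMARKS = {"tfa_g": 0.1, "sfa_g": 5.0, "sodium_mg": 400.0, "sugars_g": 12.5}

def passes_benchmarks(product, benchmarks=BENCHMARKS):
    """Evaluate a product's nutrient content against generic benchmarks.

    Returns (passes, failures), where failures maps each nutrient whose
    content exceeds its benchmark to the offending value."""
    failures = {k: v for k, v in product.items()
                if k in benchmarks and v > benchmarks[k]}
    return not failures, failures
```

A product category-specific benchmark, as described above, would simply be a different `benchmarks` dict passed in for categories where the generic cut-offs are technologically infeasible.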

  13. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce both the behaviour of established and widely used codes and the results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  14. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    NASA Astrophysics Data System (ADS)

    Glerum, Anne; Thieulot, Cedric; Fraters, Menno; Blom, Constantijn; Spakman, Wim

    2018-03-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction.

    Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields.

    The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.
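    The composite rheology described above can be sketched as a harmonic average of diffusion and dislocation creep viscosities, capped by an effective viscosity derived from a Drucker-Prager yield stress. All parameter values below are placeholders for illustration, not those used in the paper:

```python
def effective_viscosity(strain_rate, pressure,
                        A_diff=5e-22, A_disl=1e-16, n=3.5,
                        cohesion=20e6, friction=0.25, eta_max=1e24):
    """Composite viscoplastic viscosity (sketch, placeholder parameters).

    strain_rate: second invariant of the deviatoric strain rate (1/s)
    pressure: dynamic pressure (Pa)"""
    eta_diff = 1.0 / (2.0 * A_diff)                     # linear (Newtonian) creep
    # Power-law creep: strain_rate = A * tau**n  =>  eta = tau / (2*strain_rate)
    eta_disl = 0.5 * A_disl ** (-1.0 / n) * strain_rate ** ((1.0 - n) / n)
    eta_creep = 1.0 / (1.0 / eta_diff + 1.0 / eta_disl)  # harmonic average
    tau_yield = cohesion + friction * pressure           # Drucker-Prager criterion
    eta_plast = tau_yield / (2.0 * strain_rate)          # viscosity capping stress at yield
    return min(eta_creep, eta_plast, eta_max)
```

With n > 1 both the dislocation and plastic branches weaken with increasing strain rate, so the effective viscosity is non-increasing in strain rate, which is the behaviour the benchmarks above (brick, sandbox, slab detachment) exercise.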

  15. A self-adapting heuristic for automatically constructing terrain appreciation exercises

    NASA Astrophysics Data System (ADS)

    Nanda, S.; Lickteig, C. L.; Schaefer, P. S.

    2008-04-01

    Appreciating terrain is a key to success in both symmetric and asymmetric forms of warfare. Training to enable Soldiers to master this vital skill has traditionally required their translocation to a selected number of areas, each affording a desired set of topographical features, albeit with limited breadth of variety. As a result, the use of such methods has proved to be costly and time consuming. To counter this, new computer-aided training applications permit users to rapidly generate and complete training exercises in geo-specific open and urban environments rendered by high-fidelity image generation engines. The latter method is not only cost-efficient, but allows any given exercise and its conditions to be duplicated or systematically varied over time. However, even such computer-aided applications have shortcomings. One of the principal ones is that they usually require all training exercises to be painstakingly constructed by a subject matter expert. Furthermore, exercise difficulty is usually subjectively assessed and frequently ignored thereafter. As a result, such applications lack the ability to grow and adapt to the skill level and learning curve of each trainee. In this paper, we present a heuristic that automatically constructs exercises for identifying key terrain. Each exercise is created and administered in a unique iteration, with its level of difficulty tailored to the trainee's ability based on the correctness of that trainee's responses in prior iterations.
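    The difficulty-tailoring loop can be sketched as a simple one-up/one-down staircase rule; the paper's heuristic is richer, so this only illustrates the idea of adapting the exercise level to response correctness:

```python
def next_difficulty(level, correct, step=1, lo=1, hi=10):
    """One-up/one-down staircase: raise difficulty after a correct response,
    lower it after an incorrect one, clamped to the range [lo, hi]."""
    level = level + step if correct else level - step
    return max(lo, min(hi, level))
```

Iterating this rule over a trainee's responses makes the administered difficulty converge toward the level at which the trainee answers correctly about half the time, which is the adaptive behaviour the abstract describes.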

  16. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimation of the neutron activation products distributed in the reactor structure materials has an obvious impact on decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of neutron deep penetration calculation with a Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimation. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were compared. Variance reduction options and their performance were discussed and compared.
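    One classic variance-reduction technique for deep penetration, geometric splitting, can be illustrated with a didactic one-dimensional sketch (a purely absorbing slab, not TRIPOLI-4's actual implementation): each particle reaching a deeper cell is split into two half-weight copies, so the simulated population stays roughly constant with depth while the analog transmission probability decays exponentially.

```python
import math
import random

def slab_transmission(sigma, thickness, n_cells, n_source, seed=1):
    """Estimate the transmission probability exp(-sigma*thickness) through a
    purely absorbing slab using geometric splitting at cell boundaries.
    Splitting preserves the expected weight, so the estimator is unbiased."""
    rng = random.Random(seed)
    dx = thickness / n_cells            # cell width
    transmitted = 0.0
    for _ in range(n_source):
        stack = [(0, 1.0)]              # (cell index, statistical weight)
        while stack:
            cell, w = stack.pop()
            # Exponential free-flight length from the cell entrance; the
            # memoryless property lets us restart the flight at each boundary.
            if -math.log(rng.random()) / sigma < dx:
                continue                # absorbed inside this cell
            if cell + 1 == n_cells:
                transmitted += w        # escaped through the back face
            else:                       # split into two half-weight copies
                stack.append((cell + 1, w / 2))
                stack.append((cell + 1, w / 2))
    return transmitted / n_source
```

For sigma*thickness = 5, an analog simulation scores roughly 1 transmission per 150 source particles, whereas the split population delivers far more (lower-weight) tallies for the same number of sources, shrinking the variance of the estimate.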

  17. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition remains relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been established for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that our COX Face DB is a good benchmark database for evaluation.

  18. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted to assess TRIPOLI-4® on fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In the previous ITER benchmark, however, only the neutron wall loading was analyzed; its main purpose was to present the MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extensive benchmark has been performed covering the estimation of neutron flux, nuclear heating in the shielding blankets, and tritium production rate in the European TBMs (HCLL and HCPB); it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating, and tritium production rate show good agreement between the two codes; discrepancies fall mainly within the statistical errors of the Monte Carlo codes.

  19. A benchmark for vehicle detection on wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Catrambone, Joseph; Amzovski, Ismail; Liang, Pengpeng; Blasch, Erik; Sheaff, Carolyn; Wang, Zhonghai; Chen, Genshe; Ling, Haibin

    2015-05-01

    Wide area motion imagery (WAMI) has been attracting an increased amount of research attention due to its large spatial and temporal coverage. An important application is moving target analysis, where vehicle detection is often one of the first steps before advanced activity analysis. While many vehicle detection algorithms exist, a thorough evaluation of them on WAMI data remains a challenge, mainly due to the lack of an appropriate benchmark data set. In this paper, we address this need by presenting a new benchmark data set for vehicle detection in wide area motion imagery. The benchmark is based on the recently released Wright-Patterson Air Force Base (WPAFB09) dataset and the associated Temple Resolved Uncertainty Target History (TRUTH) target annotation. Trajectory annotations were provided in the original release of the WPAFB09 dataset, but detailed vehicle annotations were not, and static vehicles, e.g., those in parking lots, were also not identified. Addressing these issues, we re-annotated the whole dataset with detailed information for each vehicle, including not only the target's location but also its pose and size. The annotated WAMI data set should be useful to the community as a common benchmark for comparing WAMI detection, tracking, and identification methods.
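Scoring detectors against per-vehicle box annotations of this kind typically reduces to matching predicted and annotated boxes by intersection-over-union (IoU). A minimal greedy matcher is sketched below; the 0.5 threshold is a common convention, not one mandated by this benchmark:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(dets, truths, thresh=0.5):
    """Greedily match each detection to its best unmatched annotation;
    returns (true positives, false positives, false negatives)."""
    unmatched = list(truths)
    tp = 0
    for d in dets:
        best = max(unmatched, key=lambda t: iou(d, t), default=None)
        if best is not None and iou(d, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    return tp, len(dets) - tp, len(unmatched)
```

From the (tp, fp, fn) counts, precision and recall follow directly, which is how detection methods are usually compared on a common benchmark.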

  20. PHABSIM for Windows User's Manual and Exercises

    USGS Publications Warehouse

    Waddle, Terry

    2001-01-01

    This document is a combined self-study textbook and reference manual. The material is presented in the general order of a PHABSIM study placed within the context of an IFIM application. The document may also be used as reading material for a lecture-based course. This manual provides documentation of the various PHABSIM programs, so every option of each program is treated. This text is not a guidebook for organization and implementation of a PHABSIM study. Use of PHABSIM should take place in the context of an IFIM application. See Bovee et al. (1998) for guidance in designing and performing a PHABSIM study as part of a larger IFIM application. The document concludes with a set of 12 laboratory exercises. Users are strongly encouraged to work through the laboratory exercises prior to applying the software to a study. Working through the exercises will enhance familiarity with the programs and answer many questions that may arise during a PHABSIM analysis.

  1. An Unbiased Method To Build Benchmarking Sets for Ligand-Based Virtual Screening and its Application To GPCRs

    PubMed Central

    2015-01-01

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus has been placed on structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is a great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date, ready-to-apply data sets for LBVS are fairly limited, and the direct use of benchmarking sets designed for SBVS could introduce biases into the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCR targets. More specifically, our method can (1) ensure chemical diversity of ligands, (2) maintain physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize the spatially random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and the average AUC of the ROC curves as the metric. Our method greatly reduces the “artificial enrichment” and “analogue bias” of a published GPCR benchmarking set, i.e., the GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed the important issue of the ratio of decoys per ligand and found that, over a range of 30 to 100, it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD. PMID:24749745

  2. An unbiased method to build benchmarking sets for ligand-based virtual screening and its application to GPCRs.

    PubMed

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-05-27

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus has been placed on structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is a great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date, ready-to-apply data sets for LBVS are fairly limited, and the direct use of benchmarking sets designed for SBVS could introduce biases into the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCR targets. More specifically, our method can (1) ensure chemical diversity of ligands, (2) maintain physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize the spatially random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and the average AUC of the ROC curves as the metric. Our method greatly reduces the "artificial enrichment" and "analogue bias" of a published GPCR benchmarking set, i.e., the GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed the important issue of the ratio of decoys per ligand and found that, over a range of 30 to 100, it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
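The ROC-AUC metric used in evaluations like this has a convenient rank interpretation (the Mann-Whitney U statistic): it is the probability that a randomly chosen ligand is scored above a randomly chosen decoy. A minimal sketch with hypothetical scores:

```python
def roc_auc(ligand_scores, decoy_scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen ligand outscores a randomly chosen decoy (ties 0.5)."""
    wins = 0.0
    for l in ligand_scores:
        for d in decoy_scores:
            if l > d:
                wins += 1.0
            elif l == d:
                wins += 0.5
    return wins / (len(ligand_scores) * len(decoy_scores))

# Hypothetical screening scores, higher = more ligand-like.
print(roc_auc([0.9, 0.7, 0.6], [0.8, 0.4, 0.3, 0.2]))  # 10/12
```

An AUC of 0.5 corresponds to random ranking; "artificial enrichment" shows up as AUC values that stay high even for trivial scoring functions.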

  3. 75 FR 27781 - Agency Information Collection Activities: Proposed Collection Renewals; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... respondent burden, invites the general public and other Federal agencies to take this opportunity to comment... information titled: Application For Consent to Exercise Trust Powers (3064-0025), and Insurance Sales Consumer...: 1. Title: Application for Consent to Exercise Trust Powers. OMB Number: 3064-0025. Form Number: FDIC...

  4. Perceived Exertion: An Old Exercise Tool Finds New Applications.

    ERIC Educational Resources Information Center

    Monahan, Terry

    1988-01-01

    Perceived exertion scales, based on subjective perception of energy output, are gaining respect as prescribing and monitoring tools for individual exercise programs. A review of recent literature indicates growing research interest in applications for individuals who are elderly, inactive, or subject to medical conditions such as angina. (IAH)

  5. Applicability of Cone Beam Computed Tomography to the Assessment of the Vocal Tract before and after Vocal Exercises in Normal Subjects.

    PubMed

    Garcia, Elisângela Zacanti; Yamashita, Hélio Kiitiro; Garcia, Davi Sousa; Padovani, Marina Martins Pereira; Azevedo, Renata Rangel; Chiari, Brasília Maria

    2016-01-01

    Cone beam computed tomography (CBCT), which represents an alternative to traditional computed tomography and magnetic resonance imaging, may be a useful instrument to study vocal tract physiology related to vocal exercises. This study aims to evaluate the applicability of CBCT to the assessment of variations in the vocal tract of healthy individuals before and after vocal exercises. Voice recordings and CBCT images before and after vocal exercises performed by 3 speech-language pathologists without vocal complaints were collected and compared. Each participant performed 1 type of exercise, i.e., Finnish resonance tube technique, prolonged consonant "b" technique, or chewing technique. The analysis consisted of an acoustic analysis and tomographic imaging. Modifications of the vocal tract settings following vocal exercises were properly detected by CBCT, and changes in the acoustic parameters were, for the most part, compatible with the variations detected in image measurements. CBCT was shown to be capable of properly assessing the changes in vocal tract settings promoted by vocal exercises. © 2017 S. Karger AG, Basel.

  6. Multi-Purpose, Application-Centric, Scalable I/O Proxy Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M. C.

    2015-06-15

    MACSio is a Multi-purpose, Application-Centric, Scalable I/O proxy application. It is designed to support a number of goals with respect to parallel I/O performance testing and benchmarking including the ability to test and compare various I/O libraries and I/O paradigms, to predict scalable performance of real applications and to help identify where improvements in I/O performance can be made within the HPC I/O software stack.

  7. 77 FR 38288 - Ocean Transportation Intermediary License; Applicants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-27

    ...). Application Type: Name Change. Global Atlantic Logistics LLC (OFF), 1901 SW 31st Avenue, Pembroke Park, FL....gov . Bellcom, Inc. (NVO), 503 Commerce Park Drive, Suite E, Marietta, GA 30060. Officers: Cornelius U... License. Benchmark Worldwide Logistics, Inc. dba Star Ocean Lines (NVO & OFF), 24900 South Route 53...

  8. Endurance exercise modulates levodopa induced growth hormone release in patients with Parkinson's disease.

    PubMed

    Müller, Thomas; Welnic, Jacub; Woitalla, Dirk; Muhlack, Siegfried

    2007-07-11

    Acute levodopa (LD) application and exercise both release human growth hormone (GH). An earlier trial showed that the combined stimulus of exercise and LD administration is the best provocative test for GH response in healthy participants. Our objective was to demonstrate this combined effect of LD application and exercise on GH response and to investigate the impact on LD metabolism in 20 previously treated patients with Parkinson's disease (PD). We measured GH and LD plasma concentrations following administration of soluble 200 mg LD/50 mg benserazide during endurance exercise and during rest on two separate consecutive days. GH concentrations significantly increased on both days, but GH release was significantly delayed during rest. LD metabolism was not altered by exercise in a clinically relevant manner. Exercise induced a significantly faster LD-stimulated GH release compared with the rest condition. We did not find the hypothesized increase of LD-induced GH release by endurance exercise. We assume that only a limited amount of GH is available for release in the anterior pituitary following an acute 200 mg LD administration. GH disposal also depends on growth hormone releasing hormone (GHRH), which is secreted into hypothalamic portal capillaries. During the exercise condition, the resulting higher blood pressure supports blood flow and thus GHRH transport toward the GH-producing cells in the pituitary. This might additionally have caused the significantly faster GH release during exercise.

  9. A proposed benchmark problem for cargo nuclear threat monitoring

    NASA Astrophysics Data System (ADS)

    Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring, with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design the present benchmark specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions in a system containing three thicknesses of three different shielding materials. A point source is placed at the center of three nested materials: lead, aluminum, and plywood. The first two are right circular cylinders, while the third is a cube. The entire system rests on a lead base sufficiently thick to reduce undesired scattering events. The configuration is arranged such that a gamma ray moving outward from the source first passes through the lead cylinder, then the aluminum cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
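For intuition about the geometry, the uncollided (narrow-beam) component of the flux through the nested layers follows exponential attenuation, I/I0 = exp(-Σ μ_i t_i). The attenuation coefficients below are placeholders for illustration, not evaluated photon cross sections for these materials or source energies:

```python
import math

def uncollided_fraction(layers):
    """Narrow-beam attenuation of the uncollided flux through successive
    shielding layers, each given as (mu, t): exp(-sum(mu_i * t_i))."""
    return math.exp(-sum(mu * t for mu, t in layers))

# Illustrative (mu [1/cm], thickness [cm]) tuples for lead, aluminum,
# and plywood; placeholder values only.
layers = [(1.2, 2.0), (0.2, 5.0), (0.05, 10.0)]
print(uncollided_fraction(layers))
```

A Monte Carlo code must of course also track the scattered (collided) component, which is precisely what makes variance reduction relevant for this benchmark.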

  10. Cross-Cultural and Psychometric Properties Assessment of the Exercise Self-Efficacy Scale in Individuals with Spinal Cord Injury.

    PubMed

    Pisconti, Fernando; Mahmoud Smaili Santos, Suhaila; Lopes, Josiane; Rosa Cardoso, Jefferson; Lopes Lavado, Edson

    2017-11-29

    The Exercise Self-Efficacy scale (ESES) is a reliable measure, in the English language, of exercise self-efficacy in individuals with spinal cord injury. The aim of this study was to culturally adapt and validate the Exercise Self-Efficacy scale in the Portuguese language. The Exercise Self-Efficacy scale was applied to 76 subjects at three-month intervals (three applications in total). Reliability was appraised using the intra-class correlation coefficient and Bland-Altman methods, and internal consistency was evaluated using Cronbach's alpha. The Exercise Self-Efficacy scale was correlated with the domains of the SF-36 Quality of Life Questionnaire and the Functional Independence Measure, tested using Spearman's rho coefficient. The Exercise Self-Efficacy scale-Brazil presented good internal consistency (alpha 1 = 0.856; alpha 2 = 0.855; alpha 3 = 0.822) and high test-retest reliability (intra-class correlation coefficient = 0.97). There was a strong correlation between the Exercise Self-Efficacy scale-Brazil and the SF-36 only in the functional capacity domain (rho = 0.708). There were no changes in Exercise Self-Efficacy scale-Brazil scores between the three applications (p = 0.796). The validation of the questionnaire permits assessors to use it reliably in Portuguese-speaking countries, since it is the first instrument measuring self-efficacy specifically during exercise in individuals with spinal cord injury. Furthermore, the questionnaire can be used to verify the effectiveness of interventions that use exercise as an outcome. The results of the Brazilian version of the Exercise Self-Efficacy scale support its use as a reliable and valid measurement of exercise self-efficacy for this population.
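The internal-consistency statistic reported above, Cronbach's alpha, is computed from the variances of the individual items and of the total score. A minimal sketch on toy data (not the study's data):

```python
def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of total scores).  `items` holds one list of respondent scores per
    questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # perfectly consistent items: 1.0
```

Values around 0.8 or above, as reported for the ESES-Brazil, are conventionally read as good internal consistency.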

  11. Non-coding RNAs and exercise: pathophysiological role and clinical application in the cardiovascular system.

    PubMed

    Gomes, Clarissa P C; de Gonzalo-Calvo, David; Toro, Rocio; Fernandes, Tiago; Theisen, Daniel; Wang, Da-Zhi; Devaux, Yvan

    2018-05-23

    There is overwhelming evidence that regular exercise training is protective against cardiovascular disease (CVD), the main cause of death worldwide. Despite the benefits of exercise, the underlying molecular mechanisms remain largely unknown. Non-coding RNAs (ncRNAs) have been recognized as a major regulatory network governing gene expression in several physiological processes and have emerged as pivotal modulators in a myriad of cardiovascular processes under physiological and pathological conditions. However, little is known about ncRNA expression and role in response to exercise. Revealing the molecular components and mechanisms of the link between exercise and health outcomes will catalyse discoveries of new biomarkers and therapeutic targets. Here we review the current understanding of the ncRNA role in exercise-induced adaptations, focusing on the cardiovascular system, and address their potential role in clinical applications for CVD. Finally, considerations and perspectives for future studies are proposed. © 2018 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.

  12. Enabling the High Level Synthesis of Data Analytics Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino

    Conventional High Level Synthesis (HLS) tools mainly target compute-intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and exhibit irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread-level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well-known SPARQL benchmark.

  13. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    PubMed

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

    Deep learning models (aka Deep Neural Networks) have revolutionized many fields, including computer vision, natural language processing, and speech recognition, and are being increasingly used in clinical healthcare applications. However, few works have benchmarked the performance of deep learning models against state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks, including mortality prediction, length-of-stay prediction, and ICD-9 code group prediction, using deep learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the publicly available Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data are used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Defining core elements and outstanding practice in Nutritional Science through collaborative benchmarking.

    PubMed

    Samman, Samir; McCarthur, Jennifer O; Peat, Mary

    2006-01-01

    Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science and inquired about definitions of "good" and "outstanding" practice were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirements, food production and processing, modern biomedical techniques that could be applied to understanding nutrition, and social and environmental issues related to Nutritional Science. Four of the eight institutions that agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.

  15. Simple and effective exercise design for assessing in vivo mitochondrial function in clinical applications using (31)P magnetic resonance spectroscopy.

    PubMed

    Sleigh, Alison; Lupson, Victoria; Thankamony, Ajay; Dunger, David B; Savage, David B; Carpenter, T Adrian; Kemp, Graham J

    2016-01-11

    The growing recognition of diseases associated with dysfunction of mitochondria poses an urgent need for simple measures of mitochondrial function. Assessment of the kinetics of replenishment of the phosphocreatine pool after exercise using (31)P magnetic resonance spectroscopy can provide an in vivo measure of mitochondrial function; however, the wider application of this technique appears limited by complex or expensive MR-compatible exercise equipment and protocols not easily tolerated by frail participants or those with reduced mental capacity. Here we describe a novel in-scanner exercise method which is patient-focused, inexpensive, remarkably simple and highly portable. The device exploits an MR-compatible high-density material (BaSO4) to form a weight which is attached directly to the ankle, and a one-minute dynamic knee extension protocol produced highly reproducible measurements of post-exercise PCr recovery kinetics in both healthy subjects and patients. As sophisticated exercise equipment is unnecessary for this measurement, our extremely simple design provides an effective and easy-to-implement apparatus that is readily translatable across sites. Its design, being tailored to the needs of the patient, makes it particularly well suited to clinical applications, and we argue the potential of this method for investigating in vivo mitochondrial function in new cohorts of growing clinical interest.
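Post-exercise PCr recovery of the kind measured here is conventionally modeled as a monoexponential return toward the resting level. Assuming the end level is known, the rate constant can be recovered with a simple log-linear fit, as in this sketch on synthetic data (parameter values and units are illustrative, not the study's):

```python
import math

def fit_recovery_rate(times, pcr, pcr_end):
    """Estimate the rate constant k of PCr(t) = pcr_end - dpcr*exp(-k*t)
    by linear regression of ln(pcr_end - PCr) on t (assumes pcr_end is
    known and all data points lie below it)."""
    ys = [math.log(pcr_end - p) for p in pcr]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return -slope  # k; the recovery time constant is tau = 1/k

# Synthetic recovery with tau = 40 s (k = 0.025 1/s), illustrative only.
k_true, pcr_end, dpcr = 0.025, 40.0, 15.0
times = [0, 10, 20, 30, 60, 120]
pcr = [pcr_end - dpcr * math.exp(-k_true * t) for t in times]
print(round(fit_recovery_rate(times, pcr, pcr_end), 6))  # 0.025
```

In practice a nonlinear least-squares fit on noisy data is more robust, but the time constant tau = 1/k extracted either way is the mitochondrial-function readout the protocol targets.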

  16. Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    The next generation of scalable network simulators employ virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations could be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on actual prototyped implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.

  17. Benchmark results for few-body hypernuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruffino, Fabrizio Ferrari; Lonardoni, Diego; Barnea, Nir

    2017-03-16

    Here, the Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked with three different ab-initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev–Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body ΛN component of the phenomenological Bodmer–Usmani potential, and a hyperon-nucleon interaction simulating the scattering phase shifts given by NSC97f. The range of applicability of the NSHH method is briefly discussed.

  18. RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm, previously proposed in literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.
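A genetic algorithm over join orders of the kind RCQ-GA embodies can be sketched with an order-preserving crossover and a swap mutation. The cost model below is a crude stand-in for illustration, not the paper's model, and the hyperparameters are arbitrary:

```python
import random

def cost(order, sizes):
    """Toy left-deep join cost: sum of intermediate result sizes, with a
    flat 1% join selectivity (stand-in for a real chain-query cost model)."""
    total, inter = 0, sizes[order[0]]
    for i in order[1:]:
        inter = inter * sizes[i] // 100
        total += inter
    return total

def ga_join_order(sizes, pop=30, gens=50, seed=1):
    """Evolve a join order with elitist selection, one-point order
    crossover, and swap mutation (illustrative hyperparameters)."""
    rng = random.Random(seed)
    n = len(sizes)
    popn = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda o: cost(o, sizes))
        elite = popn[: pop // 2]               # keep the cheaper half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)          # prefix of a, rest in b's order
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            if rng.random() < 0.2:             # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        popn = elite + children
    return min(popn, key=lambda o: cost(o, sizes))

sizes = [1000, 10, 500, 50]  # hypothetical triple-pattern result sizes
best = ga_join_order(sizes)
print(best, cost(best, sizes))
```

The order crossover keeps every child a valid permutation, which is the essential constraint when the genome encodes a join order.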

  19. The Alpha consensus meeting on cryopreservation key performance indicators and benchmarks: proceedings of an expert meeting.

    PubMed

    2012-08-01

    This proceedings report presents the outcomes from an international workshop designed to establish consensus on: definitions for key performance indicators (KPIs) for oocyte and embryo cryopreservation, using either slow freezing or vitrification; minimum performance level values for each KPI, representing basic competency; and aspirational benchmark values for each KPI, representing best practice goals. This report includes general presentations about current practice and factors for consideration in the development of KPIs. A total of 14 KPIs were recommended and benchmarks for each are presented. No recommendations were made regarding specific cryopreservation techniques or devices, or whether vitrification is 'better' than slow freezing, or vice versa, for any particular stage or application, as this was considered to be outside the scope of this workshop. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  20. 8 CFR 1212.4 - Applications for the exercise of discretion under section 212(d)(1) and 212(d)(3).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 8 Aliens and Nationality 1 2011-01-01 2011-01-01 false Applications for the exercise of discretion under section 212(d)(1) and 212(d)(3). 1212.4 Section 1212.4 Aliens and Nationality EXECUTIVE OFFICE FOR... discretion under section 212(d)(1) and 212(d)(3). (a) Applications under section 212(d)(3)(A)—(1) General...

  1. Core stability training: applications to sports conditioning programs.

    PubMed

    Willardson, Jeffrey M

    2007-08-01

    In recent years, fitness practitioners have increasingly recommended core stability exercises in sports conditioning programs. Greater core stability may benefit sports performance by providing a foundation for greater force production in the upper and lower extremities. Traditional resistance exercises have been modified to emphasize core stability: performing exercises on unstable rather than stable surfaces, while standing rather than seated, with free weights rather than machines, and unilaterally rather than bilaterally. Despite the popularity of core stability training, relatively little scientific research has been conducted to demonstrate its benefits for healthy athletes. Therefore, the purpose of this review was to critically examine core stability training and related issues to determine useful applications for sports conditioning programs. Based on the current literature, the prescription of core stability exercises should vary with the phase of training and the health status of the athlete. During preseason and in-season mesocycles, free weight exercises performed while standing on a stable surface are recommended for increases in core strength and power; performed in this manner, they are specific to the core stability requirements of sports-related skills due to moderate levels of instability and high levels of force production. Conversely, during postseason and off-season mesocycles, Swiss ball exercises involving isometric muscle actions, small loads, and long tension times are recommended for increases in core endurance. Furthermore, balance board and stability disc exercises, performed in conjunction with plyometric exercises, are recommended to improve proprioceptive and reactive capabilities, which may reduce the likelihood of lower extremity injuries.

  2. 12 CFR 559.13 - How may a savings association exercise its salvage power in connection with a service corporation...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 5 2011-01-01 2011-01-01 false How may a savings association exercise its... Regulations Applicable to All Savings Associations § 559.13 How may a savings association exercise its salvage... section, a savings association (“you”) may exercise your salvage power to make a contribution or a loan...

  3. 12 CFR 559.13 - How may a savings association exercise its salvage power in connection with a service corporation...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 6 2014-01-01 2012-01-01 true How may a savings association exercise its... Regulations Applicable to All Savings Associations § 559.13 How may a savings association exercise its salvage... section, a savings association (“you”) may exercise your salvage power to make a contribution or a loan...

  4. 12 CFR 559.13 - How may a savings association exercise its salvage power in connection with a service corporation...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 6 2012-01-01 2012-01-01 false How may a savings association exercise its... Regulations Applicable to All Savings Associations § 559.13 How may a savings association exercise its salvage... section, a savings association (“you”) may exercise your salvage power to make a contribution or a loan...

  5. 12 CFR 559.13 - How may a savings association exercise its salvage power in connection with a service corporation...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false How may a savings association exercise its... Regulations Applicable to All Savings Associations § 559.13 How may a savings association exercise its salvage... section, a savings association (“you”) may exercise your salvage power to make a contribution or a loan...

  6. Turnkey CAD/CAM selection and evaluation

    NASA Technical Reports Server (NTRS)

    Moody, T.

    1980-01-01

    The methodology to be followed in evaluating and selecting a computer system for manufacturing applications is discussed. Mainframes and minicomputers are considered. Benchmark evaluations, demonstrations, and contract negotiations are discussed.

  7. Exercise Desert Rock, Staff Memorandums. Army, Camp Desert Rock, Nevada.

    DTIC Science & Technology

    1957-01-01

    AD-AGAG 257, Exercise Desert Rock, Las Vegas NV. Exercise Desert Rock, Staff Memorandums, Army, Camp Desert Rock (unclassified). ... Exercise Safety Program. 1. Purpose: To establish an effective safety program to reduce, and keep to a minimum, accidental manpower and monetary losses. ... agencies will be followed. Supervisory personnel will become familiar with those that are applicable to their operations. The Exercise Safety ...

  8. Effect of 3 Different Applications of Kinesio Taping Denko® on Electromyographic Activity: Inhibition or Facilitation of the Quadriceps of Males During Squat Exercise

    PubMed Central

    Serrão, Júlio C.; Mezêncio, Bruno; Claudino, João G.; Soncin, Rafael; Miyashiro, Pedro L. Sampaio; Sousa, Eric P.; Borges, Eduardo; Zanetti, Vinícius; Phillip, Igor; Mochizuki, Luiz; Amadio, Alberto C.

    2016-01-01

    Kinesio taping is a technique that uses the application of an elastic adhesive tape. It has become a widely used rehabilitation modality for the prevention and treatment of musculoskeletal disorders. The objective of this study was to verify the effect of the application of Kinesio Taping Denko® in three conditions (facilitation, inhibition, and placebo) on the electromyographic activity of the quadriceps and hamstring muscles, on facilitating or inhibiting muscle function, and on the perceived exertion during the barbell back squat exercise in healthy male subjects. Methods: This was a randomized, single-blinded, controlled study in which 18 males (28.0 ± 6.7 years old; 85.8 ± 8.2 kg mass; 1.80 ± 0.07 m height; 0.97 ± 0.04 m lower limb length) performed the barbell back squat exercise under different conditions of Kinesio Taping Denko® application: facilitation, inhibition, and placebo. Prior to these conditions, all individuals were assessed performing the exercise without Kinesio Taping Denko®. The OMNI scale was used after each set for perceived exertion evaluation. No significant differences (p < 0.05) in the electromyographic activity of the biceps femoris, vastus lateralis, and vastus medialis or in the OMNI scale scores were recorded under any condition. The results show that Kinesio Taping Denko® may not alter the magnitude of the electromyographic activity of the vastus lateralis, vastus medialis, and biceps femoris during the squat exercise. Furthermore, the perceived exertion was not affected by the Kinesio Taping Denko® application. Key points: Researchers involved in collecting data in this study have no financial or personal interest in the outcome of results or the sponsor. The perceived exertion was not affected by the kinesiology taping application. Kinesiology taping application may not alter the magnitude of EMG activity of the vastus lateralis, vastus medialis, and biceps femoris during the barbell back squat exercise. Electromyographic activity of kinesiology taping application on other muscle groups and in other cohorts, such as healthy elderly subjects and patients under a rehabilitation program, requires further investigation. PMID:27803618

  9. Biomechanical Modeling Analysis of Loads Configuration for Squat Exercise

    NASA Technical Reports Server (NTRS)

    Gallo, Christopher A.; Thompson, William K.; Lewandowski, Beth E.; Jagodnik, Kathleen; De Witt, John K.

    2017-01-01

    INTRODUCTION: Long duration space travel will expose astronauts to extended periods of reduced gravity. Since gravity is not present to assist loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass, and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft for travel to the Moon or to Mars is limited, and therefore compact resistance exercise device prototypes are being developed. The Advanced Resistive Exercise Device (ARED) currently on the ISS is being used as a benchmark for the functional performance of these new devices. Biomechanical data collection and computational modeling aid the device design process by quantifying the joint torques and musculoskeletal forces that occur during exercises performed on the prototype devices. METHODS: The computational models currently under development utilize the OpenSim [1] software platform, consisting of open source code for musculoskeletal modeling, using biomechanical input data from test subjects for estimation of muscle and joint loads. The OpenSim Full Body Model [2] is used for all analyses. The model incorporates simplified wrap surfaces, a new knee model, and updated lower body muscle parameters derived from cadaver measurements and magnetic resonance imaging of young adults. The upper body uses torque actuators at the lumbar and extremity joints. The test subjects who volunteer for this study are instrumented with reflective markers for motion capture data collection while performing squat exercises on the Hybrid Ultimate Lifting Kit (HULK) prototype device (ZIN Technologies, Middleburg Heights, OH). Ground reaction force data are collected with force plates under the feet, and device loading is recorded through load cells internal to the HULK. Test variables include the applied device load and the interface between the test subject and the device (dual cable long bar or single cable T-bar). Data are also obtained using free weights with identical loading for comparison to the resistively loaded exercise device trials. The data drive the OpenSim biomechanical model, which has been scaled to match the anthropometrics of the test subject, to calculate the body loads. RESULTS: Lower body kinematics, joint moments, joint forces, and muscle forces are obtained from the OpenSim biomechanical analysis of the squat exercises under different loading conditions. Preliminary results from the model for these loading conditions will be presented, as will hypotheses developed for follow-on work.
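The static intuition behind such joint-load estimates can be sketched in a few lines. This is an illustrative toy model only, not the study's OpenSim pipeline: all loads, moment arms, and the even per-leg split below are hypothetical assumptions, and a real analysis uses marker-driven kinematics and muscle-driven dynamics.

```python
# Toy static inverse-dynamics sketch: during a slow squat, the sagittal-plane
# torque at a joint is roughly the vertical ground reaction force (GRF) times
# the horizontal moment arm from the force line to the joint center.
# All numbers are hypothetical and for illustration only.

def static_joint_torque(grf_newtons, moment_arm_m):
    """Sagittal-plane joint torque (N·m) for a vertical GRF acting at a
    horizontal distance moment_arm_m from the joint center."""
    return grf_newtons * moment_arm_m

# Hypothetical mid-squat snapshot: 80 kg subject plus 60 kg device load,
# shared evenly between both legs.
per_leg_grf = (80 + 60) * 9.81 / 2                     # N per leg
knee_torque = static_joint_torque(per_leg_grf, 0.12)   # assumed 12 cm arm
hip_torque = static_joint_torque(per_leg_grf, 0.20)    # assumed 20 cm arm
print(round(knee_torque, 1), round(hip_torque, 1))
```

A full musculoskeletal model replaces these fixed moment arms with joint angles and muscle paths computed from motion capture and force plate data, which is exactly what the abstract's pipeline feeds into OpenSim.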

  10. My Job Application File. Third Edition.

    ERIC Educational Resources Information Center

    Kahn, Charles; And Others

    This guide contains ten exercises designed to aid students in completing job applications. Exercises included are (1) My Personal History, (2) My Educational Record, (3) Printing Neatly Helps, (4) Key Words and Abbreviations, (5) My Health Record, (6) Papers I Will Need, (7) Paid Work Experience, (8) Unpaid Work Experience, (9) My References, and…

  11. A perspective on the future role of brain pet imaging in exercise science.

    PubMed

    Boecker, Henning; Drzezga, Alexander

    2016-05-01

    Positron Emission Tomography (PET) bears a unique potential for examining the effects of physical exercise (acute or chronic) within the central nervous system in vivo, including cerebral metabolism, neuroreceptor occupancy, and neurotransmission. However, application of Neuro-PET in human exercise science is as yet surprisingly sparse. To date the field has been dominated by non-invasive neuroelectrical techniques (EEG, MEG) and structural/functional magnetic resonance imaging (sMRI/fMRI). Despite PET having certain inherent disadvantages, in particular radiation exposure and high costs limiting applicability at large scale, certain research questions in human exercise science can exclusively be addressed with PET: The "metabolic trapping" properties of (18)F-FDG PET as the most commonly used PET-tracer allow examining the neuronal mechanisms underlying various forms of acute exercise in a rather unconstrained manner, i.e. under realistic training scenarios outside the scanner environment. Beyond acute effects, (18)F-FDG PET measurements under resting conditions have a strong prospective for unraveling the influence of regular physical activity on neuronal integrity and potentially neuroprotective mechanisms in vivo, which is of special interest for aging and dementia research. Quantification of cerebral glucose metabolism may allow determining the metabolic effects of exercise interventions in the entire human brain and relating the regional cerebral rate of glucose metabolism (rCMRglc) with behavioral, neuropsychological, and physiological measures. Apart from FDG-PET, particularly interesting applications comprise PET ligand studies that focus on dopaminergic and opioidergic neurotransmission, both key transmitter systems for exercise-related psychophysiological effects, including mood changes, reward processing, antinociception, and in its most extreme form 'exercise dependence'. 
    PET ligand displacement approaches even allow quantifying specific endogenous neurotransmitter release under acute exercise interventions, for which modern PET/MR hybrid technology will be an additional asset. Experimental studies exploiting the unprecedented multimodal imaging capacities of PET/MR in human exercise science are as yet pending.

  12. [The technical peculiarities of the application of therapeutic physical exercises for the rehabilitation of the patients presenting with post-infarction cardiosclerosis].

    PubMed

    Gusarova, S A; Stiazhkina, E M; Gurkina, M V

    2014-01-01

    The article reports the results of clinical and physiological studies of 93 patients presenting with post-infarction cardiosclerosis and signs of cerebrovascular disease. The experience with the combined rehabilitative treatment, including therapeutic physical exercises, is based on the observation of two groups of patients. Those in the study group performed special physical exercises designed to act on brain hemodynamics. The patients of the control group used the traditional therapeutic exercises usually prescribed to those suffering from coronary artery disease. It was shown that the treatment including the therapeutic physical exercises offered to the study group has the advantage of a significant positive impact on the hemodynamics and functional activity of the brain; moreover, it reduces the severity of cardiovascular and cerebral symptoms and thereby contributes to the complete rehabilitation of patients with post-infarction cardiosclerosis.

  13. Dynamic Rupture Benchmarking of the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice; Pelties, Christian

    2013-04-01

    We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping, as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that the mentioned features also hold for more advanced setups such as a branching fault system, heterogeneous background stresses, and bimaterial faults. The advanced geometrical flexibility combined with enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009; Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068; Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, J. Geophys. Res. Solid Earth, vol. 117, B02309, 2012

  14. Dynamic Rupture Benchmarking of the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Pelties, C.; Gabriel, A.

    2012-12-01

    We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping, as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that the mentioned features also hold for more advanced setups such as a branching fault system, heterogeneous background stresses, and bimaterial faults. The advanced geometrical flexibility combined with enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009; Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068; Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, J. Geophys. Res. Solid Earth, vol. 117, B02309, 2012

  15. A New Resource for College Distance Education Astronomy Laboratory Exercises

    ERIC Educational Resources Information Center

    Vogt, Nicole P.; Cook, Stephen P.; Muise, Amy Smith

    2013-01-01

    This article introduces a set of distance education astronomy laboratory exercises for use by college students and instructors and discusses first usage results. This General Astronomy Education Source exercise set contains eight two-week projects designed to guide students through both core content and mathematical applications of general…

  16. 75 FR 68249 - Policy Clarifying Definition of “Actively Engaged” for Purposes of Inspector Authorization

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-05

    ... exercising their mechanic certificate when employed full-time in aircraft maintenance to be actively engaged...) mechanic exercising the privileges of the mechanic certificate). An applicant can demonstrate that he or... applying. Note: Actively engaged means exercising the privileges of an airframe and powerplant mechanic...

  17. Effects of Directional Exercise on Lingual Strength

    ERIC Educational Resources Information Center

    Clark, Heather M.; O'Brien, Katy; Calleja, Aimee; Corrie, Sarah Newcomb

    2009-01-01

    Purpose: To examine the application of known muscle training principles to tongue strengthening exercises and to answer the following research questions: (a) Did lingual strength increase following 9 weeks of training? (b) Did training conducted using an exercise moving the tongue in one direction result in strength changes for tongue movements in…

  18. Exercise Prescribing: Computer Application in Older Adults

    ERIC Educational Resources Information Center

    Kressig, Reto W.; Echt, Katharina V.

    2002-01-01

    Purpose: The purpose of this study was to determine if older adults are capable and willing to interact with a computerized exercise promotion interface and to determine to what extent they accept computer-generated exercise recommendations. Design and Methods: Time and requests for assistance were recorded while 34 college-educated volunteers,…

  19. Health supply chain management.

    PubMed

    Zimmerman, Rolf; Gallagher, Pat

    2010-01-01

    This chapter gives an educational overview of: * The actual application of supply chain practice and disciplines required for service delivery improvement within the current health environment. * A rationale for the application of Supply Chain Management (SCM) approaches to the Health sector. * The tools and methods available for supply chain analysis and benchmarking. * Key supply chain success factors.

  20. [Experimental methodology for evaluating the characteristics of avionics graphics platforms]

    NASA Astrophysics Data System (ADS)

    Legault, Vincent

    Within a context where the aviation industry is intensifying the development of new visually appealing features and where time-to-market must be as short as possible, rapid graphics processing benchmarking in a certified avionics environment becomes an important issue. With this work we intend to demonstrate that it is possible to deploy a high-performance graphics application on an avionics platform that uses certified graphical COTS components. Moreover, we would like to bring to the avionics community a methodology that allows developers to identify the elements needed for graphics system optimisation, and to provide them with tools that can measure the complexity of this type of application and the amount of resources required to properly scale a graphics system according to their needs. As far as we know, no graphics performance profiling tool dedicated to critical embedded architectures has been proposed. We thus had the idea of implementing a specialized benchmarking tool as an appropriate and effective solution to this problem. Our solution resides in the extraction of the key graphics specifications from an inherited application, to be used afterwards in a 3D image generation application.
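The core of any such graphics benchmarking tool is frame-time profiling: time each rendered frame, then summarize the distribution (mean and worst case drive whether a platform meets its refresh budget). A minimal sketch of that idea, with a CPU-bound stand-in for the draw calls since no certified graphics driver is assumed here:

```python
# Minimal frame-time profiling sketch. render_frame() is a hypothetical
# stand-in workload, not a real avionics rendering call; in practice the
# timed region would wrap the platform's draw-and-swap sequence.
import time
import statistics

def render_frame(n=20000):
    # Stand-in for a draw call: burn a fixed amount of CPU work.
    s = 0.0
    for i in range(n):
        s += i * 0.5
    return s

frame_times_ms = []
for _ in range(50):
    t0 = time.perf_counter()
    render_frame()
    frame_times_ms.append((time.perf_counter() - t0) * 1000.0)

print(f"mean {statistics.mean(frame_times_ms):.2f} ms, "
      f"worst {max(frame_times_ms):.2f} ms")
```

For a hard real-time display the worst-case frame time, not the mean, is the figure that must fit under the refresh deadline.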

  1. Methods for Derivation of Inhalation Reference Concentrations and Application of Inhalation Dosimetry

    EPA Pesticide Factsheets

    EPA's methodology for estimation of inhalation reference concentrations (RfCs) as benchmark estimates of the quantitative dose-response assessment of chronic noncancer toxicity for individual inhaled chemicals.

  2. Specific entrustable professional activities for undergraduate medical internships: a method compatible with the academic curriculum.

    PubMed

    Hamui-Sutton, Alicia; Monterrosas-Rojas, Ana María; Ortiz-Montalvo, Armando; Flores-Morones, Felipe; Torruco-García, Uri; Navarrete-Martínez, Andrea; Arrioja-Guerrero, Araceli

    2017-08-25

    Competency-based education has been considered the most important pedagogical trend in Medicine in the last two decades. In clinical contexts, competencies are implemented through Entrustable Professional Activities (EPAs), which are observable and measurable. The aim of this paper is to describe the methodology used in the design of educational tools to assess students' competencies in clinical practice during their undergraduate internship (UI). We present the construction of specific APROCs (Actividades Profesionales Confiables) in Surgery (S), Gynecology and Obstetrics (GO), and Family Medicine (FM) rotations, with three levels of performance. The study used an exploratory mixed-methods design: a qualitative phase followed by a quantitative validation exercise. In the first stage, data were obtained from three rotations (FM, GO, and S) through focus groups about the real and expected activities of medical interns. Triangulation with other sources was performed to construct benchmarks. In the second stage, narrative descriptions of the three levels were validated, using the Delphi technique, by professors who teach the different subjects. The results may be described in both curricular and methodological terms. From the curricular point of view, APROCs were identified in three UI rotations within clinical contexts in Mexico City, and benchmarks were developed by level and validated by expert consensus. In methodological terms, this research contributed a six-step strategy for building APROCs using mixed methods. Developing benchmarks provides a regular and standardized language that helps to evaluate students' performance and define educational strategies efficiently and accurately. The university academic program was aligned with APROCs in clinical contexts to ensure the acquisition of competencies by students.

  3. Do physiological measures predict selected CrossFit(®) benchmark performance?

    PubMed

    Butcher, Scotty J; Neyedly, Tyler J; Horvey, Karla J; Benko, Chad R

    2015-01-01

    CrossFit(®) is a new but extremely popular method of exercise training and competition that involves constantly varied functional movements performed at high intensity. Despite the popularity of this training method, the physiological determinants of CrossFit performance have not yet been reported. The purpose of this study was to determine whether physiological and/or muscle strength measures could predict performance on three common CrossFit "Workouts of the Day" (WODs). Fourteen CrossFit Open or Regional athletes completed, on separate days, the WODs "Grace" (30 clean and jerks for time), "Fran" (three rounds of thrusters and pull-ups for 21, 15, and nine repetitions), and "Cindy" (20 minutes of rounds of five pull-ups, ten push-ups, and 15 bodyweight squats), as well as the "CrossFit Total" (1 repetition max [1RM] back squat, overhead press, and deadlift), maximal oxygen consumption (VO2max), and Wingate anaerobic power/capacity testing. Performance of Grace and Fran was related to whole-body strength (CrossFit Total) (r=-0.88 and -0.65, respectively) and anaerobic threshold (r=-0.61 and -0.53, respectively); however, whole-body strength was the only variable to survive the prediction regression for both of these WODs (R(2)=0.77 and 0.42, respectively). There were no significant associations or predictors for Cindy. CrossFit benchmark WOD performance cannot be predicted by VO2max, Wingate power/capacity, or either respiratory compensation or anaerobic thresholds. Of the data measured, only whole-body strength can partially explain performance on Grace and Fran, although anaerobic threshold also exhibited an association with performance. Along with their typical training, CrossFit athletes should likely ensure an adequate level of strength and aerobic endurance to optimize performance on at least some benchmark WODs.
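The reported r and R(2) values come from ordinary Pearson correlation and least-squares regression. A minimal sketch with entirely hypothetical athlete data (not the study's; the negative r simply mirrors the finding that stronger athletes tend to finish timed WODs faster):

```python
# Pearson correlation between a strength measure and a timed-WOD result,
# computed from first principles. The data below are hypothetical.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical athletes: CrossFit Total (kg) vs. "Grace" time (s).
total = [380, 420, 450, 500, 540, 580]
grace = [230, 220, 195, 180, 145, 140]

r = pearson_r(total, grace)
print(f"r = {r:.2f}, R^2 = {r * r:.2f}")
```

Squaring r gives the coefficient of determination, the share of variance in WOD time attributable to the strength measure, which is how an R(2) of 0.77 for Grace should be read.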

  4. Planning strategies for development of effective exercise and nutrition countermeasures for long-duration space flight

    NASA Technical Reports Server (NTRS)

    Convertino, Victor A.

    2002-01-01

    Exercise and nutrition represent primary countermeasures used during space flight to maintain or restore maximal aerobic capacity, musculoskeletal structure, and orthostatic function. However, no single exercise, dietary regimen, or combination of prescriptions has proven entirely effective in maintaining or restoring cardiovascular and musculoskeletal functions to preflight levels after prolonged space flight. As human space flight exposures increase in duration, identification, assessment, and development of various effective exercise- and nutrition-based protective procedures will become paramount. The application of adequate dietary intake in combination with effective exercise prescription will be based on identification of basic physiologic stimuli that maintain normal function in terrestrial gravity, and understanding how specific combinations of exercise characteristics (e.g., duration, frequency, intensity, and mode) can be combined with minimal nutritional requirements that mimic the stimuli normally produced by living in Earth's gravity environment. This can be accomplished only with greater emphasis of research on ground-based experiments targeted at understanding the interactions between caloric intake and expenditure during space flight. Future strategies for application of nutrition and exercise countermeasures for long-duration space missions must be directed to minimizing crew time and the impact on life-support resources.

  5. Effectiveness of Interval Exercise Training in Patients with COPD

    PubMed Central

    Kortianou, Eleni A.; Nasis, Ioannis G.; Spetsioti, Stavroula T.; Daskalakis, Andreas M.; Vogiatzis, Ioannis

    2010-01-01

    Physical training is beneficial and should be included in the comprehensive management of all patients with COPD independently of disease severity. Different rehabilitative strategies and training modalities have been proposed to optimize exercise tolerance. Interval exercise training has been used as an effective alternative modality to continuous exercise in patients with moderate and severe COPD. Although in healthy elderly individuals and patients with chronic heart failure there is evidence that this training modality is superior to continuous exercise in terms of physiological training effects, in patients with COPD there is no such evidence. Nevertheless, in patients with COPD the application of interval training has been shown to be as effective as continuous exercise, as it induces equivalent physiological training effects but with fewer symptoms of dyspnea and leg discomfort during training. The main purpose of this review is to summarize previous studies of the effectiveness of interval training in COPD and also to provide arguments in support of the application of interval training to overcome the respiratory and peripheral muscle factors limiting exercise capacity. To this end, we make recommendations on how best to implement interval training in the COPD population in the rehabilitation setting so as to maximize training effects. PMID:20957074

  6. Planning strategies for development of effective exercise and nutrition countermeasures for long-duration space flight.

    PubMed

    Convertino, Victor A

    2002-10-01

    Exercise and nutrition represent primary countermeasures used during space flight to maintain or restore maximal aerobic capacity, musculoskeletal structure, and orthostatic function. However, no single exercise, dietary regimen, or combination of prescriptions has proven entirely effective in maintaining or restoring cardiovascular and musculoskeletal functions to preflight levels after prolonged space flight. As human space flight exposures increase in duration, identification, assessment, and development of various effective exercise- and nutrition-based protective procedures will become paramount. The application of adequate dietary intake in combination with effective exercise prescription will be based on identification of basic physiologic stimuli that maintain normal function in terrestrial gravity, and understanding how specific combinations of exercise characteristics (e.g., duration, frequency, intensity, and mode) can be combined with minimal nutritional requirements that mimic the stimuli normally produced by living in Earth's gravity environment. This can be accomplished only with greater emphasis of research on ground-based experiments targeted at understanding the interactions between caloric intake and expenditure during space flight. Future strategies for application of nutrition and exercise countermeasures for long-duration space missions must be directed to minimizing crew time and the impact on life-support resources.

  7. Breaking sarcomeres by in vitro exercise

    PubMed Central

    Orfanos, Zacharias; Gödderz, Markus P. O.; Soroka, Ekaterina; Gödderz, Tobias; Rumyantseva, Anastasia; van der Ven, Peter F. M.; Hawke, Thomas J.; Fürst, Dieter O.

    2016-01-01

    Eccentric exercise leads to focal disruptions in the myofibrils, referred to as “lesions”. These structures are thought to contribute to the post-exercise muscle weakness, and to represent areas of mechanical damage and/or remodelling. Lesions have been investigated in human biopsies and animal samples after exercise. However, this approach does not examine the mechanisms behind lesion formation, or their behaviour during contraction. To circumvent this, we used electrical pulse stimulation (EPS) to simulate exercise in C2C12 myotubes, combined with live microscopy. EPS application led to the formation of sarcomeric lesions in the myotubes, resembling those seen in exercised mice, increasing in number with the time of application or stimulation intensity. Furthermore, transfection with an EGFP-tagged version of the lesion and Z-disc marker filamin-C allowed us to observe the formation of lesions using live cell imaging. Finally, using the same technique we studied the behaviour of these structures during contraction, and observed them to be passively stretching. This passive behaviour supports the hypothesis that lesions contribute to the post-exercise muscle weakness, protecting against further damage. We conclude that EPS can be reliably used as a model for the induction and study of sarcomeric lesions in myotubes in vitro. PMID:26804343

  8. Exercise in Young Adulthood with Simultaneous and Future Changes in Fruit and Vegetable Intake.

    PubMed

    Jayawardene, Wasantha P; Torabi, Mohammad R; Lohrmann, David K

    2016-01-01

    Regarding weight management, changes in exercise behavior can also influence nutrition behavior by application of self-regulatory psychological resources across behaviors (transfer effect). This study aimed to determine: (1) if changes in exercise frequency in young adulthood predict simultaneous changes in fruit/vegetable intake (transfer as co-occurrence); and (2) if exercise frequency affects future fruit/vegetable intake (transfer as carry-over). 6244 respondents of the National Longitudinal Survey of Youth 1997 were followed at ages 18-22 (Time-1), 23-27 (Time-2), and 27-31 (Time-3). Repeated measures analysis of variance and hierarchical multiple regression determined if the change in exercise frequency between Time-1 and Time-2 was associated with simultaneous and sequential changes in fruit/vegetable intake frequency, controlling for sex, race/ethnicity, education, income, body mass index, and baseline fruit/vegetable intake. Only 9% continued exercising for 30 minutes more than 5 days/week, while 15% transitioned to adequate exercise and another 15% transitioned to inadequate exercise; for both fruits and vegetables, intake of once per day or more increased with age. Males were more likely to exercise adequately and females to consume fruits/vegetables adequately. Exercise frequency transition was linearly associated with concurrent fruit/vegetable intake during Time-1 and Time-2. The highest increase in mean fruit/vegetable intake occurred for participants who transitioned from inadequate to adequate exercise. A significant Time-2 exercise frequency effect on Time-3 fruit/vegetable intake emerged, after accounting for baseline intake. Increase in Time-2 exercise by one day/week resulted in increased Time-3 fruit and vegetable intakes by 0.17 and 0.13 times/week, respectively. Transfer effects, although usually discussed in interventions, may also be applicable to voluntary behavior change processes. 
Newly engaging in and continuing exercise behavior over time may establish exercise habits that facilitate improved fruit/vegetable consumption. Interventions that facilitate transferring resources across behaviors likely will enhance this effect.

  9. Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation

    NASA Astrophysics Data System (ADS)

    MacNish, Cara

    2007-12-01

    Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
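    The idea of a recursively generated, self-similar test landscape can be illustrated with a small sketch. This is my own 1-D midpoint-displacement construction, offered only as an illustration of the concept; it is not the paper's actual fractal suite or its generator:

```python
import random

def fractal_landscape(levels, roughness=0.5, seed=0):
    """1-D self-similar test landscape via midpoint displacement:
    each refinement halves every interval and perturbs the new midpoints
    with displacements scaled down geometrically, so the landscape has
    structure at every scale."""
    rng = random.Random(seed)
    ys = [0.0, 0.0]   # endpoints of the domain
    amp = 1.0         # displacement amplitude, shrinks each level
    for _ in range(levels):
        refined = []
        for a, b in zip(ys, ys[1:]):
            refined.append(a)
            refined.append((a + b) / 2 + rng.uniform(-amp, amp))
        refined.append(ys[-1])
        ys = refined
        amp *= roughness
    return ys
```

    Because each level adds detail at half the previous scale, no single sampling resolution captures the whole structure, which is the property such suites use to avoid favouring one style of search over another.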

  10. Requirements for benchmarking personal image retrieval systems

    NASA Astrophysics Data System (ADS)

    Bouguet, Jean-Yves; Dulong, Carole; Kozintsev, Igor; Wu, Yi

    2006-01-01

    It is now common to have accumulated tens of thousands of personal pictures. Efficient access to that many pictures can only be done with a robust image retrieval system. This application is of high interest to Intel processor architects. It is highly compute intensive, and could motivate end users to upgrade their personal computers to the next generations of processors. A key question is how to assess the robustness of a personal image retrieval system. Personal image databases are very different from the digital libraries that have been used by many content-based image retrieval systems [1]. For example, a personal image database has a lot of pictures of people, but a small set of different people: typically family, relatives, and friends. Pictures are taken in a limited set of places like home, work, school, and vacation destinations. The most frequent queries are searches for people and for places. These attributes, and many others, affect how a personal image retrieval system should be benchmarked, and benchmarks need to be different from existing ones based on art images or medical images, for example. The attributes of the data set do not change the list of components needed for benchmarking such systems, as specified in [2]: data sets, query tasks, ground truth, evaluation measures, and benchmarking events. This paper proposes a way to build these components to be representative of personal image databases and of the corresponding usage models.

  11. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing because of its high potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  12. Protocol for a national blood transfusion data warehouse from donor to recipient

    PubMed Central

    van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W

    2016-01-01

    Introduction Blood transfusion has health-related, economic and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion recipients are linked. This paper describes the design of the data warehouse, challenges and illustrative applications. Study design and methods Quantitative data on blood donors (eg, age, blood group, antibodies) and products (type of product, processing, storage time) are obtained from the national blood bank. These are linked to data on the transfusion recipients (eg, transfusions administered, patient diagnosis, surgical procedures, laboratory parameters), which are extracted from hospital electronic health records. Applications Expected scientific contributions are illustrated for 4 applications: determine risk factors, predict blood use, benchmark blood use and optimise process efficiency. For each application, examples of research questions are given and analyses planned. Conclusions The DTD project aims to build a national, continuously updated transfusion data warehouse. These data have a wide range of applications, on the donor/production side, recipient studies on blood usage and benchmarking and donor–recipient studies, which ultimately can contribute to the efficiency and safety of blood transfusion. PMID:27491665
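    The donor-to-product-to-recipient linkage at the heart of such a warehouse can be illustrated with a toy relational sketch. The table and column names below are hypothetical stand-ins for illustration only, not the actual DTD schema:

```python
import sqlite3

# Hypothetical minimal schema: one donor gives one product,
# which is transfused to one patient.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE donors(donor_id INTEGER PRIMARY KEY, blood_group TEXT);
CREATE TABLE products(product_id INTEGER PRIMARY KEY, donor_id INTEGER,
                      product_type TEXT);
CREATE TABLE transfusions(product_id INTEGER, patient_id INTEGER,
                          diagnosis TEXT);
""")
con.execute("INSERT INTO donors VALUES (1, 'O-')")
con.execute("INSERT INTO products VALUES (10, 1, 'red cells')")
con.execute("INSERT INTO transfusions VALUES (10, 100, 'anaemia')")

# Follow one unit from donor through product to recipient.
row = con.execute("""
    SELECT d.blood_group, p.product_type, t.patient_id
    FROM donors d
    JOIN products p ON p.donor_id = d.donor_id
    JOIN transfusions t ON t.product_id = p.product_id
""").fetchone()
```

    The point of the linkage is exactly this kind of join: questions about recipients (blood use, outcomes) can be answered conditional on donor and product attributes.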

  13. Performance implications from sizing a VM on multi-core systems: A data analytics application's view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Horey, James L; Begoli, Edmon

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.
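    YCSB drives its core workloads with skewed request distributions (a zipfian key-popularity model). As a rough illustration of the access pattern being benchmarked, here is a minimal Zipf-like key sampler; the function names and the inverse-CDF approach are my own sketch, not YCSB's implementation:

```python
import random

def zipf_sampler(n_keys, theta=0.99, seed=0):
    """Return a sampler that draws 0-based key ranks with Zipf-like skew:
    rank r is drawn with probability proportional to 1 / (r + 1)**theta."""
    weights = [1.0 / ((r + 1) ** theta) for r in range(n_keys)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    rng = random.Random(seed)

    def sample():
        u = rng.random()
        # Linear scan of the CDF is fine for a sketch; use bisect for real use.
        for rank, c in enumerate(cdf):
            if u <= c:
                return rank
        return n_keys - 1

    return sample
```

    A skewed pattern like this keeps a few hot keys resident in cache while the long tail stresses memory, which is consistent with the paper's observation that data-set size relative to allocated memory dominates performance.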

  14. Flight program language requirements. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The activities and results of a study for the definition of flight program language requirements are described. A set of detailed requirements is presented for a language capable of supporting onboard application programming for the Marshall Space Flight Center's anticipated future activities in the decade of 1975-85. These requirements are based, in part, on the evaluation of existing flight programming language designs to determine the applicability of these designs to anticipated flight programming activities. The coding of benchmark problems in the selected programming languages is discussed. These benchmarks are in the form of program kernels selected from existing flight programs. This approach was taken to ensure that the results of the study would reflect state-of-the-art language capabilities, as well as to determine whether an existing language design should be selected for adaptation.

  15. Benchmarking Memory Performance with the Data Cube Operator

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabanov, Leonid V.

    2004-01-01

    Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.
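    The 2^d group-by views of a d-attribute data set can be sketched directly. This is a naive illustration that recomputes every view from the raw tuples; the paper's algorithm instead derives each view from its smallest parent, which this sketch does not attempt:

```python
from itertools import combinations
from collections import defaultdict

def cube_views(records, d):
    """Compute all 2^d group-by views of d-attribute records.
    Each record is (attrs, measure); the view for an attribute subset
    maps each projected key to the sum of measures."""
    views = {}
    for k in range(d + 1):
        for dims in combinations(range(d), k):
            agg = defaultdict(int)
            for attrs, measure in records:
                key = tuple(attrs[i] for i in dims)
                agg[key] += measure
            views[dims] = dict(agg)
    return views
```

    Even this toy version shows why the operator is memory-intensive: the number of views doubles with every added attribute, and each view touches the whole data set.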

  16. Always Wanted to Hack the Pentagon? DoD Says Bring It

    Science.gov Websites

    test and find vulnerabilities in the department's applications, websites and networks, he added. Resolve/Foal Eagle 2010, a joint U.S. and South Korean command-post exercise with computer-based simulations and field exercises. Cook said other networks

  17. 75 FR 80097 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-21

    ... LLC Establishing Strike Price Intervals of $1 and Increasing Position and Exercise Limits With Respect... ``Index'') to (i) establish strike price intervals of $1.00 and (ii) increase the position and exercise... price intervals of $1.00 and (ii) increase the position and exercise limits applicable thereto. The...

  18. 12 CFR 7.4009 - Applicability of state law to national bank operations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... operations. (a) Authority of national banks. A national bank may exercise all powers authorized to it under... laws that obstruct, impair, or condition a national bank's ability to fully exercise its powers to... they only incidentally affect the exercise of national bank powers: (i) Contracts; (ii) Torts; (iii...

  19. 76 FR 32327 - Regulatory Guidance on the Designation of Steerable Rear Axle Operators (Tillermen) as Drivers of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-06

    ..., to ``tillerman,'' a person exercising control over the movement of a steerable rear axle on a CMV... Riggers Association, asking about other circumstances under which a person exercising control over a CMV's... AND PENALTIES Section 383.3, ``Applicability.'' ``Question 34: Would a tillerman, a person exercising...

  20. Examining exercise dependence symptomatology from a self-determination perspective.

    PubMed

    Edmunds, Jemma; Ntoumanis, Nikos; Duda, Joan L

    2006-11-01

    Background Drawing on Self-Determination Theory (SDT; Deci & Ryan, 1985), this study examined whether individuals classified as 'nondependent-symptomatic' and 'nondependent-asymptomatic' for exercise dependence differed in terms of reported levels of exercise-related psychological need satisfaction, self-determined versus controlling motivation, and exercise behavior. In addition, we examined the type of motivational regulations predicting exercise behavior among these different groups, and their role as mediators between psychological need satisfaction and behavioral outcomes. Methods Participants (N = 339) completed measures of exercise-specific psychological need satisfaction, motivational regulations, exercise behavior and exercise dependence. Results Nondependent-symptomatic individuals reported higher levels of competence need satisfaction and all forms of motivational regulation, compared to nondependent-asymptomatic individuals. Introjected regulation approached significance as a positive predictor of strenuous exercise behavior for symptomatic individuals. Identified regulation was a positive predictor of strenuous exercise, and completely mediated the relationship between competence need satisfaction and strenuous exercise behavior, for asymptomatic individuals. Conclusions The findings reinforce the applicability of SDT to understanding the quantity and quality of engagement in exercise.

  1. Design and development of a mobile exercise application for home care aides and older adult medicaid home and community-based clients.

    PubMed

    Danilovich, Margaret K; Diaz, Laura; Saberbein, Gustavo; Healey, William E; Huber, Gail; Corcos, Daniel M

    2017-01-01

    We describe a community-engaged approach with Medicaid home and community-based services (HCBS), home care aide (HCA), client, and physical therapist stakeholders to develop a mobile application (app) exercise intervention through focus groups and interviews. Participants desired a short exercise program with modification capabilities, goal setting, and mechanisms to track progress. Concerns regarding participation were training needs and feasibility within usual care services. Technological preferences were for simple, easy-to-use, and engaging content. The app was piloted with HCA-client dyads (n = 5) to refine the intervention and evaluate content. Engaging stakeholders in intervention development provides valuable user-feedback on both desired exercise program contents and mobile technology preferences for HCBS recipients.

  2. Comparing probabilistic microbial risk assessments for drinking water against daily rather than annualised infection probability targets.

    PubMed

    Signor, R S; Ashbolt, N J

    2009-12-01

    Some national drinking water guidelines provide guidance on how to define 'safe' drinking water. Regarding microbial water quality, a common position is that the chance of an individual becoming infected by some reference waterborne pathogen (e.g. Cryptosporidium) present in the drinking water should be less than 10(-4) in any year. However, the instantaneous levels of risk to a water consumer vary over the course of a year, and waterborne disease outbreaks have been associated with shorter-duration periods of heightened risk. Performing probabilistic microbial risk assessments is becoming commonplace to capture the impacts of temporal variability on overall infection risk levels. A case is presented here for adoption of a shorter-duration reference period (i.e. daily) infection probability target over which to assess, report and benchmark such risks. A daily infection probability benchmark may provide added incentive and guidance for exercising control over short-term adverse risk fluctuation events and their causes. Management planning could involve outlining measures so that the daily target is met under a variety of pre-identified event scenarios. Other benefits of a daily target could include providing a platform for managers to design and assess management initiatives, as well as simplifying the technical components of the risk assessment process.
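    The relationship between an annualised target and an equivalent daily target can be sketched under the simplifying assumption of independent, identically distributed daily risks (my assumption for illustration, not necessarily the paper's model):

```python
import math

def daily_target(annual_target=1e-4, days=365):
    """Per-day infection probability that compounds to the annual target,
    assuming independent, identically distributed daily risks:
    1 - (1 - p_annual)^(1/days)."""
    return 1.0 - (1.0 - annual_target) ** (1.0 / days)

def annualised(daily_p, days=365):
    """Annual infection probability implied by a constant daily probability."""
    return 1.0 - (1.0 - daily_p) ** days
```

    For small probabilities the daily target is close to annual_target / 365 (about 2.7 x 10(-7) here); the point of a daily benchmark is that short heightened-risk events can blow through this level even when the annual average still looks acceptable.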

  3. Verification of a neutronic code for transient analysis in reactors with Hex-z geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Pintor, S.; Verdu, G.; Ginestar, D.

    Due to the geometry of the fuel bundles, simulating reactors such as VVER reactors requires methods that can deal with hexagonal prisms as basic elements of the spatial discretization. The main features of a code based on a high order finite element method for the spatial discretization of the neutron diffusion equation and an implicit difference method for the time discretization of this equation are presented, and the performance of the code is tested by solving the first exercise of the AER transient benchmark. The obtained results are compared with the reference results of the benchmark and with the results provided by the PARCS code. (authors)

  4. Blind Pose Prediction, Scoring, and Affinity Ranking of the CSAR 2014 Dataset.

    PubMed

    Martiny, Virginie Y; Martz, François; Selwa, Edithe; Iorga, Bogdan I

    2016-06-27

    The 2014 CSAR Benchmark Exercise was focused on three protein targets: coagulation factor Xa, spleen tyrosine kinase, and bacterial tRNA methyltransferase. Our protocol involved a preliminary analysis of the structural information available in the Protein Data Bank for the protein targets, which allowed the identification of the most appropriate docking software and scoring functions to be used for the rescoring of several docking conformations datasets, as well as for pose prediction and affinity ranking. The two key points of this study were (i) the prior evaluation of molecular modeling tools that are most adapted for each target and (ii) the increased search efficiency during the docking process to better explore the conformational space of big and flexible ligands.

  5. Status Report on Laboratory Testing and International Collaborations in Salt.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhlman, Kristopher L.; Matteo, Edward N.; Hadgu, Teklu

    This report is a summary of the international collaboration and laboratory work funded by the US Department of Energy Office of Nuclear Energy Spent Fuel and Waste Science & Technology (SFWST) as part of the Sandia National Laboratories Salt R&D work package. This report satisfies level-four milestone M4SF-17SN010303014. Several stand-alone sections make up this summary report, each completed by the participants. The first two sections discuss international collaborations on geomechanical benchmarking exercises (WEIMOS) and bedded salt investigations (KOSINA), while the last three sections discuss laboratory work conducted on brucite solubility in brine, dissolution of borosilicate glass into brine, and partitioning of fission products into salt phases.

  6. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and notes our experience meeting the Green500's reporting requirements.

  7. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and notes our experience meeting the Green500's reporting requirements.

  8. A benchmarking procedure for PIGE related differential cross-sections

    NASA Astrophysics Data System (ADS)

    Axiotis, M.; Lagoyannis, A.; Fazinić, S.; Harissopulos, S.; Kokkoris, M.; Preketes-Sigalas, K.; Provatas, G.

    2018-05-01

    The application of standard-less PIGE requires a priori knowledge of the differential cross section of the reaction used for the quantification of each detected light element. Towards this end, many datasets have been published in the last few years by several laboratories around the world. The discrepancies often found between different measured cross sections can be resolved by applying a rigorous benchmarking procedure through the measurement of thick target yields. Such a procedure is proposed in the present paper and is applied in the case of the 19F(p,p′γ)19F reaction.
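    The link between a thick-target yield and the differential cross section runs through the stopping power: the yield integrates sigma(E)/S(E) over the proton's slowing-down path. A minimal numerical sketch of that relation follows; unit conversions and the atomic-density factor are assumed folded into S(E), which is a simplification for illustration:

```python
def thick_target_yield(e0, sigma, stopping, e_min=0.0, n_steps=1000):
    """Trapezoidal estimate of a thick-target yield:
    Y(E0) ~ integral of sigma(E) / S(E) dE, from e_min up to the beam
    energy E0. sigma and stopping are callables of energy; density and
    unit factors are assumed absorbed into stopping."""
    h = (e0 - e_min) / n_steps
    total = 0.0
    for i in range(n_steps):
        e1 = e_min + i * h
        e2 = e1 + h
        total += 0.5 * (sigma(e1) / stopping(e1) + sigma(e2) / stopping(e2)) * h
    return total
```

    Benchmarking then consists of comparing yields computed this way from each published cross-section dataset against a measured thick-target yield, which exposes datasets with systematic discrepancies.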

  9. Do metaboreceptors alter heat loss responses following dynamic exercise?

    PubMed

    McGinn, Ryan; Swift, Brendan; Binder, Konrad; Gagnon, Daniel; Kenny, Glen P

    2014-01-01

    Metaboreceptor activation during passive heating is known to influence cutaneous vascular conductance (CVC) and sweat rate (SR). However, whether metaboreceptors modulate the suppression of heat loss following dynamic exercise remains unclear. On separate days, before and after 15 min of high-intensity treadmill running in the heat (35°C), eight males underwent either 1) no isometric handgrip exercise (IHG) or ischemia (CON), 2) 1 min IHG (60% of maximum, IHG), 3) 1 min IHG followed by 2 min of ischemia (IHG+OCC), 4) 2 min of ischemia (OCC), or 5) 1 min IHG followed by 2 min of ischemia with application of lower body negative pressure (IHG+LBNP). SR (ventilated capsule), cutaneous blood flow (Laser-Doppler), and mean arterial pressure (Finometer) were measured continuously before and after dynamic exercise. Following dynamic exercise, CVC was reduced with IHG exercise (P < 0.05) and remained attenuated with post-IHG ischemia during IHG+OCC relative to CON (39 ± 2 vs. 47 ± 6%, P < 0.05). Furthermore, the reduction in CVC was exacerbated by application of LBNP during post-IHG ischemia (35 ± 3%, P < 0.05) relative to IHG+OCC. SR increased during IHG exercise (P < 0.05) and remained elevated during post-IHG ischemia relative to CON following dynamic exercise (0.94 ± 0.15 vs. 0.53 ± 0.09 mg·min(-1)·cm(-2), P < 0.05). In contrast, application of LBNP during post-IHG ischemia had no effect on SR (0.93 ± 0.09 mg·min(-1)·cm(-2), P > 0.05) relative to post-IHG ischemia during IHG+OCC. We show that CVC is reduced and that SR is increased by metaboreceptor activation following dynamic exercise. In addition, we show that the metaboreflex-induced loading of the baroreceptors can influence the CVC response, but not the sweating response.

  10. Poster Development and Presentation to Improve Scientific Inquiry and Broaden Effective Scientific Communication Skills.

    PubMed

    Rauschenbach, Ines; Keddis, Ramaydalis; Davis, Diane

    2018-01-01

    We have redesigned a tried-and-true laboratory exercise into an inquiry-based team activity exploring microbial growth control, and implemented this activity as the basis for preparing a scientific poster in a large, multi-section laboratory course. Spanning most of the semester, this project culminates in a poster presentation of data generated from a student-designed experiment. Students use and apply the scientific method and improve written and verbal communication skills. The guided inquiry format of this exercise provides the opportunity for student collaboration through cooperative learning. For each learning objective, a percentage score was tabulated (learning objective score = points awarded/total possible points). A score of 80% was our benchmark for achieving each objective. At least 76% of the student groups participating in this project over two semesters achieved each learning goal. Student perceptions of the project were evaluated using a survey. Nearly 90% of participating students felt they had learned a great deal in the areas of formulating a hypothesis, experimental design, and collecting and analyzing data; 72% of students felt this project had improved their scientific writing skills. In a separate survey, 84% of students who responded felt that peer review was valuable in improving their final poster submission. We designed this inquiry-based poster project to improve student scientific communication skills. This exercise is appropriate for any microbiology laboratory course whose learning outcomes include the development of scientific inquiry and literacy.

  11. Poster Development and Presentation to Improve Scientific Inquiry and Broaden Effective Scientific Communication Skills †

    PubMed Central

    Rauschenbach, Ines; Keddis, Ramaydalis; Davis, Diane

    2018-01-01

    We have redesigned a tried-and-true laboratory exercise into an inquiry-based team activity exploring microbial growth control, and implemented this activity as the basis for preparing a scientific poster in a large, multi-section laboratory course. Spanning most of the semester, this project culminates in a poster presentation of data generated from a student-designed experiment. Students use and apply the scientific method and improve written and verbal communication skills. The guided inquiry format of this exercise provides the opportunity for student collaboration through cooperative learning. For each learning objective, a percentage score was tabulated (learning objective score = points awarded/total possible points). A score of 80% was our benchmark for achieving each objective. At least 76% of the student groups participating in this project over two semesters achieved each learning goal. Student perceptions of the project were evaluated using a survey. Nearly 90% of participating students felt they had learned a great deal in the areas of formulating a hypothesis, experimental design, and collecting and analyzing data; 72% of students felt this project had improved their scientific writing skills. In a separate survey, 84% of students who responded felt that peer review was valuable in improving their final poster submission. We designed this inquiry-based poster project to improve student scientific communication skills. This exercise is appropriate for any microbiology laboratory course whose learning outcomes include the development of scientific inquiry and literacy. PMID:29904518

  12. Detection and characterization of exercise induced muscle damage (EIMD) via thermography and image processing

    NASA Astrophysics Data System (ADS)

    Avdelidis, N. P.; Kappatos, V.; Georgoulas, G.; Karvelis, P.; Deli, C. K.; Theodorakeas, P.; Giakas, G.; Tsiokanos, A.; Koui, M.; Jamurtas, A. Z.

    2017-04-01

    Exercise induced muscle damage (EIMD) is usually experienced in i) humans who have been physically inactive for prolonged periods of time and then begin with sudden training trials, and ii) athletes who train beyond their normal limits. EIMD is not easy to detect and quantify by means of common measurement tools and methods. Thermography has been used successfully as a research detection tool in medicine for the last six decades, but very limited work has been reported in the EIMD area. The main purpose of this research is to assess and characterize EIMD using thermography and image processing techniques. The first step towards that goal is to develop a reliable segmentation technique to isolate the region of interest (ROI). A semi-automatic image processing software was designed and regions of the left and right leg based on superpixels were segmented. The image is segmented into a number of regions and the user is able to intervene, providing the regions which belong to each of the two legs. In order to validate the image processing software, an extensive experimental investigation was carried out, acquiring thermographic images of the rectus femoris muscle before, immediately post, and 24, 48 and 72 hours after an acute bout of eccentric exercise (5 sets of 15 maximum repetitions) on males and females (20-30 years old). Results indicate that the semi-automated approach provides an excellent benchmark that can be used as a reliable clinical tool.

  13. Participation in proficiency test for tritium strontium and caesium isotopes in seawater 2015 (IAEA-RML-2015-02)

    NASA Astrophysics Data System (ADS)

    Visetpotjanakit, S.; Kaewpaluek, S.

    2017-06-01

    A proficiency test (PT) exercise has been organised by the International Atomic Energy Agency (IAEA) in the frame of the IAEA Technical Cooperation project RAS/7/021, “Marine benchmark study on the possible impact of the Fukushima radioactive releases in the Asia-Pacific Region”, for caesium determination in sea water since 2012. In 2015 the exercise, referred to as the Proficiency Test for Tritium, Strontium and Caesium Isotopes in Seawater 2015 (IAEA-RML-2015-02), covered the analysis of 3H, 134Cs, 137Cs and 90Sr in a seawater sample. OAP was one of the 17 laboratories from 15 countries in the Asia-Pacific Region that joined the PT exercise. The aim of our participation was to validate our analytical performance for the accurate determination of radionuclides in seawater by developed methods of radiochemical analysis. OAP submitted results for the concentrations of three radionuclides, i.e. 134Cs, 137Cs and 90Sr in seawater, to the IAEA. A critical review was made to check the suitability of our methodology and the criteria for the accuracy, precision and trueness of our data. The results for both 134Cs and 137Cs passed all criteria and were assigned “Accepted” status, whereas the 90Sr analysis did not pass the accuracy test and was therefore considered “Not accepted”. Our results and all other participant results, with critical comments, were published in the IAEA proficiency test report.
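    Proficiency-test scoring commonly combines a trueness check of the laboratory value against the reference value with a precision limit on the combined relative uncertainty. The sketch below shows one such scheme; the coverage constant 2.58 and the percentage limit are typical choices in IAEA-style evaluations, not necessarily the exact criteria of IAEA-RML-2015-02:

```python
import math

def trueness_ok(x_lab, u_lab, x_ref, u_ref, k=2.58):
    """Trueness test: the lab/reference difference must lie within
    k times the combined standard uncertainty."""
    return abs(x_lab - x_ref) <= k * math.sqrt(u_lab ** 2 + u_ref ** 2)

def precision_ok(x_lab, u_lab, x_ref, u_ref, limit_pct=25.0):
    """Precision test: combined relative uncertainty (%) must not
    exceed a preset limit."""
    p = math.sqrt((u_ref / x_ref) ** 2 + (u_lab / x_lab) ** 2) * 100.0
    return p <= limit_pct
```

    A result is typically assigned "Accepted" status only when both checks pass, which matches the pattern reported above: the caesium results passed all criteria, while the 90Sr result failed the accuracy (trueness) check.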

  14. Outcome quality of in-patient cardiac rehabilitation in elderly patients--identification of relevant parameters.

    PubMed

    Salzwedel, Annett; Nosper, Manfred; Röhrig, Bernd; Linck-Eleftheriadis, Sigrid; Strandt, Gert; Völler, Heinz

    2014-02-01

    Outcome quality management requires the consecutive registration of defined variables. The aim was to identify relevant parameters in order to objectively assess the in-patient rehabilitation outcome. From February 2009 to June 2010, 1253 patients (70.9 ± 7.0 years, 78.1% men) at 12 rehabilitation clinics were enrolled. Items concerning sociodemographic data, the impairment group (surgery, conservative/interventional treatment), cardiovascular risk factors, structural and functional parameters and subjective health were tested in respect of their measurability, sensitivity to change and their propensity to be influenced by rehabilitation. The majority of patients (61.1%) were referred for rehabilitation after cardiac surgery; 38.9% after conservative or interventional treatment for an acute coronary syndrome. Functionally relevant comorbidities (diabetes mellitus, stroke, peripheral artery disease, chronic obstructive lung disease) were seen in 49.2%. In three key areas, 13 parameters were identified as being sensitive to change and subject to modification by rehabilitation: cardiovascular risk factors (blood pressure, low-density lipoprotein cholesterol, triglycerides), exercise capacity (resting heart rate, maximal exercise capacity, maximal walking distance, heart failure, angina pectoris) and subjective health (IRES-24 (indicators of rehabilitation status): pain, somatic health, psychological well-being and depression, as well as anxiety on the Hospital Anxiety and Depression Scale). The outcome of in-patient rehabilitation in elderly patients can be comprehensively assessed by the identification of appropriate key areas, that is, cardiovascular risk factors, exercise capacity and subjective health. This may well serve as a benchmark for internal and external quality management.

  15. Application of Stochastic Learning Theory to Elementary Arithmetic Exercises. Technical Report No. 302. Psychology and Education Series.

    ERIC Educational Resources Information Center

    Wagner, William J.

    The application of a linear learning model, which combines learning theory with a structural analysis of the exercises given to students, to an elementary mathematics curriculum is examined. Elementary arithmetic items taken by about 100 second-grade students on 26 weekly tests form the data base. Weekly predictions of group performance on…

  16. The 'Critical Power' Concept: Applications to Sports Performance with a Focus on Intermittent High-Intensity Exercise.

    PubMed

    Jones, Andrew M; Vanhatalo, Anni

    2017-03-01

    The curvilinear relationship between power output and the time for which it can be sustained is a fundamental and well-known feature of high-intensity exercise performance. This relationship 'levels off' at a 'critical power' (CP) that separates power outputs that can be sustained with stable values of, for example, muscle phosphocreatine, blood lactate, and pulmonary oxygen uptake ([Formula: see text]), from power outputs where these variables change continuously with time until their respective minimum and maximum values are reached and exercise intolerance occurs. The amount of work that can be done during exercise above CP (the so-called W') is constant but may be utilized at different rates depending on the proximity of the exercise power output to CP. Traditionally, this two-parameter CP model has been employed to provide insights into physiological responses, fatigue mechanisms, and performance capacity during continuous constant power output exercise in discrete exercise intensity domains. However, many team sports (e.g., basketball, football, hockey, rugby) involve frequent changes in exercise intensity and, even in endurance sports (e.g., cycling, running), intensity may vary considerably with environmental/course conditions and pacing strategy. In recent years, the appeal of the CP concept has been broadened through its application to intermittent high-intensity exercise. With the assumptions that W' is utilized during work intervals above CP and reconstituted during recovery intervals below CP, it can be shown that performance during intermittent exercise is related to four factors: the intensity and duration of the work intervals and the intensity and duration of the recovery intervals. However, while the utilization of W' may be assumed to be linear, studies indicate that the reconstitution of W' may be curvilinear with kinetics that are highly variable between individuals. This has led to the development of a new CP model for intermittent exercise in which the balance of W' remaining ([Formula: see text]) may be calculated with greater accuracy. Field trials of athletes performing stochastic exercise indicate that this [Formula: see text] model can accurately predict the time at which W' tends to zero and exhaustion is imminent. The [Formula: see text] model potentially has important applications in the real-time monitoring of athlete fatigue progression in endurance and team sports, which may inform tactics and influence pacing strategy.
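
    The W' balance bookkeeping described above (the "[Formula: see text]" placeholders denote the balance of W' remaining, often written W'BAL) can be sketched in discrete time: W' is spent linearly above CP and recovers exponentially below it. The CP, W' and tau values are illustrative, and published W'BAL variants differ in how the recovery time constant is obtained, so this is a toy version rather than any specific author's model.

```python
import math

def wbal_trace(power, cp=250.0, w_prime=20000.0, tau=300.0, dt=1.0):
    """Discrete-time balance of W' (J) during intermittent exercise.

    Above CP, W' is depleted linearly with (P - CP); below CP it recovers
    exponentially toward the full W' with time constant tau. All parameter
    values are illustrative, and tau is fixed rather than fitted per athlete.
    """
    wbal, trace = w_prime, []
    for p in power:
        if p > cp:
            wbal -= (p - cp) * dt                                 # expenditure
        else:
            wbal += (w_prime - wbal) * (1 - math.exp(-dt / tau))  # recovery
        trace.append(wbal)
    return trace

# 30 s at 400 W (work) then 30 s at 150 W (recovery), repeated 10 times.
protocol = ([400.0] * 30 + [150.0] * 30) * 10
trace = wbal_trace(protocol)
print(min(trace) < 0)  # True would mean the model predicts exhaustion
```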

  17. Functional electrical stimulation: cardiorespiratory adaptations and applications for training in paraplegia.

    PubMed

    Deley, Gaëlle; Denuziller, Jérémy; Babault, Nicolas

    2015-01-01

    Regular exercise can be broadly beneficial to health and quality of life in people with spinal cord injury (SCI). However, exercise must meet certain criteria, such as the intensity and the muscle mass involved, to induce significant benefits. SCI patients can have difficulty achieving these exercise requirements, since the paralysed muscles cannot contribute to overall oxygen consumption. One solution is functional electrical stimulation (FES) and, more importantly, hybrid training that combines volitional arm exercise with electrically controlled contractions of the lower limb muscles. However, it can be complicated for therapists to use FES because of the wide variety of protocols that can be employed, which differ in stimulation parameters and the movements induced. Moreover, although the short-term physiological and psychological responses during different types of FES exercise have been extensively reported, there are fewer data regarding the long-term effects of FES. Therefore, the purpose of this brief review is to provide a critical appraisal and synthesis of the literature on the use of FES for exercise in paraplegic individuals. After a short introduction underlining the importance of exercise for SCI patients, the main applications and effects of FES are reviewed and discussed. Major findings reveal an increased physiological demand during FES hybrid exercise as compared with arms-only exercise. In addition, when repeated within a training period, FES exercise showed beneficial effects on muscle characteristics, force output, exercise capacity, bone mineral density and cardiovascular parameters. In conclusion, there appears to be promising evidence of beneficial effects of FES training, and particularly FES hybrid training, for paraplegic individuals.

  18. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
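
    The percentages quoted above follow from the usual definition of overhead relative to a baseline run time. A minimal sketch (the timing values are hypothetical, chosen only to reproduce two of the quoted figures):

```python
def overhead_pct(t_with_tool, t_baseline):
    """Percent overhead a tool adds relative to a baseline run time."""
    return 100.0 * (t_with_tool - t_baseline) / t_baseline

# Hypothetical wall-clock times (seconds) for one benchmark run natively
# and under the simulator, before and after the protocol change.
base = 100.0
print(overhead_pct(1120.0, base))  # -> 1020.0 (old protocol)
print(overhead_pct(338.0, base))   # -> 238.0 (new protocol)
```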

  19. Paradoxical Acinetobacter-associated ventilator-associated pneumonia incidence rates within prevention studies using respiratory tract applications of topical polymyxin: benchmarking the evidence base.

    PubMed

    Hurley, J C

    2018-04-10

    Regimens containing topical polymyxin appear to be more effective in preventing ventilator-associated pneumonia (VAP) than other methods. The aim was to benchmark the incidence rates of Acinetobacter-associated VAP (AAVAP) within component (control and intervention) groups from concurrent controlled studies of polymyxin against studies of various VAP prevention methods other than polymyxin (non-polymyxin studies). An AAVAP benchmark was derived using data from 77 observational groups without any VAP prevention method under study. Data from 41 non-polymyxin studies provided additional points of reference. The benchmarking was undertaken by meta-regression using generalized estimating equation methods. Within 20 studies of topical polymyxin, the mean AAVAP was 4.6% [95% confidence interval (CI) 3.0-6.9] and 3.7% (95% CI 2.0-5.3) for control and intervention groups, respectively. In contrast, the AAVAP benchmark was 1.5% (95% CI 1.2-2.0). In the AAVAP meta-regression model, group origin from a trauma intensive care unit (+0.55; +0.16 to +0.94, P = 0.006) and membership of a polymyxin control group (+0.64; +0.21 to +1.31, P = 0.023), but not membership of a polymyxin intervention group (+0.24; -0.37 to +0.84, P = 0.45), were significant positive correlates. The mean incidence of AAVAP within the control groups of studies of topical polymyxin is more than double the benchmark, whereas the incidence rates within the groups of non-polymyxin studies and, paradoxically, polymyxin intervention groups are closer to the benchmark. These incidence rates, which are paradoxical in the context of an apparent effect against VAP within controlled trials of topical polymyxin-based interventions, force a re-appraisal. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  20. Crew Exercise Fact Sheet

    NASA Technical Reports Server (NTRS)

    Rafalik, Kerrie

    2017-01-01

    Johnson Space Center (JSC) provides research, engineering, development, integration, and testing of hardware and software technologies for exercise systems applications in support of human spaceflight. This includes sustaining the current suite of on-orbit exercise devices by reducing maintenance, addressing obsolescence, and increasing reliability through creative engineering solutions. Advanced exercise systems technology development efforts focus on the sustainment of crew's physical condition beyond Low Earth Orbit for extended mission durations with significantly reduced mass, volume, and power consumption when compared to the ISS.

  2. Application of acute maximal exercise to protect orthostatic tolerance after simulated microgravity

    NASA Technical Reports Server (NTRS)

    Engelke, K. A.; Doerr, D. F.; Crandall, C. G.; Convertino, V. A.

    1996-01-01

    We tested the hypothesis that one bout of maximal exercise performed at the conclusion of prolonged simulated microgravity would improve blood pressure stability during an orthostatic challenge. Heart rate (HR), mean arterial blood pressure (MAP), norepinephrine (NE), epinephrine (E), arginine vasopressin (AVP), plasma renin activity (PRA), atrial natriuretic peptide (ANP), cardiac output (Q), forearm vascular resistance (FVR), and changes in leg volume were measured during lower body negative pressure (LBNP) to presyncope in seven subjects immediately prior to reambulation from 16 days of 6 degrees head-down tilt (HDT) under two experimental conditions: 1) after maximal supine cycle ergometry performed 24 h before returning to the upright posture (exercise) and 2) without exercise (control). After HDT, the reduction of LBNP tolerance time from pre-HDT levels was greater (P = 0.041) in the control condition (-2.0 +/- 0.2 min) compared with the exercise condition (-0.4 +/- 0.2 min). At presyncope after HDT, FVR and NE were higher (P < 0.05) after exercise compared with control, whereas MAP, HR, E, AVP, PRA, ANP, and leg volume were similar in both conditions. Plasma volume (PV) and carotid-cardiac baroreflex sensitivity were reduced after control HDT, but were restored by the exercise treatment. Maintenance of orthostatic tolerance by application of acute intense exercise after 16 days of simulated microgravity was associated with greater circulating levels of NE, vasoconstriction, Q, baroreflex sensitivity, and PV.

  3. Multiobjective Multifactorial Optimization in Evolutionary Multitasking.

    PubMed

    Gupta, Abhishek; Ong, Yew-Soon; Feng, Liang; Tan, Kay Chen

    2016-05-03

    In recent decades, the field of multiobjective optimization has attracted considerable interest among evolutionary computation researchers. One of the main features that makes evolutionary methods particularly appealing for multiobjective problems is the implicit parallelism offered by a population, which enables simultaneous convergence toward the entire Pareto front. While a plethora of related algorithms have been proposed to date, a common attribute among them is that they focus on efficiently solving only a single optimization problem at a time. Despite the known power of implicit parallelism, seldom has an attempt been made to multitask, i.e., to solve multiple optimization problems simultaneously. It is contended that the notion of evolutionary multitasking leads to the possibility of automated transfer of information across different optimization exercises that may share underlying similarities, thereby facilitating improved convergence characteristics. In particular, the potential for automated transfer is deemed invaluable from the standpoint of engineering design exercises where manual knowledge adaptation and reuse are routine. Accordingly, in this paper, we present a realization of the evolutionary multitasking paradigm within the domain of multiobjective optimization. The efficacy of the associated evolutionary algorithm is demonstrated on some benchmark test functions as well as on a real-world manufacturing process design problem from the composites industry.
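
    The multitasking idea, a single population whose individuals each carry a task assignment (a "skill factor") and assortative mating that only occasionally crosses task boundaries, can be sketched as a steady-state evolutionary loop. The tasks, parameter values and operator choices below are simplified illustrations, not the multifactorial algorithm as specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy minimisation tasks sharing one unified search space [0, 1]^d
# (illustrative stand-ins for the paper's benchmark problems).
def task0(x):
    return float(np.sum((x - 0.5) ** 2))  # optimum at 0.5 in every dimension

def task1(x):
    return float(np.sum((x - 0.8) ** 2))  # optimum at 0.8 in every dimension

tasks, d, pop_size, rmp = [task0, task1], 5, 40, 0.3

pop = rng.random((pop_size, d))
skill = rng.integers(0, 2, pop_size)  # each individual's assigned task
fit = np.array([tasks[s](x) for x, s in zip(pop, skill)])
best_init = [fit[skill == t].min() for t in (0, 1)]

for _ in range(2000):  # steady-state evolution: one offspring per step
    i, j = rng.integers(0, pop_size, 2)
    # Assortative mating: crossover freely within a task, across tasks
    # only with probability rmp (the random mating probability).
    if skill[i] == skill[j] or rng.random() < rmp:
        alpha = rng.random(d)
        child = alpha * pop[i] + (1 - alpha) * pop[j]  # blend crossover
    else:
        child = np.clip(pop[i] + rng.normal(0, 0.1, d), 0, 1)  # mutation only
    c_skill = skill[i] if rng.random() < 0.5 else skill[j]  # inherit a task
    c_fit = tasks[c_skill](child)
    # Elitist replacement: displace the worst individual of the same task.
    same = np.where(skill == c_skill)[0]
    worst = same[np.argmax(fit[same])]
    if c_fit < fit[worst]:
        pop[worst], fit[worst] = child, c_fit

best = [fit[skill == t].min() for t in (0, 1)]  # best scalar fitness per task
```

    Cross-task crossover is what allows good building blocks found for one task to leak into the other task's subpopulation, which is the claimed source of improved convergence.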

  4. [Physical activity by pregnant women and its influence on maternal and foetal parameters; a systematic review].

    PubMed

    Aguilar Cordero, M J; Sánchez López, A M; Rodríguez Blanque, R; Noack Segovia, J P; Pozo Cano, M D; López-Contreras, G; Mur Villar, N

    2014-10-01

    Regular physical activity is known to be very beneficial to health. While it is important at all stages of life, during pregnancy doubts may arise about the suitability of physical exercise, as well as the type of activity, its frequency, intensity and duration. The aim was to analyse major studies on the influence of physical activity on maternal and foetal parameters. A systematic review was conducted of physical activity programmes for pregnant women and the results achieved during pregnancy, childbirth and postpartum. 45 articles were identified through an automated database search in PubMed, Scopus and Google Scholar, carried out from October 2013 to March 2014. In selecting the articles, the criteria applied included the usefulness and relevance of the subject matter and the credibility or experience of the research study authors. The internal and external validity of each of the articles reviewed was taken into account. The results of the review highlight the importance of physical activity during pregnancy, and show that the information currently available can serve as an initial benchmark for further investigation into the impact of regular physical exercise, in an aquatic environment, on maternal-foetal health. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  5. Harmonizing lipidomics: NIST interlaboratory comparison exercise for lipidomics using SRM 1950-Metabolites in Frozen Human Plasma.

    PubMed

    Bowden, John A; Heckert, Alan; Ulmer, Candice Z; Jones, Christina M; Koelmel, Jeremy P; Abdullah, Laila; Ahonen, Linda; Alnouti, Yazen; Armando, Aaron M; Asara, John M; Bamba, Takeshi; Barr, John R; Bergquist, Jonas; Borchers, Christoph H; Brandsma, Joost; Breitkopf, Susanne B; Cajka, Tomas; Cazenave-Gassiot, Amaury; Checa, Antonio; Cinel, Michelle A; Colas, Romain A; Cremers, Serge; Dennis, Edward A; Evans, James E; Fauland, Alexander; Fiehn, Oliver; Gardner, Michael S; Garrett, Timothy J; Gotlinger, Katherine H; Han, Jun; Huang, Yingying; Neo, Aveline Huipeng; Hyötyläinen, Tuulia; Izumi, Yoshihiro; Jiang, Hongfeng; Jiang, Houli; Jiang, Jiang; Kachman, Maureen; Kiyonami, Reiko; Klavins, Kristaps; Klose, Christian; Köfeler, Harald C; Kolmert, Johan; Koal, Therese; Koster, Grielof; Kuklenyik, Zsuzsanna; Kurland, Irwin J; Leadley, Michael; Lin, Karen; Maddipati, Krishna Rao; McDougall, Danielle; Meikle, Peter J; Mellett, Natalie A; Monnin, Cian; Moseley, M Arthur; Nandakumar, Renu; Oresic, Matej; Patterson, Rainey; Peake, David; Pierce, Jason S; Post, Martin; Postle, Anthony D; Pugh, Rebecca; Qiu, Yunping; Quehenberger, Oswald; Ramrup, Parsram; Rees, Jon; Rembiesa, Barbara; Reynaud, Denis; Roth, Mary R; Sales, Susanne; Schuhmann, Kai; Schwartzman, Michal Laniado; Serhan, Charles N; Shevchenko, Andrej; Somerville, Stephen E; St John-Williams, Lisa; Surma, Michal A; Takeda, Hiroaki; Thakare, Rhishikesh; Thompson, J Will; Torta, Federico; Triebl, Alexander; Trötzmüller, Martin; Ubhayasekera, S J Kumari; Vuckovic, Dajana; Weir, Jacquelyn M; Welti, Ruth; Wenk, Markus R; Wheelock, Craig E; Yao, Libin; Yuan, Min; Zhao, Xueqing Heather; Zhou, Senlin

    2017-12-01

    As the lipidomics field continues to advance, self-evaluation within the community is critical. Here, we performed an interlaboratory comparison exercise for lipidomics using Standard Reference Material (SRM) 1950-Metabolites in Frozen Human Plasma, a commercially available reference material. The interlaboratory study comprised 31 diverse laboratories, with each laboratory using a different lipidomics workflow. A total of 1,527 unique lipids were measured across all laboratories and consensus location estimates and associated uncertainties were determined for 339 of these lipids measured at the sum composition level by five or more participating laboratories. These evaluated lipids detected in SRM 1950 serve as community-wide benchmarks for intra- and interlaboratory quality control and method validation. These analyses were performed using nonstandardized laboratory-independent workflows. The consensus locations were also compared with a previous examination of SRM 1950 by the LIPID MAPS consortium. While the central theme of the interlaboratory study was to provide values to help harmonize lipids, lipid mediators, and precursor measurements across the community, it was also initiated to stimulate a discussion regarding areas in need of improvement. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.

  6. EnergyIQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MILLS, EVAN; MATHEW, PAUL; STOUFER, MARTIN

    2016-10-06

    EnergyIQ, the first "action-oriented" benchmarking tool for non-residential buildings, provides a standardized opportunity assessment based on benchmarking results, along with decision-support information to help refine action plans. EnergyIQ offers a wide array of benchmark metrics, with visual as well as tabular display. These include energy, costs, greenhouse-gas emissions, and a large array of characteristics (e.g. building components or operational strategies). The tool supports cross-sectional benchmarking for comparing the user's building to its peers at one point in time, as well as longitudinal benchmarking for tracking the performance of an individual building or enterprise portfolio over time. Based on user inputs, the tool generates a list of opportunities and recommended actions. Users can then explore the "Decision Support" module for helpful information on how to refine action plans, create design-intent documentation, and implement improvements. This includes information on best practices, links to other energy analysis tools, and more. A variety of databases are available within EnergyIQ from which users can specify peer groups for comparison. Using the tool, these data can be visually browsed and used as a backdrop against which to view a variety of energy benchmarking metrics for the user's own building. Users can save their project information and return at a later date to continue their exploration. The initial database is the CA Commercial End-Use Survey (CEUS), which provides details on energy use and characteristics for about 2800 buildings (and 62 building types). CEUS is likely the most thorough survey of its kind ever conducted. The tool is built as a web service. The EnergyIQ web application is written in JSP with pervasive use of JavaScript and CSS2. EnergyIQ also supports a SOAP-based web service to allow the flow of queries and data to occur with non-browser implementations. Data are stored in an Oracle 10g database.
    References: Mills, Mathew, Brook and Piette. 2008. "Action Oriented Benchmarking: Concepts and Tools." Energy Engineering, Vol. 105, No. 4, pp. 21-40. LBNL-358E; Mathew, Mills, Bourassa, Brook. 2008. "Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California." Energy Engineering, Vol. 105, No. 5, pp. 6-18. LBNL-502E.
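
    The two benchmarking modes named above differ only in the comparison set. A toy sketch (the EUI figures and names are hypothetical, not EnergyIQ's actual data or API):

```python
# Energy use intensities (EUI, kWh/m2/yr) for a hypothetical peer group
# and for one building tracked over three years.
peers = [210.0, 180.0, 150.0, 240.0, 195.0, 170.0, 220.0]
mine_by_year = {2014: 230.0, 2015: 215.0, 2016: 198.0}

# Cross-sectional: where does this building sit among its peers right now?
current = mine_by_year[2016]
percentile = 100.0 * sum(p < current for p in peers) / len(peers)

# Longitudinal: how has the same building trended over time?
years = sorted(mine_by_year)
trend = mine_by_year[years[-1]] - mine_by_year[years[0]]

print(round(percentile), trend)  # share of peers using less energy; EUI change
```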

  7. A Cochlear Implant Signal Processing Lab: Exploration of a Problem-Based Learning Exercise

    ERIC Educational Resources Information Center

    Bhatti, P. T.; McClellan, J. H.

    2011-01-01

    This paper presents an introductory signal processing laboratory and examines this laboratory exercise in the context of problem-based learning (PBL). Centered in a real-world application, a cochlear implant, the exercise challenged students to demonstrate a working software-based signal processor. Partnering in groups of two or three, second-year…

  8. The Toxics Geography Exercise: Students Use Inquiry to Uncover Uses and Limits of Data in Policy Analysis

    ERIC Educational Resources Information Center

    Duke, L. Donald; Schmidt, Diane L.

    2011-01-01

    The Toxics Geography Exercise was developed as an application-oriented exercise to develop skills in critical analysis in groups of undergraduate students from widely diverse academic backgrounds. Students use publicly available data on industrial activities, history of toxic material disposal, basic chemistry, regulatory approaches of federal and…

  9. The measurement conundrum in exercise adherence research.

    PubMed

    Dishman, R K

    1994-11-01

    This paper has two purposes. It first prefaces a symposium titled "Exercise adherence and behavior change: prospects, problems, and future directions." The symposium describes the progress made during the past 5 years toward understanding the adoption and maintenance of physical activity and exercise. Specifically, research is discussed that has tested the applicability to physical activity of four psychological models of behavior: Reasoned Action, Planned Behavior, Social-Cognitive Theory, and the Transtheoretical Model of stages of change. Recent exercise interventions in clinical/community settings also are discussed to illustrate how theoretical models can be implemented to increase and maintain exercise. The second purpose of this paper is to provide a brief summary of the contemporary literatures on the determinants of physical activity and interventions designed to increase and maintain physical activity. The summary focuses on the measurement problems that have limited the advances made in theory and application in these areas of research. Progress toward resolving the measurement problems during the past 5 years is contrasted with earlier scientific consensus.

  10. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

    Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated on the ICSBEP in 2007. Now, there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' [1] have increased from 442 evaluations (38,000 pages), containing benchmark specifications for 3955 critical or subcritical configurations, to 516 evaluations (nearly 55,000 pages), containing benchmark specifications for 4405 critical or subcritical configurations in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' [2] have increased from 16 different experimental series that were performed at 12 different reactor facilities to 53 experimental series that were performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPhEP will be discussed in the full paper, selected benchmarks that have been added to the ICSBEP Handbook will be highlighted, and a preview of the new benchmarks that will appear in the September 2011 edition of the Handbook will be provided. Accomplishments of the IRPhEP will also be highlighted, and the future of both projects will be discussed.
    REFERENCES: (1) International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03/I-IX, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), September 2010 Edition, ISBN 978-92-64-99140-8. (2) International Handbook of Evaluated Reactor Physics Benchmark Experiments, NEA/NSC/DOC(2006)1, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), March 2011 Edition, ISBN 978-92-64-99141-5.

  11. Effects of low-dye taping on plantar pressure pre and post exercise: an exploratory study.

    PubMed

    Nolan, Damien; Kennedy, Norelee

    2009-04-21

    Low-Dye taping is used for excessive pronation at the subtalar joint of the foot. Previous research has focused on the tape's immediate effect on plantar pressure; its effectiveness following exercise has not been investigated. Peak plantar pressure distribution provides an indirect representation of subtalar joint kinematics. The objectives of the study were 1) to determine the effects of Low-Dye taping on peak plantar pressure immediately post-application, and 2) to determine whether any initial effects are maintained following exercise. 12 asymptomatic subjects participated, each being screened for excessive pronation (navicular drop > 10 mm). Plantar pressure data were recorded, using the F-scan, at four intervals during the testing session: un-taped, baseline-taped, post-exercise session 1, and post-exercise session 2. Each exercise session consisted of a 10-minute walk at a normal pace. The foot was divided into 6 regions during data analysis. Repeated-measures analysis of variance (ANOVA) was used to assess regional pressure variations across the four testing conditions. Reduced lateral forefoot peak plantar pressure was the only significant difference immediately post tape application (p = 0.039); this effect was lost after 10 minutes of exercise (p = 0.036). Each exercise session resulted in significantly higher medial forefoot peak pressure compared with un-taped (p = 0.015 and p = 0.014, respectively) and baseline-taped (p = 0.036 and p = 0.015, respectively). Medial and lateral rearfoot values had also increased after the second session (p = 0.004), following their non-significant reduction at baseline-taped. A trend towards a medial-to-lateral shift in midfoot pressure immediately following tape application was still present after 20 minutes of exercise. Low-Dye tape's initial effect of reduced lateral forefoot peak plantar pressure was lost after a 10-minute walk. However, the tape continued to have an effect on the medial forefoot after 20 minutes of exercise. Further studies with larger sample sizes are required to examine the important finding of the anti-pronatory trend present in the midfoot.
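
    A one-way repeated-measures ANOVA of the kind used here can be computed directly from sums of squares. The sketch below runs it on fabricated plantar-pressure data with the study's shape (12 subjects by 4 conditions); the numbers are invented, so the resulting F and p values do not correspond to the study's findings.

```python
import numpy as np
from scipy import stats

def rm_anova(data):
    """One-way repeated-measures ANOVA on an (n_subjects, k_conditions) array."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * np.sum((data.mean(axis=0) - grand) ** 2)    # condition effect
    ss_subj = k * np.sum((data.mean(axis=1) - grand) ** 2)    # subject effect
    ss_err = np.sum((data - grand) ** 2) - ss_cond - ss_subj  # residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, stats.f.sf(f, df_cond, df_err)

# Fabricated peak pressures (kPa) for one foot region: 12 subjects under four
# conditions (un-taped, baseline-taped, post-exercise 1, post-exercise 2).
rng = np.random.default_rng(0)
condition_means = np.array([200.0, 190.0, 196.0, 198.0])
data = condition_means + rng.normal(0, 5, (12, 1)) + rng.normal(0, 4, (12, 4))
f, p = rm_anova(data)
```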

  12. 45 CFR 3.2 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... § 3.42(f) Smoking. (c) All regulations in this part are in addition to the provisions in the United.... Drivers to exercise due care Transportation, Sec. 21-504 Drivers shall exercise due care to avoid...

  13. Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshii, K.; Iskra, K.; Naik, H.

    2011-05-01

    We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory', an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.

  14. Benchmarking organic mixed conductors for transistors.

    PubMed

    Inal, Sahika; Malliaras, George G; Rivnay, Jonathan

    2017-11-24

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.
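The µC* figure of merit can be backed out of steady-state OECT measurements via the Bernards-Malliaras relation g_m = (Wd/L)·µC*·(V_th − V_g). A minimal sketch with a hypothetical device geometry and transconductance (not values from the paper):

```python
def uC_star(g_m, W, L, d, V_th, V_g):
    """Mobility-capacitance product u*C* (F cm^-1 V^-1 s^-1).

    g_m       : steady-state transconductance (S)
    W, L, d   : channel width, length, thickness (cm)
    V_th, V_g : threshold and gate voltage (V)
    """
    return g_m * L / (W * d * (V_th - V_g))

# Hypothetical device: 100 um x 10 um channel, 100 nm thick,
# g_m = 1 mS at an effective overdrive V_th - V_g = 0.5 V.
val = uC_star(g_m=1e-3, W=100e-4, L=10e-4, d=100e-7, V_th=0.4, V_g=-0.1)
print(val)  # 20.0 F cm^-1 V^-1 s^-1
```

Because µC* normalizes out geometry and bias, it lets devices of different dimensions be ranked on the same materials axis, which is the benchmarking idea of the abstract.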

  15. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  16. Databases Are Not Toasters: A Framework for Comparing Data Warehouse Appliances

    NASA Astrophysics Data System (ADS)

    Trajman, Omer; Crolotte, Alain; Steinhoff, David; Nambiar, Raghunath Othayoth; Poess, Meikel

    The success of Business Intelligence (BI) applications depends on two factors, the ability to analyze data ever more quickly and the ability to handle ever increasing volumes of data. Data Warehouse (DW) and Data Mart (DM) installations that support BI applications have historically been built using traditional architectures either designed from the ground up or based on customized reference system designs. The advent of Data Warehouse Appliances (DA) brings packaged software and hardware solutions that address performance and scalability requirements for certain market segments. The differences between DAs and custom installations make direct comparisons between them impractical and suggest the need for a targeted DA benchmark. In this paper we review data warehouse appliances by surveying thirteen products offered today. We assess the common characteristics among them and propose a classification for DA offerings. We hope our results will help define a useful benchmark for DAs.

  17. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as to the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  18. Examining the Influencing Factors of Exercise Intention Among Older Adults: A Controlled Study Between Exergame and Traditional Exercise.

    PubMed

    Wu, Zumei; Li, Jinhui; Theng, Yin-Leng

    2015-09-01

    Promoting physical activity among older adults has become an important component of successful aging. The aim of this study was to assess the influence of both exercise settings and player interaction patterns on exercise intention in a sample of Asian older adults. A 2×2 between-subjects experimental intervention (exercise setting: traditional exercise vs. exergame; player interaction pattern: collaborative vs. competitive play) was conducted with 113 Singaporean older adults for 1 month. An interviewer-administered questionnaire survey was used to measure the key variables of enjoyment, social presence, and perceived behavioral control. The findings supported the importance of social presence and perceived behavioral control in predicting older adults' exercise intention, and highlighted the effect of collaborative play in promoting exercise among older adults. Compared with traditional exercise, the effect of exergames on motivating older adults to exercise was significantly lower. The findings of this study reveal rich directions for future elderly exercise research, and provide strategies that could be applicable to policy making and game design to promote elderly exercise participation.

  19. A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises

    ERIC Educational Resources Information Center

    O'Brien, Myles

    2012-01-01

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…

  20. The Effects of Exercise on Cardiovascular Biomarkers: New Insights, Recent Data, and Applications.

    PubMed

    Che, Lin; Li, Dong

    2017-01-01

    The benefit of regular exercise or physical activity of appropriate intensity in improving cardiopulmonary function and endurance has long been accepted with little controversy. The challenge remains, however, to quantitatively evaluate the effect of exercise on cardiovascular health, due in part to the wide variation in the amount and intensity of exercise and to the lack of effective, robust, and efficient biomarker evaluation systems. Better evaluation of the overall function of biomarkers and validation of their utility in cardiovascular health should improve the evidence regarding the benefit of exercise or physical activity on cardiovascular health, in turn increasing the usefulness of biomarkers for individuals with mild to moderate cardiovascular risk. In this review, beyond traditional cytokines, chemokines, and inflammatory factors, we systematically reviewed the latest novel biomarkers in metabolomics, genomics, proteomics, and molecular imaging, focusing mainly on heart health as well as cardiovascular diseases such as atherosclerosis and ischemic heart disease. Furthermore, we highlight state-of-the-art biomarker development techniques and their application in the field of heart health. Finally, we discuss the clinical relevance of physical activity and exercise to key biomarkers on a molecular basis, along with practical considerations.

  1. A Novel Remote Rehabilitation System with the Fusion of Noninvasive Wearable Device and Motion Sensing for Pulmonary Patients.

    PubMed

    Tey, Chuang-Kit; An, Jinyoung; Chung, Wan-Young

    2017-01-01

    Chronic obstructive pulmonary disease is a type of lung disease caused by chronically poor airflow that makes breathing difficult. As a chronic illness, it typically worsens over time. Therefore, pulmonary rehabilitation exercises and patient management over extensive periods of time are required. This paper presents a remote rehabilitation system with a multimodal sensor-based application for patients who have chronic breathing difficulties. The process involves the fusion of sensory data - motion data captured by a stereo camera and a photoplethysmogram (PPG) signal captured by a wearable PPG sensor - which are the input variables of a detection and evaluation framework. In addition, we incorporated a set of rehabilitation exercises specific to pulmonary patients into the system by fusing sensory data. Simultaneously, the system also features medical functions that accommodate the needs of medical professionals and ease the use of the application for patients, including tracking of exercise progress and patient performance, exercise assignments, and exercise guidance. Finally, the results indicate the accurate determination of pulmonary exercises from the fusion of sensory data. This remote rehabilitation system provides a comfortable and cost-effective option in healthcare rehabilitation.

  2. Obtaining Tenure in a Higher Education Broadcast Journalism Discipline: Mapping Out a Successful Blueprint.

    ERIC Educational Resources Information Center

    Reppert, James E.

    Intended to be used as a benchmark to be followed for mass communication faculty putting together tenure applications, this collection of documents constitutes a successful tenure application at Southern Arkansas University for an instructor in broadcast journalism who did not possess a Ph.D. but only a master's degree. In an introductory note,…

  3. Protective effect of exercise and sildenafil on acute stress and cognitive function.

    PubMed

    Ozbeyli, Dilek; Gokalp, Ayse Gizem; Koral, Tolga; Ocal, Onur Yuksel; Dogan, Berkay; Akakin, Dilek; Yuksel, Meral; Kasimay, Ozgur

    2015-11-01

    There are contradictory results about the effects of exercise and sildenafil on cognitive functions. To investigate the effects of sildenafil pretreatment and chronic exercise on anxiety and cognitive functions. Wistar rats (n=42) were divided as sedentary and exercise groups. A moderate-intensity swimming exercise was performed for 6 weeks, 5 days/week, 1h/day. Some of the rats were administered orogastrically with sildenafil (25mg/kg/day) either acutely or chronically. Exposure to cat odor was used for induction of stress. The level of anxiety was evaluated by elevated plus maze test, while object recognition test was used to determine cognitive functions. Brain tissues were removed for the measurement of myeloperoxidase (MPO), malondialdehyde (MDA), nitric oxide levels, lucigenin-enhanced chemiluminescence, and for histological analysis. Increased MPO and MDA levels in sedentary-stressed rats were decreased with sildenafil applications. Chronic exercise inhibited the increase in MPO levels. Increased nitric oxide and lucigenin chemiluminescence levels in sedentary-stressed rats, were diminished with chronic sildenafil pretreatment. The time spent in the open arms of the plus maze was declined in sedentary-stressed rats, while chronic sildenafil pretreatment increased the time back to that in non-stressed rats. Acute sildenafil application to exercised rats prolonged the time spent in open arms as compared to non-treated exercise group. The time spent with the novel object, which was decreased in sedentary-stressed rats, was increased with sildenafil pretreatment. Our results suggest that sildenafil pretreatment or exercise exerts a protective effect against acute stress and improves cognitive functions by decreasing oxidative damage. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Therapeutic exercise for rotator cuff tendinopathy: a systematic review of contextual factors and prescription parameters.

    PubMed

    Littlewood, Chris; Malliaras, Peter; Chance-Larsen, Ken

    2015-06-01

    Exercise is widely regarded as an effective intervention for symptomatic rotator cuff tendinopathy but the prescription is diverse and the important components of such programmes are not well understood. The objective of this study was to systematically review the contextual factors and prescription parameters of published exercise programmes for rotator cuff tendinopathy, to generate recommendations based on current evidence. An electronic search of AMED, CiNAHL, CENTRAL, MEDLINE, PEDro and SPORTDiscus was undertaken from their inception to June 2014 and supplemented by hand searching. Eligible studies included randomized controlled trials evaluating the effectiveness of exercise in participants with rotator cuff tendinopathy. Included studies were appraised using the Cochrane risk of bias tool and synthesized narratively. Fourteen studies were included, and suggested that exercise programmes are widely applicable and can be successfully designed by physiotherapists with varying experience; whether the exercise is completed at home or within a clinic setting does not appear to matter and neither does pain production or pain avoidance during exercise; inclusion of some level of resistance does seem to matter although the optimal level is unclear, the optimal number of repetitions is also unclear but higher repetitions might confer superior outcomes; three sets of exercise are preferable to two or one set but the optimal frequency is unknown; most programmes should demonstrate clinically significant outcomes by 12 weeks. This systematic review has offered preliminary guidance in relation to contextual factors and prescription parameters to aid development and application of exercise programmes for rotator cuff tendinopathy.

  5. Rock type discrimination and structural analysis with LANDSAT and Seasat data: San Rafael swell, Utah

    NASA Technical Reports Server (NTRS)

    Stewart, H. E.; Blom, R.; Abrams, M.; Daily, M.

    1980-01-01

    Satellite synthetic aperture radar (SAR) imagery is evaluated in terms of its geologic applications. The benchmark to which the SAR images are compared is LANDSAT, used both for structural and lithologic interpretations.

  6. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  7. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using some parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools: an interactive computer-aided parallelization tool that generates message passing code, 2) the Portland Group's HPF compiler, and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.

  8. Analysing the performance of personal computers based on Intel microprocessors for sequence aligning bioinformatics applications.

    PubMed

    Nair, Pradeep S; John, Eugene B

    2007-01-01

    Aligning specific sequences against a very large number of other sequences is a central aspect of bioinformatics. With the widespread availability of personal computers in biology laboratories, sequence alignment is now often performed locally. This makes it necessary to analyse the performance of personal computers for sequence aligning bioinformatics benchmarks. In this paper, we analyse the performance of a personal computer for the popular BLAST and FASTA sequence alignment suites. Results indicate that these benchmarks have a large number of recurring operations and use memory operations extensively. It seems that the performance can be improved with a bigger L1-cache.

  9. Arithmetic Data Cube as a Data Intensive Benchmark

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabano, Leonid

    2003-01-01

    Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities for handling large distributed data sets. The ADC stresses all levels of grid memory by producing the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through choice of the tuple parameters.
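The data cube's 2^d views (one group-by per subset of the d attributes) can be illustrated with a toy enumeration. This is a generic sketch using counting as the aggregate; the actual ADC benchmark's tuple generation and aggregation rules differ:

```python
from itertools import combinations

def data_cube_views(tuples, attr_names):
    """Enumerate all 2^d group-by views of a set of d-tuples.

    For each subset of attributes, aggregate (here: count) the tuples
    sharing the same projection onto that subset.
    """
    d = len(attr_names)
    views = {}
    for r in range(d + 1):
        for subset in combinations(range(d), r):
            view = {}
            for t in tuples:
                key = tuple(t[i] for i in subset)
                view[key] = view.get(key, 0) + 1
            views[tuple(attr_names[i] for i in subset)] = view
    return views

# Toy arithmetic data set of 3-tuples: expect 2^3 = 8 views.
rows = [(1, 2, 3), (1, 2, 4), (1, 5, 3), (2, 2, 3)]
views = data_cube_views(rows, ["a", "b", "c"])
print(len(views))  # 8
```

The benchmark's data intensity follows directly from this structure: view sizes, and hence the volume of data moved, are governed by how many distinct keys each attribute subset produces.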

  10. GLOFRIM v1.0 - A globally applicable computational framework for integrated hydrological-hydrodynamic modelling

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F. P.

    2017-10-01

    We here present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and caters for an ensemble of models to be coupled. It currently encompasses the global hydrological model PCR-GLOBWB as well as the hydrodynamic models Delft3D Flexible Mesh (DFM; solving the full shallow-water equations and allowing for spatially flexible meshing) and LISFLOOD-FP (LFP; solving the local inertia equations and running on regular grids). The main advantages of the framework are its open and free access, its global applicability, its versatility, and its extensibility with other hydrological or hydrodynamic models. Before applying GLOFRIM to an actual test case, we benchmarked both DFM and LFP for a synthetic test case. Results show that for sub-critical flow conditions, discharge response to the same input signal is near-identical for both models, which agrees with previous studies. We subsequently applied the framework to the Amazon River basin to not only test the framework thoroughly, but also to perform a first-ever benchmark of flexible and regular grids on a large-scale. Both DFM and LFP produce comparable results in terms of simulated discharge with LFP exhibiting slightly higher accuracy as expressed by a Kling-Gupta efficiency of 0.82 compared to 0.76 for DFM. However, benchmarking inundation extent between DFM and LFP over the entire study area, a critical success index of 0.46 was obtained, indicating that the models disagree as often as they agree. Differences between models in both simulated discharge and inundation extent are to a large extent attributable to the gridding techniques employed. In fact, the results show that both the numerical scheme of the inundation model and the gridding technique can contribute to deviations in simulated inundation extent as we control for model forcing and boundary conditions. 
This study shows that the presented computational framework is robust and widely applicable. GLOFRIM is designed as open access and easily extendable, and thus we hope that other large-scale hydrological and hydrodynamic models will be added. Eventually, more locally relevant processes would be captured and more robust model inter-comparison, benchmarking, and ensemble simulations of flood hazard on a large scale would be allowed for.
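The two skill scores used in the benchmark can be computed as follows; this is a generic sketch of the standard formulas, not GLOFRIM code:

```python
from math import sqrt
from statistics import mean, pstdev

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    mo, ms = mean(obs), mean(sim)
    so, ss = pstdev(obs), pstdev(sim)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (len(obs) * so * ss)
    alpha, beta = ss / so, ms / mo   # variability and bias ratios
    return 1 - sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def csi(sim_wet, obs_wet):
    """Critical success index over per-cell wet/dry flags:
    hits / (hits + misses + false alarms)."""
    hits = sum(s and o for s, o in zip(sim_wet, obs_wet))
    misses = sum((not s) and o for s, o in zip(sim_wet, obs_wet))
    false_alarms = sum(s and (not o) for s, o in zip(sim_wet, obs_wet))
    return hits / (hits + misses + false_alarms)

print(kge([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # perfect match -> 1.0
print(csi([1, 1, 0, 1], [1, 0, 0, 1]))        # 2 hits, 1 false alarm -> 2/3
```

A CSI of 0.46, as reported for the DFM-versus-LFP inundation comparison, therefore means the two models' wet cells overlap in fewer than half of the cells flagged wet by either model.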

  11. Comparison and analysis of top 10 exercise Android apps in mainland China.

    PubMed

    Wang, Yanling; Sun, Liu; Xu, Yahong; Xiao, Qian; Chang, Polun; Wu, Ying

    2015-01-01

    Medical guidelines highly recommend physical activity and aerobic exercise in the prevention of primary and secondary cardiovascular disease. The use of exercise-promoting application software may improve clinical outcomes for cardiovascular disease (CVD) patients. The study aimed to compare and analyze the functions of the top 10 exercise Android Apps which had more than 1,000,000 downloads from the main four Android App stores in mainland China. The results showed that most of these popular apps had pedometer, exercise plan preset, user data presentation, user encouragement and community sharing functions while a few of them had exercise video clips or animation support and wearable devices. Given these data, the conclusion is that these popular apps fulfill some of the functions recommended by medical guidelines, however, lack of some functions such as pre-exercise risk assessment, the exercise intensity recording, specific instructions by professionals, and monitoring functions for CVD patients.

  12. Personalized Preventive Medicine: Genetics and the Response to Regular Exercise in Preventive Interventions

    PubMed Central

    Bouchard, Claude; Antunes-Correa, Ligia M.; Ashley, Euan A.; Franklin, Nina; Hwang, Paul M.; Mattsson, C. Mikael; Negrao, Carlos E.; Phillips, Shane A.; Sarzynski, Mark A.; Wang, Ping-yuan; Wheeler, Matthew T.

    2014-01-01

    Regular exercise and a physically active lifestyle have favorable effects on health. Several issues related to this theme are addressed in this report. A comment on the requirements of personalized exercise medicine and in-depth biological profiling along with the opportunities that they offer is presented. This is followed by a brief overview of the evidence for the contributions of genetic differences to the ability to benefit from regular exercise. Subsequently, studies showing that mutations in TP53 influence exercise capacity in mice and humans are succinctly described. The evidence for effects of exercise on endothelial function in health and disease also is covered. Finally, changes in cardiac and skeletal muscle in response to exercise and their implications for patients with cardiac disease are summarized. Innovative research strategies are needed to define the molecular mechanisms involved in adaptation to exercise and to translate them into useful clinical and public health applications. PMID:25559061

  13. Benchmarking the Cost per Person of Mass Treatment for Selected Neglected Tropical Diseases: An Approach Based on Literature Review and Meta-regression with Web-Based Software Application

    PubMed Central

    Fitzpatrick, Christopher; Fleming, Fiona M.; Madin-Warburton, Matthew; Schneider, Timm; Meheus, Filip; Asiedu, Kingsley; Solomon, Anthony W.; Montresor, Antonio; Biswas, Gautam

    2016-01-01

    Background: Advocacy around mass treatment for the elimination of selected Neglected Tropical Diseases (NTDs) has typically put the cost per person treated at less than US$ 0.50. Whilst useful for advocacy, the focus on a single number misrepresents the complexity of delivering “free” donated medicines to about a billion people across the world. We perform a literature review and meta-regression of the cost per person per round of mass treatment against NTDs. We develop a web-based software application (https://healthy.shinyapps.io/benchmark/) to calculate setting-specific unit costs against which programme budgets and expenditures or results-based pay-outs can be benchmarked. Methods: We reviewed costing studies of mass treatment for the control, elimination or eradication of lymphatic filariasis, schistosomiasis, soil-transmitted helminthiasis, onchocerciasis, trachoma and yaws. These are the main 6 NTDs for which mass treatment is recommended. We extracted financial and economic unit costs, adjusted to a standard definition and base year. We regressed unit costs on the number of people treated and other explanatory variables. Regression results were used to “predict” country-specific unit cost benchmarks. Results: We reviewed 56 costing studies and included in the meta-regression 34 studies from 23 countries and 91 sites. Unit costs were found to be very sensitive to economies of scale, and the decision of whether or not to use local volunteers. Financial unit costs are expected to be less than 2015 US$ 0.50 in most countries for programmes that treat 100 thousand people or more. However, for smaller programmes, including those in the “last mile”, or those that cannot rely on local volunteers, both economic and financial unit costs are expected to be higher. Discussion: The available evidence confirms that mass treatment offers a low cost public health intervention on the path towards universal health coverage. 
However, more costing studies focussed on elimination are needed. Unit cost benchmarks can help in monitoring value for money in programme plans, budgets and accounts, or in setting a reasonable pay-out for results-based financing mechanisms. PMID:27918573
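The economies-of-scale finding corresponds to a meta-regression of log unit cost on log programme size. A toy sketch with invented study-level data (the paper's actual regression includes further covariates such as volunteer use):

```python
from math import log, exp

def fit_loglog(n_treated, unit_cost):
    """OLS fit of ln(cost) = b0 + b1 * ln(n_treated)."""
    xs = [log(n) for n in n_treated]
    ys = [log(c) for c in unit_cost]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

def predict_cost(b0, b1, n):
    """Setting-specific unit-cost benchmark for a programme of size n."""
    return exp(b0 + b1 * log(n))

# Invented data: larger programmes -> lower cost per person treated.
n = [10_000, 50_000, 100_000, 500_000, 1_000_000]
cost = [2.00, 1.10, 0.80, 0.45, 0.35]  # US$ per person per round
b0, b1 = fit_loglog(n, cost)
print(b1 < 0)  # True: negative slope = economies of scale
```

A negative slope b1 is exactly the "very sensitive to economies of scale" result: doubling the number treated multiplies the predicted unit cost by 2^b1 < 1.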

  14. Disaster metrics: quantitative benchmarking of hospital surge capacity in trauma-related multiple casualty events.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; Subbarao, Italo

    2011-06-01

    Hospital surge capacity in multiple casualty events (MCE) is the core of hospital medical response, and an integral part of the total medical capacity of the community affected. To date, however, there has been no consensus regarding the definition or quantification of hospital surge capacity. The first objective of this study was to quantitatively benchmark the various components of hospital surge capacity pertaining to the care of critically and moderately injured patients in trauma-related MCE. The second objective was to illustrate the applications of those quantitative parameters in local, regional, national, and international disaster planning; in the distribution of patients to various hospitals by prehospital medical services; and in the decision-making process for ambulance diversion. A 2-step approach was adopted in the methodology of this study. First, an extensive literature search was performed, followed by mathematical modeling. Quantitative studies on hospital surge capacity for trauma injuries were used as the framework for our model. The North Atlantic Treaty Organization triage categories (T1-T4) were used in the modeling process for simplicity purposes. Hospital Acute Care Surge Capacity (HACSC) was defined as the maximum number of critical (T1) and moderate (T2) casualties a hospital can adequately care for per hour, after recruiting all possible additional medical assets. HACSC was modeled to be equal to the number of emergency department beds (#EDB), divided by the emergency department time (EDT); HACSC = #EDB/EDT. In trauma-related MCE, the EDT was quantitatively benchmarked to be 2.5 (hours). Because most of the critical and moderate casualties arrive at hospitals within a 6-hour period requiring admission (by definition), the hospital bed surge capacity must match the HACSC at 6 hours to ensure coordinated care, and it was mathematically benchmarked to be 18% of the staffed hospital bed capacity. 
Defining and quantitatively benchmarking the different components of hospital surge capacity is vital to hospital preparedness in MCE. Prospective studies of our mathematical model are needed to verify its applicability, generalizability, and validity.
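The article's two quantitative benchmarks translate directly into formulas; the hospital figures below are hypothetical:

```python
def hacsc(ed_beds, ed_time_hours=2.5):
    """Hospital Acute Care Surge Capacity: T1+T2 casualties per hour,
    benchmarked as HACSC = #EDB / EDT with EDT = 2.5 hours."""
    return ed_beds / ed_time_hours

def bed_surge_capacity(staffed_beds, fraction=0.18):
    """Admission capacity benchmarked at 18% of staffed hospital beds,
    matching HACSC over the first 6 hours."""
    return fraction * staffed_beds

# Hypothetical hospital: 30 ED beds, 400 staffed beds.
print(hacsc(30))                # 12.0 casualties per hour
print(bed_surge_capacity(400))  # 72.0 beds
```

Note the internal consistency of the benchmarks: 6 hours of HACSC admissions (6 × #EDB/2.5 = 2.4 × #EDB) is what the 18% bed figure is sized to absorb for a typical ED-to-staffed-bed ratio.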

  15. Benchmarking the Cost per Person of Mass Treatment for Selected Neglected Tropical Diseases: An Approach Based on Literature Review and Meta-regression with Web-Based Software Application.

    PubMed

    Fitzpatrick, Christopher; Fleming, Fiona M; Madin-Warburton, Matthew; Schneider, Timm; Meheus, Filip; Asiedu, Kingsley; Solomon, Anthony W; Montresor, Antonio; Biswas, Gautam

    2016-12-01

    Advocacy around mass treatment for the elimination of selected Neglected Tropical Diseases (NTDs) has typically put the cost per person treated at less than US$ 0.50. Whilst useful for advocacy, the focus on a single number misrepresents the complexity of delivering "free" donated medicines to about a billion people across the world. We perform a literature review and meta-regression of the cost per person per round of mass treatment against NTDs. We develop a web-based software application (https://healthy.shinyapps.io/benchmark/) to calculate setting-specific unit costs against which programme budgets and expenditures or results-based pay-outs can be benchmarked. We reviewed costing studies of mass treatment for the control, elimination or eradication of lymphatic filariasis, schistosomiasis, soil-transmitted helminthiasis, onchocerciasis, trachoma and yaws. These are the main 6 NTDs for which mass treatment is recommended. We extracted financial and economic unit costs, adjusted to a standard definition and base year. We regressed unit costs on the number of people treated and other explanatory variables. Regression results were used to "predict" country-specific unit cost benchmarks. We reviewed 56 costing studies and included in the meta-regression 34 studies from 23 countries and 91 sites. Unit costs were found to be very sensitive to economies of scale, and the decision of whether or not to use local volunteers. Financial unit costs are expected to be less than 2015 US$ 0.50 in most countries for programmes that treat 100 thousand people or more. However, for smaller programmes, including those in the "last mile", or those that cannot rely on local volunteers, both economic and financial unit costs are expected to be higher. The available evidence confirms that mass treatment offers a low cost public health intervention on the path towards universal health coverage. However, more costing studies focussed on elimination are needed. 
Unit cost benchmarks can help in monitoring value for money in programme plans, budgets and accounts, or in setting a reasonable pay-out for results-based financing mechanisms.

  16. Different weight bearing push-up plus exercises with and without isometric horizontal abduction in subjects with scapular winging: A randomized trial.

    PubMed

    Choi, Woo-Jeong; Yoon, Tae-Lim; Choi, Sil-Ah; Lee, Ji-Hyun; Cynn, Heon-Seock

    2017-07-01

    The aim of the present study was to determine whether the application of isometric horizontal abduction (IHA) differentially affected two weight-bearing push-up plus exercises by examining activation of the scapulothoracic muscles in subjects with scapular winging. Fifteen male subjects performed standard push-up plus (SPP) and wall push-up plus (WPP), with and without IHA. Two-way analyses of variance using two within-subject factors were used to determine the statistical significance of observed differences in upper trapezius (UT), pectoralis major (PM), and serratus anterior (SA) muscle activities and UT/SA and PM/SA muscle activity ratios. UT and SA muscle activities were greater during SPP than WPP. PM muscle activity was lower with IHA application. The UT/SA and PM/SA muscle activity ratios were lower during SPP than WPP. The PM/SA muscle activity ratio was lower with IHA application. The results suggest that IHA application using a Thera-Band can effectively reduce PM muscle activity during SPP and WPP exercises. Moreover, the SPP exercise can be used to increase UT and SA muscle activity and reduce the UT/SA and PM/SA muscle activity ratios in subjects with scapular winging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Using a simulation cell for exercise realism.

    PubMed

    Lerner, Ken

    2013-01-01

A simulation cell or SimCell is an effective and flexible tool for control of emergency management exercises. It allows exercise participants to interact, via simulation, with a wide variety of nonplaying organizations and officials. Adapted from military applications, the SimCell concept was applied, developed, and refined for emergency management exercises by the Chemical Stockpile Emergency Preparedness Program (CSEPP). It has now been incorporated into national exercise guidance through the Homeland Security Exercise and Evaluation Program, and has been used in a wide variety of national, regional, and local exercises. This article reviews development of the SimCell concept in CSEPP, briefly surveys current practice incorporating SimCells in exercise control, and offers practical lessons learned and tips on using a SimCell to best advantage. Lessons learned include using a SimCell as an exercise-control hub; preparing inject material for exercise controllers as part of the Master Scenario Event List; laying the groundwork for success through exercise player and controller training; developing protocols for SimCell communications; and capturing feedback from SimCell controllers for inclusion in the exercise evaluation reporting process. The SimCell concept is flexible and can be applied to a variety of exercise types and through a variety of methods.

  18. A new symmetrical quasi-classical model for electronically non-adiabatic processes: Application to the case of weak non-adiabatic coupling

    DOE PAGES

    Cotton, Stephen J.; Miller, William H.

    2016-10-14

Previous work has shown how a symmetrical quasi-classical (SQC) windowing procedure can be used to quantize the initial and final electronic degrees of freedom in the Meyer-Miller (MM) classical vibronic (i.e., nuclear + electronic) Hamiltonian, and that the approach provides a very good description of electronically non-adiabatic processes within a standard classical molecular dynamics framework for a number of benchmark problems. This study explores application of the SQC/MM approach to the case of very weak non-adiabatic coupling between the electronic states, showing (as anticipated) how the standard SQC/MM approach used to date fails in this limit, and then devises a new SQC windowing scheme to deal with it. Finally, application of this new SQC model to a variety of realistic benchmark systems shows that the new model not only treats the weak coupling case extremely well, but it is also seen to describe the “normal” regime (of electronic transition probabilities ≳ 0.1) even more accurately than the previous “standard” model.

  19. A new symmetrical quasi-classical model for electronically non-adiabatic processes: Application to the case of weak non-adiabatic coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotton, Stephen J.; Miller, William H.

Previous work has shown how a symmetrical quasi-classical (SQC) windowing procedure can be used to quantize the initial and final electronic degrees of freedom in the Meyer-Miller (MM) classical vibronic (i.e., nuclear + electronic) Hamiltonian, and that the approach provides a very good description of electronically non-adiabatic processes within a standard classical molecular dynamics framework for a number of benchmark problems. This study explores application of the SQC/MM approach to the case of very weak non-adiabatic coupling between the electronic states, showing (as anticipated) how the standard SQC/MM approach used to date fails in this limit, and then devises a new SQC windowing scheme to deal with it. Finally, application of this new SQC model to a variety of realistic benchmark systems shows that the new model not only treats the weak coupling case extremely well, but it is also seen to describe the “normal” regime (of electronic transition probabilities ≳ 0.1) even more accurately than the previous “standard” model.

  20. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA was implemented on an Intel Xeon multi-core CPU (using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. Comparison results indicate that the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation achieved the best overall acceleration but required the most complex source code. The parallel SCE-UA has bright prospects for application to real-world problems.
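The Griewank function used as the test problem above has a simple closed form, and scoring a population of candidate points is the embarrassingly parallel step that the paper maps onto OpenMP, OpenCL, CUDA, and OpenACC. A minimal sketch of the benchmark function and the population evaluation (not the authors' SCE-UA implementation):

```python
import numpy as np

def griewank(x):
    """Griewank benchmark function; global minimum f(0) = 0.
    f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i)))"""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

# Scoring a population of candidate points is the step that maps
# naturally onto many CPU/GPU threads in the parallel SCE-UA.
rng = np.random.default_rng(0)
pop = rng.uniform(-600.0, 600.0, size=(64, 10))  # 64 candidates in 10-D
scores = np.array([griewank(p) for p in pop])
best = pop[np.argmin(scores)]
```

Each candidate evaluation is independent, so the loop over `pop` can be distributed across threads or GPU work items without synchronization.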

  1. Role of metabolic stress for enhancing muscle adaptations: Practical applications

    PubMed Central

    de Freitas, Marcelo Conrado; Gerosa-Neto, Jose; Zanchi, Nelo Eidy; Lira, Fabio Santos; Rossi, Fabrício Eduardo

    2017-01-01

Metabolic stress is a physiological process that occurs during exercise in response to low energy and leads to metabolite accumulation [lactate, inorganic phosphate (Pi) and hydrogen ions (H+)] in muscle cells. Traditional exercise protocols (i.e., resistance training) have an important impact on the increase of metabolite accumulation, which influences hormonal release, hypoxia, reactive oxygen species (ROS) production and cell swelling. Changes in acute exercise routines, such as intensity, volume and rest between sets, are determinants of the magnitude of metabolic stress; furthermore, different types of training, such as low-intensity resistance training plus blood flow restriction and high-intensity interval training, can be used to maximize metabolic stress during exercise. Thus, the objective of this review is to describe practical applications that induce metabolic stress and the potential effects of metabolic stress in increasing systemic hormonal release, hypoxia, ROS production, cell swelling and muscle adaptations. PMID:28706859

  2. Cardiorespiratory Fitness and Atherosclerosis: Recent Data and Future Directions.

    PubMed

    Mehanna, Emile; Hamik, Anne; Josephson, Richard A

    2016-05-01

    Historically, the relationship between exercise and the cardiovascular system was viewed as unidirectional, with a disease resulting in exercise limitation and hazard. This article reviews and explores the bidirectional nature, delineating the effects, generally positive, on the cardiovascular system and atherosclerosis. Exercise augments eNOS, affects redox potential, and favorably affects mediators of atherosclerosis including lipids, glucose homeostasis, and inflammation. There are direct effects on the vasculature as well as indirect benefits related to exercise-induced changes in body composition and skeletal muscle. Application of aerobic exercise to specific populations is described, with the hope that this knowledge will move the science forward and improve individual patient outcome.

  3. Cardiorespiratory Fitness and Atherosclerosis: Recent Data and Future Directions

    PubMed Central

    Mehanna, Emile; Hamik, Anne; Josephson, Richard A

    2017-01-01

Historically, the relationship between exercise and the cardiovascular system was viewed as unidirectional, with disease resulting in exercise limitation and hazard. This article reviews and explores the bidirectional nature, delineating the effects, generally positive, on the cardiovascular system and atherosclerosis. Exercise augments eNOS, affects redox potential, and favorably affects mediators of atherosclerosis including lipids, glucose homeostasis, and inflammation. There are direct effects on the vasculature as well as indirect benefits related to exercise-induced changes in body composition and skeletal muscle. Application of aerobic exercise to specific populations is described, with the hope that this knowledge will move the science forward and improve individual patient outcome. PMID:27005804

  4. Exercise, fitness, and the gut.

    PubMed

    Cronin, Owen; Molloy, Michael G; Shanahan, Fergus

    2016-03-01

    Exercise and gut symptomatology have long been connected. The possibility that regular exercise fosters intestinal health and function has been somewhat overlooked in the scientific literature. In this review, we summarize current knowledge and discuss a selection of recent, relevant, and innovative studies, hypotheses and reviews that elucidate a complex topic. The multiorgan benefits of regular exercise are extensive. When taken in moderation, these benefits transcend improved cardio-respiratory fitness and likely reach the gut in a metabolic, immunological, neural, and microbial manner. This is applicable in both health and disease. However, further work is required to provide safe, effective recommendations on physical activity in specific gastrointestinal conditions. Challenging methodology investigating the relationship between exercise and gut health should not deter from exploring exercise in the promotion of gastrointestinal health.

  5. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, the performance of advanced computer technologies was compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced.
This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  6. Construct Validity of Physical Fitness Tests

    DTIC Science & Technology

    2011-02-03

    Medicine and Science in Sports and Exercise , 21, 319-324. *Fleishman, E. A. (1964). The structure and measurement of physical fitness. Englewood Cliffs...Quarterly for Exercise and Sport, 64, 256-273. *McCloy, E. (1935). Factor analysis methods in the measurement of physical abilities. Research Quarterly...Research Quarterly, 34, 525. Physical Fitness Test Validity 23 Powers, S. K., & Howley, E. T. (1990). Exercise physiology: Theory and application to

  7. Benchmarking comparison and validation of MCNP photon interaction data

    NASA Astrophysics Data System (ADS)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.

  8. VRACK: measuring pedal kinematics during stationary bike cycling.

    PubMed

    Farjadian, Amir B; Kong, Qingchao; Gade, Venkata K; Deutsch, Judith E; Mavroidis, Constantinos

    2013-06-01

    Ankle impairment and lower limb asymmetries in strength and coordination are common symptoms for individuals with selected musculoskeletal and neurological impairments. The virtual reality augmented cycling kit (VRACK) was designed as a compact mechatronics system for lower limb and mobility rehabilitation. The system measures interaction forces and cardiac activity during cycling in a virtual environment. The kinematics measurement was added to the system. Due to the constrained problem definition, the combination of inertial measurement unit (IMU) and Kalman filtering was recruited to compute the optimal pedal angular displacement during dynamic cycling exercise. Using a novel benchmarking method the accuracy of IMU-based kinematics measurement was evaluated. Relatively accurate angular measurements were achieved. The enhanced VRACK system can serve as a rehabilitation device to monitor biomechanical and physiological variables during cycling on a stationary bike.
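The abstract's combination of an inertial measurement unit with Kalman filtering for pedal angle can be illustrated with a minimal one-state filter that integrates a gyro rate in the prediction step and corrects with an accelerometer-derived angle. The structure and noise parameters below are assumptions for illustration; the paper's actual filter design is not given in the abstract:

```python
class AngleKalman:
    """One-state Kalman filter: predict the angle by integrating a
    gyro rate, correct with an accelerometer-derived angle. The noise
    values q and r are placeholders, not the VRACK tuning."""
    def __init__(self, q=0.01, r=0.5):
        self.angle = 0.0   # estimated pedal angle (rad)
        self.p = 1.0       # estimate variance
        self.q = q         # process noise (gyro drift)
        self.r = r         # measurement noise (accel-derived angle)

    def update(self, gyro_rate, accel_angle, dt):
        self.angle += gyro_rate * dt                  # predict
        self.p += self.q * dt
        k = self.p / (self.p + self.r)                # Kalman gain
        self.angle += k * (accel_angle - self.angle)  # correct
        self.p *= 1.0 - k
        return self.angle

# Track a pedal spinning at a constant 1 rad/s for one second.
kf = AngleKalman()
true_angle, dt = 0.0, 0.01
for _ in range(100):
    true_angle += 1.0 * dt
    est = kf.update(gyro_rate=1.0, accel_angle=true_angle, dt=dt)
```

The gyro keeps the estimate responsive during fast pedalling, while the accelerometer correction bounds the drift that pure integration would accumulate.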

  9. Application of the probabilistic model BET_UNREST during a volcanic unrest simulation exercise in Dominica, Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Constantinescu, Robert; Robertson, Richard; Lindsay, Jan M.; Tonini, Roberto; Sandri, Laura; Rouwet, Dmitri; Smith, Patrick; Stewart, Roderick

    2016-11-01

    We report on the first "real-time" application of the BET_UNREST (Bayesian Event Tree for Volcanic Unrest) probabilistic model, during a VUELCO Simulation Exercise carried out on the island of Dominica, Lesser Antilles, in May 2015. Dominica has a concentration of nine potentially active volcanic centers and frequent volcanic earthquake swarms at shallow depths, intense geothermal activity, and recent phreatic explosions (1997) indicate the region is still active. The exercise scenario was developed in secret by a team of scientists from The University of the West Indies (Trinidad and Tobago) and University of Auckland (New Zealand). The simulated unrest activity was provided to the exercise's Scientific Team in three "phases" through exercise injects comprising processed monitoring data. We applied the newly created BET_UNREST model through its software implementation PyBetUnrest, to estimate the probabilities of having (i) unrest of (ii) magmatic, hydrothermal or tectonic origin, which may or may not lead to (iii) an eruption. The probabilities obtained for each simulated phase raised controversy and intense deliberations among the members of the Scientific Team. The results were often considered to be "too high" and were not included in any of the reports presented to ODM (Office for Disaster Management) revealing interesting crisis communication challenges. We concluded that the PyBetUnrest application itself was successful and brought the tool one step closer to a full implementation. However, as with any newly proposed method, it needs more testing, and in order to be able to use it in the future, we make a series of recommendations for future applications.
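The event-tree structure the abstract describes, probabilities of (i) unrest, (ii) its origin, and (iii) eruption, chains conditional probabilities down the tree. A toy sketch with hypothetical point values (the real BET_UNREST model handles multiple origin branches and propagates full probability distributions, not point estimates):

```python
def eruption_probability(p_unrest, p_magmatic_given_unrest,
                         p_eruption_given_magmatic):
    """Multiply the conditional probabilities along one branch of a
    simplified unrest -> magmatic origin -> eruption event tree."""
    return p_unrest * p_magmatic_given_unrest * p_eruption_given_magmatic

# Hypothetical point values for one simulated exercise phase.
p = eruption_probability(0.8, 0.4, 0.1)
```

Updating any node's probability as new monitoring data arrive (the "phases" of the exercise) propagates directly to the eruption probability at the leaf.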

  10. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    EPA Pesticide Factsheets

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a

  11. Metric Evaluation Pipeline for 3d Modeling of Urban Scenes

    NASA Astrophysics Data System (ADS)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline, developed as publicly available open source software, to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
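Completeness and correctness, two of the metric families the pipeline reports, are commonly defined as the fraction of ground-truth points matched by the model within a distance tolerance, and vice versa. A brute-force sketch of those definitions (illustrative only; not the released pipeline code, and the points and tolerance are made up):

```python
import numpy as np

def completeness_correctness(model_pts, truth_pts, tol=1.0):
    """Completeness: fraction of ground-truth points with a model
    point within tol. Correctness: fraction of model points with a
    ground-truth point within tol. Brute-force nearest neighbour."""
    d = np.linalg.norm(model_pts[:, None, :] - truth_pts[None, :, :], axis=2)
    completeness = float(np.mean(d.min(axis=0) <= tol))
    correctness = float(np.mean(d.min(axis=1) <= tol))
    return completeness, correctness

# Toy scene: two model points match the truth, one is spurious,
# and one ground-truth point goes unmodelled.
truth = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
model = np.array([[0.2, 0.0, 0.0], [10.0, 0.1, 0.0], [50.0, 50.0, 0.0]])
comp, corr = completeness_correctness(model, truth, tol=1.0)
```

Production pipelines replace the O(n²) distance matrix with a spatial index (e.g. a k-d tree) for lidar-scale point counts.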

  12. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  13. CERN IRRADIATION FACILITIES.

    PubMed

    Pozzi, Fabio; Garcia Alia, Ruben; Brugger, Markus; Carbonez, Pierre; Danzeca, Salvatore; Gkotse, Blerina; Richard Jaekel, Martin; Ravotti, Federico; Silari, Marco; Tali, Maris

    2017-09-28

    CERN provides unique irradiation facilities for applications in dosimetry, metrology, intercomparison of radiation protection devices, benchmark of Monte Carlo codes and radiation damage studies to electronics. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Acute nutritional ketosis: implications for exercise performance and metabolism

    PubMed Central

    2014-01-01

    Ketone bodies acetoacetate (AcAc) and D-β-hydroxybutyrate (βHB) may provide an alternative carbon source to fuel exercise when delivered acutely in nutritional form. The metabolic actions of ketone bodies are based on sound evolutionary principles to prolong survival during caloric deprivation. By harnessing the potential of these metabolic actions during exercise, athletic performance could be influenced, providing a useful model for the application of ketosis in therapeutic conditions. This article examines the energetic implications of ketone body utilisation with particular reference to exercise metabolism and substrate energetics. PMID:25379174

  15. Benchmarking density functional tight binding models for barrier heights and reaction energetics of organic molecules.

    PubMed

    Gruden, Maja; Andjeklović, Ljubica; Jissy, Akkarapattiakal Kuriappan; Stepanović, Stepan; Zlatar, Matija; Cui, Qiang; Elstner, Marcus

    2017-09-30

Density Functional Tight Binding (DFTB) models are two to three orders of magnitude faster than ab initio and Density Functional Theory (DFT) methods and therefore are particularly attractive in applications to large molecules and condensed phase systems. To establish the applicability of DFTB models to general chemical reactions, we conduct benchmark calculations for barrier heights and reaction energetics of organic molecules using existing databases and several new ones compiled in this study. Structures for the transition states and stable species have been fully optimized at the DFTB level, making it possible to characterize the reliability of DFTB models in a more thorough fashion compared to conducting single point energy calculations as done in previous benchmark studies. The encouraging results for the diverse sets of reactions studied here suggest that DFTB models, especially the most recent third-order version (DFTB3/3OB augmented with dispersion correction), in most cases provide satisfactory description of organic chemical reactions with accuracy almost comparable to popular DFT methods with large basis sets, although larger errors are also seen for certain cases. Therefore, DFTB models can be effective for mechanistic analysis (e.g., transition state search) of large (bio)molecules, especially when coupled with single point energy calculations at higher levels of theory. © 2017 Wiley Periodicals, Inc.

  16. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum.

  17. Supportive decision making at the point of care: refinement of a case-based reasoning application for use in nursing practice.

    PubMed

    DI Pietro, Tammie L; Doran, Diane M; McArthur, Gregory

    2010-01-01

    Variations in nursing care have been observed, affecting patient outcomes and quality of care. Case-based reasoners that benchmark for patient indicators can reduce variation through decision support. This study evaluated and validated a case-based reasoning application to establish benchmarks for nursing-sensitive patient outcomes of pain, fatigue, and toilet use, using patient characteristic variables for generating similar cases. Three graduate nursing students participated. Each ranked 25 patient cases using demographics of age, sex, diagnosis, and comorbidities against 10 patients from a database. Participant judgments of case similarity were compared with the case-based reasoning system. Feature weights for each indicator were adjusted to make the case-based reasoning system's similarity ranking correspond more closely to participant judgment. Small differences were noted between initial weights and weights generated from participants. For example, initial weight for comorbidities was 0.35, whereas weights generated by participants for pain, fatigue, and toilet use were 0.49, 0.42, and 0.48, respectively. For the same outcomes, the initial weight for sex was 0.15, but weights generated by the participants were 0.025, 0.002, and 0.000, respectively. Refinement of the case-based reasoning tool established valid benchmarks for patient outcomes in relation to participants and assisted in point-of-care decision making.
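Case retrieval in a case-based reasoner of this kind typically ranks stored cases by a weighted sum of per-feature similarities, with the feature weights (such as the comorbidity and sex weights tuned against participant judgments above) controlling how much each attribute drives retrieval. A hedged sketch; the feature set, the age scaling, and the example weights are made up for illustration, not taken from the study's system:

```python
def weighted_similarity(case_a, case_b, weights):
    """Weighted per-feature similarity in [0, 1] for case retrieval.
    The age fall-off and feature types are illustrative assumptions."""
    total = sum(weights.values())
    score = 0.0
    for feat, w in weights.items():
        if feat == "age":   # numeric feature: linear fall-off over 100 years
            sim = max(0.0, 1.0 - abs(case_a[feat] - case_b[feat]) / 100.0)
        else:               # categorical feature: exact match or not
            sim = 1.0 if case_a[feat] == case_b[feat] else 0.0
        score += w * sim
    return score / total

# Hypothetical weights in the spirit of those reported for pain:
# comorbidities dominate, sex contributes almost nothing.
weights = {"age": 0.10, "sex": 0.025, "diagnosis": 0.30, "comorbidities": 0.49}
case_a = {"age": 72, "sex": "F", "diagnosis": "CHF", "comorbidities": "2+"}
case_b = dict(case_a, sex="M")
s_same = weighted_similarity(case_a, case_a, weights)
s_diff = weighted_similarity(case_a, case_b, weights)
```

Ranking the case base by this score and taking the top matches yields the benchmark cases against which a new patient's outcomes are compared.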

  18. Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Case, Jonathan; Venner, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; O'Brien, Raymond

    2015-01-01

Cloud computing capabilities have rapidly expanded within the private sector, offering new opportunities for meteorological applications. Collaborations between NASA Marshall, NASA Ames, and contractor partners led to evaluations of private (NASA) and public (Amazon) resources for executing short-term NWP systems. Activities helped the Marshall team further understand cloud capabilities and benchmark the use of cloud resources for NWP and other applications.

  19. How well do we characterize the biophysical effects of vegetation cover change? Benchmarking land surface models against satellite observations.

    NASA Astrophysics Data System (ADS)

    Duveiller, Gregory; Forzieri, Giovanni; Robertson, Eddy; Georgievski, Goran; Li, Wei; Lawrence, Peter; Ciais, Philippe; Pongratz, Julia; Sitch, Stephen; Wiltshire, Andy; Arneth, Almut; Cescatti, Alessandro

    2017-04-01

Changes in vegetation cover can affect the climate by altering the carbon, water and energy cycles. The main tools to characterize such land-climate interactions for both the past and future are land surface models (LSMs) that can be embedded in larger Earth System models (ESMs). While such models have long been used to characterize the biogeochemical effects of vegetation cover change, their capacity to model biophysical effects accurately across the globe remains unclear due to the complexity of the phenomena. The result of competing biophysical processes on the surface energy balance varies spatially and seasonally, and can lead to warming or cooling depending on the specific vegetation change and on the background climate (e.g. presence of snow or soil moisture). Here we present a global-scale benchmarking exercise of four of the most commonly used LSMs (JULES, ORCHIDEE, JSBACH and CLM) against a dedicated dataset of satellite observations. To facilitate the understanding of the causes that lead to discrepancies between simulated and observed data, we focus on pure transitions amongst major plant functional types (PFTs): from different tree types (evergreen broadleaf trees, deciduous broadleaf trees and needleleaf trees) to either grasslands or crops. From the modelling perspective, this entails generating a separate simulation for each PFT in which all 1° by 1° grid cells are uniformly covered with that PFT, and then analysing the differences amongst them in terms of resulting biophysical variables (e.g. net radiation, latent and sensible heat). From the satellite perspective, the effect of pure transitions is obtained by unmixing the signal of different 0.05° spatial resolution MODIS products (albedo, latent heat, upwelling longwave radiation) over a local moving window using PFT maps derived from the ESA Climate Change Initiative land cover map.
After aggregating to a common spatial support, the observation- and model-driven datasets are confronted and analysed across different climate zones. Results indicate that models tend to capture radiative energy fluxes better than non-radiative ones. However, for various vegetation transitions, models do not agree amongst themselves on either the magnitude or the sign of the change. In particular, predicting the impact of land cover change on the partitioning of the available energy between latent and sensible heat proves to be a challenging task for vegetation models. We expect that this benchmarking exercise will shed light on where to prioritize model development efforts as well as indicate where consensus between models and observations has already been met. Improving the robustness and consistency of land surface models is essential to develop and inform land-based mitigation and adaptation policies that account for both the biogeochemical and biophysical impacts of vegetation on climate.

  20. A Preliminary Study on the Feasibility of Using a Virtual Reality Cognitive Training Application for Remote Detection of Mild Cognitive Impairment.

    PubMed

    Zygouris, Stelios; Ntovas, Konstantinos; Giakoumis, Dimitrios; Votis, Konstantinos; Doumpoulakis, Stefanos; Segkouli, Sofia; Karagiannidis, Charalampos; Tzovaras, Dimitrios; Tsolaki, Magda

    2017-01-01

    It has been demonstrated that virtual reality (VR) applications can be used for the detection of mild cognitive impairment (MCI). The aim of this study is to provide a preliminary investigation on whether a VR cognitive training application can be used to detect MCI in persons using the application at home without the help of an examiner. Two groups, one of healthy older adults (n = 6) and one of MCI patients (n = 6), were recruited from Thessaloniki day centers for cognitive disorders and provided with a tablet PC with custom software enabling the self-administration of the Virtual Super Market (VSM) cognitive training exercise. The average performance (from 20 administrations of the exercise) of the two groups was compared and was also correlated with performance in established neuropsychological tests. Average performance in terms of duration to complete the given exercise differed significantly between the healthy (μ = 247.41 s, sd = 89.006) and MCI (μ = 454.52 s, sd = 177.604) groups, yielding a correct classification rate of 91.8%, with a sensitivity of 94% and a specificity of 89% for MCI detection. Average performance also correlated significantly with performance in the Functional Cognitive Assessment Scale (FUCAS), the Test of Everyday Attention (TEA), and the Rey-Osterrieth Complex Figure Test (ROCFT). The VR application exhibited very high accuracy in detecting MCI, and all participants were able to operate the tablet and application on their own. Diagnostic accuracy was improved compared to a previous study using data from only one administration of the exercise. The results of the present study suggest that remote MCI detection through VR applications can be feasible.
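
    The detection logic reported above amounts to thresholding a subject's average completion duration. The sketch below illustrates that pattern only; the durations and cut-off are hypothetical, not the study's classifier or data.

```python
# Flag a subject as possible MCI when mean exercise duration exceeds a cut-off.
def classify(durations, threshold):
    return [d > threshold for d in durations]  # True = flagged as MCI

healthy = [230, 260, 210, 300, 250, 235]   # hypothetical mean durations (s)
mci     = [420, 480, 390, 510, 450, 465]   # hypothetical mean durations (s)

threshold = 350.0  # hypothetical cut-off between the two group means
flags = classify(healthy + mci, threshold)
sensitivity = sum(flags[6:]) / 6       # MCI subjects correctly flagged
specificity = 1 - sum(flags[:6]) / 6   # healthy subjects correctly passed
print(sensitivity, specificity)  # 1.0 1.0 on this toy data
```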

  1. Effects of nasal positive expiratory pressure on dynamic hyperinflation and 6-minute walk test in patients with COPD.

    PubMed

    Wibmer, Thomas; Rüdiger, Stefan; Heitner, Claudia; Kropf-Sanchen, Cornelia; Blanta, Ioanna; Stoiber, Kathrin M; Rottbauer, Wolfgang; Schumann, Christian

    2014-05-01

    Dynamic hyperinflation is an important target in the treatment of COPD. There is increasing evidence that positive expiratory pressure (PEP) could reduce dynamic hyperinflation during exercise. PEP application through a nasal mask and a flow resistance device might have the potential to be used during daily physical activities as an auxiliary strategy of ventilatory assistance. The aim of this study was to determine the effects of nasal PEP on lung volumes during physical exercise in patients with COPD. Twenty subjects (mean ± SD age 69.4 ± 6.4 years) with stable mild-to-severe COPD were randomized to undergo physical exercise with nasal PEP breathing, followed by physical exercise with habitual breathing, or vice versa. Physical exercise was induced by a standard 6-min walk test (6MWT) protocol. PEP was applied by means of a silicone nasal mask loaded with a fixed-orifice flow resistor. Body plethysmography was performed immediately pre-exercise and post-exercise. Differences in mean pre- to post-exercise changes in total lung capacity (-0.63 ± 0.80 L, P = .002), functional residual capacity (-0.48 ± 0.86 L, P = .021), residual volume (-0.56 ± 0.75 L, P = .004), SpO2 (-1.7 ± 3.4%, P = .041), and 6MWT distance (-30.8 ± 30.0 m, P = .001) were statistically significant between the experimental and the control interventions. The use of flow-dependent expiratory pressure, applied with a nasal mask and a PEP device, might promote significant reduction of dynamic hyperinflation during walking exercise. Further studies are warranted addressing improvements in endurance performance under regular application of nasal PEP during physical activities.
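
    In a crossover design like this, each subject contributes a pre-to-post change under both interventions, and the two sets of changes are compared with a paired test. A minimal sketch of that analysis pattern follows; the values are illustrative, not the study's data, and the study's exact statistical procedure is not stated in the abstract.

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic for two matched samples."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

delta_pep     = [-0.5, -0.7, -0.9, -0.4, -0.8, -0.6]  # hypothetical ΔTLC with nasal PEP (L)
delta_control = [ 0.1, -0.1,  0.0,  0.2, -0.2,  0.1]  # hypothetical ΔTLC, habitual breathing (L)

t = paired_t(delta_pep, delta_control)
print(round(t, 2))  # strongly negative: changes with PEP are systematically lower
```

    With |t| far above the two-tailed 5% critical value for 5 degrees of freedom (2.571), the toy data would be declared significant.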

  2. Interplanetary Overlay Network Bundle Protocol Implementation

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    The Interplanetary Overlay Network (ION) system's BP package, an implementation of the Delay-Tolerant Networking (DTN) Bundle Protocol (BP) and supporting services, has been specifically designed to be suitable for use on deep-space robotic vehicles. Although the ION BP implementation is unique in its use of zero-copy objects for high performance, and in its use of resource-sensitive rate control, it is fully interoperable with other implementations of the BP specification (Internet RFC 5050). The ION BP implementation is built using the same software infrastructure that underlies the implementation of the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol (CFDP) built into the flight software of Deep Impact. It is designed to minimize resource consumption, while maximizing operational robustness. For example, no dynamic allocation of system memory is required. Like all the other ION packages, ION's BP implementation is designed to port readily between Linux and Solaris (for easy development and for ground system operations) and VxWorks (for flight systems operations). The exact same source code is exercised in both environments. Initially included in the ION BP implementation are the following: libraries of functions used in constructing bundle forwarders and convergence-layer (CL) input and output adapters; a simple prototype bundle forwarder and associated CL adapters designed to run over an IP-based local area network; administrative tools for managing a simple DTN infrastructure built from these components; a background daemon process that silently destroys bundles whose time-to-live intervals have expired; a library of functions exposed to applications, enabling them to issue and receive data encapsulated in DTN bundles; and some simple applications that can be used for system checkout and benchmarking.
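
    The background expiry daemon mentioned above can be sketched in a few lines. This is an illustrative model of the behavior (destroy bundles whose time-to-live has elapsed), not ION source code; the class and method names are invented for this sketch.

```python
import heapq
import time

class BundleStore:
    """Toy bundle store: bundles are queued in a min-heap keyed by expiry time."""

    def __init__(self):
        self._heap = []  # (expiry_time, bundle_id)

    def admit(self, bundle_id, ttl, now=None):
        now = time.monotonic() if now is None else now
        heapq.heappush(self._heap, (now + ttl, bundle_id))

    def purge_expired(self, now=None):
        """One pass of the expiry daemon: silently drop all expired bundles."""
        now = time.monotonic() if now is None else now
        dropped = []
        while self._heap and self._heap[0][0] <= now:
            _, bundle_id = heapq.heappop(self._heap)
            dropped.append(bundle_id)
        return dropped

store = BundleStore()
store.admit("b1", ttl=5, now=0.0)
store.admit("b2", ttl=60, now=0.0)
dropped = store.purge_expired(now=10.0)
print(dropped)  # ['b1']; "b2" still has 50 s to live and remains queued
```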

  3. Effectiveness of an Intensive Handwriting Program for First Grade Students Using the Application Letterschool: A Pilot Study

    ERIC Educational Resources Information Center

    Jordan, Géraldine; Michaud, Fanny; Kaiser, Marie-Laure

    2016-01-01

    The purpose of this pilot study is to analyze the efficacy of a program that combines fine motor activities, animated models, exercises on a digital tablet and paper-pencil exercises. The 10-week program with a 45-minute session and daily exercises was implemented in a class of 16 students of first grade (mean age = 6.9 years old), with another…

  4. The emerging role of exercise testing and stress echocardiography in valvular heart disease.

    PubMed

    Picano, Eugenio; Pibarot, Philippe; Lancellotti, Patrizio; Monin, Jean Luc; Bonow, Robert O

    2009-12-08

    Exercise testing has an established role in the evaluation of patients with valvular heart disease and can aid clinical decision making. Because symptoms may develop slowly and indolently in chronic valve diseases and are often not recognized by patients and their physicians, the symptomatic, blood pressure, and electrocardiographic responses to exercise can help identify patients who would benefit from early valve repair or replacement. In addition, stress echocardiography has emerged as an important component of stress testing in patients with valvular heart disease, with relevant established and potential applications. Stress echocardiography has the advantages of its wide availability, low cost, and versatility for the assessment of disease severity. The versatile applications of stress echocardiography can be tailored to the individual patient with aortic or mitral valve disease, both before and after valve replacement or repair. Hence, exercise-induced changes in valve hemodynamics, ventricular function, and pulmonary artery pressure, together with exercise capacity and symptomatic responses to exercise, provide the clinician with diagnostic and prognostic information that can contribute to subsequent clinical decisions. Nevertheless, there is a lack of convincing evidence that the results of stress echocardiography lead to clinical decisions that result in better outcomes, and therefore large-scale prospective randomized studies focusing on patient outcomes are needed in the future.

  5. Field Projects with Rivers for Introductory Physical-Geology Laboratories.

    ERIC Educational Resources Information Center

    Cordua, William S.

    1983-01-01

    Discusses exercises using a river for the study of river processes and landforms. Although developed for college, they can be adapted for other levels. Exercises involve discharge measurement, flood prediction, and application of the Hjulstrom diagram to river sediments. (JN)

  6. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance

    PubMed Central

    Rand, Hugh; Shumway, Martin; Trees, Eija K.; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E.; Defibaugh-Chavez, Stephanie; Carleton, Heather A.; Klimke, William A.; Katz, Lee S.

    2017-01-01

    Background As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. Methods We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and “known” phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Results Our “outbreak” benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the “known tree” can be accurately called the “true tree”. The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. 
Discussion These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools—we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines. PMID:29372115

  7. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance.

    PubMed

    Timme, Ruth E; Rand, Hugh; Shumway, Martin; Trees, Eija K; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E; Defibaugh-Chavez, Stephanie; Carleton, Heather A; Klimke, William A; Katz, Lee S

    2017-01-01

    As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and "known" phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Our "outbreak" benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the "known tree" can be accurately called the "true tree". The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets.
These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools; we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines.
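
    The download workflow described above, a script driven by a standard descriptive table, can be sketched as follows. The column names here are assumptions for illustration; the actual format is defined in the WGS-standards-and-analysis GitHub repository.

```python
import csv
import io

# Hypothetical minimal descriptive table for one benchmark dataset (tab-separated).
TABLE = """\
biosample_acc\tsrr_acc\tstrain
SAMN001\tSRR000001\tisolate_A
SAMN002\tSRR000002\tisolate_B
"""

def accessions(table_text):
    """Parse a descriptive table and return the sequence-run accessions to fetch."""
    reader = csv.DictReader(io.StringIO(table_text), delimiter="\t")
    return [row["srr_acc"] for row in reader]

print(accessions(TABLE))  # ['SRR000001', 'SRR000002']
```

    A real downloader would pass each accession to a fetch tool; the point of the standard format is that the same script works for every benchmark dataset.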

  8. Remote Adaptive Motor Resistance Training Exercise Apparatus and Method of Use Thereof

    NASA Technical Reports Server (NTRS)

    Reich, Alton (Inventor); Shaw, James (Inventor)

    2017-01-01

    The invention comprises a method and/or an apparatus using a computer configured exercise system equipped with an electric motor to provide physical resistance to user motion in conjunction with means for sharing exercise system related data and/or user performance data with a secondary user, such as a medical professional, a physical therapist, a trainer, a computer generated competitor, and/or a human competitor. For example, the exercise system is used with a remote trainer to enhance exercise performance, with a remote medical professional for rehabilitation, and/or with a competitor in a competition, such as in a power/weightlifting competition or in a video game. The exercise system is optionally configured with an intelligent software assistant and knowledge navigator functioning as a personal assistant application.

  9. Remote Adaptive Motor Resistance Training Exercise Apparatus and Method of Use Thereof

    NASA Technical Reports Server (NTRS)

    Shaw, James (Inventor); Reich, Alton (Inventor)

    2016-01-01

    The invention comprises a method and/or an apparatus using a computer configured exercise system equipped with an electric motor to provide physical resistance to user motion in conjunction with means for sharing exercise system related data and/or user performance data with a secondary user, such as a medical professional, a physical therapist, a trainer, a computer generated competitor, and/or a human competitor. For example, the exercise system is used with a remote trainer to enhance exercise performance, with a remote medical professional for rehabilitation, and/or with a competitor in a competition, such as in a power/weightlifting competition or in a video game. The exercise system is optionally configured with an intelligent software assistant and knowledge navigator functioning as a personal assistant application.

  10. Organic contaminants, trace and major elements, and nutrients in water and sediment sampled in response to the Deepwater Horizon oil spill

    USGS Publications Warehouse

    Nowell, Lisa H.; Ludtke, Amy S.; Mueller, David K.; Scott, Jonathon C.

    2012-01-01

    Beach water and sediment samples were collected along the Gulf of Mexico coast to assess differences in contaminant concentrations before and after landfall of Macondo-1 well oil released into the Gulf of Mexico from the sinking of the British Petroleum Corporation's Deepwater Horizon drilling platform. Samples were collected at 70 coastal sites between May 7 and July 7, 2010, to document baseline, or "pre-landfall" conditions. A subset of 48 sites was resampled during October 4 to 14, 2010, after oil had made landfall on the Gulf of Mexico coast, called the "post-landfall" sampling period, to determine if actionable concentrations of oil were present along shorelines. Few organic contaminants were detected in water; their detection frequencies generally were low and similar in pre-landfall and post-landfall samples. Only one organic contaminant--toluene--had significantly higher concentrations in post-landfall than pre-landfall water samples. No water samples exceeded any human-health benchmarks, and only one post-landfall water sample exceeded an aquatic-life benchmark--the toxic-unit benchmark for polycyclic aromatic hydrocarbons (PAH) mixtures. In sediment, concentrations of 3 parent PAHs and 17 alkylated PAH groups were significantly higher in post-landfall samples than pre-landfall samples. One pre-landfall sample from Texas exceeded the sediment toxic-unit benchmark for PAH mixtures; this site was not sampled during the post-landfall period. Empirical upper screening-value benchmarks for PAHs in sediment were exceeded at 37 percent of post-landfall samples and 22 percent of pre-landfall samples, but there was no significant difference in the proportion of samples exceeding benchmarks between paired pre-landfall and post-landfall samples. Seven sites had the largest concentration differences between post-landfall and pre-landfall samples for 15 alkylated PAHs. 
Five of these seven sites, located in Louisiana, Mississippi, and Alabama, had diagnostic geochemical evidence of Macondo-1 oil in post-landfall sediments and tarballs. For trace and major elements in water, analytical reporting levels for several elements were high and variable. No human-health benchmarks were exceeded, although these were available for only two elements. Aquatic-life benchmarks for trace elements were exceeded in 47 percent of water samples overall. The elements responsible for the most exceedances in post-landfall samples were boron, copper, and manganese. Benchmark exceedances in water could be substantially underestimated because some samples had reporting levels higher than the applicable benchmarks (such as cobalt, copper, lead and zinc) and some elements (such as boron and vanadium) were analyzed in samples from only one sampling period. For trace elements in whole sediment, empirical upper screening-value benchmarks were exceeded in 57 percent of post-landfall samples and 40 percent of pre-landfall samples, but there was no significant difference in the proportion of samples exceeding benchmarks between paired pre-landfall and post-landfall samples. Benchmark exceedance frequencies could be conservatively high because they are based on measurements of total trace-element concentrations in sediment. In the less than 63-micrometer sediment fraction, one or more trace or major elements were anthropogenically enriched relative to national baseline values for U.S. streams for all sediment samples except one. Sixteen percent of sediment samples exceeded upper screening-value benchmarks for, and were enriched in, one or more of the following elements: barium, vanadium, aluminum, manganese, arsenic, chromium, and cobalt. These samples were evenly divided between the sampling periods. Aquatic-life benchmarks were frequently exceeded along the Gulf of Mexico coast by trace elements in both water and sediment and by PAHs in sediment. 
For the most part, however, significant differences between pre-landfall and post-landfall samples were limited to concentrations of PAHs in sediment. At five sites along the coast, the higher post-landfall concentrations of PAHs were associated with diagnostic geochemical evidence of Deepwater Horizon Macondo-1 oil.
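
    The paired pre-landfall versus post-landfall comparison of benchmark exceedance proportions is a matched-pairs question. The abstract does not name the test used; McNemar's exact test is one standard choice for such data, sketched below with hypothetical discordant-pair counts, not the study's numbers.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value from discordant pair counts.

    b: sites exceeding a benchmark post-landfall only.
    c: sites exceeding a benchmark pre-landfall only.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n  # binomial tail, p = 1/2
    return min(1.0, 2 * tail)

# Hypothetical counts for 48 paired sites: 9 flipped to exceeding, 4 flipped back
p = mcnemar_exact(b=9, c=4)
print(round(p, 3))  # about 0.267: no significant difference on this toy data
```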

  11. Monitoring Energy Expenditure Using a Multi-Sensor Device-Applications and Limitations of the SenseWear Armband in Athletic Populations.

    PubMed

    Koehler, Karsten; Drenowatz, Clemens

    2017-01-01

    In order to monitor their energy requirements, athletes may desire to assess energy expenditure (EE) during training and competition. Recent technological advances and increased customer interest have created a market for wearable devices that measure physiological variables and bodily movement over prolonged time periods and convert this information into EE data. This mini-review provides an overview of the applicability of the SenseWear armband (SWA), which combines accelerometry with measurements of heat production and skin conductivity, to measure total daily energy expenditure (TDEE) and its components such as exercise energy expenditure (ExEE) in athletic populations. While the SWA has been shown to provide valid estimates of EE in the general population, validation studies in athletic populations indicate a tendency toward underestimation of ExEE particularly during high-intensity exercise (>10 METs), with the underestimation growing as exercise intensity increases. Although limited information is available on the accuracy of the SWA during resistance exercise, high-intensity interval exercise, or mixed exercise forms, there seems to be a similar trend of underestimating high levels of ExEE. The SWA, however, is capable of detecting movement patterns and metabolic measurements even at high exercise intensities, suggesting that underestimation may result from limitations in the proprietary algorithms. In addition, the SWA has been used in the assessment of sleep quantity and quality as well as non-exercise activity thermogenesis. Overall, the SWA provides viable information and continues to be used in various clinical and athletic settings, despite the termination of its commercial sale.
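
    For context on the MET threshold cited above, the standard conversion (not SWA-specific) takes one MET as roughly 3.5 ml O2 per kg per minute, giving EE in kcal/min as METs × 3.5 × body mass / 200:

```python
def energy_expenditure_kcal_per_min(mets, body_mass_kg):
    """Standard MET-to-energy conversion: kcal/min = METs * 3.5 * kg / 200."""
    return mets * 3.5 * body_mass_kg / 200

# A 70 kg athlete at 12 METs, inside the range where the SWA tends to underestimate
ee = energy_expenditure_kcal_per_min(12, 70)
print(round(ee, 1))  # 14.7 kcal/min
```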

  12. Factors predicting barriers to exercise in midlife Australian women.

    PubMed

    McGuire, Amanda; Seib, Charrlotte; Anderson, Debra

    2016-05-01

    Chronic diseases are the leading cause of death and disability worldwide. They are, though, largely attributable to modifiable lifestyle risk factors, including lack of exercise. This study aims to investigate what factors predict perceptions of barriers to exercise in midlife women. This cross-sectional descriptive study collected data from midlife Australian women by online questionnaire. Volunteers aged between 40 and 65 years were recruited following media publicity about the study. The primary outcome measure was perceived exercise barriers (EBBS Barriers sub-scale). Other self-report data included: exercise, smoking, alcohol, fruit and vegetable consumption, body mass index, physical and mental health and well-being (MOS SF-12v2) and exercise self-efficacy. On average, the 225 participants were aged 50.9 years (SD = 5.9). The significant predictors of perceived barriers to exercise were perceived benefits of exercise, exercise self-efficacy, physical well-being and mental well-being. These variables explained 41% of the variance in the final model (F(8, 219) = 20.1, p < .01). CONCLUSIONS: In midlife women, perceptions of barriers to exercise correlate with beliefs about the health benefits of exercise, exercise self-efficacy, physical and mental well-being. These findings have application to health promotion interventions targeting exercise behaviour change in midlife women. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
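
    The "variance explained" figure reported above comes from a multiple linear regression. The sketch below reproduces that analysis pattern, regressing a barriers score on several predictors and computing R², on synthetic data, not the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 225                                      # same sample size as the study
X = rng.normal(size=(n, 4))                  # stand-ins: benefits, self-efficacy,
                                             # physical and mental well-being
beta = np.array([-0.4, -0.5, -0.2, -0.3])    # hypothetical effect sizes
y = X @ beta + rng.normal(0, 1.0, n)         # synthetic barriers score

Xd = np.column_stack([np.ones(n), X])        # add intercept column
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ coef
r2 = 1 - resid.var() / y.var()               # proportion of variance explained
print(round(r2, 2))
```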

  13. Technology Assessment On Stressor Impacts To Green Infrastructure BMP Performance, Monitoring And Integration

    EPA Science Inventory

    This presentation will document, benchmark and evaluate state-of-the-science research and implementation on BMP performance, monitoring, and integration for green infrastructure applications, to manage wet weather flow, storm-water-runoff stressor relief and remedial sustainable w...

  14. State-and-transition models for heterogeneous landscapes: A strategy for development and application

    USDA-ARS?s Scientific Manuscript database

    Interpretation of assessment and monitoring data requires information about reference conditions and ecological resilience. Reference conditions used as benchmarks can be specified via potential-based land classifications (e.g., ecological sites) that describe the plant communities potentially obser...

  15. 78 FR 38539 - Federal Acquisition Regulation; Applicability of the Senior Executive Compensation Benchmark

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-26

    ... 808 breached contracts awarded before the statutory date of enactment (General Dynamics Corp. v. U.S... 51 U.S.C. 20115. PART 31--CONTRACT COST PRINCIPLES AND PROCEDURES 0 2. Amend section 31.205-6 by-- 0... 38539

  16. A Million Cancer Genome Warehouse

    DTIC Science & Technology

    2012-11-20

    Software, Strawberry Canyon, 2012. 25 Units (GPUs) without any changes needed to the client applications. ● Service-level APIs are designed to... Strawberry Canyon, 2012. 62 Patterson, D. For better or worse, benchmarks shape a field: technical perspective, Communications of the ACM, v.55 n.7

  17. Center for Extended Magnetohydrodynamic Modeling Cooperative Agreement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carl R. Sovinec

    The Center for Extended Magnetohydrodynamic Modeling (CEMM) is developing computer simulation models for predicting the behavior of magnetically confined plasmas. Over the first phase of support from the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC) initiative, the focus has been on macroscopic dynamics that alter the confinement properties of magnetic field configurations. The ultimate objective is to provide computational capabilities to predict plasma behavior—not unlike computational weather prediction—to optimize performance and to increase the reliability of magnetic confinement for fusion energy. Numerical modeling aids theoretical research by solving complicated mathematical models of plasma behavior including strong nonlinear effects and the influences of geometrical shaping of actual experiments. The numerical modeling itself remains an area of active research, due to challenges associated with simulating multiple temporal and spatial scales. The research summarized in this report spans computational and physical topics associated with state of the art simulation of magnetized plasmas. The tasks performed for this grant are categorized according to whether they are primarily computational, algorithmic, or application-oriented in nature. All involve the development and use of the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code, which is described at http://nimrodteam.org. With respect to computation, we have tested and refined methods for solving the large algebraic systems of equations that result from our numerical approximations of the physical model. Collaboration with the Terascale Optimal PDE Solvers (TOPS) SciDAC center led us to the SuperLU_DIST software library [http://crd.lbl.gov/~xiaoye/SuperLU/] for solving large sparse matrices using direct methods on parallel computers.
Switching to this solver library boosted NIMROD’s performance by a factor of five in typical large nonlinear simulations, which has been publicized as a success story of SciDAC-fostered collaboration. Furthermore, the SuperLU software does not assume any mathematical symmetry, and its generality provides an important capability for extending the physical model beyond magnetohydrodynamics (MHD). With respect to algorithmic and model development, our most significant accomplishment is the development of a new method for solving plasma models that treat electrons as an independent plasma component. These ‘two-fluid’ models encompass MHD and add temporal and spatial scales that are beyond the response of the ion species. Implementation and testing of a previously published algorithm did not prove successful for NIMROD, and the new algorithm has since been devised, analyzed, and implemented. Two-fluid modeling, an important objective of the original NIMROD project, is now routine in 2D applications. Algorithmic components for 3D modeling are in place and tested; though, further computational work is still needed for efficiency. Other algorithmic work extends the ion-fluid stress tensor to include models for parallel and gyroviscous stresses. In addition, our hot-particle simulation capability received important refinements that permitted completion of a benchmark with the M3D code. A highlight of our applications work is the edge-localized mode (ELM) modeling, which was part of the first-ever computational Performance Target for the DOE Office of Fusion Energy Science, see http://www.science.doe.gov/ofes/performancetargets.shtml. Our efforts allowed MHD simulations to progress late into the nonlinear stage, where energy is conducted to the wall location. They also produced a two-fluid ELM simulation starting from experimental information and demonstrating critical drift effects that are characteristic of two-fluid physics. 
Another important application is the internal kink mode in a tokamak. Here, the primary purpose of the study has been to benchmark the two main code development lines of CEMM, NIMROD and M3D, on a relevant nonlinear problem. Results from the two codes show repeating nonlinear relaxation events driven by the kink mode over quantitatively comparable timescales. The work has launched a more comprehensive nonlinear benchmarking exercise, where realistic transport effects have an important role.
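
    The report credits the switch to SuperLU_DIST for a fivefold speedup and notes that SuperLU assumes no matrix symmetry. SciPy's splu wraps the serial SuperLU library, so the factor-once, solve-many pattern that makes direct methods pay off across nonlinear iterations can be sketched as follows; the tiny matrix is a toy stand-in for NIMROD's systems.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A small general (nonsymmetric) matrix; SuperLU requires no symmetry.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [2.0, 5.0, 1.0],
                         [0.0, 1.0, 3.0]]))

lu = splu(A)                           # one (expensive) LU factorization
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 0.0, 1.0])
x1, x2 = lu.solve(b1), lu.solve(b2)    # many cheap triangular solves reuse it

print(np.allclose(A @ x1, b1), np.allclose(A @ x2, b2))
```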

  18. Refining Measurement of Social Cognitive Theory Factors Associated with Exercise Adherence in Head and Neck Cancer Patients.

    PubMed

    Rogers, Laura Q; Fogleman, Amanda; Verhulst, Steven; Bhugra, Mudita; Rao, Krishna; Malone, James; Robbs, Randall; Robbins, K Thomas

    2015-01-01

    Social cognitive theory (SCT) measures related to exercise adherence in head and neck cancer (HNCa) patients were developed. Enrolling 101 HNCa patients, psychometric properties and associations with exercise behavior were examined for barriers self-efficacy, perceived barriers interference, outcome expectations, enjoyment, and goal setting. Cronbach's alpha ranged from .84 to .95; only enjoyment demonstrated limited test-retest reliability. Subscales for barriers self-efficacy (motivational, physical health) and barriers interference (motivational, physical health, time, environment) were identified. Multiple SCT constructs were cross-sectional correlates and prospective predictors of exercise behavior. These measures can improve the application of the SCT to exercise adherence in HNCa patients.
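
    The internal-consistency statistic reported above, Cronbach's alpha, is computed from a respondents-by-items score matrix. A minimal sketch on synthetic data follows; the item counts and scores are invented, not the study's.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Five items driven by one latent trait, so internal consistency is high
rng = np.random.default_rng(3)
trait = rng.normal(size=(100, 1))
items = trait + rng.normal(0, 0.5, size=(100, 5))
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # close to the .84-.95 range the study reports
```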

  19. Dynamic characteristics of oxygen consumption.

    PubMed

    Ye, Lin; Argha, Ahmadreza; Yu, Hairong; Celler, Branko G; Nguyen, Hung T; Su, Steven

    2018-04-23

    Previous studies have indicated that oxygen uptake (VO2) is one of the most accurate indices for assessing the cardiorespiratory response to exercise. In most existing studies, the response of VO2 is often roughly modelled as a first-order system due to the inadequate stimulation and low signal-to-noise ratio. To overcome this difficulty, this paper proposes a novel nonparametric kernel-based method for the dynamic modelling of the VO2 response to provide a more robust estimation. Twenty healthy non-athlete participants conducted treadmill exercises with monotonous stimulation (e.g., a single step function as input). During the exercise, VO2 was measured and recorded by a popular portable gas analyser ([Formula: see text], COSMED). Based on the recorded data, a kernel-based estimation method was proposed to perform the nonparametric modelling of VO2. For the proposed method, a properly selected kernel can represent the prior modelling information to reduce the dependence on comprehensive stimulations. Furthermore, due to the special elastic net formed by the ℓ1 norm and the kernelised ℓ2 norm, the estimations are smooth and concise. Additionally, the finite impulse response based nonparametric model estimated by the proposed method can optimally select the order and fits better in terms of goodness-of-fit compared to classical methods. Several kernels were introduced for the kernel-based VO2 modelling method. The results clearly indicated that the stable spline (SS) kernel has the best performance for VO2 modelling. Particularly, based on the experimental data from 20 participants, the estimated response from the proposed method with the SS kernel was significantly better than the results from the benchmark method [i.e., the prediction error method (PEM)] ([Formula: see text] vs [Formula: see text]).
The proposed nonparametric modelling method is an effective method for the estimation of the impulse response of the VO2-speed system. Furthermore, the identified average nonparametric model can dynamically predict the VO2 response with acceptable accuracy during treadmill exercise.
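
    In the spirit of the approach described above (details assumed, since the abstract gives no formulas), a kernel-regularized FIR estimate can be sketched with the first-order stable spline kernel K[i,j] = β^max(i,j), which encodes smooth, exponentially decaying impulse responses and compensates for the weak excitation of a single step input. Synthetic data stand in for the treadmill measurements.

```python
import numpy as np

def ss_kernel(n, beta=0.9):
    """First-order stable spline kernel: K[i, j] = beta ** max(i, j)."""
    idx = np.arange(n)
    return beta ** np.maximum.outer(idx, idx)

def estimate_fir(u, y, n, gamma=0.01, beta=0.9):
    """Regularized FIR estimate: argmin ||y - Phi g||^2 + gamma * g' K^-1 g."""
    # Toeplitz regressor matrix for y[t] = sum_k g[k] * u[t - k] + noise
    Phi = np.column_stack([np.concatenate([np.zeros(k), u[: len(u) - k]])
                           for k in range(n)])
    K = ss_kernel(n, beta)
    # Closed form (representer theorem): g = K Phi' (Phi K Phi' + gamma I)^-1 y
    return K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(len(y)), y)

# Synthetic first-order VO2-like response to a single step in treadmill speed
rng = np.random.default_rng(4)
g_true = 0.5 * 0.8 ** np.arange(30)            # true impulse response
u = np.ones(200)                               # step input (one bout)
y = np.convolve(u, g_true)[:200] + rng.normal(0, 0.05, 200)
g_hat = estimate_fir(u, y, n=30)
print(round(float(g_hat.sum()), 2), round(float(g_true.sum()), 2))
```

    The steady-state gain (the sum of the impulse response) is well identified even from a step input; the kernel prior supplies the smoothness needed to resolve the individual coefficients.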

  20. Effects of neuromuscular joint facilitation on bridging exercises with respect to deep muscle changes.

    PubMed

    Zhou, Bin; Huang, QiuChen; Zheng, Tao; Huo, Ming; Maruyama, Hitoshi

    2015-05-01

    [Purpose] This study examined the effects of neuromuscular joint facilitation on bridging exercises by assessing the cross-sectional area of the multifidus muscle and thickness of the musculus transversus abdominis. [Subjects] Twelve healthy men. [Methods] Four exercises were evaluated: (a) supine resting, (b) bridging resistance exercise involving posterior pelvic tilting, (c) bridging resistance exercise involving anterior pelvic tilting, and (d) bridging resistance exercise involving neuromuscular joint facilitation. The cross-sectional area of the multifidus muscle and thickness of the musculus transversus abdominis were measured during each exercise. [Results] The cross-sectional area of the multifidus muscle and thickness of the musculus transversus abdominis were significantly greater in the neuromuscular joint facilitation group than the others. [Conclusion] Neuromuscular joint facilitation intervention improves the function of deep muscles such as the multifidus muscle and musculus transversus abdominis. Therefore, it can be recommended for application in clinical treatments such as that for back pain.
