Sample records for benchmark field study

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.F.; Kristal, J.; Thompson, G.

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  2. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Using chemical benchmarking to determine the persistence of chemicals in a Swedish lake.

    PubMed

    Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; MacLeod, Matthew; McLachlan, Michael S

    2015-02-03

    It is challenging to measure the persistence of chemicals under field conditions. In this work, two approaches for measuring persistence in the field were compared: the chemical mass balance approach, and a novel chemical benchmarking approach. Ten pharmaceuticals, an X-ray contrast agent, and an artificial sweetener were studied in a Swedish lake. Acesulfame K was selected as a benchmark to quantify persistence using the chemical benchmarking approach. The 95% confidence intervals of the half-life for transformation in the lake system ranged from 780-5700 days for carbamazepine to <1-2 days for ketoprofen. The persistence estimates obtained using the benchmarking approach agreed well with those from the mass balance approach (1-21% difference), indicating that chemical benchmarking can be a valid and useful method to measure the persistence of chemicals under field conditions. Compared to the mass balance approach, the benchmarking approach partially or completely eliminates the need to quantify mass flow of chemicals, so it is particularly advantageous when the quantification of mass flow of chemicals is difficult. Furthermore, the benchmarking approach allows for ready comparison and ranking of the persistence of different chemicals.
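    A minimal sketch of the benchmarking idea under simplifying assumptions (a well-mixed lake at steady state, first-order transformation, and a benchmark chemical such as acesulfame that is removed only by water outflow); the function and the example numbers below are illustrative, not values from the study:

```python
import math

def benchmarked_half_life(ratio_in, ratio_lake, residence_time_days):
    """Estimate a transformation half-life by benchmarking a test chemical
    against a persistent reference chemical.

    ratio_in, ratio_lake: concentration ratio (test / benchmark) in the
    inflow and in the well-mixed lake (outflow) at steady state.
    residence_time_days: hydraulic residence time of the lake.

    Assumes first-order transformation of the test chemical and a benchmark
    removed only by water outflow, so dilution and water flows cancel out
    of the ratio (this is what removes the need for mass-flow data).
    """
    k_water = 1.0 / residence_time_days              # outflow rate constant
    k_transform = k_water * (ratio_in / ratio_lake - 1.0)
    if k_transform <= 0:
        return float("inf")                          # no detectable loss
    return math.log(2) / k_transform

# Illustrative numbers only: the ratio drops by 30% across a lake with a
# 200-day hydraulic residence time.
print(round(benchmarked_half_life(1.0, 0.7, 200.0)), "days")
```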

  4. Benchmark study on glyphosate-resistant crop systems in the United States. Part 2: Perspectives.

    PubMed

    Owen, Micheal D K; Young, Bryan G; Shaw, David R; Wilson, Robert G; Jordan, David L; Dixon, Philip M; Weller, Stephen C

    2011-07-01

    A six-state, 5-year field project was initiated in 2006 to study weed management methods that foster the sustainability of genetically engineered (GE) glyphosate-resistant (GR) crop systems. The benchmark study field-scale experiments were initiated following a survey, conducted in the winter of 2005-2006, of farmer opinions on weed management practices and their views on GR weeds and management tactics. The main survey findings supported the premise that growers were generally less aware of the significance of evolved herbicide resistance and did not have a high recognition of the strong selection pressure from herbicides on the evolution of herbicide-resistant (HR) weeds. The results of the benchmark study survey indicated that there are educational challenges to implement sustainable GR-based crop systems and helped guide the development of the field-scale benchmark study. Paramount is the need to develop consistent and clearly articulated science-based management recommendations that enable farmers to reduce the potential for HR weeds. This paper provides background perspectives about the use of GR crops, the impact of these crops and an overview of different opinions about the use of GR crops on agriculture and society, as well as defining how the benchmark study will address these issues. Copyright © 2011 Society of Chemical Industry.

  5. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
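    To give a feel for what such a benchmark problem exercises, below is a minimal 1-D Cahn-Hilliard (spinodal decomposition) sketch with an explicit time step; the grid, parameters and double-well free energy are illustrative choices, not the CHiMaD/NIST benchmark specification:

```python
import numpy as np

# Minimal 1-D Cahn-Hilliard solver (periodic boundaries, explicit Euler).
# All parameters are illustrative, not the CHiMaD/NIST specification.
N, L = 200, 200.0
dx = L / N
kappa, M, W = 2.0, 1.0, 1.0        # gradient energy, mobility, barrier height
dt = 0.01                          # small step for explicit-scheme stability

rng = np.random.default_rng(0)
c = 0.5 + 0.01 * rng.standard_normal(N)   # near-critical composition + noise

def laplacian(f):
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

for step in range(20000):
    # chemical potential mu = df/dc - kappa * d2c/dx2 for f = W*c^2*(1-c)^2
    dfdc = 2.0 * W * c * (1.0 - c) * (1.0 - 2.0 * c)
    mu = dfdc - kappa * laplacian(c)
    c += dt * M * laplacian(mu)            # dc/dt = M * d2(mu)/dx2

print("composition range after coarsening:", float(c.min()), float(c.max()))
```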

  6. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    PubMed

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-08

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.
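    A sketch of the kind of summary statistic such an automated benchmark can report once simulated and experimental densities have been matched by compound and state point; the numbers below are placeholders, not results from the paper:

```python
import numpy as np

# Placeholder (compound, simulated, experimental) liquid densities in g/mL,
# as if matched by state point against a ThermoML-derived table.
records = [
    ("ethanol",     0.778, 0.789),
    ("toluene",     0.851, 0.862),
    ("cyclohexane", 0.768, 0.774),
]

sim = np.array([r[1] for r in records])
exp = np.array([r[2] for r in records])

rel_dev = (sim - exp) / exp                  # signed relative deviation
print("mean signed deviation: %+.2f%%" % (100 * rel_dev.mean()))
print("RMS relative deviation: %.2f%%" % (100 * np.sqrt((rel_dev ** 2).mean())))
```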

  7. Short-Term Field Study Programs: A Holistic and Experiential Approach to Learning

    ERIC Educational Resources Information Center

    Long, Mary M.; Sandler, Dennis M.; Topol, Martin T.

    2017-01-01

    For business schools, AACSB and Middle States' call for more experiential learning is one reason to provide study abroad programs. Universities must attend to the demand for continuous improvement and employ metrics to benchmark and evaluate their relative standing among peer institutions. One such benchmark is the National Survey of Student…

  8. Review of the GMD Benchmark Event in TPL-007-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backhaus, Scott N.; Rivera, Michael Kelly

    2015-07-21

    Los Alamos National Laboratory (LANL) examined the approaches suggested in NERC Standard TPL-007-1 for defining the geo-electric field for the Benchmark Geomagnetic Disturbance (GMD) Event. Specifically: (1) estimating the 100-year exceedance geo-electric field magnitude; (2) the scaling of the GMD Benchmark Event to geomagnetic latitudes below 60 degrees north; and (3) the effect of uncertainties in earth conductivity data on the conversion from geomagnetic field to geo-electric field. This document summarizes the review and presents recommendations for consideration.

  9. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory]; Hoffman, Forrest M. [Oak Ridge National Laboratory]; Mu, Mingquan [University of California, Irvine]; Randerson, James T. [University of California, Irvine]; Riley, William J. [Lawrence Berkeley National Laboratory]

    2016-05-09

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  10. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine]; Randerson, James T. [University of California, Irvine]; Riley, William J. [Lawrence Berkeley National Laboratory]; Hoffman, Forrest M. [Oak Ridge National Laboratory]

    2016-05-02

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  11. Benchmarking 2009: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica; Kilgore, Gin

    2009-01-01

    "Benchmarking 2009: Trends in Education Philanthropy" is Grantmakers for Education's (GFE) second annual study of grantmaking trends and priorities among members. As a national network dedicated to improving education outcomes through philanthropy, GFE members are mindful of their role in fostering greater knowledge in the field. They believe it's…

  12. How does audit and feedback influence intentions of health professionals to improve practice? A laboratory experiment and field study in cardiac rehabilitation.

    PubMed

    Gude, Wouter T; van Engen-Verheul, Mariëtte M; van der Veer, Sabine N; de Keizer, Nicolette F; Peek, Niels

    2017-04-01

    To identify factors that influence the intentions of health professionals to improve their practice when confronted with clinical performance feedback, which is an essential first step in the audit and feedback mechanism. We conducted a theory-driven laboratory experiment with 41 individual professionals, and a field study in 18 centres in the context of a cluster-randomised trial of electronic audit and feedback in cardiac rehabilitation. Feedback reports were provided through a web-based application, and included performance scores and benchmark comparisons (high, intermediate or low performance) for a set of process and outcome indicators. From each report participants selected indicators for improvement into their action plan. Our unit of observation was an indicator presented in a feedback report (selected yes/no); we considered selecting an indicator to reflect an intention to improve. We analysed 767 observations in the laboratory experiment and 614 in the field study. Each 10% decrease in performance score increased the probability of an indicator being selected by 54% (OR, 1.54; 95% CI 1.29 to 1.83) in the laboratory experiment, and 25% (OR, 1.25; 95% CI 1.13 to 1.39) in the field study. Also, performance being benchmarked as low and intermediate increased this probability in laboratory settings. Still, participants ignored the benchmarks in 34% (laboratory experiment) and 48% (field study) of their selections. When confronted with clinical performance feedback, performance scores and benchmark comparisons influenced health professionals' intentions to improve practice. However, there was substantial variation in these intentions, because professionals disagreed with benchmarks, deemed improvement unfeasible or did not consider the indicator an essential aspect of care quality. These phenomena impede intentions to improve practice, and are thus likely to dilute the effects of audit and feedback interventions. NTR3251, pre-results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
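    To make the reported effect size concrete, here is a small sketch of how an odds ratio of 1.54 per 10-point drop in performance score translates into selection probabilities under a logistic model; the baseline probability is an assumption, since the abstract does not report an intercept:

```python
def selection_probability(score_drop_pct, baseline_prob=0.30, or_per_10pct=1.54):
    """Probability that an indicator is selected for the action plan.

    baseline_prob is an assumed probability at zero score drop (not reported
    in the abstract); or_per_10pct is the reported odds ratio per 10
    percentage-point decrease in performance score.
    """
    baseline_odds = baseline_prob / (1.0 - baseline_prob)
    odds = baseline_odds * or_per_10pct ** (score_drop_pct / 10.0)
    return odds / (1.0 + odds)

for drop in (0, 10, 20, 30):
    print("%d-point drop -> p = %.2f" % (drop, selection_probability(drop)))
```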

  13. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.

  14. Benchmarking in health care: using the Internet to identify resources.

    PubMed

    Lingle, V A

    1996-01-01

    Benchmarking is a quality improvement tool that is increasingly being applied to the health care field and to the libraries within that field. Using mostly resources accessible at no charge through the Internet, a collection of information was compiled on benchmarking and its applications. Sources could be identified in several formats including books, journals and articles, multi-media materials, and organizations.

  15. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of (1) dendritic growth simulations performed with different time integrators and (2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  16. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of (1) dendritic growth simulations performed with different time integrators and (2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  17. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can be best used to evaluate their probabilistic forecasts. In this study, it is identified that the forecast skill calculated can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy the benchmark that has most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and the use of these produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ; so forecasters can have trust in their skill evaluation and will have confidence that their forecasts are indeed better.
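    A compact sketch of how skill relative to a benchmark is typically scored with the CRPS; the ensemble estimator below is the standard empirical form, and the discharge numbers are synthetic, chosen only for illustration:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS for one ensemble forecast and one observation:
    mean|x_i - y| - 0.5 * mean|x_i - x_j|."""
    x = np.asarray(members, dtype=float)
    return np.mean(np.abs(x - obs)) - 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))

def skill_score(crps_forecast, crps_benchmark):
    """1 = perfect, 0 = no better than the benchmark, < 0 = worse."""
    return 1.0 - crps_forecast / crps_benchmark

# Synthetic example: a HEPS ensemble vs. a persistence-style benchmark ensemble
# for one observed discharge value (m^3/s).
obs = 120.0
heps_members = [110, 118, 125, 130, 115, 122]
benchmark_members = [90, 95, 100, 105, 98, 92]
print(round(skill_score(crps_ensemble(heps_members, obs),
                        crps_ensemble(benchmark_members, obs)), 2))
```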

  18. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
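    A rough sketch of the derivation logic described here: each genus is assigned an extirpation concentration (XC95), and the benchmark is the level expected to extirpate 5% of genera, i.e. the 5th percentile of those values. The XC95 figures below are invented placeholders, not values from the report:

```python
import numpy as np

# Placeholder extirpation concentrations (XC95, in uS/cm) for some
# invertebrate genera; the report derives these from field data.
xc95 = np.array([210, 260, 295, 310, 340, 420, 480, 560, 640, 720,
                 810, 900, 980, 1100, 1250, 1400, 1600, 1800, 2100, 2500])

# Benchmark (HC05): the conductivity expected to extirpate 5% of genera,
# taken as the 5th percentile of the XC95 distribution.
hc05 = np.percentile(xc95, 5)
print("illustrative benchmark: %.0f uS/cm" % hc05)
```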

  19. Turbofan forced mixer-nozzle internal flowfield. Volume 1: A benchmark experimental study

    NASA Technical Reports Server (NTRS)

    Paterson, R. W.

    1982-01-01

    An experimental investigation of the flow field within a model turbofan forced mixer nozzle is described. Velocity and thermodynamic state variable data for use in assessing the accuracy and assisting the further development of computational procedures for predicting the flow field within mixer nozzles are provided. Velocity and temperature data suggested that the nozzle mixing process was dominated by circulations (secondary flows) of a length scale on the order of the lobe dimensions, which were associated with strong radial velocities observed near the lobe exit plane. The 'benchmark' model mixer experiment conducted for code assessment purposes is discussed.

  20. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    NASA Astrophysics Data System (ADS)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well-known example for such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.

  1. Using Benchmarking To Strengthen the Assessment of Persistence.

    PubMed

    McLachlan, Michael S; Zou, Hongyan; Gouin, Todd

    2017-01-03

    Chemical persistence is a key property for assessing chemical risk and chemical hazard. Current methods for evaluating persistence are based on laboratory tests. The relationship between the laboratory-based estimates and persistence in the environment is often unclear, in which case the current methods for evaluating persistence can be questioned. Chemical benchmarking opens new possibilities to measure persistence in the field. In this paper we explore how the benchmarking approach can be applied in both the laboratory and the field to deepen our understanding of chemical persistence in the environment and create a firmer scientific basis for laboratory-to-field extrapolation of persistence test results.

  2. A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (Final Report)

    EPA Science Inventory

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate g...

  3. Benchmark levels for the consumptive water footprint of crop production for different environmental conditions: a case study for winter wheat in China

    NASA Astrophysics Data System (ADS)

    Zhuo, La; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.

    2016-11-01

    Meeting growing food demands while simultaneously shrinking the water footprint (WF) of agricultural production is one of the greatest societal challenges. Benchmarks for the WF of crop production can serve as a reference and be helpful in setting WF reduction targets. The consumptive WF of crops, the consumption of rainwater stored in the soil (green WF), and the consumption of irrigation water (blue WF) over the crop growing period varies spatially and temporally depending on environmental factors like climate and soil. The study explores which environmental factors should be distinguished when determining benchmark levels for the consumptive WF of crops. To this end, we determine benchmark levels for the consumptive WF of winter wheat production in China for all separate years in the period 1961-2008, for rain-fed vs. irrigated croplands, for wet vs. dry years, for warm vs. cold years, for four different soil classes, and for two different climate zones. We simulate consumptive WFs of winter wheat production with the crop water productivity model AquaCrop at a 5 by 5 arcmin resolution, accounting for water stress only. The results show that (i) benchmark levels determined for individual years for the country as a whole remain within a range of ±20 % around long-term mean levels over 1961-2008, (ii) the WF benchmarks for irrigated winter wheat are 8-10 % larger than those for rain-fed winter wheat, (iii) WF benchmarks for wet years are 1-3 % smaller than for dry years, (iv) WF benchmarks for warm years are 7-8 % smaller than for cold years, (v) WF benchmarks differ by about 10-12 % across different soil texture classes, and (vi) WF benchmarks for the humid zone are 26-31 % smaller than for the arid zone, which has relatively higher reference evapotranspiration in general and lower yields in rain-fed fields. We conclude that when determining benchmark levels for the consumptive WF of a crop, it is useful to primarily distinguish between different climate zones. If actual consumptive WFs of winter wheat throughout China were reduced to the benchmark levels set by the best 25 % of Chinese winter wheat production (1224 m³ t⁻¹ for arid areas and 841 m³ t⁻¹ for humid areas), the water saving in an average year would be 53 % of the current water consumption at winter wheat fields in China. The majority of the yield increase and associated improvement in water productivity can be achieved in southern China.
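    A minimal sketch of how a consumptive WF and a "best 25% of production" benchmark can be derived from simulated evapotranspiration and yield; the unit conversion (1 mm of water over 1 ha equals 10 m³) is standard, but the grid-cell numbers below are invented:

```python
import numpy as np

# Invented per-grid-cell results: seasonal crop evapotranspiration (mm),
# yield (t/ha) and production (t). In the study these come from AquaCrop.
et_mm   = np.array([350.0, 420.0, 310.0, 500.0, 380.0])
yield_t = np.array([4.5,   3.8,   5.2,   2.9,   4.1])     # t/ha
prod_t  = np.array([900.0, 400.0, 1200.0, 300.0, 700.0])  # t

wf = 10.0 * et_mm / yield_t     # m3 per tonne (1 mm over 1 ha = 10 m3)

# Benchmark: the WF level that the best 25% of production stays below
# (a production-weighted percentile of the WF distribution).
order = np.argsort(wf)
cum_share = np.cumsum(prod_t[order]) / prod_t.sum()
benchmark = wf[order][np.searchsorted(cum_share, 0.25)]
print("WF per cell (m3/t):", np.round(wf))
print("benchmark at best 25%% of production: %.0f m3/t" % benchmark)
```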

  4. Model benchmarking and reference signals for angled-beam shear wave ultrasonic nondestructive evaluation (NDE) inspections

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Hopkins, Deborah; Datuin, Marvin; Warchol, Mark; Warchol, Lyudmila; Forsyth, David S.; Buynak, Charlie; Lindgren, Eric A.

    2017-02-01

    For model benchmark studies, the accuracy of the model is typically evaluated based on the change in response relative to a selected reference signal. The use of a side drilled hole (SDH) in a plate was investigated as a reference signal for angled beam shear wave inspection for aircraft structure inspections of fastener sites. Systematic studies were performed with varying SDH depth and size, and varying the ultrasonic probe frequency, focal depth, and probe height. Increased error was observed with the simulation of angled shear wave beams in the near-field. Even more significant, asymmetry in real probes and the inherent sensitivity of signals in the near-field to subtle test conditions were found to provide a greater challenge with achieving model agreement. To achieve quality model benchmark results for this problem, it is critical to carefully align the probe with the part geometry, to verify symmetry in probe response, and ideally avoid using reference signals from the near-field response. Suggested reference signals for angled beam shear wave inspections include using the 'through hole' corner specular reflection signal and the 'full skip' signal off of the far wall from the side drilled hole.

  5. Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.

    PubMed

    Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian

    2017-03-01

    One approach to minimizing the negative consequences of excessive gambling is staff training to reduce the rate of development of new cases of harm or disorder among their customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90 % or more of the experts. Unanimous support for importance was given to indicators such as (1) comprehensibility and (2) concrete action guidance for dealing with problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly, every 1-2 years, and that one institution should be clearly defined and primarily responsible for benchmarking. The results of the present study provide the basis for developing benchmark criteria for staff training in responsible gambling.

  6. A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (2010) (External Review Draft)

    EPA Science Inventory

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for disso...

  7. Is Higher Better? Determinants and Comparisons of Performance on the Major Field Test in Business

    ERIC Educational Resources Information Center

    Bielinska-Kwapisz, Agnieszka; Brown, F. William; Semenik, Richard

    2012-01-01

    Student performance on the Major Field Achievement Test in Business is an important benchmark for college of business programs. The authors' results indicate that such benchmarking can only be meaningful if certain student characteristics are taken into account. The differences in achievement between cohorts are explored in detail by separating…

  8. Simulation Studies for Inspection of the Benchmark Test with PATRASH

    NASA Astrophysics Data System (ADS)

    Shimosaki, Y.; Igarashi, S.; Machida, S.; Shirakata, M.; Takayama, K.; Noda, F.; Shigaki, K.

    2002-12-01

    In order to delineate the halo-formation mechanisms in a typical FODO lattice, a 2-D simulation code, PATRASH (PArticle TRAcking in a Synchrotron for Halo analysis), has been developed. The electric field originating from the space charge is calculated by the Hybrid Tree code method. Benchmark tests utilizing the three simulation codes ACCSIM, PATRASH and SIMPSONS were carried out. The results were found to be in fair agreement with each other. The details of the PATRASH simulation are discussed with some examples.

  9. Performance Comparison of NAMI DANCE and FLOW-3D® Models in Tsunami Propagation, Inundation and Currents using NTHMP Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioglu Sogut, Deniz; Yalciner, Ahmet Cevdet

    2018-06-01

    Field observations provide valuable data regarding nearshore tsunami impact, yet only in inundation areas where tsunami waves have already flooded. Therefore, tsunami modeling is essential to understand tsunami behavior and prepare for tsunami inundation. It is necessary that all numerical models used in tsunami emergency planning be subject to benchmark tests for validation and verification. This study focuses on two numerical codes, NAMI DANCE and FLOW-3D®, for validation and performance comparison. NAMI DANCE is an in-house tsunami numerical model developed by the Ocean Engineering Research Center of Middle East Technical University, Turkey and Laboratory of Special Research Bureau for Automation of Marine Research, Russia. FLOW-3D® is a general purpose computational fluid dynamics software, which was developed by scientists who pioneered in the design of the Volume-of-Fluid technique. The codes are validated and their performances are compared via analytical, experimental and field benchmark problems, which are documented in the "Proceedings and Results of the 2011 National Tsunami Hazard Mitigation Program (NTHMP) Model Benchmarking Workshop" and the "Proceedings and Results of the NTHMP 2015 Tsunami Current Modeling Workshop". The variations between the numerical solutions of these two models are evaluated through statistical error analysis.

  10. Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura

    2016-12-01

    We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has been previously certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here, including: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with the root-mean-square error and mean bias no more than 2 cm s⁻¹ and 8 mm s⁻¹, respectively, for the modeled velocity.
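    The error measures quoted here are standard; a tiny sketch of how they are computed from paired modeled and observed current speeds (synthetic numbers, in m/s):

```python
import numpy as np

# Synthetic paired current speeds (m/s) at gauge locations.
observed = np.array([0.42, 0.55, 0.31, 0.60, 0.48])
modeled  = np.array([0.44, 0.52, 0.33, 0.63, 0.46])

err = modeled - observed
rmse = np.sqrt(np.mean(err ** 2))   # root-mean-square error
bias = np.mean(err)                 # mean bias (positive = overprediction)
print("RMSE = %.3f m/s, bias = %+.3f m/s" % (rmse, bias))
```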

  11. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Robinson, William H.; Rech, Paolo

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  12. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE PAGES

    Quinn, Heather; Robinson, William H.; Rech, Paolo; ...

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  13. Field Performance of Photovoltaic Systems in the Tucson Desert

    NASA Astrophysics Data System (ADS)

    Orsburn, Sean; Brooks, Adria; Cormode, Daniel; Greenberg, James; Hardesty, Garrett; Lonij, Vincent; Salhab, Anas; St. Germaine, Tyler; Torres, Gabe; Cronin, Alexander

    2011-10-01

    At the Tucson Electric Power (TEP) solar test yard, over 20 different grid-connected photovoltaic (PV) systems are being tested. The goal at the TEP solar test yard is to measure and model real-world performance of PV systems and to benchmark new technologies such as holographic concentrators. By studying voltage and current produced by the PV systems as a function of incident irradiance and module temperature, we can compare our measurements of field performance (in a harsh desert environment) to manufacturer specifications (determined under laboratory conditions). In order to measure high-voltage and high-current signals, we designed and built reliable, accurate sensors that can handle extreme desert temperatures. We will present several benchmarks of sensors in a controlled environment, including shunt resistors and Hall-effect current sensors, to determine temperature drift and accuracy. Finally, we will present preliminary field measurements of PV performance for several different PV technologies.

  14. Intercomparison of Monte Carlo radiation transport codes to model TEPC response in low-energy neutron and gamma-ray fields.

    PubMed

    Ali, F; Waker, A J; Waller, E J

    2014-10-01

    Tissue-equivalent proportional counters (TEPC) can potentially be used as portable and personal dosemeters in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency- and dose-mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values, and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
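    For reference, the frequency- and dose-mean lineal energies compared in this study follow the standard microdosimetric definitions, with f(y) the lineal energy probability density and d(y) the dose distribution:

```latex
\[
\bar{y}_F = \int_0^\infty y\, f(y)\, dy,
\qquad
d(y) = \frac{y\, f(y)}{\bar{y}_F},
\qquad
\bar{y}_D = \int_0^\infty y\, d(y)\, dy
          = \frac{\int_0^\infty y^{2} f(y)\, dy}{\int_0^\infty y\, f(y)\, dy}.
\]
```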

  15. Defense Programs benchmarking in Chicago, April 1994: Identifying best practices in the pollution prevention programs of selected private industries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-12-01

    The Office of Defense Programs (DP) was the first US Department of Energy (DOE) Cognizant Secretarial Office (CSO) to attempt to benchmark private industries for best-in-class practices in the field of pollution prevention. Defense Programs' intent in this effort is to identify and bring to DOE field offices strategic and technological tools that have helped private companies minimize waste and prevent pollution. Defense Programs' premier benchmarking study focused on business practices and process improvements used to implement exceptional pollution prevention programs in four privately owned companies. The current interest in implementing partnerships, information exchange, and technology transfer with the private sector prompted DP to continue to seek best practices in the area of pollution prevention through a second benchmarking endeavor in May 1994. This report presents the results of that effort. The decision was made to select host facilities that own processes similar to those at DOE plants and laboratories, that have programs that have been recognized on a local or national level, that have an interest in partnering with the Department on an information-sharing basis, and that are located in proximity to each other. The DP benchmarking team assessed the pollution prevention programs of five companies in the Chicago area: GE Plastics, Navistar, Northrop Corporation, Sundstrand and Caterpillar. At all facilities visited, Ozone Depleting Compounds (ODCs), hazardous wastes, releases under the Superfund Amendments and Reauthorization Act (SARA), waste water and non-hazardous wastes are being eliminated, replaced, reduced, recycled and reused whenever practicable.

  16. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  17. Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance.

    PubMed

    Jiang, Min; Wu, Teng; Blanchard, John W; Feng, Guanru; Peng, Xinhua; Budker, Dmitry

    2018-06-01

    Demonstration of coherent control and characterization of the control fidelity is important for the development of quantum architectures such as nuclear magnetic resonance (NMR). We introduce an experimental approach to realize universal quantum control, and benchmarking thereof, in zero-field NMR, an analog of conventional high-field NMR that features less-constrained spin dynamics. We design a composite pulse technique for both arbitrary one-spin rotations and a two-spin controlled-not (CNOT) gate in a heteronuclear two-spin system at zero field, which experimentally demonstrates universal quantum control in such a system. Moreover, using quantum information-inspired randomized benchmarking and partial quantum process tomography, we evaluate the quality of the control, achieving single-spin control for 13 C with an average fidelity of 0.9960(2) and two-spin control via a CNOT gate with a fidelity of 0.9877(2). Our method can also be extended to more general multispin heteronuclear systems at zero field. The realization of universal quantum control in zero-field NMR is important for quantum state/coherence preparation, pulse sequence design, and is an essential step toward applications to materials science, chemical analysis, and fundamental physics.
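    The quoted fidelities come from the usual randomized benchmarking analysis, in which the sequence-averaged survival probability is fit to an exponential decay; in outline (standard notation, not specific to this paper):

```latex
\[
\bar{F}(m) = A\, p^{\,m} + B,
\qquad
r = \frac{(d-1)(1-p)}{d},
\qquad
F_{\mathrm{avg}} = 1 - r,
\]
```

    where m is the number of random gates in a sequence, p is the depolarizing parameter extracted from the fit, d is the Hilbert-space dimension (d = 2 for a single spin), and A and B absorb state-preparation and measurement errors.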

  18. Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance

    PubMed Central

    Feng, Guanru

    2018-01-01

    Demonstration of coherent control and characterization of the control fidelity is important for the development of quantum architectures such as nuclear magnetic resonance (NMR). We introduce an experimental approach to realize universal quantum control, and benchmarking thereof, in zero-field NMR, an analog of conventional high-field NMR that features less-constrained spin dynamics. We design a composite pulse technique for both arbitrary one-spin rotations and a two-spin controlled-not (CNOT) gate in a heteronuclear two-spin system at zero field, which experimentally demonstrates universal quantum control in such a system. Moreover, using quantum information–inspired randomized benchmarking and partial quantum process tomography, we evaluate the quality of the control, achieving single-spin control for 13C with an average fidelity of 0.9960(2) and two-spin control via a CNOT gate with a fidelity of 0.9877(2). Our method can also be extended to more general multispin heteronuclear systems at zero field. The realization of universal quantum control in zero-field NMR is important for quantum state/coherence preparation, pulse sequence design, and is an essential step toward applications to materials science, chemical analysis, and fundamental physics. PMID:29922714

  19. Development of new geomagnetic storm ground response scaling factors for utilization in hazard assessments

    NASA Astrophysics Data System (ADS)

    Pulkkinen, A. A.; Bernabeu, E.; Weigel, R. S.; Kelbert, A.; Rigler, E. J.; Bedrosian, P.; Love, J. J.

    2017-12-01

    Development of realistic storm scenarios that can be played through the exposed systems is one of the key requirements for carrying out quantitative space weather hazards assessments. In the geomagnetically induced currents (GIC) and power grids context, these scenarios have to quantify the spatiotemporal evolution of the geoelectric field that drives the potentially hazardous currents in the system. In response to the Federal Energy Regulatory Commission (FERC) order 779, a team of scientists and engineers working under the auspices of the North American Electric Reliability Corporation (NERC) has developed extreme geomagnetic storm and geoelectric field benchmark(s) that use various scaling factors that account for geomagnetic latitude and ground structure of the locations of interest. These benchmarks, together with the information generated in the National Space Weather Action Plan, are the foundation for the hazards assessments that the industry will be carrying out in response to the FERC order and under the auspices of the National Science and Technology Council. While the scaling factors developed in the past work were based on the best available information, there is now significant new information available for parts of the U.S. pertaining to the ground response to external geomagnetic field excitation. The significant new information includes the results of magnetotelluric surveys that have been conducted over the past few years across the contiguous US and results from previous surveys that have been made available in a combined online database. In this paper, we distill this new information in the framework of the NERC benchmark and in terms of updated ground response scaling factors, thereby allowing straightforward utilization in the hazard assessments. We also outline the path forward for improving the overall extreme event benchmark scenario(s), including generalization of the storm waveforms and geoelectric field spatial patterns.

  20. A comprehensive benchmarking study of protocols and sequencing platforms for 16S rRNA community profiling

    DOE PAGES

    Podar, Mircea; Shakya, Migun; D'Amore, Rosalinda; ...

    2016-01-14

    In the last 5 years, the rapid pace of innovations and improvements in sequencing technologies has completely changed the landscape of metagenomic and metagenetic experiments. Therefore, it is critical to benchmark the various methodologies for interrogating the composition of microbial communities, so that we can assess their strengths and limitations. Here, the most common phylogenetic marker for microbial community diversity studies is the 16S ribosomal RNA gene and in the last 10 years the field has moved from sequencing a small number of amplicons and samples to more complex studies where thousands of samples and multiple different gene regions are interrogated.

  1. Model Prediction Results for 2007 Ultrasonic Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Joon; Song, Sung-Jin

    2008-02-01

    The World Federation of NDE Centers (WFNDEC) has addressed two types of problems for the 2007 ultrasonic benchmark problems: prediction of side-drilled hole responses with 45° and 60° refracted shear waves, and effects of surface curvatures on the ultrasonic responses of flat-bottomed holes. To solve this year's ultrasonic benchmark problems, we applied multi-Gaussian beam models for calculation of ultrasonic beam fields, and the Kirchhoff approximation and the separation of variables method for calculation of far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present comparison results of model predictions to experiments for side-drilled holes and discuss the effect of interface curvatures on ultrasonic responses by comparing peak-to-peak amplitudes of flat-bottomed hole responses with different sizes and interface curvatures.

  2. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2002-10-01

    This document details the progress to date on the OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING contract for the quarter from July 2002 through September 2002. Even though we are awaiting the optimization portion of the testing program, accomplishments include the following: (1) Smith International agreed to participate in the DOE Mud Hammer program. (2) Smith International chromed collars for upcoming benchmark tests at TerraTek, now scheduled for 4Q 2002. (3) ConocoPhillips had a field trial of the Smith fluid hammer offshore Vietnam. The hammer functioned properly, though the well encountered hole conditions and reaming problems. ConocoPhillips plans another field trial as a result. (4) DOE/NETL extended the contract for the fluid hammer program to allow Novatek to "optimize" their much-delayed tool to 2003 and to allow Smith International to add "benchmarking" tests in light of SDS Digger Tools' current financial inability to participate. (5) ConocoPhillips joined the Industry Advisors for the mud hammer program. (6) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests, allowing extension of the contract to complete the optimizations.

  3. Metallicity gradients in local field star-forming galaxies: insights on inflows, outflows, and the coevolution of gas, stars and metals

    NASA Astrophysics Data System (ADS)

    Ho, I.-Ting; Kudritzki, Rolf-Peter; Kewley, Lisa J.; Zahid, H. Jabran; Dopita, Michael A.; Bresolin, Fabio; Rupke, David S. N.

    2015-04-01

    We present metallicity gradients in 49 local field star-forming galaxies. We derive gas-phase oxygen abundances using two widely adopted metallicity calibrations based on the [O III]/Hβ, [N II]/Hα, and [N II]/[O II] line ratios. The two derived metallicity gradients are usually in good agreement within ±0.14 dex R25⁻¹ (R25 is the B-band isophotal radius), but the metallicity gradients can differ significantly when the ionization parameters change systematically with radius. We investigate the metallicity gradients as a function of stellar mass (8 < log (M*/M⊙) < 11) and absolute B-band luminosity (-16 > MB > -22). When the metallicity gradients are expressed in dex kpc⁻¹, we show that galaxies with lower mass and luminosity, on average, have steeper metallicity gradients. When the metallicity gradients are expressed in dex R25⁻¹, we find no correlation between the metallicity gradients and stellar mass and luminosity. We provide a local benchmark metallicity gradient of field star-forming galaxies useful for comparison with studies at high redshifts. We investigate the origin of the local benchmark gradient using simple chemical evolution models and observed gas and stellar surface density profiles in nearby field spiral galaxies. Our models suggest that the local benchmark gradient is a direct result of the coevolution of gas and stellar disc under virtually closed-box chemical evolution when the stellar-to-gas mass ratio becomes high (≫0.3). These models imply low current mass accretion rates (≲ 0.3 × SFR) and low mass outflow rates (≲ 3 × SFR) in local field star-forming galaxies.
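    A gradient of the kind reported here is simply the slope of a straight-line fit of abundance against radius; a minimal sketch with synthetic H II-region abundances, using radius normalized to R25 so the slope comes out in dex R25⁻¹ (the assumed R25 in kpc is illustrative):

```python
import numpy as np

# Synthetic H II-region abundances, 12 + log(O/H), at normalized radii R/R25.
r_norm = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 1.00])
oh     = np.array([8.72, 8.65, 8.60, 8.52, 8.47, 8.40, 8.35])

slope, intercept = np.polyfit(r_norm, oh, 1)
print("gradient: %.2f dex / R25" % slope)

# To express the same gradient in dex/kpc, divide by R25 in kpc (assumed value).
R25_kpc = 10.0
print("gradient: %.3f dex / kpc" % (slope / R25_kpc))
```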

  4. A chemical EOR benchmark study of different reservoir simulators

    NASA Astrophysics Data System (ADS)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to advancements in chemical formulations and injection techniques. Injecting polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) solutions are techniques for improving sweep and displacement efficiencies, with the aim of improving oil production in both secondary and tertiary floods. There has been great interest in chemical flooding recently for different challenging situations. These include high-temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity, where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management to minimize costs and increase process efficiency. Reservoir simulators with special features are needed to represent the coupled chemical and physical processes present in chemical EOR. The simulators first need to be validated against well-controlled laboratory and pilot-scale experiments before they can reliably predict full-field implementations. The available laboratory-scale data include (1) phase behavior and rheological data and (2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e., chemical retentions, pressure drops, and oil recovery. Data collected from corefloods are used as benchmark tests for comparing numerical reservoir simulators with chemical EOR modeling capabilities, such as STARS of CMG, ECLIPSE-100 of Schlumberger, and REVEAL of Petroleum Experts. The research UTCHEM simulator from The University of Texas at Austin is also included, since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be used to improve chemical designs for field-scale studies using commercial simulators. The benchmark tests illustrate the potential of commercial simulators for chemical flooding projects and provide a comprehensive table of the strengths and limitations of each simulator for a given chemical EOR process. Mechanistic simulations of chemical EOR processes will provide predictive capability and can aid in optimization of field injection projects. The objective of this paper is not to compare computational efficiency and solution algorithms; it focuses only on the process modeling comparison.

  5. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. ©2015 American Association for Cancer Research.

  6. Perspective: Recommendations for benchmarking pre-clinical studies of nanomedicines

    PubMed Central

    Dawidczyk, Charlene M.; Russell, Luisa M.; Searson, Peter C.

    2015-01-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small molecule drug therapy for cancer, and to achieve both therapeutic and diagnostic functions in the same platform. Pre-clinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of pre-clinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of pre-clinical trials and propose a protocol for benchmarking that we recommend be included in in vivo pre-clinical studies of drug delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. PMID:26249177

  7. Paying Medicare Advantage Plans: To Level or Tilt the Playing Field

    PubMed Central

    Glazer, Jacob; McGuire, Thomas G.

    2017-01-01

    Medicare beneficiaries are eligible for health insurance through the public option of traditional Medicare (TM) or may join a private Medicare Advantage (MA) plan. Both are highly subsidized, but in different ways. Medicare pays most costs directly in TM, and makes a subsidy payment to an MA plan based on a “benchmark” for each beneficiary choosing a private plan. The level of this benchmark is arguably the most important policy decision Medicare makes about the MA program. Presently, about 30% of beneficiaries are in MA, and Medicare subsidizes MA plans more on average than TM. Many analysts recommend equalizing Medicare’s subsidy across the options – referred to in policy circles as a “level playing field.” This paper studies the normative question of how to set the level of the benchmark, applying the versatile model of plan choice developed by Einav and Finkelstein (EF) to Medicare. The EF framework implies unequal subsidies to counteract risk selection across plan types. We also study other reasons to tilt the field: the relative efficiency of MA vs. TM, market power of MA plans, and institutional features of the way Medicare determines subsidies and premiums. After a review of the empirical and policy literature, we conclude that in areas where the MA market is competitive, the benchmark should be set below average costs in TM, but in areas characterized by imperfect competition in MA, it should be raised in order to offset output (enrollment) restrictions by plans with market power. We also recommend specific modifications of Medicare rules to make demand for MA more price elastic. PMID:28318667

  8. Second Language Acquisition in Applied Linguistics: 1925-2015 and Beyond

    ERIC Educational Resources Information Center

    Tarone, Elaine

    2015-01-01

    Taking 1925, the founding year of "Language", the journal of the Linguistic Society of America, as a benchmark for "the past", and 2015 as a benchmark for "the present", the author considers what was known then and what is known now about second language acquisition in applied linguistics. The field has grown more…

  9. An overview of the ENEA activities in the field of coupled codes NPP simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisi, C.; Negrenti, E.; Sepielli, M.

    2012-07-01

    In the framework of the nuclear research activities in the fields of safety, training and education, ENEA (the Italian National Agency for New Technologies, Energy and the Sustainable Development) is in charge of defining and pursuing all the necessary steps for the development of a NPP engineering simulator at the 'Casaccia' Research Center near Rome. A summary of the activities in the field of nuclear power plant simulation by coupled codes is presented here, together with the long-term strategy for the engineering simulator development. Specifically, results from the participation in international benchmarking activities like the OECD/NEA 'Kalinin-3' benchmark and the 'AER-DYN-002' benchmark, together with simulations of relevant events like the Fukushima accident, are reported here. The ultimate goal of such activities, performed using state-of-the-art technology, is the re-establishment of top-level competencies in the NPP simulation field in order to facilitate the development of Enhanced Engineering Simulators and to upgrade competencies for supporting national energy strategy decisions, the nuclear national safety authority, and the R and D activities on NPP designs. (authors)

  10. Identification of Key Indicators of Quality in Afterschool Programs. CRESST Report 748

    ERIC Educational Resources Information Center

    Huang, Denise; La Torre, Deborah; Harven, Aletha; Huber, Lindsay Perez; Jiang, Lu; Leon, Seth; Oh, Christine

    2008-01-01

    Researchers and policymakers are increasingly interested in the issue of school accountability. Despite this, program standards for afterschool programs are not as fully developed as those in other fields. This study bridges that gap, presenting results that identify benchmarks and indicators for high-quality afterschool…

  11. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. It also discusses opportunities and challenges for future developments in these fields.

  12. Toward benchmarking in catalysis science: Best practices, challenges, and opportunities

    DOE PAGES

    Bligaard, Thomas; Bullock, R. Morris; Campbell, Charles T.; ...

    2016-03-07

    Benchmarking is a community-based and (preferably) community-driven activity involving consensus-based decisions on how to make reproducible, fair, and relevant assessments. In catalysis science, important catalyst performance metrics include activity, selectivity, and the deactivation profile, which enable comparisons between new and standard catalysts. Benchmarking also requires careful documentation, archiving, and sharing of methods and measurements, to ensure that the full value of research data can be realized. Beyond these goals, benchmarking presents unique opportunities to advance and accelerate understanding of complex reaction systems by combining and comparing experimental information from multiple, in situ and operando techniques with theoretical insights derived from calculations characterizing model systems. This Perspective describes the origins and uses of benchmarking and its applications in computational catalysis, heterogeneous catalysis, molecular catalysis, and electrocatalysis. It also discusses opportunities and challenges for future developments in these fields.

  13. The particular use of PIV methods for the modelling of heat and hydrophysical processes in the nuclear power plants

    NASA Astrophysics Data System (ADS)

    Sergeev, D. A.; Kandaurov, A. A.; Troitskaya, Yu I.

    2017-11-01

    In this paper we describe a PIV system specially designed for studying hydrophysical processes in a large-scale benchmark setup of a promising fast reactor. The system allows PIV measurements under the complicated conditions of the reactor benchmark: a complex geometry, reflections and distortions of the laser sheet, blackout regions, and a closed volume. The use of filtering techniques and image masking reduced the number of incorrect flow velocity vectors by an order of magnitude. A method was implemented for converting image coordinates and velocity fields into the reference frame of the reactor model using virtual 3D calibration targets, with no loss of accuracy compared to placing physical targets in the imaging area. Velocity fields measured in various regimes, both stationary (operating) and non-stationary (emergency), are presented.
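
    As an illustration of the kind of vector-field filtering mentioned above (a minimal sketch only; the normalized median test is a standard PIV outlier detector assumed here for illustration, not taken from the paper, and the threshold values are arbitrary), spurious velocity vectors can be flagged by comparing each vector to the median of its neighbors:

```python
import numpy as np

def normalized_median_test(u, v, eps=0.1, threshold=2.0):
    """Flag spurious PIV vectors using a 3x3 normalized median test.

    u, v : 2D arrays of velocity components on the interrogation grid.
    Returns a boolean mask, True where a vector is judged an outlier.
    """
    outliers = np.zeros(u.shape, dtype=bool)
    ny, nx = u.shape
    for comp in (u, v):
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                neigh = comp[j-1:j+2, i-1:i+2].ravel()
                neigh = np.delete(neigh, 4)          # drop the center vector
                med = np.median(neigh)
                resid = np.abs(neigh - med)
                r_center = np.abs(comp[j, i] - med) / (np.median(resid) + eps)
                if r_center > threshold:
                    outliers[j, i] = True
    return outliers

# Example: a smooth field with one injected outlier at (4, 4).
u = np.ones((8, 8)); v = np.zeros((8, 8))
u[4, 4] = 10.0
print(np.argwhere(normalized_median_test(u, v)))
```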

  14. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was merely described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model used benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking for improving quality in specialty hospitals, robust and structured designs are needed, including follow-up to check whether the benchmark study has led to improvements.

  15. Analysis of Students' Assessments in Middle School Curriculum Materials: Aiming Precisely at Benchmarks and Standards.

    ERIC Educational Resources Information Center

    Stern, Luli; Ahlgren, Andrew

    2002-01-01

    Project 2061 of the American Association for the Advancement of Science (AAAS) developed and field-tested a procedure for analyzing curriculum materials, including assessments, in terms of contribution to the attainment of benchmarks and standards. Using this procedure, Project 2061 produced a database of reports on nine science middle school…

  16. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  17. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  18. A method for deriving water-quality benchmarks using field data.

    PubMed

    Cormier, Susan M; Suter, Glenn W

    2013-02-01

    The authors describe a methodology that characterizes effects on individual genera observed in the field and estimates the concentration at which 5% of genera are adversely affected. Ionic strength, measured as specific conductance, is used to illustrate the methodology. Assuming some resilience in the population, 95% of the genera are afforded protection. The authors selected an unambiguous effect, the presence or absence of a genus at sampling locations. The absence of a genus, extirpation, is operationally defined as the concentration above which only 5% of the observations of a genus occur. The concentrations that cause extirpation of each genus are rank-ordered from least to greatest, and the benchmark is estimated as the 5th percentile of the distribution using two-point interpolation. When a full range of exposures and many taxa are included in the model of taxonomic sensitivity, the model broadly characterizes how species in general respond to a concentration gradient of the causal agent. This recognized U.S. Environmental Protection Agency methodology has many advantages. Observations from field studies include the full range of conditions, effects, species, and interactions that occur in the environment and can be used to model some causal relationships that laboratory studies cannot. Copyright © 2012 SETAC.
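
    The computation described above can be sketched as follows (a minimal illustration under assumptions, not the authors' code: the extirpation concentration for each genus is taken here as the 95th percentile of the specific-conductance values at which the genus was observed, and the benchmark as the 5th percentile of those genus-level values):

```python
import numpy as np

def extirpation_concentration(conductance_when_present, pct=95):
    """XC95: the concentration above which only 5% of a genus' observations occur.
    np.percentile uses linear (two-point) interpolation by default."""
    return np.percentile(conductance_when_present, pct)

def field_benchmark(xc95_by_genus, pct=5):
    """HC05: the 5th percentile of the rank-ordered genus extirpation values."""
    return np.percentile(np.sort(xc95_by_genus), pct)

# Toy data: specific conductance (uS/cm) at sites where each genus was observed.
rng = np.random.default_rng(1)
genera = {f"genus_{i}": rng.lognormal(mean=5.5 + 0.1 * i, sigma=0.6, size=200)
          for i in range(20)}
xc95 = [extirpation_concentration(obs) for obs in genera.values()]
print(f"benchmark (HC05) = {field_benchmark(xc95):.0f} uS/cm")
```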

  19. Benchmarking fully analytic DFT force fields for vibrational spectroscopy: A study on halogenated compounds

    NASA Astrophysics Data System (ADS)

    Pietropolli Charmet, Andrea; Cornaton, Yann

    2018-05-01

    This work presents an investigation of the theoretical predictions yielded by anharmonic force fields whose cubic and quartic force constants are computed analytically by means of density functional theory (DFT) using the recursive scheme developed by M. Ringholm et al. (J. Comput. Chem. 35 (2014) 622). Different functionals (namely B3LYP, PBE, PBE0 and PW86x) and basis sets were used for calculating the anharmonic vibrational spectra of two halomethanes. The benchmark analysis demonstrates the reliability and overall good performance of hybrid approaches, in which harmonic data obtained at the coupled-cluster singles and doubles level augmented by a perturbative estimate of the effects of connected triple excitations, CCSD(T), are combined with the fully analytic higher-order force constants yielded by DFT functionals. These methods lead to reliable and computationally affordable calculations of anharmonic vibrational spectra, with an accuracy comparable to that of hybrid force fields whose anharmonic parts are computed at the second-order Møller-Plesset perturbation theory (MP2) level using numerical differentiation, but without the corresponding issues of computational cost and numerical error.
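
    For context, a standard second-order vibrational perturbation theory (VPT2) relation is sketched below as background rather than taken from the paper; in such a hybrid scheme the harmonic wavenumbers come from CCSD(T) while the anharmonicity constants, which depend on the cubic and quartic force constants, come from DFT (the superscript labels are an assumed notation):

```latex
% VPT2 fundamental wavenumbers in a hybrid CCSD(T)/DFT scheme (assumed notation):
% \omega_i are harmonic wavenumbers, \chi_{ij} are anharmonicity constants
% derived from the cubic and quartic force constants.
\nu_i \;=\; \omega_i^{\mathrm{CCSD(T)}}
      \;+\; 2\,\chi_{ii}^{\mathrm{DFT}}
      \;+\; \tfrac{1}{2}\sum_{j \neq i} \chi_{ij}^{\mathrm{DFT}}
```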

  20. A systematic benchmark of the ab initio Bethe-Salpeter equation approach for low-lying optical excitations of small organic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruneval, Fabien; Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720; Department of Physics, University of California, Berkeley, California 94720

    2015-06-28

    The predictive power of the ab initio Bethe-Salpeter equation (BSE) approach, rigorously based on many-body Green’s function theory but incorporating information from density functional theory, has already been demonstrated for the optical gaps and spectra of solid-state systems. Interest in photoactive hybrid organic/inorganic systems has recently increased and so has the use of the BSE for computing neutral excitations of organic molecules. However, no systematic benchmarks of the BSE for neutral electronic excitations of organic molecules exist. Here, we study the performance of the BSE for the 28 small molecules in Thiel’s widely used time-dependent density functional theory benchmark set [Schreiber et al., J. Chem. Phys. 128, 134110 (2008)]. We observe that the BSE produces results that depend critically on the mean-field starting point employed in the perturbative approach. We find that this starting point dependence is mainly introduced through the quasiparticle energies obtained at the intermediate GW step and that with a judicious choice of starting mean-field, singlet excitation energies obtained from BSE are in excellent quantitative agreement with higher-level wavefunction methods. The quality of the triplet excitations is slightly less satisfactory.

  1. Vegetation composition and structure of southern coastal plain pine forests: An ecological comparison

    USGS Publications Warehouse

    Hedman, C.W.; Grace, S.L.; King, S.E.

    2000-01-01

    Longleaf pine (Pinus palustris) ecosystems are characterized by a diverse community of native groundcover species. Critics of plantation forestry claim that loblolly (Pinus taeda) and slash pine (Pinus elliottii) forests are devoid of native groundcover due to associated management practices. As a result of these practices, some believe that ecosystem functions characteristic of longleaf pine are lost under loblolly and slash pine plantation management. Our objective was to quantify and compare vegetation composition and structure of longleaf, loblolly, and slash pine forests of differing ages, management strategies, and land-use histories. Information from this study will further our understanding and lead to inferences about functional differences among pine cover types. Vegetation and environmental data were collected in 49 overstory plots across Southlands Experiment Forest in Bainbridge, GA. Nested plots, i.e. midstory, understory, and herbaceous, were replicated four times within each overstory plot. Over 400 species were identified. Herbaceous species richness was variable for all three pine cover types. Herbaceous richness for longleaf, slash, and loblolly pine averaged 15, 13, and 12 species per m2, respectively. Longleaf pine plots had significantly more (p < 0.029) herbaceous species and greater herbaceous cover (p < 0.001) than loblolly or slash pine plots. Longleaf and slash pine plots were otherwise similar in species richness and stand structure, both having lower overstory density, midstory density, and midstory cover than loblolly pine plots. Multivariate analyses provided additional perspectives on vegetation patterns. Ordination and classification procedures consistently placed herbaceous plots into two groups which we refer to as longleaf pine benchmark (34 plots) and non-benchmark (15 plots). Benchmark plots typically contained numerous herbaceous species characteristic of relic longleaf pine/wiregrass communities found in the area. Conversely, non-benchmark plots contained fewer species characteristic of relic longleaf pine/wiregrass communities and more ruderal species common to highly disturbed sites. The benchmark group included 12 naturally regenerated longleaf plots and 22 loblolly, slash, and longleaf pine plantation plots encompassing a broad range of silvicultural disturbances. Non-benchmark plots included eight afforested old-field plantation plots and seven cutover plantation plots. Regardless of overstory species, all afforested old fields were low either in native species richness or in abundance. Varying degrees of this groundcover condition were also found in some cutover plantation plots that were classified as non-benchmark. Environmental variables strongly influencing vegetation patterns included agricultural history and fire frequency. Results suggest that land-use history, particularly related to agriculture, has a greater influence on groundcover composition and structure in southern pine forests than more recent forest management activities or pine cover type. Additional research is needed to identify the potential for afforested old fields to recover native herbaceous species. In the interim, high-yield plantation management should initially target old-field sites which already support reduced numbers of groundcover species. Sites which have not been farmed in the past 50-60 years should be considered for longleaf pine restoration and multiple-use objectives, since they have the greatest potential for supporting diverse native vegetation. 
(C) 2000 Elsevier Science B.V.

  2. Weighting and Aggregation in Composite Indicator Construction: A Multiplicative Optimization Approach

    ERIC Educational Resources Information Center

    Zhou, P.; Ang, B. W.; Zhou, D. Q.

    2010-01-01

    Composite indicators (CIs) have increasingly been accepted as a useful tool for benchmarking, performance comparisons, policy analysis and public communication in many different fields. Several recent studies show that as a data aggregation technique in CI construction the weighted product (WP) method has some desirable properties. However, a…

  3. Benchmarking for On-Scalp MEG Sensors.

    PubMed

    Xie, Minshu; Schneiderman, Justin F; Chukharkin, Maxim L; Kalabukhov, Alexei; Riaz, Bushra; Lundqvist, Daniel; Whitmarsh, Stephen; Hamalainen, Matti; Jousmaki, Veikko; Oostenveld, Robert; Winkler, Dag

    2017-06-01

    We present a benchmarking protocol for quantitatively comparing emerging on-scalp magnetoencephalography (MEG) sensor technologies to their counterparts in state-of-the-art MEG systems. As a means of validation, we compare a high-critical-temperature superconducting quantum interference device (high-Tc SQUID) with the low-Tc SQUIDs of an Elekta Neuromag TRIUX system in MEG recordings of auditory and somatosensory evoked fields (SEFs) on one human subject. We measure the expected signal gain for the auditory-evoked fields (deeper sources) and notice some unfamiliar features in the on-scalp sensor-based recordings of SEFs (shallower sources). The experimental results serve as a proof of principle for the benchmarking protocol. This approach is straightforward, general to various on-scalp MEG sensors, and convenient to use on human subjects. The unexpected features in the SEFs suggest on-scalp MEG sensors may reveal information about neuromagnetic sources that is otherwise difficult to extract from state-of-the-art MEG recordings. As the first systematically established on-scalp MEG benchmarking protocol, this method can be employed by magnetic sensor developers to prove the utility of their technology in MEG recordings. Further exploration of the SEFs with on-scalp MEG sensors may reveal unique information about their sources.

  4. Evaluation of CHO Benchmarks on the Arria 10 FPGA using Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow behind a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce hardware development time, as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. Benchmarking an OpenCL-based framework is an effective way to analyze system performance by studying the execution of benchmark applications. CHO is a suite of benchmark applications that provides support for OpenCL [1]. The authors presented CHO as an OpenCL port of the CHStone benchmark. Using the Altera OpenCL (AOCL) compiler to synthesize the benchmark applications, they listed the resource usage and performance of each kernel that could be successfully synthesized by the compiler. In this report, we evaluate the resource usage and performance of the CHO benchmark applications using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board that features an Arria 10 FPGA device. The focus of the report is to develop a better understanding of the resource usage and performance of the kernel implementations on Arria-10 FPGA devices compared to Stratix-5 FPGA devices. In addition, we also gain knowledge about the limitations of the current compiler when it fails to synthesize a benchmark application.
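
    To illustrate the programming model the abstract refers to (a minimal, hedged sketch: this host/kernel pair uses the generic OpenCL runtime through the pyopencl bindings on whatever device is available, whereas the report compiles kernels for the Arria 10 FPGA with the Intel FPGA SDK offline flow; the kernel and array names are illustrative), a kernel function and its host-side launch look like this:

```python
import numpy as np
import pyopencl as cl

# Minimal OpenCL kernel: each work-item adds one pair of elements.
KERNEL_SRC = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
"""

ctx = cl.create_some_context()          # picks any available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, KERNEL_SRC).build()
prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```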

  5. Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E; Haller, Merrick C; Ozkan-Haller, H Tuba

    2013-01-26

    This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed buoys that are realistic, lab-scale floating power converters. The array of buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual buoys as well as resolve the 3D scattered wave field, thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data, combined with the device motion tracking, will provide the information necessary for array design in order to balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing models of wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays. Under the proposed project we will initiate high-resolution (fine-scale, very near-field) fluid/structure interaction simulations of buoy motions, as well as array-scale, phase-resolving wave scattering simulations. These modeling efforts will utilize state-of-the-art research-quality models, which have not yet been brought to bear on this complex problem of wave/structure interaction in large arrays.

  6. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    USGS Publications Warehouse

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie A.; Reed, Sasha C.; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-01-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  7. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  8. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    NASA Astrophysics Data System (ADS)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; Reed, Sasha; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-10-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  9. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    DOE PAGES

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; ...

    2017-10-23

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  10. Key aspects of cost effective collector and solar field design

    NASA Astrophysics Data System (ADS)

    von Reeken, Finn; Nicodemo, Dario; Keck, Thomas; Weinrebe, Gerhard; Balz, Markus

    2016-05-01

    A study has been performed in which different key parameters influencing solar field cost are varied. Using the levelised cost of energy as the figure of merit, it is shown that parameters such as GoToStow wind speed, heliostat stiffness, and tower height should, from an economic point of view, be adapted to the respective site conditions. The benchmark site Redstone (Northern Cape Province, South Africa) has been compared to an alternate site close to Phoenix (AZ, USA) with regard to site conditions and their effect on cost-effective collector and solar field design.
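
    As a reminder of the figure of merit used in such studies (a generic, simplified LCOE formula with illustrative numbers, not values or methodology from the paper), the levelised cost of energy combines annualized capital cost, annual operating cost, and annual energy yield:

```python
def lcoe(capex, opex_per_year, annual_energy_mwh, discount_rate, lifetime_years):
    """Simplified levelised cost of energy, in currency units per MWh.

    capex              : total up-front capital cost
    opex_per_year      : fixed annual operation and maintenance cost
    annual_energy_mwh  : net annual electricity yield
    """
    # Capital recovery factor converts CAPEX into an equivalent annual payment.
    r, n = discount_rate, lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
    return (capex * crf + opex_per_year) / annual_energy_mwh

# Illustrative numbers only (not from the study).
print(f"{lcoe(200e6, 3e6, 400_000, 0.07, 25):.1f} per MWh")
```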

  11. Omega Hawaii Antenna System: Modification and Validation Tests. Volume 2. Data Sheets.

    DTIC Science & Technology

    1979-10-19

    [Garbled data-sheet fragment: Data Sheet 5 (DS-5), radio field intensity measurements, Omega Station Hawaii; several candidate sites are noted as "not considered for a benchmark because of potential hotel construction."]

  12. Developing a benchmark for emotional analysis of music

    PubMed Central

    Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network-based approaches combined with large feature sets work best for dynamic MER. PMID:28282400

  13. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse, and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations, and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
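
    The weak-scaling analysis mentioned above can be summarized with a simple efficiency metric (a generic definition with made-up timings, not data from the report): with the problem size per core held fixed, efficiency is the single-core wall time divided by the wall time on N cores.

```python
def weak_scaling_efficiency(wall_times_by_cores):
    """Weak-scaling efficiency E(N) = T(1) / T(N) for fixed work per core."""
    t1 = wall_times_by_cores[1]
    return {n: t1 / t for n, t in sorted(wall_times_by_cores.items())}

# Illustrative timings in seconds (not measurements from the MFiX study).
timings = {1: 100.0, 8: 105.0, 64: 118.0, 512: 140.0, 1024: 180.0}
for n, eff in weak_scaling_efficiency(timings).items():
    print(f"{n:5d} cores: efficiency = {eff:.2f}")
```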

  14. Developing a benchmark for emotional analysis of music.

    PubMed

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network-based approaches combined with large feature sets work best for dynamic MER.
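
    For a sense of how dynamic MER predictions are typically scored against such annotations (a generic sketch; the 2 Hz frame rate matches the data set description, but the metric choice, arrays, and values here are illustrative, not the campaign's official evaluation code):

```python
import numpy as np

def rmse_per_song(pred, truth):
    """Root-mean-square error between predicted and annotated values
    (e.g. valence or arousal) sampled at 2 Hz over one song excerpt."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# A 45-second excerpt annotated at 2 Hz gives 90 frames per dimension.
rng = np.random.default_rng(0)
truth = np.clip(np.cumsum(rng.normal(0, 0.02, 90)), -1, 1)   # annotated arousal
pred = truth + rng.normal(0, 0.1, 90)                         # model output
print(f"arousal RMSE = {rmse_per_song(pred, truth):.3f}")
```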

  15. Cove benchmark calculations using SAGUARO and FEMTRAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, R.R.; Martinez, M.J.

    1986-10-01

    Three small-scale, time-dependent, benchmarking calculations have been made using the finite element codes SAGUARO, to determine hydraulic head and water velocity profiles, and FEMTRAN, to predict the solute transport. Sand and hard rock porous materials were used. Time scales for the problems, which ranged from tens of hours to thousands of years, have posed no particular difficulty for the two codes. Studies have been performed to determine the effects of computational mesh, boundary conditions, velocity formulation and SAGUARO/FEMTRAN code-coupling on water and solute transport. Results showed that mesh refinement improved mass conservation. Varying the drain-tile size in COVE 1N had a weak effect on the rate at which the tile field drained. Excellent agreement with published COVE 1N data was obtained for the hydrological field and reasonable agreement for the solute-concentration predictions. The question remains whether these types of calculations can be carried out on repository-scale problems using material characteristic curves representing tuff with fractures.

  16. Treatment planning for spinal radiosurgery : A competitive multiplatform benchmark challenge.

    PubMed

    Moustakis, Christos; Chan, Mark K H; Kim, Jinkoo; Nilsson, Joakim; Bergman, Alanah; Bichay, Tewfik J; Palazon Cano, Isabel; Cilla, Savino; Deodato, Francesco; Doro, Raffaela; Dunst, Jürgen; Eich, Hans Theodor; Fau, Pierre; Fong, Ming; Haverkamp, Uwe; Heinze, Simon; Hildebrandt, Guido; Imhoff, Detlef; de Klerck, Erik; Köhn, Janett; Lambrecht, Ulrike; Loutfi-Krauss, Britta; Ebrahimi, Fatemeh; Masi, Laura; Mayville, Alan H; Mestrovic, Ante; Milder, Maaike; Morganti, Alessio G; Rades, Dirk; Ramm, Ulla; Rödel, Claus; Siebert, Frank-Andre; den Toom, Wilhelm; Wang, Lei; Wurster, Stefan; Schweikard, Achim; Soltys, Scott G; Ryu, Samuel; Blanck, Oliver

    2018-05-25

    To investigate the quality of treatment plans for spinal radiosurgery derived from different planning and delivery systems. The comparisons include robotic delivery and intensity modulated arc therapy (IMAT) approaches. Multiple centers with identical systems were used to reduce bias based on individual planning abilities. The study used a series of three complex spine lesions to maximize the differences in plan quality among the various approaches. Internationally recognized experts in the field of treatment planning and spinal radiosurgery from 12 centers with various treatment planning systems participated. For a complex spinal lesion, the results were compared against a previously published benchmark plan derived for CyberKnife radiosurgery (CKRS) using circular cones only. For two additional cases, one with multiple small lesions infiltrating three vertebrae and one with a single vertebra lesion treated with integrated boost, the results were compared against a benchmark plan generated using a best practice guideline for CKRS. All plans were rated based on a previously established ranking system. All 12 centers could reach equality (n = 4) or outperform (n = 8) the benchmark plan. For the multiple lesions and the single vertebra lesion plans, only 5 and 3 of the 12 centers, respectively, reached equality or outperformed the best practice benchmark plan. However, the absolute differences in target and critical structure dosimetry were small and strongly planner-dependent rather than system-dependent. Overall, gantry-based IMAT with simple planning techniques (two coplanar arcs) produced faster treatments and significantly outperformed static gantry intensity modulated radiation therapy (IMRT) and multileaf collimator (MLC) or non-MLC CKRS treatment plan quality regardless of the system (mean rank out of 4 was 1.2 vs. 3.1, p = 0.002). High plan quality for complex spinal radiosurgery was achieved among all systems and all participating centers in this planning challenge. This study concludes that simple IMAT techniques can generate significantly better plan quality compared to previously established CKRS benchmarks.

  17. A new numerical benchmark for variably saturated variable-density flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Guevara, Carlos; Graf, Thomas

    2016-04-01

    In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations can lead to potentially unstable situations, in which a dense fluid overlies a less dense fluid. Such situations can produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upward flow of freshwater (Simmons et al., Transp. Porous Medium, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model can be validated (Diersch and Kolditz, Adv. Water Resour., 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Medium, 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
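
    Dense fingering in such models is commonly triggered by spatially correlated random perturbations of hydraulic conductivity; the sketch below generates one realization of a log-normal conductivity field (a generic geostatistical illustration under assumed parameters, not the field generator or values used in the study):

```python
import numpy as np

def lognormal_k_field(nx, ny, mean_log10_k=-4.0, sigma_log10_k=0.25,
                      corr_cells=5, seed=0):
    """One realization of a spatially correlated log-normal hydraulic
    conductivity field (m/s), smoothed with a simple moving-average kernel
    to impose a finite correlation length."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(ny + corr_cells, nx + corr_cells))
    # Moving-average smoothing as a crude spatial correlation model.
    kernel = np.ones((corr_cells, corr_cells)) / corr_cells**2
    smooth = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            smooth[j, i] = np.sum(noise[j:j+corr_cells, i:i+corr_cells] * kernel)
    smooth = (smooth - smooth.mean()) / smooth.std()
    return 10.0 ** (mean_log10_k + sigma_log10_k * smooth)

K = lognormal_k_field(nx=60, ny=30)
print(K.shape, K.min(), K.max())
```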

  18. Benchmarking Multilayer-HySEA model for landslide generated tsunami. HTHMP validation process.

    NASA Astrophysics Data System (ADS)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, and in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated the NTHMP to call for benchmarking of models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models with seismic sources. To perform this validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid-slide and deformable-slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed at the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by the NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and the University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  19. Academic health center teaching hospitals in transition: a perspective from the field.

    PubMed

    Cyphert, S T; Colloton, J W; Levey, S

    1997-01-01

    A study of 11 Academic Health Center Teaching Hospitals (ATHs) in 11 states found that cost reduction programs, internal reorganizations, reengineering, benchmarking, and broadened entrepreneurial activity were prominent among the strategic initiatives reported for dealing with an increasingly turbulent environment. Although none of the ATHs had experienced negative net margins, we conclude that today's competitive healthcare system requires that ATHs be reimbursed separately for their educational and other societally related costs to assist them in competing on a level playing field.

  20. Benchmark Study of Global Clean Energy Manufacturing

    Science.gov Websites

    Through a first-of-its-kind benchmark study of global clean energy manufacturing, NREL examined four clean energy technologies, including wind turbine components.

  1. Designs of Empirical Evaluations of Nonexperimental Methods in Field Settings.

    PubMed

    Wong, Vivian C; Steiner, Peter M

    2018-01-01

    Over the last three decades, a research design has emerged to evaluate the performance of nonexperimental (NE) designs and design features in field settings. It is called the within-study comparison (WSC) approach, or the design replication study. In the traditional WSC design, treatment effects from a randomized experiment are compared to those produced by an NE approach that shares the same target population. The nonexperiment may be a quasi-experimental design, such as a regression-discontinuity or an interrupted time-series design, or an observational study approach that includes matching methods, standard regression adjustments, and difference-in-differences methods. The goals of the WSC are to determine whether the nonexperiment can replicate results from a randomized experiment (which provides the causal benchmark estimate), and the contexts and conditions under which these methods work in practice. This article presents a coherent theory of the design and implementation of WSCs for evaluating NE methods. It introduces and identifies the multiple purposes of WSCs, required design components, common threats to validity, design variants, and causal estimands of interest in WSCs. It highlights two general approaches for empirical evaluations of methods in field settings: WSC designs with independent, and with dependent, benchmark and NE arms. The article highlights the advantages and disadvantages of each approach, and the conditions and contexts under which each approach is optimal for addressing methodological questions.

  2. An Expert System for Processing Uncorrelated Satellite Tracks

    DTIC Science & Technology

    1992-12-17

    [Garbled OCR fragment from the report; recoverable elements include a cited reference, "Neural Networks: Benchmarking Studies," Proceedings of the IEEE International Conference on Neural Networks, pp. 64-65, 1988, and the subject keywords Artificial Intelligence, Expert Systems, Neural Networks, and Orbital Mechanics.]

  3. Three essays of economics and policy on renewable energy and energy efficiency

    NASA Astrophysics Data System (ADS)

    Meng, Yuxi

    In the face of crises in energy security, environmental contamination, and climate change, energy saving and carbon emission reduction have become top global concerns. To address these concerns, many countries are focusing on renewable energy and energy efficiency, which are also my research focus. The dissertation consists of three papers, covering the innovation behavior of renewable energy producers, the impact of renewable energy policy on renewable innovation, and the market response to an energy-efficiency building benchmarking ordinance. The main conclusions are as follows. First, through a study of foreign patenting intention using the Chinese solar PV industry as a case, I examined the patenting behavior of 15 non-Chinese solar PV producers filing solar PV patents in China, and found that foreign firms may file patents in the home country or production base of their competitors in order to gain a competitive edge in the global market. The second study concerns the "Innovation by Generating" process. Focusing on the Renewable Portfolio Standard (RPS) in the United States and innovation performance within each state, I found that wind power generation in RPS states developed rapidly after the adoption of RPS, and that the "Innovation by Generating" effect is more significant in solar PV technologies; in general, however, innovation in the two technology groups was not strongly encouraged by RPS. My last study examines benchmarking law and market response in the context of the Philadelphia Benchmarking Law. By comparing the rental rates of LEED/EnergyStar buildings and ordinary buildings in Philadelphia before and after adoption of the building energy efficiency benchmarking law, I find that the passage of the Philadelphia Benchmarking Law may have helped improve public awareness and understanding of building energy efficiency information.

  4. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  5. Benchmark solution of the dynamic response of a spherical shell at finite strain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Versino, Daniele; Brock, Jerry S.

    2016-09-28

    Our paper describes the development of high fidelity solutions for the study of homogeneous (elastic and inelastic) spherical shells subject to dynamic loading and undergoing finite deformations. The goal of the activity is to provide high accuracy results that can be used as benchmark solutions for the verification of computational physics codes. Furthermore, the equilibrium equations for the geometrically non-linear problem are solved through mode expansion of the displacement field and the boundary conditions are enforced in a strong form. Time integration is performed through high-order implicit Runge–Kutta schemes. Finally, we evaluate accuracy and convergence of the proposed method by means of numerical examples with finite deformations and material non-linearities and inelasticity.

  6. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  7. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  8. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  9. Development of a flattening filter free multiple source model for use as an independent, Monte Carlo, dose calculation, quality assurance tool for clinical trials.

    PubMed

    Faught, Austin M; Davidson, Scott E; Popple, Richard; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core-Houston (IROC-H) Quality Assurance Center (formerly the Radiological Physics Center) has reported varying levels of compliance from their anthropomorphic phantom auditing program. IROC-H studies have suggested that one source of disagreement between institution submitted calculated doses and measurement is the accuracy of the institution's treatment planning system dose calculations and heterogeneity corrections used. In order to audit this step of the radiation therapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Varian flattening filter free (FFF) 6 MV and FFF 10 MV therapeutic x-ray beams were commissioned based on central axis depth dose data from a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open-field measurements in a water tank for field sizes ranging from 3 × 3 cm² to 40 × 40 cm². The models were then benchmarked against IROC-H's anthropomorphic head and neck phantom and lung phantom measurements. Validation results, assessed with a ±2%/2 mm gamma criterion, showed average agreement of 99.9% and 99.0% for central axis depth dose data for FFF 6 MV and FFF 10 MV models, respectively. Dose profile agreement using the same evaluation technique averaged 97.8% and 97.9% for the respective models. Phantom benchmarking comparisons were evaluated with a ±3%/2 mm gamma criterion, and agreement averaged 90.1% and 90.8% for the respective models. Multiple source models for Varian FFF 6 MV and FFF 10 MV beams have been developed, validated, and benchmarked for inclusion in an independent dose calculation quality assurance tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
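
    The ±2%/2 mm and ±3%/2 mm criteria quoted above are gamma-index comparisons in the sense of Low et al. As a point of reference only, and not the IROC-H implementation, a minimal one-dimensional Python sketch of such a comparison might look like the following; the profiles, dose tolerance, and distance-to-agreement values are illustrative placeholders.

      import numpy as np

      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dist_tol_mm=2.0):
          # Simplified 1D global gamma index: positions in mm, doses normalized
          # so the reference maximum is 1.0. A point "passes" when gamma <= 1.
          gammas = np.empty_like(d_ref)
          for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
              dist2 = ((x_eval - xr) / dist_tol_mm) ** 2
              dose2 = ((d_eval - dr) / dose_tol) ** 2
              gammas[i] = np.sqrt(np.min(dist2 + dose2))
          return gammas

      # Toy example: a measured profile shifted by 1 mm and scaled by 1%.
      x = np.linspace(-50, 50, 201)
      ref = np.exp(-(x / 30.0) ** 2)
      ev = 1.01 * np.exp(-((x - 1.0) / 30.0) ** 2)
      passing = np.mean(gamma_1d(x, ref, x, ev) <= 1.0) * 100
      print(f"gamma passing rate: {passing:.1f}%")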

  10. SU-G-BRC-17: Using Generalized Mean for Equivalent Square Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S; Fan, Q; Lei, Y

    Purpose: Equivalent Square (ES) is a widely used concept in radiotherapy. It enables us to determine many important quantities for a rectangular treatment field, without measurement, based on the corresponding values from its ES field. In this study, we propose a Generalized Mean (GM) type ES formula and compare it with other established formulae using benchmark datasets. Methods: Our GM approach is expressed as ES=(w•fx^α+(1-w)•fy^α)^(1/α), where fx and fy are the field sizes, α is a power index, and w is a weighting factor. When α=−1 it reduces to the well-known Sterling-type ES formulae. In our study, α and w are determined through least-squares fitting. The Akaike Information Criterion (AIC) was used to benchmark the performance of each formula. The BJR (Supplement 17) ES field table for X-ray PDDs and the open-field output factor tables in the Varian TrueBeam representative dataset were used for validation. Results: Switching from α=−1 to α=−1.25 achieved a 20% reduction in the standard deviation of the residual error in ES estimation for the BJR dataset. The maximum relative residual error was reduced from ∼3% (Sterling formula) or ∼2% (Vadash/Bjarngard formula) down to ∼1% with the GM formula for open fields of all energies and rectangular field sizes from 3 cm to 40 cm in the Varian dataset. The improvement of the GM over the Sterling-type ES formulae is particularly noticeable for very elongated rectangular fields with short width. AIC analysis confirmed the superior performance of the GM formula after taking into account the expanded parameter space. Conclusion: The GM significantly outperforms Sterling-type formulae at slightly increased computational cost. The GM calculation may remove the need for data measurement for many rectangular fields and hence shorten the Linac commissioning process. Improved dose calculation accuracy is also expected by adopting the GM formula into treatment planning and secondary MU check systems.
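
    As a purely illustrative aid to the GM formula quoted above, a short Python sketch of the estimator and of a least-squares fit of α and w follows. The tabulated (fx, fy, ES) triples and the starting values are hypothetical stand-ins, not the BJR data or the fitted parameters from the abstract.

      import numpy as np
      from scipy.optimize import least_squares

      def es_generalized_mean(fx, fy, alpha, w):
          # Generalized-mean ES: (w*fx**a + (1-w)*fy**a)**(1/a)
          return (w * fx**alpha + (1.0 - w) * fy**alpha) ** (1.0 / alpha)

      def es_sterling(fx, fy):
          # Sterling-type formula, the alpha = -1, w = 0.5 special case.
          return 2.0 * fx * fy / (fx + fy)

      # Hypothetical (fx, fy, ES) triples standing in for a BJR-style table (cm).
      fx = np.array([4.0, 5.0, 10.0, 20.0, 30.0])
      fy = np.array([20.0, 30.0, 40.0, 5.0, 10.0])
      es_table = np.array([6.9, 8.9, 15.8, 8.9, 15.8])

      def residuals(p):
          alpha, w = p
          return es_generalized_mean(fx, fy, alpha, w) - es_table

      fit = least_squares(residuals, x0=[-1.0, 0.5])
      print("fitted alpha, w:", fit.x)
      print("Sterling residuals:", es_sterling(fx, fy) - es_table)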

  11. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation, genetic algorithms and differential evolution, to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  12. Developing a molecular dynamics force field for both folded and disordered protein states.

    PubMed

    Robustelli, Paul; Piana, Stefano; Shaw, David E

    2018-05-07

    Molecular dynamics (MD) simulation is a valuable tool for characterizing the structural dynamics of folded proteins and should be similarly applicable to disordered proteins and proteins with both folded and disordered regions. It has been unclear, however, whether any physical model (force field) used in MD simulations accurately describes both folded and disordered proteins. Here, we select a benchmark set of 21 systems, including folded and disordered proteins, simulate these systems with six state-of-the-art force fields, and compare the results to over 9,000 available experimental data points. We find that none of the tested force fields simultaneously provided accurate descriptions of folded proteins, of the dimensions of disordered proteins, and of the secondary structure propensities of disordered proteins. Guided by simulation results on a subset of our benchmark, however, we modified parameters of one force field, achieving excellent agreement with experiment for disordered proteins, while maintaining state-of-the-art accuracy for folded proteins. The resulting force field, a99SB-disp, should thus greatly expand the range of biological systems amenable to MD simulation. A similar approach could be taken to improve other force fields. Copyright © 2018 the Author(s). Published by PNAS.

  13. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.

    2011-09-01

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  14. The use of the Hirsch index in benchmarking hepatic surgery research.

    PubMed

    Cucchetti, Alessandro; Mazzotti, Federico; Pellegrini, Sara; Cescon, Matteo; Maroni, Lorenzo; Ercolani, Giorgio; Pinna, Antonio Daniele

    2013-10-01

    The Hirsch index (h-index) is recognized as an effective way to summarize an individual's scientific research output. However, a benchmark for evaluating surgeon scientists in the field of hepatic surgery is still not available. A total of 3,251 authors who published between 1949 and 2011 were identified using the Scopus identification number. The h-index, the total number of cited documents, the total number of citations, and the scientific age were calculated for each author using both Scopus and Google Scholar. The median h-index was 6 and the median scientific age, assessed with Google Scholar, was 19 years. The numbers of cited documents, numbers of citations, and h-indexes obtained from Scopus and Google Scholar showed good correlation with one another; however, the results from the 2 databases were modified in different ways by scientific age. By plotting scientific age against h-index percentiles, an h-index growth chart was provided for both the Scopus database and Google Scholar. This analysis provides a first benchmark to assess surgeon scientists' productivity in the field of liver surgery. Copyright © 2013 Elsevier Inc. All rights reserved.
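
    For readers unfamiliar with the metric, the h-index itself is simple to compute from a list of per-paper citation counts; the short Python sketch below uses one common definition, with citation counts invented purely for illustration.

      def h_index(citations):
          # Largest h such that the author has h papers with >= h citations each.
          counts = sorted(citations, reverse=True)
          h = 0
          for rank, c in enumerate(counts, start=1):
              if c >= rank:
                  h = rank
              else:
                  break
          return h

      print(h_index([25, 8, 5, 4, 3, 2, 0]))  # -> 4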

  15. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chitarin, G.; University of Padova, Dept. of Management and Engineering, strad. S. Nicola, 36100 Vicenza; Agostinetti, P.

    2011-09-26

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  16. Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process

    NASA Astrophysics Data System (ADS)

    Macias, Jorge

    2017-04-01

    In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared to hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained for Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  17. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  18. Benchmarking of neutron production of heavy-ion transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, I.; Ronningen, R. M.; Heilbronn, L.

    Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  19. Validation of a three-dimensional viscous analysis of axisymmetric supersonic inlet flow fields

    NASA Technical Reports Server (NTRS)

    Benson, T. J.; Anderson, B. H.

    1983-01-01

    A three-dimensional viscous marching analysis for supersonic inlets was developed. To verify this analysis, several benchmark axisymmetric test configurations were studied and are compared to experimental data. Detailed two-dimensional results for shock-boundary layer interactions are presented for flows with and without boundary layer bleed. Three-dimensional calculations of a cone at angle of attack and a full inlet at angle of attack are also discussed and evaluated. Results of the calculations demonstrate the code's ability to predict complex flow fields and establish guidelines for future calculations using similar codes.

  20. Study of blood flow in several benchmark micro-channels using a two-fluid approach.

    PubMed

    Wu, Wei-Tao; Yang, Fang; Antaki, James F; Aubry, Nadine; Massoudi, Mehrdad

    2015-10-01

    It is known that in a vessel whose characteristic dimension (e.g., its diameter) is in the range of 20 to 500 microns, blood behaves as a non-Newtonian fluid, exhibiting complex phenomena, such as shear-thinning and stress relaxation, as well as multi-component behaviors, such as the Fahraeus effect, plasma-skimming, etc. For describing these non-Newtonian and multi-component characteristics of blood, using the framework of mixture theory, a two-fluid model is applied, where the plasma is treated as a Newtonian fluid and the red blood cells (RBCs) are treated as a shear-thinning fluid. A computational fluid dynamic (CFD) simulation incorporating the constitutive model was implemented using OpenFOAM® in which benchmark problems including a sudden expansion and various driven slots and crevices were studied numerically. The numerical results exhibited good agreement with the experimental observations with respect to both the velocity field and the volume fraction distribution of RBCs.

  1. Benchmarking hardware architecture candidates for the NFIRAOS real-time controller

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm; Kerley, Dan; Herriot, Glen; Véran, Jean-Pierre

    2014-07-01

    As a part of the trade study for the Narrow Field Infrared Adaptive Optics System, the adaptive optics system for the Thirty Meter Telescope, we investigated the feasibility of performing real-time control computation using a Linux operating system and Intel Xeon E5 CPUs. We also investigated a Xeon Phi based architecture which allows higher levels of parallelism. This paper summarizes both the CPU based real-time controller architecture and the Xeon Phi based RTC. The Intel Xeon E5 CPU solution meets the requirements and performs the computation for one AO cycle in an average of 767 microseconds. The Xeon Phi solution did not meet the 1200 microsecond time requirement and also suffered from unpredictable execution times. More detailed benchmark results are reported for both architectures.
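
    The figures above are averages over many AO cycles, and the Xeon Phi result shows why worst-case latency matters as much as the mean for a real-time controller. A generic sketch of how one might time a representative reconstructor matrix-vector multiply is given below; the matrix sizes and iteration count are arbitrary stand-ins, not the NFIRAOS dimensions or benchmarking procedure.

      import time
      import numpy as np

      # Arbitrary stand-in sizes; a real RTC uses far larger, tuned,
      # multi-threaded kernels.
      n_slopes, n_actuators = 4096, 2048
      R = np.random.rand(n_actuators, n_slopes).astype(np.float32)
      s = np.random.rand(n_slopes).astype(np.float32)

      samples = []
      for _ in range(1000):
          t0 = time.perf_counter()
          a = R @ s                      # one pseudo "AO cycle" of work
          samples.append((time.perf_counter() - t0) * 1e6)

      print(f"mean {np.mean(samples):.0f} us, worst case {np.max(samples):.0f} us")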

  2. Numerical simulation of air distribution in a room with a sidewall jet under benchmark test conditions

    NASA Astrophysics Data System (ADS)

    Zasimova, Marina; Ivanov, Nikolay

    2018-05-01

    The goal of the study is to validate Large Eddy Simulation (LES) data on mixing ventilation in an isothermal room at conditions of benchmark experiments by Hurnik et al. (2015). The focus is on the accuracy of the mean and rms velocity fields prediction in the quasi-free jet zone of the room with 3D jet supplied from a sidewall rectangular diffuser. Calculations were carried out using the ANSYS Fluent 16.2 software with an algebraic wall-modeled LES subgrid-scale model. CFD results on the mean velocity vector are compared with the Laser Doppler Anemometry data. The difference between the mean velocity vector and the mean air speed in the jet zone, both LES-computed, is presented and discussed.

  3. Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.

    PubMed

    Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H

    2005-01-01

    A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment that was performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials was placed downstream of, and laterally to, a copper target, intercepting a positively charged mixed hadron beam with a momentum of 120 GeV c⁻¹. Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.

  4. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
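
    PMLB is distributed as a Python package; to the best of my recollection of its interface (treat the exact function names and arguments as assumptions to verify against the current PMLB documentation), a dataset can be pulled and fed to a scikit-learn estimator roughly as follows.

      # pip install pmlb scikit-learn
      from pmlb import fetch_data              # assumed API; check the PMLB docs
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # Pull one benchmark dataset as a feature matrix X and label vector y.
      X, y = fetch_data('mushroom', return_X_y=True)

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      scores = cross_val_score(clf, X, y, cv=5)
      print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))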

  5. An automated benchmarking platform for MHC class II binding prediction methods.

    PubMed

    Andreatta, Massimo; Trolle, Thomas; Yan, Zhen; Greenbaum, Jason A; Peters, Bjoern; Nielsen, Morten

    2018-05-01

    Computational methods for the prediction of peptide-MHC binding have become an integral and essential component for candidate selection in experimental T cell epitope discovery studies. The sheer number of published prediction methods, and the often discordant reports on their performance, pose a considerable quandary to the experimentalist who needs to choose the best tool for their research. With the goal of providing an unbiased, transparent evaluation of the state-of-the-art in the field, we created an automated platform to benchmark peptide-MHC class II binding prediction tools. The platform evaluates the absolute and relative predictive performance of all participating tools on data newly entered into the Immune Epitope Database (IEDB) before they are made public, thereby providing a frequent, unbiased assessment of available prediction tools. The benchmark runs on a weekly basis, is fully automated, and displays up-to-date results on a publicly accessible website. The initial benchmark described here included six commonly used prediction servers, but other tools are encouraged to join with a simple sign-up procedure. Performance evaluation on 59 data sets composed of over 10 000 binding affinity measurements suggested that NetMHCIIpan is currently the most accurate tool, followed by NN-align and the IEDB consensus method. Weekly reports on the participating methods can be found online at: http://tools.iedb.org/auto_bench/mhcii/weekly/. mniel@bioinformatics.dtu.dk. Supplementary data are available at Bioinformatics online.

  6. Assessing Ecosystem Model Performance in Semiarid Systems

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
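
    The root-mean-square error and correlation statistics mentioned above are straightforward to reproduce once model output and the flux-tower benchmark are aligned on a common time axis. The Python sketch below uses synthetic NEE series in place of real PEcAn output and tower data, purely to illustrate the comparison.

      import numpy as np

      def benchmark_stats(modeled, observed):
          # Root-mean-square error and Pearson correlation between aligned series.
          modeled, observed = np.asarray(modeled), np.asarray(observed)
          rmse = np.sqrt(np.mean((modeled - observed) ** 2))
          corr = np.corrcoef(modeled, observed)[0, 1]
          return rmse, corr

      # Synthetic daily NEE standing in for tower observations and model output.
      t = np.arange(365)
      observed = -2.0 * np.sin(2 * np.pi * t / 365) + np.random.normal(0, 0.5, t.size)
      modeled = -1.2 * np.sin(2 * np.pi * (t - 30) / 365) + 0.8   # biased, phase-shifted model

      rmse, corr = benchmark_stats(modeled, observed)
      print(f"RMSE = {rmse:.2f}, r = {corr:.2f}")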

  7. Numerical Prediction of Signal for Magnetic Flux Leakage Benchmark Task

    NASA Astrophysics Data System (ADS)

    Lunin, V.; Alexeevsky, D.

    2003-03-01

    Numerical results predicted by the finite element method based code are presented. The nonlinear magnetic time-dependent benchmark problem proposed by the World Federation of Nondestructive Evaluation Centers involves numerical prediction of the normal (radial) component of the leaked field in the vicinity of two practically rectangular notches machined on a rotating steel pipe (with known nonlinear magnetic characteristic). One notch is located on the external surface of the pipe and the other on the internal surface; both are oriented axially.

  8. Theoretical research program to study chemical reactions in AOTV bow shock tubes

    NASA Technical Reports Server (NTRS)

    Taylor, P.

    1986-01-01

    Progress in the development of computational methods for the characterization of chemical reactions in aerobraking orbit transfer vehicle (AOTV) propulsive flows is reported. Two main areas of code development were undertaken: (1) the implementation of CASSCF (complete active space self-consistent field) and SCF (self-consistent field) analytical first derivatives on the CRAY X-MP; and (2) the installation of the complete set of electronic structure codes on the CRAY 2. In the area of application calculations the main effort was devoted to performing full configuration-interaction calculations and using these results to benchmark other methods. Preprints describing some of the systems studied are included.

  9. Peridynamic thermal diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oterkus, Selda; Madenci, Erdogan, E-mail: madenci@email.arizona.edu; Agwai, Abigail

    This study presents the derivation of ordinary state-based peridynamic heat conduction equation based on the Lagrangian formalism. The peridynamic heat conduction parameters are related to those of the classical theory. An explicit time stepping scheme is adopted for numerical solution of various benchmark problems with known solutions. It paves the way for applying the peridynamic theory to other physical fields such as neutronic diffusion and electrical potential distribution.

  10. Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark

    NASA Astrophysics Data System (ADS)

    Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.

    2014-12-01

    Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (³²SO₄²⁻) over the heavier isotopologue (³⁴SO₄²⁻). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations in the representation of kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set is comprised of four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition such as microbial sulfate reduction and dispersion are included in the problem set. To date, participating RTM codes are: CRUNCHTOPE, TOUGHREACT, MIN3P and THE GEOCHEMIST'S WORKBENCH. Preliminary results from various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
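
    The isotope systematics that such benchmarks target are often illustrated with a closed-system Rayleigh model, in which the δ34S of the residual sulfate rises as the lighter isotopologue is preferentially consumed. The sketch below uses an assumed enrichment factor for illustration only; it is not a formulation prescribed by the benchmark problem set.

      import numpy as np

      def rayleigh_delta34S(delta0, f_remaining, epsilon):
          # Closed-system Rayleigh approximation: delta ~= delta0 + epsilon * ln(f),
          # with epsilon (per mil) negative because SRB prefer 32S-sulfate.
          return delta0 + epsilon * np.log(f_remaining)

      f = np.linspace(1.0, 0.1, 10)          # fraction of sulfate remaining
      print(rayleigh_delta34S(delta0=5.0, f_remaining=f, epsilon=-30.0))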

  11. Benchmarking in national health service procurement in Scotland.

    PubMed

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.

  12. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  13. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    PubMed

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  14. A Causal-Comparative Study of the Affects of Benchmark Assessments on Middle Grades Science Achievement Scores

    ERIC Educational Resources Information Center

    Galloway, Melissa Ritchie

    2016-01-01

    The purpose of this causal comparative study was to test the theory of assessment that relates benchmark assessments to the Georgia middle grades science Criterion Referenced Competency Test (CRCT) percentages, controlling for schools who do not administer benchmark assessments versus schools who do administer benchmark assessments for all middle…

  15. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    NASA Astrophysics Data System (ADS)

    Glerum, Anne; Thieulot, Cedric; Fraters, Menno; Blom, Constantijn; Spakman, Wim

    2018-03-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction.

    Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields.

    The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.

  16. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres.

    PubMed

    van Lent, Wineke A M; de Beer, Relinde D; van Harten, Wim H

    2010-08-31

    Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study and four radiotherapy departments were included in the final study. Per multiple case study a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals.

  17. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    PubMed Central

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study and four radiotherapy departments were included in the final study. Per multiple case study a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results We adapted and evaluated existing benchmarking processes through formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. Conclusions The improved benchmarking process and the success factors can produce relevant input to improve the operations management of specialty hospitals. PMID:20807408

  18. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  19. The Grad-Shafranov Reconstruction of Toroidal Magnetic Flux Ropes: Method Development and Benchmark Studies

    NASA Astrophysics Data System (ADS)

    Hu, Qiang

    2017-09-01

    We develop an approach of the Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in situ spacecraft measurements. The underlying theory is the GS equation that describes two-dimensional magnetohydrostatic equilibrium, as widely applied in fusion plasmas. The geometry is such that the arbitrary cross-section of the torus has rotational symmetry about the rotation axis, Z, with a major radius, r0. The magnetic field configuration is thus determined by a scalar flux function, Ψ, and a functional F that is a single-variable function of Ψ. The algorithm is implemented through a two-step approach: i) a trial-and-error process by minimizing the residue of the functional F(Ψ) to determine an optimal Z-axis orientation, and ii) for the chosen Z, a χ² minimization process resulting in a range of r0. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedure and to demonstrate the performance of the numerical GS solver, separately. For the cases presented, the errors in Z and r0 are 9° and 22%, respectively, and the relative percent error in the numerical GS solutions is smaller than 10%. We also make public the computer codes for these implementations and benchmark studies.
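
    Step (i) of the procedure hinges on a scalar residue that measures how nearly single-valued F(Ψ) is for a trial axis orientation; the authors' exact metric is defined in the paper, but the general idea (bin Ψ and quantify the spread of F within bins) can be sketched generically in Python as below. This is an illustration of the concept only, not the published implementation.

      import numpy as np

      def single_valuedness_residue(psi, F, n_bins=20):
          # Small when F collapses onto a single-valued function of psi,
          # large when the F(psi) scatter is wide. Illustrative metric only.
          psi, F = np.asarray(psi), np.asarray(F)
          bins = np.linspace(psi.min(), psi.max(), n_bins + 1)
          idx = np.clip(np.digitize(psi, bins) - 1, 0, n_bins - 1)
          spread = [np.ptp(F[idx == b]) for b in range(n_bins) if np.any(idx == b)]
          return np.mean(spread) / (np.ptp(F) + 1e-30)

      # In the trial-and-error step one would loop over candidate Z orientations,
      # recompute (psi, F) along the spacecraft path for each, and keep the
      # orientation with the smallest residue.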

  20. Feasibility of using a large Clinical Data Warehouse to automate the selection of diagnostic cohorts.

    PubMed

    Stephen, Reejis; Boxwala, Aziz; Gertman, Paul

    2003-01-01

    Data from Clinical Data Warehouses (CDWs) can be used for retrospective studies and for benchmarking. However, automated identification of cases from large datasets containing data items in free text fields is challenging. We developed an algorithm for categorizing pediatric patients presenting with respiratory distress into Bronchiolitis, Bacterial pneumonia and Asthma using clinical variables from a CDW. A feasibility study of this approach indicates that case selection may be automated.
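
    The kind of rule applied to free-text fields in such an algorithm can be as simple as keyword matching against diagnosis strings. The sketch below is a hypothetical illustration of that approach; the keyword patterns are invented and this is not the authors' validated algorithm, which would also draw on coded data and other clinical variables in the warehouse.

      import re

      # Hypothetical keyword rules for assigning a diagnostic cohort from a
      # free-text diagnosis field.
      RULES = [
          ("Bronchiolitis",       r"bronchiolit|rsv"),
          ("Bacterial pneumonia", r"pneumonia|consolidat"),
          ("Asthma",              r"asthma|wheez|reactive airway"),
      ]

      def assign_cohort(diagnosis_text):
          text = diagnosis_text.lower()
          for cohort, pattern in RULES:
              if re.search(pattern, text):
                  return cohort
          return "Unclassified"

      print(assign_cohort("RSV bronchiolitis, hypoxia"))   # -> Bronchiolitis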

  1. A quasi two-dimensional benchmark experiment for the solidification of a tin lead binary alloy

    NASA Astrophysics Data System (ADS)

    Wang, Xiao Dong; Petitpas, Patrick; Garnier, Christian; Paulin, Jean-Pierre; Fautrelle, Yves

    2007-05-01

    A horizontal solidification benchmark experiment with pure tin and a binary alloy of Sn-10 wt.%Pb is proposed. The experiment consists of solidifying a rectangular sample using two lateral heat exchangers, which allow the application of a controlled horizontal temperature difference. An array of fifty thermocouples placed on the lateral wall permits the determination of the instantaneous temperature distribution. The cases with a temperature gradient G=0 and cooling rates of 0.02 and 0.04 K/s are studied. The time evolution of the interfacial total heat flux and the temperature field are recorded and analyzed. This allows us to evaluate heat transfer evolution due to natural convection, as well as its influence on the solidification macrostructure. To cite this article: X.D. Wang et al., C. R. Mecanique 335 (2007).

  2. Engine dynamic analysis with general nonlinear finite element codes. II - Bearing element implementation, overall numerical characteristics and benchmarking

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Adams, M.; Lam, P.; Fertis, D.; Zeid, I.

    1982-01-01

    Second-year efforts within a three-year study to develop and extend finite element (FE) methodology to efficiently handle the transient/steady state response of rotor-bearing-stator structure associated with gas turbine engines are outlined. The two main areas aim at (1) implementing the squeeze film damper element into a general purpose FE code for testing and evaluation; and (2) determining the numerical characteristics of the FE-generated rotor-bearing-stator simulation scheme. The governing FE field equations are set out and the solution methodology is presented. The choice of ADINA as the general-purpose FE code is explained, and the numerical operational characteristics of the direct integration approach of FE-generated rotor-bearing-stator simulations are determined, including benchmarking, comparison of explicit vs. implicit methodologies of direct integration, and demonstration problems.

  3. Benchmark results in the 2D lattice Thirring model with a chemical potential

    NASA Astrophysics Data System (ADS)

    Ayyar, Venkitesh; Chandrasekharan, Shailesh; Rantaharju, Jarno

    2018-03-01

    We study the two-dimensional lattice Thirring model in the presence of a fermion chemical potential. Our model is asymptotically free and contains massive fermions that mimic a baryon and light bosons that mimic pions. Hence, it is a useful toy model for QCD, especially since it, too, suffers from a sign problem in the auxiliary field formulation in the presence of a fermion chemical potential. In this work, we formulate the model in both the world line and fermion-bag representations and show that the sign problem can be completely eliminated with open boundary conditions when the fermions are massless. Hence, we are able to accurately compute a variety of interesting quantities in the model, and these results could provide benchmarks for other methods that are being developed to solve the sign problem in QCD.

  4. Groundwater-quality data in the Klamath Mountains study unit, 2010: results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Belitz, Kenneth

    2014-01-01

    Groundwater quality in the 8,806-square-mile Klamath Mountains (KLAM) study unit was investigated by the U.S. Geological Survey (USGS) from October to December 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The KLAM study unit was the thirty-third study unit to be sampled as part of the GAMA-PBP. The GAMA Klamath Mountains study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined by the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the KLAM study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallower groundwater may be more vulnerable to surficial contamination. In the KLAM study unit, groundwater samples were collected from sites in Del Norte, Siskiyou, Humboldt, Trinity, Tehama, and Shasta Counties, California. Of the 39 sites sampled, 38 were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the primary aquifer system in the study unit (grid sites), and the remaining site was non-randomized (understanding site). The groundwater samples were analyzed for basic field parameters, organic constituents (volatile organic compounds [VOCs] and pesticides and pesticide degradates), inorganic constituents (trace elements, nutrients, major and minor ions, total dissolved solids [TDS]), radon-222, gross alpha and gross beta radioactivity, and microbial indicators (total coliform and Escherichia coli [E. coli]). Isotopic tracers (stable isotopes of hydrogen and oxygen in water, isotopic ratios of dissolved strontium in water, and stable isotopes of carbon in dissolved inorganic carbon), dissolved noble gases, and age-dating tracers (tritium and carbon-14) were measured to help identify sources and ages of sampled groundwater. Quality-control samples (field blanks, replicate sample pairs, and matrix spikes) were collected at 13 percent of the sites in the KLAM study unit, and the results were used to evaluate the quality of the data from the groundwater samples. Field blank samples rarely contained detectable concentrations of any constituent, indicating that contamination from sample collection or analysis was not a significant source of bias in the data for the groundwater samples. More than 99 percent of the replicate pair samples were within acceptable limits of variability. Matrix-spike sample recoveries were within the acceptable range (70 to 130 percent) for approximately 91 percent of the compounds. This study did not evaluate the quality of water delivered to consumers. After withdrawal, groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. 
    However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and with non-health-based benchmarks established for aesthetic concerns by the CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All concentrations of organic constituents from grid sites sampled in the KLAM study unit were less than health-based benchmarks. In total, VOCs were detected at 16 of the 38 grid sites sampled (approximately 42 percent), pesticides and pesticide degradates were detected at 8 grid sites (about 21 percent), and microbial indicators were detected at 14 grid sites (approximately 37 percent). Inorganic constituents (trace elements, major and minor ions, nutrients, and uranium and other radioactive constituents) and microbial indicators were analyzed at 38 grid sites, and all concentrations were less than health-based benchmarks, with the exception of one detection of boron greater than the CDPH notification level of 1,000 micrograms per liter (μg/L). Generally, concentrations of inorganic constituents with non-health-based benchmarks (iron, manganese, chloride, and TDS) were less than the CDPH secondary maximum contaminant level (SMCL-CA). Exceptions include three detections of iron greater than the SMCL-CA of 300 μg/L, four detections of manganese greater than the SMCL-CA of 50 μg/L, one detection of chloride greater than the recommended SMCL-CA of 250 milligrams per liter (mg/L), and one detection of TDS greater than the recommended SMCL-CA of 500 mg/L.
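
    The matrix-spike acceptance test mentioned above is simple arithmetic; the sketch below (Python, using made-up spike amounts and concentrations rather than GAMA data) illustrates how recoveries can be screened against the 70 to 130 percent window.

      # Illustrative only: check hypothetical matrix-spike recoveries against the
      # 70-130 percent acceptance window described above. Values are placeholders.
      spike_results = {
          # constituent: (measured in spiked sample, measured in unspiked sample, amount spiked)
          "atrazine": (0.095, 0.010, 0.100),
          "benzene":  (0.120, 0.000, 0.100),
          "simazine": (0.060, 0.005, 0.100),
      }

      def recovery_percent(spiked, unspiked, added):
          """Spike recovery = (spiked - unspiked) / added * 100."""
          return (spiked - unspiked) / added * 100.0

      in_range = 0
      for name, (spiked, unspiked, added) in spike_results.items():
          rec = recovery_percent(spiked, unspiked, added)
          ok = 70.0 <= rec <= 130.0
          in_range += ok
          print(f"{name}: {rec:.0f}% recovery ({'acceptable' if ok else 'outside range'})")

      print(f"{in_range / len(spike_results):.0%} of compounds within the 70-130% window")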

  5. ForceGen 3D structure and conformer generation: from small lead-like molecules to macrocyclic drugs

    NASA Astrophysics Data System (ADS)

    Cleves, Ann E.; Jain, Ajay N.

    2017-05-01

    We introduce the ForceGen method for 3D structure generation and conformer elaboration of drug-like small molecules. ForceGen is novel, avoiding use of distance geometry, molecular templates, or simulation-oriented stochastic sampling. The method is primarily driven by the molecular force field, implemented using an extension of MMFF94s and a partial charge estimator based on electronegativity-equalization. The force field is coupled to algorithms for direct sampling of realistic physical movements made by small molecules. Results are presented on a standard benchmark from the Cambridge Crystallographic Database of 480 drug-like small molecules, including full structure generation from SMILES strings. Reproduction of protein-bound crystallographic ligand poses is demonstrated on four carefully curated data sets: the ConfGen Set (667 ligands), the PINC cross-docking benchmark (1062 ligands), a large set of macrocyclic ligands (182 total with typical ring sizes of 12-23 atoms), and a commonly used benchmark for evaluating macrocycle conformer generation (30 ligands total). Results compare favorably to alternative methods, and performance on macrocyclic compounds approaches that observed on non-macrocycles while yielding a roughly 100-fold speed improvement over alternative MD-based methods with comparable performance.

  6. Parametrization of an Orbital-Based Linear-Scaling Quantum Force Field for Noncovalent Interactions

    PubMed Central

    2015-01-01

    We parametrize a linear-scaling quantum mechanical force field called mDC for the accurate reproduction of nonbonded interactions. We provide a new benchmark database of accurate ab initio interactions between sulfur-containing molecules. A variety of nonbonded databases are used to compare the new mDC method with other semiempirical, molecular mechanical, ab initio, and combined semiempirical quantum mechanical/molecular mechanical methods. It is shown that the molecular mechanical force field reproduces the benchmark results significantly and consistently more accurately than the semiempirical models, and that our mDC model produces errors roughly half as large as those of the molecular mechanical force field. The comparisons between the methods are extended to the docking of drug candidates to the Cyclin-Dependent Kinase 2 protein receptor. We correlate the protein–ligand binding energies with their experimental inhibition constants and find that mDC produces the best correlation. Condensed-phase simulation of mDC water is performed and shown to produce O–O radial distribution functions similar to TIP4P-EW. PMID:24803856

  7. Aerothermal modeling program, phase 1

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; Reynolds, R.; Ball, I.; Berry, R.; Johnson, K.; Mongia, H.

    1983-01-01

    The combustor performance submodels for complex flows are evaluated. Benchmark test cases for complex nonswirling flows are identified and analyzed. The introduction of swirl into the flow creates much faster mixing, caused by radial pressure gradients and increased turbulence generation. These phenomena are more difficult to predict than effects due to geometrical streamline curvature, such as curved ducts and sudden expansions. Flow fields with swirl, both confined and unconfined, are studied. Because the dilution zone plays an important role in achieving the turbine-inlet radial temperature profile, temperature-field measurements were made in several idealized dilution-zone configurations.

  8. Benchmarking an unstructured grid sediment model in an energetic estuary

    DOE PAGES

    Lopez, Jesse E.; Baptista, António M.

    2016-12-14

    A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and treatment for hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.

  9. Benchmark results for few-body hypernuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruffino, Fabrizio Ferrari; Lonardoni, Diego; Barnea, Nir

    2017-03-16

    Here, the Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked with three different ab-initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev–Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body ΛN component of the phenomenological Bodmer–Usmani potential, and a hyperon-nucleon interaction simulating the scattering phase shifts given by NSC97f. The range of applicability of the NSHH method is briefly discussed.

  10. Full Chain Benchmarking for Open Architecture Airborne ISR Systems: A Case Study for GMTI Radar Applications

    DTIC Science & Technology

    2015-09-15

    middleware implementations via a common object-oriented software hierarchy, with library-specific implementations of the five GMTI benchmark ... Full-Chain Benchmarking for Open Architecture Airborne ISR Systems: A Case Study for GMTI Radar Applications, Matthias Beebe, Matthew Alexander ... time performance, effective benchmarks are necessary to ensure that an ARP system can meet the mission constraints and performance requirements of

  11. Hospital benchmarking: are U.S. eye hospitals ready?

    PubMed

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.

  12. [Does implementation of benchmarking in quality circles improve the quality of care of patients with asthma and reduce drug interaction?].

    PubMed

    Kaufmann-Kolle, Petra; Szecsenyi, Joachim; Broge, Björn; Haefeli, Walter Emil; Schneider, Antonius

    2011-01-01

    The purpose of this cluster-randomised controlled trial was to evaluate the efficacy of quality circles (QCs) working either with general data-based feedback or with an open benchmark within the field of asthma care and drug-drug interactions. Twelve QCs, involving 96 general practitioners from 85 practices, were randomised. Six QCs worked with traditional anonymous feedback and six with an open benchmark. Two QC meetings supported with feedback reports were held covering the topics "drug-drug interactions" and "asthma"; in both cases discussions were guided by a trained moderator. Outcome measures included health-related quality of life and patient satisfaction with treatment, asthma severity and number of potentially inappropriate drug combinations as well as the general practitioners' satisfaction in relation to the performance of the QC. A significant improvement in the treatment of asthma was observed in both trial arms. However, there was only a slight improvement regarding inappropriate drug combinations. There were no relevant differences between the group with open benchmark (B-QC) and traditional quality circles (T-QC). The physicians' satisfaction with the QC performance was significantly higher in the T-QCs. General practitioners seem to take a critical perspective about open benchmarking in quality circles. Caution should be used when implementing benchmarking in a quality circle as it did not improve healthcare when compared to the traditional procedure with anonymised comparisons. Copyright © 2011. Published by Elsevier GmbH.

  13. Results of the first order leveling surveys in the Mexicali Valley and at the Cerro Prieto field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de la Pena L, A.

    1981-01-01

    The results obtained from the third leveling survey carried out by the Direccion General de Geografia del Territorio Nacional (previously DETENAL) during November and December 1979 are presented. Calculations of the changes in field elevation and plots showing comparisons of the 1977, 1978, and 1979 surveys are also presented. Results from a second-order leveling survey performed to ascertain the extent of ground motion resulting from the 8 June 1980 earthquake are presented. This magnitude ML = 6.7 earthquake, with epicenter located 15 km southeast of the Guadalupe Victoria village, caused fissures on the surface, the formation of small sand volcanoes, and the ejection of ground water in the vicinity of the Cerro Prieto field. This leveling survey was carried out between benchmark BN-10067, at the intersection of the Solfatara canal and the Sonora-Baja California railroad, and benchmark BN-10055, located at the Delta station.

  14. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  15. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    PubMed

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

    Deep learning models (also known as deep neural networks) have revolutionized many fields, including computer vision, natural language processing, and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works have benchmarked the performance of deep learning models against state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks, such as mortality prediction, length-of-stay prediction, and ICD-9 code group prediction, using deep learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the publicly available Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data are used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.
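
    As a rough illustration of the kind of model comparison described in this record, the sketch below benchmarks a few off-the-shelf classifiers by cross-validated AUROC. It uses synthetic data as a stand-in for MIMIC-III (which requires credentialed access); the models and the single metric are assumptions for the sketch, not the authors' protocol.

      # Minimal sketch: compare several classifiers on a synthetic binary outcome
      # (a stand-in for, e.g., in-hospital mortality) using 5-fold cross-validated AUROC.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 20))  # stand-in for per-stay clinical features
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)  # stand-in outcome label

      models = {
          "logistic regression": LogisticRegression(max_iter=1000),
          "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
          "feed-forward net": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
      }

      for name, model in models.items():
          auroc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
          print(f"{name}: mean AUROC = {auroc:.3f}")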

  16. Accelerating progress in Artificial General Intelligence: Choosing a benchmark for natural world interaction

    NASA Astrophysics Data System (ADS)

    Rohrer, Brandon

    2010-12-01

    Measuring progress in the field of Artificial General Intelligence (AGI) can be difficult without commonly accepted methods of evaluation. An AGI benchmark would allow evaluation and comparison of the many computational intelligence algorithms that have been developed. In this paper I propose that a benchmark for natural world interaction would possess seven key characteristics: fitness, breadth, specificity, low cost, simplicity, range, and task focus. I also outline two benchmark examples that meet most of these criteria. In the first, the direction task, a human coach directs a machine to perform a novel task in an unfamiliar environment. The direction task is extremely broad, but may be idealistic. In the second, the AGI battery, AGI candidates are evaluated based on their performance on a collection of more specific tasks. The AGI battery is designed to be appropriate to the capabilities of currently existing systems. Both the direction task and the AGI battery would require further definition before implementing. The paper concludes with a description of a task that might be included in the AGI battery: the search and retrieve task.

  17. First benchmark of the Unstructured Grid Adaptation Working Group

    NASA Technical Reports Server (NTRS)

    Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike

    2017-01-01

    Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty in producing the highly anisotropic elements necessary for simulation on complex curved geometries while satisfying a resolution request has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions, including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.

  18. Verification and benchmark testing of the NUFT computer code

    NASA Astrophysics Data System (ADS)

    Lee, K. H.; Nitao, J. J.; Kulshrestha, A.

    1993-10-01

    This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.

  19. Optimization of Deep Drilling Performance - Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2005-09-30

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2004 through September 2005. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.

  20. Heliostat field cost reduction by `slope drive' optimization

    NASA Astrophysics Data System (ADS)

    Arbes, Florian; Weinrebe, Gerhard; Wöhrbach, Markus

    2016-05-01

    An algorithm to optimize power tower heliostat fields employing heliostats with so-called slope drives is presented. It is shown that a field using heliostats with the slope drive axes configuration has the same performance as a field with conventional azimuth-elevation tracking heliostats. Even though heliostats with the slope drive configuration have a limited tracking range, field groups of heliostats with different axes or different drives are not needed for different positions in the heliostat field. The impacts of selected parameters on a benchmark power plant (PS10 near Seville, Spain) are analyzed.

  1. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.

    PubMed

    Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew

    2013-12-01

    Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.
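
    The benchmarking correction described above can be illustrated with a small worked calculation. The sketch below assumes one plausible form of the correction, dividing the measured amounts by the recoveries of the two benchmark chemicals; the exact equations used in the study may differ, and all numbers are invented.

      # Illustrative arithmetic only: one plausible way to apply the internal benchmarks
      # described above. The correction factors and the final efficiency expression are
      # assumptions for this sketch, not the authors' exact equations, and the numbers
      # are made up.
      dose = 100.0                 # ng of test chemical fed to the fish (hypothetical)
      in_fish = 30.0               # ng of test chemical measured in fish tissue
      in_feces = 40.0              # ng of test chemical measured in feces

      frac_pcb53_in_fish = 0.80    # fraction of the absorbable benchmark (PCB53) recovered in fish
      frac_dbdpe_in_feces = 0.70   # fraction of the nonabsorbable benchmark recovered in feces

      # Benchmark the measured quantities to the recoveries of the two reference chemicals.
      fish_corrected = in_fish / frac_pcb53_in_fish
      feces_corrected = in_feces / frac_dbdpe_in_feces

      absorption_efficiency = fish_corrected / dose
      recovery = (fish_corrected + feces_corrected) / dose

      print(f"benchmarked gross absorption efficiency: {absorption_efficiency:.0%}")
      print(f"benchmarked mass recovery: {recovery:.0%}")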

  2. Integrating Continuous GPS Time Series and Geodetic Leveling Data to Estimate Secular Vertical Velocity of Taiwan

    NASA Astrophysics Data System (ADS)

    LAI, Y. R.; Hsu, Y. J.; You, R. J.

    2017-12-01

    The GPS technique serves as the most powerful method for monitoring crustal deformation owing to its advantage of temporal continuity. Geodetic leveling is also widely used, not only in engineering but also in geophysical applications, because of its high precision in vertical datum determination and its spatial continuity. As is widely known, the reference frames of GPS and geodetic leveling differ: the former refers to the reference ellipsoid (the WGS84 ellipsoid) and the latter refers to the geoid. In order to combine vertical velocity fields from different datums, we examine the discrepancy between these two data sets. Moreover, GPS stations and leveling benchmarks generally are not located at the same places. Instead of using a spatially reduced function (Ching et al., JGR, 2011) to find the discrepancy between them, we focus on comparing the temporal variation of GPS vertical motions and geodetic leveling displacements. In this study, we analyzed the vertical velocity field from 238 GPS stations and 1634 benchmarks over the period 2000 to 2015, which includes postseismic effects from the 1999 Chi-Chi earthquake (Mw 7.6), the 2003 Chengkung earthquake (Mw 6.8), and other events. After thoroughly examining the processing and accounting for coseismic and postseismic deformation of significant earthquakes, we found that the discrepancy between the vertical velocity of a GPS station and that of its nearby benchmarks is about 1-2 mm/yr, including several sources of error in data processing. We suggest that this discrepancy in the vertical velocity field can be treated as a tolerable error, and that the two heterogeneous fields can be integrated without any mathematical assumptions of spatial regression. The results show that the western coast is suffering severe subsidence at rates of up to 40 mm/yr; the Central Range of Taiwan is uplifting at rates of about +10 mm/yr, with active landslides producing significant local subsidence of 5-10 mm/yr. A large velocity contrast of 30 mm/yr, indicating east-over-west thrusting, appears across the Longitudinal Valley Fault. The estimate of vertical velocity from 2000 to 2015 is consistent with velocities from 2008 to 2015, indicating that our correction process is not affected by the Chi-Chi earthquake (Mw 7.6).
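
    A minimal sketch of the station-to-benchmark comparison described above follows; the coordinates, velocities, and the 2 km pairing radius are hypothetical, not values from the study.

      # Minimal sketch: pair each continuous GPS station with nearby leveling benchmarks
      # and report the vertical-velocity discrepancy. All values are hypothetical.
      import math

      gps_stations = [  # (name, lon, lat, vertical velocity in mm/yr)
          ("S01", 120.50, 23.50, -12.0),
          ("S02", 121.20, 23.90, +8.0),
      ]
      benchmarks = [    # (name, lon, lat, vertical velocity in mm/yr from leveling)
          ("BM1", 120.51, 23.51, -13.5),
          ("BM2", 120.49, 23.50, -11.0),
          ("BM3", 121.21, 23.89, +9.4),
      ]

      def distance_km(lon1, lat1, lon2, lat2):
          """Rough planar distance, adequate for a few-kilometre search radius."""
          dx = (lon2 - lon1) * 111.0 * math.cos(math.radians(lat1))
          dy = (lat2 - lat1) * 111.0
          return math.hypot(dx, dy)

      for sname, slon, slat, svel in gps_stations:
          nearby = [bvel for _, blon, blat, bvel in benchmarks
                    if distance_km(slon, slat, blon, blat) < 2.0]
          if nearby:
              diff = svel - sum(nearby) / len(nearby)
              print(f"{sname}: GPS minus mean nearby leveling velocity = {diff:+.1f} mm/yr")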

  3. Hand washing frequencies and procedures used in retail food services.

    PubMed

    Strohbehn, Catherine; Sneed, Jeannie; Paez, Paola; Meyer, Janell

    2008-08-01

    Transmission of viruses, bacteria, and parasites to food by way of improperly washed hands is a major contributing factor in the spread of foodborne illnesses. Field observers have assessed compliance with hand washing regulations, yet few studies have included consideration of frequency and methods used by sectors of the food service industry or have included benchmarks for hand washing. Five 3-h observation periods of employee (n = 80) hand washing behaviors during menu production, service, and cleaning were conducted in 16 food service operations for a total of 240 h of direct observation. Four operations from each of four sectors of the retail food service industry participated in the study: assisted living for the elderly, childcare, restaurants, and schools. A validated observation form, based on 2005 Food Code guidelines, was used by two trained researchers. Researchers noted when hands should have been washed, when hands were washed, and how hands were washed. Overall compliance with Food Code recommendations for frequency during production, service, and cleaning phases ranged from 5% in restaurants to 33% in assisted living facilities. Procedural compliance rates also were low. Proposed benchmarks for the number of times hand washing should occur by each employee for each sector of food service during each phase of operation are seven times per hour for assisted living, nine times per hour for childcare, 29 times per hour for restaurants, and 11 times per hour for schools. These benchmarks are high, especially for restaurant employees. Implementation would mean lost productivity and potential for dermatitis; thus, active managerial control over work assignments is needed. These benchmarks can be used for training and to guide employee hand washing behaviors.

  4. Dynamic vehicle routing with time windows in theory and practice.

    PubMed

    Yang, Zhiwei; van Osta, Jan-Paul; van Veen, Barry; van Krevelen, Rick; van Klaveren, Richard; Stam, Andries; Kok, Joost; Bäck, Thomas; Emmerich, Michael

    2017-01-01

    The vehicle routing problem is a classical combinatorial optimization problem. This work addresses a variant of the vehicle routing problem with dynamically changing orders and time windows. In real-world applications the demands often change during operation time: new orders arrive and others are canceled, and new schedules need to be generated on the fly. Online optimization algorithms for dynamic vehicle routing address this problem, but so far they do not consider time windows. Moreover, to match the scenarios found in real-world problems, adaptations of benchmarks are required. In this paper, a practical problem is modeled based on the daily routing procedure of a delivery company. New orders are introduced by customers dynamically during the working day and need to be integrated into the schedule. A multiple ant colony system (MACS) algorithm combined with powerful local search procedures is proposed to solve the dynamic vehicle routing problem with time windows. The performance is tested on a new benchmark based on simulations of a working day. The problems are taken from Solomon's benchmarks, but a certain percentage of the orders are only revealed to the algorithm during operation time. Different versions of the MACS algorithm are tested and a high-performing variant is identified. Finally, the algorithm is tested in situ: in a field study, the algorithm schedules a fleet of cars for a surveillance company. We compare the performance of the algorithm with that of the procedure used by the company and summarize insights gained from the implementation of the real-world study. The results show that the multiple ant colony algorithm obtains a much better solution on the academic benchmark problem and can also be integrated in a real-world environment.
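
    One ingredient of the approach described above, integrating a newly arrived order into an existing route, can be sketched as a cheapest feasible insertion under time windows. The Python sketch below is an illustration under that assumption with invented data; it does not reproduce the MACS algorithm or its local search.

      # Minimal sketch: insert a new order into an existing route at the cheapest
      # position that still respects every time window. All data are hypothetical.
      import math

      def travel(a, b):
          return math.dist(a, b)  # Euclidean travel time between (x, y) locations

      def route_feasible_cost(route, depot):
          """Return route length if all time windows are met, else None."""
          time, cost, prev = 0.0, 0.0, depot
          for loc, (earliest, latest), service in route:
              time = max(time + travel(prev, loc), earliest)  # wait if arriving early
              if time > latest:
                  return None
              time += service
              cost += travel(prev, loc)
              prev = loc
          return cost + travel(prev, depot)

      def insert_order(route, new_order, depot):
          """Try every insertion position; return the cheapest feasible route."""
          best, best_cost = None, float("inf")
          for i in range(len(route) + 1):
              candidate = route[:i] + [new_order] + route[i:]
              cost = route_feasible_cost(candidate, depot)
              if cost is not None and cost < best_cost:
                  best, best_cost = candidate, cost
          return best  # None means the order cannot be served by this vehicle

      depot = (0.0, 0.0)
      route = [((2.0, 1.0), (0.0, 10.0), 1.0),   # (location, (earliest, latest), service time)
               ((5.0, 4.0), (5.0, 20.0), 1.0)]
      new_order = ((3.0, 2.0), (4.0, 15.0), 1.0)
      print(insert_order(route, new_order, depot))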

  5. [The OPTIMISE study (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment). Results for Luxembourg].

    PubMed

    Michel, G

    2012-01-01

    The OPTIMISE study (NCT00681850) has been run in six European countries, including Luxembourg, to prospectively assess the effect of benchmarking on the quality of primary care in patients with type 2 diabetes, using major modifiable vascular risk factors as critical quality indicators. Primary care centers treating type 2 diabetic patients were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). The primary endpoint was the percentage of patients in the benchmarking group achieving pre-set targets of the critical quality indicators: glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein (LDL) cholesterol after 12 months of follow-up. In Luxembourg, more patients in the benchmarking group achieved the target for SBP (40.2% vs. 20%) and for LDL cholesterol (50.4% vs. 44.2%). In the benchmarking group, 12.9% of patients met all three targets, compared with 8.3% of patients in the control group. In this randomized, controlled study, benchmarking was shown to be an effective tool for improving critical quality indicator targets, which are the principal modifiable vascular risk factors in type 2 diabetes.

  6. The rotating movement of three immiscible fluids - A benchmark problem

    USGS Publications Warehouse

    Bakker, M.; Oude, Essink G.H.P.; Langevin, C.D.

    2004-01-01

    A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion. © 2003 Elsevier B.V. All rights reserved.

  7. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which have been widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems are aimed at the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), which is a user-friendly interface, developed by NCTR, to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316). The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)

  8. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  9. Computing sextic centrifugal distortion constants by DFT: A benchmark analysis on halogenated compounds

    NASA Astrophysics Data System (ADS)

    Pietropolli Charmet, Andrea; Stoppa, Paolo; Tasinato, Nicola; Giorgianni, Santi

    2017-05-01

    This work presents a benchmark study on the calculation of the sextic centrifugal distortion constants employing cubic force fields computed by means of density functional theory (DFT). For a set of semi-rigid halogenated organic compounds several functionals (B2PLYP, B3LYP, B3PW91, M06, M06-2X, O3LYP, X3LYP, ωB97XD, CAM-B3LYP, LC-ωPBE, PBE0, B97-1 and B97-D) were used for computing the sextic centrifugal distortion constants. The effects related to the size of basis sets and the performances of hybrid approaches, where the harmonic data obtained at higher level of electronic correlation are coupled with cubic force constants yielded by DFT functionals, are presented and discussed. The predicted values were compared to both the available data published in the literature and those obtained by calculations carried out at increasing level of electronic correlation: Hartree-Fock Self Consistent Field (HF-SCF), second order Møller-Plesset perturbation theory (MP2), and coupled-cluster single and double (CCSD) level of theory. Different hybrid approaches, having the cubic force field computed at DFT level of theory coupled to harmonic data computed at increasing level of electronic correlation (up to CCSD level of theory augmented by a perturbational estimate of the effects of connected triple excitations, CCSD(T)) were considered. The obtained results demonstrate that they can represent reliable and computationally affordable methods to predict sextic centrifugal terms with an accuracy almost comparable to that yielded by the more expensive anharmonic force fields fully computed at MP2 and CCSD levels of theory. In view of their reduced computational cost, these hybrid approaches pave the route to the study of more complex systems.

  10. Study of blood flow in several benchmark micro-channels using a two-fluid approach

    PubMed Central

    Wu, Wei-Tao; Yang, Fang; Antaki, James F.; Aubry, Nadine; Massoudi, Mehrdad

    2015-01-01

    It is known that in a vessel whose characteristic dimension (e.g., its diameter) is in the range of 20 to 500 microns, blood behaves as a non-Newtonian fluid, exhibiting complex phenomena, such as shear-thinning, stress relaxation, and also multi-component behaviors, such as the Fahraeus effect, plasma-skimming, etc. For describing these non-Newtonian and multi-component characteristics of blood, using the framework of mixture theory, a two-fluid model is applied, where the plasma is treated as a Newtonian fluid and the red blood cells (RBCs) are treated as shear-thinning fluid. A computational fluid dynamic (CFD) simulation incorporating the constitutive model was implemented using OpenFOAM® in which benchmark problems including a sudden expansion and various driven slots and crevices were studied numerically. The numerical results exhibited good agreement with the experimental observations with respect to both the velocity field and the volume fraction distribution of RBCs. PMID:26240438

  11. Fan Noise Prediction with Applications to Aircraft System Noise Assessment

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Envia, Edmane; Burley, Casey L.

    2009-01-01

    This paper describes an assessment of current fan noise prediction tools by comparing measured and predicted sideline acoustic levels from a benchmark fan noise wind tunnel test. Specifically, an empirical method and newly developed coupled computational approach are utilized to predict aft fan noise for a benchmark test configuration. Comparisons with sideline noise measurements are performed to assess the relative merits of the two approaches. The study identifies issues entailed in coupling the source and propagation codes, as well as provides insight into the capabilities of the tools in predicting the fan noise source and subsequent propagation and radiation. In contrast to the empirical method, the new coupled computational approach provides the ability to investigate acoustic near-field effects. The potential benefits/costs of these new methods are also compared with the existing capabilities in a current aircraft noise system prediction tool. The knowledge gained in this work provides a basis for improved fan source specification in overall aircraft system noise studies.

  12. Evaluation of the synoptic and mesoscale predictive capabilities of a mesoscale atmospheric simulation system

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K.; Keyser, D. A.; Mccumber, M. C.

    1983-01-01

    The overall performance characteristics of a limited area, hydrostatic, fine (52 km) mesh, primitive equation, numerical weather prediction model are determined in anticipation of satellite data assimilations with the model. The synoptic and mesoscale predictive capabilities of version 2.0 of this model, the Mesoscale Atmospheric Simulation System (MASS 2.0), were evaluated. The two part study is based on a sample of approximately thirty 12h and 24h forecasts of atmospheric flow patterns during spring and early summer. The synoptic scale evaluation results benchmark the performance of MASS 2.0 against that of an operational, synoptic scale weather prediction model, the Limited area Fine Mesh (LFM). The large sample allows for the calculation of statistically significant measures of forecast accuracy and the determination of systematic model errors. The synoptic scale benchmark is required before unsmoothed mesoscale forecast fields can be seriously considered.

  13. Evaluation of a novel electronic eigenvalue (EEVA) molecular descriptor for QSAR/QSPR studies: validation using a benchmark steroid data set.

    PubMed

    Tuppurainen, Kari; Viisas, Marja; Laatikainen, Reino; Peräkylä, Mikael

    2002-01-01

    A novel electronic eigenvalue (EEVA) descriptor of molecular structure for use in the derivation of predictive QSAR/QSPR models is described. Like other spectroscopic QSAR/QSPR descriptors, EEVA is also invariant as to the alignment of the structures concerned. Its performance was tested with respect to the CBG (corticosteroid binding globulin) affinity of 31 benchmark steroids. It appeared that the electronic structure of the steroids, i.e., the "spectra" derived from molecular orbital energies, is directly related to the CBG binding affinities. The predictive ability of EEVA is compared to other QSAR approaches, and its performance is discussed in the context of the Hammett equation. The good performance of EEVA is an indication of the essential quantum mechanical nature of QSAR. The EEVA method is a supplement to conventional 3D QSAR methods, which employ fields or surface properties derived from Coulombic and van der Waals interactions.

  14. Visco-Resistive MHD Modeling Benchmark of Forced Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Beidler, M. T.; Hegna, C. C.; Sovinec, C. R.; Callen, J. D.; Ferraro, N. M.

    2016-10-01

    The presence of externally-applied 3D magnetic fields can affect important phenomena in tokamaks, including mode locking, disruptions, and edge localized modes. External fields penetrate into the plasma and can lead to forced magnetic reconnection (FMR), and hence magnetic islands, on resonant surfaces if the local plasma rotation relative to the external field is slow. Preliminary visco-resistive MHD simulations of FMR in a slab geometry are consistent with theory. Specifically, linear simulations exhibit proper scaling of the penetrated field with resistivity, viscosity, and flow, and nonlinear simulations exhibit a bifurcation from a flow-screened to a field-penetrated, magnetic island state as the external field is increased, due to the 3D electromagnetic force. These results will be compared to simulations of FMR in a circular cross-section, cylindrical geometry by way of a benchmark between the NIMROD and M3D-C1 extended-MHD codes. Because neither this geometry nor the MHD model has the physics of poloidal flow damping, the theory will be expanded to include poloidal flow effects. The resulting theory will be tested with linear and nonlinear simulations that vary the resistivity, viscosity, flow, and external field. Supported by OFES DoE Grants DE-FG02-92ER54139, DE-FG02-86ER53218, DE-AC02-09CH11466, and the SciDAC Center for Extended MHD Modeling.

  15. MaNGA: Mapping Nearby Galaxies at Apache Point Observatory

    NASA Astrophysics Data System (ADS)

    Weijmans, A.-M.; MaNGA Team

    2016-10-01

    MaNGA (Mapping Nearby Galaxies at APO) is a galaxy integral-field spectroscopic survey within the fourth generation Sloan Digital Sky Survey (SDSS-IV). It will be mapping the composition and kinematics of gas and stars in 10,000 nearby galaxies, using 17 differently sized fiber bundles. MaNGA's goal is to provide new insights in galaxy formation and evolution, and to deliver a local benchmark for current and future high-redshift studies.

  16. Issues in benchmarking human reliability analysis methods : a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  17. ATP3 Unified Field Study Data

    DOE Data Explorer

    Wolfrum, Ed (ORCID:0000000273618931); Knoshug, Eric (ORCID:000000025709914X); Laurens, Lieve (ORCID:0000000349303267); Harmon, Valerie; Dempster, Thomas (ORCID:000000029550488X); McGowan, John (ORCID:0000000266920518); Rosov, Theresa; Cardello, David; Arrowsmith, Sarah; Kempkes, Sarah; Bautista, Maria; Lundquist, Tryg; Crowe, Brandon; Murawsky, Garrett; Nicolai, Eric; Rowe, Egan; Knurek, Emily; Javar, Reyna; Saracco Alvarez, Marcela; Schlosser, Steve; Riddle, Mary; Withstandley, Chris; Chen, Yongsheng; Van Ginkel, Steven; Igou, Thomas; Xu, Chunyan; Hu, Zixuan

    2017-10-20

    The Algae Testbed Public-Private Partnership (ATP3) was established with the goal of investigating open pond algae cultivation across different geographic, climatic, seasonal, and operational conditions while setting the benchmark for quality data collection, analysis, and dissemination. Identical algae cultivation systems and data analysis methodologies were established at testbed sites across the continental United States and Hawaii. Within this framework, the Unified Field Studies (UFS) were designed to characterize the cultivation of different algal strains during all 4 seasons across this testbed network. The dataset presented here is the complete, curated, climatic, cultivation, harvest, and biomass composition data for each season at each site. These data enable others to do in-depth cultivation, harvest, techno-economic, life cycle, resource, and predictive growth modeling analysis, as well as develop crop protection strategies for the nascent algae industry. NREL Sub award Number: DE-AC36-08-GO28308

  18. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  19. The application of tailor-made force fields and molecular dynamics for NMR crystallography: a case study of free base cocaine

    PubMed Central

    Neumann, Marcus A.

    2017-01-01

    Motional averaging has been proven to be significant in predicting the chemical shifts in ab initio solid-state NMR calculations, and the applicability of motional averaging with molecular dynamics has been shown to depend on the accuracy of the molecular mechanical force field. The performance of a fully automatically generated tailor-made force field (TMFF) for the dynamic aspects of NMR crystallography is evaluated and compared with existing benchmarks, including static dispersion-corrected density functional theory calculations and the COMPASS force field. The crystal structure of free base cocaine is used as an example. The results reveal that, even though the TMFF outperforms the COMPASS force field for representing the energies and conformations of predicted structures, it does not give significant improvement in the accuracy of NMR calculations. Further studies should direct more attention to anisotropic chemical shifts and development of the method of solid-state NMR calculations. PMID:28250956

  20. A field and statistical modeling study to estimate irrigation water use at Benchmark Farms study sites in southwestern Georgia, 1995-96

    USGS Publications Warehouse

    Fanning, Julia L.; Schwarz, Gregory E.; Lewis, William C.

    2001-01-01

    A benchmark irrigation monitoring network of farms located in a 32-county area in southwestern Georgia was established in 1995 to improve estimates of irrigation water use. A stratified random sample of 500 permitted irrigators was selected from a database maintained by the Georgia Department of Natural Resources, Georgia Environmental Protection Division, Water Resources Management Branch, to obtain 180 voluntary participants in the study area. Site-specific irrigation data were collected at each farm using running-time totalizers and noninvasive flowmeters. Data were collected and compiled for 50 farms for 1995 and for 130 additional farms for the 1996 growing season, a total of 180 farms. Irrigation data collected during the 1996 growing season were compiled for the 180 benchmark farms and used to develop a statistical model to estimate irrigation water use in 32 counties in southwestern Georgia. The estimates were derived using a statistical approach known as "bootstrap analysis," which allows the precision of the estimates to be quantified. Five model components (whether-to-irrigate, acres irrigated, crop selected, seasonal-irrigation scheduling, and the amount of irrigation applied) compose the irrigation model and were developed to reflect patterns in the data collected at Benchmark Farms Study area sites. The model estimated that peak irrigation for all counties in the study area occurred during July, with significant irrigation also occurring during May, June, and August. Irwin and Tift were the most irrigated, and Schley and Houston the least irrigated, counties in the study area. High irrigation intensity was located primarily along the eastern border of the study area, whereas low irrigation intensity was located in the southwestern quadrant, where ground water was the dominant irrigation source. Crop-level estimates showed sizable variations across crops and considerable uncertainty for all crops other than peanuts and pecans. Counties having the most irrigated acres showed higher variations in annual irrigation than counties having the least irrigated acres. The Benchmark Farms Study model estimates were higher than previous irrigation estimates, with 20 percent of the bias a result of underestimating irrigation acreage in earlier studies. Model estimates showed evidence of an upward bias of about 15 percent, with the likely cause being a misrepresented inches-applied model. The causes of bias in the model could be better understood with a larger irrigation sample size, and the estimates could be improved substantially by automating the reporting of monthly totalizer amounts.
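
    The bootstrap idea mentioned above can be illustrated with a minimal sketch: resample observed farm-level values with replacement and report an interval for the mean. The numbers below are placeholders, and the actual study bootstrapped a five-component irrigation model rather than a simple mean.

      # Minimal sketch: bootstrap a precision estimate for the mean applied irrigation
      # depth from a small set of hypothetical per-farm seasonal totals (inches).
      import random

      random.seed(1)
      observed_inches = [9.2, 11.5, 7.8, 13.1, 10.4, 8.9, 12.2, 9.7, 10.9, 11.8]

      def mean(xs):
          return sum(xs) / len(xs)

      boot_means = []
      for _ in range(2000):
          resample = [random.choice(observed_inches) for _ in observed_inches]
          boot_means.append(mean(resample))

      boot_means.sort()
      lo, hi = boot_means[int(0.025 * len(boot_means))], boot_means[int(0.975 * len(boot_means))]
      print(f"mean applied depth: {mean(observed_inches):.1f} inches")
      print(f"bootstrap 95% interval: {lo:.1f} to {hi:.1f} inches")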

  1. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on, the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking, and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to Benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional Benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure has been built which allows short duration benchmarking studies yielding results gleaned from world class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  2. Results of the 2013 UT modeling benchmark obtained with models implemented in CIVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toullelan, Gwénaël; Raillon, Raphaële; Chatillon, Sylvain

    The 2013 Ultrasonic Testing (UT) modeling benchmark concerns direct echoes from side drilled holes (SDH), flat bottom holes (FBH) and corner echoes from backwall-breaking artificial notches inspected with a matrix phased array probe. This communication presents the results obtained with the models implemented in the CIVA software: the pencil model is used to compute the field radiated by the probe, the Kirchhoff approximation is applied to predict the response of FBHs and notches, and the SOV (Separation Of Variables) model is used for the SDH responses. Comparisons between simulated and experimental results are presented and discussed.

  3. Effects of benchmarking on the quality of type 2 diabetes care: results of the OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study in Greece

    PubMed Central

    Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.

    2015-01-01

    Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p < 0.001 for all comparisons). Conclusion: Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642

  4. Building Bridges Between Geoscience and Data Science through Benchmark Data Sets

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Ebert-Uphoff, I.; Demir, I.; Gel, Y.; Hill, M. C.; Karpatne, A.; Güereque, M.; Kumar, V.; Cabral, E.; Smyth, P.

    2017-12-01

    The changing nature of observational field data demands richer and more meaningful collaboration between data scientists and geoscientists. Thus, among other efforts, the Working Group on Case Studies of the NSF-funded RCN on Intelligent Systems Research To Support Geosciences (IS-GEO) is developing a framework to strengthen such collaborations through the creation of benchmark datasets. Benchmark datasets provide an interface between disciplines without requiring extensive background knowledge. The goals are to create (1) a means for two-way communication between geoscience and data science researchers; (2) new collaborations, which may lead to new approaches for data analysis in the geosciences; and (3) a public, permanent repository of complex data sets, representative of geoscience problems, useful to coordinate efforts in research and education. The group identified 10 key elements and characteristics for ideal benchmarks. High impact: A problem with high potential impact. Active research area: A group of geoscientists should be eager to continue working on the topic. Challenge: The problem should be challenging for data scientists. Data science generality and versatility: It should stimulate development of new general and versatile data science methods. Rich information content: Ideally the data set provides stimulus for analysis at many different levels. Hierarchical problem statement: A hierarchy of suggested analysis tasks, from relatively straightforward to open-ended tasks. Means for evaluating success: Data scientists and geoscientists need means to evaluate whether the algorithms are successful and achieve intended purpose. Quick start guide: Introduction for data scientists on how to easily read the data to enable rapid initial data exploration. Geoscience context: Summary for data scientists of the specific data collection process, instruments used, any pre-processing and the science questions to be answered. Citability: A suitable identifier to facilitate tracking the use of the benchmark later on, e.g. allowing search engines to find all research papers using it. A first sample benchmark developed in collaboration with the Jet Propulsion Laboratory (JPL) deals with the automatic analysis of imaging spectrometer data to detect significant methane sources in the atmosphere.

  5. Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation

    NASA Astrophysics Data System (ADS)

    MacNish, Cara

    2007-12-01

    Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
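
    As a rough, purely illustrative companion to the idea of recursively generated, self-similar test landscapes described above (this is not the author's actual benchmark suite or server API), the sketch below builds a toy fractal-like 1-D objective and runs a trivial random-search baseline on it; every name and parameter in it is an assumption.

        import random
        import math

        def fractal_landscape(x, depth=6, seed=42):
            """Toy self-similar 1-D objective: a sum of cosine 'bumps' whose
            frequency doubles and amplitude halves at each recursion level,
            loosely mimicking a recursively generated fractal landscape."""
            rng = random.Random(seed)
            value = 0.0
            amplitude, frequency = 1.0, 1.0
            for _ in range(depth):
                phase = rng.uniform(0.0, 2.0 * math.pi)   # random shift at this scale
                value += amplitude * math.cos(frequency * x + phase)
                amplitude *= 0.5
                frequency *= 2.0
            return value

        def random_search(objective, bounds=(-5.0, 5.0), evaluations=1000, seed=0):
            """Baseline optimizer: best-of-N uniform random sampling."""
            rng = random.Random(seed)
            best_x, best_f = None, float("inf")
            for _ in range(evaluations):
                x = rng.uniform(*bounds)
                f = objective(x)
                if f < best_f:
                    best_x, best_f = x, f
            return best_x, best_f

        if __name__ == "__main__":
            x_best, f_best = random_search(fractal_landscape)
            print(f"best x = {x_best:.4f}, f(x) = {f_best:.4f}")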

  6. Benchmark studies of induced radioactivity produced in LHC materials, Part I: Specific activities.

    PubMed

    Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H

    2005-01-01

    Samples of materials which will be used in the LHC machine for shielding and construction components were irradiated in the stray radiation field of the CERN-EU high-energy reference field facility. After irradiation, the specific activities induced in the various samples were analysed with a high-precision gamma spectrometer at various cooling times, allowing identification of isotopes with a wide range of half-lives. Furthermore, the irradiation experiment was simulated in detail with the FLUKA Monte Carlo code. A comparison of measured and calculated specific activities shows good agreement, supporting the use of FLUKA for estimating the level of induced activity in the LHC.

  7. Benchmarking reference services: an introduction.

    PubMed

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  8. Operationalizing the Rubric: The Effect of Benchmark Selection on the Assessed Quality of Writing.

    ERIC Educational Resources Information Center

    Popp, Sharon E. Osborn; Ryan, Joseph M.; Thompson, Marilyn S.; Behrens, John T.

    The purposes of this study were to investigate the role of benchmark writing samples in direct assessment of writing and to examine the consequences of differential benchmark selection with a common writing rubric. The influences of discourse and grade level were also examined within the context of differential benchmark selection. Raters scored…

  9. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  10. ff14ipq: A Self-Consistent Force Field for Condensed-Phase Simulations of Proteins

    PubMed Central

    2015-01-01

    We present the ff14ipq force field, implementing the previously published IPolQ charge set for simulations of complete proteins. Minor modifications to the charge derivation scheme and van der Waals interactions between polar atoms are introduced. Torsion parameters are developed through a generational learning approach, based on gas-phase MP2/cc-pVTZ single-point energies computed for structures optimized by the force field itself rather than the quantum benchmark. In this manner, we sacrifice information about the true quantum minima in order to ensure that the force field maintains optimal agreement with the MP2/cc-pVTZ benchmark for the ensembles it will actually produce in simulations. A means of making the gas-phase torsion parameters compatible with solution-phase IPolQ charges is presented. The ff14ipq model is an alternative to ff99SB and other Amber force fields for protein simulations in programs that accommodate pair-specific Lennard–Jones combining rules. The force field gives strong performance on α-helical and β-sheet oligopeptides as well as globular proteins over microsecond-timescale simulations, although it has not yet been tested in conjunction with lipid and nucleic acid models. We show how our choices in parameter development influence the resulting force field and how other choices that may have appeared reasonable would actually have led to poorer results. The tools we developed may also aid in the development of future fixed-charge and even polarizable biomolecular force fields. PMID:25328495

  11. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2003-01-01

    Progress during current reporting year 2002 by quarter--Progress during Q1 2002: (1) In accordance with Task 7.0 (D. No. 2 Technical Publications), TerraTek, NETL, and the Industry Contributors successfully presented a paper detailing Phase 1 testing results at the February 2002 IADC/SPE Drilling Conference, a prestigious venue for presenting DOE and private sector drilling technology advances. The full reference is as follows: IADC/SPE 74540 ''World's First Benchmarking of Drilling Mud Hammer Performance at Depth Conditions'' authored by Gordon A. Tibbitts, TerraTek; Roy C. Long, US Department of Energy; Brian E. Miller, BP America, Inc.; Arnis Judzis, TerraTek; and Alan D. Black, TerraTek. Gordon Tibbitts, TerraTek, presented the well-attended paper in February 2002. The full text of the Mud Hammer paper was included in the last quarterly report. (2) The Phase 2 project planning meeting (Task 6) was held at ExxonMobil's Houston Greenspoint offices on February 22, 2002. In attendance were representatives from TerraTek, DOE, BP, ExxonMobil, PDVSA, Novatek, and SDS Digger Tools. (3) PDVSA has joined the advisory board to this DOE mud hammer project. PDVSA's commitment of cash and in-kind contributions was reported during the last quarter. (4) Strong industry support remains for the DOE project. Both Andergauge and Smith Tools have expressed an interest in participating in the ''optimization'' phase of the program. The potential for increased testing with additional industry cash support was discussed at the planning meeting in February 2002. Progress during Q2 2002: (1) Presentation material was provided to the DOE/NETL project manager (Dr. John Rogers) for the DOE exhibit at the 2002 Offshore Technology Conference. (2) Two meetings, at Smith International and at Andergauge in Houston, were held to investigate their interest in joining the Mud Hammer Performance study. (3) SDS Digger Tools (the Task 3 benchmarking participant) apparently has not negotiated a commercial deal with Halliburton on the supply of fluid hammers to the oil and gas business. (4) TerraTek is awaiting progress by Novatek (a DOE contractor) on the redesign and development of their next hammer tool. Their delay will require an extension to TerraTek's contracted program. (5) Smith International has sufficient interest in the program to start engineering and chroming of collars for testing at TerraTek. (6) Shell's Brian Tarr has agreed to join the Industry Advisory Group for the DOE project. The addition of Brian Tarr is welcomed, as he has numerous years of experience with the Novatek tool and was involved in the early tests in Europe while with Mobil Oil. (7) Conoco's field trial of the Smith fluid hammer for an application in Vietnam was organized and has contributed to the increased interest in their tool. Progress during Q3 2002: (1) Smith International agreed to participate in the DOE Mud Hammer program. (2) Smith International chromed collars for upcoming benchmark tests at TerraTek, now scheduled for 4Q 2002. (3) ConocoPhillips conducted a field trial of the Smith fluid hammer offshore Vietnam. The hammer functioned properly, though the well encountered hole-condition and reaming problems. ConocoPhillips plans another field trial as a result. (4) DOE/NETL extended the contract for the fluid hammer program to allow Novatek to ''optimize'' their much-delayed tool into 2003 and to allow Smith International to add ''benchmarking'' tests in light of SDS Digger Tools' current financial inability to participate. (5) ConocoPhillips joined the Industry Advisors for the mud hammer program. Progress during Q4 2002: (1) Smith International participated in the DOE Mud Hammer program through full-scale benchmarking testing during the week of 4 November 2002. (2) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests, allowing extension of the contract to add to the benchmarking testing program. (3) Following the benchmark testing of the Smith International hammer, representatives from DOE/NETL, TerraTek, Smith International and PDVSA met at TerraTek in Salt Lake City to review observations, performance and views on the optimization step for 2003. (4) The December 2002 issue of the Journal of Petroleum Technology (Society of Petroleum Engineers) highlighted the DOE fluid hammer testing program and reviewed last year's paper on the benchmark performance of the SDS Digger and Novatek hammers. (5) TerraTek's Sid Green presented a technical review for DOE/NETL personnel in Morgantown on ''Impact Rock Breakage'' and its importance in improving fluid hammer performance. Much discussion has taken place on the issues surrounding mud hammer performance at depth conditions.

  12. Benchmarking biology research organizations using a new, dedicated tool.

    PubMed

    van Harten, Willem H; van Bokhorst, Leonard; van Luenen, Henri G A M

    2010-02-01

    International competition forces fundamental research organizations to assess their relative performance. We present a benchmark tool for scientific research organizations where, contrary to existing models, the group leader is placed in a central position within the organization. We used it in a pilot benchmark study involving six research institutions. Our study shows that data collection and data comparison based on this new tool can be achieved. It proved possible to compare relative performance and organizational characteristics and to generate suggestions for improvement for most participants. However, strict definitions of the parameters used for the benchmark and a thorough insight into the organization of each of the benchmark partners are required to produce comparable data and draw firm conclusions.

  13. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    ERIC Educational Resources Information Center

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy for comparing fractions to fifth-graders in Taiwan. Twenty-six fifth graders from a public elementary school in southern Taiwan were selected to join this study. Results of this case study showed that students made considerable progress in the use of the benchmark strategy when comparing fractions…

  14. [Impact of quality-indicator-based measures to improve the treatment of acute poisoning in pediatric emergency patients].

    PubMed

    Martínez Sánchez, Lidia; Trenchs Sainz de la Maza, Victoria; Azkunaga Santibáñez, Beatriz; Nogué-Xarau, Santiago; Ferrer Bosch, Nuria; García González, Elsa; Luaces I Cubells, Carles

    2016-02-01

    To analyze the impact of quality-indicator-based measures for improving quality of care for acute poisoning in pediatric emergency departments. Recent assessments of quality indicators were compared with benchmark targets and with results from previous studies. The first study evaluated 6 basic indicators in the pediatric emergency departments of members of the working group on poisoning of the Spanish Society of Pediatric Emergency Medicine (GTI-SEUP). The second study evaluated 20 indicators in a single emergency department of GTI-SEUP members. Based on the results of those studies, the departments implemented the following corrective measures: creation of a team for gastric lavage follow-up, preparation of a new GTI-SEUP manual on poisoning, implementation of a protocol for poisoning incidents, and creation of specific poisoning-related fields for computerized patient records. The benchmark targets were reached on 4 quality indicators in the first study. Improvements were seen in the availability of protocols, as indicators exceeded the target in all the pediatric emergency departments (vs 29.2% of the departments in an earlier study, P < .001). No other significant improvements were observed. In the second study the benchmarks were reached on 13 indicators. Improvements were seen in compliance with incident reporting to the police (recently, 44.4% vs 19.2% previously, P = .036), case registration in the minimum basic data set (51.0% vs 1.9%, P < .001), and a trend toward increased administration of activated charcoal within 2 hours (93.1% vs 83.5%, P = .099). No other significant improvements were seen. The corrective measures led to improvements in some quality indicators. There is still room for improvement in these emergency departments' care of pediatric poisoning.

  15. Adiabatic Quantum Computing via the Rydberg Blockade

    NASA Astrophysics Data System (ADS)

    Keating, Tyler; Goyal, Krittika; Deutsch, Ivan

    2012-06-01

    We study an architecture for implementing adiabatic quantum computation with trapped neutral atoms. Ground state atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism, thereby providing the requisite entangling interactions. As a benchmark we study the performance of a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. We model a realistic architecture, including the effects of magnetic level structure, with qubits encoded into the clock states of ^133Cs, effective B-fields implemented through microwaves and light shifts, and atom-atom coupling achieved by excitation to a high-lying Rydberg level. Including the fundamental effects of photon scattering we find a high fidelity for the two-qubit implementation.
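
    For readers unfamiliar with the benchmark problem class, the following sketch brute-forces the ground-state spin configuration of a small Ising-type cost function, the kind of instance a QUBO problem maps to. It is a toy illustration only; the fields and couplings are invented, and it has nothing to do with the Rydberg-dressed implementation itself.

        import itertools

        def ising_energy(spins, h, J):
            """Energy of an Ising configuration: E = sum_i h_i s_i + sum_(i<j) J_ij s_i s_j."""
            energy = sum(h[i] * s for i, s in enumerate(spins))
            energy += sum(J[(i, j)] * spins[i] * spins[j] for (i, j) in J)
            return energy

        def ground_state(h, J):
            """Exhaustive search over all 2^N spin configurations (feasible only for small N)."""
            n = len(h)
            best = min(itertools.product((-1, +1), repeat=n),
                       key=lambda s: ising_energy(s, h, J))
            return best, ising_energy(best, h, J)

        # Illustrative 4-spin instance (fields and couplings are made up).
        h = [0.5, -0.2, 0.1, -0.4]
        J = {(0, 1): -1.0, (1, 2): 0.8, (2, 3): -0.6, (0, 3): 0.3}
        spins, energy = ground_state(h, J)
        print("ground state:", spins, "energy:", energy)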

  16. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    Many benchmark experiments with DT neutrons have been carried out so far, especially aimed at fusion reactor development. These integral experiments seemed, vaguely, to validate the nuclear data below 14 MeV, but no precise studies exist. The authors' group therefore began to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase and to generalize the discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have been conducted to discuss how to estimate the performance of benchmark experiments in general. The thought experiments with a point detector show that the sensitivity to a discrepancy appearing in the benchmark analysis is due "equally" not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (A) that produce the neutrons carrying that contribution, to the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis carried out in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  17. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one and two orders of magnitude faster than the HFS solver.
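
    The cost-per-call idea can be made concrete with a toy calculation. Assuming a solver whose independent calls each succeed with probability p and cost c, the expected total cost of running it until success is c/p; the sketch below compares two hypothetical solvers on that basis. This is a simplification of the paper's optimal-stopping formulation, and all numbers are invented.

        def expected_cost_repeat_until_success(p_success, cost_per_call):
            """Expected total cost of calling a randomized solver until it returns a
            good-enough solution, assuming independent calls with success probability
            p_success. The expected number of calls is 1/p, so the cost is c / p."""
            if not 0.0 < p_success <= 1.0:
                raise ValueError("success probability must be in (0, 1]")
            return cost_per_call / p_success

        # Compare two hypothetical solvers on the same instance class.
        solvers = {
            "fast_but_unreliable": {"p": 0.02, "cost": 0.001},   # e.g. one short anneal
            "slow_but_reliable":   {"p": 0.90, "cost": 0.500},   # e.g. one heuristic run
        }
        for name, s in solvers.items():
            print(name, "expected cost:",
                  expected_cost_repeat_until_success(s["p"], s["cost"]))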

  18. Assessing the quality of GEOID12B model through field surveys

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed; Kamtchang, Franck; Wegmann, Christian; Guerrero, Adalberto

    2018-01-01

    Elevation differences have been determined through conventional ground surveying techniques for over a century. Since the mid-80s, GPS, GLONASS and other satellite systems have modernized the means by which elevation differences are observed. In this article, we assessed the quality of GEOID12B through long-occupation GNSS static surveys. A set of NGS benchmarks was occupied for at least one hour using dual-frequency GNSS receivers. Collected measurements were processed using a single CORS station at most 24 kilometers from the benchmarks. Geoid undulation values were derived by differencing the measured ellipsoidal heights and the orthometric heights posted on the NGS website. To assess the quality of GEOID12B, we compared our computed vertical shifts at the benchmarks with those estimated from GEOID12B published by NGS. In addition, a Kriging model was used to interpolate local maps of the geoid undulations from the benchmark heights. The maps were compared with the corresponding parts of GEOID12B. No biases were detected in the results and only shifts due to random errors were found. Discrepancies in the range of ten centimetres were noticed between our geoid undulations and the values available from NGS.
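
    The comparison described above rests on the standard relation N = h − H between the ellipsoidal height h, the orthometric height H, and the geoid undulation N. A minimal sketch of that bookkeeping follows; the benchmark names and heights are invented for illustration.

        # Geoid undulation from a GNSS-derived ellipsoidal height and a published
        # orthometric height: N = h - H. Residuals are taken against model values
        # (standing in here for GEOID12B undulations at the same benchmarks).
        benchmarks = [
            # name, ellipsoidal h (m), orthometric H (m), model undulation N_model (m)
            ("BM01", 1405.321, 1427.551, -22.212),
            ("BM02", 1392.874, 1415.130, -22.280),
            ("BM03", 1410.005, 1432.190, -22.150),
        ]

        residuals = []
        for name, h, H, n_model in benchmarks:
            n_obs = h - H                      # observed geoid undulation
            residuals.append(n_obs - n_model)  # discrepancy w.r.t. the geoid model
            print(f"{name}: N_obs = {n_obs:.3f} m, residual = {n_obs - n_model:+.3f} m")

        mean_bias = sum(residuals) / len(residuals)
        print(f"mean bias: {mean_bias:+.3f} m")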

  19. Performance Evaluation and Benchmarking of Next Intelligent Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  20. Approximate methods in gamma-ray skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faw, R.E.; Roseberry, M.L.; Shultis, J.K.

    1985-11-01

    Gamma-ray skyshine, an important component of the radiation field in the environment of a nuclear power plant, has recently been studied in relation to storage of spent fuel and nuclear waste. This paper reviews benchmark skyshine experiments and transport calculations against which computational procedures may be tested. The paper also addresses the applicability of simplified computational methods involving single-scattering approximations. One such method, suitable for microcomputer implementation, is described and results are compared with other work.

  1. 47 CFR 54.805 - Zone and study area above benchmark revenues calculated by the Administrator.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Period Residential and Single-Line Business Lines times 12. If negative, the Zone Above Benchmark...) multiplied by all eligible telecommunications carrier zone Base Period Multi-line Business Lines times 12. If... 47 Telecommunication 3 2010-10-01 2010-10-01 false Zone and study area above benchmark revenues...

  2. Continental Deformation in Madagascar from GNSS Observations

    NASA Astrophysics Data System (ADS)

    Stamps, D. S.; Rajaonarison, T.; Rambolamanana, G.; Herimitsinjo, N.; Carrillo, R.; Jesmok, G.

    2015-12-01

    Madagascar is the easternmost continental segment of the East African Rift System (EARS). Plate reconstructions assume the continental island behaves as a rigid block, but studies of geologically recent kinematics suggest Madagascar undergoes extension related to the broader EARS. In this work we test for rigidity of Madagascar in two steps. First, we quantify surface motions using a novel dataset of episodic and continuous GNSS observations that span Madagascar from north to south. We established a countrywide network of precision benchmarks fixed in bedrock and with open sky view in 2010 that we measured for 48-72 hours with dual-frequency receivers. The benchmarks were remeasured in 2012 and 2014. We processed the episodic GNSS data with ABPO, the only continuous GNSS station in Madagascar with >2.5 years of data, for millimeter-precision positions and velocities at 7 locations using GAMIT-GLOBK. Our velocity field shows 2 mm/yr of differential motion between southern and northern Madagascar. Second, we test a suite of kinematic predictions from previous studies and find that residual velocities are greater than the 95% uncertainties. We also calculate angular velocity vectors assuming Madagascar moves with the Lwandle plate or the Somalian plate. Our new velocity field in Madagascar is inconsistent with all models that assume plate rigidity at the 95% uncertainty level; this result indicates the continental island undergoes statistically significant internal deformation.

  3. Formation of current singularity in a topologically constrained plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yao; Huang, Yi-Min; Qin, Hong

    2016-02-01

    Recently a variational integrator for ideal magnetohydrodynamics in Lagrangian labeling has been developed. Its built-in frozen-in equation makes it optimal for studying current sheet formation. We use this scheme to study the Hahm-Kulsrud-Taylor problem, which considers the response of a 2D plasma magnetized by a sheared field under sinusoidal boundary forcing. We obtain an equilibrium solution that preserves the magnetic topology of the initial field exactly, with a fluid mapping that is non-differentiable. Unlike previous studies that examine the current density output, we identify a singular current sheet from the fluid mapping. These results are benchmarked with a constrained Grad-Shafranov solver. The same signature of current singularity can be found in other cases with more complex magnetic topologies.

  4. Recovery of time evolution of Grad-Shafranov equilibria from single-spacecraft data: Benchmarking and application to a flux transfer event

    NASA Astrophysics Data System (ADS)

    Hasegawa, Hiroshi; Sonnerup, Bengt U. Ö.; Nakamura, Takuma K. M.

    2010-11-01

    First results are presented of a method, developed by Sonnerup and Hasegawa (2010), for analyzing time evolution of magnetohydrostatic Grad-Shafranov (GS) equilibria, using data recorded by an observing probe as it traverses a quasi-static, two-dimensional (2D), magnetic-field/plasma structure. The method recovers spatial initial values used in the classical GS reconstruction for an interval before and after the time of actual measurements, by advancing them backward and forward in time based on a set of equations for an incompressible plasma; the consequence is generation of multiple GS maps or a movie of the 2D field structure. The method is successfully benchmarked by use of a 2D magnetohydrodynamic simulation of time-dependent magnetic reconnection, and then is applied to a flux transfer event (FTE) seen by the Cluster spacecraft at the dayside high-latitude magnetopause. The application shows that the field lines constituting the FTE flux rope were contracting toward its center as a result of modest convective flow in the region around the core of the flux rope.

  5. Performance Against WELCOA's Worksite Health Promotion Benchmarks Across Years Among Selected US Organizations.

    PubMed

    Weaver, GracieLee M; Mendenhall, Brandon N; Hunnicutt, David; Picarella, Ryan; Leffelman, Brittanie; Perko, Michael; Bibeau, Daniel L

    2018-05-01

    The purpose of this study was to quantify the performance of organizations' worksite health promotion (WHP) activities against the benchmarking criteria included in the Well Workplace Checklist (WWC). The Wellness Council of America (WELCOA) developed a tool to assess WHP with its 100-item WWC, which represents WELCOA's 7 performance benchmarks. The setting was workplaces. This study includes a convenience sample of organizations that completed the checklist from 2008 to 2015. The sample size was 4643 entries from US organizations. The WWC includes demographic questions, general questions about WHP programs, and scales to measure performance against the WELCOA 7 benchmarks. Descriptive analyses of WWC items were completed separately for each year of the study period. The majority of the organizations represented each year were multisite, multishift, medium- to large-sized companies mostly in the services industry. Despite yearly changes in participating organizations, results across the WELCOA 7 benchmark scores were consistent year to year. Across all years, the benchmarks on which organizations performed lowest were senior-level support, data collection, and programming; wellness teams and supportive environments were the highest-scoring benchmarks. In an era marked by economic swings and health-care reform, it appears that organizations are staying consistent in their performance across these benchmarks. The WWC could be useful for organizations, practitioners, and researchers in assessing the quality of WHP programs.

  6. [Benchmarking of university trauma centers in Germany. Research and teaching].

    PubMed

    Gebhard, F; Raschke, M; Ruchholtz, S; Meffert, R; Marzi, I; Pohlemann, T; Südkamp, N; Josten, C; Zwipp, H

    2011-07-01

    Benchmarking is a very popular business process and meanwhile is used in research as well. The aim of the present study is to elucidate key numbers of German university trauma departments regarding research and teaching. The data set is based upon the monthly reports given by the administration in each university. As a result the study shows that only well-known parameters such as fund-raising and impact factors can be used to benchmark university-based trauma centers. The German federal system does not allow a nationwide benchmarking.

  7. Strömgren survey for asteroseismology and galactic archaeology: Let the saga begin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casagrande, L.; Dotter, A.; Milone, A. P.

    2014-06-01

    Asteroseismology has the capability of precisely determining stellar properties that would otherwise be inaccessible, such as radii, masses, and thus ages of stars. When coupling this information with classical determinations of stellar parameters, such as metallicities, effective temperatures, and angular diameters, powerful new diagnostics for Galactic studies can be obtained. The ongoing Strömgren survey for Asteroseismology and Galactic Archaeology has the goal of transforming the Kepler field into a new benchmark for Galactic studies, similar to the solar neighborhood. Here we present the first results from a stripe centered at a Galactic longitude of 74° and covering latitude from about 8° to 20°, which includes almost 1000 K giants with seismic information and the benchmark open cluster NGC 6819. We describe the coupling of classical and seismic parameters, the accuracy as well as the caveats of the derived effective temperatures, metallicities, distances, surface gravities, masses, and radii. Confidence in the achieved precision is corroborated by the detection of the first and secondary clumps in a population of field stars with a ratio of 2 to 1 and by the negligible scatter in the seismic distances among NGC 6819 member stars. An assessment of the reliability of stellar parameters in the Kepler Input Catalog is also performed, and the impact of our results for population studies in the Milky Way is discussed, along with the importance of an all-sky Strömgren survey.

  8. A Comparison of Field-Dependence Cognitive Styles of Professionals in Purchasing and Consumer Service and Secondary Marketing Education Students, with Implications for Workforce Development.

    ERIC Educational Resources Information Center

    Fritz, Robert L.; Stewart, Barbara; Norwood, Marcella

    2002-01-01

    The field-dependent cognitive styles of 44 professionals in customer service occupations provided a benchmark to interpret data for 239 secondary marketing education students. Results suggest that males have greater access to analytic traits such as restructuring skill, problem-solving interest, and skill with abstractions. (Contains 38…

  9. Utilizing Benchmarking to Study the Effectiveness of Parent-Child Interaction Therapy Implemented in a Community Setting

    ERIC Educational Resources Information Center

    Self-Brown, Shannon; Valente, Jessica R.; Wild, Robert C.; Whitaker, Daniel J.; Galanter, Rachel; Dorsey, Shannon; Stanley, Jenelle

    2012-01-01

    Benchmarking is a program evaluation approach that can be used to study whether the outcomes of parents/children who participate in an evidence-based program in the community approximate the outcomes found in randomized trials. This paper presents a case illustration using benchmarking methodology to examine a community implementation of…

  10. The infrared luminosity function of AKARI 90 μm galaxies in the local Universe

    NASA Astrophysics Data System (ADS)

    Kilerci Eser, Ece; Goto, Tomotsugu

    2018-03-01

    Local infrared (IR) luminosity functions (LFs) are necessary benchmarks for high-redshift IR galaxy evolution studies, and accurate IR LF evolution studies accordingly require accurate local IR LFs. We present IR galaxy LFs at redshifts of z ≤ 0.3 from the AKARI space telescope, which performed an all-sky survey in six IR bands (9, 18, 65, 90, 140, and 160 μm) with 10 times better sensitivity than its precursor, the Infrared Astronomical Satellite. Availability of the 160 μm filter is critically important for accurately measuring the total IR luminosity of galaxies, as it covers the peak of the dust emission. By combining data from the Wide-field Infrared Survey Explorer (WISE), the Sloan Digital Sky Survey (SDSS) Data Release 13 (DR13), the Six-degree Field Galaxy Survey and the 2MASS Redshift Survey, we created a sample of 15 638 local IR galaxies with spectroscopic redshifts, a factor of 7 larger than the previously studied AKARI-SDSS sample. After carefully correcting for volume effects in both IR and optical, the obtained IR LFs agree well with previous studies, but come with much smaller errors. The measured local IR luminosity density is ΩIR = 1.19 ± 0.05 × 10^8 L⊙ Mpc^-3. The contributions from luminous IR galaxies and ultraluminous IR galaxies to ΩIR are very small, 9.3 per cent and 0.9 per cent, respectively. No all-sky far-IR survey is planned for the foreseeable future, so the IR LFs obtained in this work will remain an important benchmark for high-redshift studies for decades.
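
    The volume correction mentioned above is commonly handled with the classical 1/Vmax estimator, in which each galaxy contributes 1/Vmax to its luminosity bin. The sketch below illustrates that standard estimator; it is not necessarily the exact recipe used in the paper, and the sample values are invented.

        def vmax_luminosity_function(log_lums, v_max, bins):
            """Classical 1/Vmax luminosity function estimator: each galaxy contributes
            1/Vmax (Mpc^-3) to its luminosity bin, where Vmax is the maximum comoving
            volume in which it would still pass the survey flux limit. Returns the
            number density per bin, normalized by bin width (Mpc^-3 dex^-1)."""
            widths = [hi - lo for lo, hi in zip(bins[:-1], bins[1:])]
            phi = [0.0] * (len(bins) - 1)
            for logl, vmax in zip(log_lums, v_max):
                for k, (lo, hi) in enumerate(zip(bins[:-1], bins[1:])):
                    if lo <= logl < hi:
                        phi[k] += 1.0 / vmax
                        break
            return [p / w for p, w in zip(phi, widths)]

        # Invented mini-sample: log10(L_IR / L_sun) and Vmax (Mpc^3) per galaxy.
        log_lums = [9.8, 10.2, 10.5, 11.1, 11.3, 12.1]
        v_max    = [2.0e5, 5.0e5, 9.0e5, 3.0e6, 4.0e6, 2.0e7]
        bins = [9.5, 10.5, 11.5, 12.5]
        print(vmax_luminosity_function(log_lums, v_max, bins))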

  11. Creating a non-linear total sediment load formula using polynomial best subset regression model

    NASA Astrophysics Data System (ADS)

    Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali

    2016-08-01

    The aim of this study is to derive a new total sediment load formula which is more accurate and which has fewer application constraints than the well-known formulae of the literature. The five best-known stream-power-concept sediment formulae approved by ASCE are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called Polynomial Best Subset Regression (PBSR) analysis. The aim of the PBSR analysis is to fit and test all possible combinations of the input variables and to select the best subset. All input variables, together with their second and third powers, are included in the regression to test the possible relations between the explanatory variables and the dependent variable. In selecting the best subset, a multistep approach is used that depends on significance values and on the degree of multicollinearity among the inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab subsets within this holdout data. Different goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After the detailed comparisons were carried out, we identified the most accurate equation, which is also applicable to both flume and river data. In particular, on the field dataset the proposed formula outperformed the benchmark formulations.
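
    A minimal sketch of the best-subset idea (not the authors' exact procedure, which also screens significance values and multicollinearity): expand each candidate predictor with its second and third powers, fit ordinary least squares to every subset up to a small size, and keep the subset with the highest adjusted R². The toy data and names are assumptions.

        import itertools
        import numpy as np

        def adjusted_r2(y, y_hat, n_params):
            ss_res = np.sum((y - y_hat) ** 2)
            ss_tot = np.sum((y - y.mean()) ** 2)
            n = len(y)
            return 1.0 - (ss_res / (n - n_params - 1)) / (ss_tot / (n - 1))

        def best_subset(X, y, max_terms=3):
            """Polynomial best-subset regression: candidate terms are each column of X
            raised to powers 1..3; every subset of up to max_terms terms is fit by OLS
            and ranked by adjusted R^2."""
            terms = {(j, p): X[:, j] ** p for j in range(X.shape[1]) for p in (1, 2, 3)}
            best = (None, -np.inf, None)
            for k in range(1, max_terms + 1):
                for subset in itertools.combinations(terms, k):
                    A = np.column_stack([np.ones(len(y))] + [terms[t] for t in subset])
                    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                    score = adjusted_r2(y, A @ coef, n_params=k)
                    if score > best[1]:
                        best = (subset, score, coef)
            return best

        # Toy data: two dimensionless predictors and a response.
        rng = np.random.default_rng(0)
        X = rng.uniform(0.1, 2.0, size=(200, 2))
        y = 0.7 * X[:, 0] ** 2 + 0.3 * X[:, 1] + rng.normal(0, 0.05, 200)
        subset, score, coef = best_subset(X, y)
        print("selected terms:", subset, "adjusted R^2:", round(score, 3))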

  12. TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer

    NASA Astrophysics Data System (ADS)

    Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.

    2017-07-01

    Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.

  13. Finite Element Modeling of the World Federation's Second MFL Benchmark Problem

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwei; Tian, Yong; Udpa, Satish; Udpa, Lalita

    2004-02-01

    This paper presents results obtained by simulating the second magnetic flux leakage benchmark problem proposed by the World Federation of NDE Centers. The geometry consists of notches machined on the internal and external surfaces of a rotating steel pipe that is placed between two yokes that are part of a magnetic circuit energized by an electromagnet. The model calculates the radial component of the leaked field at specific positions. The nonlinear material property of the ferromagnetic pipe is taken into account in simulating the problem. The velocity effect caused by the rotation of the pipe is, however, ignored for reasons of simplicity.

  14. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.

  15. Vibrational multiconfiguration self-consistent field theory: implementation and test calculations.

    PubMed

    Heislbetz, Sandra; Rauhut, Guntram

    2010-03-28

    A state-specific vibrational multiconfiguration self-consistent field (VMCSCF) approach based on a multimode expansion of the potential energy surface is presented for the accurate calculation of anharmonic vibrational spectra. As a special case of this general approach vibrational complete active space self-consistent field calculations will be discussed. The latter method shows better convergence than the general VMCSCF approach and must be considered the preferred choice within the multiconfigurational framework. Benchmark calculations are provided for a small set of test molecules.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michal, V. P., E-mail: vincent.michal@cea.fr

    The formalism for analyzing the magnetic field distribution in the vortex lattice of Pauli-limit heavy-electron superconductors is applied to the evaluation of the vortex lattice static linewidth relevant to the muon spin rotation (μSR) experiment. Based on the Ginzburg-Landau expansion for the superconductor free energy, we study the evolution with respect to the external field of the static linewidth both in the limit of independent vortices (low magnetic field) with a variational expression for the order parameter and in the near-H_c2^P(T) regime with an extension of the Abrikosov analysis to Pauli-limit superconductors. We conclude that in the Ginzburg-Landau regime in the Pauli limit, anomalous variations of the static linewidth with the applied field are predicted as a result of the superconductor spin response around a vortex core that dominates the usual charge-response screening supercurrents. We propose the effect as a benchmark for studying new puzzling vortex lattice properties recently observed in CeCoIn5.

  17. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    PubMed

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for prevention of venous thromboembolism (VTE) in the ENT surgical population against the ENT.UK guidelines, and also to encourage healthcare providers to use benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmarked our practice guidelines for the prevention of VTE in the ENT surgical population against the ENT.UK guidelines in order to close any gaps. The ENT.UK 2010 guidelines were downloaded from the ENT.UK website. Our guidelines were compared against them to determine whether our practice meets or falls short of the ENT.UK guidelines. Immediate corrective actions were to take place if there was a quality chasm between the two sets of guidelines. The ENT.UK guidelines are evidence-based and up to date, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required to provide a quality service to ENT surgical patients. Although not always given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended that benchmarking be included in the list of quality improvement methods for healthcare services.

  18. A Methodology for Benchmarking Relational Database Machines,

    DTIC Science & Technology

    1984-01-01

    user benchmarks is to compare the multiple users to the best-case performance The data for each query classification coll and the performance...called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measure...formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey

  19. Benchmarking and the laboratory

    PubMed Central

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  20. Numerical Simulations of Vortex Shedding in Hydraulic Turbines

    NASA Technical Reports Server (NTRS)

    Dorney, Daniel; Marcu, Bogdan

    2004-01-01

    Turbomachines for rocket propulsion applications operate with many different working fluids and flow conditions. Oxidizer boost turbines often operate in liquid oxygen, resulting in an incompressible flow field. Vortex shedding from airfoils in this flow environment can have adverse effects on both turbine performance and durability. In this study the effects of vortex shedding in a low-pressure oxidizer turbine are investigated. Benchmark results are also presented for vortex shedding behind a circular cylinder. The predicted results are compared with available experimental data.

  1. Finite difference time domain (FDTD) modeling of implanted deep brain stimulation electrodes and brain tissue.

    PubMed

    Gabran, S R I; Saad, J H; Salama, M M A; Mansour, R R

    2009-01-01

    This paper demonstrates the electromagnetic modeling and simulation of an implanted Medtronic deep brain stimulation (DBS) electrode using the finite difference time domain (FDTD) method. The model is developed using Empire XCcel and represents the electrode surrounded by brain tissue, assumed to be a homogeneous and isotropic medium. The model is created to study the parameters influencing the electric field distribution within the tissue in order to provide reference and benchmarking data for DBS and intra-cortical electrode development.
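
    FDTD itself is a leapfrog update of the electric and magnetic fields on a staggered grid. The 1-D vacuum sketch below shows the core update loop in normalized units; it is generic textbook FDTD for illustration, not the Empire XCcel electrode model.

        import numpy as np

        # 1-D free-space FDTD (Yee scheme): leapfrog update of Ez and Hy.
        # Normalized units with Courant number S = c*dt/dx = 1 for simplicity.
        nx, nt = 200, 400
        ez = np.zeros(nx)
        hy = np.zeros(nx - 1)

        for t in range(nt):
            hy += ez[1:] - ez[:-1]          # update H from the spatial difference of E
            ez[1:-1] += hy[1:] - hy[:-1]    # update E from the spatial difference of H
            ez[nx // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source

        print("peak |Ez| after", nt, "steps:", np.max(np.abs(ez)))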

  2. The impact of a scheduling change on ninth grade high school performance on biology benchmark exams and the California Standards Test

    NASA Astrophysics Data System (ADS)

    Leonardi, Marcelo

    The primary purpose of this study was to examine the impact of a scheduling change from a trimester 4x4 block schedule to a modified hybrid schedule on student achievement in ninth grade biology courses. The study examined the impact of the scheduling change on student achievement through teacher-created benchmark assessments in Genetics, DNA, and Evolution and on the California Standards Test (CST) in Biology. The secondary purpose of this study was to examine ninth grade biology teachers' perceptions of ninth grade biology student achievement. Using a mixed methods research approach, data were collected both quantitatively and qualitatively, aligned to the research questions. Quantitative methods included gathering data from departmental benchmark exams and the California Standards Test in Biology and conducting multiple analyses of covariance and analyses of covariance to determine significant differences. Qualitative methods included journal-entry questions and focus group interviews. The results revealed a statistically significant increase in scores on both the DNA and Evolution benchmark exams as a result of the change in scheduling format; the scheduling change accounted for 1.5% of the increase in DNA benchmark scores and 2% of the increase in Evolution benchmark scores. The results revealed a statistically significant decrease in scores on the Genetics benchmark exam as a result of the scheduling change, with the scheduling change accounting for 1% of the decrease in Genetics benchmark scores. The results also revealed a statistically significant increase in scores on the CST Biology exam, with the scheduling change accounting for 0.7% of the increase in CST Biology scores. Results of the focus group discussions indicated that all teachers preferred the modified hybrid schedule over the trimester schedule and that it improved student achievement.

  3. On the Ground or in the Air? A Methodological Experiment on Crop Residue Cover Measurement in Ethiopia

    NASA Astrophysics Data System (ADS)

    Kosmowski, Frédéric; Stevenson, James; Campbell, Jeff; Ambel, Alemayehu; Haile Tsegay, Asmelash

    2017-10-01

    Maintaining permanent coverage of the soil using crop residues is an important and commonly recommended practice in conservation agriculture. Measuring this practice is an essential step in improving knowledge about the adoption and impact of conservation agriculture. Different data collection methods can be implemented to capture the field level crop residue coverage for a given plot, each with its own implication on survey budget, implementation speed and respondent and interviewer burden. In this paper, six alternative methods of crop residue coverage measurement are tested among the same sample of rural households in Ethiopia. The relative accuracy of these methods are compared against a benchmark, the line-transect method. The alternative methods compared against the benchmark include: (i) interviewee (respondent) estimation; (ii) enumerator estimation visiting the field; (iii) interviewee with visual-aid without visiting the field; (iv) enumerator with visual-aid visiting the field; (v) field picture collected with a drone and analyzed with image-processing methods and (vi) satellite picture of the field analyzed with remote sensing methods. Results of the methodological experiment show that survey-based methods tend to underestimate field residue cover. When quantitative data on cover are needed, the best estimates are provided by visual-aid protocols. For categorical analysis (i.e., >30% cover or not), visual-aid protocols and remote sensing methods perform equally well. Among survey-based methods, the strongest correlates of measurement errors are total farm size, field size, distance, and slope. Results deliver a ranking of measurement options that can inform survey practitioners and researchers.

  4. On the Ground or in the Air? A Methodological Experiment on Crop Residue Cover Measurement in Ethiopia.

    PubMed

    Kosmowski, Frédéric; Stevenson, James; Campbell, Jeff; Ambel, Alemayehu; Haile Tsegay, Asmelash

    2017-10-01

    Maintaining permanent coverage of the soil using crop residues is an important and commonly recommended practice in conservation agriculture. Measuring this practice is an essential step in improving knowledge about the adoption and impact of conservation agriculture. Different data collection methods can be implemented to capture the field level crop residue coverage for a given plot, each with its own implication on survey budget, implementation speed and respondent and interviewer burden. In this paper, six alternative methods of crop residue coverage measurement are tested among the same sample of rural households in Ethiopia. The relative accuracy of these methods are compared against a benchmark, the line-transect method. The alternative methods compared against the benchmark include: (i) interviewee (respondent) estimation; (ii) enumerator estimation visiting the field; (iii) interviewee with visual-aid without visiting the field; (iv) enumerator with visual-aid visiting the field; (v) field picture collected with a drone and analyzed with image-processing methods and (vi) satellite picture of the field analyzed with remote sensing methods. Results of the methodological experiment show that survey-based methods tend to underestimate field residue cover. When quantitative data on cover are needed, the best estimates are provided by visual-aid protocols. For categorical analysis (i.e., >30% cover or not), visual-aid protocols and remote sensing methods perform equally well. Among survey-based methods, the strongest correlates of measurement errors are total farm size, field size, distance, and slope. Results deliver a ranking of measurement options that can inform survey practitioners and researchers.
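
    Comparing each alternative method against the line-transect benchmark amounts to computing, per method, the bias and RMSE of its cover estimates and, for the categorical question, the share of plots classified the same way relative to the 30% threshold. The sketch below illustrates those summaries with invented plot-level values.

        import math

        # Per-plot residue cover (%) from the benchmark (line-transect) and one
        # alternative method; values are invented for illustration.
        benchmark = [12.0, 45.0, 30.0, 62.0, 8.0, 55.0]
        respondent_estimate = [10.0, 35.0, 22.0, 50.0, 5.0, 48.0]

        def bias_and_rmse(estimates, reference):
            errors = [e - r for e, r in zip(estimates, reference)]
            bias = sum(errors) / len(errors)
            rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
            return bias, rmse

        def threshold_agreement(estimates, reference, threshold=30.0):
            """Share of plots classified the same way (cover above/below threshold)."""
            matches = sum((e >= threshold) == (r >= threshold)
                          for e, r in zip(estimates, reference))
            return matches / len(reference)

        bias, rmse = bias_and_rmse(respondent_estimate, benchmark)
        agree = threshold_agreement(respondent_estimate, benchmark)
        print(f"bias = {bias:+.1f} pp, RMSE = {rmse:.1f} pp, agreement(>=30%) = {agree:.2f}")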

  5. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; hide

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem, and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  6. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    PubMed

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often performed retrospectively, notably by studying the enrichment of benchmarking data sets. For this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high-quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.

  7. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    PubMed

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how or how many implants should be statistically compared with a benchmark to assess whether or not that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. The design was a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. When benchmarking implant performance, net failure estimated using 1-Kaplan-Meier is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
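
    As a hedged illustration of the one-sample non-inferiority framework described above (not the authors' simulation code), the sketch below applies the simplest of the three analysis methods named in the abstract, a z-test on a failure proportion against an external benchmark plus a rough normal-approximation sample-size estimate; the benchmark, margin, and counts are invented.

```python
# Minimal sketch: one-sample non-inferiority z-test of an implant failure proportion
# against an external benchmark, plus an approximate sample-size calculation.
# Benchmark, margin, and failure counts are hypothetical, not study data.
from math import sqrt, ceil
from statistics import NormalDist

def noninferiority_z_test(failures, n, benchmark, margin, alpha=0.05):
    """H0: true failure rate >= benchmark + margin; H1: it is below that bound."""
    p_hat = failures / n
    p0 = benchmark + margin
    se = sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se
    p_value = NormalDist().cdf(z)          # one-sided, reject for small p_hat
    return z, p_value, p_value < alpha

def approx_sample_size(benchmark, margin, power=0.8, alpha=0.05):
    """Rough normal-approximation sample size, assuming the true rate equals the benchmark."""
    za = NormalDist().inv_cdf(1 - alpha)
    zb = NormalDist().inv_cdf(power)
    p0, p1 = benchmark + margin, benchmark
    n = ((za * sqrt(p0 * (1 - p0)) + zb * sqrt(p1 * (1 - p1))) / margin) ** 2
    return ceil(n)

if __name__ == "__main__":
    # Hypothetical register extract: 62 failures in 3200 procedures, 5% benchmark, 2% margin.
    print(noninferiority_z_test(failures=62, n=3200, benchmark=0.05, margin=0.02))
    print(approx_sample_size(benchmark=0.05, margin=0.02))
```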

  8. A Competitive Benchmarking Study of Noncredit Program Administration.

    ERIC Educational Resources Information Center

    Alstete, Jeffrey W.

    1996-01-01

    A benchmarking project to measure administrative processes and financial ratios received 57 usable replies from 300 noncredit continuing education programs. Programs with strong financial surpluses were identified and their processes benchmarked (including response to inquiries, registrants, registrant/staff ratio, new courses, class size,…

  9. The Learning Organisation: Results of a Benchmarking Study.

    ERIC Educational Resources Information Center

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristic of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  10. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    ERIC Educational Resources Information Center

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  11. Changing Use and Occurrence of Pesticides in Surface Waters of California's Rice-Growing Region

    NASA Astrophysics Data System (ADS)

    Orlando, J. L.; Hladik, M.; Smalling, K. L.; Kuivila, K.

    2011-12-01

    Pesticide use in rice agriculture in California has changed significantly over the past two decades. California is the second largest producer of rice in the United States, and rice is a pesticide-intensive crop with over 1.7 million kg of pesticide active ingredients applied in 2009. Prior to 1999, the herbicides molinate and thiobencarb were the most heavily used pesticides. Molinate was phased out in 2009, replaced primarily by propanil, the use of which exceeded 860,000 kg in that year. Over the same time period, use of thiobencarb has been in decline while applications of newer herbicides like clomazone have increased. The use of insecticides on rice has fallen by an order of magnitude over the last 20 years and now fluctuates around 4,500 kg per year. Another major change has been a steady increase in use of the fungicide azoxystrobin. Pesticides are applied either directly to the soil prior to planting and flooding of the fields, or a few weeks after flooding. Fields treated with thiobencarb or propanil are subject to holding times of 30 or 7 days, respectively, to allow for degradation prior to release of treated water to the environment. When rice-field water is released, it flows into local drains and creeks, and ultimately into the Sacramento/San Joaquin Delta, a critical habitat for many threatened native species. A study was conducted in 2010 to measure the occurrence of rice pesticides in Northern California, and to document how changes in rice pesticide application patterns over the last decade have influenced pesticide concentrations in the environment. Three sites in agriculturally dominated watersheds where rice is the major crop were sampled weekly from the time of initial rice-field flooding (mid-May) through mid-August. Filtered water samples were analyzed for 92 pesticides and pesticide degradates by gas chromatography/mass spectrometry. Azoxystrobin and 3,4-DCA (the major breakdown product of propanil) were detected in every sample, at concentrations up to 136 and 128 μg/L, respectively. Clomazone and thiobencarb were detected in greater than 93% of water samples, with maximum concentrations of 19.4 and 12.4 μg/L, respectively. Propanil was present in 60% of samples, at a maximum concentration of 6.5 μg/L. The U.S. Environmental Protection Agency (EPA) has established chronic invertebrate toxicity benchmarks for concentrations of azoxystrobin, clomazone, and thiobencarb in water of 44, 2,200, and 1.0 μg/L, respectively. Concentrations of azoxystrobin and thiobencarb exceeded these benchmarks in one and three samples, respectively. The chronic fish toxicity benchmark of 9.1 μg/L for propanil was not exceeded in any samples. Although the propanil degradate 3,4-DCA does not have established aquatic life benchmarks, EPA noted that it may be 11 and 7 times more toxic than the parent compound to freshwater invertebrates on an acute and chronic basis, respectively (2009 memo on Risks of Propanil Use to Federally Threatened California Red-legged Frog). This study illustrates the importance of understanding changing pesticide use and the resulting changes in pesticide concentrations in the environment.
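
    The screening logic the abstract describes reduces to comparing detected concentrations against the EPA chronic aquatic-life benchmarks; the hedged sketch below uses only the maximum concentrations and benchmark values quoted above (in μg/L), so it illustrates the comparison rather than reproducing the study's sample-by-sample exceedance counts.

```python
# Illustrative sketch: flag which measured pesticide concentrations exceed the
# EPA chronic aquatic-life benchmarks quoted in the abstract (units: micrograms per liter).
# Only the maximum detected concentrations from the abstract are used here.
benchmarks_ug_per_L = {          # chronic invertebrate (fish for propanil) benchmarks
    "azoxystrobin": 44.0,
    "clomazone": 2200.0,
    "thiobencarb": 1.0,
    "propanil": 9.1,
}
max_detected_ug_per_L = {
    "azoxystrobin": 136.0,
    "clomazone": 19.4,
    "thiobencarb": 12.4,
    "propanil": 6.5,
}

for pesticide, benchmark in benchmarks_ug_per_L.items():
    observed = max_detected_ug_per_L[pesticide]
    status = "exceeds" if observed > benchmark else "is below"
    print(f"{pesticide}: max {observed} ug/L {status} benchmark {benchmark} ug/L")
```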

  12. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients.

    PubMed

    Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F

    2016-12-05

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are not only caused by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analyses and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA.

  13. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients

    PubMed Central

    Benditz, A.; Drescher, J.; Greimel, F.; Zeman, F.; Grifka, J.; Meißner, W.; Völlner, F.

    2016-01-01

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are not only caused by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analyses and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA. PMID:27917911

  14. OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS & HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2004-10-01

    The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility, thus the program was delayed further to accommodate the full testing program.

  15. Corporate Speak and "Collateral Recruitment": Surfing the Student Body

    ERIC Educational Resources Information Center

    McGloin, Colleen

    2015-01-01

    Academic practice is scrutinized and regulated with such "Corporate speak" terms as "performance indicators," "benchmarking," "service providers" and "clients." As part of a field where ideological shifts continue to apply marketized frames of reference as neoliberalism tightens its grip, new terms…

  16. Student Learning: Education's Field of Dreams.

    ERIC Educational Resources Information Center

    Blackwell, Peggy L.

    2003-01-01

    Discusses seven research-based benchmarks providing a framework for the student-learning-focused reform of teacher education: knowledge and understanding based on previous experience, usable content knowledge, transfer of learning/the learning context, strategic thinking, motivation and affect, development and individual differences, and standards…

  17. A Million Cancer Genome Warehouse

    DTIC Science & Technology

    2012-11-20

    [Fragmented full-text extract; the only complete recoverable reference is: Patterson, D. For better or worse, benchmarks shape a field: technical perspective. Communications of the ACM, v. 55, n. 7.]

  18. Targeting the affordability of cigarettes: a new benchmark for taxation policy in low-income and middle-income countries.

    PubMed

    Blecher, Evan

    2010-08-01

    To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low-income and middle-income countries (LMICs) with rapidly growing economies and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
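
    For concreteness, the RIP measure defined above is simply the cost of 100 packs expressed as a share of annual per-capita GDP; the short sketch below computes it with hypothetical price and GDP figures (they are not data from the study).

```python
# Minimal sketch of the relative income price (RIP) described in the abstract:
# the percentage of annual per-capita GDP needed to buy 100 packs of cigarettes.
# The price and GDP figures below are hypothetical.
def relative_income_price(price_per_pack, gdp_per_capita, packs=100):
    """Return RIP as a percentage of annual per-capita GDP."""
    return 100.0 * (packs * price_per_pack) / gdp_per_capita

# Example: a 2.50 (local currency) pack and a 6,000 per-capita GDP give RIP of about 4.2%.
print(relative_income_price(price_per_pack=2.50, gdp_per_capita=6000))
```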

  19. Testing variations of the GW approximation on strongly correlated transition metal oxides: hematite (α-Fe2O3) as a benchmark.

    PubMed

    Liao, Peilin; Carter, Emily A

    2011-09-07

    Quantitative characterization of low-lying excited electronic states in materials is critical for the development of solar energy conversion materials. The many-body Green's function method known as the GW approximation (GWA) directly probes states corresponding to photoemission and inverse photoemission experiments, thereby determining the associated band structure. Several versions of the GW approximation with different levels of self-consistency exist in the field. While the GWA based on density functional theory (DFT) works well for conventional semiconductors, less is known about its reliability for strongly correlated semiconducting materials. Here we present a systematic study of the GWA using hematite (α-Fe(2)O(3)) as the benchmark material. We analyze its performance in terms of the calculated photoemission/inverse photoemission band gaps, densities of states, and dielectric functions. Overall, a non-self-consistent G(0)W(0) using input from DFT+U theory produces physical observables in best agreement with experiments. This journal is © the Owner Societies 2011

  20. FLUKA Monte Carlo simulations and benchmark measurements for the LHC beam loss monitors

    NASA Astrophysics Data System (ADS)

    Sarchiapone, L.; Brugger, M.; Dehning, B.; Kramer, D.; Stockner, M.; Vlachoudis, V.

    2007-10-01

    One of the crucial elements in terms of machine protection for CERN's Large Hadron Collider (LHC) is its beam loss monitoring (BLM) system. On-line loss measurements must prevent the superconducting magnets from quenching and protect the machine components from damage due to unforeseen critical beam losses. In order to ensure the BLM system's design quality, detailed FLUKA Monte Carlo simulations were performed for the betatron collimation insertion during the final design phase of the LHC. In addition, benchmark measurements were carried out with LHC type BLMs installed at the CERN-EU high-energy Reference Field facility (CERF). This paper presents results of FLUKA calculations performed for BLMs installed in the collimation region, compares the results of the CERF measurement with FLUKA simulations and evaluates related uncertainties. This, together with the fact that the CERF source spectra at the respective BLM locations are comparable with those at the LHC, allows assessing the sensitivity of the performed LHC design studies.

  1. Deriving detector-specific correction factors for rectangular small fields using a scintillator detector.

    PubMed

    Qin, Yujiao; Zhong, Hualiang; Wen, Ning; Snyder, Karen; Huang, Yimei; Chetty, Indrin J

    2016-11-08

    The goal of this study was to investigate small field output factors (OFs) for flattening filter-free (FFF) beams on a dedicated stereotactic linear accelerator-based system. From this data, the collimator exchange effect was quantified, and detector-specific correction factors were generated. Output factors for 16 jaw-collimated small fields (from 0.5 to 2 cm) were measured using five different detectors including an ion chamber (CC01), a stereotactic field diode (SFD), a diode detector (Edge), Gafchromic film (EBT3), and a plastic scintillator detector (PSD, W1). Chamber, diodes, and PSD measurements were performed in a Wellhofer water tank, while films were irradiated in solid water at 100 cm source-to-surface distance and 10 cm depth. The collimator exchange effect was quantified for rectangular fields. Monte Carlo (MC) simulations of the measured configurations were also performed using the EGSnrc/DOSXYZnrc code. Output factors measured by the PSD and verified against film and MC calculations were chosen as the benchmark measurements. Compared with plastic scintillator detector (PSD), the small volume ion chamber (CC01) underestimated output factors by an average of -1.0% ± 4.9% (max. = -11.7% for 0.5 × 0.5 cm2 square field). The stereotactic diode (SFD) overestimated output factors by 2.5% ± 0.4% (max. = 3.3% for 0.5 × 1 cm2 rectangular field). The other diode detector (Edge) also overestimated the OFs by an average of 4.2% ± 0.9% (max. = 6.0% for 1 × 1 cm2 square field). Gafchromic film (EBT3) measurements and MC calculations agreed with the scintillator detector measurements within 0.6% ± 1.8% and 1.2% ± 1.5%, respectively. Across all the X and Y jaw combinations, the average collimator exchange effect was computed: 1.4% ± 1.1% (CC01), 5.8% ± 5.4% (SFD), 5.1% ± 4.8% (Edge diode), 3.5% ± 5.0% (Monte Carlo), 3.8% ± 4.7% (film), and 5.5% ± 5.1% (PSD). Small field detectors should be used with caution with a clear understanding of their behaviors, especially for FFF beams and small, elongated fields. The scintillator detector exhibited good agreement against Gafchromic film measurements and MC simulations over the range of field sizes studied. The collimator exchange effect was found to be important at these small field sizes. Detector-specific correction factors were computed using the scintillator measurements as the benchmark. © 2016 The Authors.
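
    As a hedged illustration of how detector-specific correction factors can be formed from such data (not the study's actual values), the sketch below divides benchmark (PSD) output factors by a diode's output factors for two field sizes; all numbers are invented.

```python
# Hedged sketch (hypothetical numbers): detector-specific output-factor correction
# factors computed as k = OF_benchmark / OF_detector, with the scintillator (PSD)
# measurements taken as the benchmark, in the spirit of the study's analysis.
psd_of = {(1.0, 1.0): 0.780, (0.5, 1.0): 0.690}     # field (X cm, Y cm): output factor
diode_of = {(1.0, 1.0): 0.813, (0.5, 1.0): 0.712}   # same fields, diode readings

correction_factors = {field: psd_of[field] / diode_of[field] for field in psd_of}
for (x, y), k in correction_factors.items():
    print(f"{x} x {y} cm2 field: correction factor {k:.3f}")
```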

  2. Experimental and CFD Studies of Coolant Flow Mixing within Scaled Models of the Upper and Lower Plenums of NGNP Gas-Cooled Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Yassin; Anand, Nk

    2016-03-30

    A 1/16th scaled VHTR experimental model was constructed and the preliminary test was performed in this study. To produce benchmark data for CFD validation in the future, the facility was first run at partial operation with five pipes being heated. PIV was performed to extract the vector velocity field for three adjacent naturally convective jets at statistically steady state. A small recirculation zone was found between the pipes, and the jets entered the merging zone at 3 cm from the pipe outlet but diverged as the flow approached the top of the test geometry. Turbulence analysis shows the turbulence intensity peaked at 41-45% as the jets mixed. A sensitivity analysis confirmed that 1000 frames were sufficient to measure statistically steady state. The results were then validated by extracting the flow rate from the PIV jet velocity profile, and comparing it with an analytic flow rate and ultrasonic flowmeter; all flow rates lie within the uncertainty of the other two methods for Tests 1 and 2. This test facility can be used for further analysis of naturally convective mixing, and eventually produce benchmark data for CFD validation for the VHTR during a PCC or DCC accident scenario. Next, a PTV study of 3000 images (1500 image pairs) was used to quantify the velocity field in the upper plenum. A sensitivity analysis confirmed that 1500 frames were sufficient to precisely estimate the flow. Subsequently, three (3, 9, and 15 cm) Y-lines from the pipe output were extracted to consider the output differences between 50 and 1500 frames. The average velocity field and standard deviation error that accrued in the three different tests were calculated to assess repeatability. The error varied from 1 to 14%, depending on Y-elevation. The error decreased as the flow moved farther from the output pipe. In addition, turbulent intensity was calculated and found to be high near the output. Reynolds stresses and turbulent intensity were used to validate the data by comparing them with benchmark data. The experimental data gave the same pattern as the benchmark data. A turbulent single buoyant jet study was performed for the case of LOFC in the upper plenum of the scaled VHTR. Time-averaged profiles show that 3,000 frames of images were sufficient for the study up to second-order statistics. Self-similarity is an important feature of jets since the behavior of jets is independent of Reynolds number and a sole function of geometry. Self-similarity profiles were well observed in the axial velocity and velocity magnitude profiles regardless of z/D, whereas the radial velocity did not show any similarity pattern. The normal components of Reynolds stresses have self-similarity within the expected range. The study shows that large vortices were observed close to the dome wall, indicating that the geometry of the VHTR has a significant impact on its safety and performance. Near the dome surface, large vortices were shown to inhibit the flows, resulting in reduced axial jet velocity. The vortices that develop subsequently reduce the Reynolds stresses and the impact on the integrity of the VHTR upper plenum surface. Multiple-jet configurations, including two, three, and five jets, were also investigated.
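
    As a small, hedged illustration of the turbulence-intensity quantity reported above (not the facility's PIV processing code), the sketch below computes TI = rms(u')/|U_mean| from a made-up point-velocity time series.

```python
# Minimal sketch: turbulence intensity at a single measurement point from a velocity
# time series, TI = rms(u') / |U_mean|. The sample values are invented for illustration.
import statistics

u_samples = [0.52, 0.47, 0.61, 0.55, 0.44, 0.58, 0.50, 0.49]   # streamwise velocity, m/s

u_mean = statistics.fmean(u_samples)
fluctuations = [u - u_mean for u in u_samples]
u_rms = (sum(f * f for f in fluctuations) / len(fluctuations)) ** 0.5
turbulence_intensity = u_rms / abs(u_mean)

print(f"mean = {u_mean:.3f} m/s, rms = {u_rms:.3f} m/s, TI = {100 * turbulence_intensity:.1f}%")
```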

  3. Benchmarking network for clinical and humanistic outcomes in diabetes (BENCH-D) study: protocol, tools, and population.

    PubMed

    Nicolucci, Antonio; Rossi, Maria C; Pellegrini, Fabio; Lucisano, Giuseppe; Pintaudi, Basilio; Gentile, Sandro; Marra, Giampiero; Skovlund, Soren E; Vespasiani, Giacomo

    2014-01-01

    In the context of the DAWN-2 initiatives, the BENCH-D Study aims to test a model of regional benchmarking to improve not only the quality of diabetes care, but also patient-centred outcomes. As part of the AMD-Annals quality improvement program, 32 diabetes clinics in 4 Italian regions extracted clinical data from electronic databases for measuring process and outcome quality indicators. A random sample of patients with type 2 diabetes filled in a questionnaire including validated instruments to assess patient-centred indicators: SF-12 Health Survey, WHO-5 Well-Being Index, Diabetes Empowerment Scale, Problem Areas in Diabetes, Health Care Climate Questionnaire, Patients Assessment of Chronic Illness Care, Barriers to Medications, Patient Support, Diabetes Self-care Activities, and Global Satisfaction for Diabetes Treatment. Data were discussed with participants in regional meetings. Main problems, obstacles and solutions were identified through a standardized process, and a regional mandate was produced to drive the priority actions. Overall, clinical indicators on 78,854 patients have been measured; additionally, 2,390 patients filled in the questionnaire. The regional mandates were officially launched in March 2012. Clinical and patient-centred indicators will be evaluated again after 18 months. A final assessment of clinical indicators will take place after 30 months. In the context of the BENCH-D study, a set of instruments has been validated to measure patient well-being and satisfaction with care. In the four regional meetings, different priorities were identified, reflecting different organizational resources of the different areas. In all the regions, a major challenge was represented by the need for skills and instruments to address psychosocial issues of people with diabetes. The BENCH-D study allows field testing of benchmarking activities focused on clinical and patient-centred indicators.

  4. MECHANICAL DESIGN CRITERIA FOR INTERVERTEBRAL DISC TISSUE ENGINEERING

    PubMed Central

    Nerurkar, Nandan L.; Elliott, Dawn M.; Mauck, Robert L.

    2009-01-01

    Due to the inability of current clinical practices to restore function to degenerated intervertebral discs, the arena of disc tissue engineering has received substantial attention in recent years. Despite tremendous growth and progress in this field, translation to clinical implementation has been hindered by a lack of well-defined functional benchmarks. Because successful replacement of the disc is contingent upon replication of some or all of its complex mechanical behaviour, it is critically important that disc mechanics be well characterized in order to establish discrete functional goals for tissue engineering. In this review, the key functional signatures of the intervertebral disc are discussed and used to propose a series of native tissue benchmarks to guide the development of engineered replacement tissues. These benchmarks include measures of mechanical function under tensile, compressive and shear deformations for the disc and its substructures. In some cases, important functional measures are identified that have yet to be measured in the native tissue. Ultimately, native tissue benchmark values are compared to measurements that have been made on engineered disc tissues, identifying measures where functional equivalence was achieved, and others where there remain opportunities for advancement. Several excellent reviews exist regarding disc composition and structure, as well as recent tissue engineering strategies; therefore this review will remain focused on the functional aspects of disc tissue engineering. PMID:20080239

  5. Ecological risk assessment for Mather Air Force Base, California: Phase 1, screening assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyers-Schoene, L.; Fischer, N.T.; Rabe, J.J.

    Mather Air Force Base (AFB) is among the numerous facilities scheduled for closure under the US Air Force (USAF) Installation Restoration Program (IRP). A component of the Mather AFB IRP is to prepare risk assessments for each of the chemically contaminated sites. Because no previous ecological risk related studies have been conducted on Mather AFB, the authors proposed a phased approach to assessing ecological risks at the base. Phase 1 consisted of baseline ecological surveys that collected data over a 12-month period. In addition, benchmark screening criteria were used in conjunction with modeling results that utilized measured concentrations of chemical analytes in abiotic samples. Phase 2 may consist of the collection of more site-specific data and toxicity testing, if warranted by the Phase 1 screening analysis. This approach was in agreement with the USAF's ecological risk assessment guidance and met the approval of the Air Force and USEPA Region 9. The authors found the use of established and derived screening values to effectively aid in the focusing of the ecological risk assessment on those chemicals most likely to be hazardous to ecological receptors at the base. Disadvantages in the use of screening values include the uncertainties associated with the conservative assumptions inherent in the derivation of benchmark values and the difficulty in extrapolating from laboratory determined benchmark values to impacts in the field.

  6. Unusually High Incidences of Staphylococcus aureus Infection within Studies of Ventilator Associated Pneumonia Prevention Using Topical Antibiotics: Benchmarking the Evidence Base

    PubMed Central

    2018-01-01

    Selective digestive decontamination (SDD, topical antibiotic regimens applied to the respiratory tract) appears effective for preventing ventilator associated pneumonia (VAP) in intensive care unit (ICU) patients. However, potential contextual effects of SDD on Staphylococcus aureus infections in the ICU remain unclear. The S. aureus ventilator associated pneumonia (S. aureus VAP), VAP overall and S. aureus bacteremia incidences within component (control and intervention) groups within 27 SDD studies were benchmarked against 115 observational groups. Component groups from 66 studies of various interventions other than SDD provided additional points of reference. In 27 SDD study control groups, the mean S. aureus VAP incidence is 9.6% (95% CI; 6.9–13.2) versus a benchmark derived from 115 observational groups being 4.8% (95% CI; 4.2–5.6). In nine SDD study control groups the mean S. aureus bacteremia incidence is 3.8% (95% CI; 2.1–5.7) versus a benchmark derived from 10 observational groups being 2.1% (95% CI; 1.1–4.1). The incidences of S. aureus VAP and S. aureus bacteremia within the control groups of SDD studies are each higher than literature derived benchmarks. Paradoxically, within the SDD intervention groups, the incidences of both S. aureus VAP and VAP overall are more similar to the benchmarks. PMID:29300363

  7. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    ERIC Educational Resources Information Center

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  8. Practical Considerations when Using Benchmarking for Accountability in Higher Education

    ERIC Educational Resources Information Center

    Achtemeier, Sue D.; Simpson, Ronald D.

    2005-01-01

    The qualitative study on which this article is based examined key individuals' perceptions, both within a research university community and beyond in its external governing board, of how to improve benchmarking as an accountability method in higher education. Differing understanding of benchmarking revealed practical implications for using it as…

  9. Electric-Drive Vehicle Thermal Performance Benchmarking | Transportation

    Science.gov Websites

    Goals of these studies are as follows: characterize the thermal resistance and conductivity of various layers in the internal components of an automotive inverter.

  10. Groundwater-quality data in 12 GAMA study units: Results from the 2006–10 initial sampling period and the 2008–13 trend sampling period, California GAMA Priority Basin Project

    USGS Publications Warehouse

    Mathany, Timothy M.

    2017-03-09

    The Priority Basin Project (PBP) of the Groundwater Ambient Monitoring and Assessment (GAMA) program was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey in cooperation with the California State Water Resources Control Board. From 2004 through 2012, the GAMA-PBP collected samples and assessed the quality of groundwater resources that supply public drinking water in 35 study units across the State. Selected sites in each study unit were sampled again approximately 3 years after initial sampling as part of an assessment of temporal trends in water quality by the GAMA-PBP. Twelve of the study units, initially sampled during 2006–11 (initial sampling period) and sampled a second time during 2008–13 (trend sampling period) to assess temporal trends, are the subject of this report.The initial sampling was designed to provide a spatially unbiased assessment of the quality of untreated groundwater used for public water supplies in the 12 study units. In these study units, 550 sampling sites were selected by using a spatially distributed, randomized, grid-based method to provide spatially unbiased representation of the areas assessed (grid sites, also called “status sites”). After the initial sampling period, 76 of the previously sampled status sites (approximately 10 percent in each study unit) were randomly selected for trend sampling (“trend sites”). The 12 study units sampled both during the initial sampling and during the trend sampling period were distributed among 6 hydrogeologic provinces: Coastal (Northern and Southern), Transverse Ranges and Selected Peninsular Ranges, Klamath, Modoc Plateau and Cascades, and Sierra Nevada Hydrogeologic Provinces. For the purposes of this trend report, the six hydrogeologic provinces were grouped into two hydrogeologic regions based on location: Coastal and Mountain.The groundwater samples were analyzed for a number of synthetic organic constituents (volatile organic compounds, pesticides, and pesticide degradates), constituents of special interest (perchlorate and 1,2,3-trichloropropane), and natural inorganic constituents (nutrients, major and minor ions, and trace elements). Isotopic tracers (tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water) also were measured to help identify processes affecting groundwater quality and the sources and ages of the sampled groundwater. More than 200 constituents and water-quality indicators were measured during the trend sampling period.Quality-control samples (blanks, replicates, matrix-spikes, and surrogate compounds) were collected at about one-third of the trend sites, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. On the basis of detections in laboratory and field blank samples collected by GAMA-PBP study units, including the 12 study units presented here, reporting levels for some groundwater results were adjusted in this report. Differences between replicate samples were mostly within acceptable ranges, indicating low variability in analytical results. Matrix-spike recoveries were largely within the acceptable range (70 to 130 percent).This study did not attempt to evaluate the quality of water delivered to consumers. After withdrawal, groundwater used for drinking water typically is treated, disinfected, and blended with other waters to achieve acceptable water quality. 
The comparison benchmarks used in this report apply to treated water that is served to the consumer, not to untreated groundwater. To provide some context for the results, however, concentrations of constituents measured in these groundwater samples were compared with benchmarks established by the U.S. Environmental Protection Agency and the State of California. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks.Most organic constituents that were detected in groundwater samples from the trend sites were found at concentrations less than health-based benchmarks. One volatile organic compound—perchloroethene—was detected at a concentration greater than the health-based benchmark in samples from one trend site during the initial and trend sampling periods. Chloroform was detected in at least 10 percent of the samples at trend sites in both sampling periods. Methyl tert-butyl ether was detected in samples from more than 10 percent of the trend sites during the initial sampling period. No pesticide or pesticide degradate was detected in greater than 10 percent of the samples from trend sites or at concentrations greater than their health-based benchmarks during either sampling period. Nutrients were not detected at concentrations greater than their health-based benchmarks during either sampling period.Most detections of major ions and trace elements in samples from trend sites were less than health-based benchmarks during both sampling periods. Arsenic and boron each were detected at concentrations greater than the health-based benchmark in samples from four trend sites during the initial and trend sampling periods. Molybdenum was detected in samples from four trend sites at concentrations greater than the health-based benchmark during both sampling periods. Samples from two of these trend sites had similar molybdenum concentrations, and two had substantially different concentrations during the initial and trend sampling periods. Uranium was detected at a concentration greater than the health-based benchmark only at two trend sites.

  11. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    NASA Astrophysics Data System (ADS)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), fireflies are ranked using a sorting algorithm. The original FA uses bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The data set consists of unconstrained benchmark functions from CEC 2005 [22]. The comparison of FA using bubble sort and FA using quick sort is performed with respect to best, worst, mean, standard deviation, number of comparisons and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the dimension is varied, the algorithm performs better at lower dimensions than at higher dimensions.
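
    To make the sorting comparison concrete, the hedged sketch below ranks a set of hypothetical firefly fitness values with bubble sort and with a quicksort-style sort and counts comparisons; it is an illustration of the idea, not the paper's implementation.

```python
# Illustrative sketch: ranking fireflies by fitness with bubble sort versus a
# quicksort-style sort, counting comparisons. Fitness values are random stand-ins
# for benchmark-function scores, not data from the paper.
import random

def bubble_sort_count(values):
    v, comparisons = list(values), 0
    for i in range(len(v)):
        for j in range(len(v) - 1 - i):
            comparisons += 1
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
    return v, comparisons

def quick_sort_count(values):
    comparisons = 0
    def qs(v):
        nonlocal comparisons
        if len(v) <= 1:
            return v
        pivot, rest = v[0], v[1:]
        comparisons += len(rest)
        left = [x for x in rest if x <= pivot]
        right = [x for x in rest if x > pivot]
        return qs(left) + [pivot] + qs(right)
    return qs(list(values)), comparisons

random.seed(0)
fitness = [random.random() for _ in range(40)]    # one fitness value per firefly
_, n_bubble = bubble_sort_count(fitness)
_, n_quick = quick_sort_count(fitness)
print(f"bubble sort comparisons: {n_bubble}, quicksort comparisons: {n_quick}")
```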

  12. Allowing for Slow Evolution of Background Plasma in the 3D FDTD Plasma, Sheath, and Antenna Model

    NASA Astrophysics Data System (ADS)

    Smithe, David; Jenkins, Thomas; King, Jake

    2015-11-01

    We are working to include a slow-time evolution capability for what have previously been static background plasma parameters in the 3D finite-difference time-domain (FDTD) plasma and sheath model used to model ICRF antennas in fusion plasmas. A key aspect of this is SOL-density time-evolution driven by ponderomotive rarefaction from the strong fields in the vicinity of the antenna. We demonstrate and benchmark a Scalar Ponderomotive Potential method, based on local field amplitudes, which is included in the 3D simulation, and present a more advanced Tensor Ponderomotive Potential approach, which we hope to employ in the future and which should improve the physical fidelity in the highly anisotropic environment of the SOL. Finally, we demonstrate and benchmark slow-time (non-linear) evolution of the RF sheath, and include realistic collisional effects from the neutral gas. Support from US DOE Grants DE-FC02-08ER54953, DE-FG02-09ER55006.

  13. A Standard-Setting Study to Establish College Success Criteria to Inform the SAT® College and Career Readiness Benchmark. Research Report 2012-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Patterson, Brian F.; Wiley, Andrew; Mattern, Krista D.

    2012-01-01

    In 2011, the College Board released its SAT college and career readiness benchmark, which represents the level of academic preparedness associated with a high likelihood of college success and completion. The goal of this study, which was conducted in 2008, was to establish college success criteria to inform the development of the benchmark. The…

  14. Investigation of wing crack formation with a combined phase-field and experimental approach

    NASA Astrophysics Data System (ADS)

    Lee, Sanghyun; Reber, Jacqueline E.; Hayman, Nicholas W.; Wheeler, Mary F.

    2016-08-01

    Fractures that propagate off of weak slip planes are known as wing cracks and often play important roles in both tectonic deformation and fluid flow across reservoir seals. Previous numerical models have produced the basic kinematics of wing crack openings but generally have not been able to capture fracture geometries seen in nature. Here we present both a phase-field modeling approach and a physical experiment using gelatin for a wing crack formation. By treating the fracture surfaces as diffusive zones instead of as discontinuities, the phase-field model does not require consideration of unpredictable rock properties or stress inhomogeneities around crack tips. It is shown by benchmarking the models with physical experiments that the numerical assumptions in the phase-field approach do not affect the final model predictions of wing crack nucleation and growth. With this study, we demonstrate that it is feasible to implement the formation of wing cracks in large scale phase-field reservoir models.

  15. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc(TM) and MD Nastran(TM). Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc(TM) and MD Nastran(TM) was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  16. Can data-driven benchmarks be used to set the goals of healthy people 2010?

    PubMed Central

    Allison, J; Kiefe, C I; Weissman, N W

    1999-01-01

    OBJECTIVES: Expert panels determined the public health goals of Healthy People 2000 subjectively. The present study examined whether data-driven benchmarks provide a better alternative. METHODS: We developed the "pared-mean" method to define from data the best achievable health care practices. We calculated the pared-mean benchmark for screening mammography from the 1994 National Health Interview Survey, using the metropolitan statistical area as the "provider" unit. Beginning with the best-performing provider and adding providers in descending sequence, we established the minimum provider subset that included at least 10% of all women surveyed on this question. The pared-mean benchmark is then the proportion of women in this subset who received mammography. RESULTS: The pared-mean benchmark for screening mammography was 71%, compared with the Healthy People 2000 goal of 60%. CONCLUSIONS: For Healthy People 2010, benchmarks derived from data reflecting the best available care provide viable alternatives to consensus-derived targets. We are currently pursuing additional refinements to the data-driven pared-mean benchmark approach. PMID:9987466
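
    A hedged sketch of the pared-mean calculation described above follows: providers are ranked from best to worst, accumulated until they cover at least 10% of all women surveyed, and the benchmark is the pooled screening proportion in that top subset; the provider counts are invented, not NHIS data.

```python
# Illustrative sketch of a "pared-mean" benchmark: pool the best-performing provider
# units until they include at least 10% of all surveyed women, then take the pooled
# screening rate of that subset as the benchmark. Counts below are hypothetical.
providers = [                     # (women screened, women surveyed) per provider unit
    (95, 100), (180, 200), (70, 80), (300, 400), (150, 250), (90, 200), (40, 120),
]

total_surveyed = sum(n for _, n in providers)
ranked = sorted(providers, key=lambda p: p[0] / p[1], reverse=True)   # best rate first

screened = surveyed = 0
for s, n in ranked:
    screened += s
    surveyed += n
    if surveyed >= 0.10 * total_surveyed:
        break

pared_mean_benchmark = screened / surveyed
print(f"pared-mean benchmark: {100 * pared_mean_benchmark:.1f}% "
      f"(from {surveyed} of {total_surveyed} women)")
```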

  17. Issues in Institutional Benchmarking of Student Learning Outcomes Using Case Examples

    ERIC Educational Resources Information Center

    Judd, Thomas P.; Pondish, Christopher; Secolsky, Charles

    2013-01-01

    Benchmarking is a process that can take place at both the inter-institutional and intra-institutional level. This paper focuses on benchmarking intra-institutional student learning outcomes using case examples. The findings of the study illustrate the point that when the outcomes statements associated with the mission of the institution are…

  18. Benchmarking in TESOL: A Study of the Malaysia Education Blueprint 2013

    ERIC Educational Resources Information Center

    Jawaid, Arif

    2014-01-01

    Benchmarking is a very common real-life function occurring every moment unnoticed. It has travelled from industry to education like other quality disciplines. Initially benchmarking was used in higher education. Now it is diffusing into other areas including TESOL (Teaching English to Speakers of Other Languages), which has yet to devise a…

  19. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  20. Benchmark Factors in Student Retention.

    ERIC Educational Resources Information Center

    Waggener, Anna T.; Smith, Constance K.

    The first purpose of this study was to identify significant factors affecting the first benchmark in retaining students in college--the decision to enroll in the first fall semester after orientation. The second purpose was to examine enrollment decisions at the second benchmark--the decision to re-enroll in the second fall semester after freshman…

  1. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    PubMed

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that like to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances for reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management and especially benchmarking is shown to support pharmaceutical industry improvements.

  2. Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy

    PubMed Central

    Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.

    2013-01-01

    Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all the institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness and symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS’ capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field. Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBPs’ range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose-profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry resulted, on average, in ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream of the half-beam shifter is better than 2%. Dose rate function simulation reproduced the measurements within ∼2%, showing that the four-dimensional modeling of the passive modulation system was implemented correctly and millimeter accuracy can be achieved in reproducing measured data. For MLFCs simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinically accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations accurately reproduced the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive scattering proton therapy centers using TOPAS. PMID:24320505

  3. Benchmarks for effective primary care-based nursing services for adults with depression: a Delphi study.

    PubMed

    McIlrath, Carole; Keeney, Sinead; McKenna, Hugh; McLaughlin, Derek

    2010-02-01

    This paper is a report of a study conducted to identify and gain consensus on appropriate benchmarks for effective primary care-based nursing services for adults with depression. Worldwide evidence suggests that between 5% and 16% of the population have a diagnosis of depression. Most of their care and treatment takes place in primary care. In recent years, primary care nurses, including community mental health nurses, have become more involved in the identification and management of patients with depression; however, there are no appropriate benchmarks to guide, develop and support their practice. In 2006, a three-round electronic Delphi survey was completed by a United Kingdom multi-professional expert panel (n = 67). Round 1 generated 1216 statements relating to structures (such as training and protocols), processes (such as access and screening) and outcomes (such as patient satisfaction and treatments). Content analysis was used to collapse statements into 140 benchmarks. Seventy-three benchmarks achieved consensus during subsequent rounds. Of these, 45 (61%) were related to structures, 18 (25%) to processes and 10 (14%) to outcomes. Multi-professional primary care staff have similar views about the appropriate benchmarks for care of adults with depression. These benchmarks could serve as a foundation for depression improvement initiatives in primary care and ongoing research into depression management by nurses.

  4. Evaluation of the influence of the definition of an isolated hip fracture as an exclusion criterion for trauma system benchmarking: a multicenter cohort study.

    PubMed

    Tiao, J; Moore, L; Porgo, T V; Belcaid, A

    2016-06-01

    To assess whether the definition of an isolated hip fracture (IHF) used as an exclusion criterion influences the results of trauma center benchmarking. We conducted a multicenter retrospective cohort study with data from an integrated Canadian trauma system. The study population included all patients admitted between 1999 and 2010 to any of the 57 adult trauma centers. Seven definitions of IHF based on diagnostic codes, age, mechanism of injury, and secondary injuries, identified in a systematic review, were used. Trauma centers were benchmarked using risk-adjusted mortality estimates generated using the Trauma Risk Adjustment Model. The agreement between benchmarking results generated under different IHF definitions was evaluated with correlation coefficients on adjusted mortality estimates. Correlation coefficients >0.95 were considered to convey acceptable agreement. The study population consisted of 172,872 patients before exclusion of IHF and between 128,094 and 139,588 patients after exclusion. Correlation coefficients between risk-adjusted mortality estimates generated in populations including and excluding IHF varied between 0.86 and 0.90. Correlation coefficients of estimates generated under different definitions of IHF varied between 0.97 and 0.99, even when analyses were restricted to patients aged ≥65 years. Although the exclusion of patients with IHF has an influence on the results of trauma center benchmarking based on mortality, the definition of IHF in terms of diagnostic codes, age, mechanism of injury and secondary injury has no significant impact on benchmarking results. Results suggest that there is no need to obtain formal consensus on the definition of IHF for benchmarking activities.
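
    As a hedged illustration of the agreement check described above (not the study's code), the sketch below computes the correlation between trauma-center risk-adjusted mortality estimates obtained under two hypothetical IHF definitions and applies the >0.95 threshold; it assumes Python 3.10+ for statistics.correlation.

```python
# Illustrative sketch: agreement between risk-adjusted mortality estimates generated
# under two different isolated-hip-fracture definitions, using the >0.95 correlation
# threshold quoted in the abstract. The per-centre estimates below are invented.
from statistics import correlation   # available in Python 3.10+

mortality_def_a = [0.042, 0.055, 0.031, 0.060, 0.048, 0.037]   # one estimate per centre
mortality_def_b = [0.043, 0.054, 0.030, 0.062, 0.047, 0.039]

r = correlation(mortality_def_a, mortality_def_b)
print(f"correlation = {r:.3f}; acceptable agreement: {r > 0.95}")
```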

  5. Benchmark Results Of Active Tracer Particles In The Open Source Code ASPECT For Modelling Convection In The Earth's Mantle

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Kaloti, A. P.; Levinson, H. R.; Nguyen, N.; Puckett, E. G.; Lokavarapu, H. V.

    2016-12-01

    We present the results of three standard benchmarks for the new active tracer particle algorithm in ASPECT. The three benchmarks are SolKz, SolCx, and SolVI (also known as the 'inclusion benchmark'), first proposed by Duretz, May, Gerya, and Tackley (G Cubed, 2011) and in subsequent work by Thielmann, May, and Kaus (Pure and Applied Geophysics, 2014). Each of the three benchmarks compares the accuracy of the numerical solution to a steady (time-independent) solution of the incompressible Stokes equations with a known exact solution. These benchmarks are specifically designed to test the accuracy and effectiveness of the numerical method when the viscosity varies by up to six orders of magnitude. ASPECT has been shown to converge to the exact solution of each of these benchmarks at the correct design rate when all of the flow variables, including the density and viscosity, are discretized on the underlying finite element grid (Kronbichler, Heister, and Bangerth, GJI, 2012). In our work we discretize the density and viscosity by initially placing the true values of the density and viscosity at the initial particle positions. At each time step, including the initialization step, the density and viscosity are interpolated from the particles onto the finite element grid. The resulting Stokes system is solved for the velocity and pressure, and the particle positions are advanced in time according to this new, numerical, velocity field. Note that this procedure effectively changes a steady solution of the Stokes equation (i.e., one that is independent of time) to a solution of the Stokes equations that is time dependent. Furthermore, the accuracy of the active tracer particle algorithm now also depends on the accuracy of the interpolation algorithm and of the numerical method one uses to advance the particle positions in time. Finally, we will present new interpolation algorithms designed to increase the overall accuracy of the active tracer algorithms in ASPECT and interpolation algorithms designed to conserve properties, such as mass density, that are being carried by the particles.
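
    A minimal Python sketch of the particle-to-grid step described above (illustrative only; ASPECT itself is a C++/deal.II code, and the function names here are hypothetical): particle-carried density or viscosity values are averaged onto a uniform grid each step, after which the particle positions are advanced with the newly computed velocity field.

        import numpy as np

        def cell_average(p_pos, p_val, nx, ny, extent=1.0):
            """Arithmetic cell average of particle values onto an nx-by-ny uniform grid."""
            ix = np.clip((p_pos[:, 0] / extent * nx).astype(int), 0, nx - 1)
            iy = np.clip((p_pos[:, 1] / extent * ny).astype(int), 0, ny - 1)
            sums = np.zeros((nx, ny))
            counts = np.zeros((nx, ny))
            np.add.at(sums, (ix, iy), p_val)
            np.add.at(counts, (ix, iy), 1.0)
            return np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)

        def advect(p_pos, velocity, dt):
            """Advance particle positions with a user-supplied velocity field (forward Euler)."""
            return p_pos + dt * velocity(p_pos)

    In an actual benchmark run one would seed the particles with the exact SolCx/SolKz density and viscosity, average them onto the grid, solve the Stokes system for the velocity, and then call advect() once per time step.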

  6. Paradoxical Acinetobacter-associated ventilator-associated pneumonia incidence rates within prevention studies using respiratory tract applications of topical polymyxin: benchmarking the evidence base.

    PubMed

    Hurley, J C

    2018-04-10

    Regimens containing topical polymyxin appear to be more effective in preventing ventilator-associated pneumonia (VAP) than other methods. To benchmark the incidence rates of Acinetobacter-associated VAP (AAVAP) within component (control and intervention) groups from concurrent controlled studies of polymyxin compared with studies of various VAP prevention methods other than polymyxin (non-polymyxin studies). An AAVAP benchmark was derived using data from 77 observational groups without any VAP prevention method under study. Data from 41 non-polymyxin studies provided additional points of reference. The benchmarking was undertaken by meta-regression using generalized estimating equation methods. Within 20 studies of topical polymyxin, the mean AAVAP was 4.6% [95% confidence interval (CI) 3.0-6.9] and 3.7% (95% CI 2.0-5.3) for control and intervention groups, respectively. In contrast, the AAVAP benchmark was 1.5% (95% CI 1.2-2.0). In the AAVAP meta-regression model, group origin from a trauma intensive care unit (+0.55; +0.16 to +0.94, P = 0.006) and membership of a polymyxin control group (+0.64; +0.21 to +1.31, P = 0.023) were significant positive correlates, whereas membership of a polymyxin intervention group (+0.24; -0.37 to +0.84, P = 0.45) was not. The mean incidence of AAVAP within the control groups of studies of topical polymyxin is more than double the benchmark, whereas the incidence rates within the groups of non-polymyxin studies and, paradoxically, polymyxin intervention groups are more similar to the benchmark. These incidence rates, which are paradoxical in the context of an apparent effect against VAP within controlled trials of topical polymyxin-based interventions, force a re-appraisal. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  7. Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)

    2002-01-01

    The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.

  8. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference method to solve the 2D depth-averaged linear and nonlinear shallow water equations (NSWE) for long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley located on the west coast of Okushiri Island, Japan. The other benchmark problem was presented at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA; it is a field dataset recording the Japan 2011 tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT-Japan Joint Call and Istanbul Metropolitan Municipality are all acknowledged.

  9. Proposed biopsy performance benchmarks for MRI based on an audit of a large academic center.

    PubMed

    Sedora Román, Neda I; Mehta, Tejas S; Sharpe, Richard E; Slanetz, Priscilla J; Venkataraman, Shambhavi; Fein-Zachary, Valerie; Dialani, Vandana

    2018-05-01

    Performance benchmarks exist for mammography (MG); however, performance benchmarks for magnetic resonance imaging (MRI) are not yet fully developed. The purpose of our study was to perform an MRI audit based on established MG and screening MRI benchmarks and to review whether these benchmarks can be applied to an MRI practice. An IRB approved retrospective review of breast MRIs was performed at our center from 1/1/2011 through 12/31/13. For patients with biopsy recommendation, core biopsy and surgical pathology results were reviewed. The data were used to derive mean performance parameter values, including abnormal interpretation rate (AIR), positive predictive value (PPV), cancer detection rate (CDR), percentage of minimal cancers and axillary node negative cancers, and compared with MG and screening MRI benchmarks. MRIs were also divided by screening and diagnostic indications to assess for differences in performance benchmarks between these two groups. Of the 2455 MRIs performed over 3 years, 1563 were performed for screening indications and 892 for diagnostic indications. With the exception of PPV2 for screening breast MRIs from 2011 to 2013, PPVs were met for our screening and diagnostic populations when compared to the MRI screening benchmarks established by the Breast Imaging Reporting and Data System (BI-RADS) 5 Atlas®. AIR and CDR were lower for screening indications as compared to diagnostic indications. New MRI screening benchmarks can be used for screening MRI audits while the American College of Radiology (ACR) desirable goals for diagnostic MG can be used for diagnostic MRI audits. Our study corroborates established findings regarding differences in AIR and CDR amongst screening versus diagnostic indications. © 2017 Wiley Periodicals, Inc.

  10. The electronegativity equalization method and the split charge equilibration applied to organic systems: parametrization, validation, and comparison.

    PubMed

    Verstraelen, Toon; Van Speybroeck, Veronique; Waroquier, Michel

    2009-07-28

    An extensive benchmark of the electronegativity equalization method (EEM) and the split charge equilibration (SQE) model on a very diverse set of organic molecules is presented. These models efficiently compute atomic partial charges and are used in the development of polarizable force fields. The predicted partial charges depend on empirical parameters, which are calibrated to reproduce results from quantum mechanical calculations. Recently, SQE was presented as an extension of the EEM that obtains the correct size dependence of the molecular polarizability. In this work, 12 parametrization protocols are applied to each model and the optimal parameters are benchmarked systematically. The training data for the empirical parameters comprise MP2/aug-cc-pVDZ calculations on 500 organic molecules containing the elements H, C, N, O, F, S, Cl, and Br. These molecules have been selected by an ingenious and autonomous protocol from an initial set of almost 500,000 small organic molecules. It is clear that the SQE model outperforms the EEM in all benchmark assessments. When using Hirshfeld-I charges for the calibration, the SQE model optimally reproduces the molecular electrostatic potential from the ab initio calculations. Applications on chain molecules, i.e., alkanes, alkenes, and alpha alanine helices, confirm that the EEM gives rise to a divergent behavior for the polarizability, while the SQE model shows the correct trends. We conclude that the SQE model is an essential component of a polarizable force field, showing several advantages over the original EEM.
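
    For background, a minimal numpy sketch of the generic EEM working equations (this is not the authors' parametrization; the bare 1/r Coulomb off-diagonal terms and atomic-unit conventions are assumptions): the charges minimize a quadratic electrostatic energy subject to a total-charge constraint, which reduces to a single linear solve.

        import numpy as np

        def eem_charges(chi, eta, coords, total_charge=0.0):
            """Solve the EEM linear system: electronegativity equalization with a
            Lagrange multiplier enforcing the total molecular charge."""
            n = len(chi)
            A = np.zeros((n + 1, n + 1))
            b = np.zeros(n + 1)
            for i in range(n):
                A[i, i] = 2.0 * eta[i]               # atomic hardness term
                for j in range(n):
                    if i != j:
                        A[i, j] = 1.0 / np.linalg.norm(coords[i] - coords[j])
            A[:n, n] = 1.0                           # constraint column
            A[n, :n] = 1.0                           # constraint row
            b[:n] = -np.asarray(chi)
            b[n] = total_charge
            return np.linalg.solve(A, b)[:n]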

  11. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 – 15 MV includes the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. Off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data includes dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with large errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.

  12. A Prototype Tool to Enable Farmers to Measure and Improve the Welfare Performance of the Farm Animal Enterprise: The Unified Field Index

    PubMed Central

    Colditz, Ian G.; Ferguson, Drewe M.; Collins, Teresa; Matthews, Lindsay; Hemsworth, Paul H.

    2014-01-01

    Simple Summary Benchmarking is a tool widely used in agricultural industries that harnesses the experience of farmers to generate knowledge of practices that lead to better on-farm productivity and performance. We propose, by analogy with production performance, a method for measuring the animal welfare performance of an enterprise and describe a tool for farmers to monitor and improve the animal welfare performance of their business. A general framework is outlined for assessing and monitoring risks to animal welfare based on measures of animals, the environment they are kept in and how they are managed. The tool would enable farmers to continually improve animal welfare. Abstract Schemes for the assessment of farm animal welfare and assurance of welfare standards have proliferated in recent years. An acknowledged shortcoming has been the lack of impact of these schemes on the welfare standards achieved on farm due in part to sociological factors concerning their implementation. Here we propose the concept of welfare performance based on a broad set of performance attributes of an enterprise and describe a tool based on risk assessment and benchmarking methods for measuring and managing welfare performance. The tool termed the Unified Field Index is presented in a general form comprising three modules addressing animal, resource, and management factors. Domains within these modules accommodate the principal conceptual perspectives for welfare assessment: biological functioning; emotional states; and naturalness. Pan-enterprise analysis in any livestock sector could be used to benchmark welfare performance of individual enterprises and also provide statistics of welfare performance for the livestock sector. An advantage of this concept of welfare performance is its use of continuous scales of measurement rather than traditional pass/fail measures. Through the feedback provided via benchmarking, the tool should help farmers better engage in on-going improvement of farm practices that affect animal welfare. PMID:26480317

  13. Orthogonal Electric Field Measurements near the Green Fluorescent Protein Fluorophore through Stark Effect Spectroscopy and pKa Shifts Provide a Unique Benchmark for Electrostatics Models.

    PubMed

    Slocum, Joshua D; First, Jeremy T; Webb, Lauren J

    2017-07-20

    Measurement of the magnitude, direction, and functional importance of electric fields in biomolecules has been a long-standing experimental challenge. pKa shifts of titratable residues have been the most widely implemented measurements of the local electrostatic environment around the labile proton, and experimental data sets of pKa shifts in a variety of systems have been used to test and refine computational prediction capabilities of protein electrostatic fields. A more direct and increasingly popular technique to measure electric fields in proteins is Stark effect spectroscopy, where the change in absorption energy of a chromophore relative to a reference state is related to the change in electric field felt by the chromophore. While there are merits to both of these methods and they are both reporters of local electrostatic environment, they are fundamentally different measurements, and to our knowledge there has been no direct comparison of these two approaches in a single protein. We have recently demonstrated that green fluorescent protein (GFP) is an ideal model system for measuring changes in electric fields in a protein interior caused by amino acid mutations using both electronic and vibrational Stark effect chromophores. Here we report the changes in pKa of the GFP fluorophore in response to the same mutations and show that they are in excellent agreement with Stark effect measurements. This agreement in the results of orthogonal experiments reinforces our confidence in the experimental results of both Stark effect and pKa measurements and provides an excellent target data set to benchmark diverse protein electrostatics calculations. We used this experimental data set to test the pKa prediction ability of the adaptive Poisson-Boltzmann solver (APBS) and found that a simple continuum dielectric model of the GFP interior is insufficient to accurately capture the measured pKa and Stark effect shifts. We discuss some of the limitations of this continuum-based model in this system and offer this experimentally self-consistent data set as a target benchmark for electrostatics models, which could allow for a more rigorous test of pKa prediction techniques due to the unique environment of the water-filled GFP barrel compared to traditional globular proteins.
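
    For reference, the standard textbook relations (not specific to this paper) that connect the two observables to the local electrostatic environment are, in LaTeX notation,

        \Delta\Delta G_{\mathrm{prot}} = RT\,\ln(10)\,\Delta \mathrm{p}K_a
        \qquad
        hc\,\Delta\bar{\nu} = -\,\Delta\vec{\mu}\cdot\Delta\vec{F}

    that is, a pKa shift reports the change in protonation free energy of the titratable group (about 1.36 kcal/mol per pKa unit at 298 K), while a Stark shift reports the projection of the change in electric field onto the chromophore's difference dipole.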

  14. Optimization of an AMBER Force Field for the Artificial Nucleic Acid, LNA, and Benchmarking with NMR of L(CAAU)

    PubMed Central

    2013-01-01

    Locked Nucleic Acids (LNAs) are RNA analogues with an O2′-C4′ methylene bridge which locks the sugar into a C3′-endo conformation. This enhances hybridization to DNA and RNA, making LNAs useful in microarrays and potential therapeutics. Here, the LNA, L(CAAU), provides a simplified benchmark for testing the ability of molecular dynamics (MD) to approximate nucleic acid properties. LNA χ torsions and partial charges were parametrized to create AMBER parm99_LNA. The revisions were tested by comparing MD predictions with AMBER parm99 and parm99_LNA against a 200 ms NOESY NMR spectrum of L(CAAU). NMR indicates an A-Form equilibrium ensemble. In 3000 ns simulations starting with an A-form structure, parm99_LNA and parm99 provide 66% and 35% agreement, respectively, with NMR NOE volumes and 3J-couplings. In simulations of L(CAAU) starting with all χ torsions in a syn conformation, only parm99_LNA is able to repair the structure. This implies methods for parametrizing force fields for nucleic acid mimics can reasonably approximate key interactions and that parm99_LNA will improve reliability of MD studies for systems with LNA. A method for approximating χ population distribution on the basis of base to sugar NOEs is also introduced. PMID:24377321

  15. A benchmarking program to reduce red blood cell outdating: implementation, evaluation, and a conceptual framework.

    PubMed

    Barty, Rebecca L; Gagliardi, Kathleen; Owens, Wendy; Lauzon, Deborah; Scheuermann, Sheena; Liu, Yang; Wang, Grace; Pai, Menaka; Heddle, Nancy M

    2015-07-01

    Benchmarking is a quality improvement tool that compares an organization's performance to that of its peers for selected indicators, to improve practice. Processes to develop evidence-based benchmarks for red blood cell (RBC) outdating in Ontario hospitals, based on RBC hospital disposition data from Canadian Blood Services, have been previously reported. These benchmarks were implemented in 160 hospitals provincewide with a multifaceted approach, which included hospital education, inventory management tools and resources, summaries of best practice recommendations, recognition of high-performing sites, and audit tools on the Transfusion Ontario website (http://transfusionontario.org). In this study we describe the implementation process and the impact of the benchmarking program on RBC outdating. A conceptual framework for continuous quality improvement of a benchmarking program was also developed. The RBC outdating rate for all hospitals trended downward continuously from April 2006 to February 2012, irrespective of hospitals' transfusion rates or their distance from the blood supplier. The highest annual outdating rate was 2.82%, at the beginning of the observation period. Each year brought further reductions, with a nadir outdating rate of 1.02% achieved in 2011. The key elements of the successful benchmarking strategy included dynamic targets, a comprehensive and evidence-based implementation strategy, ongoing information sharing, and a robust data system to track information. The Ontario benchmarking program for RBC outdating resulted in continuous and sustained quality improvement. Our conceptual iterative framework for benchmarking provides a guide for institutions implementing a benchmarking program. © 2015 AABB.

  16. Benchmarking can add up for healthcare accounting.

    PubMed

    Czarnecki, M T

    1994-09-01

    In 1993, a healthcare accounting and finance benchmarking survey of hospital and nonhospital organizations gathered statistics about key common performance areas. A low response did not allow for statistically significant findings, but the survey identified performance measures that can be used in healthcare financial management settings. This article explains the benchmarking process and examines some of the 1993 study's findings.

  17. Learning from Follow Up Surveys of Graduates: The Austin Teacher Program and the Benchmark Project. A Discussion Paper.

    ERIC Educational Resources Information Center

    Baker, Thomas E.

    This paper describes Austin College's (Texas) participation in the Benchmark Project, a collaborative followup study of teacher education graduates and their principals, focusing on the second round of data collection. The Benchmark Project was a collaboration of 11 teacher preparation programs that gathered and analyzed data comparing graduates…

  18. Using Participatory Action Research to Study the Implementation of Career Development Benchmarks at a New Zealand University

    ERIC Educational Resources Information Center

    Furbish, Dale S.; Bailey, Robyn; Trought, David

    2016-01-01

    Benchmarks for career development services at tertiary institutions have been developed by Careers New Zealand. The benchmarks are intended to provide standards derived from international best practices to guide career development services. A new career development service was initiated at a large New Zealand university just after the benchmarks…

  19. Teachers' Perceptions of the Effectiveness of Benchmark Assessment Data to Predict Student Math Grades

    ERIC Educational Resources Information Center

    Lewis, Lawanna M.

    2010-01-01

    The purpose of this correlational quantitative study was to examine the extent to which teachers perceive the use of benchmark assessment data as effective; the extent to which the time spent teaching mathematics is associated with students' mathematics grades, and the extent to which the results of math benchmark assessment influence teachers'…

  20. Adiabatic Quantum Computation with Neutral Cesium

    NASA Astrophysics Data System (ADS)

    Hankin, Aaron; Parazzoli, L.; Chou, Chin-Wen; Jau, Yuan-Yu; Burns, George; Young, Amber; Kemme, Shanalyn; Ferdinand, Andrew; Biedermann, Grant; Landahl, Andrew; Ivan H. Deutsch Collaboration; Mark Saffman Collaboration

    2013-05-01

    We are implementing a new platform for adiabatic quantum computation (AQC) based on trapped neutral atoms whose coupling is mediated by the dipole-dipole interactions of Rydberg states. Ground state cesium atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism, thereby providing the requisite entangling interactions. As a benchmark we study a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. University of New Mexico: Ivan H. Deutsch, Tyler Keating, Krittika Goyal.
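
    To make the QUBO-to-Ising correspondence mentioned above concrete, here is a small, generic Python sketch (not the group's code; the function and variable names are illustrative) that maps a QUBO cost x^T Q x over binary variables onto Ising fields h, couplings J, and a constant offset via x_i = (1 - s_i)/2.

        import numpy as np

        def qubo_to_ising(Q):
            """Return (h, J, offset) such that x^T Q x == offset + h.s + s^T J s
            with x in {0,1}^n, s in {-1,+1}^n and x_i = (1 - s_i) / 2."""
            Q = np.asarray(Q, dtype=float)
            n = Q.shape[0]
            S = Q + Q.T                      # symmetrized off-diagonal weights
            h = np.zeros(n)
            J = np.zeros((n, n))             # strictly upper triangular couplings
            offset = 0.0
            for i in range(n):
                offset += Q[i, i] / 2.0
                h[i] -= Q[i, i] / 2.0
                for j in range(i + 1, n):
                    J[i, j] = S[i, j] / 4.0
                    h[i] -= S[i, j] / 4.0
                    h[j] -= S[i, j] / 4.0
                    offset += S[i, j] / 4.0
            return h, J, offset

    The ground-state spin configuration of the resulting Ising-like model then encodes the minimizer of the original QUBO problem.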

  1. Benchmarking Ada tasking on tightly coupled multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Collard, Philippe; Goforth, Andre; Marquardt, Matthew

    1989-01-01

    The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.

  2. COMPETITIVE BIDDING IN MEDICARE ADVANTAGE: EFFECT OF BENCHMARK CHANGES ON PLAN BIDS

    PubMed Central

    Song, Zirui; Landrum, Mary Beth; Chernew, Michael E.

    2013-01-01

    Bidding has been proposed to replace or complement the administered prices that Medicare pays to hospitals and health plans. In 2006, the Medicare Advantage program implemented a competitive bidding system to determine plan payments. In perfectly competitive models, plans bid their costs and thus bids are insensitive to the benchmark. Under many other models of competition, bids respond to changes in the benchmark. We conceptualize the bidding system and use an instrumental variable approach to study the effect of benchmark changes on bids. We use 2006–2010 plan payment data from the Centers for Medicare and Medicaid Services, published county benchmarks, actual realized fee-for-service costs, and Medicare Advantage enrollment. We find that a $1 increase in the benchmark leads to about a $0.53 increase in bids, suggesting that plans in the Medicare Advantage market have meaningful market power. PMID:24308881

  3. Competitive bidding in Medicare Advantage: effect of benchmark changes on plan bids.

    PubMed

    Song, Zirui; Landrum, Mary Beth; Chernew, Michael E

    2013-12-01

    Bidding has been proposed to replace or complement the administered prices that Medicare pays to hospitals and health plans. In 2006, the Medicare Advantage program implemented a competitive bidding system to determine plan payments. In perfectly competitive models, plans bid their costs and thus bids are insensitive to the benchmark. Under many other models of competition, bids respond to changes in the benchmark. We conceptualize the bidding system and use an instrumental variable approach to study the effect of benchmark changes on bids. We use 2006-2010 plan payment data from the Centers for Medicare and Medicaid Services, published county benchmarks, actual realized fee-for-service costs, and Medicare Advantage enrollment. We find that a $1 increase in the benchmark leads to about a $0.53 increase in bids, suggesting that plans in the Medicare Advantage market have meaningful market power. Copyright © 2013 Elsevier B.V. All rights reserved.
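
    As a rough illustration of the instrumental-variable logic described in these two records, a generic textbook two-stage least squares sketch in Python (this is not the authors' specification; the variable names and the instrument are placeholders):

        import numpy as np

        def two_stage_least_squares(bids, benchmark, instrument):
            """Return the estimated pass-through of a $1 benchmark increase into bids."""
            Z = np.column_stack([np.ones_like(instrument), instrument])
            # First stage: project the (endogenous) benchmark onto the instrument.
            bench_hat = Z @ np.linalg.lstsq(Z, benchmark, rcond=None)[0]
            Xh = np.column_stack([np.ones_like(bench_hat), bench_hat])
            # Second stage: regress bids on the fitted benchmark; the slope is the
            # quantity reported as roughly $0.53 per $1 in the abstract.
            beta = np.linalg.lstsq(Xh, bids, rcond=None)[0]
            return beta[1]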

  4. Groundwater-quality data in the North San Francisco Bay Shallow Aquifer study unit, 2012: results from the California GAMA Program

    USGS Publications Warehouse

    Bennett, George L.; Fram, Miranda S.

    2014-01-01

    Results from the grid wells for constituents with non-regulatory benchmarks set for aesthetic concerns showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in 13 grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in two grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in two grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 15 grid wells, and concentrations in 4 of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.
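
    The screening summarized above amounts to comparing each well's measured concentrations against a small table of recommended and upper thresholds. A minimal Python sketch (the numeric thresholds come from the text above; the sample values, dictionary keys, and function name are illustrative):

        SMCL_RECOMMENDED = {"iron_ug_L": 300, "chloride_mg_L": 250, "sulfate_mg_L": 250, "tds_mg_L": 500}
        SMCL_UPPER = {"sulfate_mg_L": 500, "tds_mg_L": 1000}

        def flag_exceedances(sample):
            """Return, per analyte, whether the recommended or upper SMCL-CA benchmark is exceeded."""
            flags = {}
            for analyte, value in sample.items():
                if analyte in SMCL_RECOMMENDED and value > SMCL_RECOMMENDED[analyte]:
                    upper = analyte in SMCL_UPPER and value > SMCL_UPPER[analyte]
                    flags[analyte] = "upper" if upper else "recommended"
            return flags

        print(flag_exceedances({"iron_ug_L": 410, "tds_mg_L": 1200}))
        # {'iron_ug_L': 'recommended', 'tds_mg_L': 'upper'}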

  5. The Problem of Boys' Literacy Underachievement: Raising Some Questions

    ERIC Educational Resources Information Center

    Watson, Anne; Kehler, Michael; Martino, Wayne

    2010-01-01

    Boys' literacy underachievement continues to garner significant attention and has been identified by journalists, educational policymakers, and scholars in the field as the cause for much concern. It has been established that boys perform less well than girls on literacy benchmark or standardized tests. According to the National Assessment of…

  6. 75 FR 81268 - Science Advisory Board Staff Office; Notification of Two Public Quality Review Teleconferences of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-27

    ... Two Public Quality Review Teleconferences of the Chartered Science Advisory Board AGENCY... Office announces two public teleconferences of the chartered SAB to conduct quality reviews of three SAB... Appalachian Coalfields'' and ``Review of Field-Based Aquatic Life Benchmark for Conductivity in Central...

  7. Modeling conservation practices in APEX: From the field to the watershed

    USDA-ARS?s Scientific Manuscript database

    The evaluation of USDA conservation programs is required as part of the Conservation Effects Assessment Project (CEAP). The Agricultural Policy/Environmental eXtender (APEX) model was applied to the St. Joseph River Watershed, one of CEAP’s benchmark watersheds. Using a previously calibrated and val...

  8. Reflections on "Real-World" Community Psychology

    ERIC Educational Resources Information Center

    Wolff, Tom; Swift, Carolyn

    2008-01-01

    Reflections on the history of real-world (applied) community psychologists trace their participation in the field's official guild, the Society for Community Research and Action (SCRA), beginning with the Swampscott Conference in 1965 through the current date. Four benchmarks are examined. The issues these real-world psychologists bring to the…

  9. Reflective Field Experiences for Success in Teaching Elementary Mathematics

    ERIC Educational Resources Information Center

    Robards, Shirley N.

    2009-01-01

    In this paper, the author discusses the major components of a junior level pedagogy course for elementary education majors learning to teach mathematics. The course reviews content and knowledge of the teacher candidates and introduces methods and materials for teaching elementary mathematics using the Standards or benchmarks from the National…

  10. A partial entropic lattice Boltzmann MHD simulation of the Orszag-Tang vortex

    NASA Astrophysics Data System (ADS)

    Flint, Christopher; Vahala, George

    2018-02-01

    Karlin has introduced an analytically determined entropic lattice Boltzmann (LB) algorithm for Navier-Stokes turbulence. Here, this is partially extended to an LB model of magnetohydrodynamics, on using the vector distribution function approach of Dellar for the magnetic field (which is permitted to have field reversal). The partial entropic algorithm is benchmarked successfully against standard simulations of the Orszag-Tang vortex [Orszag, S.A.; Tang, C.M. J. Fluid Mech. 1979, 90 (1), 129-143].

  11. Interpreting Neutron Reflectivity Profiles of Diblock Copolymer Nanocomposite Thin Films Using Hybrid Particle-Field Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahalik, Jyoti P.; Dugger, Jason W.; Sides, Scott W.

    Mixtures of block copolymers and nanoparticles (block copolymer nanocomposites) are known to microphase separate into a plethora of microstructures, depending on the composition, length scale and nature of interactions among their different constituents. Theoretical and experimental works on this class of nanocomposites have already highlighted intricate relations among chemical details of the polymers, nanoparticles, and various microstructures. Confining these nanocomposites in thin films yields an even larger array of structures, which are not normally observed in the bulk. In contrast to the bulk, exploring various microstructures in thin films by the experimental route remains a challenging task. Here in this work, we construct a model for the thin films of lamellar forming diblock copolymers containing spherical nanoparticles based on a hybrid particle-field approach. The model is benchmarked by comparison with the depth profiles obtained from the neutron reflectivity experiments for symmetric poly(deuterated styrene-b-n butyl methacrylate) copolymers blended with spherical magnetite nanoparticles covered with hydrogenated poly(styrene) corona. We show that the model based on a hybrid particle-field approach provides details of the underlying microphase separation in the presence of the nanoparticles through a direct comparison to the neutron reflectivity data. This work benchmarks the application of the hybrid particle-field model to extract the interaction parameters for exploring different microstructures in thin films containing block copolymers and nanocomposites.

  12. Interpreting Neutron Reflectivity Profiles of Diblock Copolymer Nanocomposite Thin Films Using Hybrid Particle-Field Simulations

    DOE PAGES

    Mahalik, Jyoti P.; Dugger, Jason W.; Sides, Scott W.; ...

    2018-04-10

    Mixtures of block copolymers and nanoparticles (block copolymer nanocomposites) are known to microphase separate into a plethora of microstructures, depending on the composition, length scale and nature of interactions among their different constituents. Theoretical and experimental works on this class of nanocomposites have already highlighted intricate relations among chemical details of the polymers, nanoparticles, and various microstructures. Confining these nanocomposites in thin films yields an even larger array of structures, which are not normally observed in the bulk. In contrast to the bulk, exploring various microstructures in thin films by the experimental route remains a challenging task. Here in this work, we construct a model for the thin films of lamellar forming diblock copolymers containing spherical nanoparticles based on a hybrid particle-field approach. The model is benchmarked by comparison with the depth profiles obtained from the neutron reflectivity experiments for symmetric poly(deuterated styrene-b-n butyl methacrylate) copolymers blended with spherical magnetite nanoparticles covered with hydrogenated poly(styrene) corona. We show that the model based on a hybrid particle-field approach provides details of the underlying microphase separation in the presence of the nanoparticles through a direct comparison to the neutron reflectivity data. This work benchmarks the application of the hybrid particle-field model to extract the interaction parameters for exploring different microstructures in thin films containing block copolymers and nanocomposites.

  13. Kibble-Zurek mechanism of topological defect formation in quantum field theory with matrix product states

    NASA Astrophysics Data System (ADS)

    Gillman, Edward; Rajantie, Arttu

    2018-05-01

    The Kibble-Zurek mechanism in a relativistic ϕ⁴ scalar field theory in D = (1+1) is studied using uniform matrix product states. The equal-time two-point function in momentum space G2(k) is approximated as the system is driven through a quantum phase transition at a variety of different quench rates τQ. We focus on looking for signatures of topological defect formation in the system and demonstrate the consistency of the picture that the two-point function G2(k) displays two characteristic scales, the defect density n and the kink width dK. Consequently, G2(k) provides a clear signature for the formation of defects and a well-defined measure of the defect density in the system. These results provide a benchmark for the use of tensor networks as powerful nonperturbative nonequilibrium methods for relativistic quantum field theory, providing a promising technique for the future study of high energy physics and cosmology.

  14. Calcium ions in aqueous solutions: Accurate force field description aided by ab initio molecular dynamics and neutron scattering

    NASA Astrophysics Data System (ADS)

    Martinek, Tomas; Duboué-Dijon, Elise; Timr, Štěpán; Mason, Philip E.; Baxová, Katarina; Fischer, Henry E.; Schmidt, Burkhard; Pluhařová, Eva; Jungwirth, Pavel

    2018-06-01

    We present a combination of force field and ab initio molecular dynamics simulations together with neutron scattering experiments with isotopic substitution that aim at characterizing ion hydration and pairing in aqueous calcium chloride and formate/acetate solutions. Benchmarking against neutron scattering data on concentrated solutions together with ion pairing free energy profiles from ab initio molecular dynamics allows us to develop an accurate calcium force field which accounts in a mean-field way for electronic polarization effects via charge rescaling. This refined calcium parameterization is directly usable for standard molecular dynamics simulations of processes involving this key biological signaling ion.
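
    For context, the charge rescaling mentioned above is commonly motivated by the electronic continuum correction, in which ionic charges are scaled by the inverse square root of the electronic (high-frequency) dielectric constant of water; in LaTeX notation,

        q_{\mathrm{eff}} = \frac{q}{\sqrt{\varepsilon_{\mathrm{el}}}} \approx \frac{q}{\sqrt{1.78}} \approx 0.75\,q

    so a formally +2 calcium ion carries an effective charge of roughly +1.5 e. The exact scaling adopted in any given parameterization is a modeling choice, so this value should be read as indicative rather than as this paper's parameter.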

  15. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
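
    A minimal Python sketch of the figure of merit described above (this is not the authors' code; the bootstrap estimator and parameter names are illustrative): trade off the best objective value found over R independent solver calls against a fixed cost per call, and pick the R that minimizes the expected total cost.

        import numpy as np

        rng = np.random.default_rng(0)

        def expected_min_of_R(observed_values, R, n_boot=2000):
            """Bootstrap estimate of E[min of R i.i.d. calls] from observed objective values."""
            draws = rng.choice(observed_values, size=(n_boot, R), replace=True)
            return draws.min(axis=1).mean()

        def optimal_number_of_calls(observed_values, cost_per_call, R_max=50):
            costs = [expected_min_of_R(observed_values, R) + R * cost_per_call
                     for R in range(1, R_max + 1)]
            return int(np.argmin(costs)) + 1, float(min(costs))

    Here observed_values would be the objective values returned by repeated runs of, say, simulated annealing on one MAX2SAT instance, and cost_per_call encodes wall-clock time or hardware cost per run.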

  16. Mechanical design criteria for intervertebral disc tissue engineering.

    PubMed

    Nerurkar, Nandan L; Elliott, Dawn M; Mauck, Robert L

    2010-04-19

    Due to the inability of current clinical practices to restore function to degenerated intervertebral discs, the arena of disc tissue engineering has received substantial attention in recent years. Despite tremendous growth and progress in this field, translation to clinical implementation has been hindered by a lack of well-defined functional benchmarks. Because successful replacement of the disc is contingent upon replication of some or all of its complex mechanical behaviors, it is critically important that disc mechanics be well characterized in order to establish discrete functional goals for tissue engineering. In this review, the key functional signatures of the intervertebral disc are discussed and used to propose a series of native tissue benchmarks to guide the development of engineered replacement tissues. These benchmarks include measures of mechanical function under tensile, compressive, and shear deformations for the disc and its substructures. In some cases, important functional measures are identified that have yet to be measured in the native tissue. Ultimately, native tissue benchmark values are compared to measurements that have been made on engineered disc tissues, identifying where functional equivalence was achieved, and where there remain opportunities for advancement. Several excellent reviews exist regarding disc composition and structure, as well as recent tissue engineering strategies; therefore this review will remain focused on the functional aspects of disc tissue engineering. Copyright 2009 Elsevier Ltd. All rights reserved.

  17. Evaluating the Effect of Labeled Benchmarks on Children’s Number Line Estimation Performance and Strategy Use

    PubMed Central

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302

  18. Evaluating the Effect of Labeled Benchmarks on Children's Number Line Estimation Performance and Strategy Use.

    PubMed

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children's age and familiarity with the number range, these additional external benchmarks might need to be labeled.

  19. IEA-Task 31 WAKEBENCH: Towards a protocol for wind farm flow model evaluation. Part 2: Wind farm wake models

    NASA Astrophysics Data System (ADS)

    Moriarty, Patrick; Sanz Rodrigo, Javier; Gancarski, Pawel; Churchfield, Matthew; Naughton, Jonathan W.; Hansen, Kurt S.; Machefaux, Ewan; Maguire, Eoghan; Castellani, Francesco; Terzi, Ludovico; Breton, Simon-Philippe; Ueda, Yuko

    2014-06-01

    Researchers within the International Energy Agency (IEA) Task 31: Wakebench have created a framework for the evaluation of wind farm flow models operating at the microscale level. The framework consists of a model evaluation protocol integrated with a web-based portal for model benchmarking (www.windbench.net). This paper provides an overview of the building-block validation approach applied to wind farm wake models, including best practices for the benchmarking and data processing procedures for validation datasets from wind farm SCADA and meteorological databases. A hierarchy of test cases has been proposed for wake model evaluation, from similarity theory of the axisymmetric wake and idealized infinite wind farm, to single-wake wind tunnel (UMN-EPFL) and field experiments (Sexbierum), to wind farm arrays in offshore (Horns Rev, Lillgrund) and complex terrain conditions (San Gregorio). A summary of results from the axisymmetric wake, Sexbierum, Horns Rev and Lillgrund benchmarks is used to discuss the state-of-the-art of wake model validation and highlight the most relevant issues for future development.

  20. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
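
    A point-kernel style treatment of a slab shield, as described above, multiplies the unshielded result by exponential attenuation and a photon buildup factor. A minimal Python sketch (the Berger-type buildup form and its coefficients a and b are illustrative assumptions, not values from the paper):

        import math

        def shielded_dose(unshielded_dose, mu, thickness_cm, a=1.0, b=0.05):
            """mu: linear attenuation coefficient (1/cm); Berger buildup B = 1 + a*mu*t*exp(b*mu*t)."""
            mu_t = mu * thickness_cm
            buildup = 1.0 + a * mu_t * math.exp(b * mu_t)
            return unshielded_dose * buildup * math.exp(-mu_t)

    As the abstract points out, this treatment effectively assigns the scattered photons the same energy and direction as the uncollided photons, which is exactly the approximation whose accuracy the paper assesses.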

  1. Predicting drug-target interactions by dual-network integrated logistic matrix factorization

    NASA Astrophysics Data System (ADS)

    Hao, Ming; Bryant, Stephen H.; Wang, Yanli

    2017-01-01

    In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing profile kernel matrix; (2) diffusing drug profile kernel matrix with drug structure kernel matrix; (3) diffusing target profile kernel matrix with target sequence kernel matrix; and (4) building DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method based on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under precision-recall curve) and AUC (area under curve of receiver operating characteristic) based on 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends on not only the proposed objective function, but also the nonlinear diffusion technique used, which is important but understudied in the DTI prediction field. In addition, we also compile a new DTI dataset to increase the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
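
    A minimal Python sketch of two of the ingredients named above (this is not the authors' implementation: the paper uses a nonlinear kernel-diffusion step, for which a simple weighted linear blend is shown here as a placeholder, and the latent factors would be fitted by logistic matrix factorization rather than supplied directly):

        import numpy as np

        def blend_kernels(profile_kernel, side_kernel, alpha=0.5):
            """Combine a profile kernel with a structure/sequence kernel of the same shape."""
            return alpha * profile_kernel + (1.0 - alpha) * side_kernel

        def interaction_scores(drug_factors, target_factors):
            """Logistic matrix factorization scores: sigmoid of latent dot products.
            drug_factors: (n_drugs, k); target_factors: (n_targets, k)."""
            return 1.0 / (1.0 + np.exp(-(drug_factors @ target_factors.T)))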

  2. Application of ab initio many-body perturbation theory with Gaussian basis sets to the singlet and triplet excitations of organic molecules

    NASA Astrophysics Data System (ADS)

    Hamed, Samia; Rangel, Tonatiuh; Bruneval, Fabien; Neaton, Jeffrey B.

    Quantitative understanding of charged and neutral excitations of organic molecules is critical in diverse areas of study that include astrophysics and the development of energy technologies that are clean and efficient. The recent use of local basis sets with ab initio many-body perturbation theory in the GW approximation and the Bethe-Salpeter equation approach (BSE), methods traditionally applied to periodic condensed phases with a plane-wave basis, has opened the door to detailed study of such excitations for molecules, as well as accurate numerical benchmarks. Here, through a series of systematic benchmarks with a Gaussian basis, we report on the extent to which the predictive power and utility of this approach depend critically on interdependent underlying approximations and choices for molecules, including the mean-field starting point (e.g., optimally tuned range-separated hybrids, pure DFT functionals, and untuned hybrids), the GW scheme, and the Tamm-Dancoff approximation. We demonstrate the effects of these choices in the context of Thiel's set while drawing analogies to linear-response time-dependent DFT and making comparisons to best theoretical estimates from higher-order wavefunction-based theories.

  3. A Diagnostic Assessment of Evolutionary Multiobjective Optimization for Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Reed, P.; Hadka, D.; Herman, J.; Kasprzyk, J.; Kollat, J.

    2012-04-01

    This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.

  4. Effectiveness of Social Marketing Interventions to Promote Physical Activity Among Adults: A Systematic Review.

    PubMed

    Xia, Yuan; Deshpande, Sameer; Bonates, Tiberius

    2016-11-01

    Social marketing managers promote desired behaviors to an audience by making them tangible in the form of environmental opportunities to enhance benefits and reduce barriers. This study proposed "benchmarks," modified from those found in the past literature, that would match important concepts of the social marketing framework and the inclusion of which would ensure behavior change effectiveness. In addition, we analyzed behavior change interventions on a "social marketing continuum" to assess whether the number of benchmarks and the role of specific benchmarks influence the effectiveness of physical activity promotion efforts. A systematic review of social marketing interventions available in academic studies published between 1997 and 2013 revealed 173 conditions in 92 interventions. Findings based on χ², Mallows' Cp, and Logical Analysis of Data tests revealed that the presence of more benchmarks in interventions increased the likelihood of success in promoting physical activity. The presence of more than 3 benchmarks improved the success of the interventions; specifically, all interventions were successful when more than 7.5 benchmarks were present. Further, primary formative research, core product, actual product, augmented product, promotion, and behavioral competition all had a significant influence on the effectiveness of interventions. Social marketing is an effective approach in promoting physical activity among adults when a substantial number of benchmarks are used and when managers understand the audience, make the desired behavior tangible, and promote the desired behavior persuasively.

  5. LipidQC: Method Validation Tool for Visual Comparison to SRM 1950 Using NIST Interlaboratory Comparison Exercise Lipid Consensus Mean Estimate Values.

    PubMed

    Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A

    2017-12-19

    As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.
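
    A toy Python illustration of the kind of check LipidQC supports (the acceptance rule of ± 2 consensus standard deviations and the function name are assumptions for illustration, not part of the tool):

        def within_consensus(measured, consensus_mean, consensus_sd, k=2.0):
            """True if a lab's measured concentration for a lipid species in SRM 1950
            falls within k consensus standard deviations of the NIST consensus mean."""
            return abs(measured - consensus_mean) <= k * consensus_sd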

  6. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize the field-effect tunability unique to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5 % for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91 %, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.
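
    For context, single-qubit gate errors of this kind are commonly extracted by fitting the randomized-benchmarking decay F(m) = A·p^m + B and converting the depolarizing parameter p to an error per Clifford; the sketch below uses made-up sequence lengths and fidelities, not data from this work.

      # Illustrative sketch: average error per Clifford from an RB decay fit
      # (single qubit, dimension d = 2). Data are hypothetical.
      import numpy as np
      from scipy.optimize import curve_fit

      def rb_decay(m, A, p, B):
          return A * p**m + B

      m = np.array([2, 4, 8, 16, 32, 64, 128, 256])          # sequence lengths
      F = np.array([0.99, 0.985, 0.975, 0.955, 0.92, 0.86, 0.76, 0.63])

      (A, p, B), _ = curve_fit(rb_decay, m, F, p0=(0.5, 0.99, 0.5),
                               bounds=([0, 0, 0], [1, 1, 1]))
      error_per_clifford = (1 - p) * (2 - 1) / 2              # r = (1 - p)(d - 1)/d
      print(f"p = {p:.4f}, error per Clifford ~ {error_per_clifford:.2%}")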

  7. Piloting a Process Maturity Model as an e-Learning Benchmarking Method

    ERIC Educational Resources Information Center

    Petch, Jim; Calverley, Gayle; Dexter, Hilary; Cappelli, Tim

    2007-01-01

    As part of a national e-learning benchmarking initiative of the UK Higher Education Academy, the University of Manchester is carrying out a pilot study of a method to benchmark e-learning in an institution. The pilot was designed to evaluate the operational viability of a method based on the e-Learning Maturity Model developed at the University of…

  8. Student Satisfaction Surveys: The Value in Taking an Historical Perspective

    ERIC Educational Resources Information Center

    Kane, David; Williams, James; Cappuccini-Ansfield, Gillian

    2008-01-01

    Benchmarking satisfaction over time can be extremely valuable where a consistent feedback cycle is employed. However, the value of benchmarking over a long period of time has not been analysed in depth. What is the value of benchmarking this type of data over time? What does it tell us about a feedback and action cycle? What impact does a study of…

  9. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.
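
    The stated agreement can be read as a normalized-difference check; a small worked example follows, where only the 0.7 % and 1.0 % figures are taken from the record and the acceptance criterion itself is an assumption.

      # Illustrative check: simulation and experiment are compatible if the
      # relative difference is within the combined relative uncertainty
      # (k = 1 here; an expanded k = 2 criterion would double the tolerance).
      import math

      u_exp, u_mc = 0.007, 0.010                  # 0.7 % and 1.0 % (from the record)
      u_combined = math.sqrt(u_exp**2 + u_mc**2)  # ~1.2 % combined relative uncertainty

      def compatible(D_mc, D_exp, k=1.0):
          return abs(D_mc - D_exp) / D_exp <= k * u_combined

      print(round(u_combined, 4))          # ~0.0122
      print(compatible(1.008, 1.000))      # True: a 0.8 % difference lies within ~1.2 %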

  10. The mass storage testing laboratory at GSFC

    NASA Technical Reports Server (NTRS)

    Venkataraman, Ravi; Williams, Joel; Michaud, David; Gu, Heng; Kalluri, Atri; Hariharan, P. C.; Kobler, Ben; Behnke, Jeanne; Peavey, Bernard

    1998-01-01

    Industry-wide benchmarks exist for measuring the performance of processors (SPECmarks), and of database systems (Transaction Processing Council). Despite storage having become the dominant item in computing and IT (Information Technology) budgets, no such common benchmark is available in the mass storage field. Vendors and consultants provide services and tools for capacity planning and sizing, but these do not account for the complete set of metrics needed in today's archives. The availability of automated tape libraries, high-capacity RAID systems, and high-bandwidth interconnectivity between processor and peripherals has led to demands for services which traditional file systems cannot provide. File Storage and Management Systems (FSMS), which began to be marketed in the late 80's, have helped to some extent with large tape libraries, but their use has introduced additional parameters affecting performance. The aim of the Mass Storage Test Laboratory (MSTL) at Goddard Space Flight Center is to develop a test suite that includes not only a comprehensive check list to document a mass storage environment but also benchmark code. Benchmark code is being tested which will provide measurements for both baseline systems, i.e. applications interacting with peripherals through the operating system services, and for combinations involving an FSMS. The benchmarks are written in C, and are easily portable. They are initially being aimed at the UNIX Open Systems world. Measurements are being made using a Sun Ultra 170 Sparc with 256MB memory running Solaris 2.5.1 with the following configuration: 4mm tape stacker on SCSI 2 Fast/Wide; 4GB disk device on SCSI 2 Fast/Wide; and Sony Petaserve on Fast/Wide differential SCSI 2.
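
    The MSTL benchmarks themselves are written in C; purely as an illustration of the baseline-system measurement described (an application talking to storage through ordinary operating-system services), here is a hedged Python sketch with a placeholder file path and size.

      # Minimal sketch (not the MSTL C benchmark): time a sequential write and
      # read through the ordinary file-system interface and report throughput.
      # A real benchmark would also defeat the page cache before the read pass.
      import os, time

      def sequential_io(path="testfile.dat", size_mb=256, block_kb=256):
          block = b"\0" * (block_kb * 1024)
          n_blocks = size_mb * 1024 // block_kb

          t0 = time.time()
          with open(path, "wb") as f:
              for _ in range(n_blocks):
                  f.write(block)
              f.flush()
              os.fsync(f.fileno())            # force data to the device
          write_mb_s = size_mb / (time.time() - t0)

          t0 = time.time()
          with open(path, "rb") as f:
              while f.read(block_kb * 1024):  # may be served from the page cache
                  pass
          read_mb_s = size_mb / (time.time() - t0)

          os.remove(path)
          return write_mb_s, read_mb_s

      print(sequential_io())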

  11. TMFF-A Two-Bead Multipole Force Field for Coarse-Grained Molecular Dynamics Simulation of Protein.

    PubMed

    Li, Min; Liu, Fengjiao; Zhang, John Z H

    2016-12-13

    Coarse-grained (CG) models are desirable for studying large and complex biological systems. In this paper, we propose a new two-bead multipole force field (TMFF) in which electric multipoles up to the quadrupole are included in the CG force field. The inclusion of electric multipoles in the proposed CG force field enables a more realistic description of the anisotropic electrostatic interactions in the protein system and, thus, provides an improvement over the standard isotropic two-bead CG models. In order to test the accuracy of the new CG force field model, extensive molecular dynamics simulations were carried out for a series of benchmark protein systems. These simulation studies showed that the TMFF model can realistically reproduce the structural and dynamical properties of proteins, as demonstrated by the close agreement of the CG results with those from the corresponding all-atom simulations in terms of root-mean-square deviations (RMSDs) and root-mean-square fluctuations (RMSFs) of the protein backbones. The current two-bead model is highly coarse-grained and is 50-fold more efficient than the all-atom method in MD simulation of proteins in explicit water.
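
    The RMSD/RMSF comparison mentioned here reduces to a simple calculation once the coarse-grained and reference trajectories are superposed; a minimal sketch with random placeholder coordinates (not the TMFF data or analysis code):

      # Backbone RMSD per frame and RMSF per bead, assuming trajectories are
      # already superposed onto the reference; shapes are (n_frames, n_beads, 3).
      import numpy as np

      def rmsd_per_frame(traj, ref):
          diff = traj - ref                                # ref broadcast over frames
          return np.sqrt((diff**2).sum(axis=2).mean(axis=1))

      def rmsf_per_bead(traj):
          mean_pos = traj.mean(axis=0)
          return np.sqrt(((traj - mean_pos)**2).sum(axis=2).mean(axis=0))

      traj = np.random.rand(100, 50, 3)                    # hypothetical CG trajectory
      ref = traj[0]
      print(rmsd_per_frame(traj, ref).shape, rmsf_per_bead(traj).shape)   # (100,) (50,)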

  12. Elastic parabolic equation solutions for underwater acoustic problems using seismic sources.

    PubMed

    Frank, Scott D; Odom, Robert I; Collis, Jon M

    2013-03-01

    Several problems of current interest involve elastic bottom range-dependent ocean environments with buried or earthquake-type sources, specifically oceanic T-wave propagation studies and interface wave related analyses. Additionally, observed deep shadow-zone arrivals are not predicted by ray theoretic methods, and attempts to model them with fluid-bottom parabolic equation solutions suggest that it may be necessary to account for elastic bottom interactions. In order to study energy conversion between elastic and acoustic waves, current elastic parabolic equation solutions must be modified to allow for seismic starting fields for underwater acoustic propagation environments. Two types of elastic self-starter are presented. An explosive-type source is implemented using a compressional self-starter and the resulting acoustic field is consistent with benchmark solutions. A shear wave self-starter is implemented and shown to generate transmission loss levels consistent with the explosive source. Source fields can be combined to generate starting fields for source types such as explosions, earthquakes, or pile driving. Examples demonstrate the use of source fields for shallow sources or deep ocean-bottom earthquake sources, where down slope conversion, a known T-wave generation mechanism, is modeled. Self-starters are interpreted in the context of the seismic moment tensor.

  13. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regimes of permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, and the lack of study can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on provided test cases and/or to have controlled experiments for validation. Such inter-code comparisons can drive discussions on how to improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones. They range from simpler, purely thermal cases (benchmark T1) to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) is an interaction platform for the participants and hosts the test-case database at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented. We will mainly focus on the inter-comparison of participant results for the coupled cases (TH1, TH2 & TH3). Further perspectives of the exercise will also be presented. Extensions to more complex physical conditions (e.g. unsaturated conditions and geometrical deformations) are contemplated. In addition, 1D vertical cases of interest to the Climate Modeling community will be proposed. Keywords: Permafrost; Numerical modeling; River-soil interaction; Arctic systems; soil freeze-thaw

  14. Benchmarking CRISPR on-target sgRNA design.

    PubMed

    Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi

    2017-02-15

    CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but few have been systematically compared. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA efficacy prediction was performed for nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
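
    One common way such a benchmark is scored is the rank correlation between each tool's predicted on-target scores and the measured cleavage efficacies on each dataset; the sketch below illustrates that scoring scheme with random placeholder data (the record does not state the exact metric used).

      # Rank each sgRNA design tool by Spearman correlation between predicted
      # scores and measured efficacies, per benchmark dataset. Inputs hypothetical.
      import numpy as np
      from scipy.stats import spearmanr

      def benchmark_tools(datasets, tools):
          """datasets: {name: measured efficacy array}
             tools:    {tool_name: {dataset_name: predicted score array}}"""
          table = {}
          for tool, predictions in tools.items():
              table[tool] = {}
              for name, measured in datasets.items():
                  rho, _ = spearmanr(predictions[name], measured)
                  table[tool][name] = rho
          return table

      rng = np.random.default_rng(0)
      measured = {"cellA": rng.random(200), "cellB": rng.random(150)}
      tools = {"toolX": {k: v + 0.1 * rng.random(v.size) for k, v in measured.items()},
               "toolY": {k: rng.random(v.size) for k, v in measured.items()}}
      print(benchmark_tools(measured, tools))   # toolX correlates strongly, toolY does not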

  15. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  16. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition remains relatively limited, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols cover all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and our COX Face DB is a good benchmark database for evaluation.

  17. Integrated materials design of organic semiconductors for field-effect transistors.

    PubMed

    Mei, Jianguo; Diao, Ying; Appleton, Anthony L; Fang, Lei; Bao, Zhenan

    2013-05-08

    The past couple of years have witnessed a remarkable burst in the development of organic field-effect transistors (OFETs), with a number of organic semiconductors surpassing the benchmark mobility of 10 cm²/(V s). In this perspective, we highlight some of the major milestones along the way to provide a historical view of OFET development, introduce the integrated molecular design concepts and process engineering approaches that lead to the current success, and identify the challenges ahead to make OFETs applicable in real applications.

  18. Convolutional Neural Network on Embedded Linux(trademark) System-on-Chip: A Methodology and Performance Benchmark

    DTIC Science & Technology

    2016-05-01

    …A9 CPU and 15 W for the i7 CPU. A method of accelerating this computation is by using a customized hardware unit called a field-programmable gate array (FPGA) … implementation of custom logic to accelerate computational workloads. This FPGA fabric, in addition to the standard programmable logic, contains 220 … Keywords: embedded Linux; system-on-chip; field-programmable gate array.

  20. Passive millimeter-wave imaging

    NASA Technical Reports Server (NTRS)

    Young, Stephen K.; Davidheiser, Roger A.; Hauss, Bruce; Lee, Paul S. C.; Mussetto, Michael; Shoucri, Merit M.; Yujiri, Larry

    1993-01-01

    Millimeter-wave hardware systems are being developed. Our approach begins with identifying and defining the applications. System requirements are then specified based on mission needs using our end-to-end performance model. The model was benchmarked against existing databases and, where data were deficient, they were acquired via field measurements. The derived system requirements are then validated with the appropriate field measurements using our imaging testbeds and hardware breadboards. The result is a final system that satisfies all the requirements of the target mission.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez, Jesse E.; Baptista, António M.

    A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and treatment for hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.

  2. Dynamic behaviour of a planar micro-beam loaded by a fluid-gap: Analytical and numerical approach in a high frequency range, benchmark solutions

    NASA Astrophysics Data System (ADS)

    Novak, A.; Honzik, P.; Bruneau, M.

    2017-08-01

    Miniaturized vibrating MEMS devices, active (receivers or emitters) or passive devices, and their use for either new applications (hearing, meta-materials, consumer devices,…) or metrological purposes under non-standard conditions, are involved today in several acoustic domains. More in-depth characterisations than those classically available until now are needed. In this context, the paper presents analytical and numerical approaches for describing the behaviour of three kinds of planar micro-beams of rectangular shape (suspended rigid or clamped elastic planar beam) loaded by a backing cavity or a fluid-gap, surrounded by very thin slits, and excited by an incident acoustic field. The analytical approach accounts for the coupling between the vibrating structure and the acoustic field in the backing cavity, the thermal and viscous diffusion processes in the boundary layers in the slits and the cavity, the modal behaviour for the vibrating structure, and the non-uniformity of the acoustic field in the backing cavity, which is modelled using an integral formulation with a suitable Green's function. Benchmark solutions are proposed in terms of beam motion (from which the sensitivity, input impedance, and pressure transfer function can be calculated). A numerical implementation (FEM) is also presented, against which the analytical results are tested.

  3. Clomp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gylenhaal, J.; Bronevetsky, G.

    2007-05-25

    CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading (like NUMA memory layouts, memory contention, cache effects, etc.) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than is required by many of our applications for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other thread effects. The CLOMP benchmark is highly configurable to allow a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia Benchmark suite for the Sequoia procurement.

  4. Derivation of Draft Ecological Soil Screening Levels for TNT and RDX Utilizing Terrestrial Plant and Soil Invertebrate Toxicity Benchmarks

    DTIC Science & Technology

    2012-11-01

    Terrestrial plant-based draft Eco-SSL values for TNT and RDX weathered-and-aged in SSL or TSL soils were derived utilizing growth benchmarks for alfalfa, barnyard grass, and perennial ryegrass. Toxicity studies were conducted using the following plant species: the dicotyledonous symbiotic species alfalfa (Medicago sativa L.) and monocotyledonous…

  5. Comparing MCDA Aggregation Methods in Constructing Composite Indicators Using the Shannon-Spearman Measure

    ERIC Educational Resources Information Center

    Zhou, P.; Ang, B. W.

    2009-01-01

    Composite indicators have been increasingly recognized as a useful tool for performance monitoring, benchmarking comparisons and public communication in a wide range of fields. The usefulness of a composite indicator depends heavily on the underlying data aggregation scheme where multiple criteria decision analysis (MCDA) is commonly used. A…

  6. Individual and community responses in stream mesocosms with different ionic compositions of conductivity and compared to a field-based benchmark

    EPA Science Inventory

    Several anthropogenic activities cause excess total dissolved solids (TDS) content and its correlate, specific conductivity, in surface waters due to increases in the major geochemical ions (e.g., Na, Ca, Cl, SO4). However, the relative concentrations of major ions varies with t...

  7. Teaching and Research in Mid-Career Management Education: Function and Fusion

    ERIC Educational Resources Information Center

    Quinn, Bríd C.

    2016-01-01

    The apparent disconnect between teaching and research has implications for both curricular content and pedagogic practice and has particular salience in the field of mid-career education. To overcome this disconnect, faculty endeavour to integrate teaching and research. Pressure to do so stems from many sources. Benchmarks of professional…

  8. The Role of a Reference Synthetic Data Generator within the Field of Learning Analytics

    ERIC Educational Resources Information Center

    Berg, Alan M.; Mol, Stefan T.; Kismihók, Gábor; Sclater, Niall

    2016-01-01

    This paper details the anticipated impact of synthetic "big" data on learning analytics (LA) infrastructures, with a particular focus on data governance, the acceleration of service development, and the benchmarking of predictive models. By reviewing two cases, one at the sector-wide level (the Jisc learning analytics architecture) and…

  9. Barriers, Springboards and Benchmarks: China Conceptualizes the Pacific Island Chains

    DTIC Science & Technology

    2016-03-04

    …the South China Sea during World War II, severing Japanese SLOCs and thus Japan's supply of oil and raw materials. Chinese sources refer to Guam…

  10. Equilibrium and stability of flow-dominated Plasmas in the Big Red Ball

    NASA Astrophysics Data System (ADS)

    Siller, Robert; Flanagan, Kenneth; Peterson, Ethan; Milhone, Jason; Mirnov, Vladimir; Forest, Cary

    2017-10-01

    The equilibrium and linear stability of flow-dominated plasmas are studied numerically using spectral techniques to model MRI and dynamo experiments in the Big Red Ball device. The equilibrium code solves for steady-state magnetic fields and plasma flows subject to boundary conditions in a spherical domain. It has been benchmarked with NIMROD (non-ideal MHD with rotation - open discussion). Two different flow scenarios are studied. The first scenario creates a differentially rotating toroidal flow that is peaked at the center. This is done to explore the onset of the magnetorotational instability (MRI) in a spherical geometry. The second scenario creates a counter-rotating von Karman-like flow in the presence of a weak magnetic field. This is done to explore the plasma dynamo instability in the limit of a weak applied field. Both scenarios are numerically modeled as axisymmetric flows to create a steady-state equilibrium solution; the stability and normal modes are studied for the lowest toroidal mode number. The details of the observed flow and the structure of the fastest-growing modes will be shown. DoE, NSF.

  11. JASMIN: Japanese-American study of muon interactions and neutron detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroshi; /JAEA, Ibaraki; Mokhov, N.V.

    Experimental studies of shielding and radiation effects at Fermi National Accelerator Laboratory (FNAL) have been carried out under collaboration between FNAL and Japan, aiming at benchmarking of simulation codes and study of irradiation effects for upgrade and design of new high-energy accelerator facilities. The purposes of this collaboration are (1) acquisition of shielding data in a proton beam energy domain above 100GeV; (2) further evaluation of predictive accuracy of the PHITS and MARS codes; (3) modification of physics models and data in these codes if needed; (4) establishment of irradiation field for radiation effect tests; and (5) development of a code module for improved description of radiation effects. A series of experiments has been performed at the Pbar target station and NuMI facility, using irradiation of targets with 120 GeV protons for antiproton and neutrino production, as well as the M-test beam line (M-test) for measuring nuclear data and detector responses. Various nuclear and shielding data have been measured by activation methods with chemical separation techniques as well as by other detectors such as a Bonner ball counter. Analyses with the experimental data are in progress for benchmarking the PHITS and MARS15 codes. In this presentation recent activities and results are reviewed.

  12. Groundwater-quality data in the Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts study unit, 2008-2010--Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Wright, Michael T.; Beuttel, Brandon S.; Belitz, Kenneth

    2012-01-01

    Groundwater quality in the 12,103-square-mile Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts (CLUB) study unit was investigated by the U.S. Geological Survey (USGS) from December 2008 to March 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The CLUB study unit was the twenty-eighth study unit to be sampled as part of the GAMA-PBP. The GAMA CLUB study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer systems, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer systems (hereinafter referred to as primary aquifers) are defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the CLUB study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from the quality of groundwater in the primary aquifers; shallow groundwater may be more vulnerable to surficial contamination. In the CLUB study unit, groundwater samples were collected from 52 wells in 3 study areas (Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts) in San Bernardino, Riverside, Kern, San Diego, and Imperial Counties. Forty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and three wells were selected to aid in evaluation of water-quality issues (understanding wells). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally-occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and species of inorganic chromium), and radioactive constituents (radon-222, radium isotopes, and gross alpha and gross beta radioactivity). Naturally-occurring isotopes (stable isotopes of hydrogen, oxygen, boron, and strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance) and dissolved noble gases also were measured to help identify the sources and ages of sampled groundwater. In total, 223 constituents and 12 water-quality indicators were investigated. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 10 percent of the wells in the CLUB study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Median matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 85 percent of the compounds. 
This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 49 grid wells were detected at concentrations less than drinking-water benchmarks. In addition, all detections of organic constituents from the CLUB study-unit grid-well samples were less than health-based benchmarks. In total, VOCs were detected in 17 of the 49 grid wells sampled (approximately 35 percent), pesticides and pesticide degradates were detected in 5 of the 47 grid wells sampled (approximately 11 percent), and perchlorate was detected in 41 of 49 grid wells sampled (approximately 84 percent). Trace elements, major and minor ions, and nutrients were sampled for at 39 grid wells, and radioactive constituents were sampled for at 23 grid wells; most detected concentrations were less than health-based benchmarks. Exceptions in the grid-well samples include seven detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L); four detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L; six detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L; two detections of uranium greater than the MCL-US of 30 μg/L; nine detections of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L); one detection of nitrite plus nitrate (NO2-+NO3-), as nitrogen, greater than the MCL-US of 10 mg/L; and four detections of gross alpha radioactivity (72-hour count), and one detection of gross alpha radioactivity (30-day count), greater than the MCL-US of 15 picocuries per liter. Results for constituents with non-regulatory benchmarks set for aesthetic concerns showed that a manganese concentration greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 50 μg/L was detected in one grid well. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were detected in three grid wells, and one of these wells also had a concentration that was greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in six grid wells. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 20 grid wells, and concentrations in 2 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  13. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years extensive performance data have been reported for parallel machines both based on the NAS Parallel Benchmarks, and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included peak performance of the machine, and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP have each a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format, and will present the data of our statistical analysis in detail.
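
    A minimal sketch of the kind of correlation-based cluster analysis described, grouping benchmarks by the similarity of their performance across machines; the performance matrix below is a random placeholder, not the reported data.

      # Cluster benchmarks by correlation of their results across machines.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(1)
      benchmarks = ["LINPACK", "EP", "CG", "IS", "LU", "SP", "MG", "FT", "BT"]
      perf = rng.lognormal(mean=0.0, sigma=1.0, size=(20, len(benchmarks)))  # 20 machines

      corr = np.corrcoef(perf, rowvar=False)          # benchmark-benchmark correlations
      dist = 1.0 - corr                               # turn correlation into a distance
      iu = np.triu_indices(len(benchmarks), k=1)      # condensed form expected by linkage
      Z = linkage(dist[iu], method="average")
      labels = fcluster(Z, t=3, criterion="maxclust") # cut the tree into 3 groups
      print(dict(zip(benchmarks, labels)))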

  14. Performance analysis of fusion nuclear-data benchmark experiments for light to heavy materials in MeV energy region with a neutron spectrum shifter

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Miyamaru, Hiroyuki; Kondo, Keitaro; Yoshida, Shigeo; Iida, Toshiyuki; Ochiai, Kentaro; Konno, Chikara

    2011-10-01

    Nuclear data are indispensable for development of fusion reactor candidate materials. However, benchmarking of the nuclear data in the MeV energy region is not yet adequate. In the present study, benchmark performance in the MeV energy region was investigated theoretically for experiments using a 14 MeV neutron source. We carried out a systematic analysis for light to heavy materials. As a result, the benchmark performance for the neutron spectrum was confirmed to be acceptable, while for gamma-rays it was not sufficiently accurate. Consequently, a spectrum shifter has to be applied. Beryllium had the best performance as a shifter. Moreover, a preliminary examination was made of whether it is really acceptable that only the spectrum before the last collision is considered in the benchmark performance analysis. It was pointed out that not only the last collision but also earlier collisions should be considered equally in the benchmark performance analysis.

  15. Hexagonal boron nitride and water interaction parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yanbin; Aluru, Narayana R., E-mail: aluru@illinois.edu; Wagner, Lucas K.

    2016-04-28

    The study of hexagonal boron nitride (hBN) in microfluidic and nanofluidic applications at the atomic level requires accurate force field parameters to describe the water-hBN interaction. In this work, we begin with benchmark quality first principles quantum Monte Carlo calculations on the interaction energy between water and hBN, which are used to validate random phase approximation (RPA) calculations. We then proceed with RPA to derive force field parameters, which are used to simulate water contact angle on bulk hBN, attaining a value within the experimental uncertainties. This paper demonstrates that end-to-end multiscale modeling, starting at detailed many-body quantum mechanics and ending with macroscopic properties, with the approximations controlled along the way, is feasible for these systems.

  16. Using a visual plate waste study to monitor menu performance.

    PubMed

    Connors, Priscilla L; Rozell, Sarah B

    2004-01-01

    Two visual plate waste studies were conducted in 1-week phases over a 1-year period in an acute care hospital. A total of 383 trays were evaluated in the first phase and 467 in the second. Food items were ranked for consumption from a low (1) to high (6) score, with a score of 4.0 set as the benchmark denoting a minimum level of acceptable consumption. In the first phase two entrees, four starches, all of the vegetables, sliced white bread, and skim milk scored below the benchmark. As a result six menu items were replaced and one was modified. In the second phase all entrees scored at or above 4.0, as did seven vegetables, and a dinner roll that replaced sliced white bread. Skim milk continued to score below the benchmark. A visual plate waste study assists in benchmarking performance, planning menu changes, and assessing effectiveness.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    MOSTELLER, RUSSELL D.

    Previous studies have indicated that ENDF/B-VII preliminary releases {beta}-2 and {beta}-3, predecessors to the recent initial release of ENDF/B-VII.0, produce significantly better overall agreement with criticality benchmarks than does ENDF/B-VI. However, one of those studies also suggests that improvements still may be needed for thermal plutonium cross sections. The current study substantiates that concern by examining criticality benchmarks for unreflected spheres of plutonium-nitrate solutions and for slightly and heavily borated mixed-oxide (MOX) lattices. Results are presented for the JEFF-3.1 and JENDL-3.3 nuclear data libraries as well as ENDF/B-VII.0 and ENDF/B-VI. It is shown that ENDF/B-VII.0 tends to overpredict reactivity for thermal plutonium benchmarks over at least a portion of the thermal range. In addition, it is found that additional benchmark data are needed for the deep thermal range.

  18. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)

  19. Benchmarking is associated with improved quality of care in type 2 diabetes: the OPTIMISE randomized, controlled trial.

    PubMed

    Hermans, Michel P; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-11-01

    To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile.

  20. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    PubMed Central

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). CONCLUSIONS In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  1. A benchmark study of the sea-level equation in GIA modelling

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah

    2017-04-01

    The sea-level load in glacial isostatic adjustment (GIA) is described by the so called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to the load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solve the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison amongst the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent to the various algorithms. Literature G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132 doi:10.1111/j.1365-

  2. Spatial correlations in driven-dissipative photonic lattices

    NASA Astrophysics Data System (ADS)

    Biondi, Matteo; Lienhard, Saskia; Blatter, Gianni; Türeci, Hakan E.; Schmidt, Sebastian

    2017-12-01

    We study the nonequilibrium steady-state of interacting photons in cavity arrays as described by the driven-dissipative Bose–Hubbard and spin-1/2 XY model. For this purpose, we develop a self-consistent expansion in the inverse coordination number of the array (∼ 1/z) to solve the Lindblad master equation of these systems beyond the mean-field approximation. Our formalism is compared and benchmarked with exact numerical methods for small systems based on an exact diagonalization of the Liouvillian and a recently developed corner-space renormalization technique. We then apply this method to obtain insights beyond mean-field in two particular settings: (i) we show that the gas–liquid transition in the driven-dissipative Bose–Hubbard model is characterized by large density fluctuations and bunched photon statistics. (ii) We study the antibunching–bunching transition of the nearest-neighbor correlator in the driven-dissipative spin-1/2 XY model and provide a simple explanation of this phenomenon.
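
    For orientation only, a hedged sketch of the single-site building block of the driven-dissipative Bose–Hubbard model (a driven Kerr cavity with photon loss), solved for its Lindblad steady state with QuTiP; this is not the 1/z expansion of the paper, and all parameter values are arbitrary.

      # Lindblad steady state of one driven-dissipative Kerr site (illustrative).
      import numpy as np
      from qutip import destroy, steadystate, expect

      N = 15                                   # Fock-space truncation
      delta, U, F, kappa = 0.5, 1.0, 0.3, 1.0  # detuning, interaction, drive, loss

      a = destroy(N)
      H = -delta * a.dag() * a + 0.5 * U * a.dag() * a.dag() * a * a + F * (a + a.dag())
      c_ops = [np.sqrt(kappa) * a]             # photon loss

      rho = steadystate(H, c_ops)
      n = expect(a.dag() * a, rho)
      g2 = expect(a.dag() * a.dag() * a * a, rho) / n**2   # bunching if g2 > 1
      print(f"<n> = {n:.3f}, g2(0) = {g2:.3f}")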

  3. Simulation of radiation damping in rings, using stepwise ray-tracing methods

    DOE PAGES

    Meot, F.

    2015-06-26

    The ray-tracing code Zgoubi computes particle trajectories in arbitrary magnetic and/or electric field maps or analytical field models. It includes a built-in fitting procedure, spin tracking, and many Monte Carlo processes. The accuracy of the integration method makes it an efficient tool for multi-turn tracking in periodic machines. Energy loss by synchrotron radiation, based on Monte Carlo techniques, had been introduced in Zgoubi in the early 2000s for studies regarding the linear collider beam delivery system. However, only recently has this Monte Carlo tool been used for systematic beam dynamics and spin diffusion studies in rings, including the eRHIC electron-ion collider project at the Brookhaven National Laboratory. Some beam dynamics aspects of this recent use of Zgoubi capabilities, including considerations of accuracy as well as further benchmarking in the presence of synchrotron radiation in rings, are reported here.

  4. Exponentially-Biased Ground-State Sampling of Quantum Annealing Machines with Transverse-Field Driving Hamiltonians

    NASA Technical Reports Server (NTRS)

    Mandra, Salvatore

    2017-01-01

    We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited to identifying all degenerate ground-state configurations associated with a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009)]. These results suggest that more complex driving Hamiltonians are needed in future quantum annealing machines to ensure a fair sampling of the ground-state manifold.

  5. A benchmark for subduction zone modeling

    NASA Astrophysics Data System (ADS)

    van Keken, P.; King, S.; Peacock, S.

    2003-04-01

    Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complementary approach to observational and experimental studies. The accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence it is essential for the subduction zone community to be able to evaluate the ability and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.

  6. Reference frame access under the effects of great earthquakes: a least squares collocation approach for non-secular post-seismic evolution

    NASA Astrophysics Data System (ADS)

    Gómez, D. D.; Piñón, D. A.; Smalley, R.; Bevis, M.; Cimbaro, S. R.; Lenzano, L. E.; Barón, J.

    2016-03-01

    The 2010 (Mw 8.8) Maule, Chile, earthquake produced large co-seismic displacements and non-secular, post-seismic deformation, within latitudes 28°S-40°S extending from the Pacific to the Atlantic oceans. Although these effects are easily resolvable by fitting geodetic extended trajectory models (ETM) to continuous GPS (CGPS) time series, the co- and post-seismic deformation cannot be determined at locations without CGPS (e.g., on passive geodetic benchmarks). To estimate the trajectories of passive geodetic benchmarks, we used CGPS time series to fit an ETM that includes the secular South American plate motion and plate boundary deformation, the co-seismic discontinuity, and the non-secular, logarithmic post-seismic transient produced by the earthquake in the Posiciones Geodésicas Argentinas 2007 (POSGAR07) reference frame (RF). We then used least squares collocation (LSC) to model both the background secular inter-seismic and the non-secular post-seismic components of the ETM at the locations without CGPS. We tested the LSC modeled trajectories using campaign and CGPS data that was not used to generate the model and found standard deviations (95 % confidence level) for position estimates for the north and east components of 3.8 and 5.5 mm, respectively, indicating that the model predicts the post-seismic deformation field very well. Finally, we added the co-seismic displacement field, estimated using an elastic finite element model. The final trajectory model allows accessing the POSGAR07 RF using post-Maule earthquake coordinates within 5 cm for ~ 91 % of the passive test benchmarks.
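
    A minimal one-dimensional illustration of least squares collocation of the kind described, predicting a displacement signal at points without CGPS from residuals observed at stations; the exponential covariance model, its parameters, and all values below are assumptions, not the study's.

      # LSC sketch: s = C_st (C_tt + C_nn)^-1 l, with an exponential covariance.
      import numpy as np

      def lsc_predict(x_obs, l_obs, x_new, sigma2=25.0, corr_len=100.0, noise2=4.0):
          """l_obs: observed residuals (mm) at station coordinates x_obs (km);
             returns the collocated signal estimate at x_new."""
          def cov(a, b):
              d = np.abs(a[:, None] - b[None, :])
              return sigma2 * np.exp(-d / corr_len)              # C(d) = sigma^2 exp(-d/L)

          C_tt = cov(x_obs, x_obs) + noise2 * np.eye(len(x_obs)) # signal + noise covariance
          C_st = cov(x_new, x_obs)                               # new points vs stations
          return C_st @ np.linalg.solve(C_tt, l_obs)

      x_obs = np.array([0.0, 50.0, 120.0, 300.0])
      l_obs = np.array([12.0, 9.5, 6.0, 1.0])                    # hypothetical residuals
      print(lsc_predict(x_obs, l_obs, np.array([80.0, 200.0])))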

  7. Benchmark datasets for 3D MALDI- and DESI-imaging mass spectrometry.

    PubMed

    Oetjen, Janina; Veselkov, Kirill; Watrous, Jeramie; McKenzie, James S; Becker, Michael; Hauberg-Lotte, Lena; Kobarg, Jan Hendrik; Strittmatter, Nicole; Mróz, Anna K; Hoffmann, Franziska; Trede, Dennis; Palmer, Andrew; Schiffler, Stefan; Steinhorst, Klaus; Aichler, Michaela; Goldin, Robert; Guntinas-Lichius, Orlando; von Eggeling, Ferdinand; Thiele, Herbert; Maedler, Kathrin; Walch, Axel; Maass, Peter; Dorrestein, Pieter C; Takats, Zoltan; Alexandrov, Theodore

    2015-01-01

    Three-dimensional (3D) imaging mass spectrometry (MS) is an analytical chemistry technique for the 3D molecular analysis of a tissue specimen, entire organ, or microbial colonies on an agar plate. 3D-imaging MS has unique advantages over existing 3D imaging techniques, offers novel perspectives for understanding the spatial organization of biological processes, and has growing potential to be introduced into routine use in both biology and medicine. Owing to the sheer quantity of data generated, the visualization, analysis, and interpretation of 3D imaging MS data remain a significant challenge. Bioinformatics research in this field is hampered by the lack of publicly available benchmark datasets needed to evaluate and compare algorithms. High-quality 3D imaging MS datasets from different biological systems were acquired at several labs, supplied with overview images and scripts demonstrating how to read them, and deposited into MetaboLights, an open repository for metabolomics data. 3D imaging MS data were collected from five samples using two types of 3D imaging MS. 3D matrix-assisted laser desorption/ionization (MALDI) imaging MS data were collected from murine pancreas, murine kidney, human oral squamous cell carcinoma, and interacting microbial colonies cultured in Petri dishes. 3D desorption electrospray ionization (DESI) imaging MS data were collected from a human colorectal adenocarcinoma. With the aim of stimulating computational research in the field of computational 3D imaging MS, selected high-quality 3D imaging MS datasets are provided that can be used by algorithm developers as benchmark datasets.
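
    The deposited datasets come with their own overview images and reader scripts; purely as an orientation, a minimal sketch of iterating over one such dataset in imzML format might look as follows, assuming the pyimzML package is available (the file name is hypothetical and the repository's own scripts may differ).

```python
from pyimzml.ImzMLParser import ImzMLParser

parser = ImzMLParser("example_dataset.imzML")  # hypothetical file name

# Build a total ion count image, keyed by (x, y, z) pixel coordinates
tic = {}
for idx, (x, y, z) in enumerate(parser.coordinates):
    mzs, intensities = parser.getspectrum(idx)
    tic[(x, y, z)] = sum(intensities)
```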

  8. A review on the benchmarking concept in Malaysian construction safety performance

    NASA Astrophysics Data System (ADS)

    Ishak, Nurfadzillah; Azizan, Muhammad Azizi

    2018-02-01

    The construction industry is one of the major industries propelling Malaysia's economy and contributes strongly to the nation's GDP growth, yet high fatality rates on construction sites have caused concern among safety practitioners and stakeholders. Hence, there is a need to benchmark the performance of Malaysia's construction industry, especially in terms of safety. This concept can create a fertile ground for ideas, but only in a receptive environment; organizations that share good practices and compare their safety performance against others benefit most in establishing improvements in safety culture. This research was conducted to study awareness of the concept's importance, to evaluate current practice and improvement, and to identify the constraints on implementing benchmarking of safety performance in the industry. Additionally, interviews with construction professionals yielded different views on the concept. A comparison was made to show the different understandings of the benchmarking approach and of how safety performance can be benchmarked; these are nevertheless viewed as one mission, namely to evaluate objectives identified through benchmarking that will improve the organization's safety performance. Finally, the expected result of this research is to help Malaysia's construction industry implement best practice in safety performance management through the concept of benchmarking.

  9. Markov Dynamics as a Zooming Lens for Multiscale Community Detection: Non Clique-Like Communities and the Field-of-View Limit

    PubMed Central

    Schaub, Michael T.; Delvenne, Jean-Charles; Yaliraki, Sophia N.; Barahona, Mauricio

    2012-01-01

    In recent years, there has been a surge of interest in community detection algorithms for complex networks. A variety of computational heuristics, some with a long history, have been proposed for the identification of communities or, alternatively, of good graph partitions. In most cases, the algorithms maximize a particular objective function, thereby finding the ‘right’ split into communities. Although a thorough comparison of algorithms is still lacking, there has been an effort to design benchmarks, i.e., random graph models with known community structure against which algorithms can be evaluated. However, popular community detection methods and benchmarks normally assume an implicit notion of community based on clique-like subgraphs, a form of community structure that is not always characteristic of real networks. Specifically, networks that emerge from geometric constraints can have natural non clique-like substructures with large effective diameters, which can be interpreted as long-range communities. In this work, we show that long-range communities escape detection by popular methods, which are blinded by a restricted ‘field-of-view’ limit, an intrinsic upper scale on the communities they can detect. The field-of-view limit means that long-range communities tend to be overpartitioned. We show how, by adopting a dynamical perspective towards community detection [1], [2], in which the evolution of a Markov process on the graph is used as a zooming lens over the structure of the network at all scales, one can detect both clique-like and non clique-like communities without imposing an upper scale on the detection. Consequently, the performance of algorithms on inherently low-diameter, clique-like benchmarks may not always be indicative of equally good results in real networks with local, sparser connectivity. We illustrate our ideas with constructive examples and through the analysis of real-world networks from imaging, protein structures and the power grid, where a multiscale structure of non clique-like communities is revealed. PMID:22384178
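
    As a compact illustration of the dynamical perspective described above (not the authors' full multiscale implementation), the sketch below evaluates the Markov stability of a given hard partition at a single Markov time t, using a continuous-time random walk on an undirected graph; longer times act as a coarser zoom that favours larger communities.

```python
import numpy as np
from scipy.linalg import expm

def markov_stability(A, labels, t):
    """Markov stability r(t) of a hard partition of an undirected graph.
    A: symmetric adjacency matrix; labels: community label per node (0..c-1);
    t: Markov time of the continuous-time random walk."""
    d = A.sum(axis=1)
    pi = d / d.sum()                                   # stationary distribution
    L_rw = np.eye(len(d)) - A / d[:, None]             # random-walk Laplacian I - D^-1 A
    P_t = expm(-t * L_rw)                              # transition matrix at time t
    H = np.eye(len(set(labels)))[np.asarray(labels)]   # node-to-community indicator matrix
    R = np.diag(pi) @ P_t - np.outer(pi, pi)           # clustered autocovariance kernel
    return np.trace(H.T @ R @ H)

# Toy example: two triangles joined by a single edge, split into the two triangles
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(markov_stability(A, [0, 0, 0, 1, 1, 1], t=1.0))
```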

  10. Key performance indicators to benchmark hospital information systems - a delphi study.

    PubMed

    Hübner-Bloder, G; Ammenwerth, E

    2009-01-01

    To identify the key performance indicators for hospital information systems (HIS) that can be used for HIS benchmarking, a Delphi survey with one qualitative and two quantitative rounds was conducted. Forty-four HIS experts from health care IT practice and academia participated in all three rounds. Seventy-seven performance indicators were identified and organized into eight categories: technical quality, software quality, architecture and interface quality, IT vendor quality, IT support and IT department quality, workflow support quality, IT outcome quality, and IT costs. The highest-ranked indicators are related to clinical workflow support and user satisfaction. Isolated technical indicators or cost indicators were not seen as useful. The experts favored an interdisciplinary group of all the stakeholders, led by hospital management, to conduct the HIS benchmarking. They proposed benchmarking activities both at regular (annual) intervals and at defined events (for example, after IT introduction). Most of the experts stated that no HIS benchmarking activities are currently being performed in their institutions. In the context of IT governance, IT benchmarking is gaining importance in the healthcare area. The indicators identified reflect the view of health care IT professionals and researchers. Research is needed to further validate and operationalize key performance indicators, to provide an IT benchmarking framework, and to provide open repositories for a comparison of the HIS benchmarks of different hospitals.

  11. SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowenstein, J; Nguyen, H; Roll, J

    Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site's ability to develop a treatment plan that meets a specific protocol's treatment guidelines prior to placing its first patient on the protocol. A pre-treatment review is an actual patient placed on the protocol in which the dosimetry and contour volumes are evaluated for compliance with protocol guidelines prior to allowing the beginning of treatment. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on how to plan per the protocol and to prevent protocol deviations in patients accrued to a protocol. For both benchmarks and pre-treatment reviews, a dose volume analysis (DVA) was performed using MIM software. For pre-treatment reviews, a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the cases met protocol requirements. The 35 percent of sites failing the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, in 30 percent of the submitted cases the pre-treatment review failed, and the majority of these (71 percent) failed the DVA. Twenty percent of sites submitting patients failed to correct the dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases, it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803.

  12. A suite of standard post-tagging evaluation metrics can help assess tag retention for field-based fish telemetry research

    USGS Publications Warehouse

    Gerber, Kayla M.; Mather, Martha E.; Smith, Joseph M.

    2017-01-01

    Telemetry can inform many scientific and research questions if a context exists for integrating individual studies into the larger body of literature. Creating cumulative distributions of post-tagging evaluation metrics would allow individual researchers to relate their telemetry data to other studies. Widespread reporting of standard metrics is a precursor to the calculation of benchmarks for these distributions (e.g., mean, SD, 95% CI). Here we illustrate five types of standard post-tagging evaluation metrics using acoustically tagged Blue Catfish (Ictalurus furcatus) released into a Kansas reservoir. These metrics included: (1) percent of tagged fish detected overall, (2) percent of tagged fish detected daily using abacus plot data, (3) average number of (and percent of available) receiver sites visited, (4) date of last movement between receiver sites (and percent of tagged fish moving during that time period), and (5) number (and percent) of fish that egressed through exit gates. These metrics were calculated for one to three time periods: early (<10 d), during (weekly), and at the end of the study (5 months). Over three-quarters of our tagged fish were detected early (85%) and at the end (85%) of the study. Using abacus plot data, all tagged fish (100%) were detected at least one day and 96% were detected for > 5 days early in the study. On average, tagged Blue Catfish visited 9 (50%) and 13 (72%) of 18 within-reservoir receivers early and at the end of the study, respectively. At the end of the study, 73% of all tagged fish were detected moving between receivers. Creating statistical benchmarks for individual metrics can provide useful reference points. In addition, combining multiple metrics can inform ecology and research design. Consequently, individual researchers and the field of telemetry research can benefit from widespread, detailed, and standard reporting of post-tagging detection metrics.
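
    A minimal sketch of computing two of the standard post-tagging metrics described above (percent of tagged fish detected, and average number and percent of receiver sites visited) from a generic detection table; the column names and values are hypothetical, not the Blue Catfish dataset.

```python
import pandas as pd

# Hypothetical detection log: one row per detection event
detections = pd.DataFrame({
    "fish_id":  ["F01", "F01", "F02", "F03", "F03", "F03"],
    "receiver": ["R01", "R05", "R01", "R02", "R07", "R09"],
    "date":     pd.to_datetime(["2015-06-02", "2015-06-20", "2015-06-03",
                                "2015-06-02", "2015-07-15", "2015-10-30"]),
})
tagged_fish = ["F01", "F02", "F03", "F04"]   # all released fish, detected or not
n_receivers = 18

# Metric 1: percent of tagged fish detected at least once
pct_detected = 100 * detections["fish_id"].nunique() / len(tagged_fish)

# Metric 3: average number (and percent) of receiver sites visited per detected fish
sites_per_fish = detections.groupby("fish_id")["receiver"].nunique()
avg_sites = sites_per_fish.mean()
pct_sites = 100 * avg_sites / n_receivers

print(f"{pct_detected:.0f}% detected; {avg_sites:.1f} sites ({pct_sites:.0f}%) visited on average")
```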

  13. Feasibility analysis of using inverse modeling for estimating field-scale evapotranspiration in maize and soybean fields from soil water content monitoring networks

    NASA Astrophysics Data System (ADS)

    Foolad, Foad; Franz, Trenton E.; Wang, Tiejun; Gibson, Justin; Kilic, Ayse; Allen, Richard G.; Suyker, Andrew

    2017-03-01

    In this study, the feasibility of using inverse vadose zone modeling for estimating field-scale actual evapotranspiration (ETa) was explored at a long-term agricultural monitoring site in eastern Nebraska. Data from both point-scale soil water content (SWC) sensors and the area-average technique of cosmic-ray neutron probes were evaluated against independent ETa estimates from a co-located eddy covariance tower. While this methodology has been successfully used for estimates of groundwater recharge, it was essential to assess its performance for other components of the water balance such as ETa. In light of recent evaluations of land surface models (LSMs), independent estimates of hydrologic state variables and fluxes are critically needed benchmarks. The results here indicate reasonable estimates of daily and annual ETa from the point sensors, but with highly varied soil hydraulic function parameterizations due to local soil texture variability. The finding that multiple soil hydraulic parameterizations lead to equally good ETa estimates is consistent with the hydrological principle of equifinality. While this study focused on one particular site, the framework can easily be applied to other SWC monitoring networks across the globe. The value-added products of groundwater recharge and ETa flux from SWC monitoring networks will provide additional and more robust benchmarks for the validation of LSMs as they continue to improve their forecast skill. In addition, the value-added products of groundwater recharge and ETa often have more direct impacts on societal decision-making than SWC alone. Water fluxes affect human decision-making through policies on the long-term management of groundwater resources (recharge), yield forecasts (ETa), and optimal irrigation scheduling (ETa). Illustrating the societal benefits of SWC monitoring is critical to ensure the continued operation and expansion of these public datasets.

  14. Performance Evaluation and Improvement of Ferroelectric Field-Effect Transistor Memory

    NASA Astrophysics Data System (ADS)

    Yu, Hyung Suk

    Flash memory is rapidly reaching scaling limitations due to the reduction of charge in floating gates, charge leakage, and capacitive coupling between cells, which cause threshold voltage fluctuations, short retention times, and interference. Many new memory technologies are being considered as alternatives to flash memory in an effort to overcome these limitations. The Ferroelectric Field-Effect Transistor (FeFET) is one of the main emerging candidates because of its structural similarity to conventional FETs and its fast switching speed. Nevertheless, the performance of FeFETs has not been systematically compared and analyzed against other competing technologies. In this work, we first benchmark the intrinsic performance of FeFETs and other memories by simulation in order to identify the strengths and weaknesses of FeFETs. To simulate realistic memory applications, we compare memories in an array structure. For the comparisons, we construct an accurate delay model and verify it by benchmarking against exact HSPICE simulations. Second, we propose an accurate model for the FeFET memory window, since the existing model has limitations: it assumes symmetric operation voltages, which is not valid for the practical asymmetric operation voltages. In this modeling, we consider practical operation voltages and device dimensions. We also investigate realistic changes of the memory window over time and the retention time of FeFETs. Finally, to improve the memory window and subthreshold swing, we propose nonplanar junctionless structures for FeFETs. Using these structures, we study the dimensional dependences of crucial parameters such as memory window and subthreshold swing and also analyze key interference mechanisms.

  15. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    deWit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  17. Local implementation of the Essence of Care benchmarks.

    PubMed

    Jones, Sue

    The aims were to understand clinical practice benchmarking from the perspective of nurses working in a large acute NHS trust, to determine whether the nurses perceived that their commitment to Essence of Care led to improvements in care, and to identify the factors that influenced their role in the process and the organisational factors that influenced benchmarking. An ethnographic case study approach was adopted. Six themes emerged from the data, including two organisational issues: leadership and the values and/or culture of the organisation. The findings suggested that the leadership ability of the Essence of Care link nurses and the value placed on this work by the organisation were key to the success of benchmarking. A model for successful implementation of the Essence of Care is proposed based on the findings of this study, which lends itself to testing by other organisations.

  18. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  19. QUANTIFICATION AND INTERPRETATION OF TOTAL PETROLEUM HYDROCARBONS IN SEDIMENT SAMPLES BY A GC/MS METHOD AND COMPARISON WITH EPA 418.1 AND A RAPID FIELD METHOD

    EPA Science Inventory

    ABSTRACT: Total Petroleum hydrocarbons (TPH) as a lumped parameter can be easily and rapidly measured or monitored. Despite interpretational problems, it has become an accepted regulatory benchmark used widely to evaluate the extent of petroleum product contamination. Three cu...

  20. Results of the 2013 CASE Europe Salary Survey

    ERIC Educational Resources Information Center

    Paradise, Andrew

    2013-01-01

    CASE has conducted salary surveys to track trends in the profession and to help members benchmark salaries since 1982. Following CASE's major overhaul of the survey instrument and data collection system, CASE Europe fielded a European version of the salary survey for the second time in October 2012. All individual CASE Europe members at colleges,…

  1. Lessons from the Field: Developing and Implementing the Qatar Student Assessment System, 2002-2006. Technical Report

    ERIC Educational Resources Information Center

    Gonzalez, Gabriella; Le, Vi-Nhuan; Broer, Markus; Mariano, Louis T.; Froemel, J. Enrique; Goldman, Charles A.; DaVanzo, Julie

    2009-01-01

    Qatar has recently positioned itself to be a leader in education. Central to the country's efforts is the implementation of reforms to its K-12 education system. Central to the reform initiatives was the development of internationally benchmarked curriculum standards in four subjects: Arabic, English as a foreign language, mathematics, and…

  2. Learning Outcomes as a Key Concept in Policy Documents throughout Policy Changes

    ERIC Educational Resources Information Center

    Prøitz, Tine Sophie

    2015-01-01

    Learning outcomes can be considered to be a key concept in a changing education policy landscape, enhancing aspects such as benchmarking and competition. Issues relating to concepts of performance have a long history of debate within the field of education. Today, the concept of learning outcomes has become central in education policy development,…

  3. Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data

    PubMed Central

    2014-01-01

    Background The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition for a correctly mapped read taking into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator, that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data for which such a comparison has not yet been established. Conclusions A benchmark procedure to compare HTS data mappers is introduced with a new definition for the mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
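
    The sketch below illustrates the spirit of the paper's correctness definition for a mapped read, which checks the reported start and end positions against the simulated origin and bounds the allowed numbers of indels and substitutions; the function, argument names, and thresholds are illustrative and are not CuReSimEval's actual implementation.

```python
def is_correctly_mapped(mapped_start, mapped_end, n_indels, n_substitutions,
                        expected_start, expected_end,
                        pos_tolerance=5, max_indels=0, max_substitutions=0):
    """Correctness check in the spirit of the paper's definition: the reported
    start AND end must fall within a small tolerance of the expected positions,
    and the numbers of indels and substitutions must not exceed the allowed
    counts. All thresholds here are illustrative defaults."""
    return (abs(mapped_start - expected_start) <= pos_tolerance
            and abs(mapped_end - expected_end) <= pos_tolerance
            and n_indels <= max_indels
            and n_substitutions <= max_substitutions)

# Hypothetical read: mapped 2 bp away from its simulated origin, with one substitution
print(is_correctly_mapped(1002, 1101, 0, 1, 1000, 1100, max_substitutions=1))  # True
```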

  4. The Gaia-ESO Survey Astrophysical Calibration

    NASA Astrophysics Data System (ADS)

    Pancino, E.; Gaia-ESO Survey Consortium

    2016-05-01

    The Gaia-ESO Survey is a wide-field spectroscopic survey recently started with FLAMES at the VLT on Cerro Paranal, Chile. It will produce radial velocities more accurate than Gaia's for faint stars (down to V ≃ 18), and astrophysical parameters and abundances for approximately 100 000 stars belonging to all Galactic populations. Three hundred nights were assigned over 5 years (with the last year subject to approval after a detailed report). In particular, to connect with other ongoing and planned spectroscopic surveys, a detailed calibration program for the derivation of astrophysical parameters is planned, including well-known clusters, Gaia benchmark stars, and special equatorial calibration fields designed for wide field/multifiber spectrographs.

  5. StirMark Benchmark: audio watermarking attacks based on lossy compression

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, in this paper we address attacks against audio watermarks based on lossy audio compression algorithms, to be included in the test environment. We discuss the effect of different lossy compression algorithms such as MPEG-2 Audio Layer 3, Ogg, or VQF on a selection of audio test data. Our focus is on changes to the basic characteristics of the audio data, such as spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms; (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
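
    As a rough sketch of the kind of measurement involved, the code below compares an audio signal before and after an attack in terms of average power and a coarse spectral distance; the codec round-trip itself is replaced here by a crude FFT low-pass surrogate, so this illustrates the evaluation idea rather than StirMark Benchmark's implementation.

```python
import numpy as np

def attack_metrics(original, attacked):
    """Compare an audio signal before and after a lossy-compression attack.
    Returns the change in average power (dB) and a relative spectral distance.
    The compression/decompression step itself is assumed to happen elsewhere."""
    delta_power_db = 10 * np.log10(np.mean(attacked ** 2) / np.mean(original ** 2))
    spec_orig = np.abs(np.fft.rfft(original))
    spec_att = np.abs(np.fft.rfft(attacked))
    spectral_dist = np.linalg.norm(spec_orig - spec_att) / np.linalg.norm(spec_orig)
    return delta_power_db, spectral_dist

# Toy stand-in for a codec round-trip: low-pass filtering via FFT truncation
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 12000 * t)
X = np.fft.rfft(x)
X[int(8000 * len(x) / fs):] = 0            # crude surrogate for lossy compression
x_attacked = np.fft.irfft(X, n=len(x))
print(attack_metrics(x, x_attacked))
```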

  6. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    PubMed

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as both an opportunity and a risk for clinical workflows, health IT must undergo continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means of providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. A total of 199 chief information officers (CIOs) took part in the benchmarking. Their hospitals were assigned to reference groups of similar size and ownership, drawn from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.

  7. School-Based Cognitive-Behavioral Therapy for Adolescent Depression: A Benchmarking Study

    ERIC Educational Resources Information Center

    Shirk, Stephen R.; Kaplinski, Heather; Gudmundsen, Gretchen

    2009-01-01

    The current study evaluated cognitive-behavioral therapy (CBT) for adolescent depression delivered in health clinics and counseling centers in four high schools. Outcomes were benchmarked to results from prior efficacy trials. Fifty adolescents diagnosed with depressive disorders were treated by eight doctoral-level psychologists who followed a…

  8. Saturn Dynamo Model (Invited)

    NASA Astrophysics Data System (ADS)

    Glatzmaier, G. A.

    2010-12-01

    There has been considerable interest during the past few years about the banded zonal winds and global magnetic field on Saturn (and Jupiter). Questions regarding the depth to which the intense winds extend below the surface and the role they play in maintaining the dynamo continue to be debated. The types of computer models employed to address these questions fall into two main classes: general circulation models (GCMs) based on hydrostatic shallow-water assumptions from the atmospheric and ocean modeling communities and global non-hydrostatic deep convection models from the geodynamo and solar dynamo communities. The latter class can be further divided into Boussinesq models, which do not account for density stratification, and anelastic models, which do. Recent efforts to convert GCMs to deep circulation anelastic models have succeeded in producing fluid flows similar to those obtained from the original deep convection anelastic models. We describe results from one of the original anelastic convective dynamo simulations and compare them to a recent anelastic dynamo benchmark for giant gas planets. This benchmark is based on a polytropic reference state that spans five density scale heights with a radius and rotation rate similar to those of our solar system gas giants. The resulting magnetic Reynolds number is about 3000. Better spatial resolution will be required to produce more realistic predictions that capture the effects of both the density and electrical conductivity stratifications and include enough of the turbulent kinetic energy spectrum. Important additional physics may also be needed in the models. However, the basic models used in all simulation studies of the global dynamics of giant planets will hopefully first be validated by doing these simpler benchmarks.

  9. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon Tibbitts; Arnis Judzis

    2002-07-01

    This document details the progress to date on the OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING contract for the quarter from April 2002 through June 2002. Even though we are awaiting the optimization portion of the testing program, accomplishments include the following: (1) Presentation material was provided to the DOE/NETL project manager (Dr. John Rogers) for the DOE exhibit at the 2002 Offshore Technology Conference. (2) Two meetings at Smith International and one at Andergauge in Houston were held to investigate their interest in joining the Mud Hammer Performance study. (3) SDS Digger Tools (Task 3 Benchmarking participant) apparently has not negotiated a commercial deal with Halliburton on the supply of fluid hammers to the oil and gas business. (4) TerraTek is awaiting progress by Novatek (a DOE contractor) on the redesign and development of their next hammer tool; their delay will require an extension to TerraTek's contracted program. (5) Smith International has sufficient interest in the program to start engineering and chroming of collars for testing at TerraTek. (6) Shell's Brian Tarr has agreed to join the Industry Advisory Group for the DOE project. The addition of Brian Tarr is welcomed, as he has many years of experience with the Novatek tool and was involved in the early tests in Europe while with Mobil Oil. (7) Conoco's field trial of the Smith fluid hammer for an application in Vietnam was organized and has contributed to increased interest in their tool.

  10. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.
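
    Purely as an orientation to the coupling step described (transferring cell-by-cell leakage from the coarse MODFLOW model to the bottom of the GFLOW layer), the sketch below reads the vertical flow across the interface from a MODFLOW cell-by-cell budget file using flopy; the file name, layer index, and the subsequent hand-off into GFLOW are assumptions for illustration, not the published workflow.

```python
# Minimal sketch, assuming flopy is installed and the coarse MODFLOW model has
# written a cell-by-cell budget file; the transfer into GFLOW (as areal leakage
# applied to the bottom of its single analytic-element layer) is not shown.
import flopy.utils as fu

cbc = fu.CellBudgetFile("coarse_model.cbc")   # hypothetical file name
# Vertical flow across the bottom of layer 4 (0-based index 3), i.e. the
# interface between the layers replaced by GFLOW and the remaining MODFLOW layers
flf = cbc.get_data(text="FLOW LOWER FACE", kstpkper=(0, 0), full3D=True)[0]
interface_leakage = flf[3]                    # one 2-D array of fluxes (L^3/T per cell)
print(interface_leakage.shape)
```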

  11. [Benchmarks for interdisciplinary health and social sciences research: contributions of a research seminar].

    PubMed

    Kivits, Joëlle; Fournier, Cécile; Mino, Jean-Christophe; Frattini, Marie-Odile; Winance, Myriam; Lefève, Céline; Robelet, Magali

    2013-01-01

    This article reflects on an interdisciplinary seminar initiated by philosophy and sociology researchers and public health professionals. The objective of the seminar was to explore the mechanisms involved in setting up and conducting interdisciplinary research, by investigating the practical modalities of articulating health research with human and social sciences research in order to more clearly understand the conditions, tensions and contributions of collaborative research. These questions were discussed on the basis of a detailed analysis of four recent or current research projects. The case studies identified four typical epistemological or methodological issues faced by researchers in the fields of health and the human and social sciences: institutional conditions and their effects on research; deconstruction of the object; the researcher's commitment in his or her field; and the articulation of research methods. Three prerequisites for interdisciplinary research in the social and human sciences and in health were identified: mutual questioning of research positions and fields of study; awareness of the tensions related to institutional positions and disciplinary affiliations; and joint elaboration and exchange between various types of knowledge to ensure an interdisciplinary approach throughout the research process.

  12. Performance of a carbon nanotube field emission electron gun

    NASA Astrophysics Data System (ADS)

    Getty, Stephanie A.; King, Todd T.; Bis, Rachael A.; Jones, Hollis H.; Herrero, Federico; Lynch, Bernard A.; Roman, Patrick; Mahaffy, Paul

    2007-04-01

    A cold cathode field emission electron gun (e-gun) based on a patterned carbon nanotube (CNT) film has been fabricated for use in a miniaturized reflectron time-of-flight mass spectrometer (RTOF MS), with future applications in other charged particle spectrometers, and performance of the CNT e-gun has been evaluated. A thermionic electron gun has also been fabricated and evaluated in parallel and its performance is used as a benchmark in the evaluation of our CNT e-gun. Implications for future improvements and integration into the RTOF MS are discussed.

  13. PetIGA-MF: A multi-field high-performance toolbox for structure-preserving B-splines spaces

    DOE PAGES

    Sarmiento, Adel; Cortes, Adriano; Garcia, Daniel; ...

    2016-10-07

    We describe the development of a high-performance solution framework for isogeometric discrete differential forms based on B-splines: PetIGA-MF. Built on top of PetIGA, PetIGA-MF is a general multi-field discretization tool. To test the capabilities of our implementation, we solve different viscous flow problems such as the Darcy, Stokes, Brinkman, and Navier-Stokes equations. Several convergence benchmarks based on manufactured solutions are presented, demonstrating optimal convergence rates of the approximations and showing the accuracy and robustness of our solver.
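
    For readers unfamiliar with manufactured-solution benchmarks, the sketch below shows how an observed convergence rate is typically extracted from errors on successively refined meshes; the error values are made up for illustration and are not PetIGA-MF output.

```python
import numpy as np

# Illustrative errors e(h) against a manufactured solution on refined meshes
h = np.array([0.1, 0.05, 0.025, 0.0125])          # mesh sizes
err = np.array([2.3e-3, 5.9e-4, 1.5e-4, 3.7e-5])  # e.g. L2 errors (made-up numbers)

# Observed rate between successive refinements: p = log(e1/e2) / log(h1/h2)
rates = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(rates)   # values near 2 would indicate second-order convergence
```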

  14. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    PubMed

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
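
    The sketch below illustrates how a benchmark duration can be backed out of fitted logistic coefficients using the extra-risk definition of the benchmark dose approach; the coefficients are hypothetical, and the paper's covariate adjustment and confidence-limit calculation are not reproduced.

```python
import numpy as np

def benchmark_duration(b0, b1, bmr=0.05):
    """Benchmark duration (hours/day) from a fitted logistic dose-response
    P(d) = 1 / (1 + exp(-(b0 + b1*d))), using the extra-risk definition
    (P(BMD) - P(0)) / (1 - P(0)) = BMR. Coefficients here are illustrative."""
    p0 = 1.0 / (1.0 + np.exp(-b0))                 # background response at zero hours
    p_target = p0 + bmr * (1.0 - p0)               # response level defining the BMD
    return (np.log(p_target / (1.0 - p_target)) - b0) / b1

# Hypothetical coefficients (intercept absorbing worst-case job-stress covariates):
# these give roughly 9 and 11 hours/day for 5% and 10% benchmark responses
print(benchmark_duration(b0=-5.0, b1=0.25, bmr=0.05))
print(benchmark_duration(b0=-5.0, b1=0.25, bmr=0.10))
```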

  15. Developing and Trialling an independent, scalable and repeatable IT-benchmarking procedure for healthcare organisations.

    PubMed

    Liebe, J D; Hübner, U

    2013-01-01

    Continuous improvements of IT-performance in healthcare organisations require actionable performance indicators, regularly conducted, independent measurements, and meaningful and scalable reference groups. Existing IT-benchmarking initiatives have focussed on the development of reliable and valid indicators, but less on the question of how to implement an environment for conducting easily repeatable and scalable IT-benchmarks. This study aims at developing and trialling a procedure that meets the aforementioned requirements. We chose a well-established, regularly conducted (inter-)national IT-survey of healthcare organisations (IT-Report Healthcare) as the environment and offered the participants of the 2011 survey (CIOs of hospitals) the opportunity to enter a benchmark. The 61 structural and functional performance indicators covered, among others, the implementation status and integration of IT-systems and functions, global user satisfaction, and the resources of the IT-department. Healthcare organisations were grouped by size and ownership. The benchmark results were made available electronically, and feedback on the use of these results was requested after several months. Fifty-nine hospitals participated in the benchmarking. Reference groups consisted of up to 141 members depending on the number of beds (size) and the ownership (public vs. private). A total of 122 charts showing single-indicator frequency views were sent to each participant. The evaluation showed that 94.1% of the CIOs who participated in the evaluation considered this benchmarking beneficial and reported that they would enter again. Based on the feedback of the participants, we developed two additional views that provide a more consolidated picture. The results demonstrate that establishing an independent, easily repeatable and scalable IT-benchmarking procedure is possible and was deemed desirable. Based on these encouraging results, a new benchmarking round, which includes process indicators, is currently being conducted.

  16. An approach to estimate body dimensions through constant body ratio benchmarks.

    PubMed

    Chao, Wei-Cheng; Wang, Eric Min-Yang

    2010-12-01

    Building a new anthropometric database is a difficult and costly job that requires considerable manpower and time. However, most designers and engineers do not know how to convert old anthropometric data into applicable new data with minimal errors and costs (Wang et al., 1999). To simplify the process of converting old anthropometric data into useful new data, this study analyzed the available data in paired body dimensions in an attempt to determine constant body ratio (CBR) benchmarks that are independent of gender and age. In total, 483 CBR benchmarks were identified and verified from 35,245 ratios analyzed. Additionally, 197 estimation formulae, taking as inputs 19 easily measured body dimensions, were built using 483 CBR benchmarks. Based on the results for 30 recruited participants, this study determined that the described approach is more accurate and cost-effective than alternative techniques. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and to synthesize benchmarking principles, in order to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement and to promote its acceptance as an integral and effective part of benchmarking activity in health services. Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, and the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, is not always recognized or provided by National Health Service managers, who are absorbed in the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of published benchmarking literature was obtained through an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving through to benchmarking activity in health services, and including consideration not only of published examples of benchmarking approaches and models but also of web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and, in turn, applied to the health service (Bullivant 1998). The literature is also, in the main, descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach that needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  18. Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications.

    PubMed

    Corneanu, Ciprian Adrian; Simon, Marc Oliu; Cohn, Jeffrey F; Guerrero, Sergio Escalera

    2016-08-01

    Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging, and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.

  19. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Tengfang; Flapper, Joris; Ke, Jing

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry, covering four dairy processes: cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated by the level of process or plant detail, i.e., (1) plant level, (2) process-group level, and (3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products covered include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases that were established by reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011 and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded BEST-Dairy from the LBNL website. It is expected that use of the BEST-Dairy tool will advance understanding of energy and water usage in individual dairy plants, augment benchmarking activities in the marketplace, and facilitate implementation of efficiency measures and strategies to save energy and water in the dairy industry. Industrial adoption of this emerging tool and technology in the market is expected to benefit dairy plants, which are important customers of California utilities. Further demonstration of this benchmarking tool is recommended to facilitate its commercialization and the expansion of its functions. Wider use of the BEST-Dairy tool and its continuous expansion in functionality will help to reduce the actual consumption of energy and water in the dairy industry sector. The outcomes comply well with the goals set by AB 1250 for the PIER program.

  20. Recovery of evolution of Grad-Shafranov equilibria from single-spacecraft data: Benchmarking and application to a flux transfer event

    NASA Astrophysics Data System (ADS)

    Sonnerup, B. U.; Hasegawa, H.; Nakamura, T.

    2010-12-01

    Even after the advent of multi-spacecraft missions such as Cluster and THEMIS, it has been difficult to distinguish between time evolution of, and spatial variation within, a space plasma structure on the basis of in situ measurements. We present a method for analyzing the time evolution of two-dimensional (2D), magnetohydrostatic (Grad-Shafranov) equilibria, using data recorded by an observing probe as it traverses a quasi-static, 2D magnetic-field/plasma structure. The method recovers the spatial initial values used in classical Grad-Shafranov (GS) reconstruction [Sonnerup et al., JGR, 2006] for an interval before and after the time of actual measurements, by advancing them backward and forward in time based on a set of equations for an incompressible plasma; the result is a sequence of GS maps, effectively a movie of the 2D field structure. The method is successfully benchmarked using a 2D magnetohydrodynamic simulation of time-dependent magnetic reconnection, and is then applied to a magnetic flux transfer event (FTE) observed by Cluster at the dayside high-latitude magnetopause, which has previously been analyzed with the GS method [Hasegawa et al., Ann. Geophys., 2006]. The application shows that the field lines constituting the FTE flux rope were contracting toward its center as a result of modest convective flow in the region around the core of the flux rope.
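
    For orientation, the 2D magnetohydrostatic reduction underlying GS reconstruction is usually written in the following standard form (stated here from the general literature, not quoted from the paper), with invariant direction z and magnetic potential A(x, y):

```latex
\frac{\partial^2 A}{\partial x^2} + \frac{\partial^2 A}{\partial y^2}
  = -\mu_0 \frac{\mathrm{d}P_t(A)}{\mathrm{d}A},
\qquad
P_t(A) = p + \frac{B_z^2}{2\mu_0},
```

    where the plasma pressure p and the axial field B_z are functions of A alone; the reconstruction integrates this equation spatially from data along the probe path, and the method described above additionally advances the initial values backward and forward in time.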

  1. Study rationale and design of OPTIMISE, a randomised controlled trial on the effect of benchmarking on quality of care in type 2 diabetes mellitus.

    PubMed

    Nobels, Frank; Debacker, Noëmi; Brotons, Carlos; Elisaf, Moses; Hermans, Michel P; Michel, Georges; Muls, Erik

    2011-09-22

    To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Recruitment was completed in December 2008 with 3994 evaluable patients. This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study, that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. NCT00681850.

  2. Study rationale and design of OPTIMISE, a randomised controlled trial on the effect of benchmarking on quality of care in type 2 diabetes mellitus

    PubMed Central

    2011-01-01

    Background To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Methods Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Results Recruitment was completed in December 2008 with 3994 evaluable patients. Conclusions This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study, that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. Trial registration NCT00681850 PMID:21939502

  3. A Better Benchmark Assessment: Multiple-Choice versus Project-Based

    ERIC Educational Resources Information Center

    Peariso, Jamon F.

    2006-01-01

    The purpose of this literature review and Ex Post Facto descriptive study was to determine which type of benchmark assessment, multiple-choice or project-based, provides the best indication of general success on the history portion of the CST (California Standards Tests). The result of the study indicates that although the project-based benchmark…

  4. Benchmarking: A Study of School and School District Effect and Efficiency.

    ERIC Educational Resources Information Center

    Swanson, Austin D.; Engert, Frank

    The "New York State School Report Card" provides a vehicle for benchmarking with respect to student achievement. In this study, additional tools were developed for making external comparisons with respect to achievement, and tools were added for assessing fiscal policy and efficiency. Data from school years 1993-94 through 1995-96 were…

  5. Benchmarking Investments in Advancement: Results of the Inaugural CASE Advancement Investment Metrics Study (AIMS). CASE White Paper

    ERIC Educational Resources Information Center

    Kroll, Juidith A.

    2012-01-01

    The inaugural Advancement Investment Metrics Study, or AIMS, benchmarked investments and staffing in each of the advancement disciplines (advancement services, alumni relations, communications and marketing, fundraising and advancement management) as well as the return on the investment in fundraising specifically. This white paper reports on the…

  6. A Critical Thinking Benchmark for a Department of Agricultural Education and Studies

    ERIC Educational Resources Information Center

    Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.

    2014-01-01

    Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…

  7. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  8. A Dual-Plane PIV Study of Turbulent Heat Transfer Flows

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Wroblewski, Adam C.; Locke, Randy J.

    2016-01-01

    Thin film cooling is a widely used technique in turbomachinery and rocket propulsion applications, where cool injection air protects a surface from hot combustion gases. The injected air typically has a different velocity and temperature from the free stream combustion flow, yielding a flow field with high turbulence and large temperature differences. These thin film cooling flows provide a good test case for evaluating computational model prediction capabilities. The goal of this work is to provide a database of flow field measurements for validating computational flow prediction models applied to turbulent heat transfer flows. In this work we describe the application of a Dual-Plane Particle Image Velocimetry (PIV) technique in a thin film cooling wind tunnel facility where the injection air stream velocity and temperatures are varied in order to provide benchmark turbulent heat transfer flow field measurements. The Dual-Plane PIV data collected include all three components of velocity and all three components of vorticity, spanning the width of the tunnel at multiple axial measurement planes.

  9. Multi-fidelity Gaussian process regression for prediction of random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
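
    The record above describes recursive, co-kriging-style multi-fidelity GPR in words. As a purely illustrative sketch of the general idea (not the authors' code), one can fit a Gaussian process to plentiful low-fidelity data and then regress the scarce high-fidelity data on the inputs augmented with the low-fidelity prediction; the toy functions, kernels and sample sizes below are all assumptions.

        # Illustrative two-fidelity GPR sketch (scikit-learn); not the authors' recursive co-kriging code.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

        def f_low(x):   # cheap low-fidelity surrogate (toy assumption)
            return np.sin(8 * np.pi * x)

        def f_high(x):  # expensive high-fidelity model (toy assumption)
            return (x - 0.3) * f_low(x) ** 2

        x_lo = np.linspace(0, 1, 40)[:, None]   # many cheap samples
        x_hi = np.linspace(0, 1, 8)[:, None]    # few expensive samples

        gp_lo = GaussianProcessRegressor(C(1.0) * RBF(0.1), normalize_y=True)
        gp_lo.fit(x_lo, f_low(x_lo).ravel())

        # Second level: regress high-fidelity outputs on [x, low-fidelity prediction],
        # mimicking the recursive (co-kriging style) structure.
        feat_hi = np.hstack([x_hi, gp_lo.predict(x_hi)[:, None]])
        gp_hi = GaussianProcessRegressor(C(1.0) * RBF([0.1, 1.0]), normalize_y=True)
        gp_hi.fit(feat_hi, f_high(x_hi).ravel())

        x_test = np.linspace(0, 1, 200)[:, None]
        feat_test = np.hstack([x_test, gp_lo.predict(x_test)[:, None]])
        mean, std = gp_hi.predict(feat_test, return_std=True)  # prediction and uncertainty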

  10. ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics (CAA)

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C. (Editor); Ristorcelli, J. Ray (Editor); Tam, Christopher K. W. (Editor)

    1995-01-01

    The proceedings of the Benchmark Problems in Computational Aeroacoustics Workshop held at NASA Langley Research Center are the subject of this report. The purpose of the Workshop was to assess the utility of a number of numerical schemes in the context of the unusual requirements of aeroacoustical calculations. The schemes were assessed from the viewpoint of dispersion and dissipation -- issues important to long time integration and long distance propagation in aeroacoustics. Also investigated was the effect of implementing different boundary conditions. The Workshop included a forum in which practical engineering problems related to computational aeroacoustics were discussed. This discussion took the form of a dialogue between an industrial panel and the workshop participants and was an effort to suggest the direction of evolution of this field in the context of current engineering needs.

  11. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    PubMed

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  12. Organic field-effect transistors using single crystals.

    PubMed

    Hasegawa, Tatsuo; Takeya, Jun

    2009-04-01

    Organic field-effect transistors using small-molecule organic single crystals are developed to investigate fundamental aspects of organic thin-film transistors that have been widely studied for possible future markets for 'plastic electronics'. In reviewing the physics and chemistry of single-crystal organic field-effect transistors (SC-OFETs), the nature of intrinsic charge dynamics is elucidated for the carriers induced at the single crystal surfaces of molecular semiconductors. Materials for SC-OFETs are first reviewed with descriptions of the fabrication methods and the field-effect characteristics. In particular, a benchmark carrier mobility of 20-40 cm² V⁻¹ s⁻¹, achieved with thin platelets of rubrene single crystals, demonstrates the significance of the SC-OFETs and clarifies material limitations for organic devices. In the latter part of this review, we discuss the physics of microscopic charge transport by using SC-OFETs at metal/semiconductor contacts and along semiconductor/insulator interfaces. Most importantly, Hall effect and electron spin resonance (ESR) measurements reveal that interface charge transport in molecular semiconductors is properly described in terms of band transport and localization by charge traps.

  13. Organic field-effect transistors using single crystals

    PubMed Central

    Hasegawa, Tatsuo; Takeya, Jun

    2009-01-01

    Organic field-effect transistors using small-molecule organic single crystals are developed to investigate fundamental aspects of organic thin-film transistors that have been widely studied for possible future markets for ‘plastic electronics’. In reviewing the physics and chemistry of single-crystal organic field-effect transistors (SC-OFETs), the nature of intrinsic charge dynamics is elucidated for the carriers induced at the single crystal surfaces of molecular semiconductors. Materials for SC-OFETs are first reviewed with descriptions of the fabrication methods and the field-effect characteristics. In particular, a benchmark carrier mobility of 20–40 cm² V⁻¹ s⁻¹, achieved with thin platelets of rubrene single crystals, demonstrates the significance of the SC-OFETs and clarifies material limitations for organic devices. In the latter part of this review, we discuss the physics of microscopic charge transport by using SC-OFETs at metal/semiconductor contacts and along semiconductor/insulator interfaces. Most importantly, Hall effect and electron spin resonance (ESR) measurements reveal that interface charge transport in molecular semiconductors is properly described in terms of band transport and localization by charge traps. PMID:27877287

  14. Benchmarking: measuring the outcomes of evidence-based practice.

    PubMed

    DeLise, D C; Leasure, A R

    2001-01-01

    Measurement of the outcomes associated with implementation of evidence-based practice changes is becoming increasingly emphasized by multiple health care disciplines. A final step to the process of implementing and sustaining evidence-supported practice changes is that of outcomes evaluation and monitoring. The comparison of outcomes to internal and external measures is known as benchmarking. This article discusses evidence-based practice, provides an overview of outcomes evaluation, and describes the process of benchmarking to improve practice. A case study is used to illustrate this concept.

  15. Multirate Flutter Suppression System Design for the Benchmark Active Controls Technology Wing. Part 2; Methodology Application Software Toolbox

    NASA Technical Reports Server (NTRS)

    Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek

    2002-01-01

    To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report describes the user's manual and software toolbox developed at the University of Washington to design a multirate flutter suppression control law for the BACT wing.

  16. A new enhanced index tracking model in portfolio optimization with sum weighted approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng

    2017-04-01

    Index tracking is a portfolio management strategy which aims to construct the optimal portfolio to achieve a similar return to the benchmark index return at minimum tracking error without purchasing all the stocks that make up the index. Enhanced index tracking is an improved portfolio management strategy which aims to generate higher portfolio return than the benchmark index return besides minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate higher mean return than the benchmark index at minimum tracking error. Besides that, the proposed model is able to outperform the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach which contributes a 67% improvement in the portfolio mean return as compared to the existing model.
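
    As a complement to the abstract above, a minimal sketch of plain index tracking (minimizing tracking error under a full-investment, long-only constraint) is shown below. It is not the authors' sum weighted formulation; the return series and portfolio size are random placeholders.

        # Minimal index-tracking sketch: weights that minimize tracking error versus a benchmark.
        # Illustrative only; the paper's sum weighted enhancement is not reproduced here.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        T, n = 250, 10                                   # trading days, candidate stocks (assumed)
        stock_ret = rng.normal(0.0005, 0.01, size=(T, n))
        index_ret = stock_ret.mean(axis=1) + rng.normal(0, 0.002, size=T)  # toy benchmark returns

        def tracking_error(w):
            diff = stock_ret @ w - index_ret
            return np.sqrt(np.mean(diff ** 2))

        cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # fully invested
        bounds = [(0.0, 1.0)] * n                                  # long-only
        w0 = np.full(n, 1.0 / n)

        res = minimize(tracking_error, w0, method="SLSQP", bounds=bounds, constraints=cons)
        print("weights:", np.round(res.x, 3), "tracking error:", tracking_error(res.x))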

  17. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

  18. Early Childhood and Care in England: When Pedagogy Is Wed to Politics

    ERIC Educational Resources Information Center

    Aubrey, Carol

    2008-01-01

    The introduction to this article will seek to present a distillation of Sally Lubeck's achievements in order to provide a benchmark of existing knowledge in the field of early childhood care and education from her perspective and an indication of its likely future. Her work, it is suggested, provides an exemplification of the new sociology of…

  19. Large-Scale Academic Achievement Testing of Deaf and Hard-of-Hearing Students: Past, Present, and Future

    ERIC Educational Resources Information Center

    Qi, Sen; Mitchell, Ross E.

    2012-01-01

    The first large-scale, nationwide academic achievement testing program using Stanford Achievement Test (Stanford) for deaf and hard-of-hearing children in the United States started in 1969. Over the past three decades, the Stanford has served as a benchmark in the field of deaf education for assessing student academic achievement. However, the…

  20. Quality Assurance and Foreign Languages--Reflecting on Oral Assessment Practices in Two University Spanish Language Programs in Australia

    ERIC Educational Resources Information Center

    Díaz, Adriana R.; Hortiguera, Hugo; Espinoza Vera, Marcia

    2015-01-01

    In the era of quality assurance (QA), close scrutiny of assessment practices has been intensified worldwide across the board. However, in the Australian context, trends in QA efforts have not reached the field of modern/foreign languages. This has largely resulted in leaving the establishment of language proficiency benchmarking up to individual…

  1. Yoga for military service personnel with PTSD: A single arm study.

    PubMed

    Johnston, Jennifer M; Minami, Takuya; Greenwald, Deborah; Li, Chieh; Reinhardt, Kristen; Khalsa, Sat Bir S

    2015-11-01

    This study evaluated the effects of yoga on posttraumatic stress disorder (PTSD) symptoms, resilience, and mindfulness in military personnel. Participants completing the yoga intervention were 12 current or former military personnel who met the Diagnostic and Statistical Manual for Mental Disorders-Fourth Edition-Text Revision (DSM-IV-TR) diagnostic criteria for PTSD. Results were also benchmarked against other military intervention studies of PTSD using the Clinician Administered PTSD Scale (CAPS; Blake et al., 2000) as an outcome measure. Results of within-subject analyses supported the study's primary hypothesis that yoga would reduce PTSD symptoms (d = 0.768; t = 2.822; p = .009) but did not support the hypotheses that yoga would significantly increase mindfulness (d = 0.392; t = -0.9500; p = .181) or resilience (d = 0.270; t = -1.220; p = .124) in this population. Benchmarking results indicated that, as compared with the aggregated treatment benchmark (d = 1.074) obtained from published clinical trials, the current study's treatment effect (d = 0.768) was visibly lower, and compared with the waitlist control benchmark (d = 0.156), the treatment effect in the current study was visibly higher. (c) 2015 APA, all rights reserved.

  2. Ultracool dwarf benchmarks with Gaia primaries

    NASA Astrophysics Data System (ADS)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability <10-4, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ˜500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.

  3. SU-D-BRD-03: A Gateway for GPU Computing in Cancer Radiotherapy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, X; Folkerts, M; Shi, F

    Purpose: Graphics Processing Unit (GPU) computing has become increasingly important in radiotherapy. However, it is still difficult for general clinical researchers to access GPU codes developed by other researchers, and for developers to objectively benchmark their codes. Moreover, it is common to see repeated effort spent on developing low-quality GPU codes. The goal of this project is to establish an infrastructure for testing GPU codes, cross-comparing them, and facilitating code distribution in the radiotherapy community. Methods: We developed a system called Gateway for GPU Computing in Cancer Radiotherapy Research (GCR2). A number of GPU codes developed by our group and other developers can be accessed via a web interface. To use the services, researchers first upload their test data or use the standard data provided by our system. Then they can select the GPU device on which the code will be executed. Our system offers all mainstream GPU hardware for code benchmarking purposes. After the code run is complete, the system automatically summarizes and displays the computing results. We also released an SDK to allow developers to build their own algorithm implementations and submit their binary codes to the system. The submitted code is then systematically benchmarked using a variety of GPU hardware and representative data provided by our system. Developers can also compare their codes with others and generate benchmarking reports. Results: The developed system is fully functional. Through a user-friendly web interface, researchers are able to test various GPU codes. Developers also benefit from this platform by comprehensively benchmarking their codes on various GPU platforms and representative clinical data sets. Conclusion: We have developed an open platform allowing clinical researchers and developers to access GPUs and GPU codes. This development will facilitate the utilization of GPUs in the radiation therapy field.

  4. Thirty Meter Telescope narrow-field infrared adaptive optics system real-time controller prototyping results

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm; Kerley, Dan; Chapin, Edward L.; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi

    2016-07-01

    Prototyping and benchmarking were performed for the Real-Time Controller (RTC) of the Narrow Field InfraRed Adaptive Optics System (NFIRAOS). To perform wavefront correction, NFIRAOS utilizes two deformable mirrors (DM) and one tip/tilt stage (TTS). The RTC receives wavefront information from six Laser Guide Star (LGS) Shack-Hartmann WaveFront Sensors (WFS), one high-order Natural Guide Star Pyramid WaveFront Sensor (PWFS) and multiple low-order instrument detectors. The RTC uses this information to determine the commands to send to the wavefront correctors. NFIRAOS is the first-light AO system for the Thirty Meter Telescope (TMT). The prototyping was performed using dual-socket high-performance Linux servers with the real-time (PREEMPT_RT) patch and demonstrated the viability of a commercial off-the-shelf (COTS) hardware approach to large-scale AO reconstruction. In particular, a large custom matrix-vector multiplication (MVM) was benchmarked and met the latency requirements. In addition, all major inter-machine communication was verified to be adequate using 10Gb and 40Gb Ethernet. The results of this prototyping have enabled a CPU-based NFIRAOS RTC design to proceed with confidence, demonstrating that COTS hardware can meet the demanding performance requirements.
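
    The heart of the benchmark described above is the latency of a large matrix-vector multiply (MVM). A rough CPU timing sketch is given below; the matrix dimensions are placeholders, not the actual NFIRAOS reconstructor size.

        # Rough MVM latency benchmark sketch (dimensions are illustrative placeholders).
        import time
        import numpy as np

        n_act, n_slopes = 4096, 16384                            # assumed actuator / slope counts
        A = np.random.rand(n_act, n_slopes).astype(np.float32)   # reconstructor matrix
        s = np.random.rand(n_slopes).astype(np.float32)          # WFS slope vector

        for _ in range(3):                                       # warm up caches / BLAS threads
            A @ s

        n_rep = 20
        t0 = time.perf_counter()
        for _ in range(n_rep):
            cmd = A @ s                                          # DM command vector
        dt = (time.perf_counter() - t0) / n_rep
        print(f"mean MVM latency: {dt * 1e3:.2f} ms")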

  5. Homogeneous Molecular Catalysis of Electrochemical Reactions: Catalyst Benchmarking and Optimization Strategies.

    PubMed

    Costentin, Cyrille; Savéant, Jean-Michel

    2017-06-21

    Modern energy challenges currently trigger an intense interest in catalysis of redox reactions (electrochemical and photochemical), particularly those involving small molecules such as water, hydrogen, oxygen, proton, carbon dioxide. A continuously increasing number of molecular catalysts of these reactions, mostly transition metal complexes, have been proposed, rendering necessary procedures for their rational benchmarking and fueling the quest for leading principles that could inspire the design of improved catalysts. The search for "volcano plots" correlating catalysis kinetics to the stability of the key intermediate is a popular approach to the question in catalysis by surface-active sites, with as foremost example the electrochemical reduction of aqueous proton on metal surfaces. We discuss here, for the first time, on theoretical and experimental grounds, the pertinence of such an approach in the field of molecular catalysis. This is the occasion to insist on the virtue of careful mechanism assignments. Particular emphasis is put on the interest of expressing the catalysts' intrinsic kinetic properties by means of catalytic Tafel plots, which relate kinetics and overpotential. We also underscore that the principles and strategies put forward for the catalytic activation of the above-mentioned small molecules are general, as illustrated by catalytic applications outside this particular field.

  6. Benchmark for Numerical Models of Stented Coronary Bifurcation Flow.

    PubMed

    García Carrascal, P; García García, J; Sierra Pallares, J; Castro Ruiz, F; Manuel Martín, F J

    2018-09-01

    In-stent restenosis ails many patients who have undergone stenting. When the stented artery is a bifurcation, the intervention is particularly critical because of the complex stent geometry involved in these structures. Computational fluid dynamics (CFD) has been shown to be an effective approach when modeling blood flow behavior and understanding the mechanisms that underlie in-stent restenosis. However, these CFD models require validation through experimental data in order to be reliable. It is with this purpose in mind that we performed particle image velocimetry (PIV) measurements of velocity fields within flows through a simplified coronary bifurcation. Although the flow in this simplified bifurcation differs from the actual blood flow, it emulates the main fluid dynamic mechanisms found in hemodynamic flow. Experimental measurements were performed for several stenting techniques in both steady and unsteady flow conditions. The test conditions were strictly controlled, and uncertainty was accurately predicted. The results obtained in this research represent readily accessible, easy to emulate, detailed velocity fields and geometry, and they have been successfully used to validate our numerical model. These data can be used as a benchmark for further development of numerical CFD modeling in terms of comparison of the main flow pattern characteristics.

  7. Potential Deep Seated Landslide Mapping from Various Temporal Data - Benchmark, Aerial Photo, and SAR

    NASA Astrophysics Data System (ADS)

    Wang, Kuo-Lung; Lin, Jun-Tin; Lee, Yi-Hsuan; Lin, Meei-Ling; Chen, Chao-Wei; Liao, Ray-Tang; Chi, Chung-Chi; Lin, Hsi-Hung

    2016-04-01

    A landslide is not a hazard until development takes place in a high-potential area. This study attempts to map deep-seated landslides before they initiate. A study area in central Taiwan was selected; its geological setting is distinctive, consisting of slate. The major bedding direction in this area is northeast, with dips of 30-75 degrees to the southeast. Several deep-seated landslides on dip slopes were triggered by rainfall events. Benchmark survey data from 2002 to 2009 are used in this study; the benchmarks were measured along Highway No. 14B, which was built along the mountain ridgeline. Taiwan lies between oceanic and continental plates, and most GPS stations and benchmarks on the island show rising mountain elevations. The same trend is seen in the benchmarks in this area, but benchmarks located within landslide areas show below-average and even negative elevation change. Aerial photos from 1979 to 2007 were used for orthophoto generation; land-use changes over 30 years are obvious, and enlargement of the river channel is also observed in this area. Both the benchmarks and the aerial photos indicate that landslide potential exists in this area, but the extent of the landslides is difficult to define from these data alone. SAR data are therefore also used: DInSAR and SBAS analyses are applied to ALOS/PALSAR data from 2006 to 2010. DInSAR analysis shows that landslides can be mapped, but the error, which likely arises from vegetation, clouds, water vapour and other conditions, is not easy to reduce. To overcome this problem, the SBAS time-series analysis is adopted. The SBAS results for this area show that large deep-seated landslides are readily mapped and the accuracy of the vertical displacement is reasonable.

  8. Optimal type 2 diabetes mellitus management: the randomised controlled OPTIMISE benchmarking study: baseline results from six European countries.

    PubMed

    Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank

    2013-12-01

    Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial comparing physicians' individual performances with a peer group evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.

  9. E × B electron drift instability in Hall thrusters: Particle-in-cell simulations vs. theory

    NASA Astrophysics Data System (ADS)

    Boeuf, J. P.; Garrigues, L.

    2018-06-01

    The E × B Electron Drift Instability (E × B EDI), also called Electron Cyclotron Drift Instability, has been observed in recent particle simulations of Hall thrusters and is a possible candidate to explain anomalous electron transport across the magnetic field in these devices. This instability is characterized by the development of an azimuthal wave with wavelength in the mm range and velocity on the order of the ion acoustic velocity, which enhances electron transport across the magnetic field. In this paper, we study the development and convection of the E × B EDI in the acceleration and near plume regions of a Hall thruster using a simplified 2D axial-azimuthal Particle-In-Cell simulation. The simulation is collisionless and the ionization profile is not self-consistent but rather is given as an input parameter of the model. The aim is to study the development and properties of the instability for different values of the ionization rate (i.e., of the total ion production rate or current) and to compare the results with the theory. An important result is that the wavelength of the simulated azimuthal wave scales as the electron Debye length and that its frequency is on the order of the ion plasma frequency. This is consistent with the theory predicting destruction of electron cyclotron resonance of the E × B EDI in the non-linear regime resulting in the transition to an ion acoustic instability. The simulations also show that for plasma densities smaller than under nominal conditions of Hall thrusters, the field fluctuations induced by the E × B EDI are no longer sufficient to significantly enhance electron transport across the magnetic field, and transit time instabilities develop in the axial direction. The conditions and results of the simulations are described in detail in this paper and they can serve as benchmarks for comparisons between different simulation codes. Such benchmarks would be very useful to study the role of numerical noise (numerical noise can also be responsible for the destruction of electron cyclotron resonance) or the influence of the period of the azimuthal domain, as well as to reach a better and consensual understanding of the physics.
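
    The reported scalings (azimuthal wavelength of order the electron Debye length, frequency of order the ion plasma frequency, phase velocity of order the ion acoustic speed) follow the standard definitions, reproduced here in SI units as a reminder rather than as the paper's own notation:

        % Standard definitions behind the reported scalings (SI units; generic notation).
        \lambda_{De} = \sqrt{\frac{\varepsilon_0 k_B T_e}{n_e e^2}}, \qquad
        \omega_{pi}  = \sqrt{\frac{n_i Z^2 e^2}{\varepsilon_0 m_i}}, \qquad
        c_s          = \sqrt{\frac{k_B T_e}{m_i}} .
        % In the nonlinear, ion-acoustic-like regime the instability satisfies
        % k \lambda_{De} \sim 1 and \omega \sim \omega_{pi}.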

  10. A broken promise: microbiome differential abundance methods do not control the false discovery rate.

    PubMed

    Hawinkel, Stijn; Mattiello, Federico; Bijnens, Luc; Thas, Olivier

    2017-08-22

    High-throughput sequencing technologies allow easy characterization of the human microbiome, but the statistical methods to analyze microbiome data are still in their infancy. Differential abundance methods aim at detecting associations between the abundances of bacterial species and subject grouping factors. The results of such methods are important to identify the microbiome as a prognostic or diagnostic biomarker or to demonstrate efficacy of probiotic or antibiotic drugs. Because of a lack of benchmarking studies in the microbiome field, no consensus exists on the performance of the statistical methods. We have compared a large number of popular methods through extensive parametric and nonparametric simulation as well as real data shuffling algorithms. The results are consistent over the different approaches and all point to an alarming excess of false discoveries. This raises great doubts about the reliability of discoveries in past studies and imperils reproducibility of microbiome experiments. To further improve method benchmarking, we introduce a new simulation tool that allows one to generate correlated count data following any univariate count distribution; the correlation structure may be inferred from real data. Most simulation studies discard the correlation between species, but our results indicate that this correlation can negatively affect the performance of statistical methods. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
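
    The new simulation tool mentioned above generates correlated count data with arbitrary univariate marginals. A common construction for this (an assumption about the general approach, not the authors' implementation) is a Gaussian copula with negative-binomial marginals:

        # Gaussian-copula sketch for correlated negative-binomial counts.
        # Illustrative of the general approach only; not the authors' simulation tool.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n_samples, n_taxa = 100, 5

        # Target correlation structure between taxa (assumed).
        corr = 0.6 * np.ones((n_taxa, n_taxa)) + 0.4 * np.eye(n_taxa)

        # 1) Draw correlated Gaussians and map them to uniforms with the normal CDF.
        z = rng.multivariate_normal(np.zeros(n_taxa), corr, size=n_samples)
        u = stats.norm.cdf(z)

        # 2) Push the uniforms through per-taxon negative-binomial quantile functions.
        n_param = np.array([5, 2, 10, 3, 8])               # NB size parameters (assumed)
        p_param = np.array([0.10, 0.05, 0.30, 0.20, 0.15])
        counts = np.column_stack(
            [stats.nbinom.ppf(u[:, j], n_param[j], p_param[j]) for j in range(n_taxa)]
        ).astype(int)

        print(counts[:5])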

  11. Groundwater-quality data in the Cascade Range and Modoc Plateau study unit, 2010-Results from the California GAMA Program

    USGS Publications Warehouse

    Shelton, Jennifer L.; Fram, Miranda S.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the 39,000-square-kilometer Cascade Range and Modoc Plateau (CAMP) study unit was investigated by the U.S. Geological Survey (USGS) from July through October 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The CAMP study unit is the thirty-second study unit to be sampled as part of the GAMA PBP. The GAMA CAMP study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined as that part of the aquifer corresponding to the open or screened intervals of wells listed in the California Department of Public Health (CDPH) database for the CAMP study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from the quality of groundwater in the primary aquifer system; shallow groundwater may be more vulnerable to surficial contamination. In the CAMP study unit, groundwater samples were collected from 90 wells and springs in 6 study areas (Sacramento Valley Eastside, Honey Lake Valley, Cascade Range and Modoc Plateau Low Use Basins, Shasta Valley and Mount Shasta Volcanic Area, Quaternary Volcanic Areas, and Tertiary Volcanic Areas) in Butte, Lassen, Modoc, Plumas, Shasta, Siskiyou, and Tehama Counties. Wells and springs were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells). Groundwater samples were analyzed for field water-quality indicators, organic constituents, perchlorate, inorganic constituents, radioactive constituents, and microbial indicators. Naturally occurring isotopes and dissolved noble gases also were measured to provide a dataset that will be used to help interpret the sources and ages of the sampled groundwater in subsequent reports. In total, 221 constituents were investigated for this study. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at approximately 10 percent of the wells in the CAMP study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 90 percent of the compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is served to the consumer, not to untreated groundwater. 
However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All organic constituents and most inorganic constituents that were detected in groundwater samples from the 90 grid wells in the CAMP study unit were detected at concentrations less than drinking-water benchmarks. Of the 148 organic constituents analyzed, 27 were detected in groundwater samples; concentrations of all detected constituents were less than regulatory and nonregulatory health-based benchmarks, and all were less than 1/10 of benchmark levels. One or more organic constituents were detected in 52 percent of the grid wells in the CAMP study unit: VOCs were detected in 30 percent, and pesticides and pesticide degradates were detected in 31 percent. Trace elements, major ions, nutrients, and radioactive constituents were sampled for at 90 grid wells in the CAMP study unit, and most detected concentrations were less than health-based benchmarks. Exceptions include three detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (µg/L), two detections of boron greater than the CDPH notification level (NL-CA) of 1,000 µg/L, two detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 µg/L, two detections of vanadium greater than the CDPH notification level (NL-CA) of 50 µg/L, one detection of nitrate, as nitrogen, greater than the MCL-US of 10 milligrams per liter (mg/L), two detections of uranium greater than the MCL-US of 30 µg/L and the MCL-CA of 20 picocuries per liter (pCi/L), one detection of radon-222 greater than the proposed MCL-US of 4,000 pCi/L, and two detections of gross alpha particle activity greater than the MCL-US of 15 pCi/L. Results for inorganic constituents with non-regulatory benchmarks set for aesthetic concerns showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 µg/L were detected in four grid wells. Manganese concentrations greater than the SMCL-CA of 50 µg/L were detected in nine grid wells. Chloride and TDS were detected at concentrations greater than the upper SMCL-CA benchmarks of 500 mg/L and 1,000 mg/L, respectively, in one grid well. Microbial indicators (total coliform and Escherichia coli [E. coli]) were detected in 11 percent of the 83 grid wells sampled for these analyses in the CAMP study unit. The presence of total coliform was detected in nine grid wells, and the presence of E. coli was detected in one of these same grid wells.

  12. Simulation of guided-wave ultrasound propagation in composite laminates: Benchmark comparisons of numerical codes and experiment.

    PubMed

    Leckey, Cara A C; Wheeler, Kevin R; Hafiychuk, Vasyl N; Hafiychuk, Halyna; Timuçin, Doğan A

    2018-03-01

    Ultrasonic wave methods constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials, such as carbon fiber reinforced polymer (CFRP) laminates. Computational models of ultrasonic wave excitation, propagation, and scattering in CFRP composites can be extremely valuable in designing practicable NDE and SHM hardware, software, and methodologies that accomplish the desired accuracy, reliability, efficiency, and coverage. The development and application of ultrasonic simulation approaches for composite materials is an active area of research in the field of NDE. This paper presents comparisons of guided wave simulations for CFRP composites implemented using four different simulation codes: the commercial finite element modeling (FEM) packages ABAQUS, ANSYS, and COMSOL, and a custom code executing the Elastodynamic Finite Integration Technique (EFIT). Benchmark comparisons are made between the simulation tools and both experimental laser Doppler vibrometry data and theoretical dispersion curves. A pristine case and a delamination-type case (Teflon insert in the experimental specimen) are studied. A summary is given of the accuracy of simulation results and the respective computational performance of the four different simulation tools. Published by Elsevier B.V.

  13. Benchmarks: Reports of the NASA Science Institutes Team

    NASA Technical Reports Server (NTRS)

    Diaz, A. V.

    1995-01-01

    This report results from a benchmarking study undertaken by NASA as part of its planning for the possible creation of new science Institutes. Candidate Institutes under consideration cover scientific and technological activities ranging from biomedical to astrophysical research and from the global hydrological cycle to microgravity material science. Should NASA create these Institutes, the intent will be to preserve and strengthen key science and technology activities now being performed by Government employees at NASA Field Centers. Because the success of these projected non-Government-operated Institutes is vital for the continued development of space science and applications, NASA has sought to identify the best practices of successful existing scientific and technological research institutions as they carry out those processes that will be most important for the new science Institutes. While many individuals and organizations may be interested in our findings, the primary use of this report will be to formulate plans for establishing the new science Institutes. As a result, the report is organized so that the "best practices" of the finest institutes are associated with characteristics of all institutes. These characteristics or "attributes" serve as the headings for the main body of this report.

  14. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no single guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. Also included are updated benchmark values where appropriate, new benchmark values, replacement of secondary sources with primary sources, and more complete documentation of the sources and derivation of all values.

  15. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
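
    The competency threshold described above is simply 75% of the mean expert score for each simulator metric; a trivial sketch of that calculation with made-up scores:

        # Competency benchmark at 75% of the mean expert score, per simulator metric.
        # Metric names and scores are made-up placeholders.
        import numpy as np

        expert_scores = {"overall_score": [82, 90, 88], "efficiency_score": [75, 80, 78]}
        benchmarks = {m: 0.75 * np.mean(s) for m, s in expert_scores.items()}

        trainee = {"overall_score": 70, "efficiency_score": 55}
        for metric, target in benchmarks.items():
            status = "competent" if trainee[metric] >= target else "below benchmark"
            print(f"{metric}: target {target:.1f}, trainee {trainee[metric]} -> {status}")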

  16. Benchmark studies of thermal jet mixing in SFRs using a two-jet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omotowa, O. A.; Skifton, R.; Tokuhiro, A.

    To guide the modeling, simulations and design of Sodium Fast Reactors (SFRs), we explore and compare the predictive capabilities of two numerical solvers, COMSOL and OpenFOAM, in the thermal jet mixing of two buoyant jets typical of the outlet flow from an SFR tube bundle. This process will help optimize on-going experimental efforts at obtaining high-resolution data for verification and validation (V&V) of CFD codes as anticipated in next-generation nuclear systems. Using the k-ε turbulence models of both codes as reference, their ability to simulate the turbulence behavior in similar environments was first validated against single-jet experimental data reported in the literature. This study investigates the thermal mixing of two parallel jets having a temperature difference (hot-to-cold) ΔT_hc = 5 °C and 10 °C and velocity ratios U_c/U_h = 0.5 and 1. Results of the computed turbulent quantities due to convective mixing and the variations in the flow field along the axial position are presented. In addition, this study also evaluates the effect of the spacing ratio between jets in predicting the flow field and jet behavior in the near and far fields. (authors)

  17. Facility Benchmarking Trends in Tertiary Education - An Australian Case Study.

    ERIC Educational Resources Information Center

    Fisher, Kenn

    2001-01-01

    Presents how Australia's facility managers are responding to the growing impact of tertiary education participation and the increase in educational facility usage. Topics cover strategic asset management and the benchmarking of education physical assets and postsecondary institutions. (GR)

  18. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Ivanova, Tatiana; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications [1]. Often benchmarks are taken from international handbooks [2, 3]. Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results [1]. This work aims at quantifying the importance of benchmarks used in application-dependent cross-section validation. The approach is based on the well-known Generalized Linear Least-Squares Method (GLLSM), extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross-section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
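
    For reference, the GLLSM adjustment behind the weighting analysis is usually written as below; the notation is generic and is not necessarily the authors' exact formulation.

        % Standard GLLS adjustment (generic notation).
        % x: prior cross sections, C_x: their covariance, S: benchmark sensitivity matrix,
        % E - T(x): measured-minus-calculated benchmark discrepancies, C_E: benchmark covariance.
        x' = x + C_x S^{\mathsf{T}} \left( S C_x S^{\mathsf{T}} + C_E \right)^{-1} \bigl( E - T(x) \bigr),
        \qquad
        C_{x'} = C_x - C_x S^{\mathsf{T}} \left( S C_x S^{\mathsf{T}} + C_E \right)^{-1} S C_x .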

  19. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    PubMed Central

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identifying potential hits, i.e., compounds capable of interacting with a given target and potentially modulate its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compounds subsets is critical to limit the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds that has considerably changed over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoys selection in benchmarking databases as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  20. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    PubMed

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.

  1. A large-scale benchmark of gene prioritization methods.

    PubMed

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
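
    A schematic of the GO-based leave-one-out benchmarking idea described above is sketched below; the scoring function, gene sets and gene names are placeholders, not the authors' FunCoup-based pipeline.

        # Leave-one-out benchmarking sketch over GO-term gene sets.
        # prioritize() is a placeholder for any network-based gene prioritization method.
        import random

        def prioritize(seed_genes, candidate_genes):
            # Placeholder: a real method would score candidates by network proximity to the seeds.
            return sorted(candidate_genes, key=lambda g: random.random())

        go_terms = {"GO:0006281": ["BRCA1", "BRCA2", "RAD51", "ATM", "TP53"]}  # toy gene set
        all_genes = go_terms["GO:0006281"] + [f"GENE{i}" for i in range(95)]   # toy genome

        ranks = []
        for term, genes in go_terms.items():
            for held_out in genes:                      # leave one annotated gene out
                seeds = [g for g in genes if g != held_out]
                candidates = [g for g in all_genes if g not in seeds]
                ranking = prioritize(seeds, candidates)
                ranks.append(ranking.index(held_out) + 1)

        print("median rank of held-out genes:", sorted(ranks)[len(ranks) // 2])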

  2. Global height datum unification: a new approach in gravity potential space

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.

    2005-12-01

    The problem of “global height datum unification” is solved in the gravity potential space based on: (1) high-resolution local gravity field modeling, (2) geocentric coordinates of the reference benchmark, and (3) a known value of the geoid’s potential. The high-resolution local gravity field model is derived based on a solution of the fixed-free two-boundary-value problem of the Earth’s gravity field using (a) potential difference values (from precise leveling), (b) modulus of the gravity vector (from gravimetry), (c) astronomical longitude and latitude (from geodetic astronomy and/or combination of Global Navigation Satellite System (GNSS) observations with total station measurements), and (d) satellite altimetry. Knowing the height of the reference benchmark in the national height system and its geocentric GNSS coordinates, and using the derived high-resolution local gravity field model, the gravity potential value of the zero point of the height system is computed. The difference between the derived gravity potential value of the zero point of the height system and the geoid’s potential value is computed. This potential difference gives the offset of the zero point of the height system from the geoid in the “potential space”, which is transferred into “geometry space” using the transformation formula derived in this paper. The method was applied to the computation of the offset of the zero point of the Iranian height datum from the geoid’s potential value W_0 = 62636855.8 m²/s². According to the geometry space computations, the height datum of Iran is 0.09 m below the geoid.
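
    In its simplest form, the transfer of the datum offset from potential space to geometry space quoted above amounts to dividing the potential difference by mean normal gravity; the relation below is a hedged simplification (the paper derives a more complete transformation formula), and the numerical values are only an arithmetic illustration of the quoted 0.09 m offset.

        % Simplified potential-space to geometry-space transfer (hedged approximation).
        \delta W_0 = W_{P_0} - W_0, \qquad
        \lvert \delta H \rvert \approx \frac{\lvert \delta W_0 \rvert}{\bar{\gamma}} .
        % Illustration: \lvert \delta W_0 \rvert \approx 0.9\ \mathrm{m^2/s^2} with
        % \bar{\gamma} \approx 9.8\ \mathrm{m/s^2} gives \lvert \delta H \rvert \approx 0.09\ \mathrm{m},
        % the magnitude reported for the Iranian datum; the sign depends on the adopted convention.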

  3. Benchmarking in emergency health systems.

    PubMed

    Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg

    2002-12-01

    This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.

  4. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  5. A novel platform to study magnetized high-velocity collisionless shocks

    DOE PAGES

    Higginson, D. P.; Korneev, Ph; Béard, J.; ...

    2014-12-13

    An experimental platform to study the interaction of two colliding high-velocity (0.01–0.2c; 0.05–20 MeV) proton plasmas in a high-strength (20 T) magnetic field is introduced. This platform aims to study the collision of magnetized plasmas accelerated via the Target-Normal-Sheath-Acceleration mechanism and initially separated by distances of a few hundred microns. The plasmas are accelerated from solid targets positioned inside a few-cubic-millimeter cavity located within a Helmholtz coil that provides up to 20 T magnetic fields. Various parameters of the plasmas at their interaction location are estimated. These show an interaction that is highly non-collisional and that becomes more and more dominated by the magnetic fields as time progresses (from 5 to 60 ps). Particle-in-cell simulations are used to reproduce the initial acceleration of the plasma, both via simulations including the laser interaction and via simulations that start with preheated electrons (to save dramatically on computational expense). The benchmarking of such simulations with the experiment and with each other will be used to understand the physical interaction when a magnetic field is applied. In conclusion, the experimental density profile of the interacting plasmas is shown for the case without an applied magnetic field, to show that without an applied field the development of high-velocity shocks, as a result of particle-to-particle collisions, is not achievable in the configuration considered.

  6. A novel platform to study magnetized high-velocity collisionless shocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higginson, D. P.; Korneev, Ph; Béard, J.

    An experimental platform to study the interaction of two colliding high-velocity (0.01–0.2c; 0.05–20 MeV) proton plasmas in a high-strength (20 T) magnetic field is introduced. This platform aims to study the collision of magnetized plasmas accelerated via the Target-Normal-Sheath-Acceleration mechanism and initially separated by distances of a few hundred microns. The plasmas are accelerated from solid targets positioned inside a few-cubic-millimeter cavity located within a Helmholtz coil that provides up to 20 T magnetic fields. Various parameters of the plasmas at their interaction location are estimated. These show an interaction that is highly non-collisional and that becomes more and more dominated by the magnetic fields as time progresses (from 5 to 60 ps). Particle-in-cell simulations are used to reproduce the initial acceleration of the plasma, both via simulations including the laser interaction and via simulations that start with preheated electrons (to save dramatically on computational expense). The benchmarking of such simulations with the experiment and with each other will be used to understand the physical interaction when a magnetic field is applied. In conclusion, the experimental density profile of the interacting plasmas is shown for the case without an applied magnetic field, to show that without an applied field the development of high-velocity shocks, as a result of particle-to-particle collisions, is not achievable in the configuration considered.

  7. Alternative industrial carbon emissions benchmark based on input-output analysis

    NASA Astrophysics Data System (ADS)

    Han, Mengyao; Ji, Xi

    2016-12-01

    Some problems exist in the current carbon emissions benchmark setting systems. The primary consideration in industrial carbon emissions standards relates to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method links direct carbon emissions with inter-industrial economic exchanges and systematically quantifies the carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, at the first level of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to each responsible party in a practical way through the measurement of complex production and supply chains, and aims to reduce carbon emissions at their original sources. This method is expected to be further developed under uncertain internal and external contexts and to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
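
    The embodied-intensity idea rests on standard environmentally extended input-output algebra: direct emission intensities are propagated through the Leontief inverse so that each unit of final demand carries its direct plus indirect emissions. The sketch below shows that calculation for an invented three-sector economy; it is not the Beijing data used in the study.

```python
# Minimal sketch of the standard environmentally extended input-output
# calculation that underlies embodied-intensity benchmarks (illustrative
# 3-sector numbers; not the data used in the study).
import numpy as np

A = np.array([[0.10, 0.05, 0.02],      # technical coefficient matrix
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.05]])
f = np.array([0.8, 2.5, 0.3])           # direct emission intensity (tCO2 per unit output)

leontief_inverse = np.linalg.inv(np.eye(3) - A)
embodied_intensity = f @ leontief_inverse   # direct + indirect emissions per unit final demand

print(np.round(embodied_intensity, 3))
```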

  8. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  9. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  10. Professional Learning: Trends in State Efforts. Benchmarking State Implementation of College- and Career-Readiness Standards

    ERIC Educational Resources Information Center

    Anderson, Kimberly; Mire, Mary Elizabeth

    2016-01-01

    This report presents a multi-year study of how states are implementing their state college- and career-readiness standards. In this report, the Southern Regional Education Board's (SREB's) Benchmarking State Implementation of College- and Career-Readiness Standards project studied state efforts in 2014-15 and 2015-16 to foster effective…

  11. Towards an automated and efficient calculation of resonating vibrational states based on state-averaged multiconfigurational approaches

    NASA Astrophysics Data System (ADS)

    Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian; Rauhut, Guntram

    2015-12-01

    Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but require the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.

  12. Towards an automated and efficient calculation of resonating vibrational states based on state-averaged multiconfigurational approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian

    Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but require the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.

  13. Glacier Changes in the Cordillera Blanca, Peru, Derived From SPOT5 Imagery, GIS and Field- Based Measurements

    NASA Astrophysics Data System (ADS)

    Racoviteanu, A.; Arnaud, Y.; Williams, M. W.; Singh Khalsa, S.

    2007-12-01

    There is urgency in deriving an extensive dataset of glacier changes within the Cordillera Blanca, Peru, in a cost-effective and timely manner. Rapid glacial retreat during the last decades in this area poses a threat to water resources, hydroelectric power and local traditions. While there is some information on decadal changes in glacier extents, there still remains a paucity of mass balance measurements and glacier parameters such as hypsometry, size distribution and termini elevations. Here we investigate decadal changes in glacier parameters for the Cordillera Blanca of Peru using data from the Système Probatoire d'Observation de la Terre (SPOT) sensor, an older glacier inventory from 1970 aerial photography, field-based mass balance measurements and meteorological observations. We focus on: constructing a geospatial glacier inventory from 2003 SPOT scenes; mass balance estimation using remote sensing and field data; the frequency distribution of glacier area; changes in termini elevations; hypsometry changes over time; glacier topography (slope, aspect, length/width ratio); AAR vs. mass balance for the Artesonraju and Yanamarey benchmark glaciers; and precipitation and temperature trends in the region. Over the last 25 years, mean temperature increases of 0.09 deg.C/yr at lower elevations were greater than the 0.01 deg.C/yr at higher elevations, with little change in precipitation. Comparison of the new SPOT-based glacier inventory with the 1970 inventory shows that glaciers in the Cordillera Blanca retreated at a rate of 0.6% per year over the last three decades, with no significant difference in the rate of area loss between the east and west sides. At lower elevations there is an upward shift of glacier termini along with a decrease in glacier area. Small glaciers are losing more area than large glaciers. Based on the relationship between specific mass balance (bn) and accumulation area ratio (AAR) for the two benchmark glaciers, we predicted a steady-state equilibrium line altitude (ELA) of approximately 5050 m for the range as a whole. Additional field work is needed to more accurately establish the bn vs. AAR curves and to better determine the most representative benchmark glacier to use in predicting the response of the entire system to climate changes.

  14. A Kinesthetic Learning Approach to Earth Science for 3rd and 4th Grade Students on the Pajarito Plateau, Los Alamos, NM

    NASA Astrophysics Data System (ADS)

    Wershow, H. N.; Green, M.; Stocker, A.; Staires, D.

    2010-12-01

    Current efforts towards Earth Science literacy in New Mexico are guided by the New Mexico Science Benchmarks [1]. We are geoscience professionals in Los Alamos, NM who believe there is an important role for non-traditional educators utilizing innovative teaching methods. We propose to further Earth Science literacy for local 3rd and 4th grade students using a kinesthetic learning approach, with the goal of fostering an interactive relationship between the students and their geologic environment. We will be working in partnership with the Pajarito Environmental Education Center (PEEC), which teaches the natural heritage of the Pajarito Plateau to 3rd and 4th grade students from the surrounding area, as well as the Family YMCA’s Adventure Programs Director. The Pajarito Plateau provides a remarkable geologic classroom because minimal structural features complicate the stratigraphy and dramatic volcanic and erosional processes are plainly on display and easily accessible. Our methodology consists of two approaches. First, we will build an interpretive display of the local geology at PEEC that will highlight prominent rock formations and geologic processes seen on a daily basis. It will include a simplified stratigraphic section with field specimens and a map linked to each specimen’s location to encourage further exploration. Second, we will develop and implement a kinesthetic curriculum for an exploratory field class. Active engagement with geologic phenomena will take place in many forms, such as a scavenger hunt for precipitated crystals in the vesicles of basalt flows and a search for progressively smaller rhyodacite clasts scattered along an actively eroding canyon. We believe students will be more receptive to origin explanations when they possess a piece of the story. Students will be provided with field books to make drawings of geologic features. This will encourage independent assessment of phenomena and introduce the skill of scientific observation. We expect students to develop comprehension of basic geologic concepts and processes such as erosion and sediment transport, caldera formation, ash flows, crystallization and volcanic cooling features. More importantly, we hope students will become excited about their geologic environment and pursue further engagement. We will attempt to quantify student comprehension and engagement by administering simple questionnaires before and after exposure to both the PEEC display and the field class. ____________________________________________________________ [1] New Mexico Science Content Standards, Benchmarks, and Performance Standards. Approved 2003, New Mexico State Department of Education. 3rd Grade Benchmark: “Know that Earth’s features are constantly changed by a combination of slow and rapid processes that include the action of volcanoes, earthquakes, mountain building, biological changes, erosion, and weathering” 4th Grade Benchmark: “Know that the properties of rocks and minerals reflect the processes that shaped them (i.e., igneous, metamorphic, and sedimentary rocks)”

  15. Small-amplitude acoustics in bulk granular media

    NASA Astrophysics Data System (ADS)

    Henann, David L.; Valenza, John J., II; Johnson, David L.; Kamrin, Ken

    2013-10-01

    We propose and validate a three-dimensional continuum modeling approach that predicts small-amplitude acoustic behavior of dense-packed granular media. The model is obtained through a joint experimental and finite-element study focused on the benchmark example of a vibrated container of grains. Using a three-parameter linear viscoelastic constitutive relation, our continuum model is shown to quantitatively predict the effective mass spectra in this geometry, even as geometric parameters for the environment are varied. Further, the model's predictions for the surface displacement field are validated mode-by-mode against experiment. A primary observation is the importance of the boundary condition between grains and the quasirigid walls.

  16. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
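
    The screening decision rule described above reduces to a simple comparison of an ambient concentration against the lower and upper benchmarks. A minimal sketch, with invented values, is shown below.

```python
# Illustrative decision rule for contaminant screening as described above:
# exceed the upper benchmark -> clearly of concern; exceed only the lower
# benchmark -> of concern unless other information rules it out; below the
# lower benchmark (with adequate data) -> not of concern. Values are made up.
def screen(ambient: float, lower_benchmark: float, upper_benchmark: float) -> str:
    if ambient >= upper_benchmark:
        return "contaminant of concern (upper benchmark exceeded)"
    if ambient >= lower_benchmark:
        return "potential contaminant of concern (lower benchmark exceeded)"
    return "not of concern (below lower benchmark)"

# Hypothetical example: measured copper in surface water vs. assumed benchmarks (ug/L)
print(screen(ambient=9.2, lower_benchmark=3.1, upper_benchmark=13.0))
```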

  17. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  18. APPLICATION OF BENCHMARK DOSE METHODOLOGY TO DATA FROM PRENATAL DEVELOPMENTAL TOXICITY STUDIES

    EPA Science Inventory

    The benchmark dose (BMD) concept was applied to 246 conventional developmental toxicity datasets from government, industry and commercial laboratories. Five modeling approaches were used, two generic and three specific to developmental toxicity (DT models). BMDs for both quantal ...

  19. Nutrient and pesticide contamination bias estimated from field blanks collected at surface-water sites in U.S. Geological Survey Water-Quality Networks, 2002–12

    USGS Publications Warehouse

    Medalie, Laura; Martin, Jeffrey D.

    2017-08-14

    Potential contamination bias was estimated for 8 nutrient analytes and 40 pesticides in stream water collected by the U.S. Geological Survey at 147 stream sites from across the United States, and representing a variety of hydrologic conditions and site types, for water years 2002–12. This study updates previous U.S. Geological Survey evaluations of potential contamination bias for nutrients and pesticides. Contamination is potentially introduced to water samples by exposure to airborne gases and particulates, from inadequate cleaning of sampling or analytic equipment, and from inadvertent sources during sample collection, field processing, shipment, and laboratory analysis. Potential contamination bias, based on frequency and magnitude of detections in field blanks, is used to determine whether or under what conditions environmental data might need to be qualified for the interpretation of results in the context of comparisons with background levels, drinking-water standards, aquatic-life criteria or benchmarks, or human-health benchmarks. Environmental samples for which contamination bias as determined in this report applies are those from historical U.S. Geological Survey water-quality networks or programs that were collected during the same time frame and according to the same protocols and that were analyzed in the same laboratory as field blanks described in this report.Results from field blanks for ammonia, nitrite, nitrite plus nitrate, orthophosphate, and total phosphorus were partitioned by analytical method; results from the most commonly used analytical method for total phosphorus were further partitioned by date. Depending on the analytical method, 3.8, 9.2, or 26.9 percent of environmental samples, the last of these percentages pertaining to all results from 2007 through 2012, were potentially affected by ammonia contamination. Nitrite contamination potentially affected up to 2.6 percent of environmental samples collected between 2002 and 2006 and affected about 3.3 percent of samples collected between 2007 and 2012. The percentages of environmental samples collected between 2002 and 2011 that were potentially affected by nitrite plus nitrate contamination were 7.3 for samples analyzed with the low-level method and 0.4 for samples analyzed with the standard-level method. These percentages increased to 14.8 and 2.2 for samples collected in 2012 and analyzed using replacement low- and standard-level methods, respectively. The maximum potentially affected concentrations for nitrite and for nitrite plus nitrate were much less than their respective maximum contamination levels for drinking-water standards. Although contamination from particulate nitrogen can potentially affect up to 21.2 percent and that from total Kjeldahl nitrogen can affect up to 16.5 percent of environmental samples, there are no critical or background levels for these substances.For total nitrogen, orthophosphate, and total phosphorus, contamination in a small percentage of environmental samples might be consequential for comparisons relative to impairment risks or background levels. 
At the low ends of the respective ranges of impairment risk for these nutrients, contamination in up to 5 percent of stream samples could account for at least 23 percent of measured concentrations of total nitrogen, for at least 40 or 90 percent of concentrations of orthophosphate, depending on the analytical method, and for 31 to 76 percent of concentrations of total phosphorus, depending on the time period.Twenty-six pesticides had no detections in field blanks. Atrazine with 12 and metolachlor with 11 had the highest number of detections, mostly occurring in spring or early summer. At a 99-percent level of confidence, contamination was estimated to be no greater than the detection limit in at least 98 percent of all samples for 38 of 40 pesticides. For metolachlor and atrazine, potential contamination was no greater than 0.0053 and 0.0093 micrograms per liter in 98 percent of samples. For 11 of 14 pesticides with at least one detection, the maximum potentially affected concentration of the environmental sample was less than their respective human-health or aquatic-life benchmarks. Small percentages of environmental samples had concentrations high enough that atrazine contamination potentially could account for the entire aquatic-life benchmark for acute effects on nonvascular plants, that dieldrin contamination could account for up to 100 percent of the cancer health-based screening level, or that chlorpyrifos contamination could account for 13 or 12 percent of the concentrations in the aquatic-life benchmarks for chronic effects on invertebrates or the criterion continuous concentration for chronic effects on aquatic life.
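
    The report's confidence statements (e.g. contamination no greater than the detection limit in at least 98 percent of samples, at a 99-percent level of confidence) are of the kind commonly supported by a nonparametric upper tolerance limit computed from order statistics of the field-blank results. The sketch below shows that generic calculation under that assumption; it is not the report's documented procedure, and the blank data are invented.

```python
# Hedged sketch: a nonparametric upper tolerance limit from field-blank data.
# X_(m) bounds `coverage` of the population with the requested confidence when
# binom.cdf(m - 1, n, coverage) >= confidence.
import numpy as np
from scipy.stats import binom

def upper_tolerance_limit(blanks, coverage=0.98, confidence=0.99):
    """Smallest order statistic bounding `coverage` of the population
    with the requested confidence, or None if there are too few blanks."""
    x = np.sort(np.asarray(blanks))
    n = len(x)
    for m in range(1, n + 1):                 # m-th order statistic (1-based)
        if binom.cdf(m - 1, n, coverage) >= confidence:
            return x[m - 1]
    return None                                # need more blanks

blanks = np.random.default_rng(1).lognormal(mean=-6, sigma=1, size=300)  # ug/L, fake
print(upper_tolerance_limit(blanks))
```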

  20. Paediatric International Nursing Study: using person-centred key performance indicators to benchmark children's services.

    PubMed

    McCance, Tanya; Wilson, Val; Kornman, Kelly

    2016-07-01

    The aim of the Paediatric International Nursing Study was to explore the utility of key performance indicators in developing person-centred practice across a range of services provided to sick children. The objective addressed in this paper was evaluating the use of these indicators to benchmark services internationally. This study builds on primary research, which produced indicators that were considered novel both in terms of their positive orientation and use in generating data that privileges the patient voice. This study extends this research through wider testing on an international platform within paediatrics. The overall methodological approach was a realistic evaluation used to evaluate the implementation of the key performance indicators, which combined an integrated development and evaluation methodology. The study involved children's wards/hospitals in Australia (six sites across three states) and Europe (seven sites across four countries). Qualitative and quantitative methods were used during the implementation process, however, this paper reports the quantitative data only, which used survey, observations and documentary review. The findings demonstrate the quality of care being delivered to children and their families across different international sites. The benchmarking does, however, highlight some differences between paediatric and general hospitals, and between the different key performance indicators across all the sites. The findings support the use of the key performance indicators as a novel method to benchmark services internationally. Whilst the data collected across 20 paediatric sites suggest services are more similar than different, benchmarking illuminates variations that encourage a critical dialogue about what works and why. The transferability of the key performance indicators and measurement framework across different settings has significant implications for practice. The findings offer an approach to benchmarking and celebrating the successes within practice, while learning from partners across the globe in further developing person-centred cultures. © 2016 John Wiley & Sons Ltd.

  1. Establishing objective benchmarks in robotic virtual reality simulation at the level of a competent surgeon using the RobotiX Mentor simulator.

    PubMed

    Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran

    2018-05-01

    To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator, suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with varying experience, from novice to expert, completed the exercises. Competency was established as the 25th centile of the mean advanced-intermediate score. Three basic skill exercises and two advanced skill exercises were used. The study was conducted at King's College London. 84 novices, 26 beginner intermediates, 9 advanced intermediates and 4 experts were included in this retrospective observational study. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons are able to achieve the benchmarks across all exercises in the majority of metrics. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks at the standard of a competent robotic surgeon that are challenging yet attainable. These benchmarks can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured and progressive manner through five exercises, providing clearly defined targets and ensuring that a universal training standard is achieved across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
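
    Deriving a benchmark as the 25th centile of a reference group's scores is a one-line calculation per metric. The sketch below illustrates it with invented scores for a single simulator metric.

```python
# Minimal sketch (invented data): deriving an objective benchmark for one
# simulator metric as the 25th centile of the advanced-intermediate group's
# scores, following the approach described in the abstract. A trainee reaches
# the standard by meeting or exceeding the benchmark (higher scores are better).
import numpy as np

advanced_intermediate_scores = np.array([71, 78, 82, 66, 74, 80, 69, 77, 85])  # fake
benchmark = np.percentile(advanced_intermediate_scores, 25)

trainee_score = 76
print(f"benchmark = {benchmark:.1f}, trainee passes: {trainee_score >= benchmark}")
```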

  2. Studies of the flow and turbulence fields in a turbulent pulsed jet flame using LES/PDF

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Masri, Assaad R.; Wang, Haifeng

    2017-09-01

    A turbulent piloted jet flame subject to a rapid velocity pulse in its fuel jet inflow is proposed as a new benchmark case for the study of turbulent combustion models. In this work, we perform modelling studies of this turbulent pulsed jet flame and focus on the predictions of its flow and turbulence fields. An advanced modelling strategy combining the large eddy simulation (LES) and the probability density function (PDF) methods is employed to model the turbulent pulsed jet flame. Characteristics of the velocity measurements are analysed to produce a time-dependent inflow condition that can be fed into the simulations. The effect of the uncertainty in the inflow turbulence intensity is investigated and is found to be very small. A method of specifying the inflow turbulence boundary condition for the simulations of the pulsed jet flame is assessed. The strategies for validating LES of statistically transient flames are discussed, and a new framework is developed consisting of different averaging strategies and a bootstrap method for constructing confidence intervals. Parametric studies are performed to examine the sensitivity of the predictions of the flow and turbulence fields to model and numerical parameters. A direct comparison of the predicted and measured time series of the axial velocity demonstrates a satisfactory prediction of the flow and turbulence fields of the pulsed jet flame by the employed modelling methods.
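
    The validation framework mentioned above combines averaging strategies with a bootstrap for confidence intervals. The sketch below shows a generic percentile-bootstrap confidence interval for a phase-averaged quantity computed from repeated realisations; it is not the authors' implementation, and the sample values are invented.

```python
# Generic bootstrap sketch (not the authors' code): a percentile confidence
# interval for a phase-averaged velocity computed from repeated realisations
# of a statistically transient flow. `samples` holds one value per realisation
# at a fixed phase/time; the data here are invented.
import numpy as np

def bootstrap_ci(samples, n_boot=10_000, alpha=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    samples = np.asarray(samples)
    means = np.array([rng.choice(samples, size=samples.size, replace=True).mean()
                      for _ in range(n_boot)])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

samples = np.random.default_rng(1).normal(loc=24.0, scale=1.5, size=40)  # m/s, fake
lo, hi = bootstrap_ci(samples)
print(f"95% CI for the phase-averaged velocity: [{lo:.2f}, {hi:.2f}] m/s")
```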

  3. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification-all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  4. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  5. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  6. Convection Effects During Bulk Transparent Alloy Solidification in DECLIC-DSI and Phase-Field Simulations in Diffusive Conditions

    NASA Astrophysics Data System (ADS)

    Mota, F. L.; Song, Y.; Pereda, J.; Billia, B.; Tourret, D.; Debierre, J.-M.; Trivedi, R.; Karma, A.; Bergeon, N.

    2017-08-01

    To study the dynamical formation and evolution of cellular and dendritic arrays under diffusive growth conditions, three-dimensional (3D) directional solidification experiments were conducted in microgravity on a model transparent alloy onboard the International Space Station using the Directional Solidification Insert in the DEvice for the study of Critical LIquids and Crystallization. Selected experiments were repeated on Earth under gravity-driven fluid flow to evidence convection effects. Both radial and axial macrosegregation resulting from convection are observed in ground experiments, and primary spacings measured in Earth and microgravity experiments are noticeably different. The microgravity experiments provide unique benchmark data for numerical simulations of spatially extended pattern formation under diffusive growth conditions. The results of 3D phase-field simulations highlight the importance of accurately modeling thermal conditions that strongly influence the front recoil of the interface and the selection of the primary spacing. The modeling predictions are in good quantitative agreement with the microgravity experiments.

  7. 9.4T Human MRI: Preliminary Results

    PubMed Central

    Vaughan, Thomas; DelaBarre, Lance; Snyder, Carl; Tian, Jinfeng; Akgun, Can; Shrivastava, Devashish; Liu, Wanzahn; Olson, Chris; Adriany, Gregor; Strupp, John; Andersen, Peter; Gopinath, Anand; van de Moortele, Pierre-Francois; Garwood, Michael; Ugurbil, Kamil

    2014-01-01

    This work reports the preliminary results of the first human images at the new high-field benchmark of 9.4T. A 65-cm-diameter bore magnet was used together with an asymmetric 40-cm-diameter head gradient and shim set. A multichannel transmission line (transverse electromagnetic (TEM)) head coil was driven by a programmable parallel transceiver to control the relative phase and magnitude of each channel independently. These new RF field control methods facilitated compensation for RF artifacts attributed to destructive interference patterns, in order to achieve homogeneous 9.4T head images or localize anatomic targets. Prior to FDA investigational device exemptions (IDEs) and internal review board (IRB)-approved human studies, preliminary RF safety studies were performed on porcine models. These data are reported together with exit interview results from the first 44 human volunteers. Although several points for improvement are discussed, the preliminary results demonstrate the feasibility of safe and successful human imaging at 9.4T. PMID:17075852

  8. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  9. Benchmarking the GW Approximation and Bethe–Salpeter Equation for Groups IB and IIB Atoms and Monoxides

    DOE PAGES

    Hung, Linda; Bruneval, Fabien; Baishya, Kopinjol; ...

    2017-04-07

    Energies from the GW approximation and the Bethe–Salpeter equation (BSE) are benchmarked against the excitation energies of transition-metal (Cu, Zn, Ag, and Cd) single atoms and monoxide anions. We demonstrate that best estimates of GW quasiparticle energies at the complete basis set limit should be obtained via extrapolation or closure relations, while numerically converged GW-BSE eigenvalues can be obtained on a finite basis set. Calculations using real-space wave functions and pseudopotentials are shown to give best-estimate GW energies that agree (up to the extrapolation error) with calculations using all-electron Gaussian basis sets. We benchmark the effects of a vertex approximation (ΓLDA) and the mean-field starting point in GW and the BSE, performing computations using a real-space, transition-space basis and scalar-relativistic pseudopotentials. Here, while no variant of GW improves on perturbative G0W0 at predicting ionization energies, G0W0ΓLDA-BSE computations give excellent agreement with experimental absorption spectra as long as off-diagonal self-energy terms are included. We also present G0W0 quasiparticle energies for the CuO−, ZnO−, AgO−, and CdO− anions, in comparison to available anion photoelectron spectra.
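
    The abstract recommends reaching the complete-basis-set (CBS) limit of GW quasiparticle energies via extrapolation or closure relations. One common recipe, assumed here purely for illustration, extrapolates the quasiparticle energy linearly in the inverse of the basis-set size; the numbers below are invented.

```python
# Hedged illustration (an assumption, not necessarily the authors' exact
# scheme): extrapolate GW quasiparticle energies linearly in 1/N, where N is
# the basis-set size, and take the intercept as the CBS-limit estimate.
import numpy as np

N = np.array([200, 400, 800, 1600])                  # basis-set sizes
qp_energy = np.array([-7.92, -8.10, -8.19, -8.24])   # eV, invented GW energies

slope, intercept = np.polyfit(1.0 / N, qp_energy, deg=1)
print(f"extrapolated CBS-limit quasiparticle energy = {intercept:.2f} eV")
```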

  10. Benchmarking the GW Approximation and Bethe–Salpeter Equation for Groups IB and IIB Atoms and Monoxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hung, Linda; Bruneval, Fabien; Baishya, Kopinjol

    Energies from the GW approximation and the Bethe–Salpeter equation (BSE) are benchmarked against the excitation energies of transition-metal (Cu, Zn, Ag, and Cd) single atoms and monoxide anions. We demonstrate that best estimates of GW quasiparticle energies at the complete basis set limit should be obtained via extrapolation or closure relations, while numerically converged GW-BSE eigenvalues can be obtained on a finite basis set. Calculations using real-space wave functions and pseudopotentials are shown to give best-estimate GW energies that agree (up to the extrapolation error) with calculations using all-electron Gaussian basis sets. We benchmark the effects of a vertex approximation (ΓLDA) and the mean-field starting point in GW and the BSE, performing computations using a real-space, transition-space basis and scalar-relativistic pseudopotentials. Here, while no variant of GW improves on perturbative G0W0 at predicting ionization energies, G0W0ΓLDA-BSE computations give excellent agreement with experimental absorption spectra as long as off-diagonal self-energy terms are included. We also present G0W0 quasiparticle energies for the CuO−, ZnO−, AgO−, and CdO− anions, in comparison to available anion photoelectron spectra.

  11. Shuttle Main Propulsion System LH2 Feed Line and Inducer Simulations

    NASA Technical Reports Server (NTRS)

    Dorney, Daniel J.; Rothermel, Jeffry

    2002-01-01

    This viewgraph presentation includes simulations of the unsteady flow field in the LH2 feed line, flow line, flow liner, backing cavity and inducer of Shuttle engine #1. It also evaluates aerodynamic forcing functions which may contribute to the formation of the cracks observed on the flow liner slots. The presentation lists the numerical methods used, and profiles a benchmark test case.

  12. Development of STEM Readiness Benchmarks to Assist Educational and Career Decision Making. ACT Research Report Series, 2015 (3)

    ERIC Educational Resources Information Center

    Mattern, Krista; Radunzel, Justine; Westrick, Paul

    2015-01-01

    Although about 40% of high school graduates who take the ACT® test express interest in pursuing a career in a science, technology, engineering, and mathematics (STEM) field, the percentage of first-year students in college who declare a STEM major is substantially lower. The pool of prospective STEM workers shrinks further as the majority of STEM…

  13. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...

  14. Damage characterization in engineering materials using a combination of optical, acoustic, and thermal techniques

    NASA Astrophysics Data System (ADS)

    Tragazikis, I. K.; Exarchos, D. A.; Dalla, P. T.; Matikas, T. E.

    2016-04-01

    This paper deals with the use of complementary nondestructive methods for the evaluation of damage in engineering materials. The application of digital image correlation (DIC) to engineering materials is a useful tool for accurate, noncontact strain measurement. DIC is a 2D, full-field optical analysis technique based on gray-value digital images that measures deformation, vibration and strain in a vast variety of materials. In addition, this technique can be applied from very small to large testing areas and can be used for various tests such as tensile, torsion and bending under static or dynamic loading. In this study, DIC results are benchmarked against other nondestructive techniques such as acoustic emission for damage localization and fracture mode evaluation, and IR thermography for stress field visualization and assessment. The combined use of these three nondestructive methods enables the characterization and classification of damage in materials and structures.

  15. Carrier-envelope phase control over pathway interference in strong-field dissociation of H2+.

    PubMed

    Kling, Nora G; Betsch, K J; Zohrabi, M; Zeng, S; Anis, F; Ablikim, U; Jochim, Bethany; Wang, Z; Kübel, M; Kling, M F; Carnes, K D; Esry, B D; Ben-Itzhak, I

    2013-10-18

    The dissociation of an H2+ molecular-ion beam by linearly polarized, carrier-envelope-phase-tagged 5 fs pulses at 4×10^14 W/cm2 with a central wavelength of 730 nm was studied using a coincidence 3D momentum imaging technique. Carrier-envelope-phase-dependent asymmetries in the emission direction of H+ fragments relative to the laser polarization were observed. These asymmetries are caused by interference of odd and even photon number pathways, where net zero-photon and one-photon interference predominantly contributes at H+ + H kinetic energy releases of 0.2-0.45 eV, and net two-photon and one-photon interference contributes at 1.65-1.9 eV. These measurements of the benchmark H2+ molecule offer the distinct advantage that they can be quantitatively compared with ab initio theory to confirm our understanding of strong-field coherent control via the carrier-envelope phase.

  16. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, M; Chetty, I; Zhong, H

    2014-06-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3mm margin plans, and between 0.29% and 6.3% for 5mm-margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.
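
    The abstract does not state which TCP model was used, so the sketch below uses the common Poisson/linear-quadratic form purely to illustrate how a TCP value is obtained from an accumulated voxel dose distribution; all parameter values are invented.

```python
# Hedged sketch: Poisson / linear-quadratic TCP from an accumulated voxel dose
# distribution. TCP = exp(-sum_i N_i * SF_i), with SF_i the surviving fraction
# in voxel i after n fractions. This is an illustrative model choice, not the
# study's documented TCP formulation; parameter values are invented.
import numpy as np

def poisson_lq_tcp(voxel_dose, n_fractions=3, alpha=0.3, beta=0.03, clonogens_per_voxel=1e4):
    d = np.asarray(voxel_dose) / n_fractions               # dose per fraction (Gy)
    sf = np.exp(-n_fractions * (alpha * d + beta * d**2))  # surviving fraction per voxel
    return float(np.exp(-np.sum(clonogens_per_voxel * sf)))

accumulated_dose = np.full(500, 54.0)   # Gy, fake accumulated dose in 500 tumour voxels
print(f"TCP = {poisson_lq_tcp(accumulated_dose):.3f}")
```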

  17. Gamma-ray imaging system for real-time measurements in nuclear waste characterisation

    NASA Astrophysics Data System (ADS)

    Caballero, L.; Albiol Colomer, F.; Corbi Bellot, A.; Domingo-Pardo, C.; Leganés Nieto, J. L.; Agramunt Ros, J.; Contreras, P.; Monserrate, M.; Olleros Rodríguez, P.; Pérez Magán, D. L.

    2018-03-01

    A compact, portable and large field-of-view gamma camera that is able to identify, locate and quantify gamma-ray emitting radioisotopes in real-time has been developed. The device delivers spectroscopic and imaging capabilities that enable its use in a variety of nuclear waste characterisation scenarios, such as radioactivity monitoring in nuclear power plants and more specifically for the decommissioning of nuclear facilities. The technical development of this apparatus and some examples of its application in field measurements are reported in this article. The performance of the presented gamma-camera is also benchmarked against other conventional techniques.

  18. Simultaneous Concentration and Velocity Maps in Particle Suspensions under Shear from Rheo-Ultrasonic Imaging

    NASA Astrophysics Data System (ADS)

    Saint-Michel, Brice; Bodiguel, Hugues; Meeker, Steven; Manneville, Sébastien

    2017-07-01

    We extend a previously developed ultrafast ultrasonic technique [T. Gallot et al., Rev. Sci. Instrum. 84, 045107 (2013), 10.1063/1.4801462] to concentration-field measurements in non-Brownian particle suspensions under shear. The technique provides access to time-resolved concentration maps within the gap of a Taylor-Couette cell simultaneously to local velocity measurements and standard rheological characterization. Benchmark experiments in homogeneous particle suspensions are used to calibrate the system. We then image heterogeneous concentration fields that result from centrifugation effects, from the classical Taylor-Couette instability, and from sedimentation or shear-induced resuspension.

  19. 3D Modeling of Industrial Heritage Building Using COTSs System: Test, Limits and Performances

    NASA Astrophysics Data System (ADS)

    Piras, M.; Di Pietra, V.; Visintini, D.

    2017-08-01

    The role of UAV systems in applied geomatics is continuously increasing in several applications such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. Regarding technologies, for some years there has been very wide use of commercial UAVs, including COTSs (Commercial Off-The-Shelf) systems. Moreover, these UAVs allow oblique images to be acquired easily, giving the possibility to overcome the limitations of the nadir approach related to the field of view and occlusions. In order to test the potential and issues of COTSs systems, the Italian Society of Photogrammetry and Topography (SIFET) has organised the SBM2017, a benchmark in which everyone can participate in a shared experience. This benchmark, called "Photogrammetry with oblique images from UAV: potentialities and challenges", makes it possible to collect considerations from users, highlight the potential of these systems, define the critical aspects and the technological challenges, and compare distinct approaches and software. The case study is the "Fornace Penna" in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) have been acquired from three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of this benchmark is to generate the 3D model of the "Fornace Penna", making an analysis considering different software, imaging geometry and processing strategies. This paper describes the surveying strategies, the methodologies and five different sets of photogrammetric results (sensor calibration, external orientation, dense point cloud and two orthophotos), obtained separately from the single images and from the frames extracted from the video acquired with the DJI system.

  20. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  1. QUASAR--scoring and ranking of sequence-structure alignments.

    PubMed

    Birzele, Fabian; Gewehr, Jan E; Zimmer, Ralf

    2005-12-15

    Sequence-structure alignments are a common means for protein structure prediction in the fields of fold recognition and homology modeling, and there is a broad variety of programs that provide such alignments based on sequence similarity, secondary structure or contact potentials. Nevertheless, finding the best sequence-structure alignment in a pool of alignments remains a difficult problem. QUASAR (quality of sequence-structure alignments ranking) provides a unifying framework for scoring sequence-structure alignments that aids finding well-performing combinations of well-known and custom-made scoring schemes. Those scoring functions can be benchmarked against widely accepted quality scores like MaxSub, TMScore, Touch and APDB, thus enabling users to test their own alignment scores against 'standard-of-truth' structure-based scores. Furthermore, individual score combinations can be optimized with respect to benchmark sets based on known structural relationships using QUASAR's in-built optimization routines.
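
    QUASAR's score-combination optimization is not specified in detail in this summary; the sketch below shows one generic way such an optimization can be set up, tuning the weights of a linear combination of alignment scores so that the combined score correlates with a structure-based reference score on a benchmark set. The data are random stand-ins, not real alignment scores.

```python
# Illustrative sketch (not QUASAR itself): optimise the weights of a linear
# combination of alignment scores to maximise correlation with a reference
# structure-based quality score over a benchmark set.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 3))                    # 3 scoring schemes, 200 alignments
reference = scores @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.3, size=200)

def neg_correlation(weights):
    combined = scores @ weights
    return -np.corrcoef(combined, reference)[0, 1]

result = minimize(neg_correlation, x0=np.ones(3) / 3, method="Nelder-Mead")
weights = result.x / result.x.sum()                   # normalise for readability
print("optimised weight proportions:", np.round(weights, 2))
```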

  2. An Examination of Five Benchmarks of Student Engagement for Commuter Students Enrolled at an Urban Public University

    ERIC Educational Resources Information Center

    Galladian, Carol

    2013-01-01

    The purpose of this quantitative ex post facto study was to provide a description of the student engagement of commuter students attending a large urban public university located in a mid-Atlantic state using the five National Survey of Student Engagement (NSSE) benchmarks of student engagement. In addition, the study examined the relationship…

  3. Social Studies: Grades 4, 8, & 11. Content Specifications for Statewide Assessment by Standard.

    ERIC Educational Resources Information Center

    Missouri State Dept. of Elementary and Secondary Education, Jefferson City.

    This state of Missouri guide to content specifications for social studies assessment is designed to give teachers direction for assessment at the benchmark levels of grades 4, 8, and 11 for each standard that is appropriate for a statewide assessment. The guide includes specifications of what students are expected to know at the benchmark levels…

  4. Hydrogen bonding and pi-stacking: how reliable are force fields? A critical evaluation of force field descriptions of nonbonded interactions.

    PubMed

    Paton, Robert S; Goodman, Jonathan M

    2009-04-01

    We have evaluated the performance of a set of widely used force fields by calculating the geometries and stabilization energies for a large collection of intermolecular complexes. These complexes are representative of a range of chemical and biological systems for which hydrogen bonding, electrostatic, and van der Waals interactions play important roles. Benchmark energies are taken from the high-level ab initio values in the JSCH-2005 and S22 data sets. All of the force fields underestimate stabilization resulting from hydrogen bonding, but the energetics of electrostatic and van der Waals interactions are described more accurately. OPLSAA gave a mean unsigned error of 2 kcal mol^-1 for all 165 complexes studied, and outperforms DFT calculations employing very large basis sets for the S22 complexes. The magnitude of hydrogen bonding interactions is severely underestimated by all of the force fields tested, which contributes significantly to the overall mean error; if complexes which are predominantly bound by hydrogen bonding interactions are discounted, the mean unsigned error of OPLSAA is reduced to 1 kcal mol^-1. For added clarity, web-based interactive displays of the results have been developed which allow comparisons of force field and ab initio geometries to be performed and the structures viewed and rotated in three dimensions.
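
    The headline statistic quoted above, the mean unsigned error against benchmark energies, is computed as sketched below with invented interaction energies (not the S22/JSCH-2005 values).

```python
# Minimal sketch: mean unsigned error (MUE) of force-field interaction
# energies against reference ab initio values. The energies below are
# invented placeholders, not S22/JSCH-2005 data.
import numpy as np

reference_kcal = np.array([-3.17, -5.02, -18.61, -15.96, -20.65])    # fake ab initio
force_field_kcal = np.array([-2.40, -4.10, -15.90, -13.80, -18.20])  # fake force field

mue = np.mean(np.abs(force_field_kcal - reference_kcal))
print(f"mean unsigned error = {mue:.2f} kcal/mol")
```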

  5. WE-D-17A-02: Evaluation of a Two-Dimensional Optical Dosimeter On Measuring Lateral Profiles of Proton Pencil Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsi, W; Lee, T; Schultz, T

    Purpose: To evaluate the accuracy of a two-dimensional optical dosimeter in measuring lateral profiles for spots and scanned fields of proton pencil beams. Methods: A digital camera with a color image sensor was utilized to image proton-induced scintillations on Gadolinium-oxysulfide phosphor reflected by a stainless-steel mirror. Intensities of three colors were summed for each pixel with proper spatial-resolution calibration. To benchmark this dosimeter, the field size and penumbra for 100 mm square fields of single-energy pencil-scan protons were measured and compared between this optical dosimeter and an ionization-chamber profiler. Sigma widths of proton spots in air were measured and compared between this dosimeter and a commercial optical dosimeter. Clinical proton beams with ranges between 80 mm and 300 mm at the CDH proton center were used for this benchmark. Results: Pixel resolutions vary by 1.5% between two perpendicular axes. For a pencil-scan field with 302 mm range, measured field sizes and penumbras between the two detection systems agreed to 0.5 mm and 0.3 mm, respectively. Sigma widths agree to 0.3 mm between the two optical dosimeters for a proton spot with 158 mm range, with widths of 5.76 mm and 5.92 mm for the X and Y axes, respectively. Similar agreements were obtained for other beam ranges. This dosimeter was successfully utilized in mapping the shapes and sizes of proton spots at the technical acceptance of the McLaren proton therapy system. Snow-flake spots seen on images indicated that the image sensor had pixels damaged by radiation. Minor variations in intensity between different colors were observed. Conclusions: The accuracy of our dosimeter was in good agreement with other established devices in measuring lateral profiles of pencil-scan fields and proton spots. A precise docking mechanism for the camera was designed to keep the optical path aligned while replacing a damaged image sensor. Causes of the minor variations between the emitted colors will be investigated.
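
    The sigma width of a spot can be extracted from a 1-D lateral profile by fitting a Gaussian, as sketched below. The profile here is synthetic and the routine is a generic illustration, not the analysis code used in the study.

```python
# Hedged sketch: estimate the sigma width of a pencil-beam spot from a 1-D
# lateral profile by fitting a Gaussian. The profile below is synthetic noise
# around a known sigma; real input would be calibrated pixel intensities.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def fit_sigma(position_mm, intensity):
    p0 = [intensity.max() - intensity.min(),         # initial amplitude
          position_mm[np.argmax(intensity)],         # initial center
          (position_mm[-1] - position_mm[0]) / 8.0,  # initial sigma guess
          intensity.min()]                           # initial offset
    popt, _ = curve_fit(gaussian, position_mm, intensity, p0=p0)
    return abs(popt[2])  # sigma in mm

x = np.linspace(-30.0, 30.0, 121)
rng = np.random.default_rng(1)
profile = gaussian(x, 1.0, 0.0, 5.8, 0.02) + rng.normal(0.0, 0.01, x.size)
print("fitted sigma (mm):", round(fit_sigma(x, profile), 2))
```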

  6. How accurately do force fields represent protein side chain ensembles?

    PubMed

    Petrović, Dušan; Wang, Xue; Strodel, Birgit

    2018-05-23

    Although the protein backbone is the most fundamental part of the structure, the fine-tuning of side-chain conformations is important for protein function, for example, in protein-protein and protein-ligand interactions, and also in enzyme catalysis. While several benchmarks testing the performance of protein force fields for side chain properties have already been published, they often considered only a few force fields and were not tested against the same experimental observables; hence, they are not directly comparable. In this work, we explore the ability of twelve force fields, which are different flavors of AMBER, CHARMM, OPLS, or GROMOS, to reproduce average rotamer angles and rotamer populations obtained from extensive NMR studies of the ³J and residual dipolar coupling constants for two small proteins: ubiquitin and GB3. Based on a total of 196 μs sampling time, our results reveal that all force fields identify the correct side chain angles, while the AMBER and CHARMM force fields clearly outperform the OPLS and GROMOS force fields in estimating rotamer populations. The three best force fields for representing the protein side chain dynamics are AMBER 14SB, AMBER 99SB*-ILDN, and CHARMM36. Furthermore, we observe that the side chain ensembles of buried amino acid residues are generally more accurately represented than those of the surface exposed residues. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
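
    Rotamer populations of the kind compared above can be estimated from a trajectory by binning side-chain dihedral angles into their staggered wells. The sketch below assumes the χ1 angles have already been extracted (e.g., from an MD trajectory) and uses one common binning convention; it is not the authors' analysis pipeline.

```python
# Hedged sketch (assumes chi1 angles, in degrees, already extracted from a
# trajectory for one residue): bin the angles into the three staggered wells
# near +60, 180 and -60 degrees and report the populations. Well naming
# conventions vary, so the bins are labelled by angle rather than g+/g-/t.
import numpy as np

def rotamer_populations(chi1_degrees):
    chi1 = np.mod(np.asarray(chi1_degrees, dtype=float), 360.0)
    return {"chi1 ~ +60 deg": float(np.mean((chi1 >= 0.0) & (chi1 < 120.0))),
            "chi1 ~ 180 deg": float(np.mean((chi1 >= 120.0) & (chi1 < 240.0))),
            "chi1 ~ -60 deg": float(np.mean((chi1 >= 240.0) & (chi1 < 360.0)))}

# Synthetic trajectory: mostly the -60 degree well, with some trans frames.
rng = np.random.default_rng(2)
chi1 = np.concatenate([rng.normal(300.0, 10.0, 800), rng.normal(180.0, 10.0, 200)])
print(rotamer_populations(chi1))
```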

  7. Information Literacy and Office Tool Competencies: A Benchmark Study

    ERIC Educational Resources Information Center

    Heinrichs, John H.; Lim, Jeen-Su

    2010-01-01

    Present information science literature recognizes the importance of information technology to achieve information literacy. The authors report the results of a benchmarking student survey regarding perceived functional skills and competencies in word-processing and presentation tools. They used analysis of variance and regression analysis to…

  8. Nonlinear modeling of forced magnetic reconnection in slab geometry with NIMROD

    NASA Astrophysics Data System (ADS)

    Beidler, M. T.; Callen, J. D.; Hegna, C. C.; Sovinec, C. R.

    2017-05-01

    The nonlinear, extended-magnetohydrodynamic (MHD) code NIMROD is benchmarked with the theory of time-dependent forced magnetic reconnection induced by small resonant fields in slab geometry in the context of visco-resistive MHD modeling. Linear computations agree with time-asymptotic, linear theory of flow screening of externally applied fields. The inclusion of flow in nonlinear computations can result in mode penetration due to the balance between electromagnetic and viscous forces in the time-asymptotic state, which produces bifurcations from a high-slip state to a low-slip state as the external field is slowly increased. We reproduce mode penetration and unlocking transitions by employing time-dependent externally applied magnetic fields. Mode penetration and unlocking exhibit hysteresis and occur at different magnitudes of applied field. We also establish how nonlinearly determined flow screening of the resonant field is affected by the square of the magnitude of the externally applied field. These results emphasize that the inclusion of nonlinear physics is essential for accurate prediction of the reconnected field in a flowing plasma.

  9. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1994 Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Mabrey, J.B.

    1994-07-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
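
    The screening logic described above can be expressed compactly, as in the hedged sketch below: exceeding the NAWQC always flags a contaminant of concern, while other chemicals are flagged according to how many alternative benchmarks they exceed. Benchmark values and function names here are hypothetical.

```python
# Hedged sketch of the screening logic (hypothetical benchmark values): a
# chemical exceeding the NAWQC is always a contaminant of concern; otherwise
# it is flagged according to how many alternative benchmarks it exceeds.
def screen_contaminant(concentration, nawqc=None, alternative_benchmarks=None):
    """concentration and benchmark values in consistent units (e.g. ug/L);
    nawqc is None when no criterion exists for the chemical."""
    alternative_benchmarks = alternative_benchmarks or {}
    if nawqc is not None and concentration > nawqc:
        return {"contaminant_of_concern": True, "reason": "exceeds NAWQC (ARAR)"}
    exceeded = [name for name, value in alternative_benchmarks.items()
                if concentration > value]
    return {"contaminant_of_concern": bool(exceeded),
            "benchmarks_exceeded": exceeded}

# Example with made-up numbers:
print(screen_contaminant(12.0, nawqc=None,
                         alternative_benchmarks={"SCV": 8.0,
                                                 "lowest_chronic_fish": 15.0}))
```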

  10. Solutions in radiology services management: a literature review.

    PubMed

    Pereira, Aline Garcia; Vergara, Lizandra Garcia Lupi; Merino, Eugenio Andrés Díaz; Wagner, Adriano

    2015-01-01

    The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. A basic, qualitative, exploratory literature review was performed in the Scopus and SciELO databases, using the Mendeley and Adobe Illustrator CC software. In the databases, 565 papers were identified, 120 of which were freely available as PDFs. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides such services with useful solutions such as benchmarking, CRM, the Lean approach, service blueprinting, and continuing education, among others. Literature review is an important tool to identify problems and their respective solutions. However, considering the small number of studies addressing the management of radiology services, this remains a promising field for further research.

  11. The challenges of numerically simulating analogue brittle thrust wedges

    NASA Astrophysics Data System (ADS)

    Buiter, Susanne; Ellis, Susan

    2017-04-01

    Fold-and-thrust belts and accretionary wedges form when sedimentary and crustal rocks are compressed into thrusts and folds in the foreland of an orogen or at a subduction trench. For over a century, analogue models have been used to investigate the deformation characteristics of such brittle wedges. These models predict wedge shapes that agree with analytical critical taper theory and internal deformation structures that closely resemble natural observations. In a series of comparison experiments for thrust wedges, called the GeoMod2004 (1,2) and GeoMod2008 (3,4) experiments, it was shown that different numerical solution methods successfully reproduce sandbox thrust wedges. However, the GeoMod2008 benchmark also pointed to the difficulties of representing frictional boundary conditions and sharp velocity discontinuities with continuum numerical methods, in addition to the well-known challenges of numerical plasticity. Here we show how details in the numerical implementation of boundary conditions can substantially impact numerical wedge deformation. We consider experiment 1 of the GeoMod2008 brittle thrust wedge benchmarks. This experiment examines a triangular thrust wedge in the stable field of critical taper theory that should remain stable, that is, without internal deformation, when sliding over a basal frictional surface. The thrust wedge is translated by lateral displacement of a rigid mobile wall. The corner between the mobile wall and the subsurface is a velocity discontinuity. Using our finite-element code SULEC, we show how different approaches to implementing boundary friction (boundary layer or contact elements) and the velocity discontinuity (various smoothing schemes) can determine whether the wedge indeed translates in a stable manner or undergoes internal deformation (which constitutes a failure of the benchmark). We recommend that numerical studies of sandbox setups not only report the details of their implementation of boundary conditions, but also document the modelling attempts that failed. References 1. Buiter and the GeoMod2004 Team, 2006. The numerical sandbox: comparison of model results for a shortening and an extension experiment. Geol. Soc. Lond. Spec. Publ. 253, 29-64 2. Schreurs and the GeoMod2004 Team, 2006. Analogue benchmarks of shortening and extension experiments. Geol. Soc. Lond. Spec. Publ. 253, 1-27 3. Buiter, Schreurs and the GeoMod2008 Team, 2016. Benchmarking numerical models of brittle thrust wedges, J. Struct. Geol. 92, 140-177 4. Schreurs, Buiter and the GeoMod2008 Team, 2016. Benchmarking analogue models of brittle thrust wedges, J. Struct. Geol. 92, 116-13

  12. Building America Industrialized Housing Partnership (BAIHP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIlvaine, Janet; Chandra, Subrato; Barkaszi, Stephen

    This final report summarizes the work conducted by the Building America Industrialized Housing Partnership (www.baihp.org) for the period 9/1/99-6/30/06. BAIHP is led by the Florida Solar Energy Center of the University of Central Florida and focuses on factory-built housing. In partnership with over 50 factory and site builders, work was performed in two main areas--research and technical assistance. In the research area, through site visits to over 75 problem homes, we discovered the prime causes of moisture problems in some manufactured homes, and our industry partners adopted our solutions to nearly eliminate this vexing problem. Through testing conducted in over two dozen housing factories of six factory builders we documented the value of leak-free duct design and construction, which was embraced by our industry partners and implemented in all the thousands of homes they built. Through laboratory test facilities and measurements in real homes we documented the merits of 'cool roof' technologies and developed an innovative night sky radiative cooling concept currently being tested. We patented an energy-efficient condenser fan design, documented energy-efficient home retrofit strategies after hurricane damage, developed improved specifications for federal procurement for future temporary housing, compared the Building America benchmark to the HERS Index and IECC 2006, developed a toolkit for improving the accuracy and speed of benchmark calculations, monitored the field performance of over a dozen prototype homes and initiated research on the effectiveness of occupancy feedback in reducing household energy use. In the technical assistance area we provided systems engineering analysis and conducted training, testing and commissioning that have resulted in over 128,000 factory-built and over 5,000 site-built homes which are saving their owners over $17,000,000 annually in energy bills. These include homes built by Palm Harbor Homes, Fleetwood, Southern Energy Homes, Cavalier and the manufacturers participating in the Northwest Energy Efficient Manufactured Home program. We worked with over two dozen Habitat for Humanity affiliates and helped them build over 700 Energy Star or near-Energy Star homes. We have provided technical assistance to several show homes constructed for the International Builders' Show in Orlando, FL and assisted with other prototype homes in cold climates that save 40% over the benchmark reference. In the Gainesville, FL, area we have several builders that are consistently producing 15 to 30 homes per month in several subdivisions that meet the 30% benchmark savings goal. We have contributed to the 2006 DOE Joule goals by providing two community case studies meeting the 30% benchmark goal in marine climates.

  13. Benchmark cool companions: ages and abundances for the PZ Telescopii system

    NASA Astrophysics Data System (ADS)

    Jenkins, J. S.; Pavlenko, Y. V.; Ivanyuk, O.; Gallardo, J.; Jones, M. I.; Day-Jones, A. C.; Jones, H. R. A.; Ruiz, M. T.; Pinfield, D. J.; Yakovina, L.

    2012-03-01

    We present new ages and abundance measurements for the pre-main-sequence star PZ Telescopii (more commonly known as PZ Tel). PZ Tel was recently found to host a young and low-mass companion. Such companions, whether they are brown dwarfs or planetary systems, can attain benchmark status by detailed study of the properties of the primary, and then evolutionary and bulk characteristics can be inferred for the companion. Using Fibre-fed Extended Range Optical Spectrograph spectra, we have measured atomic abundances (e.g. Fe and Li) and chromospheric activity for PZ Tel and used these to obtain the metallicity and age estimates for the companion. We have also determined the age independently using the latest evolutionary models. We find PZ Tel A to be a rapidly rotating (v sin i = 73 ± 5 km s⁻¹), approximately solar metallicity star [log N(Fe) = -4.37 ± 0.06 dex or [Fe/H] = 0.05 ± 0.20 dex]. We measure a non-local thermodynamic equilibrium lithium abundance of log N(Li) = 3.1 ± 0.1 dex, which from depletion models gives rise to an age of ~7 Myr for the system. Our measured chromospheric activity index (-4.12) returns an age of 26 ± 2 Myr, as does fitting pre-main-sequence evolutionary tracks (τevol = 22 ± 3 Myr); both of these are in disagreement with the lithium age. We speculate on reasons for this difference and introduce new models for lithium depletion that incorporate both rotation and magnetic field effects. We also synthesize solar, metal-poor and metal-rich substellar evolutionary models to better determine the bulk properties of PZ Tel B, showing that PZ Tel B is probably more massive than previous estimates, meaning the companion is not a giant exoplanet, even though a planetary-like formation origin can go some way to describing the distribution of benchmark binaries currently known. We show how PZ Tel B compares to other currently known age and metallicity benchmark systems and try to empirically test the effects of dust opacity as a function of metallicity on the near-infrared colours of brown dwarfs. Current models suggest that, in the near-infrared, observations are more sensitive to low-mass companions orbiting more metal-rich stars. We also look for trends between infrared photometry and metallicity amongst a growing population of substellar benchmark objects, and identify the need for more data in mass-age-metallicity parameter space.

  14. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  15. Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.

    ERIC Educational Resources Information Center

    Inger, Morton

    Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…

  16. Benchmarks: The Development of a New Approach to Student Evaluation.

    ERIC Educational Resources Information Center

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  17. Paradoxical ventilator associated pneumonia incidences among selective digestive decontamination studies versus other studies of mechanically ventilated patients: benchmarking the evidence base

    PubMed Central

    2011-01-01

    Introduction Selective digestive decontamination (SDD) appears to have a more compelling evidence base than non-antimicrobial methods for the prevention of ventilator associated pneumonia (VAP). However, the striking variability in ventilator associated pneumonia-incidence proportion (VAP-IP) among the SDD studies remains unexplained and a postulated contextual effect remains untested for. Methods Nine reviews were used to source 45 observational (benchmark) groups and 137 component (control and intervention) groups of studies of SDD and studies of three non-antimicrobial methods of VAP prevention. The logit VAP-IP data were summarized by meta-analysis using random effects methods and the associated heterogeneity (tau2) was measured. As group level predictors of logit VAP-IP, the mode of VAP diagnosis, proportion of trauma admissions, the proportion receiving prolonged ventilation and the intervention method under study were examined in meta-regression models containing the benchmark groups together with either the control (models 1 to 3) or intervention (models 4 to 6) groups of the prevention studies. Results The VAP-IP benchmark derived here is 22.1% (95% confidence interval; 95% CI; 19.2 to 25.5; tau2 0.34) whereas the mean VAP-IP of control groups from studies of SDD and of non-antimicrobial methods, is 35.7 (29.7 to 41.8; tau2 0.63) versus 20.4 (17.2 to 24.0; tau2 0.41), respectively (P < 0.001). The disparity between the benchmark groups and the control groups of the SDD studies, which was most apparent for the highest quality studies, could not be explained in the meta-regression models after adjusting for various group level factors. The mean VAP-IP (95% CI) of intervention groups is 16.0 (12.6 to 20.3; tau2 0.59) and 17.1 (14.2 to 20.3; tau2 0.35) for SDD studies versus studies of non-antimicrobial methods, respectively. Conclusions The VAP-IP among the intervention groups within the SDD evidence base is less variable and more similar to the benchmark than among the control groups. These paradoxical observations cannot readily be explained. The interpretation of the SDD evidence base cannot proceed without further consideration of this contextual effect. PMID:21214897
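
    The pooling described above, i.e., a random-effects summary of logit-transformed incidence proportions, can be sketched as follows using the DerSimonian-Laird estimator. The event counts are toy numbers, not the review's data, and the code is a generic illustration rather than the authors' meta-regression models.

```python
# Hedged sketch with toy counts (not the review's data): random-effects pooling
# of logit-transformed VAP incidence proportions using the DerSimonian-Laird
# estimator of between-group heterogeneity (tau^2).
import numpy as np

def pooled_logit_proportion(events, totals):
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p = events / totals
    y = np.log(p / (1.0 - p))                      # logit VAP-IP per group
    v = 1.0 / events + 1.0 / (totals - events)     # approximate variance of the logit
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1.0)) / c)      # DerSimonian-Laird tau^2
    w_star = 1.0 / (v + tau2)
    y_random = np.sum(w_star * y) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-y_random)), tau2   # back-transformed proportion

events = [22, 35, 18, 40]    # VAP cases per study group (toy data)
totals = [100, 95, 110, 120]
pooled, tau2 = pooled_logit_proportion(events, totals)
print(f"pooled VAP-IP: {pooled:.3f}, tau^2: {tau2:.2f}")
```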

  18. Benchmarking carbon-nitrogen interactions in Earth System Models to observations: An inter-comparison of nitrogen limitation in global land surface models with carbon and nitrogen cycles (CLM-CN and O-CN)

    NASA Astrophysics Data System (ADS)

    Thomas, R. Q.; Zaehle, S.; Templer, P. H.; Goodale, C. L.

    2011-12-01

    Predictions of climate change depend on accurately modeling the feedbacks among the carbon cycle, nitrogen cycle, and climate system. Several global land surface models have shown that nitrogen limitation determines how land carbon fluxes respond to rising CO2, nitrogen deposition, and climate change, thereby influencing predictions of climate change. However, the magnitude of the carbon-nitrogen-climate feedbacks varies considerably by model, leading to critical and timely questions of why they differ and how they compare to field observations. To address these questions, we initiated a model inter-comparison of spatial patterns and drivers of nitrogen limitation. The experiment assessed the regional consequences of sustained nitrogen additions in a set of 25-year global nitrogen fertilization simulations. The model experiments were designed to cover effects from small changes in nitrogen inputs associated with plausible increases in nitrogen deposition to large changes associated with field-based nitrogen fertilization experiments. The analyses of model simulations included assessing the geographically varying degree of nitrogen limitation on plant and soil carbon cycling and the mechanisms underlying model differences. Here, we present results from two global land-surface models (CLM-CN and O-CN) with differing approaches to modeling carbon-nitrogen interactions. The predictions from each model were compared to a set of globally distributed observational data that includes nitrogen fertilization experiments, 15N tracer studies, small catchment nitrogen input-output studies, and syntheses across nitrogen deposition gradients. Together these datasets test many aspects of carbon-nitrogen coupling and are able to differentiate between the two models. Overall, this study is the first to explicitly benchmark carbon and nitrogen interactions in Earth System Models using a range of observations and is a foundation for future inter-comparisons.

  19. Cross-Evaluation of Degree Programmes in Higher Education

    ERIC Educational Resources Information Center

    Kettunen, Juha

    2010-01-01

    Purpose: This study seeks to develop and describe the benchmarking approach of enhancement-led evaluation in higher education and to present a cross-evaluation process for degree programmes. Design/methodology/approach: The benchmarking approach produces useful information for the development of degree programmes based on self-evaluation,…

  20. Establishing Language Benchmarks for Children with Typically Developing Language and Children with Language Impairment

    ERIC Educational Resources Information Center

    Schmitt, Mary Beth; Logan, Jessica A. R.; Tambyraja, Sherine R.; Farquharson, Kelly; Justice, Laura M.

    2017-01-01

    Purpose: Practitioners, researchers, and policymakers (i.e., stakeholders) have vested interests in children's language growth yet currently do not have empirically driven methods for measuring such outcomes. The present study established language benchmarks for children with typically developing language (TDL) and children with language…

  1. Benchmarking Academic Libraries: An Australian Case Study.

    ERIC Educational Resources Information Center

    Robertson, Margaret; Trahn, Isabella

    1997-01-01

    Discusses experiences and outcomes of benchmarking at the Queensland University of Technology (Australia) library that compared acquisitions, cataloging, document delivery, and research support services with those of the University of New South Wales. Highlights include results as a catalyst for change, and the use of common output and performance…

  2. 76 T dwarfs from the UKIDSS LAS: benchmarks, kinematics and an updated space density

    NASA Astrophysics Data System (ADS)

    Burningham, Ben; Cardoso, C. V.; Smith, L.; Leggett, S. K.; Smart, R. L.; Mann, A. W.; Dhital, S.; Lucas, P. W.; Tinney, C. G.; Pinfield, D. J.; Zhang, Z.; Morley, C.; Saumon, D.; Aller, K.; Littlefair, S. P.; Homeier, D.; Lodieu, N.; Deacon, N.; Marley, M. S.; van Spaandonk, L.; Baker, D.; Allard, F.; Andrei, A. H.; Canty, J.; Clarke, J.; Day-Jones, A. C.; Dupuy, T.; Fortney, J. J.; Gomes, J.; Ishii, M.; Jones, H. R. A.; Liu, M.; Magazzú, A.; Marocco, F.; Murray, D. N.; Rojas-Ayala, B.; Tamura, M.

    2013-07-01

    We report the discovery of 76 new T dwarfs from the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS). Near-infrared broad- and narrow-band photometry and spectroscopy are presented for the new objects, along with Wide-field Infrared Survey Explorer (WISE) and warm-Spitzer photometry. Proper motions for 128 UKIDSS T dwarfs are presented from a new two epoch LAS proper motion catalogue. We use these motions to identify two new benchmark systems: LHS 6176AB, a T8p+M4 pair and HD 118865AB, a T5.5+F8 pair. Using age constraints from the primaries and evolutionary models to constrain the radii, we have estimated their physical properties from their bolometric luminosity. We compare the colours and properties of known benchmark T dwarfs to the latest model atmospheres and draw two principal conclusions. First, it appears that the H - [4.5] and J - W2 colours are more sensitive to metallicity than has previously been recognized, such that differences in metallicity may dominate over differences in Teff when considering relative properties of cool objects using these colours. Secondly, the previously noted apparent dominance of young objects in the late-T dwarf sample is no longer apparent when using the new model grids and the expanded sample of late-T dwarfs and benchmarks. This is supported by the apparently similar distribution of late-T dwarfs and earlier type T dwarfs on reduced proper motion diagrams that we present. Finally, we present updated space densities for the late-T dwarfs, and compare our values to simulation predictions and those from WISE.

  3. Optimization of Deep Drilling Performance--Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2003-10-01

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year from October 2002 through September 2003. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; Industry Team was assembled; Kick-off meeting was held at DOE Morgantown; 1Q 2003--Engineering meeting was held at Hughes Christensen, The Woodlands, Texas, to prepare preliminary plans for development and testing and review equipment needs; Operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the Industry Team as DEA 148 objectives paralleled the DOE project; 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced; 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commence planning for Phase 1 testing--recommendations for bits and fluids.

  4. Benchmark study on glyphosate-resistant cropping systems in the United States. Part 4: Weed management practices and effects on weed populations and soil seedbanks.

    PubMed

    Wilson, Robert G; Young, Bryan G; Matthews, Joseph L; Weller, Stephen C; Johnson, William G; Jordan, David L; Owen, Micheal D K; Dixon, Philip M; Shaw, David R

    2011-07-01

    Weed management in glyphosate-resistant (GR) maize, cotton and soybean in the United States relies almost exclusively on glyphosate, which raises criticism for facilitating shifts in weed populations. In 2006, the benchmark study, a field-scale investigation, was initiated in three different GR cropping systems to characterize academic recommendations for weed management and to determine the level to which these recommendations would reduce weed population shifts. A majority of growers used glyphosate as the only herbicide for weed management, as opposed to 98% of the academic recommendations implementing at least two herbicide active ingredients and modes of action. The additional herbicides were applied with glyphosate and as soil residual treatments. The greater herbicide diversity with academic recommendations reduced weed population densities before and after post-emergence herbicide applications in 2006 and 2007, particularly in continuous GR crops. Diversifying herbicides reduces weed population densities and lowers the risk of weed population shifts and the associated potential for the evolution of glyphosate-resistant weeds in continuous GR crops. Altered weed management practices (e.g. herbicides or tillage) enabled by rotating crops, whether GR or non-GR, improves weed management and thus minimizes the effectiveness of only using chemical tactics to mitigate weed population shifts. Copyright © 2011 Society of Chemical Industry.

  5. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
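
    A minimal version of the nonparametric approach, fitting a monotone dose-response curve by isotonic regression and inverting it at a pre-specified benchmark response (extra risk), is sketched below with toy quantal data. It illustrates the idea only and omits the bootstrap confidence limits and asymptotic analysis discussed in the paper.

```python
# Hedged sketch of the idea with toy quantal data (not the authors' procedure):
# fit a monotone dose-response curve by isotonic regression, then invert it at
# a pre-specified benchmark response expressed as extra risk.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def benchmark_dose(doses, affected, totals, bmr=0.10):
    doses = np.asarray(doses, dtype=float)
    p_obs = np.asarray(affected, dtype=float) / np.asarray(totals, dtype=float)
    # Monotone non-decreasing fit of response probability versus dose,
    # weighted by group size.
    p_fit = IsotonicRegression(increasing=True).fit_transform(
        doses, p_obs, sample_weight=totals)
    p0 = p_fit[0]
    extra_risk = (p_fit - p0) / (1.0 - p0)
    if extra_risk[-1] < bmr:
        return None  # benchmark response not reached within the tested doses
    # Linear interpolation of dose as a function of extra risk.
    return float(np.interp(bmr, extra_risk, doses))

doses    = [0.0, 5.0, 10.0, 20.0, 40.0]
affected = [1,   2,   5,    9,    18]
totals   = [50,  50,  50,   50,   50]
print("BMD at 10% extra risk:", benchmark_dose(doses, affected, totals))
```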

  6. Groundwater-quality data in seven GAMA study units: results from initial sampling, 2004-2005, and resampling, 2007-2008, of wells: California GAMA Program Priority Basin Project

    USGS Publications Warehouse

    Kent, Robert; Belitz, Kenneth; Fram, Miranda S.

    2014-01-01

    The Priority Basin Project (PBP) of the Groundwater Ambient Monitoring and Assessment (GAMA) Program was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The GAMA-PBP began sampling, primarily public supply wells in May 2004. By the end of February 2006, seven (of what would eventually be 35) study units had been sampled over a wide area of the State. Selected wells in these first seven study units were resampled for water quality from August 2007 to November 2008 as part of an assessment of temporal trends in water quality by the GAMA-PBP. The initial sampling was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within the seven study units. In the 7 study units, 462 wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study area. Wells selected this way are referred to as grid wells or status wells. Approximately 3 years after the initial sampling, 55 of these previously sampled status wells (approximately 10 percent in each study unit) were randomly selected for resampling. The seven resampled study units, the total number of status wells sampled for each study unit, and the number of these wells resampled for trends are as follows, in chronological order of sampling: San Diego Drainages (53 status wells, 7 trend wells), North San Francisco Bay (84, 10), Northern San Joaquin Basin (51, 5), Southern Sacramento Valley (67, 7), San Fernando–San Gabriel (35, 6), Monterey Bay and Salinas Valley Basins (91, 11), and Southeast San Joaquin Valley (83, 9). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], pesticides, and pesticide degradates), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), and naturally-occurring inorganic constituents (nutrients, major and minor ions, and trace elements). Naturally-occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water) also were measured to help identify processes affecting groundwater quality and the sources and ages of the sampled groundwater. Nearly 300 constituents and water-quality indicators were investigated. Quality-control samples (blanks, replicates, and samples for matrix spikes) were collected at 24 percent of the 55 status wells resampled for trends, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the groundwater samples. Differences between replicate samples were mostly within acceptable ranges, indicating acceptably low variability in analytical results. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for 75 percent of the compounds for which matrix spikes were collected. This study did not attempt to evaluate the quality of water delivered to consumers. After withdrawal, groundwater typically is treated, disinfected, and blended with other waters to maintain acceptable water quality. The benchmarks used in this report apply to treated water that is served to the consumer, not to untreated groundwater. 
To provide some context for the results, however, concentrations of constituents measured in these groundwater samples were compared with benchmarks established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH). Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most constituents that were detected in groundwater samples from the trend wells were found at concentrations less than drinking-water benchmarks. Four VOCs—trichloroethene, tetrachloroethene, 1,2-dibromo-3-chloropropane, and methyl tert-butyl ether—were detected in one or more wells at concentrations greater than their health-based benchmarks, and six VOCs were detected in at least 10 percent of the samples during initial sampling or resampling of the trend wells. No pesticides were detected at concentrations near or greater than their health-based benchmarks. Three pesticide constituents—atrazine, deethylatrazine, and simazine—were detected in more than 10 percent of the trend-well samples during both sampling periods. Perchlorate, a constituent of special interest, was detected more frequently, and at greater concentrations during resampling than during initial sampling, but this may be due to a change in analytical method between the sampling periods, rather than to a change in groundwater quality. Another constituent of special interest, 1,2,3-TCP, was also detected more frequently during resampling than during initial sampling, but this pattern also may not reflect a change in groundwater quality. Samples from several of the wells where 1,2,3-TCP was detected by low-concentration-level analysis during resampling were not analyzed for 1,2,3-TCP using a low-level method during initial sampling. Most detections of nutrients and trace elements in samples from trend wells were less than health-based benchmarks during both sampling periods. Exceptions include nitrate, arsenic, boron, and vanadium, all detected at concentrations greater than their health-based benchmarks in at least one well during both sampling periods, and molybdenum, detected at concentrations greater than its health-based benchmark during resampling only. The isotopic ratios of oxygen and hydrogen in water and tritium and carbon-14 activities generally changed little between sampling periods, suggesting that the predominant sources and ages of groundwater in most trend wells were consistent between the sampling periods.

  7. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find a significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. System impact research – increasing public health and health care system performance

    PubMed Central

    Malmivaara, Antti

    2016-01-01

    Abstract Background Interventions directed to system features of public health and health care should increase health and welfare of patients and population. Aims To build a new framework for studies aiming to assess the impact of public health or health care system, and to consider the role of Randomized Controlled Trials (RCTs) and of Benchmarking Controlled Trials (BCTs). Methods The new concept is partly based on the author's previous paper on the Benchmarking Controlled Trial. The validity and generalizability considerations were based on previous methodological studies on RCTs and BCTs. Results The new concept System Impact Research (SIR) covers all the studies which aim to assess the impact of the public health system or of the health care system on patients or on population. There are two kinds of studies in System Impact Research: Benchmarking Controlled Trials (observational) and Randomized Controlled Trials (experimental). The term impact covers in particular accessibility, quality, effectiveness, safety, efficiency, and equality. Conclusions System Impact Research – creating the scientific basis for policy decision making - should be given a high priority in medical, public health and health economic research, and should also be used for improving performance. Leaders at all levels of health and social care can use the evidence from System Impact Research for the benefit of patients and population. Key messages: The new concept of SIR is defined as a research field aiming at assessing the impacts on patients and on populations of features of public health and health and social care systems or of interventions trying to change these features. SIR covers all features of public health and health and social care system, and actions upon these features. The term impact refers to all effects caused by the public health and health and social care system or parts of it, with particular emphasis on accessibility, quality, effectiveness, adverse effects, efficiency, and equality of services. SIR creates the scientific basis for policy decisions. Leaders at all levels of health and social care can use the evidence from SIR for the benefit of the patients and the population. PMID:26977939

  9. System impact research - increasing public health and health care system performance.

    PubMed

    Malmivaara, Antti

    2016-01-01

    Interventions directed to system features of public health and health care should increase health and welfare of patients and population. To build a new framework for studies aiming to assess the impact of public health or health care system, and to consider the role of Randomized Controlled Trials (RCTs) and of Benchmarking Controlled Trials (BCTs). The new concept is partly based on the author's previous paper on the Benchmarking Controlled Trial. The validity and generalizability considerations were based on previous methodological studies on RCTs and BCTs. The new concept System Impact Research (SIR) covers all the studies which aim to assess the impact of the public health system or of the health care system on patients or on population. There are two kinds of studies in System Impact Research: Benchmarking Controlled Trials (observational) and Randomized Controlled Trials (experimental). The term impact covers in particular accessibility, quality, effectiveness, safety, efficiency, and equality. System Impact Research - creating the scientific basis for policy decision making - should be given a high priority in medical, public health and health economic research, and should also be used for improving performance. Leaders at all levels of health and social care can use the evidence from System Impact Research for the benefit of patients and population. Key messages The new concept of SIR is defined as a research field aiming at assessing the impacts on patients and on populations of features of public health and health and social care systems or of interventions trying to change these features. SIR covers all features of public health and health and social care system, and actions upon these features. The term impact refers to all effects caused by the public health and health and social care system or parts of it, with particular emphasis on accessibility, quality, effectiveness, adverse effects, efficiency, and equality of services. SIR creates the scientific basis for policy decisions. Leaders at all levels of health and social care can use the evidence from SIR for the benefit of the patients and the population.

  10. Benchmarking facilities providing care: An international overview of initiatives

    PubMed Central

    Thonon, Frédérique; Watson, Jonathan; Saghatchian, Mahasti

    2015-01-01

    We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success and threat factors linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects of different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported positive impact in terms of clinical outcomes. Success factors and threats are linked to both the benchmarking process (such as organisation of meetings, link with existing projects) and indicators used (such as adjustment for diagnostic-related groups). The results of this review will help coordinators of a benchmarking project to set it up successfully. PMID:26770800

  11. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation

    PubMed Central

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B.

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field. PMID:27853419

  12. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation.

    PubMed

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.
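
    Of the encoding techniques listed above, rate-based Poisson spike generation is the simplest to illustrate. The sketch below converts pixel intensities into independent Poisson spike trains over a fixed presentation window; it is a generic illustration under assumed parameters, not the dataset's actual generation code.

```python
# Hedged sketch of rate-based Poisson spike generation under assumed parameters
# (not the dataset's generation code): each pixel's intensity sets the rate of
# an independent Poisson spike train over a fixed presentation window.
import numpy as np

def poisson_spike_trains(image, duration_ms=200.0, dt_ms=1.0, max_rate_hz=100.0):
    """image: 2-D array of intensities in [0, 1] (e.g. a rescaled MNIST digit).
    Returns a boolean array of shape (n_pixels, n_timesteps); True marks a spike."""
    rates_hz = np.asarray(image, dtype=float).ravel() * max_rate_hz
    p_spike = rates_hz * dt_ms / 1000.0            # spike probability per time step
    n_steps = int(round(duration_ms / dt_ms))
    rng = np.random.default_rng(0)
    return rng.random((rates_hz.size, n_steps)) < p_spike[:, None]

# Toy 'digit': a 28x28 image with a bright central patch.
img = np.zeros((28, 28))
img[10:18, 10:18] = 1.0
spikes = poisson_spike_trains(img)
print("total spikes:", int(spikes.sum()), "array shape:", spikes.shape)
```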

  13. RASSP Benchmark 4 Technical Description.

    DTIC Science & Technology

    1998-01-09

    be carried out. Based on results of the study, an implementation of all, or part, of the system described in this benchmark technical description...validate interface and timing constraints. The ISA level of modeling defines the limit of detail expected in the VHDL virtual prototype. It does not...develop a set of candidate architectures and perform an architecture trade-off study. Candidate processor implementations must then be examined for

  14. A Benchmark Study of Large Contract Supplier Monitoring Within DOD and Private Industry

    DTIC Science & Technology

    1994-03-01

    The study examined initiatives including supplier monitoring and recognition, a reduced number of suppliers, global sourcing, long-term supplier and contractor relationships, and a refocusing on customer quality. These initiatives were then compared to DCMC practices.

  15. Middle Level Teachers' Perceptions of Interim Reading Assessments: An Exploratory Study of Data-Based Decision Making

    ERIC Educational Resources Information Center

    Reed, Deborah K.

    2015-01-01

    This study explored the data-based decision making of 12 teachers in grades 6-8 who were asked about their perceptions and use of three required interim measures of reading performance: oral reading fluency (ORF), retell, and a benchmark comprised of released state test items. Focus group participants reported they did not believe the benchmark or…

  16. Relationship between the TCAP and the Pearson Benchmark Assessment in Elementary Students' Reading and Math Performance in a Northeastern Tennessee School District

    ERIC Educational Resources Information Center

    Dugger-Roberts, Cherith A.

    2014-01-01

    The purpose of this quantitative study was to determine if there was a relationship between the TCAP test and Pearson Benchmark assessment in elementary students' reading and language arts and math performance in a northeastern Tennessee school district. This study involved 3rd, 4th, 5th, and 6th grade students. The study focused on the following…

  17. Clinical Impact Research – how to choose experimental or observational intervention study?

    PubMed Central

    Malmivaara, Antti

    2016-01-01

    Abstract Background: Interventions directed to individuals by health and social care systems should increase health and welfare of patients and customers. Aims: This paper aims to present and define a new concept Clinical Impact Research (CIR) and suggest which study design, either randomized controlled trial (RCT) (experimental) or benchmarking controlled trial (BCT) (observational) is recommendable and to consider the feasibility, validity, and generalizability issues in CIR. Methods: The new concept is based on a narrative review of the literature and on author's idea that in intervention studies, there is a need to cover comprehensively all the main impact categories and their respective outcomes. The considerations on how to choose the most appropriate study design (RCT or BCT) were based on previous methodological studies on RCTs and BCTs and on author's previous work on the concepts benchmarking controlled trial and system impact research (SIR). Results: The CIR covers all studies aiming to assess the impact for health and welfare of any health (and integrated social) care or public health intervention directed to an individual. The impact categories are accessibility, quality, equality, effectiveness, safety, and efficiency. Impact is the main concept, and within each impact category, both generic- and context-specific outcome measures are needed. CIR uses RCTs and BCTs. Conclusions: CIR should be given a high priority in medical, health care, and health economic research. Clinicians and leaders at all levels of health care can exploit the evidence from CIR. Key messages: The new concept of Clinical Impact Research (CIR) is defined as a research field aiming to assess what are the impacts of healthcare and public health interventions targeted to patients or individuals. The term impact refers to all effects caused by the interventions, with particular emphasis on accessibility, quality, equality, effectiveness, safety, and efficiency. CIR uses two study designs: randomized controlled trials (RCTs) (experimental) and benchmarking controlled trials (BCTs) (observational). Suggestions on how to choose between RCT and BCT as the most suitable study design are presented. Simple way of determining the study question in CIR based on the PICO (patient, intervention, control intervention, outcome) framework is presented. CIR creates the scientific basis for clinical decisions. Clinicians and leaders at all levels of health care and those working for public health can use the evidence from CIR for the benefit of patients and the population. PMID:27494394

  18. Clinical Impact Research - how to choose experimental or observational intervention study?

    PubMed

    Malmivaara, Antti

    2016-11-01

    Interventions directed to individuals by health and social care systems should increase health and welfare of patients and customers. This paper aims to present and define a new concept Clinical Impact Research (CIR) and suggest which study design, either randomized controlled trial (RCT) (experimental) or benchmarking controlled trial (BCT) (observational) is recommendable and to consider the feasibility, validity, and generalizability issues in CIR. The new concept is based on a narrative review of the literature and on author's idea that in intervention studies, there is a need to cover comprehensively all the main impact categories and their respective outcomes. The considerations on how to choose the most appropriate study design (RCT or BCT) were based on previous methodological studies on RCTs and BCTs and on author's previous work on the concepts benchmarking controlled trial and system impact research (SIR). The CIR covers all studies aiming to assess the impact for health and welfare of any health (and integrated social) care or public health intervention directed to an individual. The impact categories are accessibility, quality, equality, effectiveness, safety, and efficiency. Impact is the main concept, and within each impact category, both generic- and context-specific outcome measures are needed. CIR uses RCTs and BCTs. CIR should be given a high priority in medical, health care, and health economic research. Clinicians and leaders at all levels of health care can exploit the evidence from CIR. Key messages The new concept of Clinical Impact Research (CIR) is defined as a research field aiming to assess what are the impacts of healthcare and public health interventions targeted to patients or individuals. The term impact refers to all effects caused by the interventions, with particular emphasis on accessibility, quality, equality, effectiveness, safety, and efficiency. CIR uses two study designs: randomized controlled trials (RCTs) (experimental) and benchmarking controlled trials (BCTs) (observational). Suggestions on how to choose between RCT and BCT as the most suitable study design are presented. Simple way of determining the study question in CIR based on the PICO (patient, intervention, control intervention, outcome) framework is presented. CIR creates the scientific basis for clinical decisions. Clinicians and leaders at all levels of health care and those working for public health can use the evidence from CIR for the benefit of patients and the population.

  19. Generation and Radiation of Acoustic Waves from a 2-D Shear Layer

    NASA Technical Reports Server (NTRS)

    Agarwal, Anurag; Morris, Philip J.

    2000-01-01

    A parallel numerical simulation of the radiation of sound from an acoustic source inside a 2-D jet is presented in this paper. This basic benchmark problem is used as a test case for scattering problems that are presently being solved by using the Impedance Mismatch Method (IMM). In this technique, a solid body in the domain is represented by setting the acoustic impedance of each medium, encountered by a wave, to a different value. This impedance discrepancy results in reflected and scattered waves with appropriate amplitudes. The great advantage of this method is that no modifications to a simple Cartesian grid are needed for bodies with complicated geometries. Thus, high order finite difference schemes may be applied simply to all parts of the domain. In the IMM, the total perturbation field is split into incident and scattered fields. The incident pressure is assumed to be known and the equivalent sources for the scattered field are associated with the presence of the scattering body (through the impedance mismatch) and the propagation of the incident field through a non-uniform flow. An earlier version of the technique could only handle uniform flow in the vicinity of the source and at the outflow boundary. Scattering problems in non-uniform mean flow are of great practical importance (for example, scattering from a high lift device in a non-uniform mean flow or the effects of a fuselage boundary layer). The solution to this benchmark problem, which has an acoustic wave propagating through a non-uniform mean flow, serves as a test case for the extensions of the IMM technique.
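
    As a rough illustration of the impedance-mismatch idea (not the authors' code), the sketch below marks a solid body on a uniform Cartesian grid simply by assigning it a different acoustic impedance; the solver itself is omitted, and the grid dimensions, body shape, and impedance values are placeholders.

```python
import numpy as np

# Uniform 2-D Cartesian grid; no body-fitted meshing is required.
nx, ny = 200, 100
x = np.linspace(0.0, 2.0, nx)
y = np.linspace(0.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

# Acoustic impedance field: a background value in the fluid and a mismatched
# (much larger) value inside the scattering body. Values are placeholders.
z_fluid, z_body = 1.0, 1.0e4
body_mask = (X - 1.0) ** 2 + (Y - 0.5) ** 2 < 0.1 ** 2   # a circular scatterer
impedance = np.where(body_mask, z_body, z_fluid)

# A finite-difference propagation scheme would now read `impedance` pointwise;
# the mismatch at the body boundary is what produces the scattered waves.
print(f"{body_mask.sum()} of {nx * ny} grid cells lie inside the body")
```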

  20. Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers

    NASA Astrophysics Data System (ADS)

    Lemyre Garneau, Mathieu

    A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single or bi-objective and vary in complexity in terms of the number of variables used (from 5 to 29), the type of variables (integer, real, category), the number of constraints (from 5 to 17) and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) plant with molten salt thermal storage. The model simulates the performance of the power plant through a high-level model of each of its main components, namely, a heliostat field, a central cavity receiver, a molten salt heat storage, a steam generator and an idealized power block. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering and spillage losses as a function of the design parameters. A Monte Carlo integration method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs to simulate the levels and temperatures of the thermal storage. The molten salt storage inventory is used to transfer thermal energy to the power block, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on the investment cost, parasitic losses or component failures. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.
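
    As a hedged sketch of one of the ingredients mentioned above, the snippet below estimates the mean cosine efficiency of a few heliostats by Monte Carlo sampling of sun directions. It is not the thesis model: the tower geometry, heliostat positions, and sun-angle distribution are all invented for illustration; only the standard geometric fact that an ideal heliostat normal bisects the sun and receiver directions is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical geometry: receiver atop a 100 m tower, heliostats on the ground.
receiver = np.array([0.0, 0.0, 100.0])
heliostats = np.array([[80.0, 0.0, 0.0], [0.0, 150.0, 0.0], [-60.0, -60.0, 0.0]])

# Monte Carlo sample of daytime sun directions (crude placeholder distribution).
n_samples = 10_000
elevation = rng.uniform(np.radians(10), np.radians(70), n_samples)
azimuth = rng.uniform(np.radians(90), np.radians(270), n_samples)
sun = np.stack([np.cos(elevation) * np.sin(azimuth),
                np.cos(elevation) * np.cos(azimuth),
                np.sin(elevation)], axis=-1)

for pos in heliostats:
    to_receiver = unit(receiver - pos)            # fixed aiming direction
    # The ideal mirror normal bisects the sun and receiver directions, so the
    # incidence angle is half the sun-receiver angle: cos(theta/2) via half-angle.
    cos_half = np.sqrt(0.5 * (1.0 + np.clip(sun @ to_receiver, -1.0, 1.0)))
    print(f"heliostat at {pos[:2]}: mean cosine efficiency {cos_half.mean():.3f}")
```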

  1. Analysis of contact zones from whole field isochromatics using reflection photoelasticity

    NASA Astrophysics Data System (ADS)

    Hariprasad, M. P.; Ramesh, K.

    2018-06-01

    This paper discusses a method for evaluating unknown contact parameters by post-processing, in a nonlinear least-squares sense, the whole-field fringe order data obtained from reflection photoelasticity. Recent developments in Twelve Fringe Photoelasticity (TFP) for fringe order evaluation from a single isochromatic image are utilized for the whole-field fringe order evaluation. One of the issues in using TFP for reflection photoelasticity is the smudging of isochromatic data at the contact zone. This leads to errors in identifying the origin of contact, which are successfully addressed by implementing a semi-automatic contact-point refinement algorithm. The methodologies are initially verified on benchmark problems and then demonstrated on two application problems involving turbine blade and sheet pile contacting interfaces.
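
    The fitting step described above is, in essence, a nonlinear least-squares match between a forward model of the fringe order and the whole-field measured data. The sketch below shows that structure only; the quadratic forward model, parameter names, and synthetic data are placeholders and do not represent the paper's contact-stress equations.

```python
import numpy as np
from scipy.optimize import least_squares

def model_fringe_order(params, x, y):
    """Placeholder forward model: fringe order predicted at (x, y) for given
    contact parameters. A real model would come from contact-stress theory;
    this surrogate only makes the fitting loop runnable."""
    load, x0, y0 = params
    r2 = (x - x0) ** 2 + (y - y0) ** 2 + 1e-6
    return load / r2

def residuals(params, x, y, n_measured):
    return model_fringe_order(params, x, y) - n_measured

# Synthetic "whole-field" data standing in for TFP-evaluated fringe orders.
rng = np.random.default_rng(1)
x, y = rng.uniform(-5, 5, 500), rng.uniform(0.5, 5, 500)
true_params = np.array([12.0, 0.3, 0.0])
n_measured = model_fringe_order(true_params, x, y) + rng.normal(0, 0.01, x.size)

fit = least_squares(residuals, x0=[5.0, 0.0, 0.5], args=(x, y, n_measured))
print("estimated contact parameters:", fit.x)
```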

  2. Addiction recovery: its definition and conceptual boundaries.

    PubMed

    White, William L

    2007-10-01

    The addiction field's failure to achieve consensus on a definition of "recovery" from severe and persistent alcohol and other drug problems undermines clinical research, compromises clinical practice, and muddles the field's communications to service constituents, allied service professionals, the public, and policymakers. This essay discusses 10 questions critical to the achievement of such a definition and offers a working definition of recovery that attempts to meet the criteria of precision, inclusiveness, exclusiveness, measurability, acceptability, and simplicity. The key questions explore who has professional and cultural authority to define recovery, the defining ingredients of recovery, the boundaries (scope and depth) of recovery, and temporal benchmarks of recovery (when recovery begins and ends). The process of defining recovery touches on some of the most controversial issues within the addictions field.

  3. Application of Phase-Field Techniques to Hydraulically- and Deformation-Induced Fracture.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culp, David; Miller, Nathan; Schweizer, Laura

    Phase-field techniques provide an alternative approach to fracture problems which mitigates some of the computational expense associated with tracking the crack interface and the coalescence of individual fractures. The technique is extended to apply to hydraulically driven fracture such as would occur during fracking or CO2 sequestration. Additionally, the technique is applied to a stainless steel specimen used in the Sandia Fracture Challenge. It was found that the phase-field model performs very well, at least qualitatively, in both deformation-induced fracture and hydraulically-induced fracture, though spurious hourglassing modes were observed during coupled hydraulically-induced fracture. Future work would include performing additional quantitative benchmark tests and updating the model as needed.
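
    For readers unfamiliar with the regularized crack representation that makes this possible, the sketch below shows the textbook one-dimensional picture, not the report's model: a sharp crack is smeared into a damage field over a length scale, and the stiffness is degraded by a quadratic function of the damage. The length scale, stiffness value, and the specific AT2-style profile are generic assumptions for illustration.

```python
import numpy as np

# Generic 1-D illustration of the phase-field idea (not the report's model):
# a crack at x = 0 is smeared into a damage field d(x) over a length scale ell,
# and the material stiffness is degraded by g(d) = (1 - d)^2.
ell = 0.5                                  # regularization length (placeholder)
x = np.linspace(-5.0, 5.0, 201)
d = np.exp(-np.abs(x) / ell)               # classical 1-D AT2-type crack profile
E0 = 210e3                                 # undamaged stiffness, MPa (placeholder)
E_degraded = (1.0 - d) ** 2 * E0

i0 = np.argmin(np.abs(x))                  # grid point nearest the crack plane
i3 = np.argmin(np.abs(x - 3 * ell))        # three length scales away
print(f"stiffness at the crack plane: {E_degraded[i0]:.1f} MPa")
print(f"stiffness three lengths away: {E_degraded[i3]:.0f} MPa")
```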

  4. Benchmarks of programming languages for special purposes in the space station

    NASA Technical Reports Server (NTRS)

    Knoebel, Arthur

    1986-01-01

    Although Ada is likely to be chosen as the principal programming language for the Space Station, certain needs, such as expert systems and robotics, may be better developed in special languages. The languages, LISP and Prolog, are studied and some benchmarks derived. The mathematical foundations for these languages are reviewed. Likely areas of the space station are sought out where automation and robotics might be applicable. Benchmarks are designed which are functional, mathematical, relational, and expert in nature. The coding will depend on the particular versions of the languages which become available for testing.

  5. A study of unsteady physiological magneto-fluid flow and heat transfer through a finite length channel by peristaltic pumping.

    PubMed

    Tripathi, Dharmendra; Bég, O Anwar

    2012-08-01

    Magnetohydrodynamic peristaltic flows arise in controlled magnetic drug targeting, hybrid haemodynamic pumps and biomagnetic phenomena interacting with the human digestive system. Motivated by the objective of improving the understanding of the complex fluid dynamics in such flows, we consider in the present article the transient magneto-fluid flow and heat transfer through a finite length channel by peristaltic pumping. The Reynolds number is taken small enough, and the wavelength-to-diameter ratio large enough, to neglect inertial effects. Analytical solutions for the temperature field, axial velocity, transverse velocity, pressure gradient, local wall shear stress, volume flow rate and averaged volume flow rate are obtained. The effects of the transverse magnetic field, Grashof number and thermal conductivity on the flow patterns induced by peristaltic waves (sinusoidal propagation along the length of the channel) are studied using graphical plots. The present study identifies that greater pressure is required to propel the magneto-fluid by peristaltic pumping in comparison to a non-conducting Newtonian fluid, whereas a lower pressure is required if heat transfer is effective. The analytical solutions further provide an important benchmark for future numerical simulations.

  6. Somatic cell nuclear transfer: pros and cons.

    PubMed

    Sumer, Huseyin; Liu, Jun; Tat, Pollyanna; Heffernan, Corey; Jones, Karen L; Verma, Paul J

    2009-01-01

    Even though the technique of mammalian SCNT is just over a decade old, it has already resulted in numerous significant advances. Despite the recent advances in the reprogramming field, SCNT remains the benchmark both for the generation of genetically unmodified autologous pluripotent stem cells for transplantation and for the production of cloned animals. In this review we will discuss the pros and cons of SCNT, drawing comparisons with other reprogramming methods.

  7. High-Field High-Repetition-Rate Sources for the Coherent THz Control of Matter

    PubMed Central

    Green, B.; Kovalev, S.; Asgekar, V.; Geloni, G.; Lehnert, U.; Golz, T.; Kuntzsch, M.; Bauer, C.; Hauser, J.; Voigtlaender, J.; Wustmann, B.; Koesterke, I.; Schwarz, M.; Freitag, M.; Arnold, A.; Teichert, J.; Justus, M.; Seidel, W.; Ilgner, C.; Awari, N.; Nicoletti, D.; Kaiser, S.; Laplace, Y.; Rajasekaran, S.; Zhang, L.; Winnerl, S.; Schneider, H.; Schay, G.; Lorincz, I.; Rauscher, A. A.; Radu, I.; Mährlein, S.; Kim, T. H.; Lee, J. S.; Kampfrath, T.; Wall, S.; Heberle, J.; Malnasi-Csizmadia, A.; Steiger, A.; Müller, A. S.; Helm, M.; Schramm, U.; Cowan, T.; Michel, P.; Cavalleri, A.; Fisher, A. S.; Stojanovic, N.; Gensch, M.

    2016-01-01

    Ultrashort flashes of THz light with low photon energies of a few meV, but strong electric or magnetic field transients, have recently been employed to prepare various fascinating nonequilibrium states in matter. Here we present a new class of sources based on superradiant enhancement of radiation from relativistic electron bunches in a compact electron accelerator that we believe will revolutionize experiments in this field. Our prototype source generates high-field THz pulses at unprecedented quasi-continuous-wave repetition rates up to the MHz regime. We demonstrate parameters that exceed state-of-the-art laser-based sources by more than 2 orders of magnitude. The peak fields and the repetition rates are highly scalable, and once fully operational this type of source will routinely provide 1 MV/cm electric fields and 0.3 T magnetic fields at repetition rates of a few hundred kHz. We benchmark these unique properties by performing a resonant coherent THz control experiment with a resolution of a few tens of femtoseconds. PMID:26924651

  8. High-Field High-Repetition-Rate Sources for the Coherent THz Control of Matter

    DOE PAGES

    Green, B.; Kovalev, S.; Asgekar, V.; ...

    2016-02-29

    Ultrashort flashes of THz light with low photon energies of a few meV, but strong electric or magnetic field transients, have recently been employed to prepare various fascinating nonequilibrium states in matter. Here we present a new class of sources based on superradiant enhancement of radiation from relativistic electron bunches in a compact electron accelerator that we believe will revolutionize experiments in this field. Our prototype source generates high-field THz pulses at unprecedented quasi-continuous-wave repetition rates up to the MHz regime. We demonstrate parameters that exceed state-of-the-art laser-based sources by more than 2 orders of magnitude. The peak fields and the repetition rates are highly scalable, and once fully operational this type of source will routinely provide 1 MV/cm electric fields and 0.3 T magnetic fields at repetition rates of a few hundred kHz. In conclusion, we benchmark these unique properties by performing a resonant coherent THz control experiment with a resolution of a few tens of femtoseconds.

  9. Taking the Battle Upstream: Towards a Benchmarking Role for NATO

    DTIC Science & Technology

    2012-09-01

    [Only fragmentary front-matter text is available for this record: table-of-contents entries for a "Benchmark" section and for Figure 8, "World Bank Benchmarking Work on Quality of Governance"; a reference to "In Search of a Benchmarking Theory for the Public Sector"; and a note that, for comparison purposes, McKinsey categorized the Ministries of Defense in the countries in which it works.]

  10. Benchmarks--Standards Comparisons. Math Competencies: EFF Benchmarks Comparison [and] Reading Competencies: EFF Benchmarks Comparison [and] Writing Competencies: EFF Benchmarks Comparison.

    ERIC Educational Resources Information Center

    Kent State Univ., OH. Ohio Literacy Resource Center.

    This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…

  11. Medicare Part D Roulette: Potential Implications of Random Assignment and Plan Restrictions

    PubMed Central

    Patel, Rajul A.; Walberg, Mark P.; Woelfel, Joseph A.; Amaral, Michelle M.; Varu, Paresh

    2013-01-01

    Background: Dual-eligible (Medicare/Medicaid) beneficiaries are randomly assigned to a benchmark plan, which provides prescription drug coverage under the Part D benefit, without consideration of their prescription drug profile. To date, the potential for beneficiary assignment to a plan with poor formulary coverage has been minimally studied, and the resultant financial impact on beneficiaries is unknown. Objective: We sought to determine cost variability and drug use restrictions under each available 2010 California benchmark plan. Methods: Dual-eligible beneficiaries were provided Part D plan assistance during the 2010 annual election period. The Medicare Web site was used to determine benchmark plan costs and prescription utilization restrictions for each of the six California benchmark plans available for random assignment in 2010. A standardized survey was used to record all de-identified beneficiary demographic and plan-specific data. For each low-income subsidy recipient (n = 113), cost, rank, number of non-formulary medications, and prescription utilization restrictions were recorded for each available 2010 California benchmark plan. Formulary matching rates (percent of a beneficiary's medications on the plan formulary) were calculated for each benchmark plan. Results: Auto-assigned beneficiaries had only a 34% chance of being assigned to the lowest-cost plan; the remainder faced potentially significant avoidable out-of-pocket costs. Wide variations between benchmark plans were observed for plan cost, formulary coverage, formulary matching rates, and prescription utilization restrictions. Conclusions: Beneficiaries had a 66% chance of being assigned to a sub-optimal plan and thereby faced significant avoidable out-of-pocket costs. Alternative methods of beneficiary assignment could decrease beneficiary and Medicare costs while also reducing medication non-compliance. PMID:24753963
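
    The formulary matching rate defined above (percent of a beneficiary's medications covered by a plan's formulary) is straightforward to compute. The sketch below is illustrative only; the plan names, drug names, and costs are invented and do not come from the study.

```python
def formulary_matching_rate(beneficiary_meds, plan_formulary):
    """Percent of a beneficiary's medications covered by a plan's formulary."""
    meds = set(beneficiary_meds)
    if not meds:
        return 100.0
    return 100.0 * len(meds & set(plan_formulary)) / len(meds)

# Hypothetical beneficiary and benchmark plans (all names and numbers invented).
meds = ["lisinopril", "metformin", "atorvastatin", "levothyroxine"]
plans = {
    "Plan A": {"formulary": {"lisinopril", "metformin", "atorvastatin"}, "annual_cost": 310.0},
    "Plan B": {"formulary": {"lisinopril", "metformin", "atorvastatin", "levothyroxine"}, "annual_cost": 255.0},
    "Plan C": {"formulary": {"metformin", "levothyroxine"}, "annual_cost": 280.0},
}

for name, plan in plans.items():
    rate = formulary_matching_rate(meds, plan["formulary"])
    print(f"{name}: matching rate {rate:.0f}%, annual cost ${plan['annual_cost']:.2f}")

best = min(plans, key=lambda p: plans[p]["annual_cost"])
print("lowest-cost plan:", best)
```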

  12. Correlational effect size benchmarks.

    PubMed

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer-grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provides information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.
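
    The tertile-based benchmarks described above can be derived from any pool of observed correlations. The sketch below shows the mechanics only, on simulated data; the distribution parameters are invented and the resulting cutoffs are not the paper's values.

```python
import numpy as np

# Illustration only: given a pool of observed |r| values, derive empirical
# "small / medium / large" cutoffs as tertile boundaries, instead of using
# Cohen's fixed conventions. The data below are simulated, not the study's.
rng = np.random.default_rng(42)
correlations = np.abs(rng.normal(loc=0.16, scale=0.12, size=10_000)).clip(0, 1)

small_cut, large_cut = np.percentile(correlations, [33.3, 66.7])
print(f"empirical tertile cutoffs: small < {small_cut:.2f} <= medium < {large_cut:.2f} <= large")
print("Cohen's conventional cutoffs for r: .10 / .30 / .50")
```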

  13. Predicting field-scale dispersion under realistic conditions with the polar Markovian velocity process model

    NASA Astrophysics Data System (ADS)

    Dünser, Simon; Meyer, Daniel W.

    2016-06-01

    In most groundwater aquifers, dispersion of tracers is dominated by flow-field inhomogeneities resulting from the underlying heterogeneous conductivity or transmissivity field. This effect is referred to as macrodispersion. Since, in practice, the complete conductivity field is virtually never available beyond a few point measurements, a probabilistic treatment is needed. To quantify the uncertainty in tracer concentrations from a given geostatistical model for the conductivity, Monte Carlo (MC) simulation is typically used. To avoid the excessive computational costs of MC, the polar Markovian velocity process (PMVP) model was recently introduced, delivering predictions at computing times about three orders of magnitude smaller. In artificial test cases, the PMVP model has provided good results in comparison with MC. In this study, we further validate the model in a more challenging and realistic setup. The setup considered is derived from the well-known benchmark macrodispersion experiment (MADE), which is highly heterogeneous and non-stationary with a large number of unevenly scattered conductivity measurements. Validations against reference MC simulations show good overall agreement. Moreover, simulations of a simplified setup with a single measurement were conducted in order to reassess the model's most fundamental assumptions and to provide guidance for model improvements.
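
    To give a feel for the Monte Carlo treatment that the PMVP model is designed to accelerate, the sketch below runs a deliberately tiny 1-D analogue: lognormal conductivity realizations, steady Darcy flow, and the resulting spread in advective travel time. Every number (field statistics, porosity, head drop) is a placeholder, and the real MADE-type setups are multi-dimensional and far more expensive.

```python
import numpy as np

# Toy Monte Carlo over 1-D lognormal conductivity fields (illustration only).
rng = np.random.default_rng(7)
n_real, n_cells = 2_000, 100
L, porosity, head_drop = 100.0, 0.3, 1.0          # placeholder values

log_k = rng.normal(loc=-4.0, scale=1.0, size=(n_real, n_cells))
k = np.exp(log_k)                                  # lognormal conductivity, m/s

# Steady 1-D Darcy flow through cells in series: the flux is set by the
# harmonic mean conductivity of each realization.
k_harm = n_cells / np.sum(1.0 / k, axis=1)
darcy_flux = k_harm * head_drop / L
travel_time = porosity * L / darcy_flux            # advective travel time, s

print(f"mean travel time {travel_time.mean():.3g} s, "
      f"95% interval [{np.percentile(travel_time, 2.5):.3g}, "
      f"{np.percentile(travel_time, 97.5):.3g}] s")
```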

  14. Benchmarking study of the MCNP code against cold critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, S.

    1991-01-01

    The purpose of this study was to benchmark the widely used Monte Carlo code MCNP against a set of cold critical experiments with a view to using the code as a means of independently verifying the performance of faster but less accurate Monte Carlo and deterministic codes. The experiments simulated consisted of both fast and thermal criticals as well as fuel in a variety of chemical forms. A standard set of benchmark cold critical experiments was modeled. These included the two fast experiments, GODIVA and JEZEBEL, the TRX metallic uranium thermal experiments, the Babcock and Wilcox oxide and mixed oxide experiments, and the Oak Ridge National Laboratory (ORNL) and Pacific Northwest Laboratory (PNL) nitrate solution experiments. The principal case studied was a small critical experiment that was performed with boiling water reactor bundles.
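
    Criticality benchmarking of this kind is commonly reported as calculated-to-experimental (C/E) ratios of the effective multiplication factor. The minimal sketch below only illustrates that bookkeeping; the experiment names echo the abstract, but all k-eff values are invented placeholders, not results from the study.

```python
# Illustrative benchmarking bookkeeping: compare calculated k_eff values with
# experimental benchmark values via C/E ratios. All numbers are placeholders.
experiments = {
    "GODIVA":  {"k_exp": 1.0000, "k_calc": 0.9985},
    "JEZEBEL": {"k_exp": 1.0000, "k_calc": 1.0012},
    "TRX-1":   {"k_exp": 1.0000, "k_calc": 0.9978},
}

for name, r in experiments.items():
    ce = r["k_calc"] / r["k_exp"]
    # 1 pcm = 1e-5 in reactivity units, a common way to express small biases.
    print(f"{name}: C/E = {ce:.4f} ({(ce - 1) * 1e5:+.0f} pcm)")
```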

  15. Optical Gaps in Pristine and Heavily Doped Silicon Nanocrystals: DFT versus Quantum Monte Carlo Benchmarks.

    PubMed

    Derian, R; Tokár, K; Somogyi, B; Gali, Á; Štich, I

    2017-12-12

    We present a time-dependent density functional theory (TDDFT) study of the optical gaps of light-emitting nanomaterials, namely, pristine and heavily B- and P-codoped silicon crystalline nanoparticles. Twenty DFT exchange-correlation functionals, sampled from the best currently available inventory, such as hybrids and range-separated hybrids, are benchmarked against ultra-accurate quantum Monte Carlo results on small model Si nanocrystals. Overall, the range-separated hybrids are found to perform best. The quality of the DFT gaps correlates with the deviation from Koopmans' theorem, which may serve as a quality guide. In addition to providing a generic test of the ability of TDDFT to describe optical properties of silicon crystalline nanoparticles, the results also open up a route to benchmark-quality DFT studies of nanoparticle sizes approaching those studied experimentally.
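
    The quality-guide idea mentioned above amounts to correlating, across functionals, the optical-gap error against a reference with the deviation from Koopmans' theorem. The sketch below shows that correlation step on simulated numbers; none of the values are the paper's data, and the linear relation is assumed purely for illustration.

```python
import numpy as np

# Simulated per-functional quantities (all numbers invented for illustration):
# deviation from Koopmans' theorem and optical-gap error vs. a QMC reference.
rng = np.random.default_rng(3)
n_functionals = 20
koopmans_dev = np.abs(rng.normal(0.0, 0.5, n_functionals))             # eV
gap_error = 0.8 * koopmans_dev + rng.normal(0.0, 0.1, n_functionals)   # eV

r = np.corrcoef(koopmans_dev, np.abs(gap_error))[0, 1]
print(f"correlation between |gap error| and Koopmans deviation: r = {r:.2f}")
```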

  16. On the numerical computation of nonlinear force-free magnetic fields. [from solar photosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Chang, H. M.; Hagyard, M. J.; Gary, G. A.

    1990-01-01

    An algorithm has been developed to extrapolate nonlinear force-free magnetic fields from the photosphere, given the proper boundary conditions. This paper presents the results of this work, describing the mathematical formalism that was developed and the numerical techniques employed, and commenting on the stability criteria and accuracy established for these numerical schemes. An analytical solution is used for a benchmark test; the results show that the computational accuracy for the case of a nonlinear force-free magnetic field was on the order of a few percent (less than 5 percent). This newly developed scheme was applied to analyze a solar vector magnetogram, and the results were compared with the results deduced from the classical potential field method. The comparison shows that additional physical features of the vector magnetogram were revealed in the nonlinear force-free case.

  17. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  18. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper presents the performance improvements achieved in benchmarking projects in the German wastewater industry over the last 15 years. A large number of changes in operational practice, and the annual savings they achieved, can be shown, induced in particular by benchmarking at the process level. Investigation of these results produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both the utility and the process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking, it should be made quite clear that this outcome depends, on the one hand, on a well conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  19. Benchmarking Equity in Transfer Policies for Career and Technical Associate's Degrees

    ERIC Educational Resources Information Center

    Chase, Megan M.

    2011-01-01

    Using critical policy analysis, this study considers state policies that impede technical credit transfer from public 2-year colleges to 4-year institutions of higher education. The states of Ohio, Texas, Washington, and Wisconsin are considered, and seven policy benchmarks for facilitating the transfer of technical credits are proposed. (Contains…

  20. Global Benchmarking of Marketing Doctoral Program Faculty and Institutions by Subarea

    ERIC Educational Resources Information Center

    Elbeck, Matt; Vander Schee, Brian A.

    2014-01-01

    This study benchmarks marketing doctoral programs worldwide in five popular subareas by faculty and institutional scholarly impact. A multi-item approach identifies a collection of top-tier scholarly journals for each subarea, while citation data over the decade 2003 to 2012 identify high scholarly impact marketing faculty by subarea used to…
