Sample records for proper benchmark definitions

  1. Reliable B Cell Epitope Predictions: Impacts of Method Development and Improved Benchmarking

    PubMed Central

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2012-01-01

    The interaction between antibodies and antigens is one of the most important immune system mechanisms for clearing infectious organisms from the host. Antibodies bind to antigens at sites referred to as B-cell epitopes. Identification of the exact location of B-cell epitopes is essential in several biomedical applications, such as rational vaccine design and the development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource-intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may, however, have led to the performance values being underestimated: rarely have all potential epitopes been mapped on an antigen, and antibodies are generally raised against the antigen in a given biological context, not against the antigen monomer. Improper handling of these aspects leads to many artificial false-positive predictions and hence to incorrectly low performance values. To demonstrate the impact of proper benchmark definitions, we here present an updated version of the DiscoTope method incorporating a novel spatial neighborhood definition and half-sphere exposure as surface measure. Compared to other state-of-the-art prediction methods, DiscoTope-2.0 displayed improved performance both in cross-validation and in independent evaluations. Using DiscoTope-2.0, we assessed the impact on performance when using proper benchmark definitions. For 13 proteins in the training data set where sufficient biological information was available to make a proper benchmark redefinition, the average AUC performance was improved from 0.791 to 0.824. Similarly, the average AUC performance on an independent evaluation data set improved from 0.712 to 0.727. 
Our results thus demonstrate that given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances suggesting these tools to be a powerful asset in rational epitope discovery. The updated version of DiscoTope is available at www.cbs.dtu.dk/services/DiscoTope-2.0. PMID:23300419

  2. Phase definition to assess synchronization quality of nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Freitas, Leandro; Torres, Leonardo A. B.; Aguirre, Luis A.

    2018-05-01

    This paper proposes a phase definition, named the vector field phase, which can be defined for systems with arbitrary finite dimension and is a monotonically increasing function of time. The proposed definition can properly quantify the dynamics in the flow direction, often associated with the null Lyapunov exponent. Numerical examples that use benchmark periodic and chaotic oscillators are discussed to illustrate some of the main features of the definition, which are that (i) phase information can be obtained either from the vector field or from a time series, (ii) it permits not only detection of phase synchronization but also quantification of it, and (iii) it can be used in the phase synchronization of very different oscillators.

  3. Phase definition to assess synchronization quality of nonlinear oscillators.

    PubMed

    Freitas, Leandro; Torres, Leonardo A B; Aguirre, Luis A

    2018-05-01

    This paper proposes a phase definition, named the vector field phase, which can be defined for systems with arbitrary finite dimension and is a monotonically increasing function of time. The proposed definition can properly quantify the dynamics in the flow direction, often associated with the null Lyapunov exponent. Numerical examples that use benchmark periodic and chaotic oscillators are discussed to illustrate some of the main features of the definition, which are that (i) phase information can be obtained either from the vector field or from a time series, (ii) it permits not only detection of phase synchronization but also quantification of it, and (iii) it can be used in the phase synchronization of very different oscillators.

  4. Evaluation of the influence of the definition of an isolated hip fracture as an exclusion criterion for trauma system benchmarking: a multicenter cohort study.

    PubMed

    Tiao, J; Moore, L; Porgo, T V; Belcaid, A

    2016-06-01

    To assess whether the definition of an isolated hip fracture (IHF) used as an exclusion criterion influences the results of trauma center benchmarking. We conducted a multicenter retrospective cohort study with data from an integrated Canadian trauma system. The study population included all patients admitted between 1999 and 2010 to any of the 57 adult trauma centers. Seven definitions of IHF based on diagnostic codes, age, mechanism of injury, and secondary injuries, identified in a systematic review, were used. Trauma centers were benchmarked using risk-adjusted mortality estimates generated with the Trauma Risk Adjustment Model. The agreement between benchmarking results generated under different IHF definitions was evaluated with correlation coefficients on adjusted mortality estimates. Correlation coefficients >0.95 were considered to convey acceptable agreement. The study population consisted of 172,872 patients before exclusion of IHF and between 128,094 and 139,588 patients after exclusion. Correlation coefficients between risk-adjusted mortality estimates generated in populations including and excluding IHF varied between 0.86 and 0.90. Correlation coefficients of estimates generated under different definitions of IHF varied between 0.97 and 0.99, even when analyses were restricted to patients aged ≥65 years. Although the exclusion of patients with IHF influences the results of trauma center benchmarking based on mortality, the definition of IHF in terms of diagnostic codes, age, mechanism of injury, and secondary injury has no significant impact on benchmarking results. These results suggest that there is no need to obtain formal consensus on the definition of IHF for benchmarking activities.

  5. A new UKIDSS proper motion survey and key early results, including new benchmark systems

    NASA Astrophysics Data System (ADS)

    Smith, L.; Lucas, P.; Burningham, B.; Jones, H.; Pinfield, D.; Smart, R.; Andrei, A.

    We present a proper motion catalogue for the 1500 deg² of two-epoch J-band UKIDSS Large Area Survey (LAS) data, which includes 120,000 stellar sources with motions detected above the 5σ level. Our upper limit on proper motion detection is 3.3 arcsec yr-1, and typical uncertainties are of order 10 mas yr-1 for bright sources from data with a modest 1.8-7.0 year epoch baseline. We developed a bespoke proper motion pipeline which applies a source-unique second-order polynomial transformation to UKIDSS array coordinates to counter potential local non-uniformity in the focal plane. Our catalogue agrees well with the proper motion data supplied in the current WFCAM Science Archive (WSA) tenth data release (DR10) catalogue where there is overlap, and with various optical catalogues, but it benefits from some improvements, such as a larger matching radius and a relative-to-absolute proper motion correction. We present proper motion results for 128 T dwarfs in the UKIDSS LAS and key early results of projects utilising our catalogue, in particular searches for brown dwarf benchmark systems through cross-matches with existing proper motion catalogues. We report the discovery of two new T dwarf benchmark systems.

  6. Benchmark experiments at ASTRA facility on definition of space distribution of ²³⁵U fission reaction rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.

    2012-07-01

    Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Results of the experiments on the spatial distribution of the ²³⁵U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that all criticality experiments at these five configurations are acceptable for use as critical benchmark experiments. All experiments on the spatial distribution of the ²³⁵U fission reaction rate are acceptable for use as physical benchmark experiments. (authors)

  7. Estimation of the limit of detection using information theory measures.

    PubMed

    Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago

    2014-01-31

    Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed in recent years. Although such definitions are straightforward and valid for any kind of analytical system, the proposed methodologies to estimate the LOD are usually simplified to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of LOD utilizing information theory tools that deals with noise of any kind and allows the introduction of prior knowledge easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that benchmarking analytical systems based on their ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise. Copyright © 2013 Elsevier B.V. All rights reserved.
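The binary-channel analogy can be made concrete with a short sketch: treat the true state (blank vs. analyte) and the detector's decision as a two-symbol channel with given false positive/negative rates, and compute their mutual information. This is a generic textbook construction, not the paper's actual implementation; all parameter values below are illustrative.

```python
from math import log2

def mutual_information(p_analyte, p_fp, p_fn):
    """Mutual information (bits) between the true state (blank/analyte)
    and the detector's binary decision, modelling the analytical system
    as a binary asymmetric channel with false positive rate p_fp and
    false negative rate p_fn."""
    # Joint distribution over (true state, decision).
    joint = {
        ("blank", "neg"): (1 - p_analyte) * (1 - p_fp),
        ("blank", "pos"): (1 - p_analyte) * p_fp,
        ("analyte", "neg"): p_analyte * p_fn,
        ("analyte", "pos"): p_analyte * (1 - p_fn),
    }
    px = {"blank": 1 - p_analyte, "analyte": p_analyte}
    py = {
        "neg": joint[("blank", "neg")] + joint[("analyte", "neg")],
        "pos": joint[("blank", "pos")] + joint[("analyte", "pos")],
    }
    return sum(pxy * log2(pxy / (px[x] * py[y]))
               for (x, y), pxy in joint.items() if pxy > 0)

# Two systems with identical error rates (hence the same classical LOD)
# convey different amounts of information when the prior probability of
# an analyte sample differs -- the misconception the abstract targets.
mi_balanced = mutual_information(0.5, 0.05, 0.05)
mi_rare = mutual_information(0.01, 0.05, 0.05)
```

Here `mi_balanced` exceeds `mi_rare`, since the extractable information is bounded by the entropy of the prior.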

  8. Implementing Data Definition Consistency for Emergency Department Operations Benchmarking and Research.

    PubMed

    Yiadom, Maame Yaa A B; Scheulen, James; McWade, Conor M; Augustine, James J

    2016-07-01

    The objective was to obtain a commitment to adopt a common set of definitions for emergency department (ED) demographic, clinical process, and performance metrics among the ED Benchmarking Alliance (EDBA), ED Operations Study Group (EDOSG), and Academy of Academic Administrators of Emergency Medicine (AAAEM) by 2017. A retrospective cross-sectional analysis of available data from three ED operations benchmarking organizations supported a negotiation to use a set of common metrics with identical definitions. During a 1.5-day meeting, structured according to social change theories of information exchange, self-interest, and interdependence, common definitions were identified and negotiated using the EDBA's published definitions as a starting point for discussion. Methods of process analysis theory were used in the 8 weeks following the meeting to achieve official consensus on definitions. The resulting two lists were submitted to the organizations' leadership for implementation approval. A total of 374 unique measures were identified, of which 57 (15%) were shared by at least two organizations. Fourteen (4%) were common to all three organizations. In addition to agreement on definitions for the 14 measures used by all three organizations, agreement was reached on universal definitions for 17 of the 57 measures shared by at least two organizations. The negotiation outcome was a list of 31 measures with universal definitions to be adopted by each organization by 2017. The use of negotiation, social change, and process analysis theories achieved the adoption of universal definitions among the EDBA, EDOSG, and AAAEM. This will impact performance benchmarking for nearly half of US EDs. It initiates a formal commitment to utilize standardized metrics, and it transitions consistency in reporting ED operations metrics from consensus to implementation. This work advances our ability to more accurately characterize variation in ED care delivery models, resource utilization, and performance. 
In addition, it permits future aggregation of these three data sets, thus facilitating the creation of more robust ED operations research data sets unified by a universal language. Negotiation, social change, and process analysis principles can be used to advance the adoption of additional definitions. © 2016 by the Society for Academic Emergency Medicine.

  9. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    ERIC Educational Resources Information Center

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  10. Work Readiness Standards and Benchmarks: The Key to Differentiating America's Workforce and Regaining Global Competitiveness

    ERIC Educational Resources Information Center

    Clark, Hope

    2013-01-01

    In this report, ACT presents a definition of "work readiness" along with empirically driven ACT Work Readiness Standards and Benchmarks. The introduction of standards and benchmarks for workplace success provides a more complete picture of the factors that are important in establishing readiness for success throughout a lifetime. While…

  11. Modification and benchmarking of MCNP for low-energy tungsten spectra.

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-12-01

    The MCNP Monte Carlo radiation transport code was modified for diagnostic medical physics applications. In particular, the modified code was thoroughly benchmarked for the production of polychromatic tungsten x-ray spectra in the 30-150 kV range. Validating the modified code for coupled electron-photon transport with benchmark spectra was supplemented with independent electron-only and photon-only transport benchmarks. Major revisions to the code included the proper treatment of characteristic K x-ray production and scoring, new impact ionization cross sections, and new bremsstrahlung cross sections. Minor revisions included updated photon cross sections, electron-electron bremsstrahlung production, and K x-ray yield. The modified MCNP code is benchmarked to electron backscatter factors, x-ray spectra production, and primary and scatter photon transport.

  12. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  13. Benchmark simulation model no 2: general protocol and exploratory case studies.

    PubMed

    Jeppsson, U; Pons, M-N; Nopens, I; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2007-01-01

    Over a decade ago, the concept of objectively evaluating the performance of control strategies by simulating them using a standard model implementation was introduced for activated sludge wastewater treatment plants. The resulting Benchmark Simulation Model No 1 (BSM1) has been the basis for a significant new development that is reported on here: Rather than only evaluating control strategies at the level of the activated sludge unit (bioreactors and secondary clarifier) the new BSM2 now allows the evaluation of control strategies at the level of the whole plant, including primary clarifier and sludge treatment with anaerobic sludge digestion. In this contribution, the decisions that have been made over the past three years regarding the models used within the BSM2 are presented and argued, with particular emphasis on the ADM1 description of the digester, the interfaces between activated sludge and digester models, the included temperature dependencies and the reject water storage. BSM2-implementations are now available in a wide range of simulation platforms and a ring test has verified their proper implementation, consistent with the BSM2 definition. This guarantees that users can focus on the control strategy evaluation rather than on modelling issues. Finally, for illustration, twelve simple operational strategies have been implemented in BSM2 and their performance evaluated. Results show that it is an interesting control engineering challenge to further improve the performance of the BSM2 plant (which is the whole idea behind benchmarking) and that integrated control (i.e. acting at different places in the whole plant) is certainly worthwhile to achieve overall improvement.

  14. Visualization of the air flow behind the automotive benchmark vent

    NASA Astrophysics Data System (ADS)

    Pech, Ondrej; Jedelsky, Jan; Caletka, Petr; Jicha, Miroslav

    2015-05-01

    Passenger comfort in cars depends on the appropriate function of the cabin HVAC system. Great attention is therefore paid to the effective function of automotive vents and the proper formation of the flow behind the ventilation outlet. The article deals with the visualization of air flow from an automotive benchmark vent. The visualization was performed for two different shapes of the inlet channel connected to the benchmark vent, using smoke visualization with a laser knife. The influence of the shape of the inlet channel on the airflow direction, the jet enlargement, and the position of the airflow axis was investigated.

  15. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.

  16. 76 T dwarfs from the UKIDSS LAS: benchmarks, kinematics and an updated space density

    NASA Astrophysics Data System (ADS)

    Burningham, Ben; Cardoso, C. V.; Smith, L.; Leggett, S. K.; Smart, R. L.; Mann, A. W.; Dhital, S.; Lucas, P. W.; Tinney, C. G.; Pinfield, D. J.; Zhang, Z.; Morley, C.; Saumon, D.; Aller, K.; Littlefair, S. P.; Homeier, D.; Lodieu, N.; Deacon, N.; Marley, M. S.; van Spaandonk, L.; Baker, D.; Allard, F.; Andrei, A. H.; Canty, J.; Clarke, J.; Day-Jones, A. C.; Dupuy, T.; Fortney, J. J.; Gomes, J.; Ishii, M.; Jones, H. R. A.; Liu, M.; Magazzú, A.; Marocco, F.; Murray, D. N.; Rojas-Ayala, B.; Tamura, M.

    2013-07-01

    We report the discovery of 76 new T dwarfs from the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS). Near-infrared broad- and narrow-band photometry and spectroscopy are presented for the new objects, along with Wide-field Infrared Survey Explorer (WISE) and warm-Spitzer photometry. Proper motions for 128 UKIDSS T dwarfs are presented from a new two epoch LAS proper motion catalogue. We use these motions to identify two new benchmark systems: LHS 6176AB, a T8p+M4 pair and HD 118865AB, a T5.5+F8 pair. Using age constraints from the primaries and evolutionary models to constrain the radii, we have estimated their physical properties from their bolometric luminosity. We compare the colours and properties of known benchmark T dwarfs to the latest model atmospheres and draw two principal conclusions. First, it appears that the H - [4.5] and J - W2 colours are more sensitive to metallicity than has previously been recognized, such that differences in metallicity may dominate over differences in Teff when considering relative properties of cool objects using these colours. Secondly, the previously noted apparent dominance of young objects in the late-T dwarf sample is no longer apparent when using the new model grids and the expanded sample of late-T dwarfs and benchmarks. This is supported by the apparently similar distribution of late-T dwarfs and earlier type T dwarfs on reduced proper motion diagrams that we present. Finally, we present updated space densities for the late-T dwarfs, and compare our values to simulation predictions and those from WISE.

  17. MoMaS reactive transport benchmark using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September of 2009; it is not taken from a real chemical system, but consists of realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests with easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results for the easy benchmark test case, which includes mixing of aqueous components and surface complexation. The surface complexations consist of monodentate and bidentate reactions, which introduce difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume; the selectivity coefficient becomes porosity dependent for the bidentate reaction in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address this issue, and unit conversions were made to suit PFLOTRAN.

  18. Benchmarking biology research organizations using a new, dedicated tool.

    PubMed

    van Harten, Willem H; van Bokhorst, Leonard; van Luenen, Henri G A M

    2010-02-01

    International competition forces fundamental research organizations to assess their relative performance. We present a benchmark tool for scientific research organizations where, contrary to existing models, the group leader is placed in a central position within the organization. We used it in a pilot benchmark study involving six research institutions. Our study shows that data collection and data comparison based on this new tool can be achieved. It proved possible to compare relative performance and organizational characteristics and to generate suggestions for improvement for most participants. However, strict definitions of the parameters used for the benchmark and a thorough insight into the organization of each of the benchmark partners are required to produce comparable data and draw firm conclusions.

  19. Defining core elements and outstanding practice in Nutritional Science through collaborative benchmarking.

    PubMed

    Samman, Samir; McCarthur, Jennifer O; Peat, Mary

    2006-01-01

    Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science, and inquired about definitions of "good" and "outstanding" practice, were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirements, food production and processing, modern biomedical techniques that could be applied to understanding nutrition, and social and environmental issues as related to Nutritional Science. Four of the eight institutions that agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.

  20. Benchmarking routine psychological services: a discussion of challenges and methods.

    PubMed

    Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick

    2014-01-01

    Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Limitations include significant heterogeneity among data sources and wide variations in ES and data completeness.
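The benchmarking step above can be sketched as an uncontrolled pre-post effect size compared against the published bands. The pooled-SD convention and all scores below are illustrative assumptions, not the paper's exact method or data:

```python
from statistics import mean, stdev
from math import sqrt

def prepost_effect_size(pre, post):
    """Uncontrolled pre-post effect size: mean symptom reduction divided
    by the pooled standard deviation of pre and post scores. (One common
    convention; the report may define ES slightly differently.)"""
    sd_pooled = sqrt((stdev(pre) ** 2 + stdev(post) ** 2) / 2)
    return (mean(pre) - mean(post)) / sd_pooled

# Hypothetical PHQ-9 depression scores for one service's caseload.
pre = [18, 15, 20, 12, 16, 19, 14, 17]
post = [10, 9, 14, 8, 11, 12, 9, 13]

es = prepost_effect_size(pre, post)

# Published IAPT depression benchmarks: high 0.91, average 0.73, poor 0.46.
band = ("high" if es >= 0.91 else
        "average" if es >= 0.73 else
        "poor" if es >= 0.46 else
        "below poor benchmark")
```

A service would place its routine ES into one of the bands; confidence intervals around the ES (not shown) guard against over-interpreting small caseloads.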

  21. WWTP dynamic disturbance modelling: an essential module for long-term benchmarking development.

    PubMed

    Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Intensive use of the benchmark simulation model No. 1 (BSM1), a protocol for objective comparison of the effectiveness of control strategies in biological nitrogen removal activated sludge plants, has also revealed a number of limitations. Preliminary definitions of the long-term benchmark simulation model No. 1 (BSM1_LT) and the benchmark simulation model No. 2 (BSM2) have been made to extend BSM1 for evaluation of process monitoring methods and plant-wide control strategies, respectively. Influent-related disturbances for BSM1_LT/BSM2 are to be generated with a model, and this paper provides a general overview of the modelling methods used. Typical influent dynamic phenomena generated with the BSM1_LT/BSM2 influent disturbance model, including diurnal, weekend, seasonal and holiday effects, as well as rainfall, are illustrated with simulation results. As a result of the work described in this paper, a proposed influent model/file has been released to the benchmark developers for evaluation purposes. Pending this evaluation, a final BSM1_LT/BSM2 influent disturbance model definition is foreseen. Preliminary simulations with dynamic influent data generated by the influent disturbance model indicate that default BSM1 activated sludge plant control strategies will need extensions for BSM1_LT/BSM2 to efficiently handle 1 year of influent dynamics.

  22. Benchmarking Big Data Systems and the BigData Top100 List.

    PubMed

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  23. NASA in-house Commercially Developed Space Facility (CDSF) study report. Volume 1: Concept configuration definition

    NASA Technical Reports Server (NTRS)

    Deryder, L. J.; Chiger, H. D.; Deryder, D. D.; Detweiler, K. N.; Dupree, R. L.; Gillespie, V. P.; Hall, J. B.; Heck, M. L.; Herrick, D. C.; Katzberg, S. J.

    1989-01-01

    The results of a NASA in-house team effort to develop a concept definition for a Commercially Developed Space Facility (CDSF) are presented. Science mission utilization scenarios are documented, the conceptual configuration is defined, system performance parameters are quantified, benchmark operational scenarios are developed, and space shuttle interface descriptions are provided. Development schedule activity was also assessed with respect to the establishment of a proposed launch date.

  4. FireHose Streaming Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively, via the rate at which they can process data, and qualitatively, by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator, which produces and outputs datums at a high rate in a specific format. The second is an analytic, which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
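
The generator/analytic split described above can be sketched minimally; the datum format, anomaly rate, and rate-free single-process loop here are hypothetical stand-ins for the real FireHose formats and throughput requirements:

```python
import random

def generator(n, anomaly_rate=0.01, seed=42):
    """Emit n datums as (key, value) pairs at whatever rate the consumer pulls;
    a small fraction are injected anomalies (hypothetical format)."""
    rng = random.Random(seed)
    for i in range(n):
        is_anomaly = rng.random() < anomaly_rate
        yield (i, "ANOMALY" if is_anomaly else "normal")

def analytic(stream):
    """Scan the stream and collect the keys of anomalous datums --
    the well-defined calculation a benchmark implementation must perform."""
    return [key for key, value in stream if value == "ANOMALY"]

flagged = analytic(generator(10_000))
```

A real implementation would decouple the two parts across a socket or message queue and measure sustained datums-per-second rather than run them in one process.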

  5. A call for benchmarking transposable element annotation methods.

    PubMed

    Hoen, Douglas R; Hickey, Glenn; Bourque, Guillaume; Casacuberta, Josep; Cordaux, Richard; Feschotte, Cédric; Fiston-Lavier, Anna-Sophie; Hua-Van, Aurélie; Hubley, Robert; Kapusta, Aurélie; Lerat, Emmanuelle; Maumus, Florian; Pollock, David D; Quesneville, Hadi; Smit, Arian; Wheeler, Travis J; Bureau, Thomas E; Blanchette, Mathieu

    2015-01-01

    DNA derived from transposable elements (TEs) constitutes large parts of the genomes of complex eukaryotes, with major impacts not only on genomic research but also on how organisms evolve and function. Although a variety of methods and tools have been developed to detect and annotate TEs, there are as yet no standard benchmarks; that is, there is no standard way to measure or compare their accuracy. This lack of accuracy assessment calls into question conclusions from a wide range of research that depends explicitly or implicitly on TE annotation. In the absence of standard benchmarks, toolmakers are impeded in improving their tools, annotators cannot properly assess which tools might best suit their needs, and downstream researchers cannot judge how accuracy limitations might impact their studies. We therefore propose that the TE research community create and adopt standard TE annotation benchmarks, and we call for other researchers to join the authors in making this long-overdue effort a success.

  6. Hospital-affiliated practices reduce 'red ink'.

    PubMed

    Bohlmann, R C

    1998-01-01

    Many complain that hospital-group practice affiliations are a failed model and should be abandoned. The author argues for a less rash approach, saying the goal should be to understand the problems precisely, then fix them. Benchmarking is a good place to start. The article outlines the basic definition and ground rules of benchmarking and explains what resources help accomplish the task.

  7. Revenue cycle management.

    PubMed

    Manley, Ray; Satiani, Bhagwan

    2009-11-01

    With the widening gap between overhead expenses and reimbursement, management of the revenue cycle is a critical part of a successful vascular surgery practice. It is important to review the data on all the components of the revenue cycle: payer contracting, appointment scheduling, preregistration, registration process, coding and capturing charges, proper billing of patients and insurers, follow-up of accounts receivable, and finally using appropriate benchmarking. The industry benchmarks used should be those of peers in identical groups. Warning signs of poor performance are discussed, enabling the practice to formulate a performance improvement plan.

  8. A large-scale benchmark of gene prioritization methods.

    PubMed

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
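
The retrospective, GO-based strategy can be illustrated with a toy leave-one-out benchmark; the network, gene set, and drastically simplified MaxLink-style neighborhood scorer below are hypothetical stand-ins for FunCoup and the real algorithms:

```python
def neighbor_score(gene, seed_genes, network):
    """Score a candidate by how many of its direct neighbours are seed genes
    (a drastic simplification of MaxLink-style neighbourhood scoring)."""
    return len(network.get(gene, set()) & seed_genes)

def loo_ranks(gene_set, all_genes, network):
    """Leave-one-out benchmark: hide each member of a GO-derived gene set,
    then rank it against all non-member genes; low ranks mean good recovery."""
    ranks = []
    for hidden in gene_set:
        seeds = gene_set - {hidden}
        candidates = (all_genes - gene_set) | {hidden}
        scored = sorted(candidates,
                        key=lambda g: neighbor_score(g, seeds, network),
                        reverse=True)
        ranks.append(scored.index(hidden) + 1)
    return ranks

# Toy symmetric network: A, B, C form a cluster; D hangs off A
network = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}
ranks = loo_ranks({"A", "B", "C"}, {"A", "B", "C", "D"}, network)
```

Each clustered gene is recovered at rank 1 here; a real benchmark would summarize such ranks with ROC/AUC-style performance measures across many GO terms.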

  9. Maricopa County Employer Wage Survey, 1988. Arizona Labor Market Information.

    ERIC Educational Resources Information Center

    Arizona State Dept. of Economic Security, Phoenix.

    This document contains fall 1988 data on salary and benefits as provided by Maricopa County, Arizona, employers. A chart with wage data presents weighted hourly wage paid, hourly range, and weighted hourly range for each occupational title. Definitions of terms follow. Then, benchmark summary position descriptions (definitions) of the occupations…

  10. Pima County Employer Wage Survey, 1988. Arizona Labor Market Information.

    ERIC Educational Resources Information Center

    Arizona State Dept. of Economic Security, Phoenix.

    This document contains Fall 1988 data on salary and benefits as provided by Pima County, Arizona, employers. A chart with wage data presents weighted hourly wage paid, hourly range, and weighted hourly range for each occupational title. Definitions of terms follow. Then, benchmark summary position descriptions (definitions) of the occupations are…

  11. The National Shipbuilding Research Program. REAPS 5th Annual Technical Symposium Proceedings. Paper No. 2: An Approach for the Use of Interactive Graphics in Part Definition and Nesting

    DTIC Science & Technology

    1978-06-01

    documentation will vary from yard to yard. Accuracy - What is needed by the shipbuilding industry? We keep hearing horror stories about ships being...station would generate at any time. Benchmarks - Benchmarks are necessary to evaluate graphics systems, however they don’t yield as quantitative a

  12. 7 CFR 993.21a - Proper storage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 8 2013-01-01 2013-01-01 false Proper storage. 993.21a Section 993.21a Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... Order Regulating Handling Definitions § 993.21a Proper storage. Proper storage means storage of such...

  13. 7 CFR 993.21a - Proper storage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Proper storage. 993.21a Section 993.21a Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Order Regulating Handling Definitions § 993.21a Proper storage. Proper storage means storage of such...

  14. 7 CFR 993.21a - Proper storage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 8 2012-01-01 2012-01-01 false Proper storage. 993.21a Section 993.21a Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Order Regulating Handling Definitions § 993.21a Proper storage. Proper storage means storage of such...

  15. 7 CFR 993.21a - Proper storage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Proper storage. 993.21a Section 993.21a Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... Order Regulating Handling Definitions § 993.21a Proper storage. Proper storage means storage of such...

  16. 7 CFR 993.21a - Proper storage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Proper storage. 993.21a Section 993.21a Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Order Regulating Handling Definitions § 993.21a Proper storage. Proper storage means storage of such...

  17. Analyzing the BBOB results by means of benchmarking concepts.

    PubMed

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second: which algorithm should I use for my real-world problem? The two are connected, and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background, and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
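
One standard way to aggregate per-problem rankings into a consensus is a Borda count (not necessarily the exact consensus procedure analyzed in the paper); the optimizer names and rankings below are hypothetical:

```python
def borda_consensus(rankings):
    """Aggregate per-problem rankings into a consensus ranking via Borda
    counts: an algorithm at position p on a problem earns (n - p) points;
    the highest total wins. Ties in totals break arbitrarily here."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, alg in enumerate(ranking):
            scores[alg] = scores.get(alg, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings of three optimizers on three benchmark problems
rankings = [["cma-es", "de", "pso"],
            ["cma-es", "pso", "de"],
            ["de", "cma-es", "pso"]]
consensus = borda_consensus(rankings)
```

Borda is only one consensus rule; as the paper's discussion of pitfalls suggests, different aggregation rules can yield different winners on the same raw data.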

  18. Child-Resistant Packaging for E-Liquid: A Review of US State Legislation.

    PubMed

    Frey, Leslie T; Tilburg, William C

    2016-02-01

    A growing number of states have introduced or enacted legislation requiring child-resistant packaging for e-liquid containers; however, these laws involve varying terms, packaging standards, and enforcement provisions, raising concerns about their effectiveness. We evaluated bills against 4 benchmarks: broad product definitions that contemplate future developments in the market, citations to a specific packaging standard, stated penalties for violations, and express grants of authority to a state entity to enforce the packaging requirements. Our findings showed that 3 states meet all 4 benchmarks in their enacted legislation. We encourage states to consider these benchmarks when revising statutes or drafting future legislation.

  19. Child-Resistant Packaging for E-Liquid: A Review of US State Legislation

    PubMed Central

    Tilburg, William C.

    2016-01-01

    A growing number of states have introduced or enacted legislation requiring child-resistant packaging for e-liquid containers; however, these laws involve varying terms, packaging standards, and enforcement provisions, raising concerns about their effectiveness. We evaluated bills against 4 benchmarks: broad product definitions that contemplate future developments in the market, citations to a specific packaging standard, stated penalties for violations, and express grants of authority to a state entity to enforce the packaging requirements. Our findings showed that 3 states meet all 4 benchmarks in their enacted legislation. We encourage states to consider these benchmarks when revising statutes or drafting future legislation. PMID:26691114

  20. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-01

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
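
The half-step idea can be sketched on a 1-D harmonic oscillator; averaging the kinetic energies at t - Δt/2 and t + Δt/2 is one plausible reading of the abstract, and the paper's exact functional form may differ:

```python
def velocity_verlet_ke(x, v, force, m, dt, steps):
    """1-D velocity Verlet integrator that also records a half-step-based
    kinetic energy around each full step (illustrative definition only)."""
    f = force(x)
    v_half_prev = None
    ke_half = []
    for _ in range(steps):
        v_half = v + 0.5 * dt * f / m          # v(t + dt/2)
        if v_half_prev is not None:
            # average of the kinetic energies at t - dt/2 and t + dt/2
            ke_half.append(0.25 * m * (v_half_prev ** 2 + v_half ** 2))
        x = x + dt * v_half                    # x(t + dt)
        f = force(x)                           # f(t + dt)
        v = v_half + 0.5 * dt * f / m          # v(t + dt), full-step velocity
        v_half_prev = v_half
    return x, v, ke_half

# Harmonic oscillator (force = -k x); with x0 = 1, v0 = 0 the energy is 0.5
k_spring, m = 1.0, 1.0
x, v, ke = velocity_verlet_ke(1.0, 0.0, lambda q: -k_spring * q, m,
                              dt=0.05, steps=1000)
total_energy = 0.5 * k_spring * x ** 2 + 0.5 * m * v ** 2
```

Velocity Verlet naturally produces the two half-step velocities per step, so this kinetic energy costs nothing extra; the total energy stays close to its initial value over the run.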

  1. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation.

    PubMed

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-28

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.

  2. Constant-concentration boundary condition: Lessons from the HYDROCOIN variable-density groundwater benchmark problem

    USGS Publications Warehouse

    Konikow, Leonard F.; Sanford, W.E.; Campbell, P.J.

    1997-01-01

    In a solute-transport model, if a constant-concentration boundary condition is applied at a node in an active flow field, a solute flux can occur by both advective and dispersive processes. The potential for advective release is demonstrated by reexamining the Hydrologic Code Intercomparison (HYDROCOIN) project case 5 problem, which represents a salt dome overlain by a shallow groundwater system. The resulting flow field includes significant salinity and fluid density variations. Several independent teams simulated this problem using finite difference or finite element numerical models. We applied a method-of-characteristics model (MOCDENSE). The previous numerical implementations by HYDROCOIN teams of a constant-concentration boundary to represent salt release by lateral dispersion only (as stipulated in the original problem definition) were flawed because this boundary condition allows the release of salt into the flow field by both dispersion and advection. When the constant-concentration boundary is modified to allow salt release by dispersion only, significantly less salt is released into the flow field. The calculated brine distribution for case 5 depends very little on which numerical model is used, as long as the selected model is solving the proper equations. Instead, the accuracy of the solution depends strongly on the proper conceptualization of the problem, including the detailed design of the constant-concentration boundary condition. The importance and sensitivity to the manner of specification of this boundary does not appear to have been recognized previously in the analysis of this problem.
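
The distinction between advective and dispersive release at a constant-concentration node can be sketched in 1-D; the actual HYDROCOIN case 5 problem is two-dimensional and density-coupled, and all values below are illustrative:

```python
def boundary_solute_flux(q, c_boundary, c_neighbor, D, dx):
    """Solute flux leaving a constant-concentration node (1-D sketch):
    advective part q*c, carried by the flow through the node, plus
    dispersive part -D*(dc/dx), the down-gradient spreading. Only the
    dispersive part was intended by the original problem definition."""
    advective = q * c_boundary
    dispersive = -D * (c_neighbor - c_boundary) / dx
    return advective, dispersive

# Salt-source node held at c = 1.0 beside fresher water (c = 0.1), flow q > 0
adv, disp = boundary_solute_flux(q=0.5, c_boundary=1.0,
                                 c_neighbor=0.1, D=0.01, dx=1.0)
total = adv + disp
```

With any appreciable flow through the node, the advective term dominates, which is why the unmodified boundary condition releases far more salt than a dispersion-only release.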

  3. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  4. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  5. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  6. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  7. 40 CFR 51.1000 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... Benchmark RFP plan means the reasonable further progress plan that requires generally linear emission... Federally enforceable national, State, or local control measure that has been approved in the SIP and that...

  8. Emergency department performance measures updates: proceedings of the 2014 emergency department benchmarking alliance consensus summit.

    PubMed

    Wiler, Jennifer L; Welch, Shari; Pines, Jesse; Schuur, Jeremiah; Jouriles, Nick; Stone-Griffith, Suzanne

    2015-05-01

    The objective was to review and update key definitions and metrics for emergency department (ED) performance and operations. Forty-five emergency medicine leaders convened for the Third Performance Measures and Benchmarking Summit held in Las Vegas, February 21-22, 2014. Prior to arrival, attendees were assigned to workgroups to review, revise, and update the definitions and vocabulary being used to communicate about ED performance and operations. They were provided with the prior definitions of those consensus summits that were published in 2006 and 2010. Other published definitions from key stakeholders in emergency medicine and health care were also reviewed and circulated. At the summit, key terminology and metrics were discussed and debated. Workgroups communicated online, via teleconference, and finally in a face-to-face meeting to reach consensus regarding their recommendations. Recommendations were then posted and open to a 30-day comment period. Participants then reanalyzed the recommendations, and modifications were made based on consensus. A comprehensive dictionary of ED terminology related to ED performance and operation was developed. This article includes definitions of operating characteristics and internal and external factors relevant to the stratification and categorization of EDs. Time stamps, time intervals, and measures of utilization were defined. Definitions of processes and staffing measures are also presented. Definitions were harmonized with performance measures put forth by the Centers for Medicare and Medicaid Services (CMS) for consistency. Standardized definitions are necessary to improve the comparability of EDs nationally for operations research and practice. More importantly, clear precise definitions describing ED operations are needed for incentive-based pay-for-performance models like those developed by CMS. This document provides a common language for front-line practitioners, managers, health policymakers, and researchers. 
© 2015 by the Society for Academic Emergency Medicine.

  9. Critical Assessment of Metagenome Interpretation – a benchmark of computational metagenomics software

    PubMed Central

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.

    2018-01-01

    In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performance, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888

  10. An ASM/ADM model interface for dynamic plant-wide simulation.

    PubMed

    Nopens, Ingmar; Batstone, Damien J; Copp, John B; Jeppsson, Ulf; Volcke, Eveline; Alex, Jens; Vanrolleghem, Peter A

    2009-04-01

    Mathematical modelling has proven to be very useful in process design, operation and optimisation. A recent trend in WWTP modelling is to include the different subunits in so-called plant-wide models rather than focusing on parts of the entire process. One example of a typical plant-wide model is the coupling of an upstream activated sludge plant (including primary settler and secondary clarifier) to an anaerobic digester for sludge digestion. One of the key challenges when coupling these processes has been the definition of an interface between the well-accepted activated sludge model (ASM1) and anaerobic digestion model (ADM1). Current characterisation and interface models have key limitations, the most critical of which is the over-use of the X(c) (or lumped complex) variable as a main input to the ADM1. Over-use of X(c) does not allow for variation of degradability, carbon oxidation state or nitrogen content. In addition, achieving a target influent pH through the proper definition of the ionic system can be difficult. In this paper, we define an interface and characterisation model that maps degradable components directly to carbohydrates, proteins and lipids (and their soluble analogues), as well as organic acids, rather than using X(c). While this interface has been designed for use with the Benchmark Simulation Model No. 2 (BSM2), it is widely applicable to ADM1 input characterisation in general. We have demonstrated the model both hypothetically (BSM2), and practically on a full-scale anaerobic digester treating sewage sludge.
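
A minimal sketch of such a direct mapping, with hypothetical COD fractions; a real interface model is calibrated per plant and also handles soluble analogues, organic acids, nitrogen and charge balancing:

```python
def map_asm_to_adm(degradable_cod, frac_ch, frac_pr, frac_li):
    """Map degradable particulate COD (kg COD/m3) from an ASM stream directly
    onto ADM1 carbohydrate/protein/lipid inputs, bypassing the lumped
    composite X_c. The fractions here are hypothetical placeholders."""
    assert abs(frac_ch + frac_pr + frac_li - 1.0) < 1e-9, "fractions must sum to 1"
    return {
        "X_ch": degradable_cod * frac_ch,  # carbohydrates
        "X_pr": degradable_cod * frac_pr,  # proteins
        "X_li": degradable_cod * frac_li,  # lipids
    }

adm_in = map_asm_to_adm(30.0, frac_ch=0.25, frac_pr=0.5, frac_li=0.25)
```

Because the fractions are explicit, degradability and composition can vary between plants, which is exactly what the lumped X(c) input could not express.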

  11. Emergency department operations dictionary: results of the second performance measures and benchmarking summit.

    PubMed

    Welch, Shari J; Stone-Griffith, Suzanne; Asplin, Brent; Davidson, Steven J; Augustine, James; Schuur, Jeremiah D

    2011-05-01

    The public, payers, hospitals, and Centers for Medicare and Medicaid Services (CMS) are demanding that emergency departments (EDs) measure and improve performance, but this cannot be done unless we define the terms used in ED operations. On February 24, 2010, 32 stakeholders from 13 professional organizations met in Salt Lake City, Utah, to standardize ED operations metrics and definitions, which are presented in this consensus paper. Emergency medicine (EM) experts attending the Second Performance Measures and Benchmarking Summit reviewed, expanded, and updated key definitions for ED operations. Prior to the meeting, participants were provided with the definitions created at the first summit in 2006 and relevant documents from other organizations and asked to identify gaps and limitations in the original work. Those responses were used to devise a plan to revise and update the definitions. At the summit, attendees discussed and debated key terminology, and workgroups were created to draft a more comprehensive document. These results have been crafted into two reference documents, one for metrics and the operations dictionary presented here. The ED Operations Dictionary defines ED spaces, processes, patient populations, and new ED roles. Common definitions of key terms will improve the ability to compare ED operations research and practice and provide a common language for frontline practitioners, managers, and researchers. © 2011 by the Society for Academic Emergency Medicine.

  12. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2002-10-01

    This document details the progress to date on the OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING contract for the quarter starting July 2002 through September 2002. Even though we are awaiting the optimization portion of the testing program, accomplishments include the following: (1) Smith International agreed to participate in the DOE Mud Hammer program. (2) Smith International chromed collars for upcoming benchmark tests at TerraTek, now scheduled for 4Q 2002. (3) ConocoPhillips had a field trial of the Smith fluid hammer offshore Vietnam. The hammer functioned properly, though the well encountered hole conditions and reaming problems. ConocoPhillips plans another field trial as a result. (4) DOE/NETL extended the contract for the fluid hammer program to allow Novatek to "optimize" their much delayed tool to 2003 and to allow Smith International to add "benchmarking" tests in light of SDS Digger Tools' current financial inability to participate. (5) ConocoPhillips joined the Industry Advisors for the mud hammer program. (6) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests allowing extension of the contract to complete the optimizations.

  13. 14 CFR 153.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AIRPORT OPERATIONS Aviation Safety Inspector Access § 153.3 Definitions. The following definitions apply... aircraft service. Aviation Safety Inspector means a properly credentialed individual who bears FAA Form... investigations. FAA Form 110A means the credentials issued to qualified Aviation Safety Inspectors by the FAA for...

  14. Defining College Readiness: Where Are We Now, and Where Do We Need to Be? The Progress of Education Reform. Volume 13, Number 2

    ERIC Educational Resources Information Center

    Zinth, Jennifer Dounay

    2012-01-01

    Multiple catalysts are fueling states' increased urgency to establish a definition of "college readiness". Some states are creating a "college readiness" definition that describes what a student will know and be able to do in such core academic courses as English language arts and math, and that identifies items or benchmarks on state assessments…

  15. Spectral Relative Standard Deviation: A Practical Benchmark in Metabolomics

    EPA Science Inventory

    Metabolomics datasets, by definition, comprise measurements of large numbers of metabolites. Both technical (analytical) and biological factors will induce variation within these measurements that is not consistent across all metabolites. Consequently, criteria are required to...
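
Percent relative standard deviation across replicate measurements is the usual form of this criterion; a minimal sketch with hypothetical replicate intensities for one metabolite feature:

```python
from statistics import mean, pstdev

def relative_std_dev(values):
    """Percent relative standard deviation (RSD) of replicate measurements:
    100 * (population standard deviation / mean). Features whose RSD in QC
    replicates exceeds a chosen threshold are commonly filtered out."""
    m = mean(values)
    return 100.0 * pstdev(values) / m

# Hypothetical replicate intensities for one metabolite feature
rsd = relative_std_dev([100.0, 110.0, 90.0])
```

Because RSD is dimensionless, it allows the same quality threshold to be applied across metabolites with very different absolute intensities.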

  16. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
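
One way to operationalize a fixation-prediction benchmark is an AUC-style score: the probability that a fixated location outscores a non-fixated one under the model's saliency map. The toy 1-D saliency values and locations below are hypothetical:

```python
def fixation_auc(saliency, fixated, nonfixated):
    """AUC sketch for a fixation-prediction task: fraction of
    (fixated, non-fixated) location pairs where the fixated location has
    the higher saliency score; ties count as half."""
    pairs = [(saliency[f], saliency[n]) for f in fixated for n in nonfixated]
    wins = sum(1.0 if sf > sn else 0.5 if sf == sn else 0.0
               for sf, sn in pairs)
    return wins / len(pairs)

# Toy 1-D "image": per-location saliency, with fixations on the high values
saliency = [0.9, 0.1, 0.8, 0.2, 0.05]
auc = fixation_auc(saliency, fixated=[0, 2], nonfixated=[1, 3, 4])
```

A score of 0.5 means chance performance; the benchmark definition must also fix how non-fixated locations are sampled, which is one of the design considerations the paper discusses.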

  17. Conclusions and Future Directions

    ERIC Educational Resources Information Center

    Lillibridge, Fred

    2012-01-01

    Benchmarking, when done properly, offers a lot of promise for higher education units that want to improve how they do business. It is clear that much is known, but still more needs to be learned before it reaches its full potential as a useful tool. Readers of this issue of "New Directions for Institutional Research" have been treated to useful…

  18. The Swedish Research Council's definition of 'scientific misconduct': a critique.

    PubMed

    Salwén, Håkan

    2015-02-01

    There is no consensus over the proper definition of 'scientific misconduct.' There are differences in opinion not only between countries but also between research institutions in the same country. This is unfortunate. Without a widely accepted definition it is difficult for scientists to adjust to new research milieux. This might hamper scientific innovation and make cooperation difficult. Furthermore, due to the potentially damaging consequences it is important to combat misconduct. But how frequent is it and what measures are efficient? Without an appropriate definition there are no interesting answers to these questions. In order to achieve a high degree of consensus and to foster research integrity, the international dialogue over the proper definition of 'scientific misconduct' must be ongoing. Yet, the scientific community should not end up with the definition suggested by the Swedish Research Council. The definition the council advocates does not satisfy the ordinary language condition. That is, the definition is not consistent with how 'scientific misconduct' is used by scientists. I will show that this is due to the fact that it refers to false results. I generalise this and argue that no adequate definition of 'scientific misconduct' makes such a reference.

  19. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  20. 45 CFR 156.20 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... adjustments made pursuant to the benchmark standards described in § 156.110 of this subchapter. Benefit design... this subchapter. Enrollee satisfaction survey vendor means an organization that has relevant survey administration experience (for example, CAHPS® surveys), organizational survey capacity, and quality control...

  1. 21 CFR 660.50 - Anti-Human Globulin.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Globulin. (a) Proper name and definition. The proper name of this product shall be Anti-Human Globulin... in tissue cultures or in secondary hosts. [50 FR 5579, Feb. 11, 1985, as amended at 65 FR 77499, Dec...

  2. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  3. Using business intelligence to manage supply costs.

    PubMed

    Bunata, Ernest

    2013-08-01

    Business intelligence tools can help materials managers and managers in the operating room and procedural areas track purchasing costs more precisely and determine the root causes of cost increases. Data can be shared with physicians to increase their awareness of the cost of physician preference items. Proper use of business intelligence goes beyond price benchmarking to manage price performance over time.

  4. Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data

    PubMed Central

    2014-01-01

    Background The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition for a correctly mapped read taking into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data for which such a comparison has not yet been established. Conclusions A benchmark procedure to compare HTS data mappers is introduced with a new definition for the mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. 
The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
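The robustness criterion described above (matching start position, end position, and indel/substitution counts) can be expressed as a simple predicate. The following is an illustrative sketch only; the tolerance parameter and record fields are hypothetical and do not reflect CuReSimEval's actual interface.

```python
# Hypothetical sketch of the "correctly mapped read" criterion: a read
# counts as correct only if its reported start AND end positions fall
# within a tolerance of the true positions, and its indel/substitution
# counts match those introduced by the simulator.

def is_correctly_mapped(reported, truth, pos_tolerance=5):
    """reported/truth are dicts with start, end, indels, substitutions."""
    return (
        abs(reported["start"] - truth["start"]) <= pos_tolerance
        and abs(reported["end"] - truth["end"]) <= pos_tolerance
        and reported["indels"] == truth["indels"]
        and reported["substitutions"] == truth["substitutions"]
    )

truth = {"start": 100, "end": 199, "indels": 1, "substitutions": 2}
good = {"start": 102, "end": 199, "indels": 1, "substitutions": 2}
bad = {"start": 100, "end": 250, "indels": 1, "substitutions": 2}
print(is_correctly_mapped(good, truth))  # True
print(is_correctly_mapped(bad, truth))   # False
```

Requiring the end position as well as the start penalizes mappers that soft-clip their way to a plausible start coordinate while misaligning the rest of the read.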

  5. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
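The benchmark pattern described above, running a fixed workload and recording both execution time and numerical accuracy, can be sketched as a micro-harness. The workload here (a Leibniz series for pi) is a stand-in chosen for self-containment, not the actual Langley flight-simulation code.

```python
# Illustrative benchmark harness: time a fixed numeric workload and
# check its result against a known reference value, mirroring the
# execution-speed and computational-accuracy comparison in the study.
import time

def workload(n_terms=1_000_000):
    # Partial sum of the Leibniz series: pi = 4 * sum((-1)^k / (2k+1))
    s = 0.0
    sign = 1.0
    for k in range(n_terms):
        s += sign / (2 * k + 1)
        sign = -sign
    return 4.0 * s

start = time.perf_counter()
result = workload()
elapsed = time.perf_counter() - start

error = abs(result - 3.141592653589793)
print(f"elapsed: {elapsed:.3f} s, error: {error:.2e}")
```

For real-time simulation the relevant figure of merit is not average speed but the worst-case time per frame, so a production harness would report the maximum over many iterations rather than a single measurement.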

  6. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models of these interacting processes are needed. Coupled reactive transport models are a typical example of such tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity, both of the tool itself and of the specific conceptual model, can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence, and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks proposed by the workshop participants should be relevant for environmental or geo-engineering applications (the latter mostly related to radioactive waste disposal issues), excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in each benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. 
    Furthermore, it illustrates the use of these types of models for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.

  7. Benchmarking Controlled Trial--a novel concept covering all observational effectiveness studies.

    PubMed

    Malmivaara, Antti

    2015-06-01

    The Benchmarking Controlled Trial (BCT) is a novel concept which covers all observational studies aiming to assess effectiveness. BCTs provide evidence of the comparative effectiveness between health service providers, and of effectiveness due to particular features of the health and social care systems. BCTs complement randomized controlled trials (RCTs) as the sources of evidence on effectiveness. This paper presents a definition of the BCT; compares the position of BCTs in assessing effectiveness with that of RCTs; presents a checklist for assessing methodological validity of a BCT; and pilot-tests the checklist with BCTs published recently in the leading medical journals.

  8. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  9. The Alpha consensus meeting on cryopreservation key performance indicators and benchmarks: proceedings of an expert meeting.

    PubMed

    2012-08-01

    This proceedings report presents the outcomes from an international workshop designed to establish consensus on: definitions for key performance indicators (KPIs) for oocyte and embryo cryopreservation, using either slow freezing or vitrification; minimum performance level values for each KPI, representing basic competency; and aspirational benchmark values for each KPI, representing best practice goals. This report includes general presentations about current practice and factors for consideration in the development of KPIs. A total of 14 KPIs were recommended and benchmarks for each are presented. No recommendations were made regarding specific cryopreservation techniques or devices, or whether vitrification is 'better' than slow freezing, or vice versa, for any particular stage or application, as this was considered to be outside the scope of this workshop. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  10. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering from the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation is essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.), and there is a lack of relevant multi-physics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. 
    This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  11. 34 CFR 300.320 - Definition of individualized education program.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of the child's present levels of academic achievement and functional performance, including— (i) How... statement of measurable annual goals, including academic and functional goals designed to— (A) Meet the... aligned to alternate academic achievement standards, a description of benchmarks or short-term objectives...

  12. Addiction recovery: its definition and conceptual boundaries.

    PubMed

    White, William L

    2007-10-01

    The addiction field's failure to achieve consensus on a definition of "recovery" from severe and persistent alcohol and other drug problems undermines clinical research, compromises clinical practice, and muddles the field's communications to service constituents, allied service professionals, the public, and policymakers. This essay discusses 10 questions critical to the achievement of such a definition and offers a working definition of recovery that attempts to meet the criteria of precision, inclusiveness, exclusiveness, measurability, acceptability, and simplicity. The key questions explore who has professional and cultural authority to define recovery, the defining ingredients of recovery, the boundaries (scope and depth) of recovery, and temporal benchmarks of recovery (when recovery begins and ends). The process of defining recovery touches on some of the most controversial issues within the addictions field.

  13. 40 CFR 98.93 - Calculating GHG emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the facility after off-site recycling. Ni = Total nameplate capacity (full and proper charge) of... recycling or destruction (l). Disbursements should include only amounts that are properly stored and... account for the effect of any fluorinated GHG abatement system meeting the definition of abatement system...

  14. Accelerating progress in Artificial General Intelligence: Choosing a benchmark for natural world interaction

    NASA Astrophysics Data System (ADS)

    Rohrer, Brandon

    2010-12-01

    Measuring progress in the field of Artificial General Intelligence (AGI) can be difficult without commonly accepted methods of evaluation. An AGI benchmark would allow evaluation and comparison of the many computational intelligence algorithms that have been developed. In this paper I propose that a benchmark for natural world interaction would possess seven key characteristics: fitness, breadth, specificity, low cost, simplicity, range, and task focus. I also outline two benchmark examples that meet most of these criteria. In the first, the direction task, a human coach directs a machine to perform a novel task in an unfamiliar environment. The direction task is extremely broad, but may be idealistic. In the second, the AGI battery, AGI candidates are evaluated based on their performance on a collection of more specific tasks. The AGI battery is designed to be appropriate to the capabilities of currently existing systems. Both the direction task and the AGI battery would require further definition before being implemented. The paper concludes with a description of a task that might be included in the AGI battery: the search and retrieve task.

  15. Anharmonic Vibrational Spectroscopy on Metal Transition Complexes

    NASA Astrophysics Data System (ADS)

    Latouche, Camille; Bloino, Julien; Barone, Vincenzo

    2014-06-01

    Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems. The systematic interpretation of experimental data and the full characterization of complex molecules can then be facilitated. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double-harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been carried out on organic molecules. Nevertheless, benchmarks of organometallic or inorganic metal complexes at this level are sorely lacking, despite the interest of these systems due to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications on systems of direct technological or biological interest.

  16. PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Frederick, J. M.

    2016-12-01

    In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification determines whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. 
Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
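The verification pattern described above, solving a benchmark problem numerically and comparing against its closed-form analytical solution, can be sketched with a generic example. The 1-D heat-equation problem below is an illustration of the pattern only, not an actual PFLOTRAN test case.

```python
# Verification sketch: explicit finite-difference solution of the 1-D
# heat equation u_t = alpha * u_xx on [0, 1] with u = 0 at both ends,
# checked against the analytical solution sin(pi x) exp(-pi^2 alpha t).
import math

nx, L_dom, alpha, t_end = 51, 1.0, 1.0, 0.01
dx = L_dom / (nx - 1)
dt = 0.25 * dx * dx / alpha                  # stable explicit time step
u = [math.sin(math.pi * i * dx) for i in range(nx)]  # initial condition

t = 0.0
while t < t_end:
    step = min(dt, t_end - t)
    r = alpha * step / (dx * dx)
    u = ([0.0]
         + [u[i] + r * (u[i+1] - 2*u[i] + u[i-1]) for i in range(1, nx - 1)]
         + [0.0])
    t += step

# Analytical solution at t_end for the same initial condition
exact = [math.sin(math.pi * i * dx) * math.exp(-math.pi**2 * alpha * t_end)
         for i in range(nx)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(f"max abs error: {err:.2e}")
```

A QA suite wraps exactly this kind of comparison in a pass/fail criterion (e.g. the maximum error must fall below a documented tolerance), so that any regression in the solver is caught automatically.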

  17. Benchmarking health IT among OECD countries: better data for better policy

    PubMed Central

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    Objective To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. Materials and methods The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. Results The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Discussion Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. Conclusions As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this. PMID:23721983

  18. Benchmarking health IT among OECD countries: better data for better policy.

    PubMed

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this.

  19. 40 CFR 85.1902 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... POLLUTION FROM MOBILE SOURCES Emission Defect Reporting Requirements § 85.1902 Definitions. For the purposes...) which affects any parameter or specification enumerated in appendix VIII of this part; or (2) A defect..., components, systems, software or elements of design which must function properly to ensure continued...

  20. 40 CFR 85.1902 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... POLLUTION FROM MOBILE SOURCES Emission Defect Reporting Requirements § 85.1902 Definitions. For the purposes...) which affects any parameter or specification enumerated in appendix VIII of this part; or (2) A defect..., components, systems, software or elements of design which must function properly to assure continued...

  1. 40 CFR 85.1902 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... POLLUTION FROM MOBILE SOURCES Emission Defect Reporting Requirements § 85.1902 Definitions. For the purposes...) which affects any parameter or specification enumerated in appendix VIII of this part; or (2) A defect..., components, systems, software or elements of design which must function properly to ensure continued...

  2. 40 CFR 85.1902 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... POLLUTION FROM MOBILE SOURCES Emission Defect Reporting Requirements § 85.1902 Definitions. For the purposes...) which affects any parameter or specification enumerated in appendix VIII of this part; or (2) A defect..., components, systems, software or elements of design which must function properly to ensure continued...

  3. Realistic simplified gaugino-higgsino models in the MSSM

    NASA Astrophysics Data System (ADS)

    Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn

    2018-03-01

    We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ , tan β , M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks, that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
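The "proper matrix diagonalisation" step can be illustrated with the standard tree-level neutralino mass matrix in the (bino, wino, down-type higgsino, up-type higgsino) basis. The parameter point below is an arbitrary illustrative input, not a benchmark from the paper, and sign/phase conventions vary between references.

```python
# Sketch: tree-level MSSM neutralino mass matrix, diagonalised
# numerically. Masses are obtained as |eigenvalues|; the eigenvector
# components give the gaugino/higgsino admixture of each state.
import math
import numpy as np

mZ, sw2 = 91.19, 0.231                    # Z mass (GeV), sin^2(theta_W)
sw, cw = math.sqrt(sw2), math.sqrt(1 - sw2)

def neutralino_masses(M1, M2, mu, tan_beta):
    b = math.atan(tan_beta)
    sb, cb = math.sin(b), math.cos(b)
    M = np.array([
        [M1,             0.0,            -mZ * sw * cb,  mZ * sw * sb],
        [0.0,            M2,              mZ * cw * cb, -mZ * cw * sb],
        [-mZ * sw * cb,  mZ * cw * cb,    0.0,          -mu],
        [ mZ * sw * sb, -mZ * cw * sb,   -mu,            0.0],
    ])
    evals, evecs = np.linalg.eigh(M)      # real symmetric matrix
    order = np.argsort(np.abs(evals))     # sort by physical mass
    return np.abs(evals[order]), evecs[:, order]

# Illustrative higgsino-like point: mu well below M1, M2
masses, mixing = neutralino_masses(M1=500.0, M2=600.0, mu=150.0, tan_beta=10.0)
print(masses)  # two light higgsino-like states near |mu|, heavier gauginos
higgsino_content = mixing[2, 0]**2 + mixing[3, 0]**2
print(higgsino_content)  # close to 1 for a higgsino-dominated lightest state
```

This is precisely why manually chosen masses and mixings are not genuine MSSM benchmarks: once {μ, tan β, M1, M2} are fixed, the spectrum and the mixing matrix are both outputs of the same diagonalisation and cannot be varied independently.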

  4. A frontier analysis approach for benchmarking hospital performance in the treatment of acute myocardial infarction.

    PubMed

    Stanford, Robert E

    2004-05-01

    This paper uses a non-parametric frontier model and adaptations of the concepts of cross-efficiency and peer-appraisal to develop a formal methodology for benchmarking provider performance in the treatment of Acute Myocardial Infarction (AMI). Parameters used in the benchmarking process are the rates of proper recognition of indications of six standard treatment processes for AMI; the decision making units (DMUs) to be compared are the Medicare eligible hospitals of a particular state; the analysis produces an ordinal ranking of individual hospital performance scores. The cross-efficiency/peer-appraisal calculation process is constructed to accommodate DMUs that experience no patients in some of the treatment categories. While continuing to rate highly the performances of DMUs which are efficient in the Pareto-optimal sense, our model produces individual DMU performance scores that correlate significantly with good overall performance, as determined by a comparison of the sums of the individual DMU recognition rates for the six standard treatment processes. The methodology is applied to data collected from 107 state Medicare hospitals.
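The simplest baseline mentioned above, comparing hospitals by the sums of their recognition rates across the six AMI treatment processes, can be sketched directly. The hospitals and rates below are made-up illustrative data; the paper's actual method replaces this naive sum with DEA-style cross-efficiency and peer-appraisal scores obtained by solving linear programs per DMU.

```python
# Naive ordinal ranking of DMUs (hospitals) by the sum of their
# recognition rates for six standard AMI treatment processes.
# Data are hypothetical, for illustration only.
rates = {
    "hospital_A": [0.92, 0.88, 0.75, 0.81, 0.90, 0.70],
    "hospital_B": [0.85, 0.95, 0.80, 0.78, 0.86, 0.74],
    "hospital_C": [0.70, 0.65, 0.60, 0.72, 0.68, 0.55],
}
scores = {dmu: sum(r) for dmu, r in rates.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['hospital_B', 'hospital_A', 'hospital_C']
```

The frontier approach improves on this sum in two ways the abstract highlights: it remains well-defined when a hospital has no patients in some treatment categories, and peer-appraisal prevents a DMU from ranking highly by choosing weights that flatter only itself.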

  5. Benchmarking Controlled Trial—a novel concept covering all observational effectiveness studies

    PubMed Central

    Malmivaara, Antti

    2015-01-01

    Abstract The Benchmarking Controlled Trial (BCT) is a novel concept which covers all observational studies aiming to assess effectiveness. BCTs provide evidence of the comparative effectiveness between health service providers, and of effectiveness due to particular features of the health and social care systems. BCTs complement randomized controlled trials (RCTs) as the sources of evidence on effectiveness. This paper presents a definition of the BCT; compares the position of BCTs in assessing effectiveness with that of RCTs; presents a checklist for assessing methodological validity of a BCT; and pilot-tests the checklist with BCTs published recently in the leading medical journals. PMID:25965700

  6. PFLOTRAN-RepoTREND Source Term Comparison Summary.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frederick, Jennifer M.

    Code inter-comparison studies are useful exercises to verify and benchmark independently developed software to ensure proper function, especially when the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment. This summary describes the results of the first portion of the code inter-comparison between PFLOTRAN and RepoTREND, which compares the radionuclide source term used in a typical performance assessment.

  7. Computation of Asteroid Proper Elements on the Grid

    NASA Astrophysics Data System (ADS)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure for gridification of the computation of asteroid proper orbital elements is described. The need to speed up these time-consuming computations and make them more efficient is justified by the large increase in observational data expected from the next-generation all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose. The average time for catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  8. Selecting Peer Institutions with IPEDS and Other Nationally Available Data

    ERIC Educational Resources Information Center

    Carrigan, Sarah D.

    2012-01-01

    The process of identifying and selecting peers for a college or university is one of this volume's definitions for "benchmarking": "a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to identify opportunities for…

  9. Expanded Outreach at Clemson University. A Case Study.

    ERIC Educational Resources Information Center

    Bennett, A. Wayne

    This paper summarizes recent strategic planning activities at Clemson University, focusing on outreach and extended education goals at the university. Specific benchmarks for outreach and extended education include: (1) by May 1994, each department will develop an operational definition of its public service mission, an action plan to integrate…

  10. A Proposal for the Diagnosis of Emotional Disturbance.

    ERIC Educational Resources Information Center

    Eaves, Ronald C.

    1982-01-01

    The underlying reasons for muddled definitions of emotional disturbance and their resultant befuddled diagnostic processes are discussed in terms of four factors: (1) the impact of theory, (2) societal diversity, (3) benchmarks for decision making, and (4) instrumentation. The author presents a method for diagnosis that is practical, functional,…

  11. 7 CFR 1924.253 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Serious defects in or improper installation of heating systems or central air conditioning. (4) Defects in... CONSTRUCTION AND REPAIR Complaints and Compensation for Construction Defects § 1924.253 Definitions. As used in..., smoke detectors, railings, etc., as well as failure to provide or properly install devices to aid...

  12. 7 CFR 1924.253 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Serious defects in or improper installation of heating systems or central air conditioning. (4) Defects in... CONSTRUCTION AND REPAIR Complaints and Compensation for Construction Defects § 1924.253 Definitions. As used in..., smoke detectors, railings, etc., as well as failure to provide or properly install devices to aid...

  13. 7 CFR 1924.253 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Serious defects in or improper installation of heating systems or central air conditioning. (4) Defects in... CONSTRUCTION AND REPAIR Complaints and Compensation for Construction Defects § 1924.253 Definitions. As used in..., smoke detectors, railings, etc., as well as failure to provide or properly install devices to aid...

  14. 7 CFR 1924.253 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Serious defects in or improper installation of heating systems or central air conditioning. (4) Defects in... CONSTRUCTION AND REPAIR Complaints and Compensation for Construction Defects § 1924.253 Definitions. As used in..., smoke detectors, railings, etc., as well as failure to provide or properly install devices to aid...

  15. 40 CFR 266.202 - Definition of solid waste.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... MANAGEMENT FACILITIES Military Munitions § 266.202 Definition of solid waste. (a) A military munition is not... personnel or explosives and munitions emergency response specialists (including training in proper destruction of unused propellant or other munitions); or (ii) Use in research, development, testing, and...

  16. 40 CFR 266.202 - Definition of solid waste.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... MANAGEMENT FACILITIES Military Munitions § 266.202 Definition of solid waste. (a) A military munition is not... personnel or explosives and munitions emergency response specialists (including training in proper destruction of unused propellant or other munitions); or (ii) Use in research, development, testing, and...

  17. 40 CFR 266.202 - Definition of solid waste.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... MANAGEMENT FACILITIES Military Munitions § 266.202 Definition of solid waste. (a) A military munition is not... personnel or explosives and munitions emergency response specialists (including training in proper destruction of unused propellant or other munitions); or (ii) Use in research, development, testing, and...

  18. 31 CFR 29.103 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Definitions. 29.103 Section 29.103 Money and Finance: Treasury Office of the Secretary of the Treasury FEDERAL BENEFIT PAYMENTS UNDER... regulations were properly applied; and/or (2) The mathematical computation of the benefit or liability is...

  19. 31 CFR 29.103 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance: Treasury 1 2013-07-01 2013-07-01 false Definitions. 29.103 Section 29.103 Money and Finance: Treasury Office of the Secretary of the Treasury FEDERAL BENEFIT PAYMENTS UNDER... regulations were properly applied; and/or (2) The mathematical computation of the benefit or liability is...

  20. 31 CFR 29.103 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance: Treasury 1 2011-07-01 2011-07-01 false Definitions. 29.103 Section 29.103 Money and Finance: Treasury Office of the Secretary of the Treasury FEDERAL BENEFIT PAYMENTS UNDER... regulations were properly applied; and/or (2) The mathematical computation of the benefit or liability is...

  1. 31 CFR 29.103 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 1 2014-07-01 2014-07-01 false Definitions. 29.103 Section 29.103 Money and Finance: Treasury Office of the Secretary of the Treasury FEDERAL BENEFIT PAYMENTS UNDER... regulations were properly applied; and/or (2) The mathematical computation of the benefit or liability is...

  2. 31 CFR 29.103 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance: Treasury 1 2012-07-01 2012-07-01 false Definitions. 29.103 Section 29.103 Money and Finance: Treasury Office of the Secretary of the Treasury FEDERAL BENEFIT PAYMENTS UNDER... regulations were properly applied; and/or (2) The mathematical computation of the benefit or liability is...

  3. 21 CFR 660.20 - Blood Grouping Reagent.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Blood Grouping Reagent. 660.20 Section 660.20 Food... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Blood Grouping Reagent § 660.20 Blood Grouping Reagent. (a) Proper name and definition. The proper name of this product shall be Blood Grouping...

  4. 21 CFR 660.20 - Blood Grouping Reagent.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Blood Grouping Reagent. 660.20 Section 660.20 Food... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Blood Grouping Reagent § 660.20 Blood Grouping Reagent. (a) Proper name and definition. The proper name of this product shall be Blood Grouping...

  5. 21 CFR 660.20 - Blood Grouping Reagent.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 7 2013-04-01 2013-04-01 false Blood Grouping Reagent. 660.20 Section 660.20 Food... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Blood Grouping Reagent § 660.20 Blood Grouping Reagent. (a) Proper name and definition. The proper name of this product shall be Blood Grouping...

  6. 21 CFR 660.20 - Blood Grouping Reagent.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 7 2014-04-01 2014-04-01 false Blood Grouping Reagent. 660.20 Section 660.20 Food... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Blood Grouping Reagent § 660.20 Blood Grouping Reagent. (a) Proper name and definition. The proper name of this product shall be Blood Grouping...

  7. 21 CFR 660.20 - Blood Grouping Reagent.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 7 2012-04-01 2012-04-01 false Blood Grouping Reagent. 660.20 Section 660.20 Food... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Blood Grouping Reagent § 660.20 Blood Grouping Reagent. (a) Proper name and definition. The proper name of this product shall be Blood Grouping...

  8. 21 CFR 660.30 - Reagent Red Blood Cells.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Reagent Red Blood Cells. 660.30 Section 660.30...) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Reagent Red Blood Cells § 660.30 Reagent Red Blood Cells. (a) Proper name and definition. The proper name of the product shall be...

  9. 21 CFR 660.40 - Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Hepatitis B Surface Antigen. 660.40 Section 660.40...) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Hepatitis B Surface Antigen § 660.40 Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product...

  10. 21 CFR 660.40 - Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 7 2014-04-01 2014-04-01 false Hepatitis B Surface Antigen. 660.40 Section 660.40...) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Hepatitis B Surface Antigen § 660.40 Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product...

  11. 21 CFR 660.40 - Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 7 2013-04-01 2013-04-01 false Hepatitis B Surface Antigen. 660.40 Section 660.40...) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Hepatitis B Surface Antigen § 660.40 Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product...

  12. 21 CFR 660.40 - Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 7 2012-04-01 2012-04-01 false Hepatitis B Surface Antigen. 660.40 Section 660.40...) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Hepatitis B Surface Antigen § 660.40 Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product...

  13. 21 CFR 640.90 - Plasma Protein Fraction (Human).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Plasma Protein Fraction (Human). 640.90 Section...) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma Protein Fraction (Human) § 640.90 Plasma Protein Fraction (Human). (a) Proper name and definition. The proper name of the product shall be...

  14. 21 CFR 640.90 - Plasma Protein Fraction (Human).

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 7 2012-04-01 2012-04-01 false Plasma Protein Fraction (Human). 640.90 Section...) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma Protein Fraction (Human) § 640.90 Plasma Protein Fraction (Human). (a) Proper name and definition. The proper name of the product shall be...

  15. 21 CFR 640.90 - Plasma Protein Fraction (Human).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Plasma Protein Fraction (Human). 640.90 Section 640...) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma Protein Fraction (Human) § 640.90 Plasma Protein Fraction (Human). (a) Proper name and definition. The proper name of the product shall be...

  16. 21 CFR 640.90 - Plasma Protein Fraction (Human).

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 7 2013-04-01 2013-04-01 false Plasma Protein Fraction (Human). 640.90 Section...) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma Protein Fraction (Human) § 640.90 Plasma Protein Fraction (Human). (a) Proper name and definition. The proper name of the product shall be...

  17. 21 CFR 640.90 - Plasma Protein Fraction (Human).

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 7 2014-04-01 2014-04-01 false Plasma Protein Fraction (Human). 640.90 Section...) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma Protein Fraction (Human) § 640.90 Plasma Protein Fraction (Human). (a) Proper name and definition. The proper name of the product shall be...

  18. 21 CFR 660.30 - Reagent Red Blood Cells.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 7 2013-04-01 2013-04-01 false Reagent Red Blood Cells. 660.30 Section 660.30...) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Reagent Red Blood Cells § 660.30 Reagent Red Blood Cells. (a) Proper name and definition. The proper name of the product shall be...

  19. 21 CFR 660.30 - Reagent Red Blood Cells.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 7 2014-04-01 2014-04-01 false Reagent Red Blood Cells. 660.30 Section 660.30...) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Reagent Red Blood Cells § 660.30 Reagent Red Blood Cells. (a) Proper name and definition. The proper name of the product shall be...

  20. 21 CFR 660.30 - Reagent Red Blood Cells.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 7 2012-04-01 2012-04-01 false Reagent Red Blood Cells. 660.30 Section 660.30...) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Reagent Red Blood Cells § 660.30 Reagent Red Blood Cells. (a) Proper name and definition. The proper name of the product shall be...

  1. 21 CFR 660.40 - Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Hepatitis B Surface Antigen. 660.40 Section 660.40...) BIOLOGICS ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Hepatitis B Surface Antigen § 660.40 Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product...

  2. Disaster metrics: quantitative benchmarking of hospital surge capacity in trauma-related multiple casualty events.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; Subbarao, Italo

    2011-06-01

    Hospital surge capacity in multiple casualty events (MCE) is the core of hospital medical response, and an integral part of the total medical capacity of the community affected. To date, however, there has been no consensus regarding the definition or quantification of hospital surge capacity. The first objective of this study was to quantitatively benchmark the various components of hospital surge capacity pertaining to the care of critically and moderately injured patients in trauma-related MCE. The second objective was to illustrate the applications of those quantitative parameters in local, regional, national, and international disaster planning; in the distribution of patients to various hospitals by prehospital medical services; and in the decision-making process for ambulance diversion. A 2-step approach was adopted in the methodology of this study: first, an extensive literature search was performed, followed by mathematical modeling. Quantitative studies on hospital surge capacity for trauma injuries were used as the framework for our model. The North Atlantic Treaty Organization triage categories (T1-T4) were used in the modeling process for simplicity. Hospital Acute Care Surge Capacity (HACSC) was defined as the maximum number of critical (T1) and moderate (T2) casualties a hospital can adequately care for per hour, after recruiting all possible additional medical assets. HACSC was modeled as the number of emergency department beds (#EDB) divided by the emergency department time (EDT): HACSC = #EDB/EDT. In trauma-related MCE, the EDT was quantitatively benchmarked at 2.5 hours. Because, by definition, most of the critical and moderate casualties requiring admission arrive at hospitals within a 6-hour period, the hospital bed surge capacity must match the HACSC at 6 hours to ensure coordinated care, and it was mathematically benchmarked at 18% of the staffed hospital bed capacity. Defining and quantitatively benchmarking the different components of hospital surge capacity is vital to hospital preparedness in MCE. Prospective studies of our mathematical model are needed to verify its applicability, generalizability, and validity.
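    The quoted relations (HACSC = #EDB/EDT with EDT benchmarked at 2.5 hours, a 6-hour arrival window, and bed surge capacity at 18% of staffed beds) can be checked with a few lines of arithmetic. The hospital figures below are hypothetical:

```python
ED_BEDS = 30         # emergency department beds, #EDB (hypothetical)
STAFFED_BEDS = 400   # staffed hospital beds (hypothetical)
EDT_HOURS = 2.5      # benchmarked ED time per T1/T2 casualty (hours)

# HACSC = #EDB / EDT: T1+T2 casualties the hospital can care for per hour
hacsc = ED_BEDS / EDT_HOURS

# Load accumulated over the 6-hour arrival window, vs. the 18%-of-beds benchmark
surge_window_load = hacsc * 6
bed_surge_capacity = 0.18 * STAFFED_BEDS
```

    For this (hypothetical) hospital, HACSC is 12 casualties per hour, and the 6-hour load of 72 admissions matches 18% of the 400 staffed beds, illustrating how the two benchmarks are meant to align.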

  3. Dark matter and electroweak phase transition in the mixed scalar dark matter model

    NASA Astrophysics Data System (ADS)

    Liu, Xuewen; Bian, Ligong

    2018-03-01

    We study the electroweak phase transition in the framework of the scalar singlet-doublet mixed dark matter model, in which the particle dark matter candidate is the lightest neutral Higgs that comprises the C P -even component of the inert doublet and a singlet scalar. The dark matter can be dominated by the inert doublet or singlet scalar depending on the mixing. We present several benchmark models to investigate the two situations after imposing several theoretical and experimental constraints. An additional singlet scalar and the inert doublet drive the electroweak phase transition to be strongly first order. A strong first-order electroweak phase transition and a viable dark matter candidate can be accomplished in two benchmark models simultaneously, for which a proper mass splitting among the neutral and charged Higgs masses is needed.

  4. Quantifying risk and benchmarking performance in the adult intensive care unit.

    PubMed

    Higgins, Thomas L

    2007-01-01

    Morbidity, mortality, and length-of-stay outcomes in patients receiving critical care are difficult to interpret unless they are risk-stratified for diagnosis, presenting severity of illness, and other patient characteristics. Acuity adjustment systems for adults include the Acute Physiology And Chronic Health Evaluation (APACHE), the Mortality Probability Model (MPM), and the Simplified Acute Physiology Score (SAPS). All have recently been updated and recalibrated to reflect contemporary results. Specialized scores are also available for patient subpopulations where general acuity scores have drawbacks. Demand for outcomes data is likely to grow with pay-for-performance initiatives as well as for routine clinical, prognostic, administrative, and research applications. It is important for clinicians to understand how these scores are derived and how they are properly applied to quantify patient severity of illness and benchmark intensive care unit performance.

  5. USE OF CATEGORICAL REGRESSION IN THE DEFINITION OF THE DURATION/CONCENTRATION CURVE IN THE U.S. EPA'S ACUTE REFERENCE EXPOSURE (ARE) METHODOLOGY

    EPA Science Inventory

    The U.S. EPA's current draft ARE methodology offers three different approaches for derivation of health effects values for various chemicals and agents under inhalation exposure scenarios of < 24 hrs. These approaches, the NOAEL, benchmark concentration (BMC), and categorical ...

  6. Change for the Right Reasons: What Is a Best Practice?

    ERIC Educational Resources Information Center

    Todaro, Julie Beth

    2002-01-01

    Explores the concept and definition of best practice and how it relates to organizational concepts such as benchmarking, leadership, innovation, and managing change. Describes the six steps of best practice: (1) identifying areas of need; (2) assessing the need areas; (3) establishing profiles; (4) looking beyond the organization; (5) establishing…

  7. 31 CFR 321.25 - Payment and retention of definitive securities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... prohibited from accepting an image, or other copy or reproduction of the definitive security, for redemption or processing. To ensure that all transactions processed by agents are properly validated, agents... converted to an electronic image. At a minimum, the agent must retain such securities for a period of thirty...

  8. 31 CFR 321.25 - Payment and retention of definitive securities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... agent is prohibited from accepting an image, or other copy or reproduction of the definitive security, for redemption or processing. To ensure that all transactions processed by agents are properly... truncated and converted to an electronic image. At a minimum, the agent must retain such securities for a...

  9. 31 CFR 321.25 - Payment and retention of definitive securities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... prohibited from accepting an image, or other copy or reproduction of the definitive security, for redemption or processing. To ensure that all transactions processed by agents are properly validated, agents... converted to an electronic image. At a minimum, the agent must retain such securities for a period of thirty...

  10. 45 CFR 1640.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Regulations Relating to Public Welfare (Continued) LEGAL SERVICES CORPORATION APPLICATION OF FEDERAL LAW TO LSC RECIPIENTS § 1640.2 Definitions. (a)(1) Federal law relating to the proper use of Federal funds... the laws listed in paragraph (a)(1) of this section, LSC shall be considered a Federal agency and a...

  11. 45 CFR 1640.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Regulations Relating to Public Welfare (Continued) LEGAL SERVICES CORPORATION APPLICATION OF FEDERAL LAW TO LSC RECIPIENTS § 1640.2 Definitions. (a)(1) Federal law relating to the proper use of Federal funds... the laws listed in paragraph (a)(1) of this section, LSC shall be considered a Federal agency and a...

  12. 45 CFR 1640.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Regulations Relating to Public Welfare (Continued) LEGAL SERVICES CORPORATION APPLICATION OF FEDERAL LAW TO LSC RECIPIENTS § 1640.2 Definitions. (a)(1) Federal law relating to the proper use of Federal funds... the laws listed in paragraph (a)(1) of this section, LSC shall be considered a Federal agency and a...

  13. 45 CFR 1640.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Regulations Relating to Public Welfare (Continued) LEGAL SERVICES CORPORATION APPLICATION OF FEDERAL LAW TO LSC RECIPIENTS § 1640.2 Definitions. (a)(1) Federal law relating to the proper use of Federal funds... the laws listed in paragraph (a)(1) of this section, LSC shall be considered a Federal agency and a...

  14. 45 CFR 1640.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Regulations Relating to Public Welfare (Continued) LEGAL SERVICES CORPORATION APPLICATION OF FEDERAL LAW TO LSC RECIPIENTS § 1640.2 Definitions. (a)(1) Federal law relating to the proper use of Federal funds... the laws listed in paragraph (a)(1) of this section, LSC shall be considered a Federal agency and a...

  15. 21 CFR 1210.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... MILK ACT General Provisions § 1210.3 Definitions. (a) Secretary. Secretary means the Secretary of...) Milk. For the purposes of the act and of the regulations in this part: Milk is the whole, fresh, clean, lacteal secretion obtained by the complete milking of one or more healthy cows, properly fed and kept...

  16. [A new questionnaire for the assessment of parental health literacy].

    PubMed

    Gács, Zsófia; Berend, Katalin; Csanádi, Gábor; Csizmady, Adrienne

    2015-10-18

    Parental health literacy is an important factor in pediatric health. Although it is thoroughly studied in other countries, neither a proper definition nor an adequate tool for its measurement exists in Hungarian. The aim of this work was to define the dimensions of parental health literacy and to introduce a questionnaire for its measurement. Opinions of parents, pediatric nurses and pediatricians on parental health literacy were used to establish the definition and basic components. Based on these and previously standardized tests, a new questionnaire was developed. Four dimensions of parental health literacy were formulated: knowledge, functional literacy, self-confidence and motivation. The new questionnaire assesses all four dimensions through eight topics. This is the first culturally adapted definition and test of parental health literacy in Hungarian. With its application the efficacy of both primary care services and health education may be improved, and the correlation between parental health literacy and pediatric health may be properly studied.

  17. How Students Learn from Multiple Contexts and Definitions: Proper Time as a Coordination Class

    ERIC Educational Resources Information Center

    Levrini, Olivia; diSessa, Andrea A.

    2008-01-01

    This article provides an empirical analysis of a single classroom episode in which students reveal difficulties with the concept of proper time in special relativity but slowly make progress in improving their understanding. The theoretical framework used is "coordination class theory," which is an evolving model of concepts and conceptual change.…

  18. 21 CFR 660.1 - Antibody to Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 7 2013-04-01 2013-04-01 false Antibody to Hepatitis B Surface Antigen. 660.1... Hepatitis B Surface Antigen § 660.1 Antibody to Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product shall be Antibody to Hepatitis B Surface Antigen. The product is...

  19. 21 CFR 660.1 - Antibody to Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 7 2014-04-01 2014-04-01 false Antibody to Hepatitis B Surface Antigen. 660.1... Hepatitis B Surface Antigen § 660.1 Antibody to Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product shall be Antibody to Hepatitis B Surface Antigen. The product is...

  20. 21 CFR 660.1 - Antibody to Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 7 2012-04-01 2012-04-01 false Antibody to Hepatitis B Surface Antigen. 660.1... Hepatitis B Surface Antigen § 660.1 Antibody to Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product shall be Antibody to Hepatitis B Surface Antigen. The product is...

  1. 21 CFR 660.1 - Antibody to Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Antibody to Hepatitis B Surface Antigen. 660.1... Hepatitis B Surface Antigen § 660.1 Antibody to Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product shall be Antibody to Hepatitis B Surface Antigen. The product is...

  2. 21 CFR 640.30 - Plasma.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Plasma. 640.30 Section 640.30 Food and Drugs FOOD... STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma § 640.30 Plasma. (a) Proper name and definition. The proper name of this component is Plasma. The component is defined as: (1) The fluid portion of one unit...

  3. 21 CFR 640.30 - Plasma.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 7 2012-04-01 2012-04-01 false Plasma. 640.30 Section 640.30 Food and Drugs FOOD... STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma § 640.30 Plasma. (a) Proper name and definition. The proper name of this component is Plasma. The component is defined as: (1) The fluid portion of one unit...

  4. 21 CFR 640.30 - Plasma.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Plasma. 640.30 Section 640.30 Food and Drugs FOOD... STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma § 640.30 Plasma. (a) Proper name and definition. The proper name of this component is Plasma. The component is defined as: (1) The fluid portion of one unit...

  5. 21 CFR 640.30 - Plasma.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 7 2013-04-01 2013-04-01 false Plasma. 640.30 Section 640.30 Food and Drugs FOOD... STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma § 640.30 Plasma. (a) Proper name and definition. The proper name of this component is Plasma. The component is defined as: (1) The fluid portion of one unit...

  6. 21 CFR 640.30 - Plasma.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 7 2014-04-01 2014-04-01 false Plasma. 640.30 Section 640.30 Food and Drugs FOOD... STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Plasma § 640.30 Plasma. (a) Proper name and definition. The proper name of this component is Plasma. The component is defined as: (1) The fluid portion of one unit...

  7. 21 CFR 660.30 - Reagent Red Blood Cells.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Reagent Red Blood Cells. 660.30 Section 660.30 Food... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Reagent Red Blood Cells § 660.30 Reagent Red Blood Cells. (a) Proper name and definition. The proper name of the product shall be Reagent Red...

  8. 21 CFR 660.1 - Antibody to Hepatitis B Surface Antigen.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Antibody to Hepatitis B Surface Antigen. 660.1... Hepatitis B Surface Antigen § 660.1 Antibody to Hepatitis B Surface Antigen. (a) Proper name and definition. The proper name of this product shall be Antibody to Hepatitis B Surface Antigen. The product is...

  9. Benchmarking of energy consumption in municipal wastewater treatment plants - a survey of over 200 plants in Italy.

    PubMed

    Vaccari, M; Foladori, P; Nembrini, S; Vitali, F

    2018-05-01

One of the largest surveys in Europe of energy consumption in Italian wastewater treatment plants (WWTPs) is presented, based on 241 WWTPs serving a total of more than 9,000,000 population equivalents (PE). The study contributes towards standardised, resilient data and benchmarking and helps identify potentials for energy savings. In the energy benchmark, three indicators were used: specific energy consumption per population equivalent (kWh PE⁻¹ year⁻¹), per cubic meter (kWh/m³), and per unit of chemical oxygen demand (COD) removed (kWh/kgCOD). The indicator kWh/m³, although widely applied, resulted in a biased benchmark because it is highly influenced by stormwater and infiltration: plants with combined sewer networks (common in Europe) showed an apparently better energy performance. Conversely, the indicator kWh PE⁻¹ year⁻¹ yielded a more meaningful benchmark definition. High energy efficiency was associated with: (i) large plant capacity, (ii) higher COD concentration in the wastewater, (iii) separate sewer systems, (iv) capacity utilisation over 80%, and (v) high organic loads, but without overloading. The 25th percentile was proposed as a benchmark for four size classes: 23 kWh PE⁻¹ y⁻¹ for large plants > 100,000 PE; 42 kWh PE⁻¹ y⁻¹ for capacity 10,000 < PE < 100,000; 48 kWh PE⁻¹ y⁻¹ for capacity 2,000 < PE < 10,000; and 76 kWh PE⁻¹ y⁻¹ for small plants < 2,000 PE.
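    The per-PE indicator and the proposed 25th-percentile benchmarks lend themselves to a direct calculation. A minimal Python sketch (the plant below is hypothetical, not a record from the survey):

    ```python
    def specific_consumption_kwh_per_pe_year(annual_kwh, population_equivalent):
        """kWh PE^-1 year^-1, the indicator the survey found most meaningful."""
        return annual_kwh / population_equivalent

    def size_class_benchmark(population_equivalent):
        """25th-percentile benchmark proposed for each of the four size classes."""
        if population_equivalent > 100_000:
            return 23.0
        elif population_equivalent > 10_000:
            return 42.0
        elif population_equivalent > 2_000:
            return 48.0
        else:
            return 76.0

    # Hypothetical 50,000 PE plant consuming 1.9 GWh per year:
    pe = 50_000
    consumption = specific_consumption_kwh_per_pe_year(1_900_000, pe)  # 38.0
    meets_benchmark = consumption <= size_class_benchmark(pe)          # True
    ```

    Note that the same plant judged by kWh/m³ could rank very differently if its network carries substantial stormwater, which is precisely the bias the survey describes.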

  10. The National Practice Benchmark for Oncology: 2015 Report for 2014 Data

    PubMed Central

    Balch, Carla; Ogle, John D.

    2016-01-01

    The National Practice Benchmark (NPB) is a unique tool used to measure oncology practices against others across the country in a meaningful way despite variations in practice demographics, size, and setting. In today’s challenging economic environment, each practice positions service offerings and competitive advantages to attract patients. Although the data in the NPB report are primarily reported by community oncology practices, the business structure and arrangements with regional health care systems are also reflected in the benchmark report. The ability to produce detailed metrics is an accomplishment of excellence in business and clinical management. With these metrics, a practice should be able to measure and analyze its current business practices and make appropriate changes, if necessary. In this report, we build on the foundation initially established by Oncology Metrics (acquired by Flatiron Health in 2014) over years of data collection and refine definitions to deliver the NPB, which is uniquely meaningful in the oncology market. PMID:27006357

  11. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1999-01-01

Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly; the task can be simplified by high-level languages and, better still, automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data-parallel model) and OpenMP (based on the shared-memory parallel model) standards has offered great opportunities in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study we implemented parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and the pros and cons of the different approaches will be discussed, along with our experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of these techniques to realistic aerospace applications will be presented.

  12. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Hribar, M.; Waheed, A.; Yan, J.; Saini, Subhash (Technical Monitor)

    1999-01-01

Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly; this task can be simplified by high-level languages and, better still, automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data-parallel model) and OpenMP (based on the shared-memory parallel model) standards has offered great opportunities in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study, we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and the pros and cons of the different approaches will be discussed, along with our experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of these techniques to realistic aerospace applications will be presented.

  13. Real-time simulation of biological soft tissues: a PGD approach.

    PubMed

    Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F

    2013-05-01

We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based on proper generalized decomposition (PGD) techniques, a generalization of proper orthogonal decomposition (POD). PGD can be considered a means of a priori model order reduction and provides a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase, in which results are obtained in real time. Results are provided that show the potential of the proposed technique, together with benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Incremental cost effectiveness evaluation in clinical research.

    PubMed

    Krummenauer, Frank; Landwehr, I

    2005-01-28

The health economic evaluation of therapeutic and diagnostic strategies is of increasing importance in clinical research, so clinical trialists must consider health economic aspects more frequently. However, whereas trialists are quite familiar with classical effect measures in clinical trials, the corresponding parameters in the health economic evaluation of therapeutic and diagnostic procedures are still less well known. The concepts of incremental cost-effectiveness ratios (ICERs) and incremental net health benefit (INHB) are illustrated and contrasted using the cost-effectiveness evaluation of cataract surgery with monofocal and multifocal intraocular lenses. An ICER relates the cost of a treatment to its clinical benefit as a ratio (indexed as Euro per clinical benefit unit); it can therefore be compared directly with a pre-specified willingness-to-pay (WTP) benchmark, which represents the maximum cost health insurers would invest to achieve one clinical benefit unit. The INHB estimates a treatment's net clinical benefit after accounting for its cost increase versus an established therapeutic standard. Resource allocation rules can be formulated by means of both effect measures. Both the ICER and the INHB approach enable the definition of directional resource allocation rules, and the allocation decisions arising from these rules are identical as long as the willingness-to-pay benchmark is fixed in advance. Both strategies therefore crucially call for a priori determination of the underlying clinical benefit endpoint (such as gain in vision lines after cataract surgery or gain in quality-adjusted life years) and the corresponding willingness-to-pay benchmark. The use of incremental cost effectiveness and net health benefit estimates provides a rationale for health economic allocation discussions and funding decisions.
It implies the same requirements on trial protocols as already established for clinical trials, that is, the a priori definition of the primary hypothesis (formulated as an allocation rule involving a pre-specified willingness-to-pay benchmark) and the primary clinical benefit endpoint (as a rationale for the effectiveness evaluation).
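    The two effect measures described reduce to short formulas that, for a fixed WTP, yield the same allocation decision. A Python sketch (the cost, benefit, and WTP figures below are hypothetical, not data from the cataract trial):

    ```python
    def icer(delta_cost, delta_benefit):
        """Incremental cost-effectiveness ratio (Euro per clinical benefit unit)."""
        return delta_cost / delta_benefit

    def inhb(delta_cost, delta_benefit, wtp):
        """Incremental net health benefit for a willingness-to-pay benchmark."""
        return delta_benefit - delta_cost / wtp

    # Hypothetical: the new treatment costs 600 Euro more and gains 0.5 benefit
    # units, with a pre-specified WTP of 2000 Euro per benefit unit.
    delta_cost, delta_benefit, wtp = 600.0, 0.5, 2000.0
    adopt_by_icer = icer(delta_cost, delta_benefit) <= wtp    # 1200 <= 2000 -> adopt
    adopt_by_inhb = inhb(delta_cost, delta_benefit, wtp) > 0  # 0.2 > 0      -> adopt
    # With the WTP fixed in advance, both rules always agree.
    ```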

  15. 76 FR 36976 - Sample Income Data To Meet the Low-Income Definition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-24

    ...% benchmark provides a good balance. NCUA will consider a more flexible approach in the future if warranted... will permit flexibility and will enable NCUA to work with potential candidates. NCUA may in the future... NCUA Board finds that the 5-year look back period provides a good balance. The Board emphasizes that...

  16. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    PubMed

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

This study was conducted to find the best-suited freely available software for protein modelling, using a few sample proteins. The proteins ranged from small to large, with crystal structures available for benchmarking. Key servers, including Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-v2, and ModWeb, were compared and used for model generation. The benchmarking was done on four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the best-suited software. The parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model build good models of both small and large proteins compared with the other screened software. The other software also performed well but was often less efficient at providing full-length, properly folded structures.

  17. Benchmarking worker nodes using LHCb productions and comparing with HEPSpec06

    NASA Astrophysics Data System (ADS)

    Charpentier, P.

    2017-10-01

In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know its “power” with rather good precision. This allows, for example, pilot jobs to match a task for which the required CPU-work is known, or to define the number of events to be processed knowing the CPU-work per event; otherwise one always risks the task being aborted because it exceeds the CPU capabilities of the resource. It also allows better accounting of the consumed resources. The traditional way CPU power has been estimated in WLCG since 2007 is with the HEP-Spec06 (HS06) benchmark suite, which was verified at the time to scale properly with a set of typical HEP applications. However, the hardware architecture of processors has evolved, and all WLCG experiments have moved to 64-bit applications and use compilation flags different from those advertised for running HS06. It is therefore interesting to check the scaling of HS06 with the HEP applications. For this purpose, we used CPU-intensive massive simulation productions from the LHCb experiment and compared their event throughput to the HS06 rating of the worker nodes. We also compared it with a much faster benchmark script used by the DIRAC framework, with which LHCb evaluates worker-node performance at run time. This contribution reports the findings of these comparisons: the main observation is that the scaling with HS06 is no longer fulfilled, while the fast benchmarks scale better but are less precise. One can also clearly see that some hardware or software features, when enabled on the worker nodes, may enhance their performance beyond the expectation from either benchmark, depending on external factors.
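    The scaling check described can be sketched in a few lines of Python: if HS06 scaled perfectly with the application, event throughput divided by the HS06 rating would be the same on every worker node. The node names, ratings, and throughputs below are invented for illustration, not measurements from the LHCb productions:

    ```python
    # Hypothetical worker nodes with per-core HS06 ratings and measured
    # simulation event throughputs.
    nodes = [
        {"name": "wn1", "hs06_per_core": 10.0, "events_per_sec": 2.0},
        {"name": "wn2", "hs06_per_core": 15.0, "events_per_sec": 2.6},
    ]

    # Throughput per HS06 unit; perfect scaling would make these equal.
    ratios = [n["events_per_sec"] / n["hs06_per_core"] for n in nodes]

    # Spread of the ratios: 1.0 means HS06 predicts throughput exactly;
    # anything larger quantifies the scaling violation.
    spread = max(ratios) / min(ratios)
    ```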

  18. Calibrating coseismic coastal land-level changes during the 2014 Iquique (Mw=8.2) earthquake (northern Chile) with leveling, GPS and intertidal biota.

    PubMed

    Jaramillo, Eduardo; Melnick, Daniel; Baez, Juan Carlos; Montecino, Henry; Lagos, Nelson A; Acuña, Emilio; Manzano, Mario; Camus, Patricio A

    2017-01-01

The April 1st, 2014 Iquique earthquake (Mw 8.1) occurred along the northern Chile margin, where the Nazca plate is subducted below the South American continent. The last great megathrust earthquake here, an Mw ~8.8 event in 1877, opened a seismic gap that was only partly closed by the 2014 earthquake. Prior to the earthquake, in 2013, and shortly after it, we compared data from levelled benchmarks, campaign GPS instruments, and continuous GPS stations, and estimated sea levels using the upper vertical limit of rocky-shore benthic organisms including algae, barnacles, and mussels. Land-level changes estimated from mean elevations of benchmarks indicate subsidence along a ~100-km stretch of coast, ranging from 3 to 9 cm at Corazones (18°30'S) to between 30 and 50 cm at Pisagua (19°30'S). About 15 cm of uplift was measured along the southern part of the rupture at Chanavaya (20°50'S). Land-level changes obtained from benchmarks and campaign GPS were similar at most sites (mean difference 3.7±3.2 cm). Larger differences, however, were found between benchmarks and continuous GPS (mean difference 8.5±3.6 cm), possibly because the sites were not collocated and were separated by several kilometers. Subsidence estimated from the upper limits of intertidal fauna at Pisagua ranged between 40 and 60 cm, in general agreement with benchmarks and GPS. At Chanavaya, the magnitude and sense of displacement of the upper marine limit varied across species, possibly due to species-dependent differences in ecology. Among the studied species, measurements on lithothamnioid calcareous algae most closely matched those made with benchmarks and GPS. When properly calibrated, rocky-shore benthic species may thus be used to accurately measure land-level changes along coasts affected by subduction earthquakes. Our calibration of these methods will improve their accuracy when applied to coasts lacking pre-earthquake data and when estimating deformation during pre-instrumental earthquakes.

  19. Four-Nozzle Benchmark Wind Tunnel Model USA Code Solutions for Simulation of Multiple Rocket Base Flow Recirculation at 145,000 Feet Altitude

    NASA Technical Reports Server (NTRS)

    Dougherty, N. S.; Johnson, S. L.

    1993-01-01

Multiple rocket exhaust plume interactions at high altitudes can produce base flow recirculation with attendant alteration of the base pressure coefficient and increased base heating. A search for a good wind tunnel benchmark problem to check grid clustering technique and turbulence modeling turned up the experiment done at AEDC in 1961 by Goethert and Matz on a 4.25-in. diameter domed missile base model with four rocket nozzles. This wind tunnel model, with varied external bleed air flow for the base flow wake, produced measured p/p(sub ref) at the center of the base as high as 3.3 due to plume flow recirculation back onto the base. At that time in 1961, relatively inexpensive experimentation with air at gamma = 1.4, nozzle A(sub e)/A of 10.6, theta(sub n) = 7.55 deg, and P(sub c) = 155 psia simulated a LO2/LH2 rocket exhaust plume with gamma = 1.20, A(sub e)/A of 78, and P(sub c) of about 1,000 psia. An array of base pressure taps on the aft dome gave a clear measurement of the plume recirculation effects at p(infinity) = 4.76 psfa, corresponding to 145,000 ft altitude. Our CFD computations of the flow field, with direct comparison of computed-versus-measured base pressure distribution across the dome, provide detailed information on velocities and particle traces as well as eddy viscosity in the base and nozzle region. The solution was obtained using a six-zone mesh with 284,000 grid points for one quadrant, taking advantage of symmetry. Results are compared using a zero-equation algebraic and a one-equation pointwise R(sub t) turbulence model (work in progress). Good agreement with the experimental pressure data was obtained with both, and this benchmark showed the importance of: (1) proper grid clustering and (2) proper choice of turbulence modeling for rocket plume recirculation problems at high altitude.

  20. A new benchmark T8-9 brown dwarf and a couple of new mid-T dwarfs from the UKIDSS DR5+ LAS

    NASA Astrophysics Data System (ADS)

    Goldman, B.; Marsat, S.; Henning, T.; Clemens, C.; Greiner, J.

    2010-06-01

Benchmark brown dwarfs are those objects for which fiducial constraints are available, including effective temperature, parallax, age and metallicity. We searched for new cool brown dwarfs in 186 deg² of the new area covered by data release DR5+ of the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey. Follow-up optical and near-infrared broad-band photometry, and methane imaging of four promising candidates, revealed three objects with distinct methane absorption, typical of mid- to late-T dwarfs, and one possible T4 dwarf. The latest-type object, classified as T8-9, shares its large proper motion with Ross 458 (BD+13°2618), an active M0.5 binary 102 arcsec away, forming a hierarchical low-mass star + brown dwarf system. Ross 458C has an absolute J-band magnitude of 16.4 and seems overluminous, particularly in the K band, compared to similar field brown dwarfs. We estimate the age of the system to be less than 1 Gyr, and its mass to be as low as 14 Jupiter masses for an age of 1 Gyr. At 11.4 pc, this new late-T benchmark dwarf is a promising target for constraining the evolutionary and atmospheric models of very low-mass brown dwarfs. We present proper motion measurements for our targets and for 13 known brown dwarfs. Two brown dwarfs have velocities typical of the thick disc and may be old brown dwarfs. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC), and on observations made with the ESO/MPG telescope at La Silla Observatory under programme IDs 081.A-9012 and 081.A-9014.

  1. Comprehensive evaluation of untargeted metabolomics data processing software in feature detection, quantification and discriminating marker selection.

    PubMed

    Li, Zhucui; Lu, Yan; Guo, Yufeng; Cao, Haijie; Wang, Qinhong; Shui, Wenqing

    2018-10-31

Data analysis represents a key challenge for untargeted metabolomics studies, commonly requiring extensive processing of thousands of metabolite peaks in raw high-resolution MS data. Although a number of software packages have been developed to facilitate untargeted data processing, they have not been comprehensively scrutinized for their capabilities in feature detection, quantification and marker selection using a well-defined benchmark sample set. In this study, we acquired a benchmark dataset from standard mixtures consisting of 1100 compounds with specified concentration ratios, including 130 compounds with significant concentration variation. The five software packages evaluated here (MS-DIAL, MZmine 2, XCMS, MarkerView, and Compound Discoverer) showed similar performance in detecting true features derived from compounds in the mixtures. However, significant differences between the packages were observed in the relative quantification of true features in the benchmark dataset. MZmine 2 outperformed the other software in quantification accuracy and reported the most true discriminating markers together with the fewest false markers. Furthermore, we assessed the selection of discriminating markers by the different software using both the benchmark dataset and a real-case metabolomics dataset, and propose the combined usage of two packages to increase confidence in biomarker identification. Our findings from this comprehensive evaluation of untargeted metabolomics software should help guide future improvements of these widely used bioinformatics tools and enable users to properly interpret their metabolomics results. Copyright © 2018 Elsevier B.V. All rights reserved.
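    Marker selection of the kind compared here can be scored against a benchmark mixture with known concentration changes roughly as follows. A minimal Python sketch; the compound names are invented placeholders, not entries from the 1100-compound set:

    ```python
    # Compounds whose concentrations were deliberately varied in the benchmark
    # mixture (ground truth), versus the markers one tool reported.
    truly_changed = {"cpd_A", "cpd_B", "cpd_C"}
    reported_markers = {"cpd_A", "cpd_B", "cpd_X"}

    true_markers = reported_markers & truly_changed    # correctly flagged
    false_markers = reported_markers - truly_changed   # spurious markers

    precision = len(true_markers) / len(reported_markers)  # 2/3
    recall = len(true_markers) / len(truly_changed)        # 2/3
    ```

    Counting "most true markers, fewest false markers" across tools is exactly a comparison of these two quantities on the shared ground truth.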

  2. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, and observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and that this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD estimated using this technique performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show the utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
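    The averaging step (as distinct from the paper's full bootstrap procedure) can be sketched in Python. Everything below is illustrative: the two candidate dose-response models, their parameters, and the AIC values are invented, and Akaike weighting stands in as one common choice of model weights; the paper's BMDL would then come from bootstrapping this whole procedure.

    ```python
    import math

    # Two hypothetical fitted dichotomous dose-response models.
    def logistic(d, a=-2.0, b=0.5):
        return 1.0 / (1.0 + math.exp(-(a + b * d)))

    def quantal_linear(d, g=0.12, b=0.05):
        return g + (1.0 - g) * (1.0 - math.exp(-b * d))

    def akaike_weights(aics):
        """Normalized weights exp(-0.5 * dAIC) over the model space."""
        best = min(aics)
        raw = [math.exp(-0.5 * (a - best)) for a in aics]
        total = sum(raw)
        return [r / total for r in raw]

    weights = akaike_weights([102.3, 104.1])  # hypothetical fitted AICs

    def averaged_p(d):
        """The average dose-response model, not an average of BMDs."""
        return weights[0] * logistic(d) + weights[1] * quantal_linear(d)

    def extra_risk(d):
        p0 = averaged_p(0.0)
        return (averaged_p(d) - p0) / (1.0 - p0)

    def bmd(bmr=0.10, lo=0.0, hi=100.0, tol=1e-8):
        """Dose giving 10% extra risk under the averaged model (bisection;
        extra risk is monotone increasing in dose for these models)."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if extra_risk(mid) < bmr:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    ```

    Refitting both models to each bootstrap resample, recomputing the weights, and taking a lower percentile of the resulting BMD distribution would give the bootstrap BMDL the abstract describes.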

  3. Experimental and analytical studies of a model helicopter rotor in hover

    NASA Technical Reports Server (NTRS)

    Caradonna, F. X.; Tung, C.

    1981-01-01

    A benchmark test to aid the development of various rotor performance codes was conducted. Simultaneous blade pressure measurements and tip vortex surveys were made for a wide range of tip Mach numbers including the transonic flow regime. The measured tip vortex strength and geometry permit effective blade loading predictions when used as input to a prescribed wake lifting surface code. It is also shown that with proper inflow and boundary layer modeling, the supercritical flow regime can be accurately predicted.

  4. Toward multimodal signal detection of adverse drug reactions.

    PubMed

    Harpaz, Rave; DuMouchel, William; Schuemie, Martijn; Bodenreider, Olivier; Friedman, Carol; Horvitz, Eric; Ripple, Anna; Sorbello, Alfred; White, Ryen W; Winnenburg, Rainer; Shah, Nigam H

    2017-12-01

Improving mechanisms to detect adverse drug reactions (ADRs) is key to strengthening post-marketing drug safety surveillance. Signal detection is presently unimodal, relying on a single information source; multimodal signal detection is based on jointly analyzing multiple information sources. Building on and expanding the work done in prior studies, the aim of this article is to further research on multimodal signal detection, explore its potential benefits, and propose methods for its construction and evaluation. Four data sources are investigated: FDA's adverse event reporting system, insurance claims, the MEDLINE citation database, and the logs of major Web search engines. Published methods are used to generate and combine signals from each data source. Two distinct reference benchmarks, corresponding to well-established and recently labeled ADRs respectively, are used to evaluate the performance of multimodal signal detection in terms of area under the ROC curve (AUC) and lead time to detection, the latter relative to labeling revision dates. Limited to our reference benchmarks, multimodal signal detection provides AUC improvements ranging from 0.04 to 0.09 based on a widely used evaluation benchmark, and a comparative added lead time of 7-22 months relative to labeling revision dates from a time-indexed benchmark. The results support the notion that utilizing and jointly analyzing multiple data sources may lead to improved signal detection. Given certain data and benchmark limitations, the early stage of development, and the complexity of ADRs, it is currently not possible to make definitive statements about the ultimate utility of the concept. Continued development of multimodal signal detection requires a deeper understanding of the data sources used, additional benchmarks, and further research on methods to generate and synthesize signals. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.

  6. Duty periods for establishing eligibility for health care. Final rule.

    PubMed

    2013-12-26

The Department of Veterans Affairs (VA) is amending its medical regulations concerning eligibility for health care to re-establish the definitions of "active military, naval, or air service," "active duty," and "active duty for training." These definitions were deleted in 1996; however, we believe that all duty periods should be defined in part 17 of the Code of Federal Regulations (CFR) to ensure proper determination of eligibility for VA health care. We are also providing a more complete definition of "inactive duty training."

  7. Improvement and Evaluation of Copper Oxidation Experimental Procedure for the Introduction of the Law of Definite Proportion

    ERIC Educational Resources Information Center

    Yamashita, Shuichi; Kashiwaguma, Yasuyuki; Hayashi, Hideko; Pietzner, Verena

    2017-01-01

In science classes, students usually learn about the law of definite proportions through the oxidation of copper. However, common procedures usually do not lead to proper results, which confuses students because their experimental results do not fit the theoretical values. Therefore, we invented a new procedure for this experiment…

  8. An accelerated proximal augmented Lagrangian method and its application in compressive sensing.

    PubMed

    Sun, Min; Liu, Jing

    2017-01-01

As a first-order method, the augmented Lagrangian method (ALM) is a benchmark solver for linearly constrained convex programming, and in practice semi-definite proximal terms are often added to its primal variable's subproblem to make it more implementable. In this paper, we propose an accelerated proximal ALM with indefinite proximal regularization (PALM-IPR) for convex programming with linear constraints, which generalizes the proximal terms from semi-definite to indefinite. Under mild assumptions, we establish the worst-case [Formula: see text] convergence rate of PALM-IPR in a non-ergodic sense. Finally, numerical results show that our new method is feasible and efficient for solving compressive sensing problems.
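    For orientation, a generic proximal ALM iteration for min f(x) subject to Ax = b takes the following form (notation mine, not necessarily the paper's; classical analyses require the proximal matrix D to be positive semi-definite, whereas the abstract's point is that PALM-IPR admits an indefinite D):

    ```latex
    x^{k+1} = \arg\min_x \Big\{ f(x) + \tfrac{\beta}{2}\,\big\|Ax - b + \lambda^k/\beta\big\|^2
              + \tfrac{1}{2}\,\|x - x^k\|_D^2 \Big\},
    \qquad
    \lambda^{k+1} = \lambda^k + \beta\,(Ax^{k+1} - b).
    ```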

  9. Constraining the Mass of the Local Group through Proper Motion Measurements of Local Group Galaxies

    NASA Astrophysics Data System (ADS)

    Sohn, S. Tony; van der Marel, R.; Anderson, J.

    2012-01-01

The Local Group and its two dominant spiral galaxies have been the benchmark for testing many aspects of cosmological and galaxy formation theories, including, e.g., dark halo profiles and shapes, substructure and the "missing satellite" problem, and the minimum mass for galaxy formation. Yet despite the extensive work in all of these areas, the masses of the Milky Way and M31, and thus the total mass of the Local Group, remain among the most poorly established astronomical parameters (uncertain by a factor of 4). One important reason for this is the lack of information on the tangential motions of galaxies, which can be obtained only through proper motion measurements. In this study, we introduce our projects for measuring the absolute proper motions of (1) the dwarf spheroidal galaxy Leo I, (2) M31, and (3) four dwarf galaxies near the edge of the Local Group (Cetus, Leo A, Tucana, and Sag DIG). Results from these three independent measurements will provide important clues to the masses of the Milky Way, M31, and the Local Group as a whole. We also present our proper motion measurement technique, which uses compact background galaxies as astrometric reference sources.
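    The link between a measured proper motion and the tangential velocity it implies is the standard conversion v_t = 4.74 · μ · d (μ in arcsec/yr, d in pc). A minimal Python sketch, with purely illustrative numbers rather than measurements from this program:

    ```python
    def tangential_velocity_kms(mu_arcsec_per_yr, distance_pc):
        """v_t [km/s] = 4.74 * proper motion [arcsec/yr] * distance [pc]."""
        return 4.74 * mu_arcsec_per_yr * distance_pc

    # e.g. a proper motion of 50 microarcsec/yr at 770 kpc, roughly M31's
    # distance, corresponds to about 183 km/s of tangential velocity --
    # which is why such tiny astrometric shifts matter for the mass problem.
    v_t = tangential_velocity_kms(50e-6, 770_000)
    ```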

  10. Resonance Parameter Adjustment Based on Integral Experiments

    DOE PAGES

    Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; ...

    2016-06-02

Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters, while integral experimental data, such as those in the International Criticality Safety Benchmark Experiments Project, remained a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Integral data can then be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data; using GLLS ensures that proper weight is given to the differential data.

  11. Large eddy simulation of the FDA benchmark nozzle for a Reynolds number of 6500.

    PubMed

    Janiga, Gábor

    2014-04-01

    This work investigates the flow in a benchmark nozzle model of an idealized medical device proposed by the FDA using computational fluid dynamics (CFD). It was shown in particular that proper modeling of the transitional flow features is challenging, leading to large discrepancies and inaccurate predictions from the different research groups using Reynolds-averaged Navier-Stokes (RANS) modeling. In spite of the relatively simple, axisymmetric computational geometry, the resulting turbulent flow is fairly complex and non-axisymmetric, in particular due to the sudden expansion, and cannot be well predicted with simple modeling approaches. Due to the varying diameters and flow velocities encountered in the nozzle, different typical flow regions and regimes can be distinguished, from laminar to transitional and to weakly turbulent. The purpose of the present work is to re-examine the FDA-CFD benchmark nozzle model at a Reynolds number of 6500 using large eddy simulation (LES). The LES results are compared with published experimental data obtained by Particle Image Velocimetry (PIV), and excellent agreement is observed for the temporally averaged flow velocities. Different flow regimes are characterized by computing the temporal energy spectra at different locations along the main axis. Copyright © 2014 Elsevier Ltd. All rights reserved.
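Temporal energy spectra of the kind used above to distinguish flow regimes can be estimated from a pointwise velocity signal with a plain periodogram. The sketch below uses a synthetic probe signal, not the actual LES data; the normalization is chosen so that integrating the spectrum over frequency recovers the velocity variance.

```python
import numpy as np

def energy_spectrum(u, dt):
    """One-sided temporal energy spectrum of a velocity signal u(t),
    estimated from a plain periodogram of the fluctuating part."""
    u_fluc = u - np.mean(u)            # remove the mean flow component
    n = len(u_fluc)
    U = np.fft.rfft(u_fluc)
    freqs = np.fft.rfftfreq(n, d=dt)
    # Normalized so that sum(E) * df approximates the variance of u
    E = (2.0 * dt / n) * np.abs(U) ** 2
    return freqs, E

# Synthetic probe signal: a 50 Hz oscillation riding on a mean flow,
# plus weak broadband noise
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
rng = np.random.default_rng(0)
u = 1.0 + 0.3 * np.sin(2 * np.pi * 50.0 * t) + 0.01 * rng.standard_normal(t.size)

freqs, E = energy_spectrum(u, dt)
f_peak = freqs[np.argmax(E)]           # spectral peak, near 50 Hz here
```

In practice one would average periodograms over windows (Welch's method) to reduce the variance of the estimate, but the single-shot version already shows the dominant time scale at each probe location.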

  12. 7 CFR 1717.652 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., all investments properly recorded on the borrower's books and records in investment accounts as those.... Operating DSC means Operating Debt Service Coverage (ODSC) of the borrower's electric system calculated as...

  13. An Expert System for Processing Uncorrelated Satellite Tracks

    DTIC Science & Technology

    1992-12-17

    earthworms with much intellect even though they routinely carry out this same function. One definition given artificial intelligence is "the study of mental...Networks: Benchmarking Studies," Proceedings from the IEEE International Conference on Neural Networks, pp. 64-65, 1988. 229 Lyddane, R., "Small...reverse if necessary and identify by block number, Field Group Subgroup Artificial Intelligence, Expert Systems, Neural Networks, Orbital Mechanics

  14. Spleens and holoendemic malaria in West New Guinea.

    PubMed

    METSELAAR, D

    1956-01-01

    The author describes the results obtained in recent malaria surveys in West New Guinea, where what is essentially holoendemic malaria prevails. However, the spleen-rate in adults differs markedly from what is regarded as normal under holoendemic conditions according to the definition put forward at the Malaria Conference in Equatorial Africa in 1950. The author therefore concludes that that definition is not properly applicable to New Guinea.

  15. The Milky Way Tomography With SDSS. 3. Stellar Kinematics

    DTIC Science & Technology

    2010-06-10

    photometric-parallax and photometric-metallicity methods, and then describe the proper-motion data and their error analysis. The subsample definitions are...symmetry. The observed ellipsoid is sufficiently round, however, that no definitive comparison can be made. 4.2. Direct Determination of the Solar...Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National

  16. Assessment of the MPACT Resonance Data Generation Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Williams, Mark L.

    Currently, heterogeneous models are being used to generate resonance self-shielded cross-section tables as a function of background cross sections for important nuclides such as 235U and 238U by performing the CENTRM (Continuous Energy Transport Model) slowing-down calculation with the MOC (Method of Characteristics) spatial discretization and ESSM (Embedded Self-Shielding Method) calculations to obtain background cross sections. The resonance self-shielded cross-section tables are then converted into subgroup data, which are used in estimating problem-dependent self-shielded cross sections in MPACT (Michigan Parallel Characteristics Transport Code). Although this procedure has been developed, and resonance data have been generated and validated by benchmark calculations, no assessment has been performed to review whether the resonance data are properly generated by the procedure and utilized in MPACT. This study focuses on assessing the procedure and the proper use of the resonance data in MPACT.

  17. Benchmark solution for the Spencer-Lewis equation of electron transport theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.

    As integrated circuits become smaller, the shielding of these sensitive components against penetrating electrons becomes extremely critical. Monte Carlo methods have traditionally been the method of choice in shielding evaluations, primarily because they can incorporate a wide variety of relevant physical processes. Recently, however, as a result of a more accurate numerical representation of the highly forward-peaked scattering process, S_n methods for one-dimensional problems have been shown to be at least as cost-effective in comparison with Monte Carlo methods. With the development of these deterministic methods for electron transport, a need has arisen to assess the accuracy of proposed numerical algorithms and to ensure their proper coding. It is the purpose of this presentation to develop a benchmark to the Spencer-Lewis equation describing the transport of energetic electrons in solids. The solution will take advantage of the correspondence between the Spencer-Lewis equation and the transport equation describing one-group time-dependent neutron transport.

  18. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins

    PubMed Central

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    Aim: This study was conducted to find the best-suited freely available software for modelling of proteins by taking a few sample proteins. The proteins used ranged from small to large in size, with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2 and Modweb were used for the comparison and model generation. Results: The benchmarking process was carried out for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the most suitable software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. Conclusion: This comparative study showed that Phyre2 and Swiss-Model produce good models of both small and large proteins compared with the other screened software, which, while also reasonable, was often unable to provide a full-length and properly folded structure. PMID:24023424

  19. Implementing ADM1 for plant-wide benchmark simulations in Matlab/Simulink.

    PubMed

    Rosen, C; Vrecko, D; Gernaey, K V; Pons, M N; Jeppsson, U

    2006-01-01

    The IWA Anaerobic Digestion Model No. 1 (ADM1) was presented in 2002 and is expected to represent the state-of-the-art model within this field in the future. Due to its complexity, the implementation of the model is not a simple task, and several computational aspects need to be considered, in particular if the ADM1 is to be included in dynamic simulations of plant-wide or even integrated systems. In this paper, the experiences gained from a Matlab/Simulink implementation of ADM1 into the extended COST/IWA Benchmark Simulation Model (BSM2) are presented. Aspects related to system stiffness, model interfacing with the ASM family, mass balances, acid-base equilibrium, algebraic solvers for pH and other troublesome state variables, numerical solvers and simulation time are discussed. The main conclusion is that, if implemented properly, the ADM1 will also produce high-quality results in dynamic plant-wide simulations including noise, discrete sub-systems, etc., without imposing any major restrictions due to extensive computational efforts.
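The algebraic pH solver mentioned above boils down to finding, at each time step, the hydrogen-ion concentration that closes the ionic charge balance. A minimal sketch follows, using a single acetate buffer and a log-scale bisection; ADM1's actual implementation tracks many more ion pairs, but the structure of the problem is the same.

```python
import math

def solve_ph(s_cat, s_an, s_ac, Ka=10**-4.76, Kw=1e-14):
    """Solve the charge balance for [H+] by bisection on a log scale
    (toy version of the algebraic pH step in ADM1-style implementations).

    s_cat, s_an : inert cation/anion concentrations (mol/L)
    s_ac        : total acetate concentration (mol/L)
    Ka, Kw      : acetate dissociation and water ionization constants
    """
    def charge_balance(h):
        ac_minus = Ka * s_ac / (Ka + h)        # dissociated acetate fraction
        return s_cat + h - s_an - ac_minus - Kw / h

    # charge_balance is monotonically increasing in h, so a bracket
    # spanning all plausible [H+] values is enough for bisection
    lo, hi = 1e-14, 1.0
    for _ in range(200):
        mid = math.sqrt(lo * hi)               # geometric mean: search in log space
        if charge_balance(mid) > 0:
            hi = mid
        else:
            lo = mid
    h = math.sqrt(lo * hi)
    return -math.log10(h)                      # pH

# Toy state: 0.02 mol/L net inert cations buffered by 0.1 mol/L acetate
ph = solve_ph(s_cat=0.04, s_an=0.02, s_ac=0.10)
```

Because the residual is monotone in [H+], this inner solve is cheap and robust, which is why treating pH algebraically (rather than as a stiff differential state) is a common way to tame the stiffness the abstract describes.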

  20. Heart Rate Monitor for Portable MP3 Player.

    PubMed

    Kim, Jaywoo; Lee, Mi-Hee; Lee, Hyoung-Ki; Choi, Kiwan; Bang, Seokwon; Kim, Sangryong

    2005-01-01

    This paper presents a photoplethysmography sensor based on a heart rate monitor for a portable MP3 player. Two major design issues are addressed: one is to acquire the sensor signal with a proper amplitude despite a wide range of variation and the other is to handle the noise contaminated signal which is caused by a motion artifact. A benchmarking test with a professional medical photoplethysmography sensor shows that our device performs very well in calculating heart rate even though our photoplethysmography sensor module was designed to be cost effective.

  1. Calibrating coseismic coastal land-level changes during the 2014 Iquique (Mw=8.2) earthquake (northern Chile) with leveling, GPS and intertidal biota

    PubMed Central

    Melnick, Daniel; Baez, Juan Carlos; Montecino, Henry; Lagos, Nelson A.; Acuña, Emilio; Manzano, Mario; Camus, Patricio A.

    2017-01-01

    The April 1st 2014 Iquique earthquake (Mw 8.1) occurred along the northern Chile margin, where the Nazca plate is subducted below the South American continent. The last great megathrust earthquake here, in 1877 (Mw ~8.8), opened a seismic gap, which was only partly closed by the 2014 earthquake. Prior to the earthquake, in 2013, and shortly after it, we compared data from leveled benchmarks, deployed campaign GPS instruments and continuous GPS stations, and estimated sea levels using the upper vertical limit of rocky shore benthic organisms including algae, barnacles, and mussels. Land-level changes estimated from mean elevations of benchmarks indicate subsidence along a ~100-km stretch of coast, ranging from 3 to 9 cm at Corazones (18°30'S) to between 30 and 50 cm at Pisagua (19°30'S). About 15 cm of uplift was measured along the southern part of the rupture at Chanabaya (20°50'S). Land-level changes obtained from benchmarks and campaign GPS were similar at most sites (mean difference 3.7±3.2 cm). Higher differences, however, were found between benchmarks and continuous GPS (mean difference 8.5±3.6 cm), possibly because the sites were not collocated and were separated by several kilometers. Subsidence estimated from the upper limits of intertidal fauna at Pisagua ranged between 40 and 60 cm, in general agreement with benchmarks and GPS. At Chanavaya, the magnitude and sense of displacement of the upper marine limit varied across species, possibly due to species-dependent differences in ecology. Among the studied species, measurements on lithothamnioid calcareous algae most closely matched those made with benchmarks and GPS. When properly calibrated, rocky shore benthic species may be used to accurately measure land-level changes along coasts affected by subduction earthquakes. Our calibration of those methods will improve their accuracy when applied to coasts lacking pre-earthquake data and in estimating deformation during pre-instrumental earthquakes. PMID:28333998

  2. On the evaluation of the fidelity of supervised classifiers in the prediction of chimeric RNAs.

    PubMed

    Beaumeunier, Sacha; Audoux, Jérôme; Boureux, Anthony; Ruffle, Florence; Commes, Thérèse; Philippe, Nicolas; Alves, Ronnie

    2016-01-01

    High-throughput sequencing technology and bioinformatics have identified chimeric RNAs (chRNAs), raising the possibility that chRNAs expressed particularly in diseases can be used as potential biomarkers in both diagnosis and prognosis. The task of discriminating true chRNAs from false ones poses an interesting Machine Learning (ML) challenge. First of all, the sequencing data may contain false reads due to technical artifacts, and during the analysis process bioinformatics tools may generate false positives due to methodological biases. Moreover, even if we succeed in obtaining a proper set of observations (enough sequencing data) about true chRNAs, chances are that the devised model will not be able to generalize beyond it. Like any other machine learning problem, the first big issue is finding good data to build models. As far as we are aware, there is no common benchmark data available for chRNA detection, and the definition of a classification baseline is lacking in the related literature too. In this work we move towards benchmark data and an evaluation of the fidelity of supervised classifiers in the prediction of chRNAs. We propose a modelization strategy that can be used to increase tool performance in the context of chRNA classification, based on a simulated data generator that permits the continuous integration of new complex chimeric events. The pipeline incorporates a genome mutation process and simulated RNA-seq data. The reads, at distinct depths, were aligned and analysed by CRAC, which integrates genomic location and local coverage, allowing biological predictions at the read scale. Additionally, these reads were functionally annotated and aggregated to form chRNA events, making it possible to evaluate the performance of ML methods (classifiers) at both the read and event levels. Ensemble learning strategies proved more robust for this classification problem, providing an average AUC performance of 95% (ACC = 94%, Kappa = 0.87).
The resulting classification models were also tested on real RNA-seq data from a set of twenty-seven patients with acute myeloid leukemia (AML).
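The AUC figures reported above can be computed directly from classifier scores via the Mann-Whitney rank statistic: the probability that a randomly chosen positive outscores a randomly chosen negative. The snippet below is a generic illustration on hand-made scores, unrelated to the chRNA pipeline itself.

```python
import numpy as np

def auc_from_scores(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs where the positive
    receives the higher score, counting ties as one half."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true]
    neg = scores[~y_true]
    # Explicit pairwise comparison; O(n_pos * n_neg), fine for illustration
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic example: three positives, three negatives; one positive
# (score 0.4) is outranked by one negative (score 0.5), so 8 of the
# 9 pairs are correctly ordered
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = auc_from_scores(y, s)
```

Unlike accuracy, this measure is threshold-free, which is one reason AUC is the headline metric when comparing classifiers whose score calibrations differ.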

  3. Mapping Turnaround Times (TAT) to a Generic Timeline: A Systematic Review of TAT Definitions in Clinical Domains

    PubMed Central

    2011-01-01

    Background Assessing turnaround times can help to analyse workflows in hospital information systems. This paper presents a systematic review of literature concerning different turnaround time definitions. Our objectives were to collect relevant literature with respect to this kind of process times in hospitals and their respective domains. We then analysed the existing definitions and summarised them in an appropriate format. Methods Our search strategy was based on Pubmed queries and manual reviews of the bibliographies of retrieved articles. Studies were included if precise definitions of turnaround times were available. A generic timeline was designed through a consensus process to provide an overview of these definitions. Results More than 1000 articles were analysed and resulted in 122 papers. Of those, 162 turnaround time definitions in different clinical domains were identified. Starting and end points vary between these domains. To illustrate those turnaround time definitions, a generic timeline was constructed using preferred terms derived from the identified definitions. The consensus process resulted in the following 15 terms: admission, order, biopsy/examination, receipt of specimen in laboratory, procedure completion, interpretation, dictation, transcription, verification, report available, delivery, physician views report, treatment, discharge and discharge letter sent. Based on this analysis, several standard terms for turnaround time definitions are proposed. Conclusion Using turnaround times to benchmark clinical workflows is still difficult, because even within the same clinical domain many different definitions exist. Mapping of turnaround time definitions to a generic timeline is feasible. PMID:21609424

  4. Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-12-01

    Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. 
Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.

  5. Ultracool dwarf benchmarks with Gaia primaries

    NASA Astrophysics Data System (ADS)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability <10-4, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ˜500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.

  6. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  7. On variational definition of quantum entropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belavkin, Roman V.

    Entropy of a distribution P can be defined in at least three different ways: 1) as the expectation of the Kullback-Leibler (KL) divergence of P from elementary δ-measures (in this case, it is interpreted as expected surprise); 2) as a negative KL-divergence of some reference measure ν from the probability measure P; 3) as the supremum of Shannon's mutual information taken over all channels such that P is the output probability, in which case it is dual to some transportation problem. In classical (i.e. commutative) probability, all three definitions lead to the same quantity, providing only different interpretations of entropy. In non-commutative (i.e. quantum) probability, however, these definitions are not equivalent. In particular, the third definition, where the supremum is taken over all entanglements of two quantum systems with P being the output state, leads to a quantity that can be twice the von Neumann entropy. It was proposed originally by V. Belavkin and Ohya [1] and called the proper quantum entropy, because it allows one to define a quantum conditional entropy that is always non-negative. Here we extend these ideas to also define quantum counterparts of proper cross-entropy and cross-information. We also show an inequality for the values of classical and quantum information.
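In the classical (commutative) case, the three definitions sketched above can be rendered schematically as follows; the notation here is illustrative, not the paper's exact formulation.

```latex
% 1) Expected surprise: mean KL divergence of elementary delta-measures from P
S(P) \;=\; \mathbb{E}_{P}\!\left[\, D_{\mathrm{KL}}(\delta_x \,\|\, P) \,\right]
      \;=\; -\sum_x P(x)\,\ln P(x)

% 2) Negative KL divergence involving a reference measure \nu
S(P) \;=\; -\, D_{\mathrm{KL}}(P \,\|\, \nu)
      \qquad (\nu = \text{counting measure recovers Shannon entropy})

% 3) Supremum of mutual information over all channels with output P
S(P) \;=\; \sup_{\substack{p(x,y)\,:\; \sum_x p(x,y) \,=\, P(y)}} I(X;Y)
```

For a finite alphabet all three expressions evaluate to the same number; it is only when the joint distribution in (3) is replaced by an entanglement of two quantum systems that the supremum can exceed the von Neumann entropy, as the abstract notes.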

  8. Flight program language requirements. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The activities and results of a study for the definition of flight program language requirements are described. A set of detailed requirements are presented for a language capable of supporting onboard application programming for the Marshall Space Flight Center's anticipated future activities in the decade of 1975-85. These requirements are based, in part, on the evaluation of existing flight programming language designs to determine the applicability of these designs to flight programming activities which are anticipated. The coding of benchmark problems in the selected programming languages is discussed. These benchmarks are in the form of program kernels selected from existing flight programs. This approach was taken to insure that the results of the study would reflect state of the art language capabilities, as well as to determine whether an existing language design should be selected for adaptation.

  9. RBscore&NBench: a high-level web server for nucleic acid binding residues prediction with a large-scale benchmarking database.

    PubMed

    Miao, Zhichao; Westhof, Eric

    2016-07-08

    RBscore&NBench combines a web server, RBscore and a database, NBench. RBscore predicts RNA-/DNA-binding residues in proteins and visualizes the prediction scores and features on protein structures. The scoring scheme of RBscore directly links feature values to nucleic acid binding probabilities and illustrates the nucleic acid binding energy funnel on the protein surface. To avoid dataset, binding site definition and assessment metric biases, we compared RBscore with 18 web servers and 3 stand-alone programs on 41 datasets, which demonstrated the high and stable accuracy of RBscore. A comprehensive comparison led us to develop a benchmark database named NBench. The web server is available on: http://ahsoka.u-strasbg.fr/rbscorenbench/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Resetting Biological Clocks

    ERIC Educational Resources Information Center

    Winfree, Arthur T.

    1975-01-01

    Reports on experiments conducted on two biological clocks, in organisms in the plant and animal kingdoms, which indicate that biological oscillation can be arrested by a single stimulus of a definite strength delivered at the proper time. (GS)

  11. 49 CFR 240.7 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... engineer under this part, (2) Has been selected by the railroad to teach others proper train handling... upon which the operation of trains is governed by one or more of the following methods of operation...

  12. 7 CFR 51.3746 - Mature.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Honey Dew and Honey Ball Type Melons Definitions § 51.3746 Mature. Mature means that the melon has reached the stage of maturity which will insure the proper completion of the normal...

  13. 7 CFR 51.3746 - Mature.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Honey Dew and Honey Ball Type Melons Definitions § 51.3746 Mature. Mature means that the melon has reached the stage of maturity which will insure the proper completion of the normal...

  14. Defining Information Security.

    PubMed

    Lundgren, Björn; Möller, Niklas

    2017-11-15

    This article proposes a new definition of information security, the 'Appropriate Access' definition. Apart from providing the basic criteria for a definition (correct demarcation and meaning concerning the state of security), it also aims at being a definition suitable for any information security perspective. As such, it bridges the conceptual divide between so-called 'soft issues' of information security (those including, e.g., humans, organizations, culture, ethics, policies, and law) and more technical issues. Because of this it is also suitable for various analytical purposes, such as analysing possible security breaches, or for studying conflicting attitudes on security in an organization. The need for a new definition is demonstrated by pointing to a number of problems for the standard type of definition of information security, the so-called CIA definition. Besides being both too broad and too narrow, it cannot properly handle the soft issues of information security, nor recognize the contextual and normative nature of security.

  15. Nociception, pain, consciousness, and society: A plea for constrained use of pain-related terminologies.

    PubMed

    Apkarian, A Vania

    2018-06-08

    This focus article addresses the issue of the proper use of terminology in pain research. A review of, and some revisions to, the definitions of pain and nociception in relation to consciousness are presented. From a behavioral viewpoint, it is argued that pain is a conscious assessment of the failure of the organism to protect the body from injury (actual or potential), while continuously ongoing sub/pre-conscious nociceptive processes protect the body from injuries. Thus, pain perception/behavior requires a subjective ability to evaluate the environment and form coordinated responses. Yet, too often our literature conflates the two concepts, resulting in a confusion that impacts on society. The issue is especially topical as the US Senate has been voting on a bill called the Pain-Capable Unborn Child Protection Act. The title of the bill itself does not make sense if we adhere to the strict definitions commonly accepted in our field. Thus, this article concludes with a plea to properly constrain the narrative with which we describe our research, and to minimize potential abuse of the science of pain for political interests. Perspective: The focus article goes over the classic definitions of pain and nociception; incorporates novel concepts recently advanced as to their functional differentiation; and is a plea for our research and clinical society to adhere to the proper use of these terms to minimize misinterpretation by society at large. Copyright © 2018. Published by Elsevier Inc.

  16. The tethered galaxy problem: a possible window to explore cosmological models

    NASA Astrophysics Data System (ADS)

    Tangmatitham, Matipon; Nemiroff, Robert J.

    2017-01-01

    In the tethered galaxy problem, a hypothetical galaxy is being held at a fixed proper distance. Contrary to Newtonian intuition, it has been shown that this tethered galaxy can have a nonzero redshift. However, constant proper distance has been suggested as unphysical in a cosmological setting and therefore other definitions have been suggested. The tethered galaxy problem is therefore reviewed in Friedmann cosmology. In this work, different tethers are considered as possible local cosmological discriminators.

  17. The Environmental Management Project Manager`s Handbook for improved project definition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-02-01

    The United States Department of Energy (DOE) is committed to providing high-quality products that satisfy customer needs. To meet the responsibilities associated with this goal, DOE personnel must possess the knowledge, skills, and abilities to ensure successful job performance. In addition, there must be recognition that the greatest obstacle to proper project performance is inadequate project definition. Without strong project definition, DOE environmental management efforts are vulnerable to fragmented solutions, duplication of effort, and wasted resources. The primary means of ensuring environmental management projects meet cost and schedule milestones is through a structured and graded approach to project definition, which is the focus of this handbook.

  18. Oceanic signals in rapid polar motion: results from a barotropic forward model with explicit consideration of self-attraction and loading effects

    NASA Astrophysics Data System (ADS)

    Schindelegger, Michael; Quinn, Katherine J.; Ponte, Rui M.

    2017-04-01

    Numerical modeling of non-tidal variations in ocean currents and bottom pressure has played a key role in closing the excitation budget of Earth's polar motion for a wide range of periodicities. Non-negligible discrepancies between observations and model accounts of pole position changes prevail, however, on sub-monthly time scales and call for examination of hydrodynamic effects usually omitted in general circulation models. Specifically, complete hydrodynamic cores must incorporate self-attraction and loading (SAL) feedbacks on redistributed water masses, effects that produce ocean bottom pressure perturbations of typically about 10% of the computed mass variations. Here, we report on a benchmark simulation with a near-global, barotropic forward model forced by wind stress, atmospheric pressure, and a properly calculated SAL term. The latter is obtained by decomposing ocean mass anomalies on a 30-minute grid into spherical harmonics at each time step and applying Love numbers to account for seafloor deformation and changed gravitational attraction. The increase in computational time at each time step is on the order of 50%. Preliminary results indicate that the explicit consideration of SAL in the forward runs increases the fidelity of modeled polar motion excitations, in particular on time scales shorter than 5 days, as evident from cross-spectral comparisons with geodetic excitation. Definite conclusions regarding the relevance of SAL in simulating rapid polar motion are, however, still hampered by the model's incomplete domain representation, which excludes parts of the highly energetic Arctic Ocean.
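Degree by spherical-harmonic degree, the SAL term described above amounts to a simple scaling of the ocean mass anomaly. A toy sketch of the standard degree-dependent admittance follows; the load Love numbers below are illustrative placeholders (a real model would read a published, PREM-based set), and only the structure of the calculation is meant to be accurate.

```python
import numpy as np

# Each spherical-harmonic degree n of the ocean mass anomaly (expressed as
# equivalent water height) contributes to SAL through the admittance
#   eps_n = (3 rho_w / rho_e) * (1 + k'_n - h'_n) / (2n + 1),
# where k'_n and h'_n are load Love numbers for gravitational attraction
# and seafloor deformation, respectively.

RHO_W = 1025.0   # sea water density, kg/m^3
RHO_E = 5517.0   # mean Earth density, kg/m^3

def sal_factors(k_load, h_load):
    """SAL admittance for degrees n = 0..N, given load Love number
    arrays k'_n and h'_n indexed by degree."""
    n = np.arange(len(k_load))
    return (3.0 * RHO_W / RHO_E) * (1.0 + k_load - h_load) / (2.0 * n + 1.0)

# Placeholder Love numbers for degrees 0..4 (k'_0 = h'_0 = 0 by convention);
# NOT a published set, values chosen only to be plausible in sign and size
k_load = np.array([0.0, -0.31, -0.20, -0.13, -0.10])
h_load = np.array([0.0, -1.29, -1.00, -1.05, -1.05])
eps = sal_factors(k_load, h_load)

# Applying the kernel: multiply each degree's coefficients by eps[n]
sigma_n = np.ones(5)        # toy per-degree amplitudes of water height
sal_n = eps * sigma_n       # per-degree SAL contribution
```

Because the admittance decays with degree, the expensive part in a 30-minute-grid model is the repeated forward/inverse spherical-harmonic transform at every time step, consistent with the ~50% cost increase quoted above.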

  19. A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.

    1998-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of the H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller authority levels. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and parametric uncertainty. Finally, a set of mixed H2/mu compensators is designed and optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  20. Locating underwater objects. [technology transfer

    NASA Technical Reports Server (NTRS)

    Grice, C. F.

    1974-01-01

    Underwater search operations are considered to be engineering and operational problems. A process for proper definition of the problem and selection of instrumentation and operational procedures is described. An outline of underwater search instrumentation and techniques is given.

  1. Earth Observing System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine whether a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
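
    The core of the test above, comparing the empirical distribution of squared Mahalanobis distances to a 3-DoF chi-squared parent, can be sketched as follows. This is a simplified illustration (diagonal covariance, a KS-style maximum ECDF gap), not the study's exact statistic; the 3-DoF chi-squared CDF has the closed form erf(sqrt(x/2)) - sqrt(2x/pi)*exp(-x/2).

```python
import math
import random

def chi2_cdf_3dof(x):
    """Closed-form CDF of the chi-squared distribution with 3 degrees of freedom."""
    return math.erf(math.sqrt(x / 2.0)) - math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

def mahalanobis_sq(resid, var):
    """Squared Mahalanobis distance for a diagonal covariance (simplification)."""
    return sum(r * r / v for r, v in zip(resid, var))

def ecdf_gof_stat(d2_samples):
    """KS-style maximum gap between the empirical CDF of squared Mahalanobis
    distances and the hypothesized 3-DoF chi-squared parent distribution."""
    xs = sorted(d2_samples)
    n = len(xs)
    return max(abs((i + 1) / n - chi2_cdf_3dof(x)) for i, x in enumerate(xs))
```

    A well-sized covariance yields a small gap statistic; in the tuning loop described above, process noise would be increased until the statistic falls below the chosen significance threshold.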

  2. Pavement marking extensions for deceleration lanes.

    DOT National Transportation Integrated Search

    1974-01-01

    Pavement markings have definite and important functions in a proper scheme of traffic control. One such marking, the pavement edge line, has received much favorable public reaction. One of the limitations of the edge line as conventionally applied is...

  3. Importance of inlet boundary conditions for numerical simulation of combustor flows

    NASA Technical Reports Server (NTRS)

    Sturgess, G. J.; Syed, S. A.; Mcmanus, K. R.

    1983-01-01

    Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve a performance standard with these codes of more than qualitative accuracy it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult to find suitable experiments which satisfy the present definition of benchmark quality. For the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and the spatial distributions of inlet quantities for swirling flows.

  4. International Continence Society guidelines on urodynamic equipment performance.

    PubMed

    Gammie, Andrew; Clarkson, Becky; Constantinou, Chris; Damaser, Margot; Drinnan, Michael; Geleijnse, Geert; Griffiths, Derek; Rosier, Peter; Schäfer, Werner; Van Mastrigt, Ron

    2014-04-01

    These guidelines provide benchmarks for the performance of urodynamic equipment, and have been developed by the International Continence Society to assist purchasing decisions, design requirements, and performance checks. The guidelines suggest ranges of specification for uroflowmetry, volume, pressure, and EMG measurement, along with recommendations for user interfaces and performance tests. Factors affecting measurement relating to the different technologies used are also described. Summary tables of essential and desirable features are included for ease of reference. It is emphasized that these guidelines can only contribute to good urodynamics if equipment is used properly, in accordance with good practice. © 2014 Wiley Periodicals, Inc.

  5. Data Quality- and Master Data Management - A Hospital Case.

    PubMed

    Arthofer, Klaus; Girardi, Dominic

    2017-01-01

    Poor data quality prevents the analysis of data for decisions which are critical for business. It also has a negative impact on business processes. Nevertheless, the maturity level of data quality and master data management is still insufficient in many organizations today. This article discusses the corresponding maturity of companies and a management cycle integrating data quality and master data management, in a case dealing with benchmarking in hospitals. In conclusion, if data quality and master data are not properly managed, structured data should not be acquired in the first place, due to the added expense and complexity.

  6. Comment on Modified Stokes Parameters

    NASA Technical Reports Server (NTRS)

    Le Vine, D.M.; Utku, C.

    2009-01-01

    It is common practice in passive microwave remote sensing (microwave radiometry) to express observables as temperatures and in the case of polarimetric radiometry to use what are called "Modified Stokes Parameters in Brightness Temperature" to describe the scene. However, definitions with slightly different normalization (with and without division by bandwidth) have appeared in the literature. The purpose of this manuscript is to present an analysis to clarify the meaning of terms in the definition and resolve the question of the proper normalization.

  7. 40 CFR 745.83 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... with the manufacturer's instructions. Interim controls means a set of measures designed to temporarily... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT LEAD-BASED... disposable cleaning cloths with the card, whether post-renovation cleaning has been properly completed...

  8. 40 CFR 745.83 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... with the manufacturer's instructions. Interim controls means a set of measures designed to temporarily... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT LEAD-BASED... disposable cleaning cloths with the card, whether post-renovation cleaning has been properly completed...

  9. 40 CFR 745.83 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... with the manufacturer's instructions. Interim controls means a set of measures designed to temporarily... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT LEAD-BASED... disposable cleaning cloths with the card, whether post-renovation cleaning has been properly completed...

  10. Evaluation of VHF-FM Shore-Based Direction Finding Triangulation System in Massachusetts Bay Area

    DOT National Transportation Integrated Search

    1983-06-01

    The evaluation consisted of the following phases: (1) system definition and site selection; (2) system calibration; (3) operational evaluation; and (4) cost/benefit analysis. It was concluded that properly implemented shore-based direction finding systems in either ...

  11. Differential Die-Away Instrument: Report on Benchmark Measurements and Comparison with Simulation for the Effects of Neutron Poisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsell, Alison Victoria; Swinhoe, Martyn Thomas; Henzl, Vladimir

    2015-03-30

    In this report, new experimental data and MCNPX simulation results of the differential die-away (DDA) instrument response to the presence of neutron absorbers are evaluated. In our previous fresh nuclear fuel experiments and simulations, no neutron absorbers or poisons were included in the fuel definition. These new results showcase the capability of the DDA instrument to acquire data from a system that better mimics spent nuclear fuel.

  12. Methodologie experimentale pour evaluer les caracteristiques des plateformes graphiques avioniques

    NASA Astrophysics Data System (ADS)

    Legault, Vincent

    Within a context where the aviation industry intensifies the development of new visually appealing features and where time-to-market must be as short as possible, rapid graphics processing benchmarking in a certified avionics environment becomes an important issue. With this work we intend to demonstrate that it is possible to deploy a high-performance graphics application on an avionics platform that uses certified graphical COTS components. Moreover, we would like to bring to the avionics community a methodology which will allow developers to identify the elements needed for graphics system optimisation, and to provide them with tools that can measure the complexity of this type of application and the amount of resources required to properly scale a graphics system according to their needs. As far as we know, no graphics performance profiling tool dedicated to critical embedded architectures has been proposed. We thus had the idea of implementing a specialized benchmarking tool that would be an appropriate and effective solution to this problem. Our solution resides in the extraction of the key graphics specifications from an inherited application to use them afterwards in a 3D image generation application.

  13. Introduction of risk size in the determination of uncertainty factor UFL in risk assessment

    NASA Astrophysics Data System (ADS)

    Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei

    2012-09-01

    The methodology for using uncertainty factors in health risk assessment has been developed over several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from LOAEL (lowest observed adverse effect level) to NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at LOAEL based on the dose-response information, which represents a very important factor that should be carefully considered. This linear formula makes it possible to select UFL properly in the additional risk range from 5.3% to 16.2%. The results also indicate that the default value of 10 may not be conservative enough when the additional risk level at LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL instead of the traditional default value, but also ensures a conservative estimation of the UFL with fewer errors and avoids the benchmark response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in risk assessment.
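
    The linear rule above can be sketched as a simple interpolation over the reported validity range. Note the heavy caveat: the abstract gives only the 5.3%-16.2% range and the fact that UFL=10 is no longer conservative above 16.2%; the UFL value assigned to the lower endpoint below is a HYPOTHETICAL placeholder, not a figure from the paper.

```python
# Hedged sketch of a linear UFL(risk) selection rule. UFL_LO at the 5.3%
# endpoint is an assumed illustrative value, not taken from the study.
RISK_LO, RISK_HI = 0.053, 0.162   # validity range reported in the abstract
UFL_LO, UFL_HI = 1.0, 10.0        # UFL_LO is a hypothetical placeholder

def ufl_linear(risk):
    """Linearly interpolate UFL over the supported additional-risk range."""
    if not (RISK_LO <= risk <= RISK_HI):
        raise ValueError("linear rule only defined for 5.3%-16.2% additional risk")
    frac = (risk - RISK_LO) / (RISK_HI - RISK_LO)
    return UFL_LO + frac * (UFL_HI - UFL_LO)
```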

  14. The Mass of the Milky Way via HST Proper Motions of Satellite Objects

    NASA Astrophysics Data System (ADS)

    Sohn, Sangmo Tony; van der Marel, Roeland

    2018-01-01

    The Universe evolves hierarchically, with small structures merging and falling in to form bigger structures. Due to its proximity, the Milky Way (MW) is the best place to witness and study these hierarchical processes in action, as evidenced by, e.g., the many stellar streams found in the MW halo. Stellar systems in the MW halo have therefore become the benchmark for testing many aspects of cosmological theories. Despite the advances in both observational and theoretical areas in the last decade or so, the total mass and mass profile of the MW still remain poorly constrained, mainly due to the limited information on the transverse motions of satellite objects in the MW halo. As part of our HSTPROMO collaboration, we have been measuring proper motions of stars, globular clusters, and satellite galaxies in the MW halo to remedy this situation. In this contribution, I will present results from our recent studies and report our progress on ongoing projects.

  15. 7 CFR 654.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the sponsor(s) and NRCS or other recipient(s) in which responsibilities and actions are established... conservation, development, utilization, and disposal of water; the conservation and proper utilization of land... U.S. Virgin Islands. Structural measures. Structural measures are those measures that are excavated...

  16. 7 CFR 654.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the sponsor(s) and NRCS or other recipient(s) in which responsibilities and actions are established... conservation, development, utilization, and disposal of water; the conservation and proper utilization of land... U.S. Virgin Islands. Structural measures. Structural measures are those measures that are excavated...

  17. 7 CFR 654.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the sponsor(s) and NRCS or other recipient(s) in which responsibilities and actions are established... conservation, development, utilization, and disposal of water; the conservation and proper utilization of land... U.S. Virgin Islands. Structural measures. Structural measures are those measures that are excavated...

  18. 7 CFR 654.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the sponsor(s) and NRCS or other recipient(s) in which responsibilities and actions are established... conservation, development, utilization, and disposal of water; the conservation and proper utilization of land... U.S. Virgin Islands. Structural measures. Structural measures are those measures that are excavated...

  19. A benchmark initiative on mantle convection with melting and melt segregation

    NASA Astrophysics Data System (ADS)

    Schmeling, Harro; Dohmen, Janik; Wallner, Herbert; Noack, Lena; Tosi, Nicola; Plesa, Ana-Catalina; Maurice, Maxime

    2015-04-01

    In recent years a number of mantle convection models have been developed which include partial melting within the asthenosphere, estimation of melt volumes, as well as melt extraction with and without redistribution at the surface or within the lithosphere. All these approaches use various simplifying modelling assumptions whose effects on the dynamics of convection, including the feedback on melting, have not been explored in sufficient detail. To better assess the significance of such assumptions and to provide test cases for the modelling community we initiate a benchmark comparison. In the initial phase of this endeavor we focus on the usefulness of the test-case definitions while keeping the physics as sound as possible. The reference model is taken from the mantle convection benchmark, case 1b (Blankenbach et al., 1989), assuming a square box with free slip boundary conditions, the Boussinesq approximation, constant viscosity and a Rayleigh number of 1e5. Melting is modelled assuming a simplified binary solid solution with linearly depth dependent solidus and liquidus temperatures, as well as a solidus temperature depending linearly on depletion. Starting from a plume-free initial temperature condition (to avoid melting at the onset time) three cases are investigated: Case 1 includes melting, but without thermal or dynamic feedback on the convection flow. This case provides a total melt generation rate (qm) in a steady state. Case 2 includes batch melting, melt buoyancy (melt Rayleigh number Rm), depletion buoyancy and latent heat, but no melt percolation. Output quantities are the Nusselt number (Nu), root mean square velocity (vrms) and qm approaching a statistical steady state. Case 3 includes two-phase flow, i.e. melt percolation, assuming a constant shear and bulk viscosity of the matrix and various melt retention numbers (Rt).
These cases should be carried out using the Compaction Boussinesq Approximation (Schmeling, 2000) or the full compaction formulation. Variations of cases 1 - 3 may be tested, particularly studying the effect of melt extraction. The motivation of this presentation is to summarize first experiences, suggest possible modifications of the case definitions, and call on interested modelers to join this benchmark exercise. References: Blankenbach, B., Busse, F., Christensen, U., Cserepes, L., Gunkel, D., Hansen, U., Harder, H., Jarvis, G., Koch, M., Marquart, G., Moore, D., Olson, P., and Schmeling, H., 1989: A benchmark comparison for mantle convection codes, Geophys. J., 98, 23-38. Schmeling, H., 2000: Partial melting and melt segregation in a convecting mantle. In: Physics and Chemistry of Partially Molten Rocks, eds. N. Bagdassarov, D. Laporte, and A.B. Thompson, Kluwer Academic Publ., Dordrecht, pp. 141-178.
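
    The batch-melting parameterization described in the case definitions (binary solid solution, linearly depth-dependent solidus and liquidus, solidus raised linearly with depletion) can be sketched as below. All coefficients are assumed placeholder values, not the benchmark's actual parameters.

```python
# Minimal sketch of the melt-fraction closure; every constant is illustrative.
T_SOL0, T_LIQ0 = 1400.0, 1700.0   # surface solidus/liquidus temperatures (K), assumed
DSOL_DZ = DLIQ_DZ = 2.5e-3        # linear depth gradient (K/m), assumed
DSOL_DDEPL = 400.0                # solidus increase per unit depletion (K), assumed

def melt_fraction(T, z, depletion):
    """Batch melt fraction from a simplified binary solid solution."""
    t_sol = T_SOL0 + DSOL_DZ * z + DSOL_DDEPL * depletion
    t_liq = T_LIQ0 + DLIQ_DZ * z
    if T <= t_sol:
        return 0.0
    return min(1.0, (T - t_sol) / (t_liq - t_sol))
```

    The depletion term is what makes qm self-limiting in Cases 2 and 3: previously molten material has a higher solidus and stops producing melt at the same temperature.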

  20. Biofuels glossary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milne, T.A.

    1986-09-01

    This glossary is a compendium of terms frequently encountered by those interested or involved in the use of biomass for energy purposes. As this is a newly developing technology crossing many disciplines, there is a special need for definitions, including the definition of biomass itself. The initial basis of the present glossary was a collection of terms prepared for the Bioenergy Program of the National Research Council of Canada. To this list were added many terms from existing published glossaries and lists of definitions in published government reports. This preliminary collection of terms was submitted to a number of reviewers for initial screening and assessment. The reviewers' and the Solar Energy Research Institute's consensus was that many extant definitions needed to be reformulated or were actually technically incorrect. Furthermore, it became apparent that many different opinions exist as to the proper thrust of definitions. 32 refs.

  1. 23 CFR 668.103 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... of a disaster which can reasonably be accommodated by a State or local road authority's maintenance... roads and streets which are a part of the Federal-aid highways. Betterments. Added protective features... be primarily attributable to gradual and progressive deterioration or lack of proper maintenance. The...

  2. 40 CFR 745.83 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... with none of the air leaking past it. Interim controls means a set of measures designed to temporarily... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT LEAD-BASED... disposable cleaning cloths with the card, whether post-renovation cleaning has been properly completed...

  3. 40 CFR 745.83 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... with none of the air leaking past it. Interim controls means a set of measures designed to temporarily... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT LEAD-BASED... disposable cleaning cloths with the card, whether post-renovation cleaning has been properly completed...

  4. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  5. 23 CFR 668.103 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... be primarily attributable to gradual and progressive deterioration or lack of proper maintenance. The... the disaster occurrence for the purpose of: (1) Minimizing the extent of the damage, (2) Protecting... maintenance. Work usually done by highway agencies in repairing damage normally expected from seasonal and...

  6. 23 CFR 668.103 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... be primarily attributable to gradual and progressive deterioration or lack of proper maintenance. The... the disaster occurrence for the purpose of: (1) Minimizing the extent of the damage, (2) Protecting... maintenance. Work usually done by highway agencies in repairing damage normally expected from seasonal and...

  7. 23 CFR 668.103 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... be primarily attributable to gradual and progressive deterioration or lack of proper maintenance. The... the disaster occurrence for the purpose of: (1) Minimizing the extent of the damage, (2) Protecting... maintenance. Work usually done by highway agencies in repairing damage normally expected from seasonal and...

  8. A morphing-based scheme for large deformation analysis with stereo-DIC

    NASA Astrophysics Data System (ADS)

    Genovese, Katia; Sorgente, Donato

    2018-05-01

    A key step in the DIC-based image registration process is the definition of the initial guess for the non-linear optimization routine aimed at finding the parameters describing the pixel subset transformation. This initialization may prove very challenging and possibly fail when dealing with pairs of largely deformed images, such as those obtained from two angled views of not-flat objects or from the temporal undersampling of rapidly evolving phenomena. To address this problem, we developed a procedure that generates a sequence of intermediate synthetic images for gradually tracking the pixel subset transformation between the two extreme configurations. To this end, a proper image warping function is defined over the entire image domain through the adoption of a robust feature-based algorithm followed by a NURBS-based interpolation scheme. This allows a fast and reliable estimation of the initial guess of the deformation parameters for the subsequent refinement stage of the DIC analysis. The proposed method is described step-by-step by illustrating the measurement of the large and heterogeneous deformation of a circular silicone membrane undergoing axisymmetric indentation. A comparative analysis of the results is carried out by taking as a benchmark a standard reference-updating approach. Finally, the morphing scheme is extended to the most general case of the correspondence search between two largely deformed textured 3D geometries. The feasibility of this latter approach is demonstrated on a very challenging case: the full-surface measurement of the severe deformation (> 150% strain) suffered by an aluminum sheet blank subjected to a pneumatic bulge test.
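
    The intermediate-configuration idea above can be sketched very simply: given matched feature points in the two extreme images, generate a sequence of intermediate point sets, each providing the initial guess for the next refinement step. The paper's NURBS-based warping is replaced here by plain linear blending, purely for illustration.

```python
# Sketch: linearly morph matched feature points between two configurations.
def morph_sequence(pts_a, pts_b, n_steps):
    """Yield n_steps + 1 point sets blending pts_a (t=0) into pts_b (t=1)."""
    for k in range(n_steps + 1):
        t = k / n_steps
        yield [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
               for (xa, ya), (xb, yb) in zip(pts_a, pts_b)]
```

    Each intermediate set differs only slightly from its neighbor, so the DIC optimizer never has to bridge the full deformation in one step.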

  9. Deformable image registration with a featurelet algorithm: implementation as a 3D-slicer extension and validation

    NASA Astrophysics Data System (ADS)

    Renner, A.; Furtado, H.; Seppenwoolde, Y.; Birkfellner, W.; Georg, D.

    2016-03-01

    A radiotherapy (RT) treatment can last for several weeks. In that time, organ motion and shape changes introduce uncertainty in dose application. Monitoring and quantifying the change can yield a more precise irradiation margin definition and thereby reduce dose delivery to healthy tissue and adjust tumor targeting. Deformable image registration (DIR) has the potential to fulfill this task by calculating a deformation field (DF) between a planning CT and a repeated CT of the altered anatomy. Application of the DF on the original contours yields new contours that can be used for an adapted treatment plan. DIR is a challenging method and therefore needs careful user interaction. Without a proper graphical user interface (GUI), a misregistration cannot be easily detected by visual inspection and the results cannot be fine-tuned by changing registration parameters. To provide a DIR algorithm with such a GUI available for everyone, we created the extension Featurelet-Registration for the open source software platform 3D Slicer. The registration logic is an upgrade of an in-house-developed DIR method, which is a featurelet-based piecewise rigid registration. The so-called "featurelets" are equally sized rectangular subvolumes of the moving image which are rigidly registered to rectangular search regions on the fixed image. The output is a deformed image and a deformation field. Both can be visualized directly in 3D Slicer, facilitating the interpretation and quantification of the results. For validation of the registration accuracy two deformable phantoms were used. The performance was benchmarked against a demons algorithm with comparable results.
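
    The featurelet idea, registering a small subvolume to the best position inside a search region, can be illustrated with a toy 2D, translation-only version that minimizes the sum of squared differences. The real method works on 3D subvolumes and is more general; this sketch only shows the block-matching core.

```python
# Toy 2D featurelet matching: slide a patch over a search window of the
# fixed image and return the integer shift with the lowest SSD.
def best_shift(fixed, patch, top, left, search):
    ph, pw = len(patch), len(patch[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = top + dy, left + dx
            # Skip candidate placements that fall outside the fixed image.
            if y0 < 0 or x0 < 0 or y0 + ph > len(fixed) or x0 + pw > len(fixed[0]):
                continue
            ssd = sum((fixed[y0 + i][x0 + j] - patch[i][j]) ** 2
                      for i in range(ph) for j in range(pw))
            if best is None or ssd < best[0]:
                best = (ssd, dy, dx)
    return best[1], best[2]
```

    Collecting one such shift per featurelet yields a piecewise displacement field, which a full implementation would then interpolate into a smooth deformation field.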

  10. Gearing up to handle the mosaic nature of life in the quest for orthologs.

    PubMed

    Forslund, Kristoffer; Pereira, Cecile; Capella-Gutierrez, Salvador; Sousa da Silva, Alan; Altenhoff, Adrian; Huerta-Cepas, Jaime; Muffato, Matthieu; Patricio, Mateus; Vandepoele, Klaas; Ebersberger, Ingo; Blake, Judith; Fernández Breis, Jesualdo Tomás; Boeckmann, Brigitte; Gabaldón, Toni; Sonnhammer, Erik; Dessimoz, Christophe; Lewis, Suzanna

    2017-08-30

    The Quest for Orthologs (QfO) is an open collaboration framework for experts in comparative phylogenomics and related research areas who have an interest in highly accurate orthology predictions and their applications. We here report highlights and discussion points from the QfO meeting 2015 held in Barcelona. Achievements in recent years have established a basis to support developments for improved orthology prediction and to explore new approaches. Central to the QfO effort is proper benchmarking of methods and services, as well as design of standardized datasets and standardized formats to allow sharing and comparison of results. Simultaneously, analysis pipelines have been improved, evaluated, and adapted to handle large datasets. All this would not have occurred without the long-term collaboration of Consortium members. Meeting regularly to review and coordinate complementary activities from a broad spectrum of innovative researchers clearly benefits the community. Highlights of the meeting include addressing sources of and legitimacy of disagreements between orthology calls, the context dependency of orthology definitions, special challenges encountered when analyzing very anciently rooted orthologies, orthology in the light of whole-genome duplications, and the concept of orthologous versus paralogous relationships at different levels, including domain-level orthology. Furthermore, particular needs for different applications (e.g. plant genomics, ancient gene families, and others) and the infrastructure for making orthology inferences available (e.g. interfaces with model organism databases) were discussed, with several ongoing efforts that are expected to be reported on during the upcoming 2017 QfO meeting. © The Author(s) 2017. Published by Oxford University Press.

  11. UAV visual signature suppression via adaptive materials

    NASA Astrophysics Data System (ADS)

    Barrett, Ron; Melkert, Joris

    2005-05-01

    Visual signature suppression (VSS) methods for several classes of aircraft from WWII on are examined and historically summarized. This study shows that for some classes of uninhabited aerial vehicles (UAVs), primary mission threats do not stem from infrared or radar signatures, but from the amount that an aircraft visually stands out against the sky. The paper shows that such visual mismatch can often jeopardize mission success and/or induce the destruction of the entire aircraft. A psycho-physioptical study was conducted to establish the definition and benchmarks of a Visual Cross Section (VCS) for airborne objects. This study was centered on combining the effects of size, shape, color and luminosity or effective illumance (EI) of a given aircraft to arrive at a VCS. A series of tests were conducted with a 6.6 ft (2 m) UAV which was fitted with optically adaptive electroluminescent sheets at altitudes of up to 1000 ft (300 m). It was shown that with proper tailoring of the color and luminosity, the VCS of the aircraft dropped from more than 4,200 cm2 to less than 1.8 cm2 at 100 m (the observed lower limit of the 20-20 human eye in this study). In layperson's terms, this indicated that the UAV essentially "disappeared". This study concludes with an assessment of the weight and volume impact of such a visual suppression system on the UAV, showing that VCS levels on this class of UAV can be suppressed to below 1.8 cm2 for aircraft gross weight penalties of only 9.8%.

  12. Capillary-Driven Flow in Liquid Filaments Connecting Orthogonal Channels

    NASA Technical Reports Server (NTRS)

    Allen, Jeffrey S.

    2005-01-01

    Capillary phenomena play an important role in the management of product water in PEM fuel cells because of the length scales associated with the porous layers and the gas flow channels. The distribution of liquid water within the network of gas flow channels can be dramatically altered by capillary flow. We experimentally demonstrate the rapid movement of significant volumes of liquid via capillarity through thin liquid films which connect orthogonal channels. The microfluidic experiments discussed provide a good benchmark against which the proper modeling of capillarity by computational models may be tested. The effect of surface wettability, as expressed through the contact angle, on capillary flow will also be discussed.

  13. A simple microfluidic Coriolis effect flowmeter for operation at high pressure and high temperature.

    PubMed

    Harrison, Christopher; Jundt, Jacques

    2016-08-01

    We describe a microfluidic Coriolis effect flowmeter that is simple to assemble, operates at elevated temperature and pressure, and can be operated with a lock-in amplifier. The sensor has a flow rate sensitivity greater than 2° of phase shift per 1 g/min of mass flow and is benchmarked with flow rates ranging from 0.05 to 2.0 g/min. The internal volume is 15 μl and uses off-the-shelf optical components to measure the tube motion. We demonstrate that fluid density can be calculated from the frequency of the resonating element with proper calibration.
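
    The two readouts described above can be sketched as follows. The phase sensitivity of 2 degrees per (g/min) is taken from the abstract; the density calibration constants A and B are HYPOTHETICAL placeholders standing in for the calibration step, using the standard vibrating-tube relation rho = A/f^2 - B.

```python
# Sketch of the flowmeter's two derived quantities; calibration constants
# for density are illustrative placeholders, not the instrument's values.
PHASE_SENS = 2.0   # degrees of phase shift per g/min of mass flow (from abstract)

def mass_flow_g_per_min(phase_deg):
    """Mass flow from the measured Coriolis phase shift between pickoffs."""
    return phase_deg / PHASE_SENS

def density_kg_per_m3(freq_hz, A=2.0e9, B=500.0):
    """Fluid density from the resonant frequency of the vibrating tube
    (vibrating-tube relation; A and B come from a two-fluid calibration)."""
    return A / freq_hz ** 2 - B
```

    In practice A and B would be fitted by measuring the resonant frequency with two fluids of known density, which is the "proper calibration" the abstract refers to.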

  14. 5 CFR 930.102 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...

  15. 5 CFR 930.102 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...

  16. 5 CFR 930.102 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...

  17. 5 CFR 930.102 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...

  18. 5 CFR 930.102 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...

  19. 24 CFR 3280.802 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... which, by insertion in a receptacle, establishes connection between the conductors of the attached... and the capacity to conduct safely any current likely to be imposed. (6) Branch circuit (i) means the... predetermined overload of current without injury to itself when properly applied within its rating. (9...

  20. 50 CFR 300.181 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... filing of an entry summary for consumption with customs authorities, in proper form, with estimated duties attached. Entry for consumption, for purposes of this subpart, has the same meaning as entry for consumption, withdrawal from warehouse for consumption, or entry for consumption of merchandise from a foreign...

  1. 50 CFR 300.181 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... filing of an entry summary for consumption with customs authorities, in proper form, with estimated duties attached. Entry for consumption, for purposes of this subpart, has the same meaning as entry for consumption, withdrawal from warehouse for consumption, or entry for consumption of merchandise from a foreign...

  2. 50 CFR 300.181 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... filing of an entry summary for consumption with customs authorities, in proper form, with estimated duties attached. Entry for consumption, for purposes of this subpart, has the same meaning as entry for consumption, withdrawal from warehouse for consumption, or entry for consumption of merchandise from a foreign...

  3. 22 CFR 505.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... record keeping functions; exercise of control over and hence responsibility and accountability for... proper and necessary uses even if any such uses occur infrequently. (h) Statistical record. A record in a system of records maintained for statistical research or reporting purposes only and not used in whole or...

  4. The Transculturing Self: A Philosophical Approach

    ERIC Educational Resources Information Center

    Monceri, Flavia

    2003-01-01

    The "Self" is a core notion in Western philosophy, which has mainly defined it as an "autarchic individual", dependent on no "Other" whatsoever. This prevailing definition fails to recognise that precisely the "Other" is needed to properly define the "Self". As a result, Western mainstream philosophy…

  5. 7 CFR 51.2930 - Mature.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Mature. 51.2930 Section 51.2930 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apricots Definitions § 51.2930 Mature. Mature means having reached the stage of development which will insure a proper completion of the...

  6. 7 CFR 51.2930 - Mature.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Mature. 51.2930 Section 51.2930 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apricots Definitions § 51.2930 Mature. Mature means having reached the stage of development which will insure a proper completion of the...

  7. 7 CFR 1436.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... security interest in personal property when properly filed or recorded. Hay means a grass or legume that has been cut and stored. Commonly used grass mixtures include rye grass, timothy, brome, fescue, coastal Bermuda, orchard grass, and other native species, depending on the region. Forage legumes include...

  8. 7 CFR 1436.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... security interest in personal property when properly filed or recorded. Hay means a grass or legume that has been cut and stored. Commonly used grass mixtures include rye grass, timothy, brome, fescue, coastal Bermuda, orchard grass, and other native species, depending on the region. Forage legumes include...

  9. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  10. Comprehending Envy

    ERIC Educational Resources Information Center

    Smith, Richard H.; Kim, Sung Hee

    2007-01-01

    The authors reviewed the psychological research on envy. The authors examined definitional challenges associated with studying envy, such as the important distinction between envy proper (which contains hostile feelings) and benign envy (which is free of hostile feelings). The authors concluded that envy is reasonably defined as an unpleasant,…

  11. Lumley's PODT definition of large eddies and a trio of numerical procedures. [Proper Orthogonal Decomposition Theorem

    NASA Technical Reports Server (NTRS)

    Payne, Fred R.

    1992-01-01

    Lumley's 1967 Moscow paper provided, for the first time, a completely rational definition of the physically useful term 'large eddy', popular for half a century. The numerical procedures based upon his results are: (1) PODT (Proper Orthogonal Decomposition Theorem), which extracts the large-eddy structure of stochastic processes from physical or computer-simulation two-point covariances, and (2) LEIM (Large-Eddy Interaction Model), a predictive scheme for the dynamical large eddies based upon higher-order turbulence modeling. Lumley's earlier work (1964) forms the basis for the final member of the triad of numerical procedures: it predicts the global neutral modes of turbulence, which show surprising agreement both with structural eigenmodes and with those obtained from the dynamical equations. The ultimate goal of improved engineering design tools for turbulence may be near at hand, partly because the power and storage of 'supermicrocomputer' workstations are finally becoming adequate for the demanding numerics of these procedures.
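    The covariance-based mode extraction that PODT performs can be illustrated with the standard equivalence between snapshot POD and the SVD. The data below are synthetic, not the turbulence measurements Payne discusses.

```python
import numpy as np

# Snapshot POD: columns of X are flow-field snapshots; the POD
# ("large eddy") modes are the left singular vectors of the
# mean-subtracted snapshot matrix, ordered by captured energy.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
x = np.linspace(0, 1, 64)
# Synthetic ensemble: two planted coherent structures plus weak noise.
X = (np.outer(np.sin(2 * np.pi * x), np.cos(t))
     + 0.3 * np.outer(np.sin(4 * np.pi * x), np.sin(3 * t))
     + 0.01 * rng.standard_normal((64, 200)))

X = X - X.mean(axis=1, keepdims=True)            # fluctuating part only
U, s, _ = np.linalg.svd(X, full_matrices=False)  # columns of U: POD modes
energy = s**2 / np.sum(s**2)
print(energy[:2])  # the two planted structures dominate the energy
```

    For two-point covariance data, the same modes arise as eigenvectors of the covariance matrix X @ X.T; the SVD route is simply the better-conditioned way to compute them.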

  12. Comparing Complementary and Alternative Medicine Use with or without Including Prayer as a Modality in a Local and Diverse United States Jurisdiction.

    PubMed

    Robles, Brenda; Upchurch, Dawn M; Kuo, Tony

    2017-01-01

    Few studies to date have examined the utilization of complementary and alternative medicine (CAM) in a local, ethnically diverse population in the United States (U.S.), and fewer have addressed how reported use differs depending on whether prayer is included as a modality. Variable definitions of CAM are known to affect public health surveillance (i.e., continuous, systematic data collection, analysis, and interpretation) and benchmarking (i.e., identifying and comparing key indicators of health to inform community planning) related to this non-mainstream collection of health and wellness therapies. The present study sought to better understand how including or excluding prayer could affect reporting of CAM use among residents of a large, urban U.S. jurisdiction. Using population-weighted data from a cross-sectional Internet panel survey collected as part of a larger countywide population health survey, the study compared use of CAM based on whether or not prayer was included in its definition. Patterns of CAM use by socio-demographic characteristics were described for the two operationalized definitions. Multivariable binomial regression analyses were performed to control for gender, age, race/ethnicity, education, employment, income, and health insurance status; one of the analyses explored the associations between CAM use and racial/ethnic characteristics in the study sample. The setting was Los Angeles County, California; the participants were a socio-demographically diverse sample of county residents; and the outcome measures were CAM use including prayer and CAM use excluding prayer. Blacks were among the highest users of CAM when compared to Whites, especially when prayer was included as a CAM modality. Regardless of prayer inclusion, being a woman predicted higher use of CAM. How CAM is defined matters in gauging the utilization of this non-mainstream collection of therapies. Given that surveillance and/or benchmarking data are often used to inform resource allocation and planning decisions, the results suggest that when prayer is included in the CAM definition, utilization estimates increase correspondingly, especially among non-White residents of the region.

  13. Continuum definition for Ceres absorption bands at 3.1, 3.4 and 4.0 μm

    NASA Astrophysics Data System (ADS)

    Galiano, A.; Palomba, E.; Longobardo, A.; Zinzi, A.; De Sanctis, M. C.; Raponi, A.; Carrozzo, F. G.; Ciarniello, M.; Dirri, F.

    2017-09-01

    The images and hyperspectral data acquired during the various Dawn mission phases (e.g. Survey, HAMO and LAMO) allowed the identification of regions of different albedo on the surface of Ceres, where the absorption bands located at 3.4 and 4.0 μm can assume different shapes. The 3.1 μm feature is observed over the entire surface of Ceres except on Cerealia Facula, the brightest spot, located on the dome of Occator crater. To perform a mineralogical investigation, absorption bands in reflectance spectra must be properly isolated by removing the spectral continuum, so that parameters such as band centers and band depths can be estimated. The difficulty in defining the continuum lies in the VIR spectral range, which ends at 5.1 μm even though the reliable data, in which the thermal contribution is properly removed, stop at 4.2 μm; band shoulders located at longer wavelengths therefore cannot be estimated. We defined different continua, with the aim of finding the most appropriate one for isolating the three spectral bands, whatever the region and the spatial resolution of the hyperspectral images. The linear continuum seems to be the most suitable definition for our goals. We then evaluated the error on band depths and band centers introduced by this continuum definition.
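    A minimal sketch of the linear-continuum removal this abstract settles on, using a synthetic spectrum rather than VIR data; the shoulder wavelengths and band parameters below are illustrative, not the paper's values.

```python
import numpy as np

# Linear continuum removal: draw a straight line between the two band
# shoulders, divide the spectrum by it, then read band center and band
# depth off the continuum-removed band.
wav = np.linspace(3.0, 3.8, 400)                        # micrometres
refl = 0.05 - 0.008 * np.exp(-((wav - 3.4) / 0.08)**2)  # fake 3.4 um band

def linear_continuum_removal(wav, refl, left, right):
    """Divide out the straight line joining the shoulders at left/right."""
    i, j = np.searchsorted(wav, [left, right])
    line = np.interp(wav, [wav[i], wav[j]], [refl[i], refl[j]])
    removed = refl / line
    k = np.argmin(removed)               # band minimum
    return wav[k], 1.0 - removed[k]      # band center, band depth

center, depth = linear_continuum_removal(wav, refl, 3.15, 3.65)
print(round(center, 2), round(depth, 3))
```

    The choice of shoulder wavelengths is exactly where the paper's difficulty arises: a shoulder beyond the reliable spectral range cannot anchor the line.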

  14. Software solutions manage the definition, operation, maintenance and configuration control of the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobson, D; Churby, A; Krieger, E

    2011-07-25

    The National Ignition Facility (NIF) is the world's largest laser, composed of millions of individual parts brought together to form one massive assembly. Maintaining control of the physical definition, status and configuration of this structure is a monumental undertaking, yet critical to the validity of the shot experiment data and the safe operation of the facility. The NIF business application suite of software provides the means to effectively manage the definition, build, operation, maintenance and configuration control of all components of the National Ignition Facility. State-of-the-art Computer Aided Design software applications are used to generate a virtual model and assemblies. Engineering bills of material are controlled through the Enterprise Configuration Management System. This data structure is passed to the Enterprise Resource Planning system to create a manufacturing bill of material. Specific parts are serialized and then tracked along their entire lifecycle, providing visibility into the location and status of the optical, target and diagnostic components that are key to assessing pre-shot machine readiness. Nearly forty thousand items requiring preventive, reactive and calibration maintenance are tracked through the System Maintenance & Reliability Tracking application to ensure proper operation. Radiological tracking applications ensure proper stewardship of radiological and hazardous materials and help provide a safe working environment for NIF personnel.

  15. Stabilized High-order Galerkin Methods Based on a Parameter-free Dynamic SGS Model for LES

    DTIC Science & Technology

    2015-01-01

    stresses obtained via Dyn-SGS are residual-based, the effect of the artificial diffusion is minimal in the regions where the solution is smooth. The direct...used in the analysis of the results rather than in the definition and analysis of the LES equations described from now on. 2.1 LES and the Dyn-SGS model... definition is sufficient given the scope of the current study; nevertheless, a more proper definition of for LES should be used in future work

  16. The τq-Fourier transform: Covariance and uniqueness

    NASA Astrophysics Data System (ADS)

    Kalogeropoulos, Nikolaos

    2018-05-01

    We propose an alternative definition for a Tsallis entropy composition-inspired Fourier transform, which we call the “τq-Fourier transform”. We comment on the underlying “covariance” on the set of algebraic fields that motivates its introduction. We see that the definition of the τq-Fourier transform is automatically invertible in the proper context. Based on recent results in Fourier analysis, it turns out that the τq-Fourier transform is essentially unique under the assumption of the exchange of the point-wise product of functions with their convolution.

  17. Similarity indices of meteo-climatic gauging stations: definition and comparison.

    PubMed

    Barca, Emanuele; Bruno, Delia Evelina; Passarella, Giuseppe

    2016-07-01

    Space-time dependencies among monitoring network stations have been investigated to detect and quantify similarity relationships among gauging stations. In this work, besides the well-known rank correlation index, two new similarity indices have been defined and applied to compute the similarity matrix of the Apulian meteo-climatic monitoring network. The similarity matrices can be applied to address reliably the issue of missing data in space-time series. In order to establish the effectiveness of the similarity indices, a simulation test was then designed and performed with the aim of estimating missing monthly rainfall rates at a suitably selected gauging station. The results of the simulation allowed us to evaluate the effectiveness of the proposed similarity indices. Finally, the multiple imputation by chained equations method was used as a benchmark, to provide an absolute yardstick for comparing the outcomes of the test. In conclusion, the newly proposed multiplicative similarity index proved at least as reliable as the selected benchmark.
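    Of the three indices compared, only the rank-correlation similarity matrix is standard enough to sketch without the paper (the two new indices are the authors' own definitions). A Spearman-style version on synthetic monthly rainfall series might look like:

```python
import numpy as np

# Rank-correlation (Spearman-style) similarity matrix for gauging
# stations: Pearson correlation computed on the ranks of each series.
# The three monthly rainfall series below are synthetic.
rng = np.random.default_rng(1)
base = rng.gamma(2.0, 30.0, size=120)                  # 10 years, monthly
stations = np.stack([base + rng.normal(0, 5, 120),     # station A
                     base + rng.normal(0, 5, 120),     # station B, like A
                     rng.gamma(2.0, 30.0, size=120)])  # station C, unrelated

def rank(x):
    """Ranks 0..n-1 of a series with no ties (rainfall is continuous)."""
    r = np.empty_like(x)
    r[np.argsort(x)] = np.arange(len(x), dtype=x.dtype)
    return r

ranks = np.apply_along_axis(rank, 1, stations)
S = np.corrcoef(ranks)   # similarity matrix; S[i, j] high => good donor
print(S.round(2))
```

    In an imputation setting, a high off-diagonal entry marks a station whose record is a reliable donor for filling gaps at its neighbour.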

  18. A wind energy benchmark for ABL modelling of a diurnal cycle with a nocturnal low-level jet: GABLS3 revisited

    DOE PAGES

    Rodrigo, J. Sanz; Churchfield, M.; Kosović, B.

    2016-10-03

    The third GEWEX Atmospheric Boundary Layer Studies (GABLS3) model intercomparison study, around the Cabauw met tower in the Netherlands, is revisited as a benchmark for wind energy atmospheric boundary layer (ABL) models. The case was originally developed by the boundary layer meteorology community, interested in analysing the performance of single-column and large-eddy simulation atmospheric models dealing with a diurnal cycle leading to the development of a nocturnal low-level jet. The case addresses fundamental questions related to the definition of the large-scale forcing, the interaction of the ABL with the surface and the evaluation of model results with observations. The characterization of mesoscale forcing for asynchronous microscale modelling of the ABL is discussed based on momentum budget analysis of WRF simulations. Then a single-column model is used to demonstrate the added value of incorporating different forcing mechanisms in microscale models. The simulations are evaluated in terms of wind energy quantities of interest.

  19. 7 CFR 51.312 - Mature.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Definitions § 51.312 Mature. “Mature” means that the apples have reached the stage of development which will insure the proper completion of the ripening process. Before a mature apple becomes overripe it will show varying degrees of firmness...

  20. 7 CFR 51.312 - Mature.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Definitions § 51.312 Mature. “Mature” means that the apples have reached the stage of development which will insure the proper completion of the ripening process. Before a mature apple becomes overripe it will show varying degrees of firmness...

  1. 39 CFR 310.4 - Responsibility of carriers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... make sure that their carriage of matter is lawful within the definition, exceptions, suspension, and... inform their customers of the contents of these regulations so that only proper matter is tendered to them for carriage. Carriers should desist from carrying any matter when the form of shipment, identity...

  2. 39 CFR 310.4 - Responsibility of carriers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... make sure that their carriage of matter is lawful within the definition, exceptions, suspension, and... inform their customers of the contents of these regulations so that only proper matter is tendered to them for carriage. Carriers should desist from carrying any matter when the form of shipment, identity...

  3. 39 CFR 310.4 - Responsibility of carriers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... make sure that their carriage of matter is lawful within the definition, exceptions, suspension, and... inform their customers of the contents of these regulations so that only proper matter is tendered to them for carriage. Carriers should desist from carrying any matter when the form of shipment, identity...

  4. 39 CFR 310.4 - Responsibility of carriers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... make sure that their carriage of matter is lawful within the definition, exceptions, suspension, and... inform their customers of the contents of these regulations so that only proper matter is tendered to them for carriage. Carriers should desist from carrying any matter when the form of shipment, identity...

  5. 39 CFR 310.4 - Responsibility of carriers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... make sure that their carriage of matter is lawful within the definition, exceptions, suspension, and... inform their customers of the contents of these regulations so that only proper matter is tendered to them for carriage. Carriers should desist from carrying any matter when the form of shipment, identity...

  6. 42 CFR 56.102 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES GRANTS GRANTS FOR MIGRANT HEALTH SERVICES... pesticide use; and (vii) Information on the availability and proper use of health services. (2) For purposes... agricultural worker. (n) Secretary means the Secretary of Health and Human Services and any other officer or...

  7. 40 CFR 86.010-2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... diagnostics, means verifying that a component and/or system that receives information from a control computer... maintained. In general, limp-home operation implies that a component or system is not operating properly or... cannot be erased through human interaction with the OBD system or any onboard computer. Potential...

  8. 40 CFR 86.010-2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... diagnostics, means verifying that a component and/or system that receives information from a control computer... maintained. In general, limp-home operation implies that a component or system is not operating properly or... cannot be erased through human interaction with the OBD system or any onboard computer. Potential...

  9. 40 CFR 86.010-2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... diagnostics, means verifying that a component and/or system that receives information from a control computer... maintained. In general, limp-home operation implies that a component or system is not operating properly or... cannot be erased through human interaction with the OBD system or any onboard computer. Potential...

  10. 40 CFR 86.010-2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... diagnostics, means verifying that a component and/or system that receives information from a control computer... maintained. In general, limp-home operation implies that a component or system is not operating properly or... cannot be erased through human interaction with the OBD system or any onboard computer. Potential...

  11. 40 CFR 86.010-2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... diagnostics, means verifying that a component and/or system that receives information from a control computer... maintained. In general, limp-home operation implies that a component or system is not operating properly or... cannot be erased through human interaction with the OBD system or any onboard computer. Potential...

  12. 7 CFR 51.1907 - Mature.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Mature. 51.1907 Section 51.1907 Agriculture..., CERTIFICATION, AND STANDARDS) United States Consumer Standards for Fresh Tomatoes Definitions § 51.1907 Mature. Mature means that the tomato has reached the stage of development which will insure a proper completion...

  13. 7 CFR 51.1907 - Mature.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Mature. 51.1907 Section 51.1907 Agriculture..., CERTIFICATION, AND STANDARDS) United States Consumer Standards for Fresh Tomatoes Definitions § 51.1907 Mature. Mature means that the tomato has reached the stage of development which will insure a proper completion...

  14. 7 CFR 160.1 - Definitions of general terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Analysis. Any examination by physical, chemical, or sensory methods. (m) Classification. Designation as to... Administrator has sufficient and proper interest in the analysis, classification, grading, or sale of naval... provisions of the act and the provisions in this part to show the results of any examination, analysis...

  15. A Comparison of Coverage Restrictions for Biopharmaceuticals and Medical Procedures.

    PubMed

    Chambers, James; Pope, Elle; Bungay, Kathy; Cohen, Joshua; Ciarametaro, Michael; Dubois, Robert; Neumann, Peter J

    2018-04-01

    Differences in payer evaluation and coverage of pharmaceuticals and medical procedures suggest that coverage may differ for medications and procedures independent of their clinical benefit. We hypothesized that coverage for medications is more restricted than corresponding coverage for nonmedication interventions. We included top-selling medications and highly utilized procedures. For each intervention-indication pair, we classified value in terms of cost-effectiveness (incremental cost per quality-adjusted life-year), as reported by the Tufts Medical Center Cost-Effectiveness Analysis Registry. For each intervention-indication pair and for each of 10 large payers, we classified coverage, when available, as either "more restrictive" or "not more restrictive" compared with a benchmark. The benchmark reflected the US Food and Drug Administration label information, when available, or pertinent clinical guidelines. We compared coverage policies and the benchmark in terms of step edits and clinical restrictions. Finally, we regressed coverage restrictiveness against intervention type (medication or nonmedication), controlling for value (cost-effectiveness more or less favorable than a designated threshold). We identified 392 medication and 185 procedure coverage decisions. A total of 26.3% of the medication coverage decisions and 38.4% of the procedure coverage decisions were more restrictive than their corresponding benchmarks. After controlling for value, the odds of being more restrictive were 42% lower for medications than for procedures. Including unfavorable tier placement in the definition of "more restrictive" greatly increased the proportion of medication coverage decisions classified as "more restrictive" and reversed our findings. Therapy access depends on factors other than cost and clinical benefit, suggesting potential health care system inefficiency. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). 
Published by Elsevier Inc. All rights reserved.
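    The reported 42% reduction in odds is an adjusted estimate. As a sanity check, the unadjusted odds ratio can be recovered directly from the counts given in the abstract:

```python
# Unadjusted odds ratio from the abstract's counts: 392 medication and
# 185 procedure decisions, of which 26.3% and 38.4% respectively were
# "more restrictive". The paper's 42% figure additionally controls for
# cost-effectiveness, which this back-of-envelope check does not.
med_total, med_restr = 392, round(0.263 * 392)    # 103 restrictive
proc_total, proc_restr = 185, round(0.384 * 185)  # 71 restrictive

def odds(k, n):
    return k / (n - k)

odds_ratio = odds(med_restr, med_total) / odds(proc_restr, proc_total)
print(round(odds_ratio, 2))  # 0.57, close to the adjusted 1 - 0.42 = 0.58
```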

  16. Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective of investigating the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper presents the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using classical and minimax techniques are described. A unified general formulation and solution for the minimax approach, based on steady-state differential game theory, is presented. Design considerations for improving control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and in a heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified under highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  17. 21 CFR 155.201 - Canned mushrooms.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Canned mushrooms. 155.201 Section 155.201 Food and... mushrooms. (a) Identity—(1) Definition. Canned mushrooms is the food properly prepared from the caps and stems of succulent mushrooms conforming to the characteristics of the species Agaricus (Psalliota...

  18. 21 CFR 155.201 - Canned mushrooms.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 2 2014-04-01 2014-04-01 false Canned mushrooms. 155.201 Section 155.201 Food and... mushrooms. (a) Identity—(1) Definition. Canned mushrooms is the food properly prepared from the caps and stems of succulent mushrooms conforming to the characteristics of the species Agaricus (Psalliota...

  19. 21 CFR 155.201 - Canned mushrooms.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Canned mushrooms. 155.201 Section 155.201 Food and... mushrooms. (a) Identity—(1) Definition. Canned mushrooms is the food properly prepared from the caps and stems of succulent mushrooms conforming to the characteristics of the species Agaricus (Psalliota...

  20. Single Sourcing, Boilerplates, and Re-Purposing: Plagiarism and Technical Writing

    ERIC Educational Resources Information Center

    Louch, Michelle O'Brien

    2016-01-01

    In academia, plagiarism adheres to the traditional definition: utilizing another person's words or ideas without proper credit. Students are taught to cite everything, while instructors are given tools to detect plagiarism. This ultimately creates an atmosphere of paranoia, where students fear accusation and teachers are convinced that plagiarism…

  1. 29 CFR 1918.94 - Ventilation and atmospheric conditions (See also § 1918.2, definitions of Hazardous cargo...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... fumigant manufacturers' recommendations and warnings, and the proper use of personal protective equipment... other personal protective equipment recommended by the fumigant manufacturer for protection against the... and required to use any personal protective equipment recommended by the manufacturer of the product...

  2. 29 CFR 1918.94 - Ventilation and atmospheric conditions (See also § 1918.2, definitions of Hazardous cargo...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... fumigant manufacturers' recommendations and warnings, and the proper use of personal protective equipment... other personal protective equipment recommended by the fumigant manufacturer for protection against the... and required to use any personal protective equipment recommended by the manufacturer of the product...

  3. 7 CFR 51.3153 - Mature.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Mature. 51.3153 Section 51.3153 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Nectarines Definitions § 51.3153 Mature. “Mature” means that the nectarine has reached the stage of growth which will insure a proper completion of...

  4. 7 CFR 51.1313 - Mature.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Mature. 51.1313 Section 51.1313 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Winter Pears 1 Definitions § 51.1313 Mature. (a) Mature means that the pear has reached the stage of maturity which will insure the proper completion of...

  5. 7 CFR 51.484 - Mature.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Mature. 51.484 Section 51.484 Agriculture Regulations..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Cantaloups 1 Definitions § 51.484 Mature. Mature means that the cantaloup has reached the stage of maturity which will insure the proper completion...

  6. 7 CFR 51.1218 - Mature.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Mature. 51.1218 Section 51.1218 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Peaches Definitions § 51.1218 Mature. “Mature” means that the peach has reached the stage of growth which will ensure a proper completion of the...

  7. 7 CFR 51.1313 - Mature.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Mature. 51.1313 Section 51.1313 Agriculture... Standards for Winter Pears 1 Definitions § 51.1313 Mature. (a) Mature means that the pear has reached the stage of maturity which will insure the proper completion of the ripening process. (b) Before a mature...

  8. 7 CFR 51.3153 - Mature.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Mature. 51.3153 Section 51.3153 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Nectarines Definitions § 51.3153 Mature. “Mature” means that the nectarine has reached the stage of growth which will insure a proper completion of...

  9. 7 CFR 51.1313 - Mature.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Mature. 51.1313 Section 51.1313 Agriculture... Standards for Winter Pears 1 Definitions § 51.1313 Mature. (a) Mature means that the pear has reached the stage of maturity which will insure the proper completion of the ripening process. (b) Before a mature...

  10. 7 CFR 51.1313 - Mature.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Mature. 51.1313 Section 51.1313 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Winter Pears 1 Definitions § 51.1313 Mature. (a) Mature means that the pear has reached the stage of maturity which will insure the proper completion of...

  11. 7 CFR 51.484 - Mature.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Mature. 51.484 Section 51.484 Agriculture Regulations..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Cantaloups 1 Definitions § 51.484 Mature. Mature means that the cantaloup has reached the stage of maturity which will insure the proper completion...

  12. 7 CFR 51.1272 - Mature.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Mature. 51.1272 Section 51.1272 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Summer and Fall Pears 1 Definitions § 51.1272 Mature. (a) Mature means that the pear has reached the stage of maturity which will insure the proper...

  13. 7 CFR 51.1218 - Mature.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Mature. 51.1218 Section 51.1218 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Peaches Definitions § 51.1218 Mature. “Mature” means that the peach has reached the stage of growth which will ensure a proper completion of the...

  14. 7 CFR 51.1272 - Mature.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Mature. 51.1272 Section 51.1272 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Summer and Fall Pears 1 Definitions § 51.1272 Mature. (a) Mature means that the pear has reached the stage of maturity which will insure the proper...

  15. 7 CFR 51.1313 - Mature.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Mature. 51.1313 Section 51.1313 Agriculture... Standards for Winter Pears 1 Definitions § 51.1313 Mature. (a) Mature means that the pear has reached the stage of maturity which will insure the proper completion of the ripening process. (b) Before a mature...

  16. 22 CFR 505.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... individual's name or personal identifier, such as a social security number. (g) Routine use. With respect to... proper and necessary uses even if any such uses occur infrequently. (h) Statistical record. A record in a system of records maintained for statistical research or reporting purposes only and not used in whole or...

  17. 40 CFR 60.4248 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... is designed to properly function in terms of reliability and fuel consumption, without being... Earth's surface that maintains a gaseous state at standard atmospheric temperature and pressure under... rich burn engine if the excess oxygen content of the exhaust at full load conditions is less than or...

  18. 40 CFR 60.4248 - What definitions apply to this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... is designed to properly function in terms of reliability and fuel consumption, without being... Earth's surface that maintains a gaseous state at standard atmospheric temperature and pressure under... rich burn engine if the excess oxygen content of the exhaust at full load conditions is less than or...

  19. 40 CFR 60.4248 - What definitions apply to this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... is designed to properly function in terms of reliability and fuel consumption, without being... a gaseous state at standard atmospheric temperature and pressure under ordinary conditions, and... the excess oxygen content of the exhaust at full load conditions is less than or equal to 2 percent...

  20. 40 CFR 60.4248 - What definitions apply to this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... is designed to properly function in terms of reliability and fuel consumption, without being... gaseous state at standard atmospheric temperature and pressure under ordinary conditions, and which is... recommendations regarding air/fuel ratio will be considered a rich burn engine if the excess oxygen content of the...

  1. 40 CFR 60.4248 - What definitions apply to this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... is designed to properly function in terms of reliability and fuel consumption, without being... a gaseous state at standard atmospheric temperature and pressure under ordinary conditions, and... the excess oxygen content of the exhaust at full load conditions is less than or equal to 2 percent...

  2. 32 CFR 284.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Committee. The person or persons invested, by order of a proper court, with the guardianship of a minor or incompetent person and/or the estate of a minor or incompetent person. Component concerned. The agency... concerning a waiver application from which there is no right to appeal or request reconsideration, or for...

  3. Complexity and demographic explanations of cumulative culture.

    PubMed

    Querbes, Adrien; Vaesen, Krist; Houkes, Wybo

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing, and favoured by increasing, population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture when formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence does not afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.

  4. Zero-gravity cloud physics laboratory: Candidate experiments definition and preliminary concept studies

    NASA Technical Reports Server (NTRS)

    Eaton, L. R.; Greco, R. V.; Hollinden, A. B.

    1973-01-01

    The candidate definition studies on the zero-g cloud physics laboratory are covered. This laboratory will be an independent, self-contained shuttle sortie payload. Several critical technology areas have been identified and studied to assure proper consideration in terms of engineering requirements for the final design. Areas include chambers, gas and particle generators, environmental controls, motion controls, change controls, observational techniques, and composition controls. This unique laboratory will allow studies to be performed without mechanical, aerodynamic, electrical, or other techniques to support the object under study. This report also covers the candidate experiment definitions, chambers and experiment classes, laboratory concepts and plans, special supporting studies, early flight opportunities, and payload planning data for overall shuttle payload requirements assessments.

  5. A benchmark for comparison of cell tracking algorithms

    PubMed Central

    Maška, Martin; Ulman, Vladimír; Svoboda, David; Matula, Pavel; Matula, Petr; Ederra, Cristina; Urbiola, Ainhoa; España, Tomás; Venkatesan, Subramanian; Balak, Deepak M.W.; Karas, Pavel; Bolcková, Tereza; Štreitová, Markéta; Carthel, Craig; Coraluppi, Stefano; Harder, Nathalie; Rohr, Karl; Magnusson, Klas E. G.; Jaldén, Joakim; Blau, Helen M.; Dzyubachyk, Oleh; Křížek, Pavel; Hagen, Guy M.; Pastor-Escuredo, David; Jimenez-Carretero, Daniel; Ledesma-Carbayo, Maria J.; Muñoz-Barrutia, Arrate; Meijering, Erik; Kozubek, Michal; Ortiz-de-Solorzano, Carlos

    2014-01-01

    Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this article, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. Results: The main contributions of the challenge include the creation of a comprehensive video dataset repository and the definition of objective measures for comparison and ranking of the algorithms. With this benchmark, six algorithms covering a variety of segmentation and tracking paradigms have been compared and ranked based on their performance on both synthetic and real datasets. Given the diversity of the datasets, we do not declare a single winner of the challenge. Instead, we present and discuss the results for each individual dataset separately. Availability and implementation: The challenge Web site (http://www.codesolorzano.com/celltrackingchallenge) provides access to the training and competition datasets, along with the ground truth of the training videos. It also provides access to Windows and Linux executable files of the evaluation software and most of the algorithms that competed in the challenge. Contact: codesolorzano@unav.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24526711

  6. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low, environmentally relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum.
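    The BMD idea described in this record, choosing a point of departure as the dose that produces a specified benchmark response, can be sketched numerically. Everything below (the log-logistic curve, its parameters, the 10% benchmark response) is an invented illustration for the sketch, not EPA's actual guidance models or software:

    ```python
    import math

    def extra_risk(dose, b0=-3.0, b1=0.5):
        # Illustrative log-logistic dose-response; b0 and b1 are made-up
        # parameters for this sketch, not fitted values.
        p = lambda d: 1.0 / (1.0 + math.exp(-(b0 + b1 * d)))
        return (p(dose) - p(0.0)) / (1.0 - p(0.0))

    def benchmark_dose(bmr=0.10, lo=0.0, hi=100.0, tol=1e-9):
        # Bisect for the dose whose extra risk equals the benchmark
        # response (BMR); valid because extra_risk is monotone in dose.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if extra_risk(mid) < bmr:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    ```

    In practice the POD is usually a statistical lower bound on this dose (the BMDL), which the sketch does not attempt.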

  7. A benchmark for comparison of dental radiography analysis algorithms.

    PubMed

    Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia

    2016-07-01

    Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Search for long-lived, weakly interacting particles that decay to displaced hadronic jets in proton-proton collisions at √s = 8 TeV with the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2015-07-17

    A search for the decay of neutral, weakly interacting, long-lived particles using data collected by the ATLAS detector at the LHC is presented. This analysis uses the full data set recorded in 2012: 20.3 fb⁻¹ of proton-proton collision data at √s = 8 TeV. The search employs techniques for reconstructing decay vertices of long-lived particles decaying to jets in the inner tracking detector and muon spectrometer. Signal events require at least two reconstructed vertices. No significant excess of events over the expected background is found, and limits as a function of proper lifetime are reported for the decay of the Higgs boson and other scalar bosons to long-lived particles and for Hidden Valley Z' and Stealth SUSY benchmark models. The first search results for displaced decays in Z' and Stealth SUSY models are presented. The upper bounds of the excluded proper lifetimes are the most stringent to date.

  9. Prevention of catheter-related blood stream infection.

    PubMed

    Byrnes, Matthew C; Coopersmith, Craig M

    2007-08-01

    Catheter-related blood stream infections are a morbid complication of central venous catheters. This review will highlight a comprehensive approach demonstrated to prevent catheter-related blood stream infections. Elements of prevention important to inserting a central venous catheter include proper hand hygiene, use of full barrier precautions, appropriate skin preparation with 2% chlorhexidine, and using the subclavian vein as the preferred anatomic site. Rigorous attention needs to be given to dressing care, and there should be daily assessment of the need for central venous catheters, with prompt removal as soon as is practicable. Healthcare workers should be educated routinely on methods to prevent catheter-related blood stream infections. If rates remain higher than benchmark levels despite proper bedside practice, antiseptic or antibiotic-impregnated catheters can also prevent infections effectively. A recent program utilizing these practices in 103 ICUs in Michigan resulted in a 66% decrease in infection rates. There is increasing recognition that a comprehensive strategy to prevent catheter-related blood stream infections can prevent most infections, if not all. This suggests that thousands of infections can potentially be averted if the simple practices outlined herein are followed.

  10. Defining relative humidity in terms of water activity. Part 1: definition

    NASA Astrophysics Data System (ADS)

    Feistel, Rainer; Lovell-Smith, Jeremy W.

    2017-08-01

    Relative humidity (RH) is a quantity widely used in various fields such as metrology, meteorology, climatology or engineering. However, RH is neither uniformly defined, nor do some definitions properly account for deviations from ideal-gas properties, nor is the application range of interest fully covered. In this paper, a new full-range definition of RH is proposed that is based on the thermodynamics of activities in order to include deviations from ideal-gas behaviour. Below the critical point of pure water, at pressures p  <  22.064 MPa and temperatures T  <  647.096 K, RH is rigorously defined as the relative activity (or relative fugacity) of water in humid air. For this purpose, reference states of the relative activity are specified appropriately. Asymptotically, the ideal-gas limit of the new definition is consistent with de-facto standard RH definitions published previously and recommended internationally. Virial approximations are reported for estimating small corrections to the ideal-gas equations.
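    In the ideal-gas limit mentioned in this record, the relative-activity definition reduces to the familiar ratio of vapour partial pressure to saturation pressure. A minimal sketch of that limit, using the Magnus approximation for the saturation pressure rather than the IAPWS formulations the paper builds on:

    ```python
    import math

    def p_sat(t_celsius):
        # Magnus approximation to the saturation vapour pressure of
        # water over liquid (Pa); an illustrative stand-in for the
        # rigorous thermodynamic formulations used in the paper.
        return 611.2 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

    def rh_ideal(p_vapour, t_celsius):
        # Ideal-gas limit of the relative-activity definition:
        # RH = p_v / p_sat(T). The paper's full definition adds
        # fugacity/activity corrections for non-ideal humid air.
        return p_vapour / p_sat(t_celsius)
    ```

    For example, `rh_ideal(p_sat(20.0), 20.0)` returns 1.0, i.e. saturation; the virial corrections discussed in the paper shift this ratio only slightly at atmospheric conditions.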

  11. Creating a meaningful infection control program: one home healthcare agency's lessons.

    PubMed

    Poff, Renee McCoy; Browning, Sarah Via

    2014-03-01

    Creating a meaningful infection control program in the home care setting proved to be challenging for agency leaders of one hospital-based home healthcare agency. Challenges arose when agency leaders provided infection control (IC) data to the hospital's IC Committee. The IC Section Chief asked for national benchmark comparisons to align home healthcare reporting with that of the hospital. At that point, it was evident that the home healthcare IC program lacked definition and structure. The purpose of this article is to share how one agency built a meaningful IC program.

  12. [Controlling instruments in radiology].

    PubMed

    Maurer, M

    2013-10-01

    Due to the rising costs and competitive pressures radiological clinics and practices are now facing, controlling instruments are gaining importance in the optimization of structures and processes of the various diagnostic examinations and interventional procedures. It will be shown how the use of selected controlling instruments can secure and improve the performance of radiological facilities. A definition of the concept of controlling will be provided. It will be shown which controlling instruments can be applied in radiological departments and practices. As an example, two of the controlling instruments, material cost analysis and benchmarking, will be illustrated.

  13. Using Key Performance Indicators to Do More with Less in Your Practice

    PubMed Central

    Taylor, Brian

    2016-01-01

    Key performance indicators (KPIs) are important to managing any sustainable business. This tutorial provides audiologists, especially those with little formal business education, with a working definition of KPIs. A major theme of this article is that a relatively small group of about a dozen KPIs are an essential part of managing a successful audiology practice. The most useful KPIs for managing retail-oriented and medically oriented practices will be provided. Best practice benchmarks and how to use them to hire, coach, and train your staff also is covered. PMID:28028323

  14. Using Key Performance Indicators to Do More with Less in Your Practice.

    PubMed

    Taylor, Brian

    2016-11-01

    Key performance indicators (KPIs) are important to managing any sustainable business. This tutorial provides audiologists, especially those with little formal business education, with a working definition of KPIs. A major theme of this article is that a relatively small group of about a dozen KPIs are an essential part of managing a successful audiology practice. The most useful KPIs for managing retail-oriented and medically oriented practices will be provided. Best practice benchmarks and how to use them to hire, coach, and train your staff also is covered.

  15. Definition of drug resistant epilepsy: consensus proposal by the ad hoc Task Force of the ILAE Commission on Therapeutic Strategies.

    PubMed

    Kwan, Patrick; Arzimanoglou, Alexis; Berg, Anne T; Brodie, Martin J; Allen Hauser, W; Mathern, Gary; Moshé, Solomon L; Perucca, Emilio; Wiebe, Samuel; French, Jacqueline

    2010-06-01

    To improve patient care and facilitate clinical research, the International League Against Epilepsy (ILAE) appointed a Task Force to formulate a consensus definition of drug resistant epilepsy. The overall framework of the definition has two "hierarchical" levels: Level 1 provides a general scheme to categorize response to each therapeutic intervention, including a minimum dataset of knowledge about the intervention that would be needed; Level 2 provides a core definition of drug resistant epilepsy using a set of essential criteria based on the categorization of response (from Level 1) to trials of antiepileptic drugs. It is proposed as a testable hypothesis that drug resistant epilepsy is defined as failure of adequate trials of two tolerated, appropriately chosen and used antiepileptic drug schedules (whether as monotherapies or in combination) to achieve sustained seizure freedom. This definition can be further refined when new evidence emerges. The rationale behind the definition and the principles governing its proper use are discussed, and examples to illustrate its application in clinical practice are provided.

  16. Notch1 acts via Foxc2 to promote definitive hematopoiesis via effects on hemogenic endothelium

    PubMed Central

    Jang, Il Ho; Lu, Yi-Fen; Zhao, Long; Wenzel, Pamela L.; Kume, Tsutomu; Datta, Sumon M.; Arora, Natasha; Guiu, Jordi; Lagha, Mounia; Kim, Peter G.; Do, Eun Kyoung; Kim, Jae Ho; Schlaeger, Thorsten M.; Zon, Leonard I.; Bigas, Anna; Burns, Caroline E.

    2015-01-01

    Hematopoietic and vascular development share many common features, including cell surface markers and sites of origin. Recent lineage-tracing studies have established that definitive hematopoietic stem and progenitor cells arise from vascular endothelial–cadherin+ hemogenic endothelial cells of the aorta-gonad-mesonephros region, but the genetic programs underlying the specification of hemogenic endothelial cells remain poorly defined. Here, we discovered that Notch induction enhances hematopoietic potential and promotes the specification of hemogenic endothelium in differentiating cultures of mouse embryonic stem cells, and we identified Foxc2 as a highly upregulated transcript in the hemogenic endothelial population. Studies in zebrafish and mouse embryos revealed that Foxc2 and its orthologs are required for the proper development of definitive hematopoiesis and function downstream of Notch signaling in the hemogenic endothelium. These data establish a pathway linking Notch signaling to Foxc2 in hemogenic endothelial cells to promote definitive hematopoiesis. PMID:25587036

  17. A univocal definition of the neuronal soma morphology using Gaussian mixture models.

    PubMed

    Luengo-Sanchez, Sergio; Bielza, Concha; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Larrañaga, Pedro

    2015-01-01

    The definition of the soma is fuzzy, as there is no clear line demarcating the soma of the labeled neurons and the origin of the dendrites and axon. Thus, the morphometric analysis of the neuronal soma is highly subjective. In this paper, we provide a mathematical definition and an automatic segmentation method to delimit the neuronal soma. We applied this method to the characterization of pyramidal cells, which are the most abundant neurons in the cerebral cortex. Since there are no benchmarks with which to compare the proposed procedure, we validated the goodness of this automatic segmentation method against manual segmentation by neuroanatomists to set up a framework for comparison. We concluded that there were no significant differences between automatically and manually segmented somata, i.e., the proposed procedure segments the neurons similarly to how a neuroanatomist does. It also provides univocal, justifiable and objective cutoffs. Thus, this study is a means of characterizing pyramidal neurons in order to objectively compare the morphometry of the somata of these neurons in different cortical areas and species.
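    The model family named in this record can be illustrated with a minimal expectation-maximization fit of a one-dimensional Gaussian mixture. This is a generic sketch of the technique only; the paper's actual method segments 3-D neuronal reconstructions, which this toy does not attempt:

    ```python
    import math

    def fit_gmm_1d(xs, k=2, iters=100):
        # Minimal EM for a 1-D Gaussian mixture: a generic illustration
        # of the model family, not the paper's soma-segmentation pipeline.
        # Deterministic init: spread component means across the data range.
        mu = [min(xs) + i * (max(xs) - min(xs)) / (k - 1) for i in range(k)]
        var = [1.0] * k
        w = [1.0 / k] * k
        for _ in range(iters):
            # E-step: posterior responsibility of each component per point.
            resp = []
            for x in xs:
                dens = [
                    w[j] * math.exp(-(x - mu[j]) ** 2 / (2.0 * var[j]))
                    / math.sqrt(2.0 * math.pi * var[j])
                    for j in range(k)
                ]
                s = sum(dens)
                resp.append([d / s for d in dens])
            # M-step: re-estimate weights, means and variances.
            for j in range(k):
                nj = sum(r[j] for r in resp)
                w[j] = nj / len(xs)
                mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
                var[j] = max(
                    sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, xs)) / nj,
                    1e-6,  # floor keeps a component from collapsing
                )
        return w, mu, var
    ```

    On two well-separated clusters the fitted means land on the cluster centres, which is the sense in which a mixture fit yields an objective, univocal partition of the data.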

  18. Simulation studies for the PANDA experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopf, B.

    2005-10-26

    One main component of the planned Facility for Antiproton and Ion Research (FAIR) is the High Energy Storage Ring (HESR) at GSI, Darmstadt, which will provide cooled antiprotons with momenta between 1.5 and 15 GeV/c. The PANDA experiment will investigate p-bar annihilations with internal hydrogen and nuclear targets. Due to the planned extensive physics program, a multipurpose detector with nearly complete solid angle coverage, proper particle identification over a large momentum range, and high resolution calorimetry for neutral particles is required. For the optimization of the detector design, simulation studies of several benchmark channels covering the most relevant physics topics are in progress. Some important simulation results are discussed here.

  19. Verification of a magnetic island in gyro-kinetics by comparison with analytic theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarzoso, D., E-mail: david.zarzoso-fernandez@polytechnique.org; Casson, F. J.; Poli, E.

    A rotating magnetic island is imposed in the gyrokinetic code GKW, when finite differences are used for the radial direction, in order to develop the predictions of analytic tearing mode theory and understand its limitations. The implementation is verified against analytics in sheared slab geometry with three numerical tests that are suggested as benchmark cases for every code that imposes a magnetic island. The convergence requirements to properly resolve physics around the island separatrix are investigated. In the slab geometry, at low magnetic shear, binormal flows inside the island can drive Kelvin-Helmholtz instabilities which prevent the formation of the steady state for which the analytic theory is formulated.

  20. Simulated Exercise Physiology Laboratories.

    ERIC Educational Resources Information Center

    Morrow, James R., Jr.; Pivarnik, James M.

    This book consists of a lab manual and computer disks for either Apple or IBM hardware. The lab manual serves as "tour guide" for the learner going through the various lab experiences. The manual contains definitions, proper terminology, and other basic information about physiological principles. It is organized so a step-by-step procedure may be…

  1. 36 CFR 14.91 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... structure, must show definitely that each one is necessary for a proper use of the right-of-way for the....91 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR RIGHTS-OF..., for a line right-of-way in excess of 100 feet in width or for a structure or facility right-of-way of...

  2. 21 CFR 169.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... part: (a) The term vanilla beans means the properly cured and dried fruit pods of Vanilla planifolia Andrews and of Vanilla tahitensis Moore. (b) The term unit weight of vanilla beans means, in the case of vanilla beans containing not more than 25 percent moisture, 13.35 ounces of such beans; and, in the case...

  3. 21 CFR 169.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... part: (a) The term vanilla beans means the properly cured and dried fruit pods of Vanilla planifolia Andrews and of Vanilla tahitensis Moore. (b) The term unit weight of vanilla beans means, in the case of vanilla beans containing not more than 25 percent moisture, 13.35 ounces of such beans; and, in the case...

  4. 21 CFR 169.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... part: (a) The term vanilla beans means the properly cured and dried fruit pods of Vanilla planifolia Andrews and of Vanilla tahitensis Moore. (b) The term unit weight of vanilla beans means, in the case of vanilla beans containing not more than 25 percent moisture, 13.35 ounces of such beans; and, in the case...

  5. 21 CFR 169.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... part: (a) The term vanilla beans means the properly cured and dried fruit pods of Vanilla planifolia Andrews and of Vanilla tahitensis Moore. (b) The term unit weight of vanilla beans means, in the case of vanilla beans containing not more than 25 percent moisture, 13.35 ounces of such beans; and, in the case...

  6. 46 CFR 188.10-3 - Approved container.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Approved container. 188.10-3 Section 188.10-3 Shipping... PROVISIONS Definition of Terms Used in This Subchapter § 188.10-3 Approved container. This term means a container which is properly labeled, marked and approved by DOT for the commodity which it contains. [CGFR...

  7. 29 CFR 1910.155 - Scope, application and definitions applicable to this subpart.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... moisture absorption (caking) as well as to provide proper flow capabilities. Dry chemical does not include... require formal classroom instruction. (15) Enclosed structure means a structure with a roof or ceiling and... a rigid shell, energy absorption system, and chin strap intended to be worn to provide protection...

  8. 29 CFR 1910.155 - Scope, application and definitions applicable to this subpart.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... moisture absorption (caking) as well as to provide proper flow capabilities. Dry chemical does not include... require formal classroom instruction. (15) Enclosed structure means a structure with a roof or ceiling and... a rigid shell, energy absorption system, and chin strap intended to be worn to provide protection...

  9. 29 CFR 1910.155 - Scope, application and definitions applicable to this subpart.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... moisture absorption (caking) as well as to provide proper flow capabilities. Dry chemical does not include... require formal classroom instruction. (15) Enclosed structure means a structure with a roof or ceiling and... a rigid shell, energy absorption system, and chin strap intended to be worn to provide protection...

  10. 29 CFR 1910.155 - Scope, application and definitions applicable to this subpart.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... moisture absorption (caking) as well as to provide proper flow capabilities. Dry chemical does not include... require formal classroom instruction. (15) Enclosed structure means a structure with a roof or ceiling and... a rigid shell, energy absorption system, and chin strap intended to be worn to provide protection...

  11. 7 CFR 407.11 - Area risk protection insurance for corn.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... addition to the definition contained in the Area Risk Protection Insurance Basic Provisions, corn seed that... accepted application; (3) Properly planted by the final planting date and reported on or before the acreage reporting date; (4) Planted with the intent to be harvested; and (5) Not planted into an established grass...

  12. 7 CFR 407.10 - Area risk protection insurance for barley.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... definition contained in the Area Risk Protection Insurance Basic Provisions, land on which seed is initially... application; (b) Properly planted by the final planting date and reported on or before the acreage reporting date; (c) Planted with the intent to be harvested; (d) Not planted into an established grass or legume...

  13. 21 CFR 169.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... part: (a) The term vanilla beans means the properly cured and dried fruit pods of Vanilla planifolia Andrews and of Vanilla tahitensis Moore. (b) The term unit weight of vanilla beans means, in the case of vanilla beans containing not more than 25 percent moisture, 13.35 ounces of such beans; and, in the case...

  14. Needed: Guidelines for defining acceptable advance regeneration

    Treesearch

    Dennis E. Ferguson

    1984-01-01

    Advance regeneration is an important component in many stands scheduled for harvesting. Properly managed, such regeneration can contribute to a healthy, new stand, but too often trees do not quickly respond to the new environment or take too long to adjust. Definitions of acceptable advance regeneration are needed for pre- and postharvest inventories. The author...

  15. The English Language Learner Variable in Research: One Definition Is Not Enough

    ERIC Educational Resources Information Center

    Debossu, Stephanie C.

    2015-01-01

    Properly defining a population ensures that resources, such as funding and access, meet the needs, expectations, and intended outcomes for those represented. Ethical concerns arise when a target population, such as the English Language Learner population, is defined in numerous yet incomplete ways, and differently in research and in state policies…

  16. Infection control practices for dental radiography.

    PubMed

    Palenik, Charles John

    2004-06-01

    Infection control for dental radiography employs the same materials, processes, and techniques used in the operatory, yet unless proper procedures are established and followed, there is a definite potential for cross-contamination to clinical area surfaces and DHCP. In general, the aseptic practices used are relatively simple and inexpensive, yet they require complete application in every situation.

  17. 26 CFR 1.956-2 - Definition of United States property.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... company equivalent to the unearned premiums or reserves which are ordinary and necessary for the proper... determined from all the facts and circumstances in each case. (vi) [Reserved] For further guidance, see § 1... described in section 953(a)(1) and the regulations thereunder. For purposes of this subdivision, a reserve...

  18. Assessment of economic factors affecting the satellite power system. Volume 1: System cost factors

    NASA Technical Reports Server (NTRS)

    Hazelrigg, G. A., Jr.

    1978-01-01

    The factors relevant to SPS costing and selection of preferred SPS satellite configurations were studied. The issues discussed are: (1) consideration of economic factors in the SPS system that relate to selection of SPS satellite configuration; (2) analysis of the proper rate of interest for use in SPS system definition studies; and (3) the impacts of differential inflation on SPS system definition costing procedures. A cost-risk comparison of the SPS satellite configurations showed a significant difference in the levelized cost of power from them. It is concluded that this difference results more from differences in the procedures for assessing costs than from the satellite technologies required or from any advantage of one satellite configuration over the other. The analysis concludes that the proper rate of interest for use in SPS system definition studies is 4 percent. The major item of differential inflation to be expected over this period of time is the real cost of labor, which is likely to double between today and the period of SPS construction.
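The levelized-cost comparison described above can be illustrated with a standard discounting calculation. This is a minimal sketch using the study's suggested 4 percent real interest rate; the capital, cost, and energy figures are hypothetical, not taken from the report:

```python
def levelized_cost(capital, annual_cost, annual_energy, years, r=0.04):
    """Levelized cost of power: present value of all costs divided by the
    present value of energy delivered, discounted at real rate r."""
    pv_cost = capital + sum(annual_cost / (1 + r) ** t for t in range(1, years + 1))
    pv_energy = sum(annual_energy / (1 + r) ** t for t in range(1, years + 1))
    return pv_cost / pv_energy

# With no capital outlay the levelized cost reduces to annual_cost / annual_energy;
# an up-front capital charge raises it, so the costing procedure (discount rate,
# treatment of inflation) can dominate a comparison between configurations.
lc_no_capital = levelized_cost(0.0, 10.0, 100.0, years=30)
lc_with_capital = levelized_cost(500.0, 10.0, 100.0, years=30)
```

Because differential inflation in labor enters through the annual cost stream, two configurations with similar technology can show different levelized costs purely through the assessment procedure, which is the report's point.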

  19. Hazardous Material Packaging and Transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hypes, Philip A.

    2016-02-04

    This is a student training course. Some course objectives are to: recognize and use standard international and US customary units to describe activities and exposure rates associated with radioactive material; determine whether a quantity of a single radionuclide meets the definition of a class 7 (radioactive) material; determine, for a given single radionuclide, the shipping quantity activity limits per 49 Code of Federal Regulations (CFR) 173.435; determine the appropriate radioactive material hazard class proper shipping name for a given material; determine when a single radionuclide meets the DOT definition of a hazardous substance; determine the appropriate packaging required for a given radioactive material; identify the markings to be placed on a package of radioactive material; determine the label(s) to apply to a given radioactive material package; identify the entry requirements for radioactive material labels; determine the proper placement for radioactive material label(s); identify the shipping paper entry requirements for radioactive material; select the appropriate placards for a given radioactive material shipment or vehicle load; and identify allowable transport limits and unacceptable transport conditions for radioactive material.

  20. The Smithsonian Earth Physics Satellite (SEPS) definition study, volumes 1 through 4

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A limited Phase B study was undertaken to determine the merit and feasibility of launching a proposed earth physics satellite with Apollo-type hardware. The study revealed that it would be feasible to launch this satellite using a S-IB stage, a S-IVB with restart capability, an instrument unit, a SLA for the satellite shroud, and a nose cone (AS-204 configuration). A definition of the proposed satellite is provided, which is specifically designed to satisfy the fundamental requirement of providing an orbiting benchmark of maximum accuracy. The satellite is a completely passive, solid 3628-kg sphere of 38.1-cm radius and very high mass-to-area ratio (7980 kg/sq m). In the suggested orbit of 55 degrees inclination, 3720 km altitude, and low eccentricity, the orbital lifetime is extremely long, so many decades of operation can be expected.

  1. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models.

    PubMed

    Beard, Brian B; Kainz, Wolfgang

    2004-10-13

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head.

  2. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models

    PubMed Central

    Beard, Brian B; Kainz, Wolfgang

    2004-01-01

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head. PMID:15482601

  3. The [(Al2O3)2]- Anion Cluster: Electron Localization-Delocalization Isomerism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierka, Marek; Dobler, Jens; Sauer, Joachim

    2009-10-05

    Three-dimensional bulk alumina and its two-dimensional thin films show great structural diversity, posing considerable challenges to their experimental structural characterization and computational modeling. Recently, structural diversity has also been demonstrated for zero-dimensional gas phase aluminum oxide clusters. Mass-selected clusters not only make systematic studies of the structural and electronic properties as a function of size possible, but lately have also emerged as powerful molecular models of complex surfaces and solid catalysts. In particular, the [(Al2O3)3-5]+ clusters were the first example of polynuclear main-group metal oxide clusters that are able to thermally activate CH4. Over the past decades gas phase aluminum oxide clusters have been extensively studied both experimentally and computationally, but definitive structural assignments were made for only a handful of them: the planar [Al3O3]- and [Al5O4]- cluster anions, and the [(Al2O3)1-4(AlO)]+ cluster cations. For stoichiometric clusters only the atomic structures of [(Al2O3)4]+/0 have been unambiguously resolved. Here we report on the structures of the [(Al2O3)2]-/0 clusters, combining photoelectron spectroscopy (PES) and quantum chemical calculations employing a genetic algorithm as a global optimization technique. The [(Al2O3)2]- cluster anion shows energetically close-lying but structurally distinct cage and sheet-like isomers which differ by delocalization/localization of the extra electron. The experimental results are crucial for benchmarking the different computational methods applied with respect to a proper description of electron localization and the relative energies of the isomers, which will be of considerable value for future computational studies of aluminum oxide and related systems.

  4. On the definition of a confounder

    PubMed Central

    VanderWeele, Tyler J.; Shpitser, Ilya

    2014-01-01

    Summary The causal inference literature has provided a clear formal definition of confounding expressed in terms of counterfactual independence. The causal inference literature has not, however, produced a clear formal definition of a confounder, as it has given priority to the concept of confounding over that of a confounder. We consider a number of candidate definitions arising from various more informal statements made in the literature. We consider the properties satisfied by each candidate definition, principally focusing on (i) whether under the candidate definition control for all "confounders" suffices to control for "confounding" and (ii) whether each confounder in some context helps eliminate or reduce confounding bias. Several of the candidate definitions do not have these two properties. Only one candidate definition of those considered satisfies both properties. We propose that a "confounder" be defined as a pre-exposure covariate C for which there exists a set of other covariates X such that the effect of the exposure on the outcome is unconfounded conditional on (X, C), but such that for no proper subset of (X, C) is the effect of the exposure on the outcome unconfounded given the subset. We propose referring to a variable that helps reduce but does not eliminate bias as a "surrogate confounder." PMID:25544784
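The proposed definition can be restated compactly in counterfactual notation. This is only a sketch of the abstract's wording, with Y_a denoting the potential outcome under exposure level a of exposure A:

```latex
% A pre-exposure covariate C is a confounder for the effect of A on Y iff
% there exists a set of other covariates X such that
Y_a \perp\!\!\!\perp A \mid (X, C)
% while unconfoundedness fails for every proper subset of (X, C):
\forall\, S \subsetneq (X, C): \quad Y_a \not\perp\!\!\!\perp A \mid S
```

The minimality clause is what separates a confounder from a merely helpful covariate: dropping any element of (X, C) reintroduces confounding bias.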

  5. Complexity and Demographic Explanations of Cumulative Culture

    PubMed Central

    Querbes, Adrien; Vaesen, Krist; Houkes, Wybo

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing, and favoured by increasing, population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture when formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence does not allow discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change. PMID:25048625

  6. Benchmarking the Cost per Person of Mass Treatment for Selected Neglected Tropical Diseases: An Approach Based on Literature Review and Meta-regression with Web-Based Software Application

    PubMed Central

    Fitzpatrick, Christopher; Fleming, Fiona M.; Madin-Warburton, Matthew; Schneider, Timm; Meheus, Filip; Asiedu, Kingsley; Solomon, Anthony W.; Montresor, Antonio; Biswas, Gautam

    2016-01-01

    Background Advocacy around mass treatment for the elimination of selected Neglected Tropical Diseases (NTDs) has typically put the cost per person treated at less than US$ 0.50. Whilst useful for advocacy, the focus on a single number misrepresents the complexity of delivering “free” donated medicines to about a billion people across the world. We perform a literature review and meta-regression of the cost per person per round of mass treatment against NTDs. We develop a web-based software application (https://healthy.shinyapps.io/benchmark/) to calculate setting-specific unit costs against which programme budgets and expenditures or results-based pay-outs can be benchmarked. Methods We reviewed costing studies of mass treatment for the control, elimination or eradication of lymphatic filariasis, schistosomiasis, soil-transmitted helminthiasis, onchocerciasis, trachoma and yaws. These are the main 6 NTDs for which mass treatment is recommended. We extracted financial and economic unit costs, adjusted to a standard definition and base year. We regressed unit costs on the number of people treated and other explanatory variables. Regression results were used to “predict” country-specific unit cost benchmarks. Results We reviewed 56 costing studies and included in the meta-regression 34 studies from 23 countries and 91 sites. Unit costs were found to be very sensitive to economies of scale, and the decision of whether or not to use local volunteers. Financial unit costs are expected to be less than 2015 US$ 0.50 in most countries for programmes that treat 100 thousand people or more. However, for smaller programmes, including those in the “last mile”, or those that cannot rely on local volunteers, both economic and financial unit costs are expected to be higher. Discussion The available evidence confirms that mass treatment offers a low cost public health intervention on the path towards universal health coverage. However, more costing studies focussed on elimination are needed. Unit cost benchmarks can help in monitoring value for money in programme plans, budgets and accounts, or in setting a reasonable pay-out for results-based financing mechanisms. PMID:27918573

  7. Benchmarking the Cost per Person of Mass Treatment for Selected Neglected Tropical Diseases: An Approach Based on Literature Review and Meta-regression with Web-Based Software Application.

    PubMed

    Fitzpatrick, Christopher; Fleming, Fiona M; Madin-Warburton, Matthew; Schneider, Timm; Meheus, Filip; Asiedu, Kingsley; Solomon, Anthony W; Montresor, Antonio; Biswas, Gautam

    2016-12-01

    Advocacy around mass treatment for the elimination of selected Neglected Tropical Diseases (NTDs) has typically put the cost per person treated at less than US$ 0.50. Whilst useful for advocacy, the focus on a single number misrepresents the complexity of delivering "free" donated medicines to about a billion people across the world. We perform a literature review and meta-regression of the cost per person per round of mass treatment against NTDs. We develop a web-based software application (https://healthy.shinyapps.io/benchmark/) to calculate setting-specific unit costs against which programme budgets and expenditures or results-based pay-outs can be benchmarked. We reviewed costing studies of mass treatment for the control, elimination or eradication of lymphatic filariasis, schistosomiasis, soil-transmitted helminthiasis, onchocerciasis, trachoma and yaws. These are the main 6 NTDs for which mass treatment is recommended. We extracted financial and economic unit costs, adjusted to a standard definition and base year. We regressed unit costs on the number of people treated and other explanatory variables. Regression results were used to "predict" country-specific unit cost benchmarks. We reviewed 56 costing studies and included in the meta-regression 34 studies from 23 countries and 91 sites. Unit costs were found to be very sensitive to economies of scale, and the decision of whether or not to use local volunteers. Financial unit costs are expected to be less than 2015 US$ 0.50 in most countries for programmes that treat 100 thousand people or more. However, for smaller programmes, including those in the "last mile", or those that cannot rely on local volunteers, both economic and financial unit costs are expected to be higher. The available evidence confirms that mass treatment offers a low cost public health intervention on the path towards universal health coverage. However, more costing studies focussed on elimination are needed. Unit cost benchmarks can help in monitoring value for money in programme plans, budgets and accounts, or in setting a reasonable pay-out for results-based financing mechanisms.
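The meta-regression logic, regressing log unit cost on log scale to capture economies of scale and then predicting a setting-specific benchmark, can be sketched as follows. The observations here are invented for illustration and are not the study's data:

```python
import math

# Hypothetical observations: people treated per round and unit cost (US$)
people = [1e4, 5e4, 1e5, 5e5, 1e6, 5e6]
cost = [1.80, 0.95, 0.70, 0.40, 0.32, 0.20]

# Ordinary least squares on the log-log scale: log(cost) = a + b * log(people)
x = [math.log(p) for p in people]
y = [math.log(c) for c in cost]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

def benchmark_unit_cost(n_treated):
    """Predicted unit-cost benchmark for a programme of a given scale."""
    return math.exp(a + b * math.log(n_treated))
```

A negative slope b encodes the economies of scale the review reports: large programmes fall below the US$ 0.50 advocacy figure while "last mile" programmes sit above their own, higher, benchmark.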

  8. VizieR Online Data Catalog: Overlooked wide companions of nearby F stars (Scholz, 2016)

    NASA Astrophysics Data System (ADS)

    Scholz, R.-D.

    2016-02-01

    We checked a sample of 545 F stars within 50pc for wide companions using existing near-infrared and optical sky surveys. Applying the common proper motion (CPM) criterion, we detected wide companion candidates with 6-120arcsec angular separations by visual inspection of multi-epoch finder charts and by searching in proper motion catalogues. Final proper motions were measured by involving positional measurements from up to eleven surveys. Spectral types of red CPM companions were estimated from their absolute J-band magnitudes based on the Hipparcos distances of the primaries. In addition to about 100 known CPM objects, we found 19 new CPM companions and confirmed 31 previously known candidates. A few CPM objects are still considered as candidates according to their level of proper motion agreement. Among the new objects there are nine M0-M4, eight M5-M6, one ~L3.5 dwarf (HD 3861B), and one white dwarf (WD) (HD 2726B), whereas we confirmed two K, 19 M0-M4, six M5-M6, two early-L dwarfs, and two DA WDs as CPM companions. In a few cases, previous spectral types were available that all agree well with our estimates. Two companions (HD 22879B and HD 49933B) are associated with moderately metal-poor Gaia benchmark stars. One doubtful CPM companion, spectroscopically classified as WD but found to be very bright (J=11.1) by others, should either be a very nearby foreground WD or a different kind of object associated with HD 165670. The main results of this research note, data on new, confirmed, and rejected CPM pairs, are listed in tablea1.dat, tableb1.dat, and tablec1.dat, respectively. (4 data files).

  9. Noncovalent interactions between cisplatin and graphene prototypes.

    PubMed

    Cuevas-Flores, Ma Del Refugio; Garcia-Revilla, Marco Antonio; Bartolomei, Massimiliano

    2018-01-15

    Cisplatin (CP) has been widely used as an anticancer drug for more than 30 years despite severe side effects due to its low bioavailability and poor specificity. For this reason, it is paramount to study and design novel nanomaterials to be used as vectors capable of effectively delivering the drug to the biological target. The CP square-planar geometry, together with its low water solubility, suggests that it could be easily adsorbed on 2D graphene nanostructures through interaction with the related highly conjugated π-electron system. In this work, pyrene has first been selected as the minimum approximation to the graphene plane, which allows proper study of the noncovalent interactions determining CP adsorption. In particular, electronic structure calculations at the MP2C and DFT-SAPT levels of theory have provided benchmark interaction energies for some limiting configurations of the CP-pyrene complex, as well as an assessment of the role of the different contributions to the total interaction: it has been found that the parallel configurations of the aggregate are mainly stabilized around the minimum region by dispersion, in a similar way as for complexes bonded through π-π interactions. The benchmark interaction energies have then been used to test corresponding estimations obtained within the less expensive DFT, to validate an optimal exchange-correlation functional which includes corrections to properly account for the dispersion contribution. Reliable DFT interaction energies have therefore been obtained for CP adsorbed on graphene prototypes of increasing size, ranging from coronene and ovalene up to C150H30. Finally, DFT geometry optimizations and frequency calculations have also allowed a reliable estimation of the adsorption enthalpy of CP on graphene, which is found to be particularly favorable (about -20 kcal/mol at 298 K and 1 bar), being twice that estimated for the corresponding benzene adsorption. © 2017 Wiley Periodicals, Inc.

  10. Change in quality management in diabetes care groups and outpatient clinics after feedback and tailored support.

    PubMed

    Campmans-Kuijpers, Marjo J; Baan, Caroline A; Lemmens, Lidwien C; Rutten, Guy E

    2015-02-01

    To assess the change in level of diabetes quality management in primary care groups and outpatient clinics after feedback and tailored support. This before-and-after study with a 1-year follow-up surveyed quality managers on six domains of quality management. Questionnaires measured organization of care, multidisciplinary teamwork, patient centeredness, performance results, quality improvement policy, and management strategies (score range 0-100%). Based on the scores, responders received feedback and a benchmark and were granted access to a toolbox of quality improvement instruments. If requested, additional support in improving quality management was available, consisting of an elucidating phone call or a visit from an experienced consultant. After 1 year, the level of quality management was measured again. Of the initially 60 participating care groups, 51 completed the study. The total quality management score improved from 59.8% (95% CI 57.0-62.6%) to 65.1% (62.8-67.5%; P < 0.0001). The same applied to all six domains. The feedback and benchmark improved the total quality management score (P = 0.001). Of the 44 participating outpatient clinics, 28 completed the study. Their total score changed from 65.7% (CI 60.3-71.1%) to 67.3% (CI 62.9-71.7%; P = 0.30). Only the results in the domain multidisciplinary teamwork improved (P = 0.001). Measuring quality management and providing feedback and a benchmark improves the level of quality management in care groups but not in outpatient clinics. The questionnaires might also be a useful asset for other diabetes care groups, such as Accountable Care Organizations. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  11. International variation in adherence to referral guidelines for suspected cancer: a secondary analysis of survey data.

    PubMed

    Nicholson, Brian D; Mant, David; Neal, Richard D; Hart, Nigel; Hamilton, Willie; Shinkins, Bethany; Rubin, Greg; Rose, Peter W

    2016-02-01

    Variation in cancer survival persists between comparable nations and appears to be due, in part, to primary care practitioners (PCPs) having different thresholds for acting definitively in response to cancer-related symptoms. To explore whether cancer guidelines, and adherence to them, differ between jurisdictions and impact PCPs' propensity to take definitive action on cancer-related symptoms. A secondary analysis of survey data from six countries (10 jurisdictions) participating in the International Cancer Benchmarking Partnership. PCPs' responses to five clinical vignettes presenting symptoms and signs of lung (n = 2), colorectal (n = 2), and ovarian cancer (n = 1) were compared with investigation and referral recommendations in cancer guidelines. Nine jurisdictions had guidelines covering the two colorectal vignettes. For the lung vignettes, although eight jurisdictions had guidelines for the first, the second was covered by a Swedish guideline alone. Only the UK and Denmark had an ovarian cancer guideline. Survey responses of 2795 PCPs (crude response rate: 12%) were analysed. Guideline adherence ranged from 20-82%. UK adherence was lower than other jurisdictions for the lung vignette covered by the guidance (47% versus 58%; P <0.01) but similar (45% versus 46%) or higher (67% versus 38%; P <0.01) for the two colorectal vignettes. PCPs took definitive action least often when a guideline recommended a non-definitive action or made no recommendation. UK PCPs adhered to recommendations for definitive action less than their counterparts (P <0.01). There was no association between jurisdictional guideline adherence and 1-year survival. Cancer guideline content is variable between similarly developed nations and poor guideline adherence does not explain differential survival. Guidelines that fail to cover high-risk presentations or that recommend non-definitive action may reduce definitive diagnostic action. © British Journal of General Practice 2016.

  12. Stochastic fluctuations and the detectability limit of network communities.

    PubMed

    Floretta, Lucio; Liechti, Jonas; Flammini, Alessandro; De Los Rios, Paolo

    2013-12-01

    We have analyzed the detectability limits of network communities in the framework of the popular Girvan and Newman benchmark. By carefully taking into account the inevitable stochastic fluctuations that affect the construction of each and every instance of the benchmark, we come to the conclusion that the native, putative partition of the network is completely lost even before the in-degree/out-degree ratio becomes equal to that of a structureless Erdős-Rényi network. We develop a simple iterative scheme, analytically well described by an infinite branching process, to provide an estimate of the true detectability limit. Using various algorithms based on modularity optimization, we show that all of them behave (semiquantitatively) in the same way, with the same functional form of the detectability threshold as a function of the network parameters. Because the same behavior has also been found with further modularity-optimization methods and with methods based on different heuristics, we conclude that a correct definition of the detectability limit must indeed take into account the stochastic fluctuations of the network construction.
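The detectability effect can be reproduced in miniature with a planted two-community benchmark and Newman's modularity Q evaluated on the planted partition. This is a stdlib-only sketch, not the authors' iterative scheme; `planted_two_block` and `modularity` are ad hoc helpers written for this illustration:

```python
import random

random.seed(0)

def planted_two_block(n_per, p_in, p_out):
    """Random graph with two planted communities of size n_per each:
    within-community edges appear with probability p_in, cross edges with p_out."""
    n = 2 * n_per
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            same = (i < n_per) == (j < n_per)
            if random.random() < (p_in if same else p_out):
                edges.append((i, j))
    return n, edges

def modularity(n, edges, comm):
    """Newman modularity Q = sum_c [ l_c/m - (d_c/2m)^2 ] for partition comm."""
    m = len(edges)
    deg = [0] * n
    l_c, d_c = {}, {}
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
        if comm[i] == comm[j]:
            l_c[comm[i]] = l_c.get(comm[i], 0) + 1
    for i in range(n):
        d_c[comm[i]] = d_c.get(comm[i], 0) + deg[i]
    return sum(l_c.get(c, 0) / m - (d_c[c] / (2 * m)) ** 2 for c in d_c)

planted = [0] * 32 + [1] * 32
n, strong_edges = planted_two_block(32, p_in=0.5, p_out=0.05)  # clear structure
_, weak_edges = planted_two_block(32, p_in=0.27, p_out=0.27)   # structureless limit
q_strong = modularity(n, strong_edges, planted)
q_weak = modularity(n, weak_edges, planted)
```

As p_in approaches p_out the planted partition's modularity falls to the level of a structureless random graph, and because each benchmark instance is a random draw, the partition can become undetectable before the parameters formally reach that limit.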

  13. Fixed-Order Mixed Norm Designs for Building Vibration Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.

    2000-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  14. A general theory of effect size, and its consequences for defining the benchmark response (BMR) for continuous endpoints.

    PubMed

    Slob, Wout

    2017-04-01

    A general theory on effect size for continuous data predicts a relationship between maximum response and within-group variation of biological parameters, which is empirically confirmed by results from dose-response analyses of 27 different biological parameters. The theory shows how effect sizes observed in distinct biological parameters can be compared and provides a basis for a generic definition of small, intermediate and large effects. While the theory is useful for experimental science in general, it has specific consequences for risk assessment: it resolves the current debate on the appropriate metric for the benchmark response in continuous data. The theory shows that scaling the BMR, expressed as a percent change in means, to the maximum response (in the way specified) automatically takes "natural variability" into account. Thus, the theory supports the underlying rationale of the BMR of 1 SD. For various reasons, it is, however, recommended to use a BMR in terms of a percent change that is scaled to maximum response and/or within-group variation (averaged over studies), as a single harmonized approach.
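A benchmark dose (BMD) for a BMR expressed as a percent change in means can be read off a dose-response curve by interpolation. The sketch below uses made-up group means, not data from the paper, and plain linear interpolation rather than a fitted dose-response model:

```python
# Hypothetical dose-response summary: group means of a continuous endpoint
doses = [0, 10, 30, 100]
means = [100.0, 103.0, 110.0, 125.0]

bmr_percent = 5.0  # benchmark response: a 5% change from the control mean
target = means[0] * (1 + bmr_percent / 100)  # response level that defines the BMD

# Linear interpolation between the two dose groups that bracket the target
bmd = None
for (d0, m0), (d1, m1) in zip(zip(doses, means), zip(doses[1:], means[1:])):
    if m0 <= target <= m1:
        bmd = d0 + (target - m0) * (d1 - d0) / (m1 - m0)
        break
```

Scaling `bmr_percent` to the endpoint's maximum response or within-group SD (the BMR 1 SD convention) is what the theory argues makes the resulting BMDs comparable across endpoints.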

  15. Exploring Stakeholder Definitions within the Aerospace Industry: A Qualitative Case Study

    NASA Astrophysics Data System (ADS)

    Hebert, Jonathan R.

    A best practice in the discipline of project management is to identify all key project stakeholders prior to the execution of a project. When stakeholders are properly identified, they can be consulted to provide expert advice on project activities so that the project manager can ensure the project stays within its budget and schedule constraints. The problem addressed by this study is that managers fail to properly identify key project stakeholders when using stakeholder theory because there are multiple conflicting definitions of the term stakeholder. Poor stakeholder identification has been linked to multiple negative project outcomes, such as budget and schedule overruns, and this problem is heightened in certain industries such as aerospace. The purpose of this qualitative study was to explore project managers' and project stakeholders' perceptions of how they define and use the term stakeholder within the aerospace industry. This qualitative exploratory single-case study had two embedded units of analysis: project managers and project stakeholders. Six aerospace project managers and five aerospace project stakeholders were purposively selected for this study. Data were collected through individual semi-structured interviews with both project managers and project stakeholders. All data were analyzed using Yin's (2011) five-phased cycle approach for qualitative research. The results indicated that the aerospace project managers and project stakeholders define the term stakeholder as "those who do the work of a company." The participants built upon this well-known concept by adding that "a company should list specific job titles" that correspond to its company-specific stakeholder definition. Results also indicated that the definition of the term stakeholder is used when management is assigning human resources to a project to mitigate or control project risk.
    Results showed that project managers tended to include the customer in their stakeholder definitions, while project stakeholders included a wider range of stakeholders, from young employees to union workers. Practical recommendations based on the study's findings include that companies develop company-specific definitions of the term stakeholder. Future research should explore how CEOs, executive members, new hires, and hourly workers define and use the term stakeholder in the aerospace industry.

  16. Integration of oncology and palliative care: setting a benchmark.

    PubMed

    Vayne-Bossert, P; Richard, E; Good, P; Sullivan, K; Hardy, J R

    2017-10-01

    Integration of oncology and palliative care (PC) should be the standard model of care for patients with advanced cancer. An expert panel developed criteria that constitute integration. This study determined whether the PC service within this Health Service, which is considered to be fully "integrated", could be benchmarked against these criteria. A survey was undertaken to determine the perceived level of integration of oncology and palliative care among all health care professionals (HCPs) within our cancer centre. An objective determination of integration was obtained from chart reviews of deceased patients. Integration was defined as >70% of all respondents answering "agree" or "strongly agree" to each indicator and >70% of patient charts supporting each criterion. Thirty-four HCPs participated in the survey (response rate 69%). Over 90% were aware of the outpatient PC clinic, the interdisciplinary and consultation team, PC senior leadership, and the acceptance of concurrent anticancer therapy. None of the other criteria met the 70% agreement mark, but many respondents lacked the necessary knowledge to respond. The chart review included 67 patients, 92% of whom were seen by the PC team prior to death. The median time from referral to death was 103 days (range 0-1347). The level of agreement across all criteria was below our predefined definition of integration. The integration criteria relating to service delivery are medically focused and do not lend themselves to interdisciplinary review. The objective criteria can be audited and serve both as a benchmark and as a basis for improvement activities.

  17. Weighing Photons Using Bathroom Scales: A Thought Experiment

    ERIC Educational Resources Information Center

    Huggins, Elisha

    2010-01-01

    Jay Orear, in his introductory physics text, defined the weight of a person as the reading one gets when standing on a (properly calibrated) bathroom scale. Here we will use Jay's definition of weight in a thought experiment to measure the weight of a photon. The thought experiment uses the results of the Pound-Rebka-Snider experiments, Compton…

  18. Edible Wild Plants from Neighborhood to Wilderness: A Catalyst for Experiential Education.

    ERIC Educational Resources Information Center

    Kallas, John

    Wild foods are ubiquitous motivational tools for teaching botany, environmental education, cultural foodways, and survival. Edible wild plants are wild plants endowed with one or more parts that can be used for food if gathered at the appropriate stage of growth and properly prepared. The components of this definition are discussed with…

  19. Exploring the Articulation and Evaluation of the Learning Outcomes of Instructor Development: A Case Study

    ERIC Educational Resources Information Center

    Macfarlane, Jack P.

    2014-01-01

    The definition of clear goals is essential to the effectiveness of training intended for the advancement of professional development. Properly articulated instructional goals also facilitate the evaluation of the intended outcomes of the learning event. Although outcomes-based instruction is prevalent in K-12 settings as well as in post-secondary…

  20. Redefining Health: The Evolution of Health Ideas from Antiquity to the Era of Value-Based Care.

    PubMed

    Badash, Ido; Kleinman, Nicole P; Barr, Stephanie; Jang, Julie; Rahman, Suraiya; Wu, Brian W

    2017-02-09

    The current healthcare system in the United States (US) is characterized by high costs and poor patient outcomes. A value-based healthcare system, centered on providing the highest quality of care for the lowest cost, is the country's chosen solution for its healthcare crisis. As the US transitions to a value-based model, a new definition of health is necessary to clearly define what constitutes a healthy state. However, such a definition is impossible to develop without a proper understanding of what "health" actually means. To truly understand its meaning, one must have a thorough historical understanding of the changes in the concept of health and how it has evolved to reflect the beliefs and scientific understanding of each time period. Thus, this review summarizes the changes in the definition of health over time in order to provide a context for the definition needed today. We then propose a new definition of health that is specifically tailored to providers working in the era of value-based care.

  1. Redefining Health: The Evolution of Health Ideas from Antiquity to the Era of Value-Based Care

    PubMed Central

    Kleinman, Nicole P; Barr, Stephanie; Jang, Julie; Rahman, Suraiya; Wu, Brian W

    2017-01-01

    The current healthcare system in the United States (US) is characterized by high costs and poor patient outcomes. A value-based healthcare system, centered on providing the highest quality of care for the lowest cost, is the country’s chosen solution for its healthcare crisis. As the US transitions to a value-based model, a new definition of health is necessary to clearly define what constitutes a healthy state. However, such a definition is impossible to develop without a proper understanding of what “health” actually means. To truly understand its meaning, one must have a thorough historical understanding of the changes in the concept of health and how it has evolved to reflect the beliefs and scientific understanding of each time period. Thus, this review summarizes the changes in the definition of health over time in order to provide a context for the definition needed today. We then propose a new definition of health that is specifically tailored to providers working in the era of value-based care.  PMID:28348937

  2. The Vienna consensus: report of an expert meeting on the development of ART laboratory performance indicators.

    PubMed

    2017-11-01

    This proceedings report presents the outcomes from an international workshop supported by the European Society of Human Reproduction and Embryology (ESHRE) and Alpha Scientists in Reproductive Medicine, designed to establish consensus on definitions and recommended values for Indicators for the assisted reproductive technology (ART) laboratory. Minimum performance-level values ('competency') and aspirational ('benchmark') values were recommended for a total of 19 Indicators, including 12 Key Performance Indicators (KPIs), five Performance Indicators (PIs), and two Reference Indicators (RIs). Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  3. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for the k-eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  4. Transonic Flutter Suppression Control Law Design, Analysis and Wind Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  5. Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  6. Transonic Flutter Suppression Control Law Design Using Classical and Optimal Techniques with Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  7. Image segmentation with a novel regularized composite shape prior based on surrogate study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement in image segmentation accuracy when compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves segmentation performance superior to typical benchmark schemes.

  8. Using Bayesian Inference Framework towards Identifying Gas Species and Concentration from High Temperature Resistive Sensor Array Data

    DOE PAGES

    Liu, Yixin; Zhou, Kai; Lei, Yu

    2015-01-01

    High temperature gas sensors are in high demand for combustion process optimization and toxic emissions control, but they usually suffer from poor selectivity. To address this selectivity issue and identify unknown reducing gas species (CO, CH4, and C3H8) and concentrations, a high temperature resistive sensor array data set was built in this study based on 5 reported sensors. Each sensor showed a specific response towards the different reducing gases at given concentrations, from which calibration curves were fitted to provide a benchmark sensor array response database. A Bayesian inference framework was then used to process the sensor array data and build a sample selection program that simultaneously identifies gas species and concentration, by formulating a proper likelihood between the measured sensor array response pattern of an unknown gas and each sampled response pattern in the benchmark database. The algorithm is robust and can accurately identify gas species and predict gas concentration with an error of less than 10% from a limited amount of experimental data. These features indicate that the Bayesian probabilistic approach is a simple and efficient way to process sensor array data, one that can significantly reduce the required computational overhead and training data.
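
The matching step described above can be sketched as a maximum-a-posteriori search over a grid of (gas, concentration) candidates. Everything below is a toy illustration under stated assumptions: the calibration curves, noise level, and sensor count are invented, not the paper's sensors or data.

```python
import math

# Illustrative sketch (all numbers hypothetical): identify a gas and its
# concentration from a resistive sensor array with Bayes' rule, using a
# Gaussian likelihood around calibration-curve predictions.

# Hypothetical linear calibration curves per gas: (slope, offset) per sensor.
CALIB = {
    "CO":  [(0.020, 1.0), (0.010, 0.5), (0.005, 0.2)],
    "CH4": [(0.005, 0.8), (0.030, 0.4), (0.010, 0.3)],
}
SIGMA = 0.05  # assumed per-sensor measurement noise (standard deviation)

def identify(measured, concentrations):
    """Return the (gas, concentration) pair maximizing the posterior,
    assuming a flat prior over the candidate grid."""
    best, best_log_post = None, -math.inf
    for gas, curves in CALIB.items():
        for c in concentrations:
            predicted = [slope * c + offset for slope, offset in curves]
            # Log of an isotropic Gaussian likelihood; the flat prior and
            # the normalizing constant drop out of the argmax.
            log_like = -0.5 * sum(((m - p) / SIGMA) ** 2
                                  for m, p in zip(measured, predicted))
            if log_like > best_log_post:
                best, best_log_post = (gas, c), log_like
    return best

# Simulate a noiseless CH4 reading at 50 ppm and recover it from the grid.
reading = [s * 50 + o for s, o in CALIB["CH4"]]
gas, conc = identify(reading, concentrations=range(0, 101, 10))
print(gas, conc)
```

Because every sensor responds to every gas, no single channel is selective; it is the joint pattern across channels, scored by the likelihood, that separates the candidates.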

  9. On the Performance of Linear Decreasing Inertia Weight Particle Swarm Optimization for Global Optimization

    PubMed Central

    Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

    2013-01-01

    Linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to suffer from premature convergence in solving complex (multipeak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits with which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
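
The algorithm under discussion is compact enough to sketch. The following is a minimal LDIW-PSO on the sphere benchmark function; the parameter values (w from 0.9 to 0.4, c1 = c2 = 2.0, velocity clamped to half the search range) are common defaults from the PSO literature, not the specific tuned settings the paper derives.

```python
import random

# Minimal LDIW-PSO sketch: inertia weight decreases linearly from w_max to
# w_min over the run, trading early exploration for late exploitation.
def sphere(x):
    return sum(v * v for v in x)

def ldiw_pso(dim=5, n_particles=20, iters=300, w_max=0.9, w_min=0.4,
             c1=2.0, c2=2.0, bound=5.0, seed=1):
    rng = random.Random(seed)
    vmax = 0.5 * bound  # velocity limit as a fraction of the search range
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [sphere(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        # Linearly decreasing inertia weight.
        w = w_max - (w_max - w_min) * t / (iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                pos[i][d] += vel[i][d]
            f = sphere(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest_f

print(ldiw_pso())
```

On a unimodal function like the sphere this converges reliably; the premature-convergence critique the paper answers concerns multipeak functions, where the shrinking inertia weight can trap the swarm before it has explored enough.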

  10. The Isothermal Dendritic Growth Experiment Archive

    NASA Astrophysics Data System (ADS)

    Koss, Matthew

    2009-03-01

    The growth of dendrites is governed by the interplay between two simple and familiar processes: the irreversible diffusion of energy, and the reversible work done in the formation of new surface area. To advance our understanding of these processes, NASA sponsored a project that flew on the Space Shuttle Columbia in 1994, 1996, and 1997 to record and analyze benchmark data in an apparent-microgravity "laboratory." In this laboratory, energy transfer by gravity-driven convection was essentially eliminated and one could test independently, for the first time, both components of dendritic growth theory. The analysis of this data shows that although the diffusion of energy can be properly accounted for, the results from interfacial physics appear to be in disagreement and alternate models should receive increased attention. Unfortunately, currently and for the foreseeable future, there is no access or financial support to develop and conduct additional experiments of this type. However, the benchmark data of 35mm photonegatives, video, and all supporting instrument data are now available at the IDGE Archive at the College of the Holy Cross. This data may still have considerable relevance to researchers working specifically with dendritic growth, and more generally those working in the synthesis, growth & processing of materials, multiscale computational modeling, pattern formation, and systems far from equilibrium.

  11. Culture-specific delusions. Sense and nonsense in cultural context.

    PubMed

    Gaines, A D

    1995-06-01

    It can be said that a definition of delusions requires the invocation of cultural understandings and standards of acceptability, as well as conceptions of reality and the forces that animate it. For these reasons, the determination of delusional or normative ideation can only be effected properly within particular cultural contexts. The cross-cultural record suggests that it is difficult to separate the delusional from the cultural; a belief that is patterned and culturally specific is, by definition, a cultural, not a delusional, belief. One must rely upon particular, relevant local cultural understandings to ascertain when the bounds of culture have been transgressed and meaning has given way to unshareable nonsense.

  12. Nutraceuticals: opening the debate for a regulatory framework

    PubMed Central

    Santini, Antonello; Cammarata, Silvia Miriam; Capone, Giacomo; Ianaro, Angela; Tenore, Gian Carlo; Pani, Luca

    2018-01-01

    Currently, nutraceuticals do not have a specific definition distinct from those of other food‐derived categories, such as food supplements, herbal products, pre‐ and probiotics, functional foods, and fortified foods. Many studies have led to an understanding of the potential mechanisms of action of pharmaceutically active components contained in food that may improve health and reduce the risk of pathological conditions while enhancing overall well‐being. Nevertheless, there is a lack of clear information and, often, the claimed health benefits may not be properly substantiated by safety and efficacy information or in vitro and in vivo data, which can induce false expectations and miss the target for a product to be effective, as claimed. An officially shared and accepted definition of nutraceuticals is still missing, as nutraceuticals are mostly referred to as pharma‐foods, a powerful toolbox to be used beyond the diet but before the drugs to prevent and treat pathological conditions, such as in subjects who may not yet be eligible for conventional pharmaceutical therapy. Hence, it is of utmost importance to have a proper and unequivocal definition of nutraceuticals and shared regulations. It also seems wise to assess the safety, mechanism of action and efficacy of nutraceuticals with clinical data. A growing demand exists for nutraceuticals, which seem to reside in the grey area between pharmaceuticals and food. Nonetheless, given specific legislation from different countries, nutraceuticals are experiencing challenges with safety and health claim substantiation. PMID:29433155

  13. Computational Functional Analysis of Lipid Metabolic Enzymes.

    PubMed

    Bagnato, Carolina; Have, Arjen Ten; Prados, María B; Beligni, María V

    2017-01-01

    The computational analysis of enzymes that participate in lipid metabolism has both common and unique challenges when compared to the whole protein universe. Some of the hurdles that interfere with the functional annotation of lipid metabolic enzymes that are common to other pathways include the definition of proper starting datasets, the construction of reliable multiple sequence alignments, the definition of appropriate evolutionary models, and the reconstruction of phylogenetic trees with high statistical support, particularly for large datasets. Most enzymes that take part in lipid metabolism belong to complex superfamilies with many members that are not involved in lipid metabolism. In addition, some enzymes that do not have sequence similarity catalyze similar or even identical reactions. Some of the challenges that, albeit not unique, are more specific to lipid metabolism refer to the high compartmentalization of the routes, the catalysis in hydrophobic environments and, related to this, the function near or in biological membranes. In this work, we provide guidelines intended to assist in the proper functional annotation of lipid metabolic enzymes, based on previous experiences related to the phospholipase D superfamily and the annotation of the triglyceride synthesis pathway in algae. We describe a pipeline that starts with the definition of an initial set of sequences to be used in similarity-based searches and ends in the reconstruction of phylogenies. We also mention the main issues that have to be taken into consideration when using tools to analyze subcellular localization, hydrophobicity patterns, or presence of transmembrane domains in lipid metabolic enzymes.

  14. On the definition of absorbed dose

    NASA Astrophysics Data System (ADS)

    Grusell, Erik

    2015-02-01

    Purpose: The quantity absorbed dose is used extensively in all areas concerning the interaction of ionizing radiation with biological organisms, as well as with matter in general. The most recent and authoritative definition of absorbed dose is given by the International Commission on Radiation Units and Measurements (ICRU) in ICRU Report 85. However, that definition is incomplete. The purpose of the present work is to give a rigorous definition of absorbed dose. Methods: Absorbed dose is defined in terms of the random variable specific energy imparted. A random variable is a mathematical function, and it cannot be defined without specifying its domain of definition which is a probability space. This is not done in report 85 by the ICRU, mentioned above. Results: In the present work a definition of a suitable probability space is given, so that a rigorous definition of absorbed dose is possible. This necessarily includes the specification of the experiment which the probability space describes. In this case this is an irradiation, which is specified by the initial particles released and by the material objects which can interact with the radiation. Some consequences are discussed. Specific energy imparted is defined for a volume, and the definition of absorbed dose as a point function involves the specific energy imparted for a small mass contained in a volume surrounding the point. A possible more precise definition of this volume is suggested and discussed. Conclusions: The importance of absorbed dose motivates a proper definition, and one is given in the present work. No rigorous definition has been presented before.
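
For reference, the ICRU-style quantities the paper starts from can be written as follows. This is standard notation only; the paper's contribution, a rigorous probability space on which the random variable is defined, is not reproduced here.

```latex
% Specific energy imparted: energy \varepsilon imparted to matter of mass m
z = \frac{\varepsilon}{m}
% Absorbed dose at a point: the expectation of z in the limit of small mass,
% equivalently the derivative of the mean energy imparted with respect to mass
D = \lim_{m \to 0} \mathbb{E}[z] = \frac{d\bar{\varepsilon}}{dm}
```

The paper's objection is that $z$ is a random variable, and a random variable is a function on a probability space; until that space (i.e., the irradiation experiment it describes) is specified, the expectation above is not fully defined.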

  15. HRI observations of the Pleiades

    NASA Technical Reports Server (NTRS)

    Harnden, F. R., Jr.; Caillault, J.-P.; Damiani, F.; Kashyap, V.; Micela, G.; Prosser, C.; Rosner, R.; Sciortino, S.; Stauffer, J.

    1996-01-01

    The preliminary analysis of the data from the first four Rosat high resolution imager (HRI) pointings provided many new faint Pleiades detections. The completion of the high resolution survey of the most source-confused regions of this open cluster will lead to the construction of proper X-ray luminosity functions and will yield a definitive assessment of the coronal emission of the Pleiades members.

  16. Earth Observation System Flight Dynamics System Covariance Realism

    NASA Technical Reports Server (NTRS)

    Zaidi, Waqar H.; Tracewell, David

    2016-01-01

    This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
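
The test-statistic step admits a simple sketch. A common covariance realism statistic is a Mahalanobis-type distance between the definitive and propagated states; for a realistic covariance its square is chi-square distributed with n degrees of freedom, so its mean over many propagation points should be near n. The errors and variances below are invented for illustration, and a diagonal covariance is assumed to keep the sketch short.

```python
# Illustrative sketch of a covariance realism test statistic
# (hypothetical numbers; diagonal covariance assumed for simplicity).

def mahalanobis_sq(error, variances):
    """Squared Mahalanobis distance for a diagonal covariance."""
    return sum(e * e / v for e, v in zip(error, variances))

# Hypothetical 3D position errors (km) at three propagation points, each with
# a propagated variance of 1 km^2 per axis.
samples = [
    ([0.8, -1.1, 0.4], [1.0, 1.0, 1.0]),
    ([-0.5, 0.9, -1.3], [1.0, 1.0, 1.0]),
    ([1.2, 0.3, -0.7], [1.0, 1.0, 1.0]),
]
stats = [mahalanobis_sq(e, v) for e, v in samples]
mean_stat = sum(stats) / len(stats)
print(f"mean statistic {mean_stat:.2f} vs expected 3 for a realistic covariance")
```

A mean statistic well above n suggests the covariance is too optimistic (errors larger than advertised); well below n, too conservative.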

  17. Economic Growth and Poverty Reduction: Measurement and Policy Issues. OECD Development Centre Working Paper No. 246

    ERIC Educational Resources Information Center

    Klasen, Stephan

    2005-01-01

    The aim of this Working Paper is to broaden the debate on "pro-poor growth". An exclusive focus on the income dimension of poverty has neglected the non-income dimensions. After an examination of prominent views on the linkages between economic growth, inequality, and poverty reduction this paper discusses the proper definition and…

  18. Understanding Language Use in the Classroom: A Linguistic Guide for College Educators

    ERIC Educational Resources Information Center

    Behrens, Susan J.

    2014-01-01

    It is clear that a proper understanding of what academic English is and how to use it is crucial for success in college, and yet students face multiple obstacles in acquiring this new "code", not least that their professors often cannot agree amongst themselves on a definition and a set of rules. "Understanding Language Use in the…

  19. 26 CFR 1.1411-4 - Definition of net investment income.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... business described in § 1.1411-5; and (iii) Net gain (to the extent taken into account in computing taxable... described in paragraph (d)(4)(i)(A) of this section for gain or loss attributable to property held in a... properly allocable to such gross income or net gain (as determined in paragraph (f) of this section). (b...

  20. Semi-automated ontology generation within OBO-Edit.

    PubMed

    Wächter, Thomas; Schroeder, Michael

    2010-06-15

    Ontologies and taxonomies have proven highly beneficial for biocuration. The Open Biomedical Ontology (OBO) Foundry alone lists over 90 ontologies mainly built with OBO-Edit. Creating and maintaining such ontologies is a labour-intensive, difficult, manual process. Automating parts of it is of great importance for the further development of ontologies and for biocuration. We have developed the Dresden Ontology Generator for Directed Acyclic Graphs (DOG4DAG), a system which supports the creation and extension of OBO ontologies by semi-automatically generating terms, definitions and parent-child relations from text in PubMed, the web and PDF repositories. DOG4DAG is seamlessly integrated into OBO-Edit. It generates terms by identifying statistically significant noun phrases in text. For definitions and parent-child relations it employs pattern-based web searches. We systematically evaluate each generation step using manually validated benchmarks. The term generation leads to high-quality terms also found in manually created ontologies. Up to 78% of definitions are valid and up to 54% of child-ancestor relations can be retrieved. There is no other validated system that achieves comparable results. By combining the prediction of high-quality terms, definitions and parent-child relations with the ontology editor OBO-Edit we contribute a thoroughly validated tool for all OBO ontology engineers. DOG4DAG is available within OBO-Edit 2.1 at http://www.oboedit.org. Supplementary data are available at Bioinformatics online.
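
The pattern-based idea behind the definition and parent-child generation can be illustrated in miniature. The regex, sentences, and extracted pairs below are toy examples of the general "X is a Y" pattern family; DOG4DAG itself uses web searches and a much richer pattern set.

```python
import re

# Toy sketch of pattern-based relation extraction: match "A/An X is a/an Y"
# phrases, treating Y as a candidate parent (ancestor) term for X.
PATTERN = re.compile(r"\b(?:An?|The) (?P<term>[a-z ]+?) is an? (?P<parent>[a-z][a-z ]+)")

def candidate_relations(text):
    return [(m.group("term").strip(), m.group("parent").strip())
            for m in PATTERN.finditer(text)]

text = ("A mitochondrion is an organelle. "
        "An ontology is a formal representation of knowledge.")
print(candidate_relations(text))
```

Candidate pairs extracted this way still need validation, which is why the paper reports precision figures (78% of definitions valid, 54% of child-ancestor relations retrieved) against manually curated benchmarks.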

  1. A Note on the Problem of Proper Time in Weyl Space-Time

    NASA Astrophysics Data System (ADS)

    Avalos, R.; Dahia, F.; Romero, C.

    2018-02-01

We discuss the question of whether or not a general Weyl structure is a suitable mathematical model of space-time. This is an issue that has been in debate since Weyl formulated his unified field theory for the first time. We do not present the discussion from the point of view of a particular unification theory, but instead from a more general standpoint, in which the viability of such a structure as a model of space-time is investigated. Our starting point is the well-known axiomatic approach to space-time given by Ehlers, Pirani and Schild (EPS). In this framework, we carry out an exhaustive analysis of what is required for a consistent definition of proper time and show that such a definition leads to the prediction of the so-called "second clock effect". We take the view that if, based on experience, we were to reject space-time models predicting this effect, this could be incorporated as the last axiom in the EPS approach. Finally, we provide a proof that, in this case, we are led to a Weyl integrable space-time as the most general structure that would be suitable to model space-time.

  2. Proper-motion Study of the Magellanic Clouds Using SPM Material

    NASA Astrophysics Data System (ADS)

    Vieira, Katherine; Girard, Terrence M.; van Altena, William F.; Zacharias, Norbert; Casetti-Dinescu, Dana I.; Korchagin, Vladimir I.; Platais, Imants; Monet, David G.; López, Carlos E.; Herrera, David; Castillo, Danilo J.

    2010-12-01

Absolute proper motions are determined for stars and galaxies to V = 17.5 over a 450 deg² area that encloses both Magellanic Clouds. The proper motions are based on photographic and CCD observations of the Yale/San Juan Southern Proper Motion program, which span a baseline of 40 years. Multiple, local relative proper-motion measures are combined in an overlap solution using photometrically selected Galactic disk stars to define a global relative system that is then transformed to absolute using external galaxies and Hipparcos stars to tie into the ICRS. The resulting catalog of 1.4 million objects is used to derive the mean absolute proper motions of the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC); (μα cos δ, μδ)LMC = (1.89, +0.39) ± (0.27, 0.27) mas yr-1 and (μα cos δ, μδ)SMC = (0.98, -1.01) ± (0.30, 0.29) mas yr-1. These mean motions are based on best-measured samples of 3822 LMC stars and 964 SMC stars. A dominant portion (0.25 mas yr-1) of the formal errors is due to the estimated uncertainty in the inertial system of the Hipparcos Catalog stars used to anchor the bright end of our proper motion measures. A more precise determination can be made for the proper motion of the SMC relative to the LMC; (μα cos δ, μδ)SMC-LMC = (-0.91, -1.49) ± (0.16, 0.15) mas yr-1. This differential value is combined with measurements of the proper motion of the LMC taken from the literature to produce new absolute proper-motion determinations for the SMC, as well as an estimate of the total velocity difference of the two clouds to within ±54 km s-1. The absolute proper-motion results are consistent with the Clouds' orbits being marginally bound to the Milky Way, albeit on an elongated orbit. The inferred relative velocity between the Clouds places them near their binding energy limit and, thus, no definitive conclusion can be made as to whether or not the Clouds are bound to one another.
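The quoted SMC-LMC differential motion is notably more precise than naive subtraction of the two absolute measurements would allow, because systematics shared by both Clouds (such as the ~0.25 mas/yr Hipparcos link term) cancel in a direct per-star differential measurement. A small illustrative sketch, using only the right-ascension numbers quoted above:

```python
import math

# Absolute proper motions in RA (mas/yr) and 1-sigma errors from the abstract
pm_lmc, err_lmc = 1.89, 0.27
pm_smc, err_smc = 0.98, 0.30

# Naive differential: subtract the absolutes and add errors in quadrature
diff = pm_smc - pm_lmc              # -0.91 mas/yr, matching the quoted value
err = math.hypot(err_smc, err_lmc)  # ~0.40 mas/yr

print(f"naive SMC-LMC pm_ra = {diff:.2f} +/- {err:.2f} mas/yr")
# The paper's direct differential determination quotes only +/- 0.16 mas/yr:
# the shared Hipparcos-link systematic cancels and never enters the error.
```

The factor-of-two-plus improvement over quadrature is exactly what measuring both Clouds on the same plate material buys.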

  3. Geostatistical simulations for radon indoor with a nested model including the housing factor.

    PubMed

    Cafaro, C; Giovani, C; Garavaglia, M

    2016-01-01

The definition of radon-prone areas is the subject of much research in radioecology, since radon is considered a leading cause of lung tumours and the authorities therefore need support in developing an appropriate sanitary prevention strategy. In this paper, we use geostatistical tools to elaborate a definition that accounts for some of the available information about the dwellings. Co-kriging is the interpolator properly used in geostatistics to refine predictions by means of external covariates. A priori, co-kriging is not guaranteed to improve significantly on the results obtained with common lognormal kriging. Here, however, the multivariate approach reduces the cross-validation residual variance to an extent deemed satisfactory. Furthermore, with the application of Monte Carlo simulations, the paradigm provides a more conservative radon-prone areas definition than the one previously obtained by lognormal kriging. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Resection of complex pancreatic injuries: Benchmarking postoperative complications using the Accordion classification

    PubMed Central

    Krige, Jake E; Jonas, Eduard; Thomson, Sandie R; Kotze, Urda K; Setshedi, Mashiko; Navsaria, Pradeep H; Nicol, Andrew J

    2017-01-01

    AIM To benchmark severity of complications using the Accordion Severity Grading System (ASGS) in patients undergoing operation for severe pancreatic injuries. METHODS A prospective institutional database of 461 patients with pancreatic injuries treated from 1990 to 2015 was reviewed. One hundred and thirty patients with AAST grade 3, 4 or 5 pancreatic injuries underwent resection (pancreatoduodenectomy, n = 20, distal pancreatectomy, n = 110), including 30 who had an initial damage control laparotomy (DCL) and later definitive surgery. AAST injury grades, type of pancreatic resection, need for DCL and incidence and ASGS severity of complications were assessed. Uni- and multivariate logistic regression analysis was applied. RESULTS Overall 238 complications occurred in 95 (73%) patients of which 73% were ASGS grades 3-6. Nineteen patients (14.6%) died. Patients more likely to have complications after pancreatic resection were older, had a revised trauma score (RTS) < 7.8, were shocked on admission, had grade 5 injuries of the head and neck of the pancreas with associated vascular and duodenal injuries, required a DCL, received a larger blood transfusion, had a pancreatoduodenectomy (PD) and repeat laparotomies. Applying univariate logistic regression analysis, mechanism of injury, RTS < 7.8, shock on admission, DCL, increasing AAST grade and type of pancreatic resection were significant variables for complications. Multivariate logistic regression analysis however showed that only age and type of pancreatic resection (PD) were significant. CONCLUSION This ASGS-based study benchmarked postoperative morbidity after pancreatic resection for trauma. The detailed outcome analysis provided may serve as a reference for future institutional comparisons. PMID:28396721

  5. Can Propensity Score Analysis Approximate Randomized Experiments Using Pretest and Demographic Information in Pre-K Intervention Research?

    PubMed

    Dong, Nianbo; Lipsey, Mark W

    2017-01-01

It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. Results: The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, but those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
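The PSA workflow the study evaluates can be sketched end-to-end on synthetic data: fit a propensity model from a pretest and a demographic covariate, match untreated children to treated children on the estimated score, and compare matched outcomes. Everything below (the data-generating coefficients, the logistic model, the matching rule) is hypothetical and only illustrates the technique, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-K data: a pretest score and one binary demographic covariate
n = 2000
pretest = rng.normal(0, 1, n)
demo = rng.binomial(1, 0.5, n)
# Treatment assignment depends on the covariates (selection bias)
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * pretest - 0.5 * demo))))
# Outcome with a true treatment effect of 0.5
outcome = 0.5 * treat + 1.0 * pretest + 0.3 * demo + rng.normal(0, 1, n)

# 1) Fit a logistic propensity model by gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), pretest, demo])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (treat - p) / n
scores = 1 / (1 + np.exp(-X @ w))

# 2) Nearest-neighbor matching on the propensity score (with replacement)
treated = np.flatnonzero(treat == 1)
control = np.flatnonzero(treat == 0)
matches = control[np.abs(scores[treated][:, None]
                         - scores[control][None, :]).argmin(axis=1)]

# 3) Average treatment effect on the treated from the matched pairs
att = (outcome[treated] - outcome[matches]).mean()
print(f"estimated ATT = {att:.2f} (true effect = 0.50)")
```

With selection on observables only, the matched estimate recovers the true effect to within sampling noise; the within-study comparisons in the paper ask whether this still holds for real pre-K data.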

  6. Critical appraisal of the Vienna consensus: performance indicators for assisted reproductive technology laboratories.

    PubMed

    Lopez-Regalado, María Luisa; Martínez-Granados, Luis; González-Utor, Antonio; Ortiz, Nereyda; Iglesias, Miriam; Ardoy, Manuel; Castilla, Jose A

    2018-05-24

The Vienna consensus, based on the recommendations of an expert panel, has identified 19 performance indicators for assisted reproductive technology (ART) laboratories. Two levels of reference values are established for these performance indicators: competence and benchmark. For over 10 years, the Spanish embryology association (ASEBIR) has participated in the definition and design of ART performance indicators, seeking to establish specific guidelines for ART laboratories to enhance quality, safety and patient welfare. Four years ago, ASEBIR took part in an initiative by AENOR, the Spanish Association for Standardization and Certification, to develop a national standard in this field (UNE 179007:2013, System of quality management for assisted reproduction laboratories), extending the former requirements, based on ISO 9001, to include performance indicators. Considering the experience acquired, we discuss various aspects of the Vienna consensus and consider certain discrepancies in performance indicators between the consensus and UNE 179007:2013, and analyse the definitions, methodology and reference values used. Copyright © 2018. Published by Elsevier Ltd.

  7. Fuzzy scalar and vector median filters based on fuzzy distances.

    PubMed

    Chatzis, V; Pitas, I

    1999-01-01

In this paper, the fuzzy scalar median (FSM) is proposed, defined by ordering fuzzy numbers with fuzzy minimum and maximum operations derived from the extension principle. Alternatively, the FSM is defined through the minimization of a fuzzy distance measure, and the equivalence of the two definitions is proven. Then, the fuzzy vector median (FVM) is proposed as an extension of the vector median, based on a novel distance definition for fuzzy vectors that satisfies the property of angle decomposition. By properly defining the fuzziness of a value, the basic properties of the classical scalar and vector median (VM) filters can be combined with other desirable characteristics.
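The distance-based definition in the abstract has a familiar crisp counterpart: the ordinary scalar median of a sample is exactly the sample value minimizing the summed absolute distance to all samples, and the vector median generalizes this with a vector norm. A minimal sketch of the crisp (non-fuzzy) definitions:

```python
def scalar_median(xs):
    """The sample value minimizing the summed absolute distance to all
    samples -- the distance-based definition of the scalar median."""
    return min(xs, key=lambda m: sum(abs(x - m) for x in xs))

def vector_median(vs):
    """Vector median: the sample vector minimizing the summed Euclidean
    distance to all sample vectors."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(vs, key=lambda m: sum(dist(v, m) for v in vs))

print(scalar_median([3, 1, 4, 1, 5]))                   # 3
print(vector_median([(0, 0), (1, 0), (0, 1), (5, 5)]))  # ties broken by order
```

The fuzzy versions in the paper replace the crisp absolute and Euclidean distances with fuzzy distance measures; the minimization structure stays the same.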

  8. Exploiting Surroundedness for Saliency Detection: A Boolean Map Approach.

    PubMed

    Zhang, Jianming; Sclaroff, Stan

    2016-05-01

We demonstrate the usefulness of surroundedness for eye fixation prediction by proposing a Boolean Map based Saliency model (BMS). In our formulation, an image is characterized by a set of binary images, which are generated by randomly thresholding the image's feature maps in a whitened feature space. Based on a Gestalt principle of figure-ground segregation, BMS computes a saliency map by discovering surrounded regions via topological analysis of Boolean maps. Furthermore, we draw a connection between BMS and the Minimum Barrier Distance to provide insight into why and how BMS can properly capture the surroundedness cue via Boolean maps. The strength of BMS is verified by its simplicity, efficiency and superior performance compared with 10 state-of-the-art methods on seven eye tracking benchmark datasets.
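The surroundedness computation can be illustrated with a toy version of the pipeline: threshold the image at several levels (and both polarities), flood-fill each Boolean map from the image border, and let every cell that is set but unreachable from the border vote as "surrounded". This sketch uses a single grayscale channel, whereas the published BMS thresholds whitened color feature maps:

```python
from collections import deque

def boolean_map_saliency(img, thresholds):
    """Toy surroundedness sketch: for each threshold and polarity, regions
    of the Boolean map not reachable from the border vote into saliency."""
    h, w = len(img), len(img[0])
    saliency = [[0] * w for _ in range(h)]
    for t in thresholds:
        for polarity in (True, False):  # the Boolean map and its complement
            bmap = [[(img[y][x] > t) == polarity for x in range(w)]
                    for y in range(h)]
            # BFS flood fill of all border-connected set cells
            outside = [[False] * w for _ in range(h)]
            queue = deque((y, x) for y in range(h) for x in range(w)
                          if bmap[y][x] and (y in (0, h - 1) or x in (0, w - 1)))
            for y, x in queue:
                outside[y][x] = True
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and bmap[ny][nx] \
                            and not outside[ny][nx]:
                        outside[ny][nx] = True
                        queue.append((ny, nx))
            # Surrounded = set in the Boolean map but not border-connected
            for y in range(h):
                for x in range(w):
                    if bmap[y][x] and not outside[y][x]:
                        saliency[y][x] += 1
    return saliency

# A bright blob inside a dark frame is surrounded at every threshold
img = [[0, 0, 0, 0, 0],
       [0, 9, 9, 9, 0],
       [0, 9, 9, 9, 0],
       [0, 0, 0, 0, 0]]
sal = boolean_map_saliency(img, thresholds=[1, 5, 8])
```

Here every blob pixel accumulates one vote per threshold while the frame, being border-connected in both polarities, never scores.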

  9. Hybrid fiber-rod laser

    DOEpatents

    Beach, Raymond J.; Dawson, Jay W.; Messerly, Michael J.; Barty, Christopher P. J.

    2012-12-18

    Single, or near single transverse mode waveguide definition is produced using a single homogeneous medium to transport both the pump excitation light and generated laser light. By properly configuring the pump deposition and resulting thermal power generation in the waveguide device, a thermal focusing power is established that supports perturbation-stable guided wave propagation of an appropriately configured single or near single transverse mode laser beam and/or laser pulse.

  10. Laboratory Experimental Design for a Glycomic Study.

    PubMed

    Ugrina, Ivo; Campbell, Harry; Vučković, Frano

    2017-01-01

Proper attention to study design before an experiment, careful conduct of procedures during it, and appropriate inference from the results afterwards are important in all scientific studies to ensure that valid, and sometimes definitive, conclusions can be drawn. The design of experiments, also called experimental design, addresses the challenge of structuring and conducting experiments to answer the questions of interest as clearly and efficiently as possible.

  11. Typical event horizons in AdS/CFT

    NASA Astrophysics Data System (ADS)

    Avery, Steven G.; Lowe, David A.

    2016-01-01

    We consider the construction of local bulk operators in a black hole background dual to a pure state in conformal field theory. The properties of these operators in a microcanonical ensemble are studied. It has been argued in the literature that typical states in such an ensemble contain firewalls, or otherwise singular horizons. We argue this conclusion can be avoided with a proper definition of the interior operators.

  12. Performability modeling with continuous accomplishment sets

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1979-01-01

    A general modeling framework that permits the definition, formulation, and evaluation of performability is described. It is shown that performability relates directly to system effectiveness, and is a proper generalization of both performance and reliability. A hierarchical modeling scheme is used to formulate the capability function used to evaluate performability. The case in which performance variables take values in a continuous accomplishment set is treated explicitly.

  13. 29 CFR Appendix A to Subpart B of... - Compliance Assistance Guidelines for Confined and Enclosed Spaces and Other Dangerous Atmospheres

    Code of Federal Regulations, 2014 CFR

    2014-07-01

... explosive limit (LEL) are used interchangeably in fire science literature. Section 1915.11(b) Definition of... interchangeably in fire science literature. Section 1915.12(a)(3). After a tank has been properly washed and... oxygen content of 19.5 percent can support life and is adequate for entry. However, any oxygen level...

  14. 29 CFR Appendix A to Subpart B of... - Compliance Assistance Guidelines for Confined and Enclosed Spaces and Other Dangerous Atmospheres

    Code of Federal Regulations, 2013 CFR

    2013-07-01

... explosive limit (LEL) are used interchangeably in fire science literature. Section 1915.11(b) Definition of... interchangeably in fire science literature. Section 1915.12(a)(3). After a tank has been properly washed and... oxygen content of 19.5 percent can support life and is adequate for entry. However, any oxygen level...

  15. 29 CFR Appendix A to Subpart B of... - Compliance Assistance Guidelines for Confined and Enclosed Spaces and Other Dangerous Atmospheres

    Code of Federal Regulations, 2012 CFR

    2012-07-01

... explosive limit (LEL) are used interchangeably in fire science literature. Section 1915.11(b) Definition of... interchangeably in fire science literature. Section 1915.12(a)(3). After a tank has been properly washed and... oxygen content of 19.5 percent can support life and is adequate for entry. However, any oxygen level...

  16. US - European Workshop on Thermal Waste Treatment for Naval Vessels

    DTIC Science & Technology

    1997-01-01

REGULATION 2 Definitions REGULATION 3.: GENERAL EXCEPTIONS REGULATION 4.: EQUIVALENTS REGULATION 5.: SURVEYS AND INSPECTIONS REGULATION 6.: ISSUE OF...item Shampoo /Lotion PAPER OR REFILLABLE DISPENSERS Plates. Plastic coated PAPER PLATES Coffee cups. styrofoam PAPER CUPS Plastic clear portion cups...shipboard operation and has demonstrated that it can efficiently destroy liquid wastes when properly maintained and operated. A survey of marine and

  17. Nutraceuticals: opening the debate for a regulatory framework.

    PubMed

    Santini, Antonello; Cammarata, Silvia Miriam; Capone, Giacomo; Ianaro, Angela; Tenore, Gian Carlo; Pani, Luca; Novellino, Ettore

    2018-04-01

    Currently, nutraceuticals do not have a specific definition distinct from those of other food-derived categories, such as food supplements, herbal products, pre- and probiotics, functional foods, and fortified foods. Many studies have led to an understanding of the potential mechanisms of action of pharmaceutically active components contained in food that may improve health and reduce the risk of pathological conditions while enhancing overall well-being. Nevertheless, there is a lack of clear information and, often, the claimed health benefits may not be properly substantiated by safety and efficacy information or in vitro and in vivo data, which can induce false expectations and miss the target for a product to be effective, as claimed. An officially shared and accepted definition of nutraceuticals is still missing, as nutraceuticals are mostly referred to as pharma-foods, a powerful toolbox to be used beyond the diet but before the drugs to prevent and treat pathological conditions, such as in subjects who may not yet be eligible for conventional pharmaceutical therapy. Hence, it is of utmost importance to have a proper and unequivocal definition of nutraceuticals and shared regulations. It also seems wise to assess the safety, mechanism of action and efficacy of nutraceuticals with clinical data. A growing demand exists for nutraceuticals, which seem to reside in the grey area between pharmaceuticals and food. Nonetheless, given specific legislation from different countries, nutraceuticals are experiencing challenges with safety and health claim substantiation. © 2018 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of British Pharmacological Society.

  18. Tracking the emergence of synthetic biology.

    PubMed

    Shapira, Philip; Kwon, Seokbeom; Youtie, Jan

    2017-01-01

    Synthetic biology is an emerging domain that combines biological and engineering concepts and which has seen rapid growth in research, innovation, and policy interest in recent years. This paper contributes to efforts to delineate this emerging domain by presenting a newly constructed bibliometric definition of synthetic biology. Our approach is dimensioned from a core set of papers in synthetic biology, using procedures to obtain benchmark synthetic biology publication records, extract keywords from these benchmark records, and refine the keywords, supplemented with articles published in dedicated synthetic biology journals. We compare our search strategy with other recent bibliometric approaches to define synthetic biology, using a common source of publication data for the period from 2000 to 2015. The paper details the rapid growth and international spread of research in synthetic biology in recent years, demonstrates that diverse research disciplines are contributing to the multidisciplinary development of synthetic biology research, and visualizes this by profiling synthetic biology research on the map of science. We further show the roles of a relatively concentrated set of research sponsors in funding the growth and trajectories of synthetic biology. In addition to discussing these analyses, the paper notes limitations and suggests lines for further work.

  19. Definition of supportive care: does the semantic matter?

    PubMed

    Hui, David

    2014-07-01

    'Supportive care' is a commonly used term in oncology; however, no consensus definition exists. This represents a barrier to communication in both the clinical and research settings. In this review, we propose a unifying conceptual framework for supportive care and discuss the proper use of this term in the clinical and research settings. A recent systematic review revealed several themes for supportive care: a focus on symptom management and improvement of quality of life, and care for patients on treatments and those with advanced stage disease. These findings are consistent with a broad definition for supportive care: 'the provision of the necessary services for those living with or affected by cancer to meet their informational, emotional, spiritual, social, or physical needs during their diagnostic, treatment, or follow-up phases encompassing issues of health promotion and prevention, survivorship, palliation, and bereavement.' Supportive care can be classified as primary, secondary, and tertiary based on the level of specialization. For example, palliative care teams provide secondary supportive care for patients with advanced cancer. Until a consensus definition is available for supportive care, this term should be clearly defined or cited whenever it is used.

  20. Benchmarking atomic physics models for magnetically confined fusion plasma physics experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M.J.; Finkenthal, M.; Soukhanovskii, V.

In present magnetically confined fusion devices, high and intermediate Z impurities are either puffed into the plasma for divertor radiative cooling experiments or are sputtered from the high Z plasma facing armor. The beneficial cooling of the edge as well as the detrimental radiative losses from the core of these impurities can be properly understood only if the atomic physics used in the modeling of the cooling curves is very accurate. To this end, a comprehensive experimental and theoretical analysis of some relevant impurities is undertaken. Gases (Ne, Ar, Kr, and Xe) are puffed and nongases are introduced through laser ablation into the FTU tokamak plasma. The charge state distributions and total density of these impurities are determined from spatial scans of several photometrically calibrated vacuum ultraviolet and x-ray spectrographs (3-1600 Å), the multiple ionization state transport code (MIST) and a collisional radiative model. The radiative power losses are measured with bolometry, and the emissivity profiles are measured by a visible bremsstrahlung array. The ionization balance, excitation physics, and the radiative cooling curves are computed from the Hebrew University Lawrence Livermore atomic code (HULLAC) and are benchmarked by these experiments. (Supported by U.S. DOE Grant No. DE-FG02-86ER53214 at JHU and Contract No. W-7405-ENG-48 at LLNL.) © 1999 American Institute of Physics.

  1. Visco-Resistive MHD Modeling Benchmark of Forced Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Beidler, M. T.; Hegna, C. C.; Sovinec, C. R.; Callen, J. D.; Ferraro, N. M.

    2016-10-01

The presence of externally-applied 3D magnetic fields can affect important phenomena in tokamaks, including mode locking, disruptions, and edge localized modes. External fields penetrate into the plasma and can lead to forced magnetic reconnection (FMR), and hence magnetic islands, on resonant surfaces if the local plasma rotation relative to the external field is slow. Preliminary visco-resistive MHD simulations of FMR in a slab geometry are consistent with theory. Specifically, linear simulations exhibit proper scaling of the penetrated field with resistivity, viscosity, and flow, and nonlinear simulations exhibit a bifurcation from a flow-screened to a field-penetrated, magnetic island state as the external field is increased, due to the 3D electromagnetic force. These results will be compared to simulations of FMR in a circular cross-section, cylindrical geometry by way of a benchmark between the NIMROD and M3D-C1 extended-MHD codes. Because neither this geometry nor the MHD model has the physics of poloidal flow damping, the existing theory will be expanded to include poloidal flow effects. The resulting theory will be tested with linear and nonlinear simulations that vary the resistivity, viscosity, flow, and external field. Supported by OFES DoE Grants DE-FG02-92ER54139, DE-FG02-86ER53218, DE-AC02-09CH11466, and the SciDAC Center for Extended MHD Modeling.

  2. Transparency of Outcome Reporting and Trial Registration of Randomized Controlled Trials Published in the Journal of Consulting and Clinical Psychology

    PubMed Central

    Azar, Marleine; Riehm, Kira E.; McKay, Dean; Thombs, Brett D.

    2015-01-01

Background: Confidence that randomized controlled trial (RCT) results accurately reflect intervention effectiveness depends on proper trial conduct and the accuracy and completeness of published trial reports. The Journal of Consulting and Clinical Psychology (JCCP) is the primary trials journal amongst American Psychological Association (APA) journals. The objectives of this study were to review RCTs recently published in JCCP to evaluate (1) adequacy of primary outcome analysis definitions; (2) registration status; and, (3) among registered trials, adequacy of outcome registrations. Additionally, we compared results from JCCP to findings from a recent study of top psychosomatic and behavioral medicine journals. Methods: Eligible RCTs were published in JCCP in 2013–2014. For each RCT, two investigators independently extracted data on (1) adequacy of outcome analysis definitions in the published report, (2) whether the RCT was registered prior to enrolling patients, and (3) adequacy of outcome registration. Results: Of 70 RCTs reviewed, 12 (17.1%) adequately defined primary or secondary outcome analyses, whereas 58 (82.9%) had multiple primary outcome analyses without statistical adjustment or undefined outcome analyses. There were 39 (55.7%) registered trials. Only two trials registered prior to patient enrollment with a single primary outcome variable and time point of assessment. However, in one of the two trials, registered and published outcomes were discrepant. No studies were adequately registered as per Standard Protocol Items: Recommendation for Interventional Trials guidelines. Compared to psychosomatic and behavioral medicine journals, the proportion of published trials with adequate outcome analysis declarations was significantly lower in JCCP (17.1% versus 32.9%; p = 0.029). The proportion of registered trials in JCCP (55.7%) was comparable to behavioral medicine journals (52.6%; p = 0.709).
Conclusions: The quality of published outcome analysis definitions and trial registrations in JCCP is suboptimal. Greater attention to proper trial registration and outcome analysis definition in published reports is needed. PMID:26581079

  3. Transparency of Outcome Reporting and Trial Registration of Randomized Controlled Trials Published in the Journal of Consulting and Clinical Psychology.

    PubMed

    Azar, Marleine; Riehm, Kira E; McKay, Dean; Thombs, Brett D

    2015-01-01

Confidence that randomized controlled trial (RCT) results accurately reflect intervention effectiveness depends on proper trial conduct and the accuracy and completeness of published trial reports. The Journal of Consulting and Clinical Psychology (JCCP) is the primary trials journal amongst American Psychological Association (APA) journals. The objectives of this study were to review RCTs recently published in JCCP to evaluate (1) adequacy of primary outcome analysis definitions; (2) registration status; and, (3) among registered trials, adequacy of outcome registrations. Additionally, we compared results from JCCP to findings from a recent study of top psychosomatic and behavioral medicine journals. Eligible RCTs were published in JCCP in 2013-2014. For each RCT, two investigators independently extracted data on (1) adequacy of outcome analysis definitions in the published report, (2) whether the RCT was registered prior to enrolling patients, and (3) adequacy of outcome registration. Of 70 RCTs reviewed, 12 (17.1%) adequately defined primary or secondary outcome analyses, whereas 58 (82.9%) had multiple primary outcome analyses without statistical adjustment or undefined outcome analyses. There were 39 (55.7%) registered trials. Only two trials registered prior to patient enrollment with a single primary outcome variable and time point of assessment. However, in one of the two trials, registered and published outcomes were discrepant. No studies were adequately registered as per Standard Protocol Items: Recommendation for Interventional Trials guidelines. Compared to psychosomatic and behavioral medicine journals, the proportion of published trials with adequate outcome analysis declarations was significantly lower in JCCP (17.1% versus 32.9%; p = 0.029). The proportion of registered trials in JCCP (55.7%) was comparable to behavioral medicine journals (52.6%; p = 0.709).
The quality of published outcome analysis definitions and trial registrations in JCCP is suboptimal. Greater attention to proper trial registration and outcome analysis definition in published reports is needed.

  4. Overlooked wide companions of nearby F stars

    NASA Astrophysics Data System (ADS)

    Scholz, R.-D.

    2016-03-01

    Aims: We checked a sample of 545 F stars within 50 pc for wide companions using existing near-infrared and optical sky surveys. Methods: Applying the common proper motion (CPM) criterion, we detected wide companion candidates with 6-120 arcsec angular separations by visual inspection of multi-epoch finder charts and by searching in proper motion catalogues. Final proper motions were measured by involving positional measurements from up to eleven surveys. Spectral types of red CPM companions were estimated from their absolute J-band magnitudes based on the Hipparcos distances of the primaries. Results: In addition to about 100 known CPM objects, we found 19 new CPM companions and confirmed 31 previously known candidates. A few CPM objects are still considered as candidates according to their level of proper motion agreement. Among the new objects there are nine M0-M4, eight M5-M6, one ≈L3.5 dwarf (HD 3861B), and one white dwarf (WD) (HD 2726B), whereas we confirmed two K, 19 M0-M4, six M5-M6, two early-L dwarfs, and two DA WDs as CPM companions. In a few cases, previous spectral types were available that all agree well with our estimates. Two companions (HD 22879B and HD 49933B) are associated with moderately metal-poor Gaia benchmark stars. One doubtful CPM companion, spectroscopically classified as WD but found to be very bright (J = 11.1) by others, should either be a very nearby foreground WD or a different kind of object associated with HD 165670. Tables A.1, B.1, and C.1 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A51

  5. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  6. Enigmatic human tails: A review of their history, embryology, classification, and clinical manifestations.

    PubMed

    Tubbs, R Shane; Malefant, Jason; Loukas, Marios; Jerry Oakes, W; Oskouian, Rod J; Fries, Fabian N

    2016-05-01

    The presence of a human tail is a rare and intriguing phenomenon. While cases have been reported in the literature, confusion remains with respect to the proper classification, definition, and treatment methods. We review the literature concerning this anatomical derailment. We also consider the importance of excluding underlying congenital anomalies in these patients to prevent neurological deficits and other abnormal manifestations. © 2016 Wiley Periodicals, Inc.

  7. Typical event horizons in AdS/CFT

    DOE PAGES

    Avery, Steven G.; Lowe, David A.

    2016-01-14

    We consider the construction of local bulk operators in a black hole background dual to a pure state in conformal field theory. The properties of these operators in a microcanonical ensemble are studied. It has been argued in the literature that typical states in such an ensemble contain firewalls, or otherwise singular horizons. Here, we argue this conclusion can be avoided with a proper definition of the interior operators.

  8. Conceptual IT model

    NASA Astrophysics Data System (ADS)

    Arnaoudova, Kristina; Stanchev, Peter

    2015-11-01

    Business processes are the key asset of every organization, and the design of business process models is a foremost concern across an organization's functions. Business processes and their proper management depend heavily on the performance of software applications and technology solutions. This paper attempts to define a new conceptual model of an IT service provider; it can be viewed as an IT-focused enterprise model, part of the Enterprise Architecture (EA) school.

  9. Discrete mathematics, formal methods, the Z schema and the software life cycle

    NASA Technical Reports Server (NTRS)

    Bown, Rodney L.

    1991-01-01

    The proper role and scope for the use of discrete mathematics and formal methods in support of engineering the security and integrity of components within deployed computer systems are discussed. It is proposed that the Z schema can be used as the specification language to capture the precise definition of system and component interfaces. This can be accomplished with an object oriented development paradigm.

  10. Delayed presentation of anorectal malformation for definitive surgery.

    PubMed

    Sharma, Shilpa; Gupta, Devendra K

    2012-08-01

    To retrospectively study the outcome of patients with anorectal malformations (ARM) presenting late for the definitive procedure. Patients with ARM presenting beyond 5 months of age, managed from January 2008 to March 2012, were studied for clinical outcome. Ages at presentation varied from 5 months to 14 years; seven patients were older than 5 years of age. Of the 36 cases, 5 patients (3 boys and 2 girls) presented with a colostomy done elsewhere. Four patients had high anomalies. Of the 33 girls, 14 had a rectovestibular fistula and 9 had an anovestibular fistula. Bowel preparation with peglec was used in patients without colostomy. Preoperative retention enemas, laxatives and Hegar dilators were used for 3-11 days before surgery. On-table irrigation was required in four. Patients without a covering colostomy were kept nil per os for 5 days following surgery in the prone/lateral position. Two patients had mild postoperative wound infection and were managed with local care. Delayed presentation of ARM, especially in girls, is quite common in developing countries. With proper perioperative care, these cases may be managed successfully with a single-stage procedure in most cases. The mature tissue growth with age allows proper tissue dissection and good repair of the perineal body in girls.

  11. Baseline and extensions approach to information retrieval of complex medical data: Poznan's approach to the bioCADDIE 2016

    PubMed Central

    Cieslewicz, Artur; Dutkiewicz, Jakub; Jedrzejek, Czeslaw

    2018-01-01

    Information retrieval from biomedical repositories has become a challenging task because of their increasing size and complexity. To facilitate the research aimed at improving the search for relevant documents, various information retrieval challenges have been launched. In this article, we present the improved medical information retrieval systems designed by Poznan University of Technology and Poznan University of Medical Sciences as a contribution to the bioCADDIE 2016 challenge, a task focusing on information retrieval from a collection of 794 992 datasets generated from 20 biomedical repositories. The system developed by our team utilizes the Terrier 4.2 search platform enhanced by a query expansion method using word embeddings. This approach, after post-challenge modifications and improvements (with particular regard to assigning proper weights for original and expanded terms), allowed us to achieve the second-best infNDCG measure (0.4539) among the challenge results, with an infAP of 0.3978. This demonstrates that proper utilization of word embeddings can be a valuable addition to the information retrieval process. Some analysis is provided on related work involving other bioCADDIE contributions. We discuss the possibility of improving our results by using better word embedding schemes to find candidates for query expansion. Database URL: https://biocaddie.org/benchmark-data PMID:29688372
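
    The weighting idea described above, keeping original query terms at full weight while expanded terms found via embedding similarity get a reduced weight, can be sketched as follows. The toy embedding table, the 0.4 expansion weight, and the similarity cutoff are assumptions for illustration only, not the team's actual configuration.

```python
# Minimal sketch of embedding-based query expansion with down-weighted
# expanded terms. Embeddings and weights are illustrative assumptions.
import math

EMB = {  # tiny hypothetical word-embedding table
    "cancer":  [0.9, 0.1, 0.0],
    "tumor":   [0.85, 0.15, 0.05],
    "genome":  [0.1, 0.9, 0.2],
    "weather": [0.0, 0.1, 0.95],
}

def cos(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def expand_query(terms, k=1, orig_w=1.0, exp_w=0.4, min_sim=0.8):
    """Return {term: weight}; each original term may pull in up to k neighbors."""
    weights = {t: orig_w for t in terms}
    for t in terms:
        if t not in EMB:
            continue
        sims = sorted(((cos(EMB[t], v), w) for w, v in EMB.items() if w != t),
                      reverse=True)
        for s, w in sims[:k]:
            if s >= min_sim and w not in weights:
                weights[w] = exp_w
    return weights

print(expand_query(["cancer"]))  # {'cancer': 1.0, 'tumor': 0.4}
```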

  12. On the geometry of mixed states and the Fisher information tensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Contreras, I., E-mail: icontrer@illinois.edu; Ercolessi, E., E-mail: ercolessi@bo.infn.it; Schiavina, M., E-mail: michele.schiavina@math.uzh.ch

    2016-06-15

    In this paper, we will review the co-adjoint orbit formulation of finite dimensional quantum mechanics, and in this framework, we will interpret the notion of the quantum Fisher information index (and metric). Following previous work by some of the authors, who introduced the definition of the Fisher information tensor, we will show how its antisymmetric part is the pullback of the natural Kostant–Kirillov–Souriau symplectic form along some natural diffeomorphism. In order to do this, we will need to understand the symmetric logarithmic derivative as a proper 1-form, settling the issues about its very definition and explicit computation. Moreover, the fibration of co-adjoint orbits, seen as spaces of mixed states, is also discussed.

  13. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
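
    The notion of a relative error criterion tied to the sampling interval can be illustrated numerically for one common holding device, the zero-order hold: the maximum relative reconstruction error of a simple signal shrinks roughly in proportion to the sampling interval. The test signal x(t) = exp(-t) and the intervals below are illustrative choices, not the paper's system.

```python
# Illustrative check (not the paper's criterion): maximum relative error of a
# zero-order hold reconstructing x(t) = exp(-t) over [0, 1] for interval T.
import math

def zoh_max_rel_error(T, t_end=1.0, n_eval=1000):
    worst = 0.0
    for i in range(n_eval + 1):
        t = t_end * i / n_eval
        held = math.exp(-T * math.floor(t / T))  # last sample held constant
        worst = max(worst, abs(math.exp(-t) - held) / math.exp(-t))
    return worst

e1 = zoh_max_rel_error(0.1)
e2 = zoh_max_rel_error(0.05)
print(e1, e2)  # halving T roughly halves the worst-case relative error
```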

  14. Toward a common language for biobanking.

    PubMed

    Fransson, Martin N; Rial-Sebbag, Emmanuelle; Brochhausen, Mathias; Litton, Jan-Eric

    2015-01-01

    To encourage the process of harmonization, the biobank community should support and use a common terminology. Relevant terms may be found in general thesauri for medicine, legal instruments or specific glossaries for biobanking. A comparison of the use of these sources has so far not been conducted and would be a useful instrument to further promote harmonization and data sharing. Thus, the purpose of the present study was to investigate preferences among definitions important for sharing biological samples and data. Definitions for 10 terms ([human] biobank, sample/specimen, sample collection, study, aliquot, coded, identifying information, anonymised, personal data and informed consent) were collected from several sources. A web-based questionnaire was sent to 560 European individuals working with biobanks, asking them to select their preferred definition for each term. A total of 123 people participated in the survey, giving a response rate of 23%. The results were evaluated from four aspects: scope of definitions, potential regional differences, differences in semantics, and definitions in the context of ontologies, guided by comments from responders. The survey indicates a risk of definitions focusing only on the research aspect of biobanking. Hence, it is recommended that important terms be formulated in such a way that all areas of biobanking are covered, to improve the bridges between research and clinical application. Since several of the terms investigated herein can also be found in a legal context, which may differ between countries, establishing how a proper definition adheres to the law is also crucial.

  15. Recommendations for the definition of clinical responder in insulin preservation studies.

    PubMed

    Beam, Craig A; Gitelman, Stephen E; Palmer, Jerry P

    2014-09-01

    Clinical responder studies should contribute to the translation of effective treatments and interventions to the clinic. Since ultimately this translation will involve regulatory approval, we recommend that clinical trials prespecify a responder definition that can be assessed against the requirements and suggestions of regulatory agencies. In this article, we propose a clinical responder definition to specifically assist researchers and regulatory agencies in interpreting the clinical importance of statistically significant findings for studies of interventions intended to preserve β-cell function in newly diagnosed type 1 diabetes. We focus on studies of 6-month β-cell preservation in type 1 diabetes as measured by 2-h-stimulated C-peptide. We introduce criteria (bias, reliability, and external validity) for the assessment of responder definitions to ensure they meet U.S. Food and Drug Administration and European Medicines Agency guidelines. Using data from several published TrialNet studies, we evaluate our definition (no decrease in C-peptide) against published alternatives and determine that our definition has minimum bias with external validity. We observe that reliability could be improved by using changes in C-peptide later than 6 months beyond baseline. In sum, to support efficacy claims of β-cell preservation therapies in type 1 diabetes submitted to U.S. and European regulatory agencies, we recommend use of our definition. © 2014 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.
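
    The proposed responder definition is simple enough to encode directly: a patient is a clinical responder when 2-h stimulated C-peptide at 6 months has not decreased from baseline. The units (nmol/L) and sample values below are illustrative.

```python
# Direct encoding of the proposed responder definition ("no decrease in
# C-peptide"); sample values are illustrative, not study data.
def is_responder(c_peptide_baseline, c_peptide_6mo):
    """True when 6-month 2-h stimulated C-peptide has not fallen below baseline."""
    return c_peptide_6mo >= c_peptide_baseline

print(is_responder(0.8, 0.85))  # preserved beta-cell function -> True
print(is_responder(0.8, 0.55))  # decline from baseline -> False
```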

  16. Indicators of AEI applied to the Delaware Estuary.

    PubMed

    Barnthouse, Lawrence W; Heimbuch, Douglas G; Anthony, Vaughn C; Hilborn, Ray W; Myers, Ransom A

    2002-05-18

    We evaluated the impacts of entrainment and impingement at the Salem Generating Station on fish populations and communities in the Delaware Estuary. In the absence of an agreed-upon regulatory definition of "adverse environmental impact" (AEI), we developed three independent benchmarks of AEI based on observed or predicted changes that could threaten the sustainability of a population or the integrity of a community. Our benchmarks of AEI included: (1) disruption of the balanced indigenous community of fish in the vicinity of Salem (the "BIC" analysis); (2) a continued downward trend in the abundance of one or more susceptible fish species (the "Trends" analysis); and (3) occurrence of entrainment/impingement mortality sufficient, in combination with fishing mortality, to jeopardize the future sustainability of one or more populations (the "Stock Jeopardy" analysis). The BIC analysis utilized nearly 30 years of species presence/absence data collected in the immediate vicinity of Salem. The Trends analysis examined three independent data sets that document trends in the abundance of juvenile fish throughout the estuary over the past 20 years. The Stock Jeopardy analysis used two different assessment models to quantify potential long-term impacts of entrainment and impingement on susceptible fish populations. For one of these models, the compensatory capacities of the modeled species were quantified through meta-analysis of spawner-recruit data available for several hundred fish stocks. All three analyses indicated that the fish populations and communities of the Delaware Estuary are healthy and show no evidence of an adverse impact due to Salem. 
Although the specific models and analyses used at Salem are not applicable to every facility, we believe that a weight of evidence approach that evaluates multiple benchmarks of AEI using both retrospective and predictive methods is the best approach for assessing entrainment and impingement impacts at existing facilities.

  17. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  18. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
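
    The first ingredient above, proper orthogonal decomposition, extracts dominant spatial modes from response snapshots. A minimal sketch, assuming synthetic rank-1 data: the dominant POD mode is computed by power iteration on the snapshot covariance (a real analysis would use an SVD of the snapshot matrix).

```python
# Sketch of proper orthogonal decomposition (POD) via the method of snapshots,
# using plain power iteration on the snapshot covariance. Data are synthetic.
import math
import random

def pod_first_mode(snapshots, iters=200):
    """snapshots: list of state vectors (rows). Returns the dominant POD mode
    (unit vector) of the covariance C = (1/m) X^T X."""
    n = len(snapshots[0])
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        # w = C v, computed as X^T (X v) / m
        xv = [sum(s[j] * v[j] for j in range(n)) for s in snapshots]
        w = [sum(xv[i] * snapshots[i][j] for i in range(len(snapshots))) / len(snapshots)
             for j in range(n)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

random.seed(0)
base = [1.0, 2.0, -1.0]                     # hidden dominant spatial shape
snaps = [[a * b for b in base] for a in (0.5, -1.2, 0.9, 2.0)]
mode = pod_first_mode(snaps)
scale = mode[0]                             # mode is unique only up to sign
print([round(c / scale, 3) for c in mode])  # proportional to base: [1.0, 2.0, -1.0]
```

    Because the synthetic snapshots are all multiples of one shape, the covariance is rank-1 and power iteration recovers that shape exactly.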

  19. On the numerical computation of nonlinear force-free magnetic fields. [from solar photosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Chang, H. M.; Hagyard, M. J.; Gary, G. A.

    1990-01-01

    An algorithm has been developed to extrapolate nonlinear force-free magnetic fields from the photosphere, given the proper boundary conditions. This paper presents the results of this work, describing the mathematical formalism that was developed, the numerical techniques employed, and comments on the stability criteria and accuracy developed for these numerical schemes. An analytical solution is used for a benchmark test; the results show that the computational accuracy for the case of a nonlinear force-free magnetic field was on the order of a few percent (less than 5 percent). This newly developed scheme was applied to analyze a solar vector magnetogram, and the results were compared with the results deduced from the classical potential field method. The comparison shows that additional physical features of the vector magnetogram were revealed in the nonlinear force-free case.

  20. Nucleic acid polymeric properties and electrostatics: Directly comparing theory and simulation with experiment.

    PubMed

    Sim, Adelene Y L

    2016-06-01

    Nucleic acids are biopolymers that carry genetic information and are also involved in various gene regulation functions such as gene silencing and protein translation. Because of their negatively charged backbones, nucleic acids are polyelectrolytes. To adequately understand nucleic acid folding and function, we need to properly describe (i) their polymer/polyelectrolyte properties and (ii) the associated ion atmosphere. While various theories and simulation models have been developed to describe nucleic acids and the ions around them, many of these theories/simulations have not been well evaluated due to complexities in comparison with experiment. In this review, I discuss some recent experiments that have been strategically designed for straightforward comparison with theories and simulation models. Such data serve as excellent benchmarks to identify limitations in prevailing theories and simulation parameters. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Electroelastic fields in a layered piezoelectric cylindrical shell under dynamic load

    NASA Astrophysics Data System (ADS)

    Saviz, M. R.; Shakeri, M.; Yas, M. H.

    2007-10-01

    The objective of this paper is to demonstrate layerwise theory for the analysis of thick laminated piezoelectric shell structures. A general finite element formulation using the layerwise theory is developed for a laminated cylindrical shell with piezoelectric layers, subjected to dynamic loads. The quadratic approximation of the displacement and electric potential in the thickness direction is considered. The governing equations are reduced to two-dimensional (2D) differential equations. The three-dimensional (3D) elasticity solution is also presented. The resulting equations are solved by a proper finite element method. The numerical results for static loading are compared with exact solutions of benchmark problems. Numerical examples of the dynamic problem are presented. The convergence is studied, as is the influence of the electromechanical coupling on the axisymmetric free-vibration characteristics of a thick cylinder.

  2. Where Gibrat meets Zipf: Scale and scope of French firms

    NASA Astrophysics Data System (ADS)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2017-09-01

    The proper characterization of the size distribution and growth of firms represents an important issue in economics and business. We use the Maximum Entropy approach to assess the plausibility of the assumption that firm size follows Lognormal or Pareto distributions, which underlies most recent works on the subject. A comprehensive dataset covering the universe of French firms allows us to draw two major conclusions. First, the Pareto hypothesis for the whole distribution should be rejected. Second, by discriminating across firms based on the number of products sold and markets served, we find that, within the class of multi-product companies active in multiple markets, the distribution converges to a Zipf's law. Conversely, Lognormal distribution is a good benchmark for small single-product firms. The size distribution of firms largely depends on firms' diversification patterns.
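
    The Pareto-versus-Lognormal question above can be illustrated with a standard tail diagnostic (not the paper's Maximum Entropy test): fit a Pareto exponent by the Hill estimator and lognormal parameters by MLE, then compare mean log-likelihoods on synthetic Zipf-like firm sizes. All data and thresholds below are made up for illustration.

```python
# Toy comparison of Pareto vs lognormal fits to synthetic firm sizes.
# This is a generic diagnostic, not the paper's Maximum Entropy approach.
import math
import random

def hill_alpha(sizes, xmin):
    """Hill estimator of the Pareto tail exponent above xmin."""
    tail = [s for s in sizes if s >= xmin]
    return len(tail) / sum(math.log(s / xmin) for s in tail)

def loglik_pareto(sizes, xmin, alpha):
    tail = [s for s in sizes if s >= xmin]
    return sum(math.log(alpha) + alpha * math.log(xmin) - (alpha + 1) * math.log(s)
               for s in tail) / len(tail)

def loglik_lognormal(sizes, xmin):
    tail = [s for s in sizes if s >= xmin]
    logs = [math.log(s) for s in tail]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    return sum(-math.log(s * math.sqrt(2 * math.pi * var))
               - (math.log(s) - mu) ** 2 / (2 * var) for s in tail) / len(tail)

random.seed(1)
# Pareto(alpha = 1.1) sample via inverse CDF, i.e. roughly Zipf-like sizes
sizes = [1.0 / random.random() ** (1 / 1.1) for _ in range(5000)]
alpha = hill_alpha(sizes, 1.0)
print(round(alpha, 2))
print(loglik_pareto(sizes, 1.0, alpha) > loglik_lognormal(sizes, 1.0))
```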

  3. Ventilator-associated pneumonia rates at major trauma centers compared with a national benchmark: a multi-institutional study of the AAST.

    PubMed

    Michetti, Christopher P; Fakhry, Samir M; Ferguson, Pamela L; Cook, Alan; Moore, Forrest O; Gross, Ronald

    2012-05-01

    Ventilator-associated pneumonia (VAP) rates reported by the National Healthcare Safety Network (NHSN) are used as a benchmark and quality measure, yet different rates are reported from many trauma centers. This multi-institutional study was undertaken to elucidate VAP rates at major trauma centers. VAP rate/1,000 ventilator days, diagnostic methods, institutional, and aggregate patient data were collected retrospectively from a convenience sample of trauma centers for 2008 and 2009 and analyzed with descriptive statistics. At 47 participating Level I and II centers, the pooled mean VAP rate was 17.2 versus 8.1 for NHSN (2006-2008). Hospitals' rates were highly variable (range, 1.8-57.6), with 72.3% being above NHSN's mean. Rates differed based on who determined the rate (trauma service, 27.5; infection control or quality or epidemiology, 11.9; or collaborative effort, 19.9) and the frequency with which VAP was excluded based on aspiration or diagnosis before hospital day 5. In 2008 and 2009, blunt trauma patients had higher VAP rates (17.3 and 17.6, respectively) than penetrating patients (11.0 and 10.9, respectively). More centers used a clinical diagnostic strategy (57%) than a bacteriologic strategy (43%). Patients with VAP had a mean Injury Severity Score of 28.7, mean Intensive Care Unit length of stay of 20.8 days, and a 12.2% mortality rate. 50.5% of VAP patients had a traumatic brain injury. VAP rates at major trauma centers are markedly higher than those reported by NHSN and vary significantly among centers. Available data are insufficient to set benchmarks, because it is questionable whether any one data set is truly representative of most trauma centers. Application of a single benchmark to all centers may be inappropriate, and reliable diagnostic and reporting standards are needed. 
Prospective analysis of a larger data set is warranted, with attention to injury severity, risk factors specific to trauma patients, diagnostic method used, VAP definitions and exclusions, and reporting guidelines. III, prognostic study.
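
    The rate used throughout the study above is the standard device-associated rate: VAP episodes per 1,000 ventilator days. The counts below are illustrative, not taken from the study.

```python
# Standard device-associated infection rate; example numbers are illustrative.
def vap_rate(vap_cases, ventilator_days):
    """VAP episodes per 1,000 ventilator days."""
    return 1000.0 * vap_cases / ventilator_days

print(round(vap_rate(43, 2500), 1))  # -> 17.2
```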

  4. Sufficient conditions for uniqueness of the weak value

    NASA Astrophysics Data System (ADS)

    Dressel, J.; Jordan, A. N.

    2012-01-01

    We review and clarify the sufficient conditions for uniquely defining the generalized weak value as the weak limit of a conditioned average using the contextual values formalism introduced in Dressel, Agarwal and Jordan (2010 Phys. Rev. Lett. 104 240401). We also respond to criticism of our work by Parrott (arXiv:1105.4188v1) concerning a proposed counter-example to the uniqueness of the definition of the generalized weak value. The counter-example does not satisfy our prescription in the case of an underspecified measurement context. We show that when the contextual values formalism is properly applied to this example, a natural interpretation of the measurement emerges and the unique definition in the weak limit holds. We also prove a theorem regarding the uniqueness of the definition under our sufficient conditions for the general case. Finally, a second proposed counter-example by Parrott (arXiv:1105.4188v6) is shown not to satisfy the sufficiency conditions for the provided theorem.

  5. DOD Needs to Improve Management and Oversight of Operations at the Theater Retrograde- Camp Arifjan, Kuwait

    DTIC Science & Technology

    2010-09-30

    [Fragmented excerpt; original layout lost] ...only at the TRC (includes Retro Sort), Warehouse, and Bulk Yard. The Theater Retrograde consists of the TRC (includes Retro Sort)... through Kuwait. Figure 1. Proper Flow of Materiel from Iraq through Kuwait. Note 1: We reviewed operations at the TRC, Retro Sort, Warehouse... Materiel Processing Instructions, CIIC definitions: 1 - Highest Sensitivity - non-nuclear missiles and rockets, launcher tubes and explosive rounds; 2 - Highest...

  6. Guidelines on disposing of medical waste in the dialysis clinic.

    PubMed

    Park, Lawrence K

    2002-02-01

    The term "medical waste" varies from state to state as to its name, definition, and scope of coverage. In this article, we will focus on the process of how a dialysis clinic ensures proper classification, labeling, packaging, tracking, and disposal of medical waste. In addition, we will reference: OSHA regulations (29CFR1910), state specific regulations, DOT regulations (49CFR) and FDA regulations that impact the disposal of medical waste.

  7. Understanding and Improving Modifiable Cardiovascular Risks within the Air Force

    DTIC Science & Technology

    2013-10-04

    [Fragmented excerpt; original layout lost] ...Promotion Model (HPM). Findings: The definition of health included exercise, proper eating, sleep, and a spiritual connection, as well as the absence of... to health behaviors, including what it takes to be healthy, knowing oneself, and existing Air Force policies. The HPM did not fully address all of the... was used to arrange the data into data-driven themes. These themes were then compared to the elements of the Health Promotion Model (HPM).

  8. [Chemical databases and virtual screening].

    PubMed

    Rognan, Didier; Bonnet, Pascal

    2014-12-01

    A prerequisite to any virtual screening is the definition of the compound libraries to be screened. As we describe here, various sources are available. The selection of the proper library is usually project-dependent but at least as important as the screening method itself. This review details the main compound libraries that are available for virtual screening and guides the reader to the best possible selection according to their needs. © 2014 médecine/sciences – Inserm.

  9. Benchmarking specialty hospitals, a scoping review on theory and practice.

    PubMed

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  10. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles, to encourage the use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, with the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, not always recognized or provided by National Health Service managers, who are absorbed with quantitative benchmarking approaches and the measurability of comparative performance data. This review of published benchmarking literature was obtained through an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving to benchmarking activity in health services, and including not only published examples of benchmarking approaches and models but also web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also largely descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  11. Determination of five active compounds in Artemisia princeps and A. capillaris based on UPLC-DAD and discrimination of two species with multivariate analysis.

    PubMed

    Yang, Heejung; Lee, Dong Young; Jeon, Minji; Suh, Youngbae; Sung, Sang Hyun

    2014-05-01

    Five active compounds, chlorogenic acid, 3,5-di-O-caffeoylquinic acid, 4,5-di-O-caffeoylquinic acid, jaceosidin, and eupatilin, in Artemisia princeps (Compositae) were simultaneously determined by ultra-performance liquid chromatography coupled to a diode array detector. The morphological resemblance between A. princeps and A. capillaris makes it difficult to identify the species properly, which occasionally leads to misuse or misapplication in Korean traditional medicine. In this study, discrimination between A. princeps and A. capillaris was performed using the developed and validated method, which revealed a definite difference between the two species. The most reliable markers contributing to the discrimination of the two species were also identified using multivariate analysis methods, such as principal component analysis and partial least squares discriminant analysis.

  12. Distortion definition and correction in off-axis systems

    NASA Astrophysics Data System (ADS)

    Da Deppo, Vania; Simioni, Emanuele; Naletto, Giampiero; Cremonese, Gabriele

    2015-09-01

    Off-axis optical configurations are becoming more and more common in a variety of applications; in particular, they are the preferred solution for cameras devoted to the study of Solar System planets and small bodies (i.e. asteroids and comets). Off-axis designs, being devoid of central obstruction, are able to guarantee better PSF and MTF performance, and thus higher-contrast imaging capabilities with respect to classical on-axis designs. In particular, they are suitable for observing extended targets with intrinsically low-contrast features, or scenes with a high dynamic signal range. Classical distortion theory describes the performance of on-axis systems well, but it has to be adapted for the off-axis case. A proper way to deal with the off-axis distortion definition is thus needed, together with dedicated techniques to accurately measure and hence remove the distortion effects present in the acquired images. In this paper, a review of the distortion definition for off-axis systems will be given. In particular, the method adopted by the authors to deal with distortion-related issues (definition, measurement, removal) in some off-axis instruments will be described in detail.

  13. Review of life-cycle approaches coupled with data envelopment analysis: launching the CFP + DEA method for energy policy making.

    PubMed

    Vázquez-Rowe, Ian; Iribarren, Diego

    2015-01-01

    Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogenous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting.
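The core computational step of any LC + DEA method is deriving target operating points from DEA efficiency scores. A minimal sketch of that step using an input-oriented CCR model on three hypothetical energy systems (the data, and the simplifying assumption that the carbon footprint scales with inputs, are illustrative only, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    # sum_j lambda_j x_j <= theta * x_o
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
    # sum_j lambda_j y_j >= y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]))
    return res.x[0]

# three hypothetical energy systems: one input (fuel use), one output (energy)
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [2.0], [2.0]])
effs = [dea_efficiency(X, Y, o) for o in range(len(X))]
# target carbon footprints, assuming the footprint scales with inputs
cfp_current = np.array([10.0, 20.0, 40.0])
cfp_target = np.array(effs) * cfp_current
```

The efficient unit keeps its current footprint as its target, while inefficient units receive proportionally reduced target footprints, which is the benchmarking idea the CFP + DEA method builds on.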

  14. Review of Life-Cycle Approaches Coupled with Data Envelopment Analysis: Launching the CFP + DEA Method for Energy Policy Making

    PubMed Central

    Vázquez-Rowe, Ian

    2015-01-01

    Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogenous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting. PMID:25654136

  15. Using scientific evidence to improve hospital library services: Southern Chapter/Medical Library Association journal usage study.

    PubMed

    Dee, C R; Rankin, J A; Burns, C A

    1998-07-01

    Journal usage studies, which are useful for budget management and for evaluating collection performance relative to library use, have generally described a single library or subject discipline. The Southern Chapter/Medical Library Association (SC/MLA) study has examined journal usage at the aggregate data level with the long-term goal of developing hospital library benchmarks for journal use. Thirty-six SC/MLA hospital libraries, categorized for the study by size as small, medium, or large, reported current journal title use centrally for a one-year period following standardized data collection procedures. Institutional and aggregate data were analyzed for the average annual frequency of use, average costs per use and non-use, and average percent of non-used titles. Permutation F-type tests were used to measure differences among the three hospital groups. Averages were reported for each data set analysis. Statistical tests indicated no significant differences between the hospital groups, suggesting that benchmarks can be derived applying to all types of hospital libraries. The unanticipated lack of commonality among heavily used titles pointed to a need for uniquely tailored collections. Although the small sample size precluded definitive results, the study's findings constituted a baseline of data that can be compared against future studies.
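The permutation F-type test used in the study is straightforward to reproduce; the sketch below is a generic implementation on synthetic data (not the study's actual analysis code or usage figures):

```python
import numpy as np

rng = np.random.default_rng(0)

def f_stat(groups):
    # one-way ANOVA F statistic: between-group over within-group variance
    pooled = np.concatenate(groups)
    grand = pooled.mean()
    k, n = len(groups), len(pooled)
    msb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
    return msb / msw

def permutation_f_test(groups, n_perm=2000):
    split_at = np.cumsum([len(g) for g in groups])[:-1]
    observed = f_stat(groups)
    pooled = np.concatenate(groups)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # relabel under the null
        if f_stat(np.split(pooled, split_at)) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

# three synthetic "hospital groups"; the third has clearly higher usage
small = rng.normal(10.0, 2.0, 12)
medium = rng.normal(10.0, 2.0, 10)
large = rng.normal(16.0, 2.0, 14)
f_obs, p_value = permutation_f_test([small, medium, large])
```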

  16. Using scientific evidence to improve hospital library services: Southern Chapter/Medical Library Association journal usage study.

    PubMed Central

    Dee, C R; Rankin, J A; Burns, C A

    1998-01-01

    BACKGROUND: Journal usage studies, which are useful for budget management and for evaluating collection performance relative to library use, have generally described a single library or subject discipline. The Southern Chapter/Medical Library Association (SC/MLA) study has examined journal usage at the aggregate data level with the long-term goal of developing hospital library benchmarks for journal use. METHODS: Thirty-six SC/MLA hospital libraries, categorized for the study by size as small, medium, or large, reported current journal title use centrally for a one-year period following standardized data collection procedures. Institutional and aggregate data were analyzed for the average annual frequency of use, average costs per use and non-use, and average percent of non-used titles. Permutation F-type tests were used to measure differences among the three hospital groups. RESULTS: Averages were reported for each data set analysis. Statistical tests indicated no significant differences between the hospital groups, suggesting that benchmarks can be derived applying to all types of hospital libraries. The unanticipated lack of commonality among heavily used titles pointed to a need for uniquely tailored collections. CONCLUSION: Although the small sample size precluded definitive results, the study's findings constituted a baseline of data that can be compared against future studies. PMID:9681164

  17. Embedded real-time image processing hardware for feature extraction and clustering

    NASA Astrophysics Data System (ADS)

    Chiu, Lihu; Chang, Grant

    2003-08-01

    Printronix, Inc. uses scanner-based image systems to perform print quality measurements for line-matrix printers. The size of the image samples and the image definition required make commercial scanners convenient to use. The image processing is relatively well defined, and we are able to simplify many of the calculations into hardware equations and "c" code. The process of rapidly prototyping the system using DSP-based "c" code gets the algorithms well defined early in the development cycle. Once a working system is defined, the rest of the process involves splitting the task up for the FPGA and the DSP implementation. Deciding which of the two to use, the DSP or the FPGA, is a simple matter of trial benchmarking. There are two kinds of benchmarking: one for speed, and the other for memory. The more memory-intensive algorithms should run in the DSP, and the simple real-time tasks can use the FPGA most effectively. Once the task is split, we can decide on which platform each algorithm should be executed. This involves prototyping all the code in the DSP, then timing various blocks of the algorithm. Slow routines can be optimized using the compiler tools and, if further reduction in time is needed, repartitioned into tasks that the FPGA can perform.
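The two kinds of trial benchmarking described, one for speed and one for memory, can be sketched generically. The snippet below uses Python's timer and allocation tracer purely as a stand-in for the authors' DSP profiling tools; the routines being measured are hypothetical:

```python
import time
import tracemalloc

def benchmark(fn, repeats=200):
    """Return (seconds per call, peak bytes allocated) for one routine."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn()
    per_call = (time.perf_counter() - t0) / repeats
    tracemalloc.start()
    fn()                                   # single traced run for memory
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return per_call, peak

# memory-intensive routine -> candidate for the DSP;
# simple streaming routine -> candidate for the FPGA (per the text)
t_heavy, mem_heavy = benchmark(lambda: [i * i for i in range(10_000)])
t_light, mem_light = benchmark(lambda: sum(range(10_000)))
```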

  18. The multichannel n-propyl + O2 reaction surface: Definitive theory on a model hydrocarbon oxidation mechanism

    NASA Astrophysics Data System (ADS)

    Bartlett, Marcus A.; Liang, Tao; Pu, Liang; Schaefer, Henry F.; Allen, Wesley D.

    2018-03-01

    The n-propyl + O2 reaction is an important model of chain branching reactions in larger combustion systems. In this work, focal point analyses (FPAs) extrapolating to the ab initio limit were performed on the n-propyl + O2 system based on explicit quantum chemical computations with electron correlation treatments through coupled cluster single, double, triple, and perturbative quadruple excitations [CCSDT(Q)] and basis sets up to cc-pV5Z. All reaction species and transition states were fully optimized at the rigorous CCSD(T)/cc-pVTZ level of theory, revealing some substantial differences in comparison to the density functional theory geometries existing in the literature. A mixed Hessian methodology was implemented and benchmarked that essentially makes the computations of CCSD(T)/cc-pVTZ vibrational frequencies feasible and thus provides critical improvements to zero-point vibrational energies for the n-propyl + O2 system. Two key stationary points, n-propylperoxy radical (MIN1) and its concerted elimination transition state (TS1), were located 32.7 kcal mol-1 and 2.4 kcal mol-1 below the reactants, respectively. Two competitive β-hydrogen transfer transition states (TS2 and TS2') were found separated by only 0.16 kcal mol-1, a fact unrecognized in the current combustion literature. Incorporating TS2' in master equation (ME) kinetic models might reduce the large discrepancy of 2.5 kcal mol-1 between FPA and ME barrier heights for TS2. TS2 exhibits an anomalously large diagonal Born-Oppenheimer correction (ΔDBOC = 1.71 kcal mol-1), which is indicative of a nearby surface crossing and possible nonadiabatic reaction dynamics. The first systematic conformational search of three hydroperoxypropyl (QOOH) intermediates was completed, uncovering a total of 32 rotamers lying within 1.6 kcal mol-1 of their respective lowest-energy minima. 
Our definitive energetics for stationary points on the n-propyl + O2 potential energy surface provide key benchmarks for future studies of hydrocarbon oxidation.

  19. First Steps Toward a Quality of Climate Finance Scorecard (QUODA-CF): Creating a Comparative Index to Assess International Climate Finance Contributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra, Katherine; Roberts, Timmons; de Nevers, Michele

    Are climate finance contributor countries, multilateral aid agencies and specialized funds using widely accepted best practices in foreign assistance? How is it possible to measure and compare international climate finance contributions when there are as yet no established metrics or agreed definitions of the quality of climate finance? As a subjective metric, quality can mean different things to different stakeholders, and donor countries, recipients, and institutional actors may place quality across a broad spectrum of objectives. This subjectivity makes the assessment of the quality of climate finance contributions a useful and necessary exercise, but one that has many challenges. This work seeks to enhance the development of common definitions and metrics of the quality of climate finance, to understand what we can about those areas where climate finance information is available, and to shine a light on the areas where there is a severe dearth of data. Allowing for comparisons of the use of best practices across funding institutions in the climate sector could begin a process of benchmarking performance, fostering learning across institutions and driving improvements when incorporated in the internal evaluation protocols of those institutions. In the medium term, this kind of benchmarking and transparency could support fundraising in contributor countries and help build trust with recipient countries. As a feasibility study, this paper attempts to outline the importance of assessing international climate finance contributions while describing the difficulties in arriving at universally agreed measurements and indicators for assessment. In many cases, data are neither readily available nor complete, and there is no consensus on what should be included. 
A number of indicators are proposed in this study as a starting point with which to analyze voluntary contributions, but in some cases their methodologies are not complete, and further research is required for a robust measurement tool to be created.

  20. On Defining Mass

    NASA Astrophysics Data System (ADS)

    Hecht, Eugene

    2011-01-01

    Though central to any pedagogical development of physics, the concept of mass is still not well understood. Properly defining mass has proven to be far more daunting than contemporary textbooks would have us believe. And yet today the origin of mass is one of the most aggressively pursued areas of research in all of physics. Much of the excitement surrounding the Large Hadron Collider at CERN is associated with discovering the mechanism responsible for the masses of the elementary particles. This paper will first briefly examine the leading definitions, pointing out their shortcomings. Then, utilizing relativity theory, it will propose—for consideration by the community of physicists—a conceptual definition of mass predicated on the more fundamental concept of energy, more fundamental in that everything that has mass has energy, yet not everything that has energy has mass.

  1. Bodily systems and the spatial-functional structure of the human body.

    PubMed

    Smith, Barry; Munn, Katherine; Papakin, Igor

    2004-01-01

    The human body is a system made of systems. The body is divided into bodily systems proper, such as the endocrine and circulatory systems, which are subdivided into many sub-systems at a variety of levels, whereby all systems and subsystems engage in massive causal interaction with each other and with their surrounding environments. Here we offer an explicit definition of bodily systems and provide a framework for understanding their causal interactions. Medical sciences provide at best informal accounts of basic notions such as system, process, and function, and while such informality is acceptable in documentation created for human beings, it falls short of what is needed for computer representations. In our analysis we accordingly provide the framework for a formal definition of the bodily system and of associated notions.

  2. Genetic Algorithm Approaches for Actuator Placement

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2000-01-01

    This research investigated genetic algorithm approaches for smart actuator placement to provide aircraft maneuverability without requiring hinged flaps or other control surfaces. The effort supported goals of the Multidisciplinary Design Optimization focus efforts in NASA's Aircraft Morphing program. This work helped to properly identify various aspects of the genetic algorithm operators and parameters that allow for placement of discrete control actuators/effectors. An improved problem definition, including better definition of the objective function and constraints, resulted from this research effort. The work conducted for this research used a geometrically simple wing model; however, an increasing number of potential actuator placement locations were incorporated to illustrate the ability of the GA to determine promising actuator placement arrangements. This effort's major result is a useful genetic algorithm-based approach to assist in the discrete actuator/effector placement problem.
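A toy version of the genetic algorithm machinery involved can be sketched as follows; the chromosome encodes which of 16 hypothetical wing sites carry an actuator, and the fitness function is a stand-in, not the report's aeroelastic objective:

```python
import random

random.seed(1)

N_SITES = 16           # candidate actuator locations on the wing
POP, GENS = 40, 60

def fitness(chrom):
    # hypothetical objective: reward control authority of selected sites
    # and penalise actuator count (a stand-in for the aeroelastic model)
    authority = sum(bit * (1 + (i % 4)) for i, bit in enumerate(chrom))
    return authority - 2.0 * sum(chrom)

def crossover(a, b):
    cut = random.randrange(1, N_SITES)   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.05):
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]              # elitist truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
best = max(pop, key=fitness)
```

Elitist selection plus one-point crossover and bit-flip mutation is enough to converge on a small discrete placement problem like this; the research effort's contribution lies in tuning these operators for realistic actuator models.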

  3. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  4. A knowledge-based potential with an accurate description of local interactions improves discrimination between native and near-native protein conformations.

    PubMed

    Ferrada, Evandro; Vergara, Ismael A; Melo, Francisco

    2007-01-01

    The correct discrimination between native and near-native protein conformations is essential for achieving accurate computer-based protein structure prediction. However, this has proven to be a difficult task, since currently available physical energy functions, empirical potentials and statistical scoring functions are still limited in achieving this goal consistently. In this work, we assess and compare the ability of different full atom knowledge-based potentials to discriminate between native protein structures and near-native protein conformations generated by comparative modeling. Using a benchmark of 152 near-native protein models and their corresponding native structures that encompass several different folds, we demonstrate that the incorporation of close non-bonded pairwise atom terms improves the discriminating power of the empirical potentials. Since the direct and unbiased derivation of close non-bonded terms from current experimental data is not possible, we obtained and used those terms from the corresponding pseudo-energy functions of a non-local knowledge-based potential. It is shown that this methodology significantly improves the discrimination between native and near-native protein conformations, suggesting that a proper description of close non-bonded terms is important to achieve a more complete and accurate description of native protein conformations. Some external knowledge-based energy functions that are widely used in model assessment performed poorly, indicating that the benchmark of models and the specific discrimination task tested in this work constitutes a difficult challenge.
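Knowledge-based pairwise terms of the kind discussed are typically derived by inverse-Boltzmann statistics over distance-binned atom-pair counts. The sketch below is a generic illustration of that derivation, not the specific pseudo-energy functions used by the authors:

```python
import numpy as np

def pairwise_pseudo_energy(obs_counts, ref_counts, pseudo=1.0):
    """Inverse-Boltzmann pseudo-energy per distance bin for one atom-type
    pair: E(d) = -ln(P_obs(d) / P_ref(d)), with additive pseudocounts to
    guard against empty bins."""
    obs = np.asarray(obs_counts, float) + pseudo
    ref = np.asarray(ref_counts, float) + pseudo
    return -np.log((obs / obs.sum()) / (ref / ref.sum()))

# a bin enriched in native structures relative to the reference state
# receives a favourable (negative) pseudo-energy
energies = pairwise_pseudo_energy([10, 1, 1], [4, 4, 4])
flat = pairwise_pseudo_energy([5, 5, 5], [5, 5, 5])
```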

  5. Benchmark experiment for the cross section of the 100Mo(p,2n)99mTc and 100Mo(p,pn)99Mo reactions

    NASA Astrophysics Data System (ADS)

    Takács, S.; Ditrói, F.; Aikawa, M.; Haba, H.; Otuka, N.

    2016-05-01

    As the nuclear medicine community has shown an increasing interest in accelerator-produced 99mTc radionuclide, the possible alternative direct production routes for producing 99mTc were investigated intensively. One of these accelerator production routes is based on the 100Mo(p,2n)99mTc reaction. The cross section of this nuclear reaction was studied by several laboratories earlier, but the available data sets are not in good agreement. For large-scale accelerator production of 99mTc based on the 100Mo(p,2n)99mTc reaction, a well-defined excitation function is required to optimise the production process effectively. One of our recent publications pointed out that most of the available experimental excitation functions for the 100Mo(p,2n)99mTc reaction have the same general shape while their amplitudes are different. To confirm the proper amplitude of the excitation function, results of three independent experiments were presented (Takács et al., 2015). In this work we present the results of a thick target count rate measurement of the Eγ = 140.5 keV gamma-line from molybdenum irradiated by an Ep = 17.9 MeV proton beam, as an integral benchmark experiment, to validate the cross section data reported for the 100Mo(p,2n)99mTc and 100Mo(p,pn)99Mo reactions in Takács et al. (2015).

  6. The Borexino Thermal Monitoring & Management System and simulations of the fluid-dynamics of the Borexino detector under asymmetrical, changing boundary conditions

    NASA Astrophysics Data System (ADS)

    Bravo-Berguño, D.; Mereu, R.; Cavalcante, P.; Carlini, M.; Ianni, A.; Goretti, A.; Gabriele, F.; Wright, T.; Yokley, Z.; Vogelaar, R. B.; Calaprice, F.; Inzoli, F.

    2018-03-01

    A comprehensive monitoring system for the thermal environment inside the Borexino neutrino detector was developed and installed in order to reduce uncertainties in determining temperatures throughout the detector. A complementary thermal management system limits undesirable thermal couplings between the environment and Borexino's active sections. This strategy is bringing improved radioactive background conditions to the region of interest for the physics signal thanks to reduced fluid mixing induced in the liquid scintillator. Although fluid-dynamical equilibrium has not yet been fully reached, and thermal fine-tuning is possible, the system has proven extremely effective at stabilizing the detector's thermal conditions while offering precise insights into its mechanisms of internal thermal transport. Furthermore, a Computational Fluid-Dynamics analysis has been performed, based on the empirical measurements provided by the thermal monitoring system, providing insight into present and future thermal trends. A two-dimensional modeling approach was implemented in order to achieve a proper understanding of the thermal and fluid-dynamics in Borexino. It was optimized for different regions and periods of interest, focusing on the most critical effects that were identified as influencing background concentrations. Literature experimental case studies were reproduced to benchmark the method and settings, and a Borexino-specific benchmark was implemented in order to validate the modeling approach for thermal transport. Finally, fully-convective models were applied to understand general and specific fluid motions impacting the detector's Active Volume.

  7. Benchmark of 3D halo neutral simulation in TRANSP and FIDASIM and application to projected neutral-beam-heated NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Liu, D.; Medley, S. S.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2014-10-01

    A cloud of halo neutrals is created in the vicinity of the beam footprint during neutral beam injection, and the halo neutral density can be comparable with the beam neutral density. Proper modeling of halo neutrals is critical to correctly interpret neutral particle analyzer (NPA) and fast ion D-alpha (FIDA) signals, since these signals strongly depend on the local beam and halo neutral density. A 3D halo neutral model has recently been developed and implemented inside the TRANSP code. The 3D halo neutral code uses a ``beam-in-a-box'' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce thermal halo neutrals that are tracked through successive halo neutral generations until an ionization event occurs or a descendant halo exits the box. A benchmark between the 3D halo neutral model in TRANSP and the FIDA/NPA synthetic diagnostic code FIDASIM is carried out. Detailed comparison of halo neutral density profiles from the two codes will be shown. The NPA and FIDA simulations with and without 3D halos are applied to projections of plasma performance for the National Spherical Torus eXperiment-Upgrade (NSTX-U), and the effects of halo neutral density on NPA and FIDA signal amplitude and profile will be presented. Work supported by US DOE.

  8. A Computable Definition of Sepsis Facilitates Screening and Performance Improvement Tracking.

    PubMed

    Alessi, Lauren J; Warmus, Holly R; Schaffner, Erin K; Kantawala, Sajel; Carcillo, Joseph; Rosen, Johanna; Horvat, Christopher M

    2018-03-01

    Sepsis kills almost 5,000 children annually, accounting for 16% of pediatric health care spending in the United States. We sought to identify sepsis within the Electronic Health Record (EHR) of a quaternary children's hospital to characterize disease incidence, improve recognition and response, and track performance metrics. Methods are organized in a plan-do-study-act cycle. During the "plan" phase, electronic definitions of sepsis (blood culture and antibiotic within 24 hours) and septic shock (sepsis plus vasoactive medication) were created to establish benchmark data and track progress with statistical process control. The performance of a screening tool was evaluated in the emergency department. During the "do" phase, a novel inpatient workflow is being piloted, which involves regular sepsis screening by nurses using the tool and a regimented response to high-risk patients. Screening tool use in the emergency department reduced time to antibiotics (Fig. 1). Of 6,159 admissions between July and December 2016, the EHR definitions identified 1,433 (23.3%) with sepsis, of which 159 (11.1%) had septic shock. Hospital mortality was 2.2% for all sepsis patients and 15.7% for septic shock (Table 1). These findings approximate epidemiologic studies of sepsis and severe sepsis, which report a prevalence range of 0.45-8.2% and a mortality range of 8.2-25% (Table 2) [1-5]. Implementation of a sepsis screening tool is associated with improved performance. The prevalence of sepsis conditions identified with electronic definitions approximates the epidemiologic landscape characterized by other point-prevalence and administrative studies, providing face validity to this approach and proving useful for tracking performance improvement.
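The computable definitions stated in the abstract (sepsis: blood culture and antibiotic within 24 hours; septic shock: sepsis plus a vasoactive medication) translate directly into a screening rule. A minimal sketch follows; the event layout and medication list are hypothetical, not the hospital's actual EHR schema:

```python
from datetime import datetime, timedelta

VASOACTIVES = {"epinephrine", "norepinephrine", "dopamine", "vasopressin"}

def classify_admission(events):
    """Apply the computable definitions to one admission's EHR events.

    `events` is a list of (timestamp, kind, detail) tuples (hypothetical
    layout). Sepsis: blood culture and antibiotic within 24 hours of each
    other. Septic shock: sepsis plus any vasoactive medication.
    """
    cultures = [t for t, kind, _ in events if kind == "blood_culture"]
    antibiotics = [t for t, kind, _ in events if kind == "antibiotic"]
    vaso = any(kind == "medication" and detail in VASOACTIVES
               for _, kind, detail in events)
    sepsis = any(abs(c - a) <= timedelta(hours=24)
                 for c in cultures for a in antibiotics)
    return {"sepsis": sepsis, "septic_shock": sepsis and vaso}

t0 = datetime(2016, 7, 1, 8, 0)
result = classify_admission([
    (t0, "blood_culture", None),
    (t0 + timedelta(hours=3), "antibiotic", "ceftriaxone"),
    (t0 + timedelta(hours=5), "medication", "epinephrine"),
])
```

Running such a rule over all admissions is what yields the cohort counts and the statistical-process-control baseline described above.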

  9. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Tsao, C.L.

    1996-06-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. In this revision, benchmark values are updated where appropriate, new benchmark values are added, secondary sources are replaced by primary sources, and more complete documentation of the sources and derivation of all values is presented.

  10. Benchmarking in emergency health systems.

    PubMed

    Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg

    2002-12-01

    This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.

  11. Overcoming the Problems of Inconsistent International Migration data: A New Method Applied to Flows in Europe

    PubMed Central

    Raymer, James; van der Erf, Rob; van Wissen, Leo

    2010-01-01

    Due to differences in definitions and measurement methods, cross-country comparisons of international migration patterns are difficult and confusing. Emigration numbers reported by sending countries tend to differ from the corresponding immigration numbers reported by receiving countries. In this paper, a methodology is presented to achieve harmonised estimates of migration flows benchmarked to a specific definition of duration. This methodology accounts for both differences in definitions and the effects of measurement error due to, for example, under reporting and sampling fluctuations. More specifically, the differences between the two sets of reported data are overcome by estimating a set of adjustment factors for each country’s immigration and emigration data. The adjusted data take into account any special cases where the origin–destination patterns do not match the overall patterns. The new method for harmonising migration flows that we present is based on earlier efforts by Poulain (European Journal of Population, 9(4): 353–381 1993, Working Paper 12, joint ECE-Eurostat Work Session on Migration Statistics, Geneva, Switzerland 1999) and is illustrated for movements between 19 European countries from 2002 to 2007. The results represent a reliable and consistent set of international migration flows that can be used for understanding recent changes in migration patterns, as inputs into population projections and for developing evidence-based migration policies. PMID:21124647

  12. Overcoming the Problems of Inconsistent International Migration data: A New Method Applied to Flows in Europe.

    PubMed

    de Beer, Joop; Raymer, James; van der Erf, Rob; van Wissen, Leo

    2010-11-01

    Due to differences in definitions and measurement methods, cross-country comparisons of international migration patterns are difficult and confusing. Emigration numbers reported by sending countries tend to differ from the corresponding immigration numbers reported by receiving countries. In this paper, a methodology is presented to achieve harmonised estimates of migration flows benchmarked to a specific definition of duration. This methodology accounts for both differences in definitions and the effects of measurement error due to, for example, under reporting and sampling fluctuations. More specifically, the differences between the two sets of reported data are overcome by estimating a set of adjustment factors for each country's immigration and emigration data. The adjusted data take into account any special cases where the origin-destination patterns do not match the overall patterns. The new method for harmonising migration flows that we present is based on earlier efforts by Poulain (European Journal of Population, 9(4): 353-381 1993, Working Paper 12, joint ECE-Eurostat Work Session on Migration Statistics, Geneva, Switzerland 1999) and is illustrated for movements between 19 European countries from 2002 to 2007. The results represent a reliable and consistent set of international migration flows that can be used for understanding recent changes in migration patterns, as inputs into population projections and for developing evidence-based migration policies.
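The adjustment-factor idea can be illustrated numerically: reported emigration (E, by origin country) and immigration (I, by destination country) matrices disagree by country-specific multiplicative factors, which can be recovered in log space by least squares after anchoring one country's data as the benchmark. All numbers below are hypothetical, and the published method additionally handles measurement error and special origin-destination cases:

```python
import numpy as np

# hypothetical reported flows among 3 countries (row = origin, col = destination);
# E holds sending-country (emigration) counts, I receiving-country counts
E = np.array([[0.0, 10.0, 20.0],
              [8.0,  0.0, 16.0],
              [5.0,  4.0,  0.0]])
I = np.array([[0.0, 25.0, 50.0],
              [20.0, 0.0, 40.0],
              [12.5, 10.0, 0.0]])

n = E.shape[0]
rows, rhs = [], []
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(2 * n)
            row[i] = 1.0        # log factor for country i's emigration data
            row[n + j] = -1.0   # log factor for country j's immigration data
            rows.append(row)
            rhs.append(np.log(I[i, j] / E[i, j]))
# anchor: take country 0's immigration data as the benchmark (factor = 1)
rows.append(np.eye(2 * n)[n] * 1e3)
rhs.append(0.0)
log_f, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
emig_factor = np.exp(log_f[:n])
immig_factor = np.exp(log_f[n:])
harmonised = emig_factor[:, None] * E    # adjusted, now-consistent flows
```

The anchoring step mirrors the benchmarking of all flows to one agreed duration definition: whichever country's data embody that definition fixes the scale for everyone else.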

  13. MultivariateResidues: A Mathematica package for computing multivariate residues

    NASA Astrophysics Data System (ADS)

    Larsen, Kasper J.; Rietkerk, Robbert

    2018-01-01

    Multivariate residues appear in many different contexts in theoretical physics and algebraic geometry. In theoretical physics, they for example give the proper definition of generalized-unitarity cuts, and they play a central role in the Grassmannian formulation of the S-matrix by Arkani-Hamed et al. In realistic cases their evaluation can be non-trivial. In this paper we provide a Mathematica package for efficient evaluation of multivariate residues based on methods from computational algebraic geometry.
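
When the denominator of the rational function factorizes with one factor vanishing per variable, the multivariate residue reduces to an iterated univariate residue. The numeric sketch below illustrates that special case only; it is not the Groebner-basis machinery the package itself uses, and the test function is an arbitrary example.

```python
# Iterated residue of f = 1/(z1*z2*(1 - z1 - z2)) at (z1, z2) = (0, 0).
# Each factor has a simple pole, so Res_{z=a} f = lim_{z->a} (z - a) f(z),
# approximated numerically at z = a + eps. (For complex poles, eps would
# need to be taken as a complex offset.)

def simple_pole_residue(f, a, eps=1e-7):
    return eps * f(a + eps)   # (z - a) * f(z) evaluated just off the pole

def f(z1, z2):
    return 1.0 / (z1 * z2 * (1.0 - z1 - z2))

# residue in z1 at 0 (leaving a function of z2), then in z2 at 0
g = lambda z2: simple_pole_residue(lambda z1: f(z1, z2), 0.0)
res = simple_pole_residue(g, 0.0)   # exact value is 1
```

For non-factorizable denominators the iterated approach fails, which is precisely where the transformation-law and computational-algebraic-geometry methods of the package are needed.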

  14. Proceedings of the International Conference on Stiff Computation, April 12-14, 1982, Park City, Utah. Volume I.

    DTIC Science & Technology

    1982-01-01

    physical reasoning and based on computational experience with similar equations. There is another non-automatic way: through proper scaling of all...1979) for an automatic scheme for this scaling on a digital computer. Shampine (1980) reports a special definition of stiffness appropriate for...an analog for a laboratory that typically already has a digital computer. The digital computer is much more versatile. Also there does not yet exist software

  15. Clinical studies in orthodontics--an overview of NIDR-sponsored clinical orthodontic studies in the US.

    PubMed

    Baumrind, S

    1998-11-01

    A number of clinical trials sponsored by the National Institutes of Health (NIH) use rigorous methods of data acquisition and analysis previously developed in fundamental biology and the physical sciences. The naive expectation that these trials would lead relatively rapidly to definitive answers concerning the therapeutic strategies and techniques under study is dispelled. This presentation focuses on delineating differences between the study of central tendencies and individual variation, more specifically on the strategy to study this variation: measure additional sources of variance within each patient at more timepoints and perhaps with greater precision. As rigorous orthodontic research is still in its infancy, the problem of defining the proper mix between prospective and retrospective trials is discussed. In view of the high costs of prospective clinical trials, many of the questions germane to orthodontics can be answered by well-conducted retrospective trials, assuming that properly randomized sampling procedures are employed. Definitive clinical trials are likely to require better theoretical constructs, better instrumentation, and better measures than now available. Reasons for concern are the restricted resources available and the fact that current mensurational approaches may not detect many of the individual differences. The task of constructing sharable databases and record bases stored in digital form and available either remotely from servers, or locally from CD-ROMs or optical disks, is crucial to the optimization of future investigations.

  16. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  17. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  18. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such evaluation data sets.

  19. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass.
It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
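
The two-tier screening logic described here is straightforward to express in code. A minimal sketch, with placeholder benchmark values rather than actual NAWQC or SCV numbers:

```python
# Two-tier contaminant screening: compare an ambient concentration against a
# lower and an upper screening benchmark (placeholder values, not real NAWQC).

def screen(concentration, lower_benchmark, upper_benchmark):
    """Classify an ambient concentration against the screening benchmarks."""
    if concentration > upper_benchmark:
        return "contaminant of concern (remedial action likely needed)"
    if concentration > lower_benchmark:
        return "of concern unless data are unreliable or comparison inappropriate"
    return "not of concern (if ambient data are adequate)"

# hypothetical chemical: lower benchmark 2.0 ug/L, upper benchmark 50.0 ug/L
result = screen(120.0, lower_benchmark=2.0, upper_benchmark=50.0)
```

In practice each chemical would be compared against the full set of alternative benchmarks, with selection based on how many are exceeded and how conservative each benchmark is.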

  20. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  1. Web-HLA and Service-Enabled RTI in the Simulation Grid

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Li, Bo Hu; Chai, Xudong; Zhang, Lin

    HLA-based simulations in a grid environment have become a major research focus in the M&S community, but the current HLA has many shortcomings when run in a grid environment. This paper analyzes the analogies between HLA and OGSA from the software architecture point of view, and points out that the service-oriented method should be introduced into the three components of HLA to overcome these shortcomings. This paper proposes an expanded running architecture that integrates HLA with OGSA and realizes a service-enabled RTI (SE-RTI). In addition, to address the bottleneck of efficiently realizing the HLA time management mechanism, this paper proposes a centralized approach in which the CRC of the SE-RTI takes charge of time management and of dispatching the TSO events of each federate. Benchmark experiments indicate that the execution speed of simulations over the Internet or a WAN is appreciably improved.

  2. A new calibration code for the JET polarimeter.

    PubMed

    Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E

    2010-05-01

    An equivalent model of the JET polarimeter is presented, which overcomes the drawbacks of previous versions of the fitting procedures used to provide calibrated results. First, the signal-processing electronics were simulated to confirm that they are still working within the original specifications. Then the effective optical path of both the vertical and lateral chords was implemented to produce the calibration curves. This principled approach to the model yields a unique procedure that can be applied to any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The devised procedure has proved to work properly also for the most recent campaigns and high-current experiments.

  3. Magnetic exchange couplings from noncollinear perturbation theory: dinuclear CuII complexes.

    PubMed

    Phillips, Jordan J; Peralta, Juan E

    2014-08-07

    To benchmark the performance of a new method based on noncollinear coupled-perturbed density functional theory [J. Chem. Phys. 138, 174115 (2013)], we calculate the magnetic exchange couplings in a series of triply bridged ferromagnetic dinuclear Cu(II) complexes that have been recently synthesized [Phys. Chem. Chem. Phys. 15, 1966 (2013)]. We find that for any basis set, the couplings from our noncollinear coupled-perturbed methodology are practically identical to those of spin-projected energy differences when a hybrid density functional approximation is employed. This demonstrates that our methodology properly recovers a Heisenberg description for these systems, and is robust in its predictive power of magnetic couplings. Furthermore, this indicates that the failure of density functional theory to capture the subtle variation of the exchange couplings in these complexes is not simply an artifact of broken-symmetry methods, but rather a fundamental weakness of current approximate density functionals for the description of magnetic couplings.

  4. Experimental physics characteristics of a heavy-metal-reflected fast-spectrum critical assembly

    NASA Technical Reports Server (NTRS)

    Heneveld, W. H.; Paschall, R. K.; Springer, T. H.; Swanson, V. A.; Thiele, A. W.; Tuttle, R. J.

    1972-01-01

    A zero-power critical assembly was designed, constructed, and operated for the purpose of conducting a series of benchmark experiments dealing with the physics characteristics of a UN-fueled, Li-cooled, Mo-reflected, drum-controlled compact fast reactor for use with a space-power electric conversion system. The range of the previous experimental investigations has been expanded to include the reactivity effects of: (1) surrounding the reactor with 15.24 cm (6 in.) of polyethylene, (2) reducing the heights of a portion of the upper and lower axial reflectors by factors of 2 and 4, (3) adding 45 kg of W to the core uniformly in two steps, (4) adding 9.54 kg of Ta to the core uniformly, and (5) inserting 2.3 kg of polyethylene into the core proper and determining the effect of a Ta addition on the polyethylene worth.

  5. Task 28: Web Accessible APIs in the Cloud Trade Study

    NASA Technical Reports Server (NTRS)

    Gallagher, James; Habermann, Ted; Jelenak, Aleksandar; Lee, Joe; Potter, Nathan; Yang, Muqun

    2017-01-01

    This study explored three candidate architectures for serving NASA Earth Science Hierarchical Data Format Version 5 (HDF5) data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance for each architecture using several representative Use-Cases. The objectives of the project are: Conduct a trade study to identify one or more high performance integrated solutions for storing and retrieving NASA HDF5 and Network Common Data Format Version 4 (netCDF4) data in a cloud (web object store) environment. The target environment is Amazon Web Services (AWS) Simple Storage Service (S3). Conduct the needed level of software development to properly evaluate solutions in the trade study and to obtain required benchmarking metrics for input into a government decision on potential follow-on prototyping. Develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades.

  6. Library design practices for success in lead generation with small molecule libraries.

    PubMed

    Goodnow, R A; Guba, W; Haap, W

    2003-11-01

    The generation of novel structures amenable to rapid and efficient lead optimization comprises an emerging strategy for success in modern drug discovery. Small molecule libraries of sufficient size and diversity to increase the chances of discovery of novel structures make the high throughput synthesis approach the method of choice for lead generation. Despite an industry trend for smaller, more focused libraries, the need to generate novel lead structures makes larger libraries a necessary strategy. For libraries of several thousand or more members, solid phase synthesis approaches are the most suitable. While the technology and chemistry necessary for small molecule library synthesis continue to advance, success in lead generation requires rigorous consideration in the library design process to ensure the synthesis of molecules possessing the proper characteristics for subsequent lead optimization. Without proper selection of library templates and building blocks, solid phase synthesis methods often generate molecules which are too heavy, too lipophilic and too complex to be useful for lead optimization. The appropriate filtering of virtual library designs with multiple computational tools allows the generation of information-rich libraries within a drug-like molecular property space. An understanding of the hit-to-lead process provides a practical guide to molecular design characteristics. Examples of leads generated from library approaches also provide a benchmarking of successes as well as aspects for continued development of library design practices.
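
The computational filtering described above can be illustrated with a generic drug-likeness property filter. The thresholds below are the widely used rule-of-five-style cutoffs, not values from this article, and the molecule records and property names are hypothetical:

```python
# Illustrative property filter for a virtual library design. Thresholds are
# generic rule-of-five-style cutoffs; molecule records are hypothetical.

FILTERS = {
    "mol_weight": lambda v: v <= 500.0,   # screen out molecules that are "too heavy"
    "clogp":      lambda v: v <= 5.0,     # screen out "too lipophilic" molecules
    "rot_bonds":  lambda v: v <= 10,      # limit conformational complexity
}

def passes(molecule):
    """True if the molecule satisfies every property filter."""
    return all(check(molecule[prop]) for prop, check in FILTERS.items())

library = [
    {"name": "cmpd-1", "mol_weight": 342.4, "clogp": 2.1, "rot_bonds": 5},
    {"name": "cmpd-2", "mol_weight": 612.8, "clogp": 6.3, "rot_bonds": 12},
]
leads = [m["name"] for m in library if passes(m)]
```

Applying several independent property checks in this way keeps the virtual library inside a drug-like property space before any synthesis effort is committed.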

  7. Benchmarking the Performance of Mobile Laser Scanning Systems Using a Permanent Test Field

    PubMed Central

    Kaartinen, Harri; Hyyppä, Juha; Kukko, Antero; Jaakkola, Anttoni; Hyyppä, Hannu

    2012-01-01

    The performance of various mobile laser scanning systems was tested on an established urban test field. The test was connected to the European Spatial Data Research (EuroSDR) project “Mobile Mapping—Road Environment Mapping Using Mobile Laser Scanning”. Several commercial and research systems collected laser point cloud data on the same test field. The system comparisons focused on planimetric and elevation errors using a filtered digital elevation model, poles, and building corners as the reference objects. The results revealed the high quality of the point clouds generated by all of the tested systems under good GNSS conditions. With all professional systems properly calibrated, the elevation accuracy was better than 3.5 cm up to a range of 35 m. The best system achieved a planimetric accuracy of 2.5 cm over a range of 45 m. The planimetric errors increased as a function of range, but moderately so if the system was properly calibrated. The main focus on mobile laser scanning development in the near future should be on the improvement of the trajectory solution, especially under non-ideal conditions, using improvements in both hardware and software. Test fields are relatively easy to implement in built environments and they are feasible for verifying and comparing the performance of different systems and also for improving system calibration to achieve optimum quality.

  8. Real-time monitoring of emissions from monoethanolamine-based industrial scale carbon capture facilities.

    PubMed

    Zhu, Liang; Schade, Gunnar Wolfgang; Nielsen, Claus Jørgen

    2013-12-17

    We demonstrate the use of Proton Transfer Reaction time-of-flight mass spectrometry (PTR-ToF-MS) for real-time monitoring of gaseous emissions from industrial-scale amine-based carbon capture processes. The benchmark monoethanolamine (MEA) was used as an example of amines needing to be monitored from carbon capture facilities, and to describe how the measurements may be influenced by potentially interfering species in CO2 absorber stack discharges. On the basis of known or expected emission compositions, we investigated the PTR-ToF-MS MEA response as a function of sample flow humidity, ammonia, and CO2 abundances, and show that all can exhibit interferences, thus making accurate amine measurements difficult. This warrants a proper sample pretreatment, and we show an example using a dilution with bottled zero air of 1:20 to 1:10 to monitor stack gas concentrations at the CO2 Technology Center Mongstad (TCM), Norway. Observed emissions included many expected chemical species, dominantly ammonia and acetaldehyde, but also two new species previously not reported but emitted in significant quantities. With respect to concerns regarding amine emissions, we show that accurate amine quantifications in the presence of water vapor, ammonia, and CO2 become feasible after proper sample dilution, thus making PTR-ToF-MS a viable technique to monitor future carbon capture facility emissions, without conventional laborious sample pretreatment.

  9. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2003-01-01

    Progress during current reporting year 2002 by quarter--Progress during Q1 2002: (1) In accordance with Task 7.0 (D. No.2 Technical Publications) TerraTek, NETL, and the Industry Contributors successfully presented a paper detailing Phase 1 testing results at the February 2002 IADC/SPE Drilling Conference, a prestigious venue for presenting DOE and private sector drilling technology advances. The full reference is as follows: IADC/SPE 74540 ''World's First Benchmarking of Drilling Mud Hammer Performance at Depth Conditions'' authored by Gordon A. Tibbitts, TerraTek; Roy C. Long, US Department of Energy, Brian E. Miller, BP America, Inc.; Arnis Judzis, TerraTek; and Alan D. Black, TerraTek. Gordon Tibbitts, TerraTek, presented the well-attended paper in February 2002. The full text of the Mud Hammer paper was included in the last quarterly report. (2) The Phase 2 project planning meeting (Task 6) was held at ExxonMobil's Houston Greenspoint offices on February 22, 2002. In attendance were representatives from TerraTek, DOE, BP, ExxonMobil, PDVSA, Novatek, and SDS Digger Tools. (3) PDVSA has joined the advisory board to this DOE mud hammer project. PDVSA's commitment of cash and in-kind contributions was reported during the last quarter. (4) Strong Industry support remains for the DOE project. Both Andergauge and Smith Tools have expressed an interest in participating in the ''optimization'' phase of the program. The potential for increased testing with additional Industry cash support was discussed at the planning meeting in February 2002. Progress during Q2 2002: (1) Presentation material was provided to the DOE/NETL project manager (Dr. John Rogers) for the DOE exhibit at the 2002 Offshore Technology Conference. (2) Two meetings at Smith International and one at Andergauge in Houston were held to investigate their interest in joining the Mud Hammer Performance study.
(3) SDS Digger Tools (Task 3 Benchmarking participant) apparently has not negotiated a commercial deal with Halliburton on the supply of fluid hammers to the oil and gas business. (4) TerraTek is awaiting progress by Novatek (a DOE contractor) on the redesign and development of their next hammer tool. Their delay will require an extension to TerraTek's contracted program. (5) Smith International has sufficient interest in the program to start engineering and chroming of collars for testing at TerraTek. (6) Shell's Brian Tarr has agreed to join the Industry Advisory Group for the DOE project. The addition of Brian Tarr is welcomed as he has numerous years of experience with the Novatek tool and was involved in the early tests in Europe while with Mobil Oil. (7) Conoco's field trial of the Smith fluid hammer for an application in Vietnam was organized and has contributed to the increased interest in their tool. Progress during Q3 2002: (1) Smith International agreed to participate in the DOE Mud Hammer program. (2) Smith International chromed collars for upcoming benchmark tests at TerraTek, now scheduled for 4Q 2002. (3) ConocoPhillips had a field trial of the Smith fluid hammer offshore Vietnam. The hammer functioned properly, though the well encountered hole conditions and reaming problems. ConocoPhillips plans another field trial as a result. (4) DOE/NETL extended the contract for the fluid hammer program to allow Novatek to ''optimize'' their much-delayed tool to 2003 and to allow Smith International to add ''benchmarking'' tests in light of SDS Digger Tools' current financial inability to participate. (5) ConocoPhillips joined the Industry Advisors for the mud hammer program. Progress during Q4 2002: (1) Smith International participated in the DOE Mud Hammer program through full-scale benchmarking testing during the week of 4 November 2003.
(2) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests allowing extension of the contract to add to the benchmarking testing program. (3) Following the benchmark testing of the Smith International hammer, representatives from DOE/NETL, TerraTek, Smith International and PDVSA met at TerraTek in Salt Lake City to review observations, performance and views on the optimization step for 2003. (4) The December 2002 issue of Journal of Petroleum Technology (Society of Petroleum Engineers) highlighted the DOE fluid hammer testing program and reviewed last year's paper on the benchmark performance of the SDS Digger and Novatek hammers. (5) TerraTek's Sid Green presented a technical review for DOE/NETL personnel in Morgantown on ''Impact Rock Breakage'' and its importance in improving fluid hammer performance. Much discussion has taken place on the issues surrounding mud hammer performance at depth conditions.

  10. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  11. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  12. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...

  13. A progress report on estuary modeling by the finite-element method

    USGS Publications Warehouse

    Gray, William G.

    1978-01-01

    Various schemes are investigated for finite-element modeling of two-dimensional surface-water flows. The first schemes investigated combine finite-element spatial discretization with split-step time stepping schemes that have been found useful in finite-difference computations. Because of the large number of numerical integrations performed in space and the large sparse matrices solved, these finite-element schemes were found to be economically uncompetitive with finite-difference schemes. A very promising leapfrog scheme is proposed which, when combined with a novel very fast spatial integration procedure, eliminates the need to solve any matrices at all. Additional problems attacked included proper propagation of waves and proper specification of the normal flow-boundary condition. This report indicates work in progress and does not come to a definitive conclusion as to the best approach for finite-element modeling of surface-water problems. The results presented represent findings obtained between September 1973 and July 1976. (Woodard-USGS)
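
The leapfrog scheme mentioned above can be illustrated on a simpler test problem. A minimal sketch, assuming the generic three-level leapfrog update rather than the report's surface-water formulation, applied to the harmonic-oscillator test equation x'' = -x:

```python
# Generic leapfrog (three-level) time stepping on x'' = -x, written as the
# system x' = v, v' = -x. Leapfrog uses centered differences in time:
#   x_{n+1} = x_{n-1} + 2*dt*v_n,   v_{n+1} = v_{n-1} - 2*dt*x_n
# Not the report's shallow-water scheme; only an illustration of the stencil.
import math

dt, steps = 0.01, 1000
x_prev, v_prev = 1.0, 0.0                 # exact values at t = 0
x, v = math.cos(dt), -math.sin(dt)        # exact values at t = dt (startup step)
for _ in range(steps):
    x_next = x_prev + 2 * dt * v          # centered update for position
    v_next = v_prev + 2 * dt * (-x)       # centered update for velocity
    x_prev, v_prev, x, v = x, v, x_next, v_next

t = dt * (steps + 1)
# second-order accurate: x stays close to the exact solution cos(t)
```

Because the update at step n+1 needs only values at steps n and n-1, no matrix solve is required at any step, which is the property the report exploits to avoid solving large sparse systems.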

  14. Use of radio frequency identification (RFID) tags in bedside monitoring of endotracheal tube position.

    PubMed

    Reicher, Joshua; Reicher, Danielle; Reicher, Murray

    2007-06-01

    Improper positioning of the endotracheal tube during intubation poses a serious health risk to patients. In one prospective study of 219 critically ill patients, 14% required endotracheal tube repositioning after intubation [Brunel et al., Chest 1989; 96: 1043-1045]. While a variety of techniques are used to confirm proper tube placement, a chest X-ray is usually employed for definitive verification. Radio frequency identification (RFID) technology, in which an RFID reader emits and receives a signal from an RFID tag, may be useful in evaluating endotracheal tube position. RFID technology has already been approved for use in humans as a safe and effective tool in a variety of applications. The use of handheld RFID detectors and RFID tag-labeled endotracheal tubes could allow for easy and accurate bedside monitoring of endotracheal tube position, once initial proper placement is confirmed.

  15. Pathogenesis and diagnostic criteria for rickets and osteomalacia - proposal by an expert panel supported by Ministry of Health, Labour and Welfare, Japan, The Japanese Society for Bone and Mineral Research and The Japan Endocrine Society.

    PubMed

    Fukumoto, Seiji; Ozono, Keiichi; Michigami, Toshimi; Minagawa, Masanori; Okazaki, Ryo; Sugimoto, Toshitsugu; Takeuchi, Yasuhiro; Matsumoto, Toshio

    2015-01-01

    Rickets and osteomalacia are diseases characterized by impaired mineralization of bone matrix. Recent investigations revealed that the causes for rickets and osteomalacia are quite variable. While these diseases can severely impair the quality of life of the affected patients, rickets and osteomalacia can be completely cured or at least respond to treatment when properly diagnosed and treated according to the specific causes. On the other hand, there are no standard criteria to diagnose rickets or osteomalacia nationally and internationally. Therefore, we summarize the definition and pathogenesis of rickets and osteomalacia, and propose the diagnostic criteria and a flowchart for the differential diagnosis of various causes for these diseases. We hope that these criteria and flowchart are clinically useful for the proper diagnosis and management of patients with these diseases.

  16. Interiors, intentions, and the "spirituality" of Islamic ritual practice.

    PubMed

    Powers, Paul R

    2004-01-01

    The Arabic term niyya (intention) is prominent in texts of Islamic ritual law. Muslim jurists require niyya in the "heart" during such ritual duties as prayer, fasting, and pilgrimage. Western scholars often treat niyya as a "spiritual" component of Islamic ritual. Muslim jurists, however, consistently treat niyya as a formal, taxonomic matter, a mental focus that makes a given act into the specific named duty required by religious law. Although the effort to thrust niyya into a "spiritual" role is meant to defend Islam from charges of "empty ritualism," it subtly reinforces the characterization of Islam as rigidly legalistic. Much scholarship on niyya belies the scholars' own definition of "proper religion" centered on an inner, individual, nonmaterial dimension of the self. Instead of trying to wash away the formalism of niyya, I argue that scholars ought to recognize that such embodied practices are properly religious rather than spiritually defective.

  17. Pathogenesis and diagnostic criteria for rickets and osteomalacia--proposal by an expert panel supported by the Ministry of Health, Labour and Welfare, Japan, the Japanese Society for Bone and Mineral Research, and the Japan Endocrine Society.

    PubMed

    Fukumoto, Seiji; Ozono, Keiichi; Michigami, Toshimi; Minagawa, Masanori; Okazaki, Ryo; Sugimoto, Toshitsugu; Takeuchi, Yasuhiro; Matsumoto, Toshio

    2015-09-01

    Rickets and osteomalacia are diseases characterized by impaired mineralization of bone matrix. Recent investigations have revealed that the causes of rickets and osteomalacia are quite variable. Although these diseases can severely impair the quality of life of affected patients, rickets and osteomalacia can be completely cured or at least respond to treatment when properly diagnosed and treated according to the specific causes. On the other hand, there are no standard criteria to diagnose rickets or osteomalacia nationally and internationally. Therefore, we summarize the definition and pathogenesis of rickets and osteomalacia, and propose diagnostic criteria and a flowchart for the differential diagnosis of various causes of these diseases. We hope that these criteria and the flowchart are clinically useful for the proper diagnosis and management of these diseases.

  18. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...

  19. SFTYCHEF: A Consultative, Diagnostic Expert System for Trench Excavation Safety Analysis on Light Commercial Construction Projects.

    DTIC Science & Technology

    1987-03-30

Safe Trench Excavation ... Applicability to Solution via Expert System ... Background: Expert Systems ... Definition of an...trench, drownings in the trench, and other mishaps which are the result of a lack of proper consideration for safe construction practices. Although...the problem is not a new one, there is as yet no obvious method that will guarantee a safe trench. In addition, the expertise needed to provide case

  20. Relativistic chaos is coordinate invariant.

    PubMed

    Motter, Adilson E

    2003-12-05

    The noninvariance of Lyapunov exponents in general relativity has led to the conclusion that chaos depends on the choice of the space-time coordinates. Strikingly, we uncover the transformation laws of Lyapunov exponents under general space-time transformations and we find that chaos, as characterized by positive Lyapunov exponents, is coordinate invariant. As a result, the previous conclusion regarding the noninvariance of chaos in cosmology, a major claim about chaos in general relativity, necessarily involves the violation of hypotheses required for a proper definition of the Lyapunov exponents.
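For background, the largest Lyapunov exponent of a trajectory with initial perturbation δx(0) is conventionally defined as (a textbook definition, not quoted from the abstract):

```latex
\lambda \;=\; \lim_{t \to \infty} \frac{1}{t}\,
  \ln \frac{\lVert \delta x(t) \rVert}{\lVert \delta x(0) \rVert}
```

A rough sketch of why the sign of λ can survive a coordinate change: under a well-behaved time redefinition with \(\tilde{t}/t \to k > 0\), the exponent rescales as \(\tilde{\lambda} = \lambda / k\), so a positive exponent stays positive. The paper's general transformation laws are more involved than this illustrative special case.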

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Argurio, Riccardo; Dehouck, Francois

    We study how gravitational duality acts on rotating solutions, using the Kerr-NUT black hole as an example. After properly reconsidering how to take into account both electric (i.e. masslike) and magnetic (i.e. NUT-like) sources in the equations of general relativity, we propose a set of definitions for the dual Lorentz charges. We then show that the Kerr-NUT solution has nontrivial such charges. Further, we clarify in which respect Kerr's source can be seen as a mass M with a dipole of NUT charges.

  2. Community core detection in transportation networks

    NASA Astrophysics Data System (ADS)

    De Leo, Vincenzo; Santoboni, Giovanni; Cerina, Federica; Mureddu, Mario; Secchi, Luca; Chessa, Alessandro

    2013-10-01

    This work analyzes methods for the identification and the stability under perturbation of a territorial community structure with specific reference to transportation networks. We considered networks of commuters for a city and an insular region. In both cases, we have studied the distribution of commuters’ trips (i.e., home-to-work trips and vice versa). The identification and stability of the communities’ cores are linked to the land-use distribution within the zone system, and therefore their proper definition may be useful to transport planners.
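One simple way to see a commuter-network community core is to drop links carrying few trips and take connected components of what remains. The zone names, trip counts, and threshold below are invented for illustration; the study's own detection method is not specified in the abstract.

```python
# Toy commuter matrix: trips[(u, v)] = daily home-to-work trips between zones.
# Zone names and counts are hypothetical, purely for illustration.
trips = {
    ("A1", "A2"): 120, ("A2", "A3"): 95, ("A1", "A3"): 80,  # first core
    ("B1", "B2"): 150, ("B2", "B3"): 110,                    # second core
    ("A3", "B1"): 4,                                         # weak inter-core link
}

def communities(trips, threshold):
    """Connected components of the graph keeping only edges with >= threshold trips."""
    adj = {}
    for (u, v), w in trips.items():
        if w >= threshold:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        else:  # keep the nodes even when the edge is dropped
            adj.setdefault(u, set()); adj.setdefault(v, set())
    seen, comps = set(), []
    for start in sorted(adj):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:               # depth-first search for one component
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

print(communities(trips, threshold=10))  # weak link dropped: two cores
print(communities(trips, threshold=1))   # weak link kept: one component
```

Varying the threshold is a crude proxy for the perturbation-stability question the study raises: cores that persist over a wide range of thresholds are stable.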

  3. Breast Abscess Mimicking Breast Carcinoma in Male.

    PubMed

    Gochhait, Debasis; Dehuri, Priyadarshini; Umamahesweran, Sandyya; Kamat, Rohan

    2018-01-01

The male breast can show almost all pathological entities described in the female breast. Inflammatory conditions of the male breast are not common; occasionally, however, they are encountered in the form of an abscess. Clinically, gynecomastia always presents as a symmetric unilateral or bilateral lump in the retroareolar region, and any irregular asymmetric lump raises a possibility of malignancy. Radiology should be used as part of the triple assessment protocol for a breast lump, along with fine-needle aspiration cytology, for definitive diagnosis and proper management.

  4. Solving a product safety problem using a recycled high density polyethylene container

    NASA Technical Reports Server (NTRS)

    Liu, Ping; Waskom, T. L.

    1993-01-01

The objectives are to introduce basic problem-solving techniques for product safety, including problem identification, definition, solution criteria, test process and design, and data analysis. The students are given a recycled milk jug made of high-density polyethylene (HDPE) by blow molding. They then design and perform appropriate material tests to evaluate the product's safety when the milk jug is used in the specific way described in the procedure for this investigation.

  5. Navy Child Care, 1980.

    DTIC Science & Technology

    1982-12-01

Children Survey, 61% were married. In the survey done for this thesis, 39% were found to be unmarried (single parents) and 27% of these were divorced...ENLISTMENT OR RETENTION INTENTIONS (Percent) All married Single E-1 to E-5 (N=193) (N=118) (N=75) (N=142) I definitely will 25.9 22.9 30.7 20.4 I probably... Married, AK2: The child-care facility at (base name) is not properly manned. I've found that the women employed there seem(s) the least

  6. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1994 Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Mabrey, J.B.

    1994-07-01

This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
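The tiered comparison the report recommends (NAWQC first, then the alternative benchmarks) can be sketched as below. The chemical names, concentrations, and benchmark values are hypothetical placeholders, not values from the report.

```python
# Hypothetical screening table (mg/L); the report's actual values differ.
# A value of None means no benchmark of that type exists for the chemical.
benchmarks = {
    "chemical_X": {"NAWQC_acute": 0.05, "SAV": None, "SCV": 0.010,
                   "lowest_chronic_fish": 0.020},
    "chemical_Y": {"NAWQC_acute": None, "SAV": 0.30, "SCV": 0.150,
                   "lowest_chronic_fish": 0.200},
}

def screen(chemical, ambient):
    """Classify a chemical by comparing its ambient concentration to benchmarks."""
    b = benchmarks[chemical]
    # NAWQC are ARARs: exceeding them makes the chemical a contaminant of concern.
    if b["NAWQC_acute"] is not None and ambient > b["NAWQC_acute"]:
        return "contaminant of concern (NAWQC exceeded)"
    # Otherwise count how many alternative benchmarks are exceeded.
    exceeded = [name for name, val in b.items()
                if name != "NAWQC_acute" and val is not None and ambient > val]
    if exceeded:
        return f"candidate ({len(exceeded)} alternative benchmark(s) exceeded)"
    return "not selected"

print(screen("chemical_X", 0.06))  # above its NAWQC
print(screen("chemical_Y", 0.18))  # above SCV only
print(screen("chemical_Y", 0.01))  # below everything
```

In the report's scheme the count of exceeded alternative benchmarks, weighed against each benchmark's conservatism, guides selection; this sketch only shows the counting step.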

  7. License Compliance Issues For Biopharmaceuticals: Special Challenges For Negotiations Between Companies And Non-Profit Research Institutions.

    PubMed

    Ponzio, Todd A; Feindt, Hans; Ferguson, Steven

    2011-09-01

Biopharmaceuticals are therapeutic products based on biotechnology. They are manufactured by or from living organisms and are the most complex of all commercial medicines to develop, manufacture and qualify for regulatory approval. In recent years biopharmaceuticals have rapidly increased in number and importance with over 400 already marketed in the U.S. and European markets alone. Many companies throughout the world are now ramping up investments in biopharmaceutical R&D and expanding their portfolios through licensing of early-stage biotechnologies from universities and other non-profit research institutions, and there is an increasing number of license agreements for biopharmaceutical product development relative to traditional small molecule drug compounds. This trend will only continue as large numbers of biosimilars and biogenerics enter the market. A primary goal of technology transfer offices associated with publicly-funded, non-profit research institutions is to establish patent protection for inventions deemed to have commercial potential and license them for product development. Such licenses help stimulate economic development and job creation, bring a stream of royalty revenue to the institution and, hopefully, advance the public good or public health by bringing new and useful products to market. In the course of applying for such licenses, a commercial development plan is usually put forth by the license applicant. This plan indicates the path the applicant expects to follow to bring the licensed invention to market. In the case of small molecule drug compounds, there exists a widely-recognized series of clinical development steps, dictated by regulatory requirements, that must be met to bring a new drug to market, such as completion of preclinical toxicology, Phase 1, 2 and 3 testing and product approvals. 
These steps often become the milestone/benchmark schedule incorporated into license agreements which technology transfer offices use to monitor the licensee's diligence and progress; most exclusive licenses include a commercial development plan, with penalties, financial or even revocation of the license, if the plan is not followed, e.g., if the licensee falls too far behind schedule. This study examines whether developmental milestone schedules based on a small molecule drug development model are useful and realistic in setting expectations for biopharmaceutical product development. We reviewed the monitoring records of all exclusive Public Health Service (PHS) commercial development license agreements for small molecule drugs or therapeutics based on biotechnology (biopharmaceuticals) executed by the National Institutes of Health (NIH) Office of Technology Transfer (OTT) between 2003 and 2009. We found that most biopharmaceutical development license agreements required amending because developmental milestones in the negotiated schedule could not be met by the licensee. This was in stark contrast with license agreements for small molecule chemical compounds which rarely needed changes to their developmental milestone schedules. As commercial development licenses for biopharmaceuticals make up the vast majority of NIH's exclusive license agreements, there is clearly a need to: 1) more closely examine how these benchmark schedules are formed, 2) try to understand the particular risk factors contributing to benchmark schedule non-compliance, and 3) devise alternatives to the current license benchmark schedule structural model. Schedules that properly weigh the most relevant risk factors such as technology classification (e.g., vaccine vs recombinant antibody vs gene therapy), likelihood of unforeseen regulatory issues, and company size/structure may help assure compliance with original license benchmark schedules. 
This understanding, coupled with a modified approach to the license negotiation process that makes use of a clear and comprehensive term sheet to minimize ambiguities, should result in a more realistic benchmark schedule.

  8. License Compliance Issues For Biopharmaceuticals: Special Challenges For Negotiations Between Companies And Non-Profit Research Institutions

    PubMed Central

    Ponzio, Todd A.; Feindt, Hans; Ferguson, Steven

    2011-01-01

Biopharmaceuticals are therapeutic products based on biotechnology. They are manufactured by or from living organisms and are the most complex of all commercial medicines to develop, manufacture and qualify for regulatory approval. In recent years biopharmaceuticals have rapidly increased in number and importance with over 400 already marketed in the U.S. and European markets alone. Many companies throughout the world are now ramping up investments in biopharmaceutical R&D and expanding their portfolios through licensing of early-stage biotechnologies from universities and other non-profit research institutions, and there is an increasing number of license agreements for biopharmaceutical product development relative to traditional small molecule drug compounds. This trend will only continue as large numbers of biosimilars and biogenerics enter the market. A primary goal of technology transfer offices associated with publicly-funded, non-profit research institutions is to establish patent protection for inventions deemed to have commercial potential and license them for product development. Such licenses help stimulate economic development and job creation, bring a stream of royalty revenue to the institution and, hopefully, advance the public good or public health by bringing new and useful products to market. In the course of applying for such licenses, a commercial development plan is usually put forth by the license applicant. This plan indicates the path the applicant expects to follow to bring the licensed invention to market. In the case of small molecule drug compounds, there exists a widely-recognized series of clinical development steps, dictated by regulatory requirements, that must be met to bring a new drug to market, such as completion of preclinical toxicology, Phase 1, 2 and 3 testing and product approvals. 
These steps often become the milestone/benchmark schedule incorporated into license agreements which technology transfer offices use to monitor the licensee’s diligence and progress; most exclusive licenses include a commercial development plan, with penalties, financial or even revocation of the license, if the plan is not followed, e.g., if the licensee falls too far behind schedule. This study examines whether developmental milestone schedules based on a small molecule drug development model are useful and realistic in setting expectations for biopharmaceutical product development. We reviewed the monitoring records of all exclusive Public Health Service (PHS) commercial development license agreements for small molecule drugs or therapeutics based on biotechnology (biopharmaceuticals) executed by the National Institutes of Health (NIH) Office of Technology Transfer (OTT) between 2003 and 2009. We found that most biopharmaceutical development license agreements required amending because developmental milestones in the negotiated schedule could not be met by the licensee. This was in stark contrast with license agreements for small molecule chemical compounds which rarely needed changes to their developmental milestone schedules. As commercial development licenses for biopharmaceuticals make up the vast majority of NIH’s exclusive license agreements, there is clearly a need to: 1) more closely examine how these benchmark schedules are formed, 2) try to understand the particular risk factors contributing to benchmark schedule non-compliance, and 3) devise alternatives to the current license benchmark schedule structural model. Schedules that properly weigh the most relevant risk factors such as technology classification (e.g., vaccine vs recombinant antibody vs gene therapy), likelihood of unforeseen regulatory issues, and company size/structure may help assure compliance with original license benchmark schedules. 
This understanding, coupled with a modified approach to the license negotiation process that makes use of a clear and comprehensive term sheet to minimize ambiguities, should result in a more realistic benchmark schedule. PMID:22162900

  9. A proposal for the classification of biological weapons sensu lato.

    PubMed

    Rozsa, Lajos

    2014-12-01

For historical and legislative reasons, the category of bioweapons is rather poorly defined. Authors often disagree on including or excluding agents like hormones, psychochemicals, certain plants and animals (such as weeds or pests) or synthetic organisms. A wide definition threatens to erode the regime of international legislation, while narrow definitions leave out several important issues. Therefore, I propose a category of 'biological weapons sensu lato' (BWsl), defined here as any tool of human aggression whose acting principle is based on disciplines of biology, including particularly microbiology, epidemiology, medical biology, physiology, psychology, pharmacology and ecology, but excluding those based on inorganic agents. Synthetically produced equivalents (not necessarily exact copies) and mock weapons are also included. This definition does not involve any claim to subject all these weapons to international legislation but serves a purely scholarly purpose. BWsl may be properly categorized on the basis of the magnitude of the human population potentially targeted (4 levels: individuals, towns, countries, global) and the biological nature of the weapons' intended effects (4 levels: agricultural-ecological agents, and non-pathogenic, pathogenic, or lethal agents against humans).

  10. Torsion in gauge theory

    NASA Astrophysics Data System (ADS)

    Nieh, H. T.

    2018-02-01

The potential conflict between torsion and gauge symmetry in the Riemann-Cartan curved spacetime was noted by Kibble in his 1961 pioneering paper and has since been discussed by many authors. Kibble suggested that, to preserve gauge symmetry, one should forgo the covariant derivative in favor of the ordinary derivative in the definition of the field strength Fμν for massless gauge theories, while for massive vector fields, covariant derivatives should be adopted. This view was further emphasized by Hehl et al. in their influential 1976 review paper. We address the question of whether this deviation from the normal procedure of forgoing covariant derivatives in curved spacetime with torsion could give rise to inconsistencies in the theory, such as a loss of quantum renormalizability in a realistic interacting theory. We demonstrate in this paper the one-loop renormalizability of a realistic gauge theory of gauge bosons interacting with Dirac spinors, such as SU(3) chromodynamics, for the case of a curved Riemann-Cartan spacetime with totally antisymmetric torsion. This affirmative confirmation is one step toward justifying the assertion that the flat-space definition of the gauge-field strength should be adopted as the proper definition.

  11. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  12. Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.

    ERIC Educational Resources Information Center

    Inger, Morton

    Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…

  13. Benchmarks: The Development of a New Approach to Student Evaluation.

    ERIC Educational Resources Information Center

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  14. Preliminary report of histopathology associated with infection with tongue worms in Australian dogs and cattle.

    PubMed

    Shamsi, Shokoofeh; Loukopoulos, Panayiotis; McSpadden, Kate; Baker, Sara; Jenkins, David

    2018-05-23

Tongue worms utilise herbivorous mammals as intermediate hosts and reside in the nasopharynx of carnivores as their definitive hosts. A recent study in south-eastern Australia showed an unexpectedly high infection rate (67%) of wild dogs with these parasites. The present study aimed to determine the pathogenicity of the parasite in both definitive (dog) and intermediate (cattle) hosts by histopathology. The definitive host showed multifocal haemorrhage of the interstitium of the nasal mucosa, multifocal mucosal erosion, congestion and haemorrhage, with haemosiderin-laden macrophages present in those foci, and distortion and destruction of the nasal mucosa. Histopathologic examination of lymph nodes from an infected cow showed diffuse eosinophilic granulomatous necrotising lymphadenitis and perinodal panniculitis with intralesional parasitic remnants and comparatively large numbers of eosinophils. A large, ~300-500 μm diameter, area of necrosis was also observed in one lymph node. This is the first time a study has been undertaken in Australia to determine the pathogenicity of tongue worms in both their definitive and intermediate hosts. This is a preliminary study, and more studies are necessary to properly estimate the health impact of infection with these pathogenic parasites on Australian production and companion animals. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Negligible senescence: how will we know it when we see it?

    PubMed

    Heward, Christopher B

    2006-01-01

    The recent public claim that "SENS is a practical, foreseeable approach to curing aging" has stirred considerable controversy among bio-gerontologists. Testing this hypothesis will not only require precise definitions for the somewhat subjective terms "practical," "foreseeable," and "curing," it will require a precise definition of the term "aging." To facilitate proper experimental design, this definition must focus on the nature of aging itself, not its causes or consequences. Aging in mammals is a process that begins early in adult life and continues steadily thereafter until death. It is manifested by a decline in the functional capacity (or, more precisely, reserve capacity) of a variety of vital physiologic systems leading to increasing risk of morbidity and mortality over time. Aging, however, cannot be measured by simply monitoring morbidity and/or mortality. Aging can only be measured by monitoring the decline of global functional capacity itself. This, in turn, will require an operational definition of aging expressed as a rate function (i.e., it will have units expressing aging as an overall rate of functional change per unit time). Widespread acceptance of such global indexes of aging rate in animal models and humans will greatly facilitate research activity specifically designed to increase the understanding of aging mechanisms and antiaging interventions.

  16. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
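SPEC-derived scores such as HS06 aggregate per-benchmark runtime ratios with a geometric mean, which is why no single benchmark dominates the result. The sketch below uses invented ratios purely to show that aggregation step; the 10.4 figure above comes from the actual suite, not from this code.

```python
import math

def geometric_mean(ratios):
    """Geometric mean, the SPEC-style aggregation across individual benchmarks."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Invented per-benchmark ratios for one job slot (illustrative, not measurements).
ratios = [2.0, 2.5, 3.2, 2.8]
print(round(geometric_mean(ratios), 3))
```

Because the geometric mean multiplies ratios, halving one benchmark's ratio lowers the score by the same factor regardless of which benchmark it is, a deliberate design choice in SPEC-style suites.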

  17. The interpretation of physical activity, exercise, and sedentary behaviours by persons with multiple sclerosis.

    PubMed

    Kinnett-Hopkins, Dominique; Learmonth, Yvonne; Hubbard, Elizabeth; Pilutti, Lara; Roberts, Sarah; Fanning, Jason; Wójcicki, Thomas; McAuley, Edward; Motl, Robert

    2017-11-07

This study adopted a qualitative research design with directed content analysis and examined the interpretations of physical activity, exercise, and sedentary behaviour by persons with multiple sclerosis. Fifty-three persons with multiple sclerosis who were enrolled in an exercise trial took part in semi-structured interviews regarding personal interpretations of physical activity, exercise, and sedentary behaviours. Forty-three percent of participants indicated a consistent understanding of physical activity, 42% of participants indicated a consistent understanding of exercise, and 83% of participants indicated a consistent understanding of sedentary behaviour with the standard definitions. There was evidence of definitional ambiguity (i.e., 57, 58, and 11% of the sample for physical activity, exercise, and sedentary behaviour, respectively); 6% of the sample inconsistently defined sedentary behaviour with standard definitions. Some participants described physical activity in a manner that more closely aligned with exercise and confused sedentary behaviour with exercise or sleeping/napping. Results highlight the need to provide and utilise consistent definitions for accurate understanding, proper evaluation and communication of physical activity, exercise, and sedentary behaviours among persons with multiple sclerosis. The application of consistent definitions may minimise ambiguity, alleviate the equivocality of findings in the literature, and translate into improved communication about these behaviours in multiple sclerosis. Implications for Rehabilitation The symptoms of multiple sclerosis can be managed through participation in physical activity and exercise. Persons with multiple sclerosis are not engaging in sufficient levels of physical activity and exercise for health benefits. 
Rehabilitation professionals should use established definitions of physical activity, exercise, and sedentary behaviours when communicating about these behaviours among persons with multiple sclerosis.

  18. The Chiari 3 Malformation and a Systematic Review of the Literature.

    PubMed

    Young, Richard M; Shafa, Justin S; Myseros, John S

    2015-01-01

Chiari type 3 is a rare hindbrain malformation that has been reported in the literature primarily as case reports and case series. Radiological, pathophysiological and surgical definitions of the malformation are inconsistent in the literature and consequently can be confusing, and outcomes have also been uniformly poor. The definition of this rare malformation will be clarified through a case presentation. A retrospective review of prior publications in the PubMed and MEDLINE databases was performed looking for reports of 'Chiari 3 +/- malformation' and 'occipital encephalocele'. Relevant papers were reviewed and compiled into table format with associated descriptions of a Chiari type 3 malformation. A case illustration is presented with radiological and intraoperative imaging to reinforce and clarify the definition. Upon detailed review of the descriptions and imaging associated with each prior publication, there is a wide range of variability in what is considered a Chiari 3 malformation. Occipital, occipitocervical and high cervical defects have all been described as Chiari 3 malformations. Our case illustration presents a patient with an occipitocervical encephalocele with neural elements, which is the classic and accepted definition of the Chiari 3 malformation. Chiari type 3 is a rare congenital malformation, and prior publications describing this developmental disorder have not demonstrated a consensus in its definition. In addition, outcomes have traditionally been reported as poor. This case illustration of a Chiari type 3 reinforces the definition of an occipitocervical encephalocele with hindbrain herniation, and shows that with proper management not all Chiari 3 malformation patients have poor outcomes. © 2015 S. Karger AG, Basel.

  19. Factors related to progression and graduation rates for RN-to-bachelor of science in nursing programs: searching for realistic benchmarks.

    PubMed

    Robertson, Sue; Canary, Cheryl Westlake; Orr, Marsha; Herberg, Paula; Rutledge, Dana N

    2010-03-01

    Measurement and analysis of progression and graduation rates is a well-established activity in schools of nursing. Such rates are indices of program effectiveness and student success. The Commission on Collegiate Nursing Education (2008), in its recently revised Standards for Accreditation of Baccalaureate and Graduate Degree Nursing Programs, specifically dictated that graduation rates (including discussion of entry points, timeframes) be calculated for each degree program. This context affects what is considered timely progression to graduation. If progression and graduation rates are critical outcomes, then schools must fully understand their measurement as well as interpretation of results. Because no national benchmarks for nursing student progression/graduation rates exist, schools try to set expectations that are realistic yet academically sound. RN-to-bachelor of science in nursing (BSN) students are a unique cohort of baccalaureate learners who need to be understood within their own learning context. The purposes of this study were to explore issues and processes of measuring progression and graduation rates in an RN-to-BSN population and to identify factors that facilitate/hinder their successful progression to work toward establishing benchmarks for success. Using data collected from 14 California schools of nursing with RN-to-BSN programs, RN-to-BSN students were identified as generally older, married, and going to school part-time while working and juggling family responsibilities. The study found much program variation in definition of terms and measures used to report progression and graduation rates. A literature review supported the use of terms such as attrition, retention, persistence, graduation, completion, and success rates, in an overlapping and sometimes synonymous fashion. Conceptual clarity and standardization of measurements are needed to allow comparisons and setting of realistic benchmarks. 
One of the most important factors identified in this study is the potentially prolonged RN-to-BSN timeline to graduation. This underlines the need to look beyond standardized educational norms for graduation rates and consider the realities of "persistence" by which these students are successful in completing their studies. It also raises the question of whether student success and program success/effectiveness are two separate measures or two separate events on one progression timeline. While clarifying our thinking about success in this population of students, the study raised many questions that warrant further research and debate.

  20. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    ERIC Educational Resources Information Center

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…
