Sample records for research studies evaluating

  1. Research Governance and the Role of Evaluation: A Comparative Study

    ERIC Educational Resources Information Center

    Molas-Gallart, Jordi

    2012-01-01

    Through a comparative study of the United Kingdom and Spain, this article addresses the effect of different research governance structures on the functioning and uses of research evaluation. It distinguishes three main evaluation uses: distributive, improvement, and controlling. Research evaluation in the United Kingdom plays important…

  2. School Evaluation and Accreditation: A Bibliography of Research Studies.

    ERIC Educational Resources Information Center

    Diamond, Joan

    1982-01-01

    This 97-item bibliography cites research in the following categories: purposes and structures of school accreditation/evaluation; the school evaluation process, involving self-study, team visits, and implementation; evaluation of the accreditation/evaluation process; external factors influencing school accreditation/evaluation; and objectivity in…

  3. Evaluating Mixed Research Studies: A Mixed Methods Approach

    ERIC Educational Resources Information Center

    Leech, Nancy L.; Dellinger, Amy B.; Brannagan, Kim B.; Tanaka, Hideyuki

    2010-01-01

    The purpose of this article is to demonstrate application of a new framework, the validation framework (VF), to assist researchers in evaluating mixed research studies. Based on an earlier work by Dellinger and Leech, a description of the VF is delineated. Using the VF, three studies from education, health care, and counseling fields are…

  4. Helping Students Evaluate the Validity of a Research Study.

    ERIC Educational Resources Information Center

    Morgan, George A.; Gliner, Jeffrey A.

    Students often have difficulty in evaluating the validity of a study. A conceptually and linguistically meaningful framework for evaluating research studies is proposed that is based on the discussion of internal and external validity of T. D. Cook and D. T. Campbell (1979). The proposal includes six key dimensions, three related to internal…

  5. Ensuring Data Quality in Extension Research and Evaluation Studies

    ERIC Educational Resources Information Center

    Radhakrishna, Rama; Tobin, Daniel; Brennan, Mark; Thomson, Joan

    2012-01-01

    This article presents a checklist as a guide for Extension professionals to use in research and evaluation studies they carry out. A total of 40 statements grouped under eight data quality components--relevance, objectivity, validity, reliability, integrity, generalizability, completeness, and utility--are identified to ensure that research…

  6. Development and evaluation of a study design typology for human research.

    PubMed

    Carini, Simona; Pollock, Brad H; Lehmann, Harold P; Bakken, Suzanne; Barbour, Edward M; Gabriel, Davera; Hagler, Herbert K; Harper, Caryn R; Mollah, Shamim A; Nahm, Meredith; Nguyen, Hien H; Scheuermann, Richard H; Sim, Ida

    2009-11-14

    A systematic classification of study designs would be useful for researchers, systematic reviewers, readers, and research administrators, among others. As part of the Human Studies Database Project, we developed the Study Design Typology to standardize the classification of study designs in human research. We then performed a multiple observer masked evaluation of active research protocols in four institutions according to a standardized protocol. Thirty-five protocols were classified by three reviewers each into one of nine high-level study designs for interventional and observational research (e.g., N-of-1, Parallel Group, Case Crossover). Rater classification agreement was moderately high for the 35 protocols (Fleiss' kappa = 0.442) and higher still for the 23 quantitative studies (Fleiss' kappa = 0.463). We conclude that our typology shows initial promise for reliably distinguishing study design types for quantitative human research.
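
    The Fleiss' kappa values reported in this record measure chance-corrected agreement among the three reviewers. As a point of reference for readers, the sketch below shows the standard Fleiss' kappa calculation in Python; the rating counts are hypothetical and are not the study's data.

    # Minimal sketch of Fleiss' kappa for m raters assigning n subjects to k categories.
    # Input: an n x k matrix where cell [i][j] counts the raters who placed subject i
    # in category j (each row sums to m).
    def fleiss_kappa(counts):
        n = len(counts)        # subjects (e.g., protocols)
        m = sum(counts[0])     # raters per subject
        k = len(counts[0])     # categories (e.g., study design types)
        p_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts]
        p_bar = sum(p_i) / n                                    # mean observed agreement
        p_j = [sum(row[j] for row in counts) / (n * m) for j in range(k)]
        p_e = sum(p * p for p in p_j)                           # expected chance agreement
        return (p_bar - p_e) / (1 - p_e)

    # Hypothetical example: 5 protocols, 3 raters, 3 design categories.
    ratings = [[3, 0, 0], [2, 1, 0], [0, 3, 0], [1, 1, 1], [0, 0, 3]]
    print(round(fleiss_kappa(ratings), 3))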

  7. Can the impact of public involvement on research be evaluated? A mixed methods study.

    PubMed

    Barber, Rosemary; Boote, Jonathan D; Parry, Glenys D; Cooper, Cindy L; Yeeles, Philippa; Cook, Sarah

    2012-09-01

    Background: Public involvement is central to health and social research policies, yet few systematic evaluations of its impact have been carried out, raising questions about the feasibility of evaluating the impact of public involvement. Objective: To investigate whether it is feasible to evaluate the impact of public involvement on health and social research. Methods: Mixed methods including a two-round Delphi study with pre-specified 80% consensus criterion, with follow-up interviews. UK and international panellists came from different settings, including universities, health and social care institutions and charitable organizations. They comprised researchers, members of the public, research managers, commissioners and policy makers, self-selected as having knowledge and/or experience of public involvement in health and/or social research; 124 completed both rounds of the Delphi process. A purposive sample of 14 panellists was interviewed. Results: Consensus was reached that it is feasible to evaluate the impact of public involvement on 5 of 16 impact issues: identifying and prioritizing research topics, disseminating research findings and on key stakeholders. Qualitative analysis revealed the complexities of evaluating a process that is subjective and socially constructed. While many panellists believed that it is morally right to involve the public in research, they also considered that it is appropriate to evaluate the impact of public involvement. Conclusions: This study found consensus among panellists that it is feasible to evaluate the impact of public involvement on some research processes, outcomes and on key stakeholders. The value of public involvement and the importance of evaluating its impact were endorsed. © 2011 Blackwell Publishing Ltd.

  8. Can the impact of public involvement on research be evaluated? A mixed methods study

    PubMed Central

    Barber, Rosemary; Boote, Jonathan D; Parry, Glenys D; Cooper, Cindy L; Yeeles, Philippa; Cook, Sarah

    2011-01-01

    Background: Public involvement is central to health and social research policies, yet few systematic evaluations of its impact have been carried out, raising questions about the feasibility of evaluating the impact of public involvement. Objective: To investigate whether it is feasible to evaluate the impact of public involvement on health and social research. Methods: Mixed methods including a two-round Delphi study with pre-specified 80% consensus criterion, with follow-up interviews. UK and international panellists came from different settings, including universities, health and social care institutions and charitable organizations. They comprised researchers, members of the public, research managers, commissioners and policy makers, self-selected as having knowledge and/or experience of public involvement in health and/or social research; 124 completed both rounds of the Delphi process. A purposive sample of 14 panellists was interviewed. Results: Consensus was reached that it is feasible to evaluate the impact of public involvement on 5 of 16 impact issues: identifying and prioritizing research topics, disseminating research findings and on key stakeholders. Qualitative analysis revealed the complexities of evaluating a process that is subjective and socially constructed. While many panellists believed that it is morally right to involve the public in research, they also considered that it is appropriate to evaluate the impact of public involvement. Conclusions: This study found consensus among panellists that it is feasible to evaluate the impact of public involvement on some research processes, outcomes and on key stakeholders. The value of public involvement and the importance of evaluating its impact were endorsed. PMID:21324054

  9. 76 FR 17130 - Office of Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-28

    ... Impact Study fits within this agenda. The Committee will provide advice regarding future research efforts... Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation AGENCY... for Head Start Research and Evaluation. General Function of Committee: The Advisory Committee for Head...

  10. Institute for Developmental Studies Interim Progress Report. Part II: Research and Evaluation.

    ERIC Educational Resources Information Center

    Deutsch, Martin; And Others

    The Institute for Developmental Studies (IDS) is engaged in research aimed at specifying what the academic handicaps of deprived children are, what causes these handicaps, and what can be done to overcome them. This IDS report on their research and evaluation program is divided into two sections. The first, "Summaries of Basic Research, Applied…

  11. A Historical Reflection on Research Evaluation Studies, Their Recurrent Themes and Challenges. Technical Report

    ERIC Educational Resources Information Center

    Marjanovic, Sonja; Hanney, Stephen; Wooding, Steven

    2009-01-01

    This report critically examines studies of how scientific research drives innovation which is then translated into socio-economic benefits. It focuses on research evaluation insights that are relevant not only to the academic community, but also to policymakers and evaluation practitioners--and particularly to biomedical and health research…

  12. Evaluating institutional capacity for research ethics in Africa: a case study from Botswana.

    PubMed

    Hyder, Adnan A; Zafar, Waleed; Ali, Joseph; Ssekubugu, Robert; Ndebele, Paul; Kass, Nancy

    2013-07-30

    The increase in the volume of research conducted in Low and Middle Income Countries (LMIC) has brought a renewed international focus on processes for ethical conduct of research. Several programs have been initiated to strengthen the capacity for research ethics in LMIC. However, most such programs focus on individual training or development of ethics review committees. The objective of this paper is to present an approach to institutional capacity assessment in research ethics and application of this approach in the form of a case study from an institution in Africa. We adapted the Octagon model originally used by the Swedish International Development Cooperation Agency to assess an organization along eight domains in research ethics: basic values and identity; structure and organization; ability to carry out activities; relevance of activities to stated goals; capacity of staff and management; administrative, financing and accounting systems; its relations with target groups; and the national context. We used a mixed methods approach to collect empirical data at the University of Botswana from March to December 2010. The overall shape of the external evaluation Octagon suggests that strengths of the University of Botswana are in the areas of structure, relevance, production and identity, while the university still needs more work in the areas of systems of finance, target groups, and environment. The Octagons also show the similarities and discrepancies between the 'external' and 'internal' evaluations and provide an opportunity for exploration of these different assessments. For example, the discrepant score for 'identity' between internal and external evaluations allows for an exploration of what constitutes a strong identity for research ethics at the University of Botswana and how it can be strengthened. There is a general lack of frameworks for evaluating research ethics capacity in LMICs. We presented an approach that stresses evaluation from both internal

  13. Evaluating institutional capacity for research ethics in Africa: a case study from Botswana

    PubMed Central

    2013-01-01

    Background: The increase in the volume of research conducted in Low and Middle Income Countries (LMIC) has brought a renewed international focus on processes for ethical conduct of research. Several programs have been initiated to strengthen the capacity for research ethics in LMIC. However, most such programs focus on individual training or development of ethics review committees. The objective of this paper is to present an approach to institutional capacity assessment in research ethics and application of this approach in the form of a case study from an institution in Africa. Methods: We adapted the Octagon model originally used by the Swedish International Development Cooperation Agency to assess an organization along eight domains in research ethics: basic values and identity; structure and organization; ability to carry out activities; relevance of activities to stated goals; capacity of staff and management; administrative, financing and accounting systems; its relations with target groups; and the national context. We used a mixed methods approach to collect empirical data at the University of Botswana from March to December 2010. Results: The overall shape of the external evaluation Octagon suggests that strengths of the University of Botswana are in the areas of structure, relevance, production and identity, while the university still needs more work in the areas of systems of finance, target groups, and environment. The Octagons also show the similarities and discrepancies between the 'external' and 'internal' evaluations and provide an opportunity for exploration of these different assessments. For example, the discrepant score for 'identity' between internal and external evaluations allows for an exploration of what constitutes a strong identity for research ethics at the University of Botswana and how it can be strengthened. Conclusions: There is a general lack of frameworks for evaluating research ethics capacity in LMICs. We presented an approach that

  14. Researching evaluation influence: a review of the literature.

    PubMed

    Herbert, James Leslie

    2014-10-01

    The impact of an evaluation is an important consideration in designing and carrying out evaluations. Evaluation influence is a way of thinking about the effect that an evaluation can have in the broadest possible terms, which its proponents argue will lead to a systematic body of evidence about influential evaluation practices. This literature review sets out to address three research questions: How have researchers defined evaluation influence; how is this reflected in the research; and what does the research suggest about the utility of evaluation influence as a conceptual framework. Drawing on studies that had cited one of the key evaluation influence articles and conducted original research on some aspect of influence, this article reviewed the current state of the literature toward the goal of developing a body of evidence about how to practice influential evaluation. Twenty-eight studies were found that have drawn on evaluation influence, which were categorized into (a) descriptive studies, (b) analytical studies, and (c) hypothesis testing. Despite the prominence of evaluation influence in the literature, there is slow progress toward a persuasive body of literature. Many of the studies reviewed offered vague and inconsistent definitions and have applied influence in an unspecified way in the research. It is hoped that this article will stimulate interest in the systematic study of influence mechanisms, leading to improvements in the potential for evaluation to affect positive social change. © The Author(s) 2014.

  15. Forestry research evaluation: current progress, future directions.

    Treesearch

    Christopher D. Risbrudt; Pamela J. Jakes

    1985-01-01

    Research evaluation is a relatively recent endeavor in forestry economics. This workshop represents most of the current and recently completed studies available in this subfield of forestry and evaluation. Also included are discussions of scientists and policymakers concerning the uses of forestry research evaluations, evaluation problems encountered, solutions...

  16. Study Designs and Evaluation Models for Emergency Department Public Health Research

    PubMed Central

    Broderick, Kerry B.; Ranney, Megan L.; Vaca, Federico E.; D’Onofrio, Gail; Rothman, Richard E.; Rhodes, Karin V.; Becker, Bruce; Haukoos, Jason S.

    2011-01-01

    Public health research requires sound design and thoughtful consideration of potential biases that may influence the validity of results. It also requires careful implementation of protocols and procedures that are likely to translate from the research environment to actual clinical practice. This article is the product of a breakout session from the 2009 Academic Emergency Medicine consensus conference entitled “Public Health in the ED: Screening, Surveillance, and Intervention” and serves to describe in detail aspects of performing emergency department (ED)-based public health research, while serving as a resource for current and future researchers. In doing so, the authors describe methodologic features of study design, participant selection and retention, and measurements and analyses pertinent to public health research. In addition, a number of recommendations related to research methods and future investigations related to public health work in the ED are provided. Public health investigators are poised to make substantial contributions to this important area of research, but this will only be accomplished by employing sound research methodology in the context of rigorous program evaluation. PMID:20053232

  17. The evaluation of complex interventions in palliative care: an exploration of the potential of case study research strategies.

    PubMed

    Walshe, Catherine

    2011-12-01

    Complex, incrementally changing, context dependent and variable palliative care services are difficult to evaluate. Case study research strategies may have potential to contribute to evaluating such complex interventions, and to develop this field of evaluation research. This paper explores definitions of case study (as a unit of study, a process, and a product) and examines the features of case study research strategies which are thought to confer benefits for the evaluation of complex interventions in palliative care settings. Ten features of case study that are thought to be beneficial in evaluating complex interventions in palliative care are discussed, drawing from exemplars of research in this field. Important features are related to a longitudinal approach, triangulation, purposive instance selection, comprehensive approach, multiple data sources, flexibility, concurrent data collection and analysis, search for proving-disproving evidence, pattern matching techniques and an engaging narrative. The limitations of case study approaches are discussed including the potential for subjectivity and their complex, time consuming and potentially expensive nature. Case study research strategies have great potential in evaluating complex interventions in palliative care settings. Three key features need to be exploited to develop this field: case selection, longitudinal designs, and the use of rival hypotheses. In particular, case study should be used in situations where there is interplay and interdependency between the intervention and its context, such that it is difficult to define or find relevant comparisons.

  18. Evaluation research in occupational health services: general principles and a systematic review of empirical studies.

    PubMed

    Hulshof, C T; Verbeek, J H; van Dijk, F J; van der Weide, W E; Braam, I T

    1999-06-01

    To study the nature and extent of evaluation research in occupational health services (OHSs). Literature review of evaluation research in OHSs. On the basis of a conceptual model of OHS evaluation, empirical studies are categorised into aspects of input, process, output, outcome, and OHS core activities. Many methods to evaluate OHSs or OHS activities exist, depending on the objective and object of evaluation. The number of empirical studies on evaluation of OHSs or OHS activities that met the non-restrictive inclusion criteria was remarkably limited. Most of the 52 studies were more descriptive than evaluative. The methodological quality of most studies was not high. A differentiated picture of the evidence of effectiveness of OHSs arises. Occupational health consultations and occupational rehabilitation are hardly studied despite much time spent on the consultation by occupational physicians in most countries. The lack of effectiveness and efficiency of the pre-employment examination should lead to its abandonment as a means of selection of personnel by OHSs. Periodic health monitoring or surveillance, and education on occupational health hazards can be carried out with reasonable process quality. Identification and evaluation of occupational health hazards by a workplace survey can be done with a high output quality, which, however, does not guarantee a favourable outcome. Although rigorous study designs are not always applicable or feasible in daily practice, much more effort should be directed at the scientific evaluation of OHSs and OHS instruments. To develop evidence-based occupational health care, the quality of evaluation studies should be improved. In particular, process and outcome of consultation and rehabilitation activities of occupational physicians need to be studied more.

  19. The evolution of an evaluation: a case study using the tribal participatory research model.

    PubMed

    Richmond, Lucinda S; Peterson, Donna J; Betts, Sherry C

    2008-10-01

    This article presents a case study of how the evaluation design for a dating violence prevention and/or youth development program for American Indian youth in Arizona evolved throughout the project. Particular attention is given to how the evaluation design was guided by the tribal participatory research model. A brief rationale for the project is presented along with literature on culturally competent evaluation and research with American Indians. A description of the project and the unique communities in which it was implemented is provided. The focus of the article is the process of how the evaluation plan changed and how various factors influenced this process (e.g., feedback from community stakeholders, conversations with funder, results of process evaluation, suggestions from literature, the authors' experience working in American Indian communities). The authors conclude with lessons learned for others to consider as they develop working relationships and evaluation plans in similar communities.

  20. The Critical Research Evaluation Tool (CRET): A Teaching and Learning Tool to Evaluate Research for Cultural Competence.

    PubMed

    Love, Katie L

    The aim of this study was to present the Critical Research Evaluation Tool (CRET), which teaches evaluation of the researchers' worldview, applicability to multicultural populations, and ethics surrounding potential harms to communities. To provide the best cultural care, nurses need to understand how historical/social/political experiences impact health and also influence research. Students using the CRET reported receiving a strong foundation in research fundamentals, gaining a better understanding of critical frameworks in research, and learning more about themselves and reflecting on their own privileges and biases. The CRET provides nursing students and nursing faculty with a tool for examining diversity and ultimately decreasing health disparities.

  1. [Participation, knowledge production, and evaluative research: participation by different actors in a mental health study].

    PubMed

    Furtado, Juarez Pereira; Campos, Rosana Onocko

    2008-11-01

    This article reflects on the interrelations between participation, knowledge production, and public policy evaluation in light of issues from our own experience with evaluative research on a municipal network of Psychosocial Care Centers (CAPS) in Brazil. The article discusses the coordination of the complex process and the potentials and limits of partnerships for conducting qualitative evaluative studies in mental health with participation by different social actors. The authors conclude that qualitative evaluative research aligned with the perspective of including different points of view representing various segments is the best approach for understanding the numerous spin-offs from the implementation of services linked to the Brazilian psychiatric reform movement, given the inherent specificities of the mental health field.

  2. Learning Evaluation: blending quality improvement and implementation research methods to study healthcare innovations.

    PubMed

    Balasubramanian, Bijal A; Cohen, Deborah J; Davis, Melinda M; Gunn, Rose; Dickinson, L Miriam; Miller, William L; Crabtree, Benjamin F; Stange, Kurt C

    2015-03-10

    In healthcare change interventions, on-the-ground learning about the implementation process is often lost because of a primary focus on outcome improvements. This paper describes the Learning Evaluation, a methodological approach that blends quality improvement and implementation research methods to study healthcare innovations. Learning Evaluation is an approach to multi-organization assessment. Qualitative and quantitative data are collected to conduct real-time assessment of implementation processes while also assessing changes in context, facilitating quality improvement using run charts and audit and feedback, and generating transportable lessons. Five principles are the foundation of this approach: (1) gather data to describe changes made by healthcare organizations and how changes are implemented; (2) collect process and outcome data relevant to healthcare organizations and to the research team; (3) assess multi-level contextual factors that affect implementation, process, outcome, and transportability; (4) assist healthcare organizations in using data for continuous quality improvement; and (5) operationalize common measurement strategies to generate transportable results. Learning Evaluation principles are applied across organizations by the following: (1) establishing a detailed understanding of the baseline implementation plan; (2) identifying target populations and tracking relevant process measures; (3) collecting and analyzing real-time quantitative and qualitative data on important contextual factors; (4) synthesizing data and emerging findings and sharing with stakeholders on an ongoing basis; and (5) harmonizing and fostering learning from process and outcome data. Application to a multi-site program focused on primary care and behavioral health integration shows the feasibility and utility of Learning Evaluation for generating real-time insights into evolving implementation processes. Learning Evaluation generates systematic and rigorous cross

  3. Organising, Providing and Evaluating Technical Training for Early Career Researchers: A Case Study

    ERIC Educational Resources Information Center

    van Besouw, Rachel M.; Rogers, Katrine S.; Powles, Christopher J.; Papadopoulos, Timos; Ku, Emery M.

    2013-01-01

    This paper considers the importance of providing technical training opportunities for Early Career Researchers (ECRs) worldwide through the case study of a MATLAB training programme, which was proposed, organised, managed and evaluated by a team of five ECRs at the University of Southampton. The effectiveness of the programme in terms of the…

  4. Evaluating Research Articles from Start to Finish.

    ERIC Educational Resources Information Center

    Girden, Ellen R.

    This book is intended to train students in reading a research report critically. It uses actual research articles as examples, including both good and flawed studies in each category, and provides interpretation and evaluation of the appropriateness of the statistical analyses in each study. Individual chapters usually include two sample studies and…

  5. Evaluating Prior Scholarship in Literature Reviews of Research Articles: A Comparative Study of Practices in Two Research Paradigms

    ERIC Educational Resources Information Center

    Kwan, Becky S. C.; Chan, Hang; Lam, Colin

    2012-01-01

    Evaluations of prior scholarship play a crucial role in the literature review (LR) of a research article by showing how the boundary of an area of inquiry can be further advanced by the writer's work. Yet, many inexperienced writers find evaluating others' work a major challenge. Although the task has received some attention in research and…

  6. Study design in medical research: part 2 of a series on the evaluation of scientific publications.

    PubMed

    Röhrig, Bernd; du Prel, Jean-Baptist; Blettner, Maria

    2009-03-01

    The scientific value and informativeness of a medical study are determined to a major extent by the study design. Errors in study design cannot be corrected afterwards. Various aspects of study design are discussed in this article. Six essential considerations in the planning and evaluation of medical research studies are presented and discussed in the light of selected scientific articles from the international literature as well as the authors' own scientific expertise with regard to study design. The six main considerations for study design are the question to be answered, the study population, the unit of analysis, the type of study, the measuring technique, and the calculation of sample size. This article is intended to give the reader guidance in evaluating the design of studies in medical research. This should enable the reader to categorize medical studies better and to assess their scientific quality more accurately.

  7. 76 FR 789 - Office of Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-06

    ... this agenda. The Committee will provide advice regarding future research efforts to inform HHS about... Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation AGENCY... for Head Start Research and Evaluation. General Function of Committee: The Advisory Committee for Head...

  8. A Decade of Research on Evaluation: A Systematic Review of Research on Evaluation Published between 2005 and 2014

    ERIC Educational Resources Information Center

    Coryn, Chris L. S.; Wilson, Lyssa N.; Westine, Carl D.; Hobson, Kristin A.; Ozeki, Satoshi; Fiekowsky, Erica L.; Greenman, Gregory D., II; Schröter, Daniela C.

    2017-01-01

    Although investigations into evaluation theories, methods, and practices have been occurring since the late 1970s, research on evaluation (RoE) has seemingly increased in the past decade. In this review, 257 studies published in 14 evaluation-focused journals over a 10-year period (between 2005 and 2014) were identified as RoE and then classified…

  9. The impact of consumer involvement in research: an evaluation of consumer involvement in the London Primary Care Studies Programme.

    PubMed

    Wyatt, Katrina; Carter, Mary; Mahtani, Vinita; Barnard, Angela; Hawton, Annie; Britten, Nicky

    2008-06-01

    The value of consumer involvement in health services research is widely recognized. While there is a growing body of evidence about the principles of good consumer involvement, there is little research about the effect that involvement can have on the research. This evaluation assessed the level and impact of consumer involvement in the London Primary Care Studies Programme (LPCSP), all of whose individual projects had to demonstrate substantial involvement as a condition of funding. To evaluate consumer involvement in the LPCSP and understand what impact consumers had on the research process and outcomes. A multi-method case study approach was undertaken, using survey techniques, interviews, focus groups, observation and scrutiny of written documents. The overall data set comprised 61 questionnaires, 44 semi-structured interviews, 2 focus groups and 15 hours of observation of meetings. Eleven primary care-based research projects which together made up the LPCSP. An in-depth description of consumer involvement in the Programme was produced. Nine projects had consumers as co-applicants, four projects had been completed before the evaluation began and one was still ongoing at the time of the evaluation. Of the eight projects which have produced final reports, all met their aims and objectives. Consumers had had an additional impact in the research, in the initial design of the study, in recruitment of the research subjects, in developing data collection tools, in collecting the data, in analysis, and in disseminating the findings. Consumer involvement in National Health Service research is a relatively recent policy development, and while there is an increasing amount of literature about how and why consumers should be involved in research, there is less evidence about the impact of such involvement. This evaluation provides evidence about the impact that consumers have not only on the research process but also on the outcomes of the research.

  10. A Disability and Health Institutional Research Capacity Building and Infrastructure Model Evaluation: A Tribal College-Based Case Study

    ERIC Educational Resources Information Center

    Moore, Corey L.; Manyibe, Edward O.; Sanders, Perry; Aref, Fariborz; Washington, Andre L.; Robertson, Cherjuan Y.

    2017-01-01

    Purpose: The purpose of this multimethod study was to evaluate the institutional research capacity building and infrastructure model (IRCBIM), an emerging innovative and integrated approach designed to build, strengthen, and sustain adequate disability and health research capacity (i.e., research infrastructure and investigators' research skills)…

  11. Evaluating a team-based approach to research capacity building using a matched-pairs study design.

    PubMed

    Holden, Libby; Pager, Susan; Golenko, Xanthe; Ware, Robert S; Weare, Robyn

    2012-03-12

    There is a continuing need for research capacity building initiatives for primary health care professionals. Historically strategies have focused on interventions aimed at individuals but more recently theoretical frameworks have proposed team-based approaches. Few studies have evaluated these new approaches. This study aims to evaluate a team-based approach to research capacity building (RCB) in primary health using a validated quantitative measure of research capacity in individual, team and organisation domains. A non-randomised matched-pairs trial design was used to evaluate the impact of a multi-strategy research capacity building intervention. Four intervention teams recruited from one health service district were compared with four control teams from outside the district, matched on service role and approximate size. All were multi-disciplinary allied health teams with a primary health care role. Random-effects mixed models, adjusting for the potential clustering effect of teams, were used to determine the significance of changes in mean scores from pre- to post-intervention. Comparisons of intervention versus control groups were made for each of the three domains: individual, team and organisation. The Individual Domain measures the research skills of the individual, whereas Team and Organisation Domains measure the team/organisation's capacity to support and foster research, including research culture. In all three domains (individual, team and organisation) there were no occasions where improvements were significantly greater for the control group (comprising the four control teams, n = 32) compared to the intervention group (comprising the four intervention teams, n = 37) either in total domain score or domain item scores. However, the intervention group had a significantly greater improvement in adjusted scores for the Individual Domain total score and for six of the fifteen Individual Domain items, and to a lesser extent with Team and Organisation
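
    The analysis described in this record, a random-effects mixed model that adjusts for clustering by team, can be illustrated in general form with statsmodels. This is only a sketch under assumed variable names and simulated data, not the study's model or dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated pre/post scores for individuals nested in teams (hypothetical data).
    rng = np.random.default_rng(0)
    rows = []
    for team in range(8):
        group = "intervention" if team < 4 else "control"
        team_effect = rng.normal(0, 1)
        for person in range(9):
            base = 50 + team_effect + rng.normal(0, 5)
            gain = 6 if group == "intervention" else 1
            for time, score in [(0, base), (1, base + gain + rng.normal(0, 3))]:
                rows.append({"team": team, "group": group, "time": time, "score": score})
    df = pd.DataFrame(rows)

    # A random intercept for team adjusts for clustering; the group:time interaction
    # tests whether pre-to-post change differs between intervention and control.
    model = smf.mixedlm("score ~ group * time", df, groups=df["team"])
    print(model.fit().summary())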

  12. [Evaluation of arguments in research reports].

    PubMed

    Botes, A

    1999-06-01

    Some authors on research methodology are of the opinion that research reports are based on the logic of reasoning and that such reports communicate with the reader by presenting logical, coherent arguments (Böhme, 1975:206; Mouton, 1996:69). This view implies that researchers draw specific conclusions and that such conclusions are justified by way of reasoning (Doppelt, 1998:105; Giere, 1984:26; Harre, 1965:11; Leherer & Wagner, 1983; Pitt, 1988:7). The structure of a research report thus consists mainly of conclusions and reasons for such conclusions (Booth, Colomb & Williams, 1995:97). From this it appears that justification by means of reasoning is a standard procedure in research and research reports. Despite the fact that the logic of research is based on reasoning, that the justification of research findings by way of reasoning appears to be standard procedure and that the structure of a research report comprises arguments, the evaluation or assessment of research, as described in most textbooks on research methodology (Burns & Grove, 1993:647; Creswell, 1994:193; LoBiondo-Wood & Haber, 1994:441/481) does not focus on the arguments of research. The evaluation criteria for research reports which are set in these textbooks are related to the way in which the research process is carried out and focus on the measures for internal, external, theoretical, measurement and inferential validity. This means that criteria for the evaluation of research are comprehensive and they should be very specific in respect of each type of research (for example quantitative or qualitative). When the evaluation of research reports is focused on arguments and logic, there could probably be one set of universal standards against which all types of human science research reports can be assessed. Such a universal set of standards could possibly simplify the evaluation of research reports in the human sciences since they can be used to assess all the critical aspects of research reports

  13. Evaluating Research Administration: Methods and Utility

    ERIC Educational Resources Information Center

    Marina, Sarah; Davis-Hamilton, Zoya; Charmanski, Kara E.

    2015-01-01

    Three studies were jointly conducted by the Office of Research Administration and Office of Proposal Development at Tufts University to evaluate the services within each respective office. The studies featured assessments that used, respectively, (1) quantitative metrics; (2) a quantitative satisfaction survey with limited qualitative questions;…

  14. Health services research evaluation principles. Broadening a general framework for evaluating health information technology.

    PubMed

    Sockolow, P S; Crawford, P R; Lehmann, H P

    2012-01-01

    Our forthcoming national experiment in increased health information technology (HIT) adoption funded by the American Recovery and Reinvestment Act of 2009 will require a comprehensive approach to evaluating HIT. The quality of HIT evaluation studies to date limits the generalizability of findings and the depth of lessons learned, revealing a need for broader evaluation frameworks. Develop an informatics evaluation framework for health information technology (HIT) integrating components of health services research (HSR) evaluation and informatics evaluation to address identified shortcomings in available HIT evaluation frameworks. A systematic literature review updated and expanded the exhaustive review by Ammenwerth and deKeizer (AdK). From retained studies, criteria were elicited and organized into classes within a framework. The resulting Health Information Technology Research-based Evaluation Framework (HITREF) was used to guide clinician satisfaction survey construction, multi-dimensional analysis of data, and interpretation of findings in an evaluation of a vanguard community health care EHR. The updated review identified 128 electronic health record (EHR) evaluation studies and seven evaluation criteria not in AdK: EHR Selection/Development/Training; Patient Privacy Concerns; Unintended Consequences/Benefits; Functionality; Patient Satisfaction with EHR; Barriers/Facilitators to Adoption; and Patient Satisfaction with Care. HITREF was used productively and was a complete evaluation framework which included all themes that emerged. We can recommend to future EHR evaluators that they consider adding a complete, research-based HIT evaluation framework, such as HITREF, to their evaluation tools suite to monitor HIT challenges as the federal government strives to increase HIT adoption.

  15. Research performance evaluation: the experience of an independent medical research institute.

    PubMed

    Schapper, Catherine C; Dwyer, Terence; Tregear, Geoffrey W; Aitken, MaryAnne; Clay, Moira A

    2012-05-01

    Evaluation of the social and economic outcomes of health research funding is an area of intense interest and debate. Typically, approaches have sought to assess the impact of research funding by medical charities or regional government bodies. Independent research institutes have a similar need for accountability in investment decisions but have different objectives and funding; thus, the existing approaches are not appropriate. An evaluation methodology using eight indicators was developed to assess research performance across three broad categories: knowledge creation; inputs to research; and commercial, clinical and public health outcomes. The evaluation approach was designed to provide a balanced assessment across laboratory, clinical and public health research. With a diverse research agenda supported by a large number of researchers, the Research Performance Evaluation process at the Murdoch Childrens Research Institute has, by necessity, been iterative and responsive to the needs of the Institute and its staff. Since its inception 5 years ago, data collection systems have been refined, the methodology has been adjusted to capture appropriate data, staff awareness and participation have increased, and issues regarding the methodology and scoring have been resolved. The Research Performance Evaluation methodology described here provides a fair and transparent means of disbursing internal funding. It is also a powerful tool for evaluating the Institute's progress towards achieving its strategic goals, and is therefore a key driver for research excellence.

  16. Research on Livable Community Evaluation Based on GIS

    NASA Astrophysics Data System (ADS)

    Yin, Zhangcai; Wu, Yang; Jin, Zhanghaonan; Zhang, Xu

    2018-01-01

    Community is the basic unit of the city. Research on livable communities can provide a bottom-up path toward realizing a livable city. Livability comprises the full set of factors affecting the quality of community life. In this paper, livable community evaluation indexes are assessed using GIS and the fuzzy comprehensive evaluation method, and both the overall index and the sub-indexes of community livability are calculated. A community livability evaluation index system is then constructed on a GIS platform. This study provides theoretical support for the construction and management of livable communities and guidance for the development and optimization of the city.
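
    The fuzzy comprehensive evaluation method named in this record scores alternatives by combining an index weight vector with a fuzzy membership matrix. The sketch below illustrates only that generic calculation; the indexes, weights, and membership values are invented for illustration and are not taken from the paper.

    import numpy as np

    # Generic fuzzy comprehensive evaluation: B = W . R, where W holds the index
    # weights and each row of R gives an index's membership degrees over the
    # evaluation grades (e.g., "high", "medium", "low" livability).
    weights = np.array([0.4, 0.35, 0.25])      # hypothetical index weights (sum to 1)
    membership = np.array([                    # hypothetical membership matrix R
        [0.6, 0.3, 0.1],                       # e.g., access to services
        [0.5, 0.4, 0.1],                       # e.g., green space
        [0.2, 0.5, 0.3],                       # e.g., transport noise
    ])

    scores = weights @ membership              # composite membership over grades
    grades = ["high", "medium", "low"]
    print(dict(zip(grades, scores.round(3))))
    print("overall grade:", grades[int(scores.argmax())])  # maximum-membership principle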

  17. 75 FR 35816 - Office of Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-23

    ... Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation AGENCY...: This notice announces the re-establishment of the Advisory Committee on Head Start Research and... carrying out the independent research. FOR FURTHER INFORMATION CONTACT: Jennifer Brooks, Office of Planning...

  18. Formative Qualitative Evaluation for "Exploratory" ITS Research.

    ERIC Educational Resources Information Center

    Murray, Tom

    1993-01-01

    Discusses evaluation methods applicable to exploratory research areas, provides an overview of qualitative and formative methods for exploratory research on intelligent tutoring systems (ITS) and describes an exploratory study in ITS knowledge acquisition which involved working with three educators to build an ITS for high school physics.…

  19. Basic Exploratory Research versus Guideline-Compliant Studies Used for Hazard Evaluation and Risk Assessment: Bisphenol A as a Case Study

    PubMed Central

    Tyl, Rochelle W.

    2009-01-01

    Background: Myers et al. [Environ Health Perspect 117:309–315 (2009)] argued that Good Laboratory Practices (GLPs) cannot be used as a criterion for selecting data for risk assessment, using bisphenol A (BPA) as a case study. They did not discuss the role(s) of guideline-compliant studies versus basic/exploratory research studies, and they criticized both GLPs and guideline-compliant studies and their roles in formal hazard evaluation and risk assessment. They also specifically criticized our published guideline-compliant dietary studies on BPA in rats and mice and 17β-estradiol (E2) in mice. Objectives: As the study director/first author of the criticized E2 and BPA studies, I discuss the uses of basic research versus guideline-compliant studies, how testing guidelines are developed and revised, how new end points are validated, and the role of GLPs. I also provide an overview of the BPA guideline-compliant and exploratory research animal studies and describe BPA pharmacokinetics in rats and humans. I present responses to specific criticisms by Myers et al. Discussion and conclusions: Weight-of-evidence evaluations have consistently concluded that low-level BPA oral exposures do not adversely affect human developmental or reproductive health, and I encourage increased validation efforts for “new” end points for inclusion in guideline studies, as well as performance of robust long-term studies to follow early effects (observed in small exploratory studies) to any adverse consequences. PMID:20049112

  20. 76 FR 76417 - Office of Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-07

    ... provide advice regarding future research efforts to inform HHS about how to guide the development and... Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation AGENCY... will be open to the public. Name of Committee: Advisory Committee for Head Start Research and...

  1. Citation Ranking versus Peer Evaluation of Senior Faculty Research Performance: A Case Study of Kurdish Scholarship.

    ERIC Educational Resources Information Center

    Meho, Lokman I.; Sonnenwald, Diane H.

    2000-01-01

    Analyzes the relationship between citation ranking and peer evaluation in assessing senior faculty research performance. Describes a study of faculty specializing in Kurdish studies that investigated to what degree citation ranking correlates with data from citation content analysis, book reviews, and peer ranking. (Contains 72 references.)…

  2. Frameworks for evaluating health research capacity strengthening: a qualitative study

    PubMed Central

    2013-01-01

    Background: Health research capacity strengthening (RCS) projects are often complex and hard to evaluate. In order to inform health RCS evaluation efforts, we aimed to describe and compare key characteristics of existing health RCS evaluation frameworks: their process of development, purpose, target users, structure, content and coverage of important evaluation issues. A secondary objective was to explore what use had been made of the ESSENCE framework, which attempts to address one such issue: harmonising the evaluation requirements of different funders. Methods: We identified and analysed health RCS evaluation frameworks published by seven funding agencies between 2004 and 2012, using a mixed methods approach involving structured qualitative analyses of documents, a stakeholder survey and consultations with key contacts in health RCS funding agencies. Results: The frameworks were intended for use predominantly by the organisations themselves, and most were oriented primarily towards funders’ internal organisational performance requirements. The frameworks made limited reference to theories that specifically concern RCS. Generic devices, such as logical frameworks, were typically used to document activities, outputs and outcomes, but with little emphasis on exploring underlying assumptions or contextual constraints. Usage of the ESSENCE framework appeared limited. Conclusions: We believe that there is scope for improving frameworks through the incorporation of more accessible information about how to do evaluation in practice; greater involvement of stakeholders, following evaluation capacity building principles; greater emphasis on explaining underlying rationales of frameworks; and structuring frameworks so that they separate generic and project-specific aspects of health RCS evaluation. The third and fourth of these improvements might assist harmonisation. PMID:24330628

  3. Evaluative studies in nuclear medicine research: positron computed tomography assessment. Final report, January 1, 1982-December 31, 1982

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potchen, E.J.; Harris, G.I.; Gift, D.A.; Reinhard, D.K.

    Results are reported of the final phase of the study effort generally titled Evaluative Studies in Nuclear Medicine Research. The previous work is reviewed and extended to an assessment providing perspectives on medical applications of positron emission tomographic (PET) systems, their technological context, and the related economic and marketing environment. Methodologies developed and used in earlier phases of the study were continued, but specifically extended to include solicitation of opinion from commercial organizations deemed to be potential developers, manufacturers and marketers of PET systems. Several factors which influence the demand for clinical uses of PET are evaluated and discussed. The recent Federal funding of applied research with PET systems is found to be a necessary and encouraging event toward determining whether PET is a powerful research tool limited to research or whether it also presents major clinical utility. A comprehensive, updated bibliography of current literature related to the development, applications and economic considerations of PET technology is appended.

  4. Health systems research training enhances workplace research skills: a qualitative evaluation.

    PubMed

    Adams, Jolene; Schaffer, Angela; Lewin, Simon; Zwarenstein, Merrick; van der Walt, Hester

    2003-01-01

    In-service education is a widely used means of enhancing the skills of health service providers, for example, in undertaking research. However, the transfer of skills acquired during an education course to the workplace is seldom evaluated. The objectives of this study were to assess learner, teacher, and health service manager perceptions of the usefulness, in the work setting, of skills taught on a health systems research education course in South Africa and to assess the extent to which the course stimulated awareness and development of health systems research in the work setting. The education course was evaluated using a qualitative approach. Respondents were selected for interview using purposive sampling. Interviews were conducted with 39 respondents, including all of the major stakeholders. The interviews lasted between 20 and 60 minutes and were conducted either face to face or over the telephone. Thematic analysis was applied to the data, and key themes were identified. The course demystified health systems research and stimulated interest in reading and applying research findings. The course also changed participants' attitudes to routine data collection and was reported to have facilitated the application of informal research or problem-solving methods to everyday work situations. However, inadequate support within the workplace was a significant obstacle to applying the skills learned. A 2-week intensive, experiential course in health systems research methods can provide a mechanism for introducing basic research skills to a wide range of learners. Qualitative evaluation is a useful approach for assessing the impacts of education courses.

  5. Evaluation of a Research Mentorship Program in Community Care

    ERIC Educational Resources Information Center

    Ploeg, Jenny; de Witt, Lorna; Hutchison, Brian; Hayward, Lynda; Grayson, Kim

    2008-01-01

    This article describes the results of a qualitative case study evaluating a research mentorship program in community care settings in Ontario, Canada. The purpose of the program was to build evaluation and research capacity among staff of community care agencies through a mentorship program. Data were collected through in-depth, semi-structured…

  6. Historical Perspectives: A Review and Evaluation of 76 Studies of the Defense Research Enterprise, 1945-2015

    DTIC Science & Technology

    2016-08-01

    This report summarizes recommendations from 76 prior studies of the Department of Defense Research Enterprise. A brief summary and evaluation of each study is provided, and recommendations are grouped according to management areas. Enduring themes...

  7. Operations Research as a Metaphor for Evaluation. Research on Evaluation Program Paper and Report Series.

    ERIC Educational Resources Information Center

    Page, Ellis B.

    One of a series of research reports examining objective principles successfully used in other fields which can lend integrity and legitimacy to evaluation, this report presents an overview of operations research (OR) as a potential source of evaluation methodology. The nature of the methods common to this discipline are summarized and the…

  8. Evaluation of research in biomedical ontologies

    PubMed Central

    Dumontier, Michel; Gkoutos, Georgios V.

    2013-01-01

    Ontologies are now pervasive in biomedicine, where they serve as a means to standardize terminology, to enable access to domain knowledge, to verify data consistency and to facilitate integrative analyses over heterogeneous biomedical data. For this purpose, research on biomedical ontologies applies theories and methods from diverse disciplines such as information management, knowledge representation, cognitive science, linguistics and philosophy. Depending on the desired applications in which ontologies are being applied, the evaluation of research in biomedical ontologies must follow different strategies. Here, we provide a classification of research problems in which ontologies are being applied, focusing on the use of ontologies in basic and translational research, and we demonstrate how research results in biomedical ontologies can be evaluated. The evaluation strategies depend on the desired application and measure the success of using an ontology for a particular biomedical problem. For many applications, the success can be quantified, thereby facilitating the objective evaluation and comparison of research in biomedical ontology. The objective, quantifiable comparison of research results based on scientific applications opens up the possibility for systematically improving the utility of ontologies in biomedical research. PMID:22962340

  9. Systematic review of methods for evaluating healthcare research economic impact

    PubMed Central

    2010-01-01

    Background: The economic benefits of healthcare research require study so that appropriate resources can be allocated to this research, particularly in developing countries. As a first step, we performed a systematic review to identify the methods used to assess the economic impact of healthcare research, and the outcomes. Method: An electronic search was conducted in relevant databases using a combination of specific keywords. In addition, 21 relevant Web sites were identified. Results: The initial search yielded 8,416 articles. After studying titles, abstracts, and full texts, 18 articles were included in the analysis. Eleven other reports were found on Web sites. We found that the outcomes assessed as healthcare research payback included direct cost-savings, cost reductions in healthcare delivery systems, benefits from commercial advancement, and outcomes associated with improved health status. Two methods were used to study healthcare research payback: macro-economic studies, which examine the relationship between research studies and economic outcome at the aggregated level, and case studies, which examine specific research projects to assess economic impact. Conclusions: Our study shows that different methods and outcomes can be used to assess the economic impacts of healthcare research. There is no unique methodological approach for the economic evaluation of such research. In our systematic search we found no research that had evaluated the economic return of research in low and middle income countries. We therefore recommend a consensus on practical guidelines at international level on the basis of more comprehensive methodologies (such as the Canadian Academy of Health Sciences and payback frameworks) in order to build capacity, arrange for necessary informative infrastructures and promote necessary skills for economic evaluation studies. PMID:20196839

  10. Current Research: Measurement and Evaluation of the Collection.

    ERIC Educational Resources Information Center

    Mancall, Jacqueline C.

    1982-01-01

    Reviews the literature on the evaluation of library collections, discusses the research specifically on collection evaluation in school libraries, and draws attention to the types of studies that are needed in the school library media field. A 22-item reference list is included. (JL)

  11. Performance Evaluation Tests for Environmental Research (PETER): evaluation of 114 measures

    NASA Technical Reports Server (NTRS)

    Bittner, A. C. Jr; Carter, R. C.; Kennedy, R. S.; Harbeson, M. M.; Krause, M.

    1986-01-01

    The goal of the Performance Evaluation Tests for Environmental Research (PETER) Program was to identify a set of measures of human capabilities for use in the study of environmental and other time-course effects. The 114 measures studied in the PETER Program were evaluated and categorized into four groups based upon task stability and task definition. The Recommended category contained 30 measures that clearly obtained total stabilization and had an acceptable level of reliability efficiency. The Acceptable-But-Redundant category contained 15 measures. The 37 measures in the Marginal category, which included an inordinate number of slope and other derived measures, usually had desirable features which were outweighed by faults. The 32 measures in the Unacceptable category had either differential instability or weak reliability efficiency. It is our opinion that the 30 measures in the Recommended category should be given first consideration for environmental research applications. Further, it is recommended that information pertaining to preexperimental practice requirements and stabilized reliabilities should be utilized in repeated-measures environmental studies.

  12. 78 FR 55068 - Request for Information To Inform the Title III Evaluation and Research Studies Agenda

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    ... DEPARTMENT OF EDUCATION [Docket ID ED-2013-OELA-0117] Request for Information To Inform the Title III Evaluation and Research Studies Agenda AGENCY: Office of English Language Acquisition, Language..., Title III of the ESEA requires States to develop English language proficiency (ELP) standards that are...

  13. Evaluating a dementia learning community: exploratory study and research implications.

    PubMed

    Sheaff, Rod; Sherriff, Ian; Hennessy, Catherine Hagan

    2018-02-05

    Access times for, and the costs and overload of, hospital services are increasingly salient issues for healthcare managers in many countries. Rising demand for hospital care has been attributed partly to unplanned admissions for older people, and among these partly to the increasing prevalence of dementia. The paper makes a preliminary evaluation of the logic model of a Dementia Learning Community (DLC) intended to reduce unplanned hospital admissions from care homes of people with dementia. A dementia champion in each DLC care home trained other staff in dementia awareness and change management with the aims of changing work routines, improving quality of life, and reducing demands on external services. The study was a controlled mixed-methods realistic evaluation comparing 13 intervention homes with 10 controls in England during 2013-15. Each link in the assumed logic model was tested to find whether that link appeared to exist in the DLC sites, and if so whether its effects appeared greater there than in control sites, in terms of selected indicators of quality of life (DCM Well/Ill-Being, QUALID, end-of-life planning) and impacts on ambulance call-outs and hospital admissions. The training was implemented as planned, and triggered cycles of Plan-Do-Study-Act activity in all the intervention care homes. Residents' well-being scores, measured by dementia care mapping, improved markedly in half of the intervention homes but not in the other half, where indeed some scores deteriorated markedly. Most other care quality indicators studied did not significantly improve during the study period. Neither did ambulance call-out or emergency hospital admission rates. PDSA cycles appeared to be the more 'active ingredient' in this intervention. The reasons why they impacted on well-being in half of the intervention sites, and not the others, require further research. A larger, longer study would be necessary to measure definitively any impacts on unplanned hospital admissions. Our evidence

  14. [Assessment of research papers in medical university staff evaluation].

    PubMed

    Zhou, Qing-hui

    2012-06-01

    Staff evaluation is a substantial branch of education administration for medical universities. The number of research papers produced, as a direct index of achievement in academic research, plays an important role in academic research evaluation; the influence of those papers serves as an indirect index. This paper introduces the indexes commonly used to evaluate academic research papers and analyzes the applicability and limitations of each. The author argues that academic research evaluation in education administration, which is based mainly on the evaluation of academic research papers, should combine the evaluation of the journals in which the papers are published with peer review of the papers themselves, and should integrate qualitative with quantitative evaluation, in order to establish an objective academic research evaluation system for medical university staff.

  15. Designing, Teaching, and Evaluating Two Complementary Mixed Methods Research Courses

    ERIC Educational Resources Information Center

    Christ, Thomas W.

    2009-01-01

    Teaching mixed methods research is difficult. This longitudinal explanatory study examined how two classes were designed, taught, and evaluated. Curriculum, Research, and Teaching (EDCS-606) and Mixed Methods Research (EDCS-780) used a research proposal generation process to highlight the importance of the purpose, research question and…

  16. Interviewer as instrument: accounting for human factors in evaluation research.

    PubMed

    Brown, Joel H

    2006-04-01

    This methodological study examines an original data collection model designed to incorporate human factors and enhance data richness in qualitative and evaluation research. Evidence supporting this model is drawn from in-depth youth and adult interviews in one of the largest policy/program evaluations undertaken in the United States, the Drug, Alcohol, and Tobacco Education evaluation (77 districts, 118 schools). When applying the explicit observation technique (EOT)--the strategic and nonjudgmental disclosure of nonverbal human factor cues by the interviewer to the respondent during interview--data revealed the observation disclosure pattern. Here, respondents linked perceptions with policy or program implementation or effectiveness evidence. Although more research is needed, it is concluded that the EOT yields richer data when compared with traditional semistructured interviews and, thus, holds promise to enhance qualitative and evaluation research methods. Validity and reliability as well as qualitative and evaluation research considerations are discussed.

  17. FHWA Research and Technology Evaluation: Roundabout Research Final Report

    DOT National Transportation Integrated Search

    2018-06-01

    This evaluation assesses the effects of the Federal Highway Administration's (FHWA's) investment in roundabout research and related activities on the availability and quality of such research, the adoption of roundabouts in the United States, and...

  18. Research and Evaluation in Medical Education

    ERIC Educational Resources Information Center

    Ferris, Helena A.; Collins, Mary E.

    2015-01-01

    The landscape of medical education is continuously evolving, as are the needs of the learner. The appropriate use of research and evaluation is key when assessing the need for change and instituting one's innovative endeavours. This paper demonstrates how research seeks to generate new knowledge, whereas evaluation uses information acquired from…

  19. Functional Capacity Evaluation Research: Report from the Second International Functional Capacity Evaluation Research Meeting.

    PubMed

    James, C L; Reneman, M F; Gross, D P

    2016-03-01

    Functional capacity evaluations are an important component of many occupational rehabilitation programs and can play a role in facilitating reintegration to work, thus improving health and disability outcomes. The field of functional capacity evaluation (FCE) research has continued to develop over recent years, with growing evidence on the reliability, validity and clinical utility of FCE within different patient and healthy worker groups. The second International FCE Research Conference was held in Toronto, Canada, on October 2, 2014, adjacent to the 2014 Work Disability Prevention Integration conference. This paper describes the outcomes of the conference. Fifty-four participants from nine countries attended the conference, where eleven research projects and three workshops were presented. The conference provided an opportunity to discuss FCE practice, present new research and provide a forum for discourse around the issues pertinent to FCE use. Conference presentations covered aspects of FCE use including the ICF-FCE interface, aspects of reliability and validity, consideration of specific injury populations, comparisons of FCE components and a lively debate on the merits of 'Man versus Machine' in FCEs. Researchers, clinicians, and other professionals in the FCE area have a common desire to improve the content and quality of FCE research and to collaborate to further develop research across systems, cultures and countries.

  20. Commentary: Writing and Evaluating Qualitative Research Reports

    PubMed Central

    Thompson, Deborah; Aroian, Karen J.; McQuaid, Elizabeth L.; Deatrick, Janet A.

    2016-01-01

    Objective: To provide an overview of qualitative methods, particularly for reviewers and authors who may be less familiar with qualitative research. Methods: A question and answer format is used to address considerations for writing and evaluating qualitative research. Results and Conclusions: When producing qualitative research, individuals are encouraged to address the qualitative research considerations raised and to explicitly identify the systematic strategies used to ensure rigor in study design and methods, analysis, and presentation of findings. Increasing capacity for review and publication of qualitative research within pediatric psychology will advance the field's ability to gain a better understanding of the specific needs of pediatric populations, tailor interventions more effectively, and promote optimal health. PMID:27118271

  1. Research evaluation support services in biomedical libraries

    PubMed Central

    Gutzman, Karen Elizabeth; Bales, Michael E.; Belter, Christopher W.; Chambers, Thane; Chan, Liza; Holmes, Kristi L.; Lu, Ya-Ling; Palmer, Lisa A.; Reznik-Zellen, Rebecca C.; Sarli, Cathy C.; Suiter, Amy M.; Wheeler, Terrie R.

    2018-01-01

    Objective: The paper provides a review of current practices related to evaluation support services reported by seven biomedical and research libraries. Methods: A group of seven libraries from the United States and Canada described their experiences with establishing evaluation support services at their libraries. A questionnaire was distributed among the libraries to elicit information as to program development, service and staffing models, campus partnerships, training, products such as tools and reports, and resources used for evaluation support services. The libraries also reported interesting projects, lessons learned, and future plans. Results: The seven libraries profiled in this paper report a variety of service models in providing evaluation support services to meet the needs of campus stakeholders. The service models range from research center cores, partnerships with research groups, and library programs with staff dedicated to evaluation support services. A variety of products and services were described such as an automated tool to develop rank-based metrics, consultation on appropriate metrics to use for evaluation, customized publication and citation reports, resource guides, classes and training, and others. Implementing these services has allowed the libraries to expand their roles on campus and to contribute more directly to the research missions of their institutions. Conclusions: Libraries can leverage a variety of evaluation support services as an opportunity to successfully meet an array of challenges confronting the biomedical research community, including robust efforts to report and demonstrate tangible and meaningful outcomes of biomedical research and clinical care. These services represent a transformative direction that can be emulated by other biomedical and research libraries. PMID:29339930

  2. Research evaluation support services in biomedical libraries.

    PubMed

    Gutzman, Karen Elizabeth; Bales, Michael E; Belter, Christopher W; Chambers, Thane; Chan, Liza; Holmes, Kristi L; Lu, Ya-Ling; Palmer, Lisa A; Reznik-Zellen, Rebecca C; Sarli, Cathy C; Suiter, Amy M; Wheeler, Terrie R

    2018-01-01

    The paper provides a review of current practices related to evaluation support services reported by seven biomedical and research libraries. A group of seven libraries from the United States and Canada described their experiences with establishing evaluation support services at their libraries. A questionnaire was distributed among the libraries to elicit information as to program development, service and staffing models, campus partnerships, training, products such as tools and reports, and resources used for evaluation support services. The libraries also reported interesting projects, lessons learned, and future plans. The seven libraries profiled in this paper report a variety of service models in providing evaluation support services to meet the needs of campus stakeholders. The service models range from research center cores, partnerships with research groups, and library programs with staff dedicated to evaluation support services. A variety of products and services were described such as an automated tool to develop rank-based metrics, consultation on appropriate metrics to use for evaluation, customized publication and citation reports, resource guides, classes and training, and others. Implementing these services has allowed the libraries to expand their roles on campus and to contribute more directly to the research missions of their institutions. Libraries can leverage a variety of evaluation support services as an opportunity to successfully meet an array of challenges confronting the biomedical research community, including robust efforts to report and demonstrate tangible and meaningful outcomes of biomedical research and clinical care. These services represent a transformative direction that can be emulated by other biomedical and research libraries.

  3. Using a community of inquiry framework to teach a nursing and midwifery research subject: An evaluative study.

    PubMed

    Mills, Jane; Yates, Karen; Harrison, Helena; Woods, Cindy; Chamberlain-Salaun, Jennifer; Trueman, Scott; Hitchins, Marnie

    2016-08-01

    Postgraduate nursing students' negative perceptions about a core research subject at an Australian university led to a revision and restructure of the subject using a Communities of Inquiry framework. Negative views are often expressed by nursing and midwifery students about the research process. The success of evidence-based practice is dependent on changing these views. A Community of Inquiry is an online teaching, learning, thinking, and sharing space created through the combination of three domains: teacher presence (related largely to pedagogy), social presence, and cognitive presence (critical thinking). The aim was to evaluate student satisfaction with a postgraduate core nursing and midwifery subject in research design, theory, and methodology, which was delivered using a Communities of Inquiry framework. This evaluative study incorporated a validated Communities of Inquiry survey (n=29) and interviews (n=10) and was conducted at an Australian university. Study participants were a convenience sample drawn from 56 postgraduate students enrolled in a core research subject. Survey data were analysed descriptively and interviews were coded thematically. Five main themes were identified: subject design and delivery; cultivating community through social interaction; application-knowledge, practice, research; student recommendations; and technology and technicalities. Student satisfaction was generally high, particularly in the areas of cognitive presence (critical thinking) and teacher presence (largely pedagogy related). Students' views about the creation of a "social presence" were varied but overall, the framework was effective in stimulating both inquiry and a sense of community. The process of research is, in itself, the creation of a "community of inquiry." This framework showed strong potential for use in the teaching of nurse research subjects; satisfaction was high as students reported learning, not simply the theory and the methods of research, but also how to engage

  4. How to Assess Quality of Research in Iran, From Input to Impact? Introduction of Peer-Based Research Evaluation Model in Iran.

    PubMed

    Ebadifar, Asghar; Baradaran Eftekhari, Monir; Owlia, Parviz; Habibi, Elham; Ghalenoee, Elham; Bagheri, Mohammad Reza; Falahat, Katayoun; Eltemasi, Masoumeh; Sobhani, Zahra; Akhondzadeh, Shahin

    2017-11-01

    Research evaluation is a systematic and objective process for measuring the relevance, efficiency and effectiveness of research activities, and peer review is one of the most important tools for assessing the quality of research. The aim of this study was to introduce research evaluation indicators based on peer review. The study was implemented in 4 stages. A list of objective-oriented evaluation indicators was designed along 4 axes: governance and leadership, structure, knowledge production and research impact. The top 10% of medical sciences research centers (RCs) were evaluated based on peer review. Adequate equipment and laboratory instruments, high-quality research publication and national or international cooperation were the main strengths of the medical sciences RCs; the most important weaknesses included failure to adhere to strategic plans, parallel actions in similar fields, problems in manpower recruitment, and shortcomings in knowledge translation and exchange (KTE) at the service provider and policy maker levels. Peer review evaluation can improve the quality of research.

  5. How Does Research Evaluation Impact Educational Research? Exploring Intended and Unintended Consequences of Research Assessment in the United Kingdom, 1986-2014

    ERIC Educational Resources Information Center

    Marques, Marcelo; Powell, Justin J. W.; Zapp, Mike; Biesta, Gert

    2017-01-01

    Research evaluation systems in many countries aim to improve the quality of higher education. Among the first of such systems, the UK's Research Assessment Exercise (RAE) dating from 1986 is now the Research Excellence Framework (REF). Highly institutionalised, it transforms research to be more accountable. While numerous studies describe the…

  6. Fuels research studies at NASA Lewis

    NASA Technical Reports Server (NTRS)

    Antoine, A. C.

    1982-01-01

    Fuels research studies carried out in a variety of areas related to aviation propulsion, ground transportation, and stationary power generation systems are discussed. The major efforts are directed to studies on fuels for jet aircraft. These studies involve fuels preparation, fuels analysis, and fuel quality evaluations. The scope and direction of research activities in these areas are discussed, descriptions of Lewis capabilities and facilities are given, and results of recent research efforts are reported.

  7. 76 FR 42712 - Advisory Committee on Head Start Research and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Research and Evaluation will provide feedback on the published final report for the Head Start Impact Study... recommendations on follow-up research, including additional analysis of the Head Start Impact Study data. The... research agenda, including--but not limited to--how the Head Start Impact Study fits within this agenda...

  8. Soft Research on a Hard Subject: Student Evaluations Reconsidered

    ERIC Educational Resources Information Center

    Soper, John C.

    1973-01-01

    Methods of evaluation of faculty classroom performance are considered. The author cites research studies which attempt to assess the validity of student evaluations of teachers. Data are presented suggesting that the students' perceptions of their teachers' abilities are not connected with what those students learn. (SM)

  9. Obstacles to Using Prior Research and Evaluations.

    ERIC Educational Resources Information Center

    Orwin, Robert G.

    1985-01-01

    The manner in which results and methods are reported influences the ability to synthesize prior studies when planning new evaluations. Confidence ratings, coding conventions, and supplemental evidence can partially overcome the difficulties. Planners must acknowledge the influence of their own judgement in using prior research. (Author)

  10. An Evaluation Methodology for Longitudinal Studies of Short Term Cancer Research Training Programs

    PubMed Central

    Padilla, Luz A.; Venkatesh, Raam; Daniel, Casey L.; Desmond, Renee A.; Brooks, C. Michael; Waterbor, John W.

    2014-01-01

    The need to familiarize medical students and graduate health professional students with research training opportunities that cultivate the appeal of research careers is vital to the future of research. Comprehensive evaluation of a cancer research training program can be achieved through longitudinal tracking of program alumni to assess the program’s impact on each participant’s career path and professional achievements. With advances in technology and smarter means of communication, effective ways to track alumni have changed. In order to collect data on the career outcomes and achievements of nearly 500 short-term cancer research training program alumni from 1999–2013, we sought to contact each alumnus to request completion of a survey instrument online, or by means of a telephone interview. The effectiveness of each contact method that we used was quantified according to ease of use and time required. The most reliable source of contact information for tracking alumni from the early years of the program was previous tracking results; and for alumni from the later years, the most important source of contact information was university alumni records that provided email addresses and telephone numbers. Personal contacts with former preceptors were sometimes helpful, as were generic search engines and people search engines. Social networking was of little value for most searches. Using information from two or more sources in combination was most effective in tracking alumni. These results provide insights and tools for other research training programs that wish to track their alumni for long-term program evaluation. PMID:25412722

  11. Evaluation and Ranking of Researchers – Bh Index

    PubMed Central

    Bharathi, D. Gnana

    2013-01-01

    Evaluation and ranking of authors is crucial because it is widely used to assess the performance of researchers. This article proposes a new method, called the Bh-Index, to evaluate researchers based on their publications and citations. The method is built on the h-Index, and only the h-core articles are taken into consideration. The method assigns value additions to those articles that receive significantly more citations than the researcher's h-Index. It provides a wide range of values for a given h-Index and effective evaluation even over a short period. Use of the Bh-Index along with the h-Index gives a powerful tool for evaluating researchers. PMID:24349183
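
    The record builds on the h-Index but does not reproduce the exact Bh-Index formula, so the Python sketch below only illustrates the general idea: compute the standard h-index, then add a value bonus for h-core articles whose citation counts are well above the h-index. The bonus rule (half a point per h-core paper cited at least twice the h-index) is an assumption for illustration, not the published definition.

      def h_index(citations):
          """Standard h-index: the largest h such that h papers have at least h citations each."""
          h = 0
          for rank, c in enumerate(sorted(citations, reverse=True), start=1):
              if c >= rank:
                  h = rank
              else:
                  break
          return h

      def bh_like_index(citations, bonus=0.5):
          """Illustrative only: h-index plus a bonus for h-core papers cited at least 2*h times."""
          h = h_index(citations)
          h_core = sorted(citations, reverse=True)[:h]
          return h + sum(bonus for c in h_core if c >= 2 * h)

      # Example: six papers with 25, 18, 9, 6, 3 and 1 citations give h = 4; three h-core papers
      # (25, 18 and 9 citations) reach the 2*h = 8 threshold, so the illustrative score is 5.5.
      print(h_index([25, 18, 9, 6, 3, 1]), bh_like_index([25, 18, 9, 6, 3, 1]))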

  12. Toward an agenda for evaluation of qualitative research.

    PubMed

    Stige, Brynjulf; Malterud, Kirsti; Midtgarden, Torjus

    2009-10-01

    Evaluation is essential for research quality and development, but the diversity of traditions that characterize qualitative research suggests that general checklists or shared criteria for evaluation are problematic. We propose an approach to research evaluation that encourages reflexive dialogue through use of an evaluation agenda. In proposing an evaluation agenda we shift attention from rule-based judgment to reflexive dialogue. Unlike criteria, an agenda may embrace pluralism, and does not request consensus on ontological, epistemological, and methodological issues, only consensus on what themes warrant discussion. We suggest an evaluation agenda, EPICURE, with two dimensions communicated through the use of two acronyms. The first, EPIC, refers to the challenge of producing rich and substantive accounts based on engagement, processing, interpretation, and (self-)critique. The second, CURE, refers to the challenge of dealing with preconditions and consequences of research, with a focus on (social) critique, usefulness, relevance, and ethics. The seven items of the composite agenda EPICURE are presented and exemplified. Features and implications of the agenda approach to research evaluation are then discussed.

  13. An Evaluation of Policy-Related Rehabilitation Research.

    ERIC Educational Resources Information Center

    Berkowitz, Monroe; And Others

    The monograph evaluates the methodological quality and the policy utility of research in the area of rehabilitation of the handicapped. Analysis was based on a multidisciplinary review of 477 sample reports from more than 4,000 screened project reports. Chapter 1 delineates the nature of the study; Chapter 2 reviews the Federal/State…

  14. Educating Parents About Pediatric Research: Children and Clinical Studies Website Qualitative Evaluation.

    PubMed

    Marceau, Lisa D; Welch, Lisa C; Pemberton, Victoria L; Pearson, Gail D

    2016-07-01

    A gap in information about pediatric clinical trials exists, and parents remain uncertain about what is involved in research studies involving children. We aimed to understand parent perspectives about pediatric clinical research after viewing the online Children and Clinical Studies (CaCS) program. Using a qualitative descriptive study design, we conducted focus groups with parents and phone interviews with physicians. Three themes emerged, providing approaches to improve parents' understanding of clinical research by including strategies where parents (a) hear from parents like themselves to learn about pediatric research, (b) receive general clinical research information to complement study-specific details, and (c) are provided more information about the role of healthy child volunteers. Parents found the website a valuable tool that would help them make a decision about what it means to participate in research. This tool can assist parents, providers, and researchers by connecting general information with study-specific information. © The Author(s) 2015.

  15. Case Study Observational Research: A Framework for Conducting Case Study Research Where Observation Data Are the Focus.

    PubMed

    Morgan, Sonya J; Pullon, Susan R H; Macdonald, Lindsay M; McKinlay, Eileen M; Gray, Ben V

    2017-06-01

    Case study research is a comprehensive method that incorporates multiple sources of data to provide detailed accounts of complex research phenomena in real-life contexts. However, current models of case study research do not particularly distinguish the unique contribution observation data can make. Observation methods have the potential to reach beyond other methods that rely largely or solely on self-report. This article describes the distinctive characteristics of case study observational research, a modified form of Yin's 2014 model of case study research that the authors used in a study exploring interprofessional collaboration in primary care. In this approach, observation data are positioned as the central component of the research design. Case study observational research offers a promising approach for researchers in a wide range of health care settings seeking more complete understandings of complex topics, where contextual influences are of primary concern. Future research is needed to refine and evaluate the approach.

  16. Alternative approaches to forestry research evaluation: an assessment.

    Treesearch

    Pamela J. Jakes; Earl C. Leatherberry

    1986-01-01

    Reviews research evaluation techniques in a variety of fields and assesses the usefulness of various approaches or combinations of approaches for forestry research evaluation. Presents an evaluation framework that will help users develop an approach suitable for their specific problem.

  17. Evaluating the "Evaluative State": Implications for Research in Higher Education.

    ERIC Educational Resources Information Center

    Dill, David D.

    1998-01-01

    Examines the "evaluative state" that is, public management-based evaluation systems--in the context of experiences in the United Kingdom and New Zealand, and suggests that further research is needed to examine problems in the evaluative state itself, in how market competition impacts upon it, and how academic oligarchies influence the…

  18. Commentary: Writing and Evaluating Qualitative Research Reports.

    PubMed

    Wu, Yelena P; Thompson, Deborah; Aroian, Karen J; McQuaid, Elizabeth L; Deatrick, Janet A

    2016-06-01

    To provide an overview of qualitative methods, particularly for reviewers and authors who may be less familiar with qualitative research. A question and answer format is used to address considerations for writing and evaluating qualitative research. When producing qualitative research, individuals are encouraged to address the qualitative research considerations raised and to explicitly identify the systematic strategies used to ensure rigor in study design and methods, analysis, and presentation of findings. Increasing capacity for review and publication of qualitative research within pediatric psychology will advance the field's ability to gain a better understanding of the specific needs of pediatric populations, tailor interventions more effectively, and promote optimal health. © The Author 2016. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. A model for evaluating academic research centers: Case study of the Asian/Pacific Islander Youth Violence Prevention Center.

    PubMed

    Nishimura, Stephanie T; Hishinuma, Earl S; Goebert, Deborah A; Onoye, Jane M M; Sugimoto-Matsuda, Jeanelle J

    2018-02-01

    To provide one model for evaluating academic research centers, given their vital role in addressing public health issues. A theoretical framework is described for a comprehensive evaluation plan for research centers. This framework is applied to one specific center by describing the center's Logic Model and Evaluation Plan, including a sample of the center's activities. Formative and summative evaluation information is summarized. In addition, a summary of outcomes is provided: improved practice and policy; reduction of risk factors and increase in protective factors; reduction of interpersonal youth violence in the community; and national prototype for prevention of interpersonal youth violence. Research centers are important mechanisms to advance science and improve people's quality of life. Because of their more infrastructure-intensive and comprehensive approach, they also require substantial resources for success, and thus, also require careful accountability. It is therefore important to comprehensively evaluate these centers. As provided herein, a more systematic and structured approach utilizing logic models, an evaluation plan, and successful processes can provide research centers with a functionally useful method in their evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. A Preliminary Rubric Design to Evaluate Mixed Methods Research

    ERIC Educational Resources Information Center

    Burrows, Timothy J.

    2013-01-01

    With the increase in frequency of the use of mixed methods, both in research publications and in externally funded grants there are increasing calls for a set of standards to assess the quality of mixed methods research. The purpose of this mixed methods study was to conduct a multi-phase analysis to create a preliminary rubric to evaluate mixed…

  1. Harvard Project Physics Research and Evaluation Bibliography.

    ERIC Educational Resources Information Center

    Welch, Wayne W.

    The articles listed were written as part of the research and evaluation program of Project Physics. Many articles are concerned with problems of educational research while others are devoted directly to the evaluation of Project Physics. The articles are listed in four categories according to the availability of reprints: (1) published articles…

  2. Evaluating Federal Support for Poverty Research.

    ERIC Educational Resources Information Center

    National Academy of Sciences - National Research Council, Washington, DC.

    Federal support for research on poverty is discussed, and principal funding agencies are identified. The value of research on poverty for policy making is discussed and evaluated, with particular attention to the work of the Institute for Research on Poverty. The report recommends that the system for funding the Institute be improved, that…

  3. Evaluating an interdisciplinary undergraduate training program in health promotion research.

    PubMed

    Misra, Shalini; Harvey, Richard H; Stokols, Daniel; Pine, Kathleen H; Fuqua, Juliana; Shokair, Said M; Whiteley, John M

    2009-04-01

    The University of California at Irvine Interdisciplinary Summer Undergraduate Research Experience (ID-SURE) program had three objectives: (1) designing an interdisciplinary health promotion training curriculum for undergraduate research fellows; (2) developing measures for evaluating and assessing program-related educational processes and products; and (3) comparing these educational process and product measures between groups of students who did or did not receive the training. A total of 101 students participated in the ID-SURE program during 2005, 2006, and 2007. A longitudinal research design was employed whereby students' interdisciplinary attitudes and behaviors were assessed at the beginning and end of the training program. The interdisciplinary and intellectual qualities of students' academic and research products were assessed at the conclusion of the training activities. In addition, ID-SURE participants' interdisciplinary attitudes, behaviors, and research products were compared to those of 70 participants in another fellowship program that did not have an interdisciplinary training component. Exposing undergraduate research fellows to the interdisciplinary curriculum led to increased participation in, and positive attitudes about, interdisciplinary classroom and laboratory activities. Products, such as the integrative and interdisciplinary quality of student research projects, showed no differences when compared to those of undergraduates who were not exposed to the interdisciplinary curriculum. However, undergraduates exposed to the training engaged in more interdisciplinary behaviors at the end of the program than students who were not trained in interdisciplinary research techniques. The findings from this study offer evidence for the efficacy of the ID-SURE program for training undergraduate students in transdisciplinary concepts, methods, and skills that are needed for effective scientific collaboration. Additionally, this study makes two important

  4. Research Issues in Evaluating Learning Pattern Development in Higher Education

    ERIC Educational Resources Information Center

    Richardson, John T. E.

    2013-01-01

    This article concludes the special issue of "Studies in Educational Evaluation" concerned with "Evaluating learning pattern development in higher education" by discussing research issues that have emerged from the previous contributions. The article considers in turn: stability versus variability in learning patterns; old versus new analytic…

  5. Promoting and evaluating scientific rigour in qualitative research.

    PubMed

    Baillie, Lesley

    2015-07-15

    This article explores perspectives on qualitative research and the variety of views concerning rigour in the research process. Evaluating and ensuring the quality of research are essential considerations for practitioners who are appraising evidence to inform their practice or research. Several criteria and principles for evaluating quality in qualitative research are presented, recognising that their application in practice is influenced by the qualitative methodology used. The article examines a range of techniques that a qualitative researcher can use to promote rigour and apply it to practice.

  6. 9 CFR 104.4 - Products for research and evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... BIOLOGICAL PRODUCTS § 104.4 Products for research and evaluation. (a) An application for a U.S. Veterinary Biological Product Permit to import a biological product for research and evaluation shall be accompanied by... a biological product for research and evaluation shall not be issued unless the scientific...

  7. 9 CFR 104.4 - Products for research and evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... BIOLOGICAL PRODUCTS § 104.4 Products for research and evaluation. (a) An application for a U.S. Veterinary Biological Product Permit to import a biological product for research and evaluation shall be accompanied by... a biological product for research and evaluation shall not be issued unless the scientific...

  8. Youth participatory research and evaluation to inform a Chagas disease prevention program in Ecuador.

    PubMed

    Marco-Crespo, Belén; Casapulla, Sharon; Nieto-Sanchez, Claudia; Urrego, J Guillermo Gómez; Grijalva, Mario J

    2018-04-30

    This qualitative study engaged a group of young people in participatory research and evaluation activities in order to examine the extent to which engaging youth in health interventions can inform research and evaluation processes. We applied a youth participatory research and evaluation approach (PRE) to inform research and evaluation on the impact of a Chagas disease control program in southern Ecuador. Our main interest was to examine the methodological contributions of PRE to knowledge sharing for health intervention planning in the context of global health and neglected tropical diseases. The results of this study suggest that by demystifying research and evaluation practices and rendering them accessible and relevant, marginalized youth can develop critical and reflexive thinking skills that could be useful for decision-making on health promotion. Our findings also reveal the potential of youth as active participants in project development in ways that enhance, validate, and improve health interventions. Young people are interested in learning about and sharing local knowledge that can benefit research and evaluation processes. Despite the numerous strengths demonstrated by PRE, the inherent complexities of international development, such as cultural differences, asymmetrical power relations, and the ongoing challenges of sustainability, remain. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. 7 CFR 3406.20 - Evaluation criteria for research proposals.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... current faculty in the natural or social sciences; provide a better research environment, state-of-the-art... 7 Agriculture 15 2011-01-01 2011-01-01 false Evaluation criteria for research proposals. 3406.20... Research Proposal § 3406.20 Evaluation criteria for research proposals. The maximum score a research...

  10. The Burden of Research on Trauma for Respondents: A Prospective and Comparative Study on Respondents Evaluations and Predictors

    PubMed Central

    van der Velden, Peter G.; Bosmans, Mark W. G.; Scherpenzeel, Annette C.

    2013-01-01

    The possible burden of participating in trauma research is an important topic for Ethical Committees (ECs), Review Boards (RBs) and researchers. However, to what extent research on trauma is more burdensome than non-trauma research is unknown. Little is known about which factors explain respondents' evaluations of the burden: to what extent are they trauma-related or dependent on other factors such as personality and how respondents evaluate research in general? Data from a large probability-based multi-wave internet panel, with surveys on politics and values, personality and health in 2009 and 2011, and a survey on trauma in 2012, provided a unique opportunity to address these questions. Results among respondents confronted with these events in the past 2 years (N = 950) showed that questions on trauma were significantly and systematically evaluated as less pleasant (enjoyed less) and more difficult than almost all previous non-trauma surveys, but also as stimulating respondents to think about things more. Yet, the computed effect sizes indicated that the differences were (very) small and often meaningless. No differences were found between users and non-users of mental health services, in contrast to posttraumatic stress symptoms. Evaluations of the burden of the previous 2011 surveys on politics and values, personality and health most strongly, systematically and independently predicted the burden of questions on trauma, rather than posttraumatic stress symptoms, event-related coping self-efficacy or personality factors. For instance, multiple linear regression analyses showed that 30% of the variance in how (un)pleasant questions on trauma and life-events were evaluated was explained by how (un)pleasant the 3 surveys in 2011 were evaluated, in contrast to posttraumatic stress symptoms (not significant) and coping self-efficacy (5%). Findings question why ECs, RBs and researchers should be more critical of the possible burden of trauma research than of the possible burden
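
    The record reports multiple linear regression analyses in which evaluations of earlier, non-trauma surveys explained about 30% of the variance in how (un)pleasant the trauma questions were rated. A minimal sketch with simulated ratings (not the panel's data) showing how such a model and its explained variance are typically obtained with statsmodels.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 950  # matches the reported sample size, but the values below are simulated
      prior_surveys = rng.normal(3.5, 0.8, n)   # mean (un)pleasantness rating of the 2011 surveys
      ptss = rng.normal(0.0, 1.0, n)            # standardized posttraumatic stress symptom score
      trauma_rating = 0.6 * prior_surveys + 0.05 * ptss + rng.normal(0.0, 0.8, n)

      X = sm.add_constant(pd.DataFrame({"prior_surveys": prior_surveys, "ptss": ptss}))
      model = sm.OLS(trauma_rating, X).fit()
      print(model.rsquared)   # share of variance in trauma-question ratings explained by the predictors
      print(model.params)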

  11. Transdisciplinary Research and Evaluation for Community Health Initiatives

    PubMed Central

    Harper, Gary W.; Neubauer, Leah C.; Bangi, Audrey K.; Francisco, Vincent T.

    2010-01-01

    Transdisciplinary research and evaluation projects provide valuable opportunities to collaborate on interventions to improve the health and well-being of individuals and communities. Given team members’ diverse backgrounds and roles or responsibilities in such projects, members’ perspectives are significant in strengthening a project’s infrastructure and improving its organizational functioning. This article presents an evaluation mechanism that allows team members to express the successes and challenges incurred throughout their involvement in a multisite transdisciplinary research project. Furthermore, their feedback is used to promote future sustainability and growth. Guided by a framework known as organizational development, the evaluative process was conducted by a neutral entity, the Quality Assurance Team. A mixed-methods approach was utilized to garner feedback and clarify how the research project goals could be achieved more effectively and efficiently. The multiple benefits gained by those involved in this evaluation and implications for utilizing transdisciplinary research and evaluation teams for health initiatives are detailed. PMID:18936267

  12. Evaluation of a library outreach program to research labs.

    PubMed

    Brandenburg, Marci D; Doss, Alan; Frederick, Tracie E

    2010-07-01

    The goal of this study was to conduct an outcomes-based evaluation of the National Cancer Institute-Frederick (NCI-F) Scientific Library's Laptop Librarian service, where librarians took a laptop and spent time in research buildings. The authors used statistics from the Laptop Librarian sessions, a NCI-F community-wide online survey, and in-person interviews to evaluate the service. The Laptop Librarian service increased the accessibility of librarians and saved patrons' time. Users gained useful information and expressed overall satisfaction with the service. The Laptop Librarian service proves to be a useful means for increasing access to librarians and providing users with necessary information at this government research facility.

  13. The Neighborhood Voice: evaluating a mobile research vehicle for recruiting African Americans to participate in cancer control studies.

    PubMed

    Alcaraz, Kassandra I; Weaver, Nancy L; Andresen, Elena M; Christopher, Kara; Kreuter, Matthew W

    2011-09-01

    The Neighborhood Voice is a vehicle customized for conducting health research in community settings. It brings research studies into neighborhoods affected most by health disparities and reaches groups often underrepresented in research samples. This paper reports on the experience and satisfaction of 599 African American women who participated in research on board the Neighborhood Voice. Using bivariate, psychometric, and logistic regression analyses, we examined responses to a brief post-research survey. Most women (71%) reported that they had never previously participated in research, and two-thirds (68%) rated their Neighborhood Voice experience as excellent. Satisfaction scores were highest among first-time research participants (p < .05). Women's ratings of the Neighborhood Voice on Comfort (OR = 4.9; 95% CI = 3.0, 7.9) and Convenience (OR = 1.8; 95% CI = 1.2, 2.9) significantly predicted having an excellent experience. Mobile research facilities may increase participation among disadvantaged and minority populations. Our brief survey instrument is a model for evaluating such outreach.
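
    The record reports odds ratios with 95% confidence intervals from logistic regression predicting an "excellent" experience from Comfort and Convenience ratings. A hedged sketch using simulated survey responses (the study's data are not available here) showing how such odds ratios are commonly derived by exponentiating logistic regression coefficients.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 599  # matches the reported sample size; the responses below are simulated
      df = pd.DataFrame({
          "comfort": rng.integers(0, 2, n),      # 1 = rated comfort highly
          "convenience": rng.integers(0, 2, n),  # 1 = rated convenience highly
      })
      linpred = -0.5 + 1.6 * df["comfort"] + 0.6 * df["convenience"]
      df["excellent"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

      X = sm.add_constant(df[["comfort", "convenience"]])
      fit = sm.Logit(df["excellent"], X).fit(disp=False)
      print(np.exp(fit.params))      # odds ratios
      print(np.exp(fit.conf_int()))  # 95% confidence intervals for the odds ratios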

  14. Innovations for Evaluation Research: Multiform Protocols, Visual Analog Scaling, and the Retrospective Pretest-Posttest Design.

    PubMed

    Chang, Rong; Little, Todd D

    2018-06-01

    In this article, we review three innovative methods: multiform protocols, visual analog scaling, and the retrospective pretest-posttest design that can be used in evaluation research. These three techniques have been proposed for decades, but unfortunately, they are still not utilized readily in evaluation research. Our goal is to familiarize researchers with these underutilized research techniques that could reduce personnel effort and costs for data collection while producing better inferences for a study. We begin by discussing their applications and unique features. We then discuss each technique's strengths and limitations and offer practical tips on how to better implement these methods in evaluation research. We then showcase two recent empirical studies that implement these methods in real-world evaluation research applications.

  15. The Stuttering Treatment Research Evaluation and Assessment Tool (STREAT): Evaluating Treatment Research as Part of Evidence-Based Practice

    ERIC Educational Resources Information Center

    Davidow, Jason H.; Bothe, Anne K.; Bramlett, Robin E.

    2006-01-01

    Purpose: This article presents, and explains the issues behind, the Stuttering Treatment Research Evaluation and Assessment Tool (STREAT), an instrument created to assist clinicians, researchers, students, and other readers in the process of critically appraising reports of stuttering treatment research. Method: The STREAT was developed by…

  16. How to report a research study.

    PubMed

    Cronin, Paul; Rawson, James V; Heilbrun, Marta E; Lee, Janie M; Kelly, Aine M; Sanelli, Pina C; Bresnahan, Brian W; Paladin, Angelisa M

    2014-09-01

    Incomplete reporting hampers the evaluation of results and bias in clinical research studies. Guidelines for reporting study design and methods have been developed to encourage authors and journals to include the required elements. Recent efforts have been made to standardize the reporting of clinical health research including clinical guidelines. In this article, the reporting of diagnostic test accuracy studies, screening studies, therapeutic studies, systematic reviews and meta-analyses, cost-effectiveness assessments (CEA), recommendations and/or guidelines, and medical education studies is discussed. The available guidelines on how to report these different types of health research, many of which can be found through the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network, are also discussed. We also hope that this article can be used in academic programs to inform faculty and trainees about the available resources to improve our health research. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  17. Interviewer as Instrument: Accounting for Human Factors in Evaluation Research

    ERIC Educational Resources Information Center

    Brown, Joel H.

    2006-01-01

    This methodological study examines an original data collection model designed to incorporate human factors and enhance data richness in qualitative and evaluation research. Evidence supporting this model is drawn from in-depth youth and adult interviews in one of the largest policy/program evaluations undertaken in the United States, the Drug,…

  18. Commentary: Writing and evaluating qualitative research reports

    USDA-ARS?s Scientific Manuscript database

    An overview of qualitative methods is provided, particularly for reviewers and authors who may be less familiar with qualitative research. A question and answer format is used to address considerations for writing and evaluating qualitative research. When producing qualitative research, individuals ...

  19. Acoustic evaluation of standing trees : recent research development

    Treesearch

    Xiping Wang; Robert J. Ross; Peter Carter

    2005-01-01

    This paper presents some research results from recent trial studies on measuring acoustic velocities on standing trees of five softwood species. The relationships between tree velocities measured by time of flight method and log velocities measured by resonance method were evaluated. Theoretical and empirical models were developed for adjusting observed tree velocity...

  20. A Course Model for Teaching Research Evaluation in Colleges of Pharmacy.

    ERIC Educational Resources Information Center

    Draugalis, JoLaine R.; Slack, Marion K.

    1992-01-01

    A University of Arizona undergraduate pharmacy course designed to develop student skills in evaluation of research has five parts: introduction to the scientific method; statistical techniques/data analysis review; research design; fundamentals of clinical studies; and practical applications. Prerequisites include biostatistics and drug…

  1. Conference presentation to publication: a retrospective study evaluating quality of abstracts and journal articles in medical education research.

    PubMed

    Stephenson, Christopher R; Vaa, Brianna E; Wang, Amy T; Schroeder, Darrell R; Beckman, Thomas J; Reed, Darcy A; Sawatsky, Adam P

    2017-11-09

    There is little evidence regarding the comparative quality of abstracts and articles in medical education research. The Medical Education Research Study Quality Instrument (MERSQI), which was developed to evaluate the quality of reporting in medical education, has strong validity evidence for content, internal structure, and relationships to other variables. We used the MERSQI to compare the quality of reporting for conference abstracts, journal abstracts, and published articles. This is a retrospective study of all 46 medical education research abstracts submitted to the Society of General Internal Medicine 2009 Annual Meeting that were subsequently published in a peer-reviewed journal. We compared MERSQI scores of the abstracts with scores for their corresponding published journal abstracts and articles. Comparisons were performed using the signed rank test. Overall MERSQI scores increased significantly for published articles compared with conference abstracts (11.33 vs 9.67; P < .001) and journal abstracts (11.33 vs 9.96; P < .001). Regarding MERSQI subscales, published articles had higher MERSQI scores than conference abstracts in the domains of sampling (1.59 vs 1.34; P = .006), data analysis (3.00 vs 2.43; P < .001), and validity of evaluation instrument (1.04 vs 0.28; P < .001). Published articles also had higher MERSQI scores than journal abstracts in the domains of data analysis (3.00 vs 2.70; P = .004) and validity of evaluation instrument (1.04 vs 0.26; P < .001). To our knowledge, this is the first study to compare the quality of medical education abstracts and journal articles using the MERSQI. Overall, the quality of articles was greater than that of abstracts. However, there were no significant differences between abstracts and articles for the domains of study design and outcomes, which indicates that these MERSQI elements may be applicable to abstracts. Findings also suggest that abstract quality is generally preserved
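
    The record compares paired MERSQI scores (conference abstract, journal abstract, published article) with the signed rank test. A minimal sketch with invented paired scores showing how such a comparison might be run with the Wilcoxon signed-rank test in scipy; the values are illustrative, not the study's data.

      from scipy.stats import wilcoxon

      # Hypothetical paired MERSQI scores for the same ten studies (not the actual data)
      conference_abstracts = [9.5, 10.0, 8.5, 11.0, 9.0, 10.5, 9.5, 8.0, 10.0, 9.0]
      published_articles   = [11.0, 11.5, 10.0, 12.0, 10.5, 11.0, 11.5, 9.5, 12.0, 10.0]

      stat, p_value = wilcoxon(conference_abstracts, published_articles)
      print(f"Wilcoxon signed-rank statistic = {stat}, p = {p_value:.4f}")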

  2. Research on Seeing and Evaluating People.

    ERIC Educational Resources Information Center

    Geis, Florence L.; And Others

    This paper presents a review of research on the processes involved in evaluating others, especially in the area of discrimination against women. The first section defines perceptual bias and presents research data which show that perception is not a faithful representation of reality, but a product of previous beliefs and values. Section II…

  3. Does Research on Evaluation Matter? Findings from a Survey of American Evaluation Association Members and Prominent Evaluation Theorists and Scholars

    ERIC Educational Resources Information Center

    Coryn, Chris L. S.; Ozeki, Satoshi; Wilson, Lyssa N.; Greenman, Gregory D., II; Schröter, Daniela C.; Hobson, Kristin A.; Azzam, Tarek; Vo, Anne T.

    2016-01-01

    Research on evaluation theories, methods, and practices has increased considerably in the past decade. Even so, little is known about whether published findings from research on evaluation are read by evaluators and whether such findings influence evaluators' thinking about evaluation or their evaluation practice. To address these questions, and…

  4. Discovering the Future of the Case Study Method in Evaluation Research.

    ERIC Educational Resources Information Center

    Yin, Robert K.

    1994-01-01

    It is assumed that evaluators of the future will still be interested in case study methodology. Scenarios that ignore a case study method, that look back to a distinctive case study method, and that see the case study method as an integrating force in the qualitative-quantitative debate are explored. (SLD)

  5. DOE/DOT Crude Oil Characterization Research Study, Task 2 Test Report on Evaluating Crude Oil Sampling and Analysis Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lord, David; Allen, Ray; Rudeen, David

    The Crude Oil Characterization Research Study is designed to evaluate whether crude oils currently transported in North America, including those produced from "tight" formations, exhibit physical or chemical properties that are distinct from conventional crudes, and how these properties are associated with combustion hazards that may be realized during transportation and handling.

  6. Evaluation in a Research and Development Context.

    ERIC Educational Resources Information Center

    Cooley, William W.

    Educational research and development (R&D) has often been characterized as a neat, linear sequence of discrete steps, moving from research through development to evaluation and dissemination. Although the inadequacies of such linear models of educational research and development have been pointed out previously, these models have been so much…

  7. Innovative technology transfer of nondestructive evaluation research

    Treesearch

    Brian Brashaw; Robert J. Ross; Xiping Wang

    2008-01-01

    Technology transfer is often an afterthought for many nondestructive evaluation (NDE) researchers. Effective technology transfer should be considered during the planning and execution of research projects. This paper outlines strategies for using technology transfer in NDE research and presents a wide variety of technology transfer methods used by a cooperative...

  8. Solving a methodological challenge in work stress evaluation with the Stress Assessment and Research Toolkit (StART): a study protocol.

    PubMed

    Guglielmi, Dina; Simbula, Silvia; Vignoli, Michela; Bruni, Ilaria; Depolo, Marco; Bonfiglioli, Roberta; Tabanelli, Maria Carla; Violante, Francesco Saverio

    2013-06-22

    Stress evaluation is a field of strong interest, and one that is challenging because of several methodological aspects of the evaluation process. The aim of this study is to propose a study protocol to test a new method (i.e., the Stress Assessment and Research Toolkit) to assess psychosocial risk factors at work. This method addresses several methodological issues (e.g., subjective vs. objective, qualitative vs. quantitative data) by assessing work-related stressors using different kinds of data within a mixed methods design: i) organisational archival data (organisational indicators sheet); ii) qualitative data (focus group); iii) worker perception (questionnaire); and iv) observational data (observational checklist). In addition, it allows positive and negative aspects of work to be considered conjointly, using an approach that takes job demands and job resources into account at the same time. The integration of these sources of data can reduce the theoretical and methodological bias associated with stress research in the work setting, allows researchers and professionals to obtain a reliable description of workers' stress and a more articulate vision of psychosocial risks, and allows a large amount of data to be collected. Finally, implementation of the method ensures, in the long term, primary prevention in psychosocial risk management, in that it aims to reduce or modify the intensity, frequency or duration of organisational demands.

  9. A New Tool for Identifying Research Standards and Evaluating Research Performance

    ERIC Educational Resources Information Center

    Bacon, Donald R.; Paul, Pallab; Stewart, Kim A.; Mukhopadhyay, Kausiki

    2012-01-01

    Much has been written about the evaluation of faculty research productivity in promotion and tenure decisions, including many articles that seek to determine the rank of various marketing journals. Yet how faculty evaluators combine journal quality, quantity, and author contribution to form judgments of a scholar's performance is unclear. A…

  10. Invited commentary: Evaluating epidemiologic research methods--the importance of response rate calculation.

    PubMed

    Harris, M Anne

    2010-09-15

    Epidemiologic research that uses administrative records (rather than registries or clinical surveys) to identify cases for study has been increasingly restricted because of concerns about privacy, making unbiased population-based research less practicable. In their article, Nattinger et al. (Am J Epidemiol. 2010;172(6):637-644) present a method for using administrative data to contact participants that has been well received. However, the methods employed for calculating and reporting response rates require further consideration, particularly the classification of untraceable cases as ineligible. Depending on whether response rates are used to evaluate the potential for bias to influence study results or to evaluate the acceptability of the method of contact, different fractions may be considered. To improve the future study of epidemiologic research methods, a consensus on the calculation and reporting of study response rates should be sought.
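
    A small worked example, with invented counts, of the commentary's central point: the reported response rate depends on whether untraceable cases are excluded from the denominator as ineligible or retained as eligible non-responders.

        # Hypothetical counts illustrating the commentary's point: the response
        # rate changes depending on whether untraceable cases are counted as
        # eligible non-responders or excluded as ineligible.
        responders = 600
        refusals = 200
        untraceable = 200

        # Fraction 1: untraceable cases excluded from the denominator ("ineligible")
        rate_excluding = responders / (responders + refusals)
        # Fraction 2: untraceable cases kept in the denominator as non-responders
        rate_including = responders / (responders + refusals + untraceable)

        print(f"Response rate excluding untraceable: {rate_excluding:.0%}")  # 75%
        print(f"Response rate including untraceable: {rate_including:.0%}")  # 60%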

  11. Evaluating the Non-Academic Impact of Academic Research: Design Considerations

    ERIC Educational Resources Information Center

    Gunn, Andrew; Mintrom, Michael

    2017-01-01

    Evaluation of academic research plays a significant role in government efforts to steer public universities. The scope of such evaluation is now being extended to include the "relevance" or "impact" of academic research outside the academy. We address how evaluation of non-academic research impact can promote more such impact…

  12. Head Start Evaluation and Research Center, University of Kansas. Final Report on Research Activities.

    ERIC Educational Resources Information Center

    Etzel, Barbara C.; And Others

    This document is the final report to the Institute of Educational Development for Head Start Research Evaluation activities at the University of Kansas for 1966-67. It contains 16 separate reports of studies completed or in the process of completion. The subject matter of the reports contains 15 distinct topics and warrants individual abstracts.…

  13. Evaluating the risks of clinical research: direct comparative analysis.

    PubMed

    Rid, Annette; Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S; Wendler, David

    2014-09-01

    Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed "risks of daily life" standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. This study employed a conceptual and normative analysis, and use of an illustrative example. Different risks are composed of the same basic elements: Type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the "risks of daily life" standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Direct comparative analysis is a systematic method for applying the "risks of daily life" standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks.
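
    The method compares a research risk with a daily-life risk element by element (type, likelihood, and magnitude of harm). A minimal sketch of that comparison follows; the Risk class and all values are illustrative assumptions, not the authors' instrument.

        # Minimal sketch of "direct comparative analysis": a research risk is
        # judged minimal when, for the same type of harm, its likelihood and
        # magnitude do not exceed those of a comparable daily-life activity.
        # All values below are invented for illustration only.
        from dataclasses import dataclass

        @dataclass
        class Risk:
            harm_type: str      # e.g. "transient anxiety"
            likelihood: float   # probability of the harm occurring
            magnitude: int      # severity on an arbitrary 0-10 scale

        def is_minimal(research: Risk, daily_life: Risk) -> bool:
            return (research.harm_type == daily_life.harm_type
                    and research.likelihood <= daily_life.likelihood
                    and research.magnitude <= daily_life.magnitude)

        co2_challenge = Risk("transient anxiety", likelihood=0.30, magnitude=3)
        public_speaking = Risk("transient anxiety", likelihood=0.40, magnitude=3)

        print(is_minimal(co2_challenge, public_speaking))  # True under these assumptions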

  14. Integrating service development with evaluation in telehealthcare: an ethnographic study.

    PubMed

    Finch, Tracy; May, Carl; Mair, Frances; Mort, Maggie; Gask, Linda

    2003-11-22

    To identify issues that facilitate the successful integration of evaluation and development of telehealthcare services. Ethnographic study using various qualitative research techniques to obtain data from several sources, including in-depth semistructured interviews, project steering group meetings, and public telehealthcare meetings. Seven telehealthcare evaluation projects (four randomised controlled trials and three pragmatic service evaluations) in the United Kingdom, studied over two years. Projects spanned a range of specialties: dermatology, psychiatry, respiratory medicine, cardiology, and oncology. Clinicians, managers, technical experts, and researchers involved in the projects. Key problems in successfully integrating evaluation and service development in telehealthcare are, firstly, defining existing clinical practices (and anticipating changes) in ways that permit measurement; secondly, managing additional workload and conflicting responsibilities brought about by combining clinical and research responsibilities (including managing risk); and, thirdly, understanding various perspectives on effectiveness and the limitations of evaluation results beyond the context of the research study. Combined implementation and evaluation of telehealthcare systems is complex, and is often underestimated. The distinction between quantitative outcomes and the workability of the system is important for producing evaluative knowledge that is of practical value. More pragmatic approaches to evaluation, that permit both quantitative and qualitative methods, are required to improve the quality of such research and its relevance for service provision in the NHS.

  15. Testimony in Narrative Educational Research: A Qualitative Interview, Narrative Analysis and Epistemological Evaluation

    ERIC Educational Resources Information Center

    Christopher, Justin

    2017-01-01

    The purpose of this study is to assess issues that arise in the context of epistemological claims in narrative educational research by means of narrative analysis and epistemological evaluation. The research questions which guided the study were: 1) To what extent is epistemology considered by narrative educational researchers?; 2) What issues do…

  16. [Research progress on identification and quality evaluation of glues medicines].

    PubMed

    Li, Hui-Hu; Ren, Gang; Chen, Li-Min; Zhong, Guo-Yue

    2018-01-01

    Glue medicines are a special class of traditional Chinese medicine. Because market demand is large, raw materials are in short supply, and proper quality evaluation technology is lacking, product quality on the market is inconsistent; authentication and quality evaluation remain unsolved problems. This paper reviews research progress on methods and techniques for the identification and quality evaluation of glue medicines. Work in this area has concentrated on four aspects: the physical and chemical properties of the medicinal materials, trace elements, organic constituents, and biogenetic methods. Physicochemical methods include thermal analysis, gel electrophoresis, isoelectric focusing electrophoresis, infrared spectroscopy, gel exclusion chromatography, and circular dichroism. Trace elements have been studied by atomic absorption spectrometry, X-ray fluorescence spectrometry, plasma emission spectrometry, and visible spectrophotometry. Organic composition has been examined through amino acid composition, content determination, odour detection, lipid-soluble components, and organic acid detection. Biogenetic approaches include analysis of DNA, polypeptide, and amino acid sequence differences. Overall, because the major components of glue medicines (such as amino acids, proteins and peptides) are highly similar, objective indices for authenticity and quality are difficult to establish, and each identification and evaluation method has its own strengths and limitations. Further study should therefore focus on identifying evaluation indices and on the integrated application of multiple techniques in combination with the characteristics of the production process. Copyright© by the Chinese Pharmaceutical Association.

  17. Photovoltaic evaluation study

    NASA Astrophysics Data System (ADS)

    Johnson, G.; Heikkilae, M.; Melasuo, T.; Spanner, S.

    Realizing the value and potential of PV-power as well as the growing need for increased cooperation and sharing of knowledge in the field of photovoltaics, FINNIDA and UNICEF decided to undertake a study of selected PV-projects. There were two main objectives for the study: To gather, compile, evaluate and share information on the photovoltaic technology appropriate to developing countries, and to promote the interest and competence of Finnish research institutes, consultants and manufacturers in photovoltaic development. For this purpose a joint evaluation of significant, primarily UN-supported projects providing for the basic needs of rural communities was undertaken. The Gambia and Kenya offered a variety of such projects, and were chosen as target countries for the study. The projects were chosen to be both comparable and complementary. In the Gambia, the main subject was a partially integrated health and telecommunications project, but a long-operating drinking water pumping system was also studied. In Kenya, a health project in the Turkana area was examined, and also a large scale water pumping installation for fish farming. Field visits were made in order to verify and supplement the data gathered through document research and earlier investigations. Individual data gathering sheets for the project form the core of this study and are intended to give the necessary information in an organized and accessible format. The findings could practically be condensed into one sentence: PV-systems work very well, if properly designed and installed, but the resources and requirements of the recipients must be considered to a higher degree.

  18. Evaluation of Participatory Research in Developing Community Leadership Skills.

    ERIC Educational Resources Information Center

    Karim, Wazir-Jahan B.

    1982-01-01

    This paper attempts to evaluate and explain the dynamic processes of decision-making and leadership development through participatory research, using the Malaysian experience as a case study. The focus is on the structural and situational constraints in the Malaysian rural society, the formal political machinery and the implementation of…

  19. In-silico studies in Chinese herbal medicines' research: evaluation of in-silico methodologies and phytochemical data sources, and a review of research to date.

    PubMed

    Barlow, D J; Buriani, A; Ehrman, T; Bosisio, E; Eberini, I; Hylands, P J

    2012-04-10

    The available databases that catalogue information on traditional Chinese medicines are reviewed in terms of their content and utility for in-silico research on Chinese herbal medicines, as too are the various protein database resources, and the software available for use in such studies. The software available for bioinformatics and 'omics studies of Chinese herbal medicines is summarised, and a critical evaluation given of the various in-silico methods applied in screening Chinese herbal medicines, including classification trees, neural networks, support vector machines, docking and inverse docking algorithms. Recommendations are made regarding any future in-silico studies of Chinese herbal medicines. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
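
    As a rough illustration of one of the in-silico screening methods named in this review (a support vector machine classifier), the sketch below uses random placeholder descriptors and activity labels rather than real phytochemical data.

        # Rough sketch of one in-silico screening approach mentioned in the review:
        # a support vector machine classifying compounds as active/inactive from
        # molecular descriptors. Descriptors and labels here are random placeholders.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))          # 200 compounds x 10 descriptors
        y = rng.integers(0, 2, size=200)        # 1 = active, 0 = inactive

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = SVC(kernel="rbf").fit(X_train, y_train)
        print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")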

  20. Talk, trust and time: a longitudinal study evaluating knowledge translation and exchange processes for research on violence against women.

    PubMed

    Wathen, C Nadine; Sibbald, Shannon L; Jack, Susan M; Macmillan, Harriet L

    2011-09-06

    Violence against women (VAW) is a major public health problem. Translation of VAW research to policy and practice is an area that remains understudied, but provides the opportunity to examine knowledge translation and exchange (KTE) processes in a complex, multi-stakeholder context. In a series of studies including two randomized trials, the McMaster University VAW Research Program studied one key research gap: evidence about the effectiveness of screening women for exposure to intimate partner violence. This project developed and evaluated KTE strategies to share research findings with policymakers, health and community service providers, and women's advocates. A longitudinal cross-sectional design, applying concurrent mixed data collection methods (surveys, interviews, and focus groups), was used to evaluate the utility of specific KTE strategies, including a series of workshops and a day-long Family Violence Knowledge Exchange Forum, on research sharing, uptake, and use. Participants valued the opportunity to meet with researchers, provide feedback on key messages, and make personal connections with other stakeholders. A number of factors specific to the knowledge itself, stakeholders' contexts, and the nature of the knowledge gap being addressed influenced the uptake, sharing, and use of the research. The types of knowledge use changed across time, and were specifically related to both the types of decisions being made, and to stage of decision making; most reported use was conceptual or symbolic, with few examples of instrumental use. Participants did report actively sharing the research findings with their own networks. Further examination of these second-order knowledge-sharing processes is required, including development of appropriate methods and measures for its assessment. Some participants reported that they would not use the research evidence in their decision making when it contradicted professional experiences, while others used it to support

  1. Talk, trust and time: a longitudinal study evaluating knowledge translation and exchange processes for research on violence against women

    PubMed Central

    2011-01-01

    Background Violence against women (VAW) is a major public health problem. Translation of VAW research to policy and practice is an area that remains understudied, but provides the opportunity to examine knowledge translation and exchange (KTE) processes in a complex, multi-stakeholder context. In a series of studies including two randomized trials, the McMaster University VAW Research Program studied one key research gap: evidence about the effectiveness of screening women for exposure to intimate partner violence. This project developed and evaluated KTE strategies to share research findings with policymakers, health and community service providers, and women's advocates. Methods A longitudinal cross-sectional design, applying concurrent mixed data collection methods (surveys, interviews, and focus groups), was used to evaluate the utility of specific KTE strategies, including a series of workshops and a day-long Family Violence Knowledge Exchange Forum, on research sharing, uptake, and use. Results Participants valued the opportunity to meet with researchers, provide feedback on key messages, and make personal connections with other stakeholders. A number of factors specific to the knowledge itself, stakeholders' contexts, and the nature of the knowledge gap being addressed influenced the uptake, sharing, and use of the research. The types of knowledge use changed across time, and were specifically related to both the types of decisions being made, and to stage of decision making; most reported use was conceptual or symbolic, with few examples of instrumental use. Participants did report actively sharing the research findings with their own networks. Further examination of these second-order knowledge-sharing processes is required, including development of appropriate methods and measures for its assessment. Some participants reported that they would not use the research evidence in their decision making when it contradicted professional experiences, while others

  2. Evaluation of the state water-resources research institutes

    USGS Publications Warehouse

    Ertel, M.O.

    1988-01-01

    Water resources research institutes, as authorized by the Water Resources Research Act of 1984 (Public Law 98-242), are located in each state and in the District of Columbia, Guam, Puerto Rico, and the Virgin Islands. Public Law 98-242 mandated an onsite evaluation of each of these institutes to determine whether '...the quality and relevance of its water resources research and its effectiveness as an institution for planning, conducting, and arranging for research warrant its continued support in the national interest.' The results of these evaluations, which were conducted between September 1985 and June 1987, are summarized. The evaluation teams found that all 54 institutes are meeting the basic objectives of the authorizing legislation in that they: (1) use the grant funds to support research that addresses water problems of state and regional concern; (2) provide opportunities for training of water scientists through student involvement on research projects; and (3) promote the application of research results through preparation of technical reports and contributions to the technical literature. The differences among institutes relate primarily to degrees of effectiveness, and most often are determined by the financial, political, and geographical contexts in which the institutes function and by the quality of their leadership. (Lantz-PTT)

  3. A Research Synthesis of the Evaluation Capacity Building Literature

    ERIC Educational Resources Information Center

    Labin, Susan N.; Duffy, Jennifer L.; Meyers, Duncan C.; Wandersman, Abraham; Lesesne, Catherine A.

    2012-01-01

    The continuously growing demand for program results has produced an increased need for evaluation capacity building (ECB). The "Integrative ECB Model" was developed to integrate concepts from existing ECB theory literature and to structure a synthesis of the empirical ECB literature. The study used a broad-based research synthesis method with…

  4. DETROIT EXPOSURE AND AEROSOL RESEARCH STUDY (DEARS)

    EPA Science Inventory

    The Detroit Exposure and Aerosol Research Study (DEARS) is a residential and personal exposure field monitoring study that is being conducted in Detroit MI over a three-year period from 2004 to 2007. The primary goal of the study is to evaluate and describe the relationship betw...

  5. Research on uncertainty evaluation measure and method of voltage sag severity

    NASA Astrophysics Data System (ADS)

    Liu, X. N.; Wei, J.; Ye, S. Y.; Chen, B.; Long, C.

    2018-01-01

    Voltage sag is an unavoidable and serious power quality problem in power systems. This paper provides a general summary and review of the concepts, indices and evaluation methods relating to voltage sag severity. Taking into account the complexity and uncertainty of the influencing factors, the degree of damage, and the characteristics and requirements of voltage sag severity on the source, network and load sides, the measurement concepts and the conditions under which they apply, as well as the evaluation indices and methods, are analysed. Current evaluation techniques, such as stochastic theory and fuzzy logic, together with their fusion, are reviewed in detail, and an index system for voltage sag severity is provided for comprehensive study. The main aim of the paper is to propose an approach and method for severity research based on advanced uncertainty theory and uncertainty measures. The study may serve as a valuable guide for researchers interested in the domain of voltage sag severity.

  6. Solving a methodological challenge in work stress evaluation with the Stress Assessment and Research Toolkit (StART): a study protocol

    PubMed Central

    2013-01-01

    Background Stress evaluation is a field of strong interest, and one that is challenging because of several methodological aspects of the evaluation process. The aim of this study is to propose a study protocol to test a new method (i.e., the Stress Assessment and Research Toolkit) to assess psychosocial risk factors at work. Design This method addresses several methodological issues (e.g., subjective vs. objective, qualitative vs. quantitative data) by assessing work-related stressors using different kinds of data within a mixed methods design: i) organisational archival data (organisational indicators sheet); ii) qualitative data (focus group); iii) worker perception (questionnaire); and iv) observational data (observational checklist). In addition, it allows positive and negative aspects of work to be considered conjointly, using an approach that takes job demands and job resources into account at the same time. Discussion The integration of these sources of data can reduce the theoretical and methodological bias associated with stress research in the work setting, allows researchers and professionals to obtain a reliable description of workers’ stress and a more articulate vision of psychosocial risks, and allows a large amount of data to be collected. Finally, implementation of the method ensures, in the long term, primary prevention in psychosocial risk management, in that it aims to reduce or modify the intensity, frequency or duration of organisational demands. PMID:23799950

  7. Childhood Obesity Research Demonstration (CORD): Evaluation plan

    USDA-ARS?s Scientific Manuscript database

    The Childhood Obesity Research Demonstration (CORD) project evaluation will determine the extent to which the CORD model of linking primary care (PC) interventions to public health (PH) interventions in multiple community sectors affects BMI and behavior in children (2 to 12 years). The evaluation c...

  8. Integrating service development with evaluation in telehealthcare: an ethnographic study

    PubMed Central

    Finch, Tracy; May, Carl; Mair, Frances; Mort, Maggie; Gask, Linda

    2003-01-01

    Objectives To identify issues that facilitate the successful integration of evaluation and development of telehealthcare services. Design Ethnographic study using various qualitative research techniques to obtain data from several sources, including in-depth semistructured interviews, project steering group meetings, and public telehealthcare meetings. Setting Seven telehealthcare evaluation projects (four randomised controlled trials and three pragmatic service evaluations) in the United Kingdom, studied over two years. Projects spanned a range of specialties—dermatology, psychiatry, respiratory medicine, cardiology, and oncology. Participants Clinicians, managers, technical experts, and researchers involved in the projects. Results and discussion Key problems in successfully integrating evaluation and service development in telehealthcare are, firstly, defining existing clinical practices (and anticipating changes) in ways that permit measurement; secondly, managing additional workload and conflicting responsibilities brought about by combining clinical and research responsibilities (including managing risk); and, thirdly, understanding various perspectives on effectiveness and the limitations of evaluation results beyond the context of the research study. Conclusions Combined implementation and evaluation of telehealthcare systems is complex, and is often underestimated. The distinction between quantitative outcomes and the workability of the system is important for producing evaluative knowledge that is of practical value. More pragmatic approaches to evaluation, that permit both quantitative and qualitative methods, are required to improve the quality of such research and its relevance for service provision in the NHS. PMID:14630758

  9. Models and Mechanisms for Evaluating Government-Funded Research: An International Comparison

    ERIC Educational Resources Information Center

    Coryn, Chris L. S.; Hattie, John A.; Scriven, Michael; Hartmann, David J.

    2007-01-01

    This research describes, classifies, and comparatively evaluates national models and mechanisms used to evaluate research and allocate research funding in 16 countries. Although these models and mechanisms vary widely in terms of how research is evaluated and financed, nearly all share the common characteristic of relating funding to some measure…

  10. Evaluation of FRA trespass prevention research study

    DOT National Transportation Integrated Search

    2015-06-01

    The United States Department of Transportation's (US DOT) John A. Volpe National Transportation Systems Center (Volpe Center), under the direction of the US DOT Federal Railroad Administration (FRA) Office of Research and Development (R&D), conduct...

  11. Making Team Science Better: Applying Improvement-Oriented Evaluation Principles to Evaluation of Cooperative Research Centers

    ERIC Educational Resources Information Center

    Gray, Denis O.

    2008-01-01

    The rise of the research center has changed the landscape of the U.S. research enterprise. It has also created a number of evaluation challenges, particularly when considering strategically focused, multifaceted cooperative research centers (CRCs). The author argues that although recent CRC evaluation efforts have gone a long way toward meeting the…

  12. The Childhood Obesity Declines Project: Implications for Research and Evaluation Approaches.

    PubMed

    Young-Hyman, Deborah; Morris, Kathryn; Kettel Khan, Laura; Dawkins-Lyn, Nicola; Dooyema, Carrie; Harris, Carole; Jernigan, Jan; Ottley, Phyllis; Kauh, Tina

    2018-03-01

    Childhood obesity remains prevalent and is increasing in some disadvantaged populations. Numerous research, policy and community initiatives have been undertaken to address this pandemic. Understudied are natural experiments. The need to learn from these efforts is paramount. Resulting evidence may not be readily available to inform future research, community initiatives, and policy development/implementation. We discuss the implications of using an adaptation of the Systematic Screening and Assessment (SSA) method to evaluate the Childhood Obesity Declines (COBD) project. The project examined successful initiatives, programs and policies in four diverse communities which were concurrent with significant declines in child obesity. In the context of other research designs and evaluation schemas, rationale for use of SSA is presented. Evidence generated by this method is highlighted and guidance suggested for evaluation of future studies of community-based childhood obesity prevention initiatives. Support for the role of stakeholder collaboratives, in particular the National Collaborative on Childhood Obesity Research, as a synergistic vehicle to accelerate research on childhood obesity is discussed. SSA mapped active processes and provided contextual understanding of multi-level/component simultaneous efforts to reduce rates of childhood obesity in community settings. Initiatives, programs and policies were not necessarily coordinated. And although direct attribution of intervention/initiative/policy components could not be made, the what, by whom, how, and to whom was temporally associated with statistically significant reductions in childhood obesity. SSA provides evidence for context and processes which are not often evaluated in other data analytic methods. SSA provides an additional tool to layer with other evaluation approaches.

  13. Evaluating performance feedback: a research study into issues of credibility and utility for nursing clinicians.

    PubMed

    Fereday, Jennifer; Muir-Cochrane, Eimear

    2004-01-01

    Performance feedback is information provided to employees about how well they are performing in their work role. The nursing profession has a long history of providing formal, written performance reviews, traditionally from a manager to subordinate, with less formal feedback sources including peers, clients and multidisciplinary team members. This paper is based on one aspect of a PhD research study exploring the dynamics of performance feedback primarily from the nursing clinicians' perspective. The research reported here discusses the impact of the social relationship (between the source and recipient of performance feedback) on the recipient's evaluation of feedback as being 'credible' and 'useful' for self-assessment. Focus group interviews were utilised to ascertain the nursing clinicians' perspectives of performance feedback. Thematic analysis of the data was informed by the Social Phenomenology of Alfred Schutz (1967), specifically his theories of intersubjective understanding. Findings supported the level of familiarity between the feedback source and the nursing clinician as a significant criterion influencing the acceptance or rejection of feedback. Implications for the selection of performance feedback sources and processes within nursing are discussed.

  14. Presentation of a Novel Model for Evaluation of Commercialization of Research and Development: Case Study of the Pharmaceutical Biotechnology Industry

    PubMed Central

    Emami, Hassan; Radfar, Reza

    2017-01-01

    The current situation in Iran suggests an appropriate basis for developing biotechnology industries, because the patents for the majority of hi-tech medicines registered in developed countries are ending. Biosimilar and technology-oriented companies which do not have patents will have the opportunity to enter the biosimilar market and move toward innovative initiatives. The present research proposed a model by which one can evaluate commercialization of achievements obtained from research with a focus on the pharmaceutical biotechnology industry. This is a descriptive-analytic study where mixed methodology is followed by a heuristic approach. The statistical population was pharmaceutical biotechnology experts at universities and research centers in Iran. Structural equations were employed in this research. The results indicate that there are three effective layers within commercialization in the proposed model. These are a general layer (factors associated with management, human capital, legal infrastructure, communication infrastructure, technical and executive infrastructures, and financial factors), an industrial layer (internal industrial factors and pharmaceutical industry factors), and a third layer that includes national and international aspects. These layers comprise 6 domains, 21 indices, 41 dimensions, and 126 components. Compilation of these layers (general layer, industrial layer, and national and international aspects) can serve as an effective evaluation package for the commercialization of research and development. PMID:29201110

  15. Presentation of a Novel Model for Evaluation of Commercialization of Research and Development: Case Study of the Pharmaceutical Biotechnology Industry.

    PubMed

    Emami, Hassan; Radfar, Reza

    2017-01-01

    The current situation in Iran suggests an appropriate basis for developing biotechnology industries, because the patents for the majority of hi-tech medicines registered in developed countries are ending. Biosimilar and technology-oriented companies which do not have patents will have the opportunity to enter the biosimilar market and move toward innovative initiatives. The present research proposed a model by which one can evaluate commercialization of achievements obtained from research with a focus on the pharmaceutical biotechnology industry. This is a descriptive-analytic study where mixed methodology is followed by a heuristic approach. The statistical population was pharmaceutical biotechnology experts at universities and research centers in Iran. Structural equations were employed in this research. The results indicate that there are three effective layers within commercialization in the proposed model. These are a general layer (factors associated with management, human capital, legal infrastructure, communication infrastructure, technical and executive infrastructures, and financial factors), an industrial layer (internal industrial factors and pharmaceutical industry factors), and a third layer that includes national and international aspects. These layers comprise 6 domains, 21 indices, 41 dimensions, and 126 components. Compilation of these layers (general layer, industrial layer, and national and international aspects) can serve as an effective evaluation package for the commercialization of research and development.

  16. Evaluation of doctoral nursing programs in Japan by faculty members and their educational and research activities.

    PubMed

    Arimoto, Azusa; Gregg, Misuzu F; Nagata, Satoko; Miki, Yuko; Murashima, Sachiyo

    2012-07-01

    Evaluation of doctoral programs in nursing is becoming more important with the rapid increase in the programs in Japan. This study aimed to evaluate doctoral nursing programs by faculty members and to analyze the relationship of the evaluation with educational and research activities of faculty members in Japan. Target settings were all 46 doctoral nursing programs. Eighty-five faculty members from 28 programs answered the questionnaire, which included 17 items for program evaluation, 12 items for faculty evaluation, 9 items for resource evaluation, 3 items for overall evaluations, and educational and research activities. A majority gave low evaluations for sources of funding, the number of faculty members and support staff, and administrative systems. Faculty members who financially supported a greater number of students gave a higher evaluation for extramural funding support, publication, provision of diverse learning experiences, time of supervision, and research infrastructure. The more time a faculty member spent on advising doctoral students, the higher were their evaluations on the supportive learning environment, administrative systems, time of supervision, and timely feedback on students' research. The findings of this study indicate a need for improvement in research infrastructure, funding sources, and human resources to achieve quality nursing doctoral education in Japan. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Researcher development program of the primary health care research, evaluation and development strategy.

    PubMed

    McIntyre, Ellen; Brun, Lyn; Cameron, Helen

    2011-01-01

    The Research Development Program (RDP) was initiated in 2004 under the Primary Health Care Research, Evaluation and Development (PHCRED) Strategy to increase the number and range of people with knowledge and skills in primary health care research and evaluation. RDP Fellows were invited to participate in an online survey about the effect the program had on their research knowledge, attitudes and practice. The response rate was 42% (105/248). Most were female (88%) with 66% aged between 31 and 50 years. Over two-thirds (72%) were health practitioners. Activities undertaken during the RDP ranged from literature reviews, developing a research question, preparing ethics submissions, attending and presenting at conferences and seminars, preparing papers and reports, and submitting grant applications. Despite the fact that only 52% agreed that the RDP time was adequate, 94% agreed that the RDP was a valuable experience, with 89% expressing interest in undertaking further research. These results indicate that this program has had a positive effect on the RDP Fellows in terms of their knowledge about research, their attitude to research, and the way they use research in their work.

  18. Promoting research and audit at medical school: evaluating the educational impact of participation in a student-led national collaborative study.

    PubMed

    Chapman, Stephen J; Glasbey, James C D; Khatri, Chetan; Kelly, Michael; Nepogodiev, Dmitri; Bhangu, Aneel; Fitzgerald, J Edward F

    2015-03-13

    Medical students often struggle to engage in extra-curricular research and audit. The Student Audit and Research in Surgery (STARSurg) network is a novel student-led, national research collaborative. Student collaborators contribute data to national, clinical studies while gaining an understanding of audit and research methodology and ethical principles. This study aimed to evaluate the educational impact of participation. Participation in the national, clinical project was supported with training interventions, including an academic training day, an online e-learning module, weekly discussion forums and YouTube® educational videos. A non-mandatory, online questionnaire assessed collaborators' self-reported confidence in performing key academic skills and their perceptions of audit and research prior to and following participation. The group completed its first national clinical study ("STARSurgUK") with 273 student collaborators across 109 hospital centres. Ninety-seven paired pre- and post-study participation responses (35.5%) were received (male = 51.5%; median age = 23). Participation led to increased confidence in key academic domains including: communication with local research governance bodies (p < 0.001), approaching clinical staff to initiate local collaboration (p < 0.001), data collection in a clinical setting (p < 0.001) and presentation of scientific results (p < 0.013). Collaborators also reported an increased appreciation of research, audit and study design (p < 0.001). Engagement with the STARSurg network empowered students to participate in a national clinical study, which increased their confidence and appreciation of academic principles and skills. Encouraging active participation in collaborative, student-led, national studies offers a novel approach for delivering essential academic training.

  19. State of health economic evaluation research in Saudi Arabia: a review.

    PubMed

    Al-Aqeel, Sinaa A

    2012-01-01

    If evaluation of economic evidence is to be used increasingly in Saudi Arabia, a review of the published literature would be useful to inform policy decision-makers of the current state of research and plan future research agendas. The purpose of this paper is to provide a critical review of the state of health economic evaluation research within the Saudi context with regard to the number, characteristics, and quality of published articles. A literature search was conducted on May 8, 2011 to identify health economic articles pertaining to Saudi Arabia in the PubMed, Embase, and EconLit databases, using the following terms alone or in combination: "cost*", "economics", "health economics", "cost-effectiveness", "cost-benefit", "cost minimization", "cost utility analysis", and "Saudi". Reference lists of the articles identified were also searched for further articles. The tables of contents of the Saudi Pharmaceutical Journal and the Saudi Medical Journal were reviewed for the previous 5 years. The search identified 535 citations. Based on a reading of abstracts and titles, 477 papers were excluded. Upon reviewing the full text of the remaining 58 papers, 43 were excluded. Fifteen papers were included. Ten were categorized as full economic evaluations and five as partial economic evaluations. These articles were published between 1997 and 2010. The majority of the studies identified did not clearly state the perspective of their evaluation. There are many concerns about the methods used to collect outcome and costs data. Only one study used some sort of sensitivity analysis to assess the effects of uncertainty on the robustness of its conclusions. This review highlights major flaws in the design, analysis, and reporting of the identified economic analyses. Such deficiencies mean that the local economic evidence available to decision-makers is not very useful. Thus, building research capability in health economics is warranted.

  20. Evaluating a collaborative IT based research and development project.

    PubMed

    Khan, Zaheer; Ludlow, David; Caceres, Santiago

    2013-10-01

    In common with all projects, evaluating an Information Technology (IT) based research and development project is necessary in order to discover whether or not the outcomes of the project are successful. However, evaluating large-scale collaborative projects is especially difficult as: (i) stakeholders from different countries are involved who, almost inevitably, have diverse technological and/or application domain backgrounds and objectives; (ii) multiple and sometimes conflicting application specific and user-defined requirements exist; and (iii) multiple and often conflicting technological research and development objectives are apparent. In this paper, we share our experiences based on the large-scale integrated research project - the HUMBOLDT project - with a project duration of 54 months, involving contributions from 27 partner organisations, plus 4 sub-contractors from 14 different European countries. In the HUMBOLDT project, a specific evaluation methodology was defined and utilised for the user evaluation of the project outcomes. The user evaluation of the HUMBOLDT Framework and its nine associated application scenarios from various application domains not only provided an evaluation of the integrated project but also revealed the benefits and disadvantages of the evaluation methodology. This paper presents the evaluation methodology, discusses in detail the process of applying it to the HUMBOLDT project and provides an in-depth analysis of the results, which can be usefully applied to other collaborative research projects in a variety of domains. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Evaluating the Risks of Clinical Research: Direct Comparative Analysis

    PubMed Central

    Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S.; Wendler, David

    2014-01-01

    Objectives: Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed “risks of daily life” standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. Methods: This study employed a conceptual and normative analysis, and use of an illustrative example. Results: Different risks are composed of the same basic elements: Type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the “risks of daily life” standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Conclusions: Direct comparative analysis is a systematic method for applying the “risks of daily life” standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about

  2. How to use bibliometric methods in evaluation of scientific research? An example from Finnish schizophrenia research.

    PubMed

    Koskinen, Johanna; Isohanni, Matti; Paajala, Henna; Jääskeläinen, Erika; Nieminen, Pentti; Koponen, Hannu; Tienari, Pekka; Miettunen, Jouko

    2008-01-01

    We present bibliometric methods that can be utilized in evaluation processes of scientific work. In this paper, we offer some practical guidance, using Finnish schizophrenia research as an example and comparing the research output of different institutions. Bibliometric data and indicators including publication counts, impact factors and received citations were used as tools for evaluating research performance in Finnish schizophrenia research. Articles and citations were retrieved from the Web of Science database. We used schizophrenia as a keyword, restricted the address field to Finland, and limited the publication years to 1996-2005. In total, 265 articles met our criteria. There were differences in impact factors and received citations between institutions. The number of annually published Finnish schizophrenia articles has tripled since the mid-1990s. International co-operation was common (43%). Bibliometric methods revealed differences between institutions, indicating that the methods can be applied in research evaluation. The coverage of databases as well as the precision of their search engines can be seen as limitations. Bibliometric methods offer a practical and impartial way to estimate publication profiles of researchers and research groups. According to our experience, these methods can be used as an evaluation instrument in research together with other methods, such as expert opinions and panels.
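
    The indicators described here (publication counts and received citations per institution) reduce to simple aggregation once records are exported from the citation database; a sketch with invented records follows, standing in for a real Web of Science export.

        # Sketch of the aggregation step behind the bibliometric indicators described
        # above: publication counts and citation totals per institution. The records
        # are invented; real data would come from a Web of Science export.
        from collections import defaultdict

        records = [
            {"institution": "Univ A", "citations": 12},
            {"institution": "Univ A", "citations": 3},
            {"institution": "Univ B", "citations": 25},
            {"institution": "Univ B", "citations": 0},
            {"institution": "Univ B", "citations": 7},
        ]

        counts, citations = defaultdict(int), defaultdict(int)
        for rec in records:
            counts[rec["institution"]] += 1
            citations[rec["institution"]] += rec["citations"]

        for inst in counts:
            mean_cites = citations[inst] / counts[inst]
            print(f"{inst}: {counts[inst]} papers, {citations[inst]} citations "
                  f"({mean_cites:.1f} per paper)")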

  3. Design, Implementation and Evaluation of School-Based Sexual Health Education in Sub-Saharan Africa: A Qualitative Study of Researchers' Perspectives

    ERIC Educational Resources Information Center

    Sani, A. Sadiq; Abraham, Charles; Denford, Sarah; Mathews, Catherine

    2018-01-01

    This study investigated facilitators and challenges to designing, implementing and evaluating school-based sexual health education in sub-Saharan Africa, using interviews with intervention designers and researchers. At the pre-planning and planning stages, participants reported that facilitating factors included addressing the reproductive health…

  4. Do College Students Notice Errors in Evidence When Critically Evaluating Research Findings?

    ERIC Educational Resources Information Center

    Rodriguez, Fernando; Ng, Annalyn; Shah, Priti

    2016-01-01

    The authors examined college students' ability to critically evaluate scientific evidence, specifically, whether first- and second-year students noticed when poor interpretations were drawn from research evidence. Fifty students evaluated a set of eight psychological studies, first in an informal context, then again in a critical-thinking context.…

  5. ASPER Research and Evaluation Projects 1970-79.

    ERIC Educational Resources Information Center

    1980

    This inventory of research and evaluation projects completed during calendar years 1970-1979 for the Office of the Assistant Secretary for Policy, Education, and Research (ASPER) of the Department of Labor contains summaries of projects on economic, social, and policy background; the labor market; the nature and impact of Department of Labor…

  6. Using mixed methods to develop and evaluate complex interventions in palliative care research.

    PubMed

    Farquhar, Morag C; Ewing, Gail; Booth, Sara

    2011-12-01

    There is increasing interest in combining qualitative and quantitative research methods to provide comprehensiveness and greater knowledge yield. Mixed methods are valuable in the development and evaluation of complex interventions. They are therefore particularly valuable in palliative care research where the majority of interventions are complex, and the identification of outcomes particularly challenging. This paper aims to introduce the role of mixed methods in the development and evaluation of complex interventions in palliative care, and how they may be used in palliative care research. The paper defines mixed methods and outlines why and how mixed methods are used to develop and evaluate complex interventions, with a pragmatic focus on design and data collection issues and data analysis. Useful texts are signposted and illustrative examples provided of mixed method studies in palliative care, including a detailed worked example of the development and evaluation of a complex intervention in palliative care for breathlessness. Key challenges to conducting mixed methods in palliative care research are identified in relation to data collection, data integration in analysis, costs and dissemination and how these might be addressed. The development and evaluation of complex interventions in palliative care benefit from the application of mixed methods. Mixed methods enable better understanding of whether and how an intervention works (or does not work) and inform the design of subsequent studies. However, they can be challenging: mixed method studies in palliative care will benefit from working with agreed protocols, multidisciplinary teams and engaging staff with appropriate skill sets.

  7. [Evaluation and prioritisation of the scientific research in Spain. Researchers' point of view].

    PubMed

    María Martín-Moreno, José; Juan Toharia, José; Gutiérrez Fuentes, José Antonio

    2008-12-01

    The assessment and prioritisation of research activity are essential components of any Science, Technology and Industry System. Data on researchers' perspectives in this respect are scarce. The objective of this paper was to describe Spanish scientists' point of view on the current evaluation system in Spain and how they believe this system should be functionally structured. From the sampling frame formed by established Spanish scientists, listed in the databases of CSIC and FIS (Institute of Health Carlos III), clinical, biomedical (non-clinical), and physics and chemistry researchers were randomly selected. Two hundred and eleven interviews were carried out by means of a computer-assisted telephone interviewing system. Researchers acknowledged progress in the Spanish research field but made clear their wish to move towards better scientific scenarios. In their assessment, they gave a score of 5.4 to scientific policy, as opposed to 9.4 when speaking about the goals, reflecting the desire for a better policy definition, with clear objectives, stable strategies and better coordination of R&D activities (the current coordination received a score of 3.9, while the desirable coordination was valued as high as 9.2). There was a degree of agreement regarding the need for prioritisation criteria that preserve some degree of creativity on the part of researchers. They also stated that they would like to see an independent research structure with social prestige and influence. The interviewed researchers believe that the evaluation of scientific activities is fundamental in formulating a sound scientific policy. Prioritisation should arise from appropriate evaluation. Strategies properly coordinated among all the stakeholders (including the private sector) should be fostered. Budget sufficiency, stability, and better organization of independent researchers should be the backbone of any strategy tailored to increase their capacity to influence future scientific

  8. Evaluation of the work of hospital districts' research ethics committees in Finland.

    PubMed

    Halila, Ritva

    2014-12-01

    The main task of research ethics committees (RECs) is to assess research studies before their start. In this study, 24 RECs that evaluate medical research were sent questionnaires about their structure and functions. The RECs were divided into two separate groups: those working in university hospital districts (uRECs) and those in central hospital districts (non-uRECs). The two groups were different in many respects: the uRECs were bigger in size, covered a wider range of disciplines (both medical and non-medical), had better resources and more frequent and regular meetings. After the survey was performed and analysed, the Medical Research Act was amended so that only hospital districts with a medical faculty in their region had a duty to establish ethics committees. After the amendment, the number of RECs evaluating medical research in Finland decreased from 25 to 9. The ethics committees that remained had wider expertise and were better equipped already by the time of this survey. Only one non-uREC was continuing its work, and this was being done under the governance of a university hospital district. Simple measures were used for qualitative analysis of the work of RECs that evaluate medical research. These showed differences between RECs. This may be helpful in establishing an ethics committee network in a research field or administrational area. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. Conceptualizing Indicator Domains for Evaluating Action Research

    ERIC Educational Resources Information Center

    Piggot-Irvine, Eileen; Rowe, Wendy; Ferkins, Lesley

    2015-01-01

    The focus of this paper is to share thinking about meta-level evaluation of action research (AR), and to introduce indicator domains for assessing and measuring inputs, outputs and outcomes. Meta-level and multi-site evaluation has been rare in AR beyond project implementation and participant satisfaction. The paper is the first of several…

  10. 9 CFR 104.4 - Products for research and evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    9 CFR, Animals and Animal Products; Animal and Plant Health Inspection Service, Department..., Biological Products; § 104.4 Products for research and evaluation: (a) An application for a U.S. Veterinary...

  11. Methodological issues in oral health research: intervention studies.

    PubMed

    O'Mullane, Denis; James, Patrice; Whelton, Helen; Parnell, Carmel

    2012-02-01

    To provide a broad overview of methodological issues in the design and evaluation of intervention studies in dental public health, with particular emphasis on explanatory trials, pragmatic trials and complex interventions. We present a narrative summary of selected publications from the literature outlining both historical and recent challenges in the design and evaluation of intervention studies and describe some recent tools that may help researchers to address these challenges. It is now recognised that few intervention studies in dental public health are purely explanatory or pragmatic. We describe the PRECIS tool, which can be used by trialists to assess and display the position of their trial on a continuum between the extremes of explanatory and pragmatic trials. The tool aims to help trialists make design decisions that are in line with their stated aims. The increasingly complex nature of dental public health interventions presents particular design and evaluation challenges. The revised Medical Research Council (MRC) guidance for the development and evaluation of complex interventions, which emphasises the importance of planning and process evaluation, is a welcome development. We briefly describe the MRC guidance and outline some examples of complex interventions in the field of oral health. The role of observational studies in monitoring public health interventions when the conduct of RCTs is not appropriate or feasible is acknowledged. We describe the STROBE statement and outline the implications of the STROBE guidelines for dental public health. The methodological challenges in the design, conduct and reporting of intervention studies in oral health are considerable. The need to provide reliable evidence to support innovative new strategies in oral health policy is a major impetus in these fields. No doubt the 'Methodological Issues in Oral Health Research' group will have further opportunities to highlight this work. © 2012 John Wiley & Sons A/S.

  12. Economic Evaluation alongside Multinational Studies: A Systematic Review of Empirical Studies

    PubMed Central

    Oppong, Raymond; Jowett, Sue; Roberts, Tracy E.

    2015-01-01

    Purpose of the study This study seeks to explore methods for conducting economic evaluations alongside multinational trials by conducting a systematic review of the methods used in practice and the challenges that are typically faced by the researchers who conducted the economic evaluations. Methods A review was conducted for the period 2002 to 2012, with potentially relevant articles identified by searching the Medline, Embase and NHS EED databases. Studies were included if they were full economic evaluations conducted alongside a multinational trial. Results A total of 44 studies out of a possible 2667 met the inclusion criteria. Methods used for the analyses varied between studies, indicating a lack of consensus on how economic evaluation alongside multinational studies should be carried out. The most common challenge appeared to be related to addressing differences between countries, which potentially hinders the generalisability and transferability of results. Other challenges reported included inadequate sample sizes and choosing cost-effectiveness thresholds. Conclusions It is recommended that additional guidelines be developed to aid researchers in this area and that these be based on an understanding of the challenges associated with multinational trials and the strengths and limitations of alternative approaches. Guidelines should focus on ensuring that results will aid decision makers in their individual countries. PMID:26121465
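    As a purely illustrative note on the cost-effectiveness analyses discussed above, the sketch below computes an incremental cost-effectiveness ratio (ICER); the costs and QALY values are hypothetical and are not taken from the review.

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER) calculation.
# All numbers are hypothetical and for illustration only.

def icer(cost_new, cost_old, effect_new, effect_old):
    """ICER = (C_new - C_old) / (E_new - E_old), e.g. cost per QALY gained."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("No incremental effect; the ICER is undefined.")
    return delta_cost / delta_effect

# Hypothetical trial arms: costs in euros, effects in QALYs per patient.
ratio = icer(cost_new=12500.0, cost_old=9800.0, effect_new=1.40, effect_old=1.25)
print(f"ICER: {ratio:,.0f} per QALY gained")  # 18,000 per QALY gained

# A decision maker would compare this against a country-specific
# cost-effectiveness threshold, one of the challenges noted in the review.
```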

  13. Research Studies in Business Education Completed in 1972

    ERIC Educational Resources Information Center

    Byrnside, O. J., Jr.

    1973-01-01

    Research studies are classified as economic, business education survey, curriculum, evaluation, teaching methods, guidance, teaching materials, teacher preparation, tests, and history of business education. (MU)

  14. Program Evaluation - Automotive Lightweighting Materials Program Research and Development Projects Assessment of Benefits - Case Studies No. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, S.

    This report is the second of a series of studies to evaluate research and development (R&D) projects funded by the Automotive Lightweighting Materials (ALM) Program of the Office of Advanced Automotive Technologies (OAAT) of the U.S. Department of Energy (DOE). The objectives of the program evaluation are to assess short-run outputs and long-run outcomes that may be attributable to the ALM R&D projects. The ALM program focuses on the development and validation of advanced technologies that significantly reduce automotive vehicle body and chassis weight without compromising other attributes such as safety, performance, recyclability, and cost. Funded projects range from fundamental materials science research to applied research in production environments. Collaborators on these projects include national laboratories, universities, and private sector firms, such as leading automobile manufacturers and their suppliers. Three ALM R&D projects were chosen for this evaluation: Design and Product Optimization for Cast Light Metals, Durability of Lightweight Composite Structures, and Rapid Tooling for Functional Prototyping of Metal Mold Processes. These projects were chosen because they have already been completed. The first project resulted in development of a comprehensive cast light metal property database, an automotive application design guide, computerized predictive models, process monitoring sensors, and quality assurance methods. The second project, the durability of lightweight composite structures, produced durability-based design criteria documents, predictive models for creep deformation, and minimum test requirements and suggested test methods for establishing durability properties and characteristics of random glass-fiber composites for automotive structural composites. The durability project supported Focal Project II, a validation activity that demonstrates ALM program goals and reduces the lead time for bringing new technology into the

  15. Evaluation systems for clinical governance development: a comparative study.

    PubMed

    Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Ebrahimipour, Hossein

    2014-01-01

    The lack of confirmed scientific research and expert knowledge about evaluation systems for clinical governance development in Iran makes the study of different evaluation systems for clinical governance development a necessity. Such studies must provide applied strategies for designing criteria for implementing clinical governance in hospital accreditation. This is a descriptive, comparative study of the development of clinical governance models around the world. Data were gathered by reviewing related articles, and the models were examined using a comprehensive review method. The evaluated models of clinical governance development were the Australian, NHS, SPOCK and OPTIGOV models. The final aspects extracted from these models were Responsiveness, Policies and Strategies, Organizational Structure, Allocating Resources, Education and Occupational Development, Performance Evaluation, External Evaluation, Patient Oriented Approach, Risk Management, Personnel's Participation, Information Technology, Human Resources, Research and Development, Evidence Based Medicine, Clinical Audit, Health Technology Assessment and Quality. These results can be used to complete the existing criteria for evaluating the application of clinical governance and provide a practical framework for evaluating the country's hospitals on the basis of clinical governance elements.

  16. Primary prevention research: a preliminary review of program outcome studies.

    PubMed

    Schaps, E; Churgin, S; Palley, C S; Takata, B; Cohen, A Y

    1980-07-01

    This article reviews 35 drug abuse prevention program evaluations employing drug-specific outcome measures. Many of these evaluations assessed the effects of "new generation" prevention strategies: affective, peer-oriented, and multidimensional approaches. Only 14 studies evaluated purely informational programs. Evaluations were analyzed to ascertain (1) characteristics of the programs under study, (2) characteristics of the research designs, and (3) patterns among findings. This review provides some evidence that the newer prevention strategies may produce more positive and fewer negative outcomes than did older drug information approaches. Over 70% of the programs using the newer strategies produced some positive effects; only 29% showed negative effects. In contrast, 46% of informational programs showed positive effects; 46% showed negative effects. These findings must be approached with great caution, since the research was frequently scientifically inadequate, and since rigor of research was negatively correlated with intensity and duration of program services.

  17. Impact of referral source and study applicants' preference for randomly assigned service on research enrollment, service engagement, and evaluative outcomes.

    PubMed

    Macias, Cathaleene; Barreira, Paul; Hargreaves, William; Bickman, Leonard; Fisher, William; Aronson, Elliot

    2005-04-01

    The inability to blind research participants to their experimental conditions is the Achilles' heel of mental health services research. When one experimental condition receives more disappointed participants, or more satisfied participants, research findings can be biased in spite of random assignment. The authors explored the potential for research participants' preference for one experimental program over another to compromise the generalizability and validity of randomized controlled service evaluations as well as cross-study comparisons. Three Cox regression analyses measured the impact of applicants' service assignment preference on research project enrollment, engagement in assigned services, and a service-related outcome, competitive employment. A stated service preference, referral by an agency with a low level of continuity in outpatient care, and willingness to switch from current services were significant positive predictors of research enrollment. Match to service assignment preference was a significant positive predictor of service engagement, and mismatch to assignment preference was a significant negative predictor of both service engagement and employment outcome. Referral source type and service assignment preference should be routinely measured and statistically controlled for in all studies of mental health service effectiveness to provide a sound empirical base for evidence-based practice.
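    The study above reports Cox regression analyses of time to research enrollment and service engagement. Below is a minimal sketch of that kind of model using the lifelines package on simulated data with hypothetical variable names; the original data and exact model specification are not reproduced here.

```python
# Minimal Cox proportional hazards sketch with lifelines (simulated data, hypothetical columns).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
pref = rng.integers(0, 2, n)        # stated a service preference (hypothetical covariate)
low_cont = rng.integers(0, 2, n)    # referred by a low-continuity agency (hypothetical)
switch = rng.integers(0, 2, n)      # willing to switch from current services (hypothetical)

# Simulate time-to-enrollment so that preference and willingness shorten the time.
baseline = rng.exponential(10, n)
time = baseline * np.exp(-0.6 * pref - 0.4 * switch - 0.3 * low_cont)
event = (time < 12).astype(int)     # administrative censoring at 12 weeks
time = np.minimum(time, 12)

df = pd.DataFrame({"weeks": time, "enrolled": event,
                   "stated_preference": pref,
                   "low_continuity_referral": low_cont,
                   "willing_to_switch": switch})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks", event_col="enrolled")
cph.print_summary()  # hazard ratios > 1 indicate faster enrollment for that predictor
```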

  18. Research Evaluation and the Assessment of Public Value

    ERIC Educational Resources Information Center

    Molas-Gallart, Jordi

    2015-01-01

    Funding organisations are increasingly asking academics to show evidence of the economic and social value generated by their research. These requests have often been associated with the emergence of a so-called "new social contract for research" and are related to the implementation of new research evaluation systems. Although the…

  19. Research methodology workshops evaluation using the Kirkpatrick's model: translating theory into practice.

    PubMed

    Abdulghani, Hamza Mohammad; Shaik, Shaffi Ahamed; Khamis, Nehal; Al-Drees, Abdulmajeed Abdulrahman; Irshad, Mohammad; Khalil, Mahmoud Salah; Alhaqwi, Ali Ibrahim; Isnani, Arthur

    2014-04-01

    Qualitative and quantitative evaluation of academic programs can enhance the development, effectiveness, and dissemination of comparative quality reports as well as quality improvement efforts. The aim was to evaluate five research methodology workshops by assessing participants' satisfaction, knowledge and skills gain, and impact on practice using Kirkpatrick's evaluation model. The four-level Kirkpatrick model was applied for the evaluation. Training feedback questionnaires, pre- and post-tests, learner development plan reports and behavioral surveys were used to evaluate the effectiveness of the workshop programs. Of the 116 participants, 28 (24.1%) liked the programs with appreciation, 62 (53.4%) liked them with suggestions and 26 (22.4%) disliked them. Mean scores on pre- and post-workshop MCQ tests showed a significant improvement in relevant basic knowledge and cognitive skills of 17.67% (p ≤ 0.005). Pre- and post-test scores on workshop sub-topics also improved for manuscript writing (p ≤ 0.031) and proposal writing (p ≤ 0.834). As for impact, 56.9% of participants started research, and 6.9% published their studies. The results of participants' performance revealed overall positive feedback, and 79% of participants reported transfer of training skills at their workplace. The achievement of course outcomes and the suggestions given for improvement offer insight into the program and were encouraging and very useful. Encouraging a "research culture" and work-based learning are probably the most powerful determinants of research promotion. These findings therefore encourage the faculty development unit to continue its training and development in research methodology.
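    The knowledge-gain result above rests on comparing paired pre- and post-workshop test scores. The sketch below illustrates such a comparison with hypothetical scores and a standard paired test; it is not necessarily the authors' exact analysis.

```python
# Minimal sketch of a pre/post knowledge-test comparison (hypothetical scores).
# A paired t-test is one common choice; a Wilcoxon signed-rank test is a
# non-parametric alternative when normality is doubtful.
import numpy as np
from scipy import stats

pre  = np.array([52, 60, 47, 55, 63, 58, 49, 66, 54, 61], dtype=float)
post = np.array([65, 70, 55, 68, 72, 66, 57, 75, 64, 70], dtype=float)

t_stat, p_value = stats.ttest_rel(post, pre)
mean_gain_pct = 100 * (post.mean() - pre.mean()) / pre.mean()
print(f"mean relative gain: {mean_gain_pct:.1f}%  (t = {t_stat:.2f}, p = {p_value:.4f})")

w_stat, w_p = stats.wilcoxon(post, pre)  # non-parametric alternative
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")
```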

  20. Evaluation of XV-15 tilt rotor aircraft for flying qualities research application

    NASA Technical Reports Server (NTRS)

    Radford, R. C.; Schelhorn, A. E.; Siracuse, R. J.; Till, R. D.; Wasserman, R.

    1976-01-01

    The results of a design review study and evaluation of the XV-15 Tilt Rotor Research Aircraft for flying qualities research application are presented. The objectives of the program were to determine the capability of the XV-15 aircraft and the V/STOLAND system as a safe, inflight facility to provide meaningful research data on flying qualities, flight control systems, and information display systems.

  1. The evaluation of the individual impact factor of researchers and research centers using the RC algorithm.

    PubMed

    Cordero-Villafáfila, Amelia; Ramos-Brieva, Jesus A

    2015-01-01

    The RC algorithm quantitatively evaluates the personal impact factor of the scientific production of isolated researchers. The authors propose an adaptation of RC to evaluate the personal impact factor of research centers, hospitals and other research groups. Thus, these could be classified according to the accredited impact of the results of their scientific work between researchers of the same scientific area. This could be useful for channelling budgets and grants for research. Copyright © 2013 SEP y SEPB. Published by Elsevier España. All rights reserved.

  2. The design and research of poverty alleviation monitoring and evaluation system: a case study in the Jiangxi province

    NASA Astrophysics Data System (ADS)

    Mo, Hong-yuan; Wang, Ying-jie; Yu, Zhuo-yuan

    2009-07-01

    The Poverty Alleviation Monitoring and Evaluation System (PAMES) is introduced in this paper. The authors describe the selection of the development platform and the details of system design and implementation. Unlike traditional poverty alleviation research, this paper develops a new analytical geo-visualization approach to study the distribution and causes of poverty within a Geographic Information System (GIS). Based on detailed poverty population data, together with the spatial locations and population statistical indicators of poverty-stricken villages in Jiangxi province, the distribution characteristics of the poverty population are described. The results can provide poverty alleviation decision support from a spatial-temporal perspective, and suggest that the administrative unit used to designate poverty-stricken areas should be changed from county to village, in line with the spatial distribution pattern of poverty.

  3. Formative Ethnographic Research to Improve Evaluation of a Novel Water System in Ghana

    PubMed Central

    Alcorn, Ted E.; Opryszko, Melissa C.; Schwab, Kellogg J.

    2011-01-01

    The accessibility of potable water is fundamental to public health. A private for-profit company is installing kiosk-based drinking-water systems in rural and peri-urban villages in Ghana, and we evaluated their performance. Preceding an observational study to measure the effect of these kiosks on the incidence of water-related disease in recipient communities, we conducted ethnographic research to assess local water-related practices and the ways these practices would affect adoption of the new technology. We conducted fieldwork in two communities in Ghana and interviewed stakeholders throughout the water sector. Our findings illustrate the complexity of water-related behaviors and indicate several factors that may sustain disease transmission despite the presence of the new technology. This formative ethnographic research also improved the precision of our subsequent evaluation of the intervention by providing a site-specific, culturally-appropriate knowledge base. This study demonstrates the value of incorporating qualitative research techniques into evaluations of water-related projects. PMID:21540392

  4. Practical Assessment, Research & Evaluation, 2000-2001.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M., Ed.; Schafer, William D., Ed.

    2001-01-01

    This document consists of papers published in the electronic journal "Practical Assessment, Research & Evaluation" during 2000-2001: (1) "Advantages of Hierarchical Linear Modeling" (Jason W. Osborne); (2) "Prediction in Multiple Regression" (Jason W. Osborne); (3) "Scoring Rubrics: What, When, and How?"…

  5. Process Evaluation for Improving K12 Program Effectiveness: Case Study of a National Institutes of Health Building Interdisciplinary Research Careers in Women's Health Research Career Development Program.

    PubMed

    Raymond, Nancy C; Wyman, Jean F; Dighe, Satlaj; Harwood, Eileen M; Hang, Mikow

    2018-06-01

    Process evaluation is an important tool in quality improvement efforts. This article illustrates how a systematic and continuous evaluation process can be used to improve the quality of faculty career development programs by using the University of Minnesota's Building Interdisciplinary Research Careers in Women's Health (BIRCWH) K12 program as an exemplar. Data from a rigorous process evaluation incorporating quantitative and qualitative measurements were analyzed and reviewed by the BIRCWH program leadership on a regular basis. Examples are provided of how this evaluation model and processes were used to improve many aspects of the program, thereby improving scholar, mentor, and advisory committee members' satisfaction and scholar outcomes. A rigorous evaluation plan can increase the effectiveness and impact of a research career development plan.

  6. Five Steps to Successfully Implement and Evaluate Propensity Score Matching in Clinical Research Studies.

    PubMed

    Staffa, Steven J; Zurakowski, David

    2018-01-09

    …-matched data to compare outcomes among treatment groups. PSM is becoming an increasingly popular statistical methodology in medical research. It often allows for improved evaluation of a treatment effect that may otherwise be invalid due to a lack of balance between the 2 treatment groups with regard to confounding variables. PSM may increase the level of evidence of a study and in turn increases the strength and generalizability of its results. Our step-by-step approach provides a useful strategy for anesthesiologists to implement PSM in their future research.
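    As an illustration of the propensity score matching (PSM) workflow the article walks through (not the authors' exact procedure), the sketch below fits a logistic-regression propensity score and performs 1:1 nearest-neighbour matching on synthetic data.

```python
# Minimal propensity score matching sketch (synthetic data, illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 500
age = rng.normal(55, 10, n)
severity = rng.normal(0, 1, n)
# Treatment assignment depends on the confounders, creating imbalance.
treated = (rng.random(n) < 1 / (1 + np.exp(-(0.04 * (age - 55) + 0.8 * severity)))).astype(int)

X = np.column_stack([age, severity])
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 1:1 nearest-neighbour matching of each treated unit to a control on the propensity score.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]

# Balance check: covariate means should be closer after matching.
print("mean age, treated vs matched controls:",
      round(age[treated_idx].mean(), 1), round(age[matched_controls].mean(), 1))
```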

  7. Enhancing Research Capacity for Global Health: Evaluation of a Distance-Based Program for International Study Coordinators

    PubMed Central

    Wilson, Lynda Law; Rice, Marti; Jones, Carolynn T.; Joiner, Cynthia; LaBorde, Jennifer; McCall, Kimberly; Jester, Penelope M; Carter, Sheree C.; Boone, Chrissy; Onwuzuligbo, Uzoma; Koneru, Alaya

    2013-01-01

    Introduction Due to the increasing number of clinical trials conducted globally, there is a need for quality continuing education for health professionals in clinical research manager (CRM) roles. This paper describes the development, implementation, and evaluation of a distance-based continuing education program for CRMs working outside the United States. Methods A total of 692 applications were received from CRMs in 50 countries. Of these, 166 were admitted to the program in two cohorts. The program, taught online and in English, included four required and one optional course. Course materials were also provided as hard copies and on CDs. A pretest/posttest design was used to evaluate the outcome of the program in terms of changes in knowledge, participants’ capacity-building activities at their research sites; and participant and supervisor perceptions of program impact. Results Participants demonstrated significant improvements in knowledge about clinical research, rated course content and teaching strategies positively, and identified the opportunity for interactions with international peers as a major program strength. Challenges for participants were limited time to complete assignments and erratic internet access. Participants offered capacity building programs to 5061 individuals at their research sites. Supervisors indicated that they would recommend the program and perceived the program improved CRM effectiveness and site research capacity. Findings Results suggest that this type of continuing education program addresses a growing need for education of CRMs working in countries that have previously had limited involvement with global clinical trials. PMID:23512562

  8. MRM Evaluation Research Program

    NASA Technical Reports Server (NTRS)

    Taylor, James C.

    1998-01-01

    This is an interim report on the current output of the MRM evaluation research program. During 1998 this research program has used new and existing data to create an important tool for the development and improvement of "maintenance resource management" (MRM). Thousands of surveys completed by participants in airline MRM training and/or behavior change programs have, for the first time, been consolidated into a panel of "MRM Attitudes and Opinion Profiles." These profiles can be used to compare the attitudes about decision making and communication in any given company at any stage in its MRM program with attitudes of a large sample of like employees during a similar period in their MRM involvement. This panel of comparison profiles for attitudes and opinions is a tool to help audit the effectiveness of a maintenance human factors program. The profile panel is the first of several tools envisioned for applying the information accumulating in MRM databases produced as one of the program's long range objectives.

  9. Quantitative evaluation research of glare from automotive headlamps

    NASA Astrophysics Data System (ADS)

    Wang, Tiecheng; Qian, Rui; Cao, Ye; Gao, Mingqiu

    2018-01-01

    This study concerns the quantitative evaluation of glare from automotive headlamps. In the current regulations, only one point on the test screen is used to judge whether a driver can tolerate the light from the headlamps of an oncoming vehicle. To evaluate the practical effect of glare, we adopt a glare zone that incorporates the probability distribution of the oncoming driver's eye position. Within this zone, the glare level of a headlamp is represented by a weighted luminous flux. To determine the most comfortable illuminance value for human eyes at 50 m, we used test point B50L as the observation position and collected 1,000 subjective evaluations from 20 test subjects of different ages over two months. Based on the assessment results, we calculated 0.60 lx as the recommended value for a standardized testing procedure at 25 m. We then derived 0.38 lm as the optimum value and 0.25/1.20 lm as the limiting values according to the regulations. We tested 40 sample vehicles of different levels to verify the sectional nonlinear quantitative evaluation method we designed and analyzed the typical test results.
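    The abstract above represents glare as a luminous flux weighted over a glare zone by the probability distribution of the driver's eye position. The sketch below is one possible numerical reading of that idea; the grid, probabilities, illuminance values, and the weighting formula itself are assumptions for illustration, not the study's actual method.

```python
# Sketch: luminous flux weighted by the probability of the driver's eye position.
# Grid cells, probabilities, illuminances and the weighting itself are hypothetical.
import numpy as np

cell_area_m2 = 0.01                           # area of each grid cell in the glare zone
prob = np.array([[0.05, 0.10, 0.05],
                 [0.10, 0.40, 0.10],
                 [0.05, 0.10, 0.05]])         # eye-position probability per cell (sums to 1)
illuminance_lx = np.array([[0.4, 0.6, 0.5],
                           [0.5, 0.8, 0.6],
                           [0.3, 0.5, 0.4]])  # illuminance measured at each cell (lx)

# Weighted luminous flux (lm): probability-weighted illuminance times cell area, summed.
weighted_flux_lm = float(np.sum(prob * illuminance_lx) * cell_area_m2)
print(f"weighted luminous flux: {weighted_flux_lm:.4f} lm")
```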

  10. Evaluation and Characterization of Health Economics and Outcomes Research in SAARC Nations.

    PubMed

    Mehta, Manthan; Nerurkar, Rajan

    2018-05-01

    To identify, evaluate, and characterize the variety, quality, and intent of the health economics and outcomes research studies being conducted in SAARC (South Asian Association for Regional Cooperation) nations. Studies published in English language between 1990 and 2015 were retrieved from Medline databases using relevant search strategies. Studies were independently reviewed as per Cochrane methodology and information on the type of research and outcomes were extracted. Quality of reporting was assessed. Of the 2638 studies screened from eight SAARC nations, a total of 179 were included for review (India = 140; Bangladesh = 12; Sri Lanka = 8; Pakistan = 7; Afghanistan = 5; Nepal = 4; Bhutan = 2; Maldives = 1). The broad study categories were cost-effectiveness analyses (CEAs = 76 studies), cost analyses (35 studies), and burden of illness (BOI=26 studies). The outcomes evaluated were direct costs, indirect costs, and incremental cost-effectiveness ratio (ICER), quality-adjusted life-years (QALYs), and disability-adjusted life-years (DALYs). Cost of medicines, consultation and hospital charges, and monitoring costs were assessed as direct medical costs along with non-direct medical costs such as travel and food for patients and caregivers. The components of indirect costs were loss of income of patients and caregivers and loss of productivity. Quality of life (QoL) was assessed in 48 studies. The most commonly used instrument for assessing QoL was the WHO-Quality of Life BREF (WHOQOL-BREF) questionnaire (76%). The Quality of Health Economic Studies (QHES) score was used for quality assessment of full economic studies (44 studies). The mean QHES score was 43.76. This review identifies various patterns of health economic studies in eight SAARC nations. The quality of economic evaluation studies for health care in India, Bangladesh, Sri Lanka, Pakistan, Afghanistan, Nepal, Bhutan, and Maldives needs improvement. There is a need to generate the capacity of researchers

  11. Evaluation of a 'virtual' approach to commissioning health research.

    PubMed

    McCourt, Christine A; Morgan, Philip A; Youll, Penny

    2006-10-18

    The objective of this study was to evaluate the implementation of a 'virtual' (computer-mediated) approach to health research commissioning. This had been introduced experimentally in a DOH programme--the 'Health of Londoners Programme'--in order to assess whether it could enhance the accessibility, transparency and effectiveness of commissioning health research. The study described here was commissioned to evaluate this novel approach, addressing these key questions. A naturalistic-experimental approach was combined with principles of action research. The different commissioning groups within the programme were randomly allocated to either the traditional face-to-face mode or the novel 'virtual' mode. Mainly qualitative data were gathered, including observation of all (virtual and face-to-face) commissioning meetings; semi-structured interviews with a purposive sample of participants (n = 32/66); structured questionnaires and interviews with lead researchers of early commissioned projects. All members of the commissioning groups were invited to participate in collaborative enquiry groups which participated actively in the analysis process. The virtual process functioned as intended, reaching timely and relatively transparent decisions that participants had confidence in. Despite the potential for greater access using a virtual approach, few differences were found in practice. Key advantages included physical access, a more flexible and extended time period for discussion, reflection and information gathering and a more transparent decision-making process. Key challenges were the reduction of social cues available in a computer-mediated medium that require novel ways of ensuring appropriate dialogue, feedback and interaction. However, in both modes, the process was influenced by a range of factors and was not technology driven. There is potential for using computer-mediated communication within the research commissioning process. This may enhance access

  12. Research Practices, Evaluation and Infrastructure in the Digital Environment

    ERIC Educational Resources Information Center

    Houghton, John W.

    2004-01-01

    This paper examines changing research practices in the digital environment and draws out implications for research evaluation and the development of research infrastructure. Reviews of the literature, quantitative indicators of research activities and our own field research in Australia suggest that a new mode of knowledge production is emerging,…

  13. An evaluation of the 'Designated Research Team' approach to building research capacity in primary care.

    PubMed

    Cooke, Jo; Nancarrow, Susan; Dyas, Jane; Williams, Martin

    2008-06-27

    This paper describes an evaluation of an initiative to increase the research capability of clinical groups in primary and community care settings in a region of the United Kingdom. The 'designated research team' (DRT) approach was evaluated using indicators derived from a framework of six principles for research capacity building (RCB) which include: building skills and confidence, relevance to practice, dissemination, linkages and collaborations, sustainability and infrastructure development. Information was collated on the context, activities, experiences, outputs and impacts of six clinical research teams supported by Trent Research Development Support Unit (RDSU) as DRTs. Process and outcome data from each of the teams was used to evaluate the extent to which the DRT approach was effective in building research capacity in each of the six principles (as evidenced by twenty possible indicators of research capacity development). The DRT approach was found to be well aligned to the principles of RCB and generally effective in developing research capabilities. It proved particularly effective in developing linkages, collaborations and skills. Where research capacity was slow to develop, this was reflected in poor alignment between the principles of RCB and the characteristics of the team, their activities or environment. One team was unable to develop a research project and the funding was withdrawn at an early stage. For at least one individual in each of the remaining five teams, research activity was sustained beyond the funding period through research partnerships and funding successes. An enabling infrastructure, including being freed from clinical duties to undertake research, and support from senior management were found to be important determinants of successful DRT development. Research questions of DRTs were derived from practice issues and several projects generated outputs with potential to change daily practice, including the use of research evidence in

  14. The TIPS Evaluation Project: A Theory-Driven Approach to Dissemination Research.

    ERIC Educational Resources Information Center

    Mulvey, Kevin P.; Hayashi, Susan W.; Hubbard, Susan M.; Kopstien, Andrea; Huang, Judy Y.

    2003-01-01

    Introduces the special section that focuses on four major studies under the treatment improvement protocols (TIPs) evaluation project. Provides an overview of each article, and addresses the value of using a theory-driven approach to dissemination research. (SLD)

  15. Management systems research study

    NASA Technical Reports Server (NTRS)

    Bruno, A. V.

    1975-01-01

    The development of a Monte Carlo simulation of procurement activities at the NASA Ames Research Center is described. Topics covered include: simulation of the procurement cycle; construction of a performance evaluation model; examination of employee development; procedures and review of evaluation criteria for divisional and individual performance evaluation; determination of the influences and apparent impact of contract type and structure; and development of a management control system for planning and controlling manpower requirements.
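    As a purely illustrative companion to the Monte Carlo simulation mentioned above, the sketch below simulates a simple procurement cycle time; the stage names and duration distributions are hypothetical and are not taken from the NASA study.

```python
# Illustrative Monte Carlo sketch of procurement cycle time (hypothetical stages/durations).
import random

random.seed(42)

def simulate_cycle():
    """One procurement cycle: requisition review, solicitation, evaluation, award (days)."""
    review = random.triangular(5, 20, 10)        # low, high, mode
    solicitation = random.triangular(10, 45, 25)
    evaluation = random.triangular(7, 30, 14)
    award = random.triangular(3, 15, 6)
    return review + solicitation + evaluation + award

runs = sorted(simulate_cycle() for _ in range(10_000))
mean_days = sum(runs) / len(runs)
p90_days = runs[int(0.9 * len(runs))]
print(f"mean cycle time: {mean_days:.1f} days, 90th percentile: {p90_days:.1f} days")
```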

  16. An evaluative study of clinical preceptorship.

    PubMed

    Kaviani, N; Stillwell, Y

    2000-04-01

    Clinical preceptorships, run in collaboration between clinical agencies and educational institutions, have been documented as an effective and innovative means of facilitating student learning, providing advantages for both the clinical and educational settings. A preceptorship programme of 100 hours duration was developed and delivered by the nurse education institute, in consultation with a health care organization. The objectives of the preceptorship programme were to help registered nurses, in partnership with clinical nurse educators, to effectively integrate, support and assist the development of clinical competence in the undergraduate nursing student. Following the implementation of the preceptorship programme, a research study was conducted to evaluate programme effectiveness. The purpose of the study was to examine preceptors', preceptees', and nurse managers' perceptions of the preceptor role and the factors which influenced the performance of preceptors. The methods used in this study included those commonly found in evaluation research. That is, participants were drawn from those who were involved, either directly or indirectly, in the preceptorship programme, namely preceptors, preceptees and nurse managers. Using focus groups, they were each asked to identify the outcomes of the programme in practice. Study findings highlighted the importance of formal preceptor preparation, which was shown to enhance teaching and learning opportunities for student preceptees, personal and professional development of the preceptors, and the promotion of positive partnerships between nurse educators and nurse practitioners. The need for formal recognition of the preceptor role in practice, particularly in relation to the provision of adequate time and resources, emerged from the study. The research findings enabled the development of an evaluative model of preceptorship, which highlights the intrinsic and extrinsic factors impacting on the preceptor role.

  17. Alternatives to Peer Review: Novel Approaches for Research Evaluation

    PubMed Central

    Birukou, Aliaksandr; Wakeling, Joseph Rushton; Bartolini, Claudio; Casati, Fabio; Marchese, Maurizio; Mirylenka, Katsiaryna; Osman, Nardine; Ragone, Azzurra; Sierra, Carles; Wassef, Aalam

    2011-01-01

    In this paper we review several novel approaches for research evaluation. We start with a brief overview of peer review, its controversies, and metrics for assessing the efficiency and overall quality of peer review. We then discuss five approaches, including reputation-based ones, that come out of the research carried out by the LiquidPub project and research groups that collaborated with LiquidPub. These approaches are alternative or complementary to traditional peer review. We discuss the pros and cons of the proposed approaches and conclude with a vision for the future of research evaluation, arguing that no single system can suit all stakeholders in various communities. PMID:22174702

  18. Evaluation of a learner-designed course for teaching health research skills in Ghana

    PubMed Central

    Bates, Imelda; Ansong, Daniel; Bedu-Addo, George; Agbenyega, Tsiri; Akoto, Alex Yaw Osei; Nsiah-Asare, Anthony; Karikari, Patrick

    2007-01-01

    Background In developing countries the ability to conduct locally-relevant health research and high quality education are key tools in the fight against poverty. The objective of our study was to evaluate the effectiveness of a novel UK accredited, learner-designed research skills course delivered in a teaching hospital in Ghana. Methods Study participants were 15 mixed speciality health professionals from Komfo Anokye Teaching Hospital, Kumasi, Ghana. Effectiveness measures included process, content and outcome indicators to evaluate changes in learners' confidence and competence in research, and assessment of the impact of the course on changing research-related thinking and behaviour. Results were verified using two independent methods. Results 14/15 learners gained research competence assessed against UK Quality Assurance Agency criteria. After the course there was a 36% increase in the groups' positive responses to statements concerning confidence in research-related attitudes, intentions and actions. The greatest improvement (45% increase) was in learners' actions, which focused on strengthening institutional research capacity. 79% of paired before/after responses indicated positive changes in individual learners' research-related attitudes (n = 53), 81% in intention (n = 52) and 85% in action (n = 52). The course had increased learners' confidence to start and manage research, and enhanced life-long skills such as reflective practice and self-confidence. Doing their own research within the work environment, reflecting on personal research experiences and utilising peer support and pooled knowledge were critical elements that promoted learning. Conclusion Learners in Ghana were able to design and undertake a novel course that developed individual and institutional research capacity and met international standards. Learning by doing and a supportive peer community at work were critical elements in promoting learning in this environment where tutors were scarce

  19. Study of Methods for Assessing Research Topic Elicitation and pRioritization (SMARTER): Study Protocol to Compare Qualitative Research Methods and Advance Patient Engagement in Research

    PubMed Central

    Comstock, Bryan

    2017-01-01

    Background Involving patients as partners in research is a defining characteristic of patient-centered outcomes research (PCOR). While patients’ experiential knowledge of a health condition or treatment may yield research priorities not reflected by researchers and policy makers, the methods for identifying and effectively collaborating with patients are still evolving. Patient registries and crowdsourcing may offer ease of access and convenience to both researchers and patients. Surveys and focus groups, including online modalities, have been described for prioritizing research topics. However, little is known about how these different methods compare in producing consistent priorities and similar perceptions of engagement quality among participants. Objective The aims of this study are (1) to compare how different engagement methods used to elicit patient priorities for research perform as measured by rankings for priorities generated and participant satisfaction; and (2) to determine characteristics of individuals choosing to participate in research prioritization activities. Methods Participants in the Back pain Outcomes using Longitudinal Data (BOLD) patient registry, established to evaluate the natural history of back pain among individuals 65 years and older, and participants on the Amazon Mechanical Turk (MTurk) crowdsourcing platform are invited to provide input on priorities for research via a questionnaire. For BOLD participants, we subsequently randomize interested respondents to 1 of 3 interactive prioritization activities to further develop priorities: a Delphi panel, an online crowd voting activity, or an in-person facilitated prioritization activity using nominal group technique (NGT). Participants involved in each activity complete a survey to evaluate the quality of the experience and a subset of these participants discuss their experience further in an interview. Descriptive statistics are used to characterize the rankings produced by each

  20. Evaluating meta-ethnography: systematic analysis and synthesis of qualitative research.

    PubMed

    Campbell, R; Pound, P; Morgan, M; Daker-White, G; Britten, N; Pill, R; Yardley, L; Pope, C; Donovan, J

    2011-12-01

    Methods for reviewing and synthesising findings from quantitative research studies in health care are well established. Although there is recognition of the need for qualitative research to be brought into the evidence base, there is no consensus about how this should be done and the methods for synthesising qualitative research are at a relatively early stage of development. To evaluate meta-ethnography as a method for synthesising qualitative research studies in health and health care. Two full syntheses of qualitative research studies were conducted between April 2002 and September 2004 using meta-ethnography: (1) studies of medicine-taking and (2) studies exploring patients' experiences of living with rheumatoid arthritis. Potentially relevant studies identified in multiple literature searches conducted in July and August 2002 (electronically and by hand) were appraised using a modified version of the Critical Appraisal Skills Programme questions for understanding qualitative research. Candidate papers were excluded on grounds of lack of relevance to the aims of the synthesis or because the work failed to employ qualitative methods of data collection and analysis. Thirty-eight studies were entered into the medicine-taking synthesis, one of which did not contribute to the final synthesis. The synthesis revealed a general caution about taking medicine, and that the practice of lay testing of medicines was widespread. People were found to take their medicine passively or actively or to reject it outright. Some, in particular clinical areas, were coerced into taking it. Those who actively accepted their medicine often modified the regimen prescribed by a doctor, without the doctor's knowledge. The synthesis concluded that people often do not take their medicines as prescribed because of concern about the medicines themselves. 'Resistance' emerged from the synthesis as a concept that best encapsulated the lay response to prescribed medicines. It was suggested that a

  1. Planting Healthy Roots: Using Documentary Film to Evaluate and Disseminate Community-Based Participatory Research.

    PubMed

    Brandt, Heather M; Freedman, Darcy A; Friedman, Daniela B; Choi, Seul Ki; Seel, Jessica S; Guest, M Aaron; Khang, Leepao

    2016-01-01

    Documentary filmmaking approaches incorporating community engagement and awareness raising strategies may be a promising approach to evaluate community-based participatory research. The study purpose was 2-fold: (1) to evaluate a documentary film featuring the formation and implementation of a farmers' market and (2) to assess whether the film affected awareness regarding food access issues in a food-desert community with high rates of obesity. The coalition model of filmmaking, a model consistent with a community-based participatory research (CBPR) approach, and personal stories, community profiles, and expert interviews were used to develop a documentary film (Planting Healthy Roots). The evaluation demonstrated high levels of approval and satisfaction with the film and CBPR essence of the film. The documentary film aligned with a CBPR approach to document, evaluate, and disseminate research processes and outcomes.

  2. Design and Evaluation of a Personal Digital Assistant-based Research Platform for Cochlear Implants

    PubMed Central

    Ali, Hussnain; Lobo, Arthur P.; Loizou, Philipos C.

    2014-01-01

    This paper discusses the design, development, features, and clinical evaluation of a personal digital assistant (PDA)-based platform for cochlear implant research. This highly versatile and portable research platform allows researchers to design and perform complex experiments with cochlear implants manufactured by Cochlear Corporation with great ease and flexibility. The research platform includes a portable processor for implementing and evaluating novel speech processing algorithms, a stimulator unit which can be used for electrical stimulation and neurophysiologic studies with animals, and a recording unit for collecting electroencephalogram/evoked potentials from human subjects. The design of the platform for real-time and offline stimulation modes is discussed for electric-only and electric-plus-acoustic stimulation, followed by results from an acute study with implant users for speech intelligibility in quiet and noisy conditions. The results are comparable with users’ clinical processors and very promising for undertaking chronic studies. PMID:23674422

  3. Impact of Referral Source and Study Applicants’ Preference for Randomly Assigned Service on Research Enrollment, Service Engagement, and Evaluative Outcomes

    PubMed Central

    Macias, Cathaleene; Barreira, Paul; Hargreaves, William; Bickman, Leonard; Fisher, William; Aronson, Elliot

    2009-01-01

    Objective The inability to blind research participants to their experimental conditions is the Achilles’ heel of mental health services research. When one experimental condition receives more disappointed participants, or more satisfied participants, research findings can be biased in spite of random assignment. The authors explored the potential for research participants’ preference for one experimental program over another to compromise the generalizability and validity of randomized controlled service evaluations as well as cross-study comparisons. Method Three Cox regression analyses measured the impact of applicants’ service assignment preference on research project enrollment, engagement in assigned services, and a service-related outcome, competitive employment. Results A stated service preference, referral by an agency with a low level of continuity in outpatient care, and willingness to switch from current services were significant positive predictors of research enrollment. Match to service assignment preference was a significant positive predictor of service engagement, and mismatch to assignment preference was a significant negative predictor of both service engagement and employment outcome. Conclusions Referral source type and service assignment preference should be routinely measured and statistically controlled for in all studies of mental health service effectiveness to provide a sound empirical base for evidence-based practice. PMID:15800153

  4. Neuroscience-related research in Ghana: a systematic evaluation of direction and capacity.

    PubMed

    Quansah, Emmanuel; Karikari, Thomas K

    2016-02-01

    Neurological and neuropsychiatric diseases account for considerable healthcare, economic and social burdens in Ghana. In order to effectively address these burdens, appropriately-trained scientists who conduct high-impact neuroscience research will be needed. Additionally, research directions should be aligned with national research priorities. However, to provide information about current neuroscience research productivity and direction, the existing capacity and focus need to be identified. This would allow opportunities for collaborative research and training to be properly explored and developmental interventions to be better targeted. In this study, we sought to evaluate the existing capacity and direction of neuroscience-related research in Ghana. To do this, we examined publications reporting research investigations authored by scientists affiliated with Ghanaian institutions in specific areas of neuroscience over the last two decades (1995-2015). 127 articles that met our inclusion criteria were systematically evaluated in terms of research foci, annual publication trends and author affiliations. The most actively-researched areas identified include neurocognitive impairments in non-nervous system disorders, depression and suicide, epilepsy and seizures, neurological impact of substance misuse, and neurological disorders. These studies were mostly hospital and community-based surveys. About 60% of these articles were published in the last seven years, suggesting a recent increase in research productivity. However, data on experimental and clinical research outcomes were particularly lacking. We suggest that future investigations should focus on the following specific areas where information was lacking: large-scale disease epidemiology, effectiveness of diagnostic platforms and therapeutic treatments, and the genetic, genomic and molecular bases of diseases.

  5. Assessing Statistical Change Indices in Selected Social Work Intervention Research Studies

    ERIC Educational Resources Information Center

    Ham, Amanda D.; Huggins-Hoyt, Kimberly Y.; Pettus, Joelle

    2016-01-01

    Objectives: This study examined how evaluation and intervention research (IR) studies assessed statistical change to ascertain effectiveness. Methods: Studies from six core social work journals (2009-2013) were reviewed (N = 1,380). Fifty-two evaluation (n= 27) and intervention (n = 25) studies met the inclusion criteria. These studies were…

  6. Implementing health research through academic and clinical partnerships: a realistic evaluation of the Collaborations for Leadership in Applied Health Research and Care (CLAHRC).

    PubMed

    Rycroft-Malone, Jo; Wilkinson, Joyce E; Burton, Christopher R; Andrews, Gavin; Ariss, Steven; Baker, Richard; Dopson, Sue; Graham, Ian; Harvey, Gill; Martin, Graham; McCormack, Brendan G; Staniszewska, Sophie; Thompson, Carl

    2011-07-19

    The English National Health Service has made a major investment in nine partnerships between higher education institutions and local health services called Collaborations for Leadership in Applied Health Research and Care (CLAHRC). They have been funded to increase capacity and capability to produce and implement research through sustained interactions between academics and health services. CLAHRCs provide a natural 'test bed' for exploring questions about research implementation within a partnership model of delivery. This protocol describes an externally funded evaluation that focuses on implementation mechanisms and processes within three CLAHRCs. It seeks to uncover what works, for whom, how, and in what circumstances. This study is a longitudinal three-phase, multi-method realistic evaluation, which deliberately aims to explore the boundaries around knowledge use in context. The evaluation funder wishes to see it conducted for the process of learning, not for judging performance. The study is underpinned by a conceptual framework that combines the Promoting Action on Research Implementation in Health Services and Knowledge to Action frameworks to reflect the complexities of implementation. Three participating CLAHRCs will provide in-depth comparative case studies of research implementation using multiple data collection methods including interviews, observation, documents, and publicly available data to test and refine hypotheses over four rounds of data collection. We will test the wider applicability of emerging findings with a wider community using an interpretative forum. The idea that collaboration between academics and services might lead to more applicable health research that is actually used in practice is theoretically and intuitively appealing; however the evidence for it is limited. Our evaluation is designed to capture the processes and impacts of collaborative approaches for implementing research, and therefore should contribute to the evidence

  7. Implementing health research through academic and clinical partnerships: a realistic evaluation of the Collaborations for Leadership in Applied Health Research and Care (CLAHRC)

    PubMed Central

    2011-01-01

    Background The English National Health Service has made a major investment in nine partnerships between higher education institutions and local health services called Collaborations for Leadership in Applied Health Research and Care (CLAHRC). They have been funded to increase capacity and capability to produce and implement research through sustained interactions between academics and health services. CLAHRCs provide a natural 'test bed' for exploring questions about research implementation within a partnership model of delivery. This protocol describes an externally funded evaluation that focuses on implementation mechanisms and processes within three CLAHRCs. It seeks to uncover what works, for whom, how, and in what circumstances. Design and methods This study is a longitudinal three-phase, multi-method realistic evaluation, which deliberately aims to explore the boundaries around knowledge use in context. The evaluation funder wishes to see it conducted for the process of learning, not for judging performance. The study is underpinned by a conceptual framework that combines the Promoting Action on Research Implementation in Health Services and Knowledge to Action frameworks to reflect the complexities of implementation. Three participating CLAHRCs will provide in-depth comparative case studies of research implementation using multiple data collection methods including interviews, observation, documents, and publicly available data to test and refine hypotheses over four rounds of data collection. We will test the wider applicability of emerging findings with a wider community using an interpretative forum. Discussion The idea that collaboration between academics and services might lead to more applicable health research that is actually used in practice is theoretically and intuitively appealing; however the evidence for it is limited. Our evaluation is designed to capture the processes and impacts of collaborative approaches for implementing research, and

  8. [Research progress on mechanical performance evaluation of artificial intervertebral disc].

    PubMed

    Li, Rui; Wang, Song; Liao, Zhenhua; Liu, Weiqiang

    2018-03-01

    The mechanical properties of an artificial intervertebral disc (AID) are related to the long-term reliability of the prosthesis. Three testing methods, based on different tools, are involved in the mechanical performance evaluation of AIDs: testing with a mechanical simulator, in vitro specimen testing, and finite element analysis. In this study, the testing standards, testing equipment and materials for AIDs are first introduced. The present status of AID static mechanical property tests (static axial compression, static axial compression-shear), dynamic mechanical property tests (dynamic axial compression, dynamic axial compression-shear), creep and stress relaxation tests, device push-out tests, core push-out tests, subsidence tests, etc. is then reviewed. The experimental techniques of the in vitro specimen testing method and the test results for available artificial discs are summarized, as are the experimental methods and research status of finite element analysis. Finally, research trends in AID mechanical performance evaluation are forecast: the simulator, load, dynamic cycle, motion mode, specimen and test standard will be important research fields in the future.

  9. Applying evidence from economic evaluations to translate cancer survivorship research into care.

    PubMed

    de Moor, Janet S; Alfano, Catherine M; Breen, Nancy; Kent, Erin E; Rowland, Julia

    2015-09-01

    This paper summarizes recommendations stemming from the meeting, Applying Evidence from Economic Evaluations to Translate Cancer Survivorship Research into Care, hosted by the National Cancer Institute. The meeting convened funded investigators, experts in cancer control, survivorship, health economics, and team science to identify the economic and health services data needed to facilitate the dissemination of cancer survivorship interventions into care and how survivorship and health economic investigators can successfully collaborate with each other and with other stakeholders. Recommendations from the meeting are as follows. First, investigators must engage key stakeholders early in the planning process to understand the outcomes and cost domains on which they base decisions. Second, evaluations of intervention efficacy and value should be conducted using standardized and comparable measures and analytic approaches to enable comparisons across studies. Finally, a health economist should be included during the planning phase of the study so that the economic evaluation is pursued in concert with the survivorship intervention. Economic analyses, from the perspective of key stakeholders, must be incorporated into survivorship intervention research. The results from these analyses should be disseminated in a manner that is transparent, accessible, and comparable across studies. To optimize cancer survivors' health and quality of life, it is essential to deliver high-quality and high-value care. Incorporating economic analyses into survivorship intervention research can inform the translation of effective interventions into practice.

  10. Novel bibliometric scores for evaluating research quality and output: a correlation study with established indexes.

    PubMed

    Scotti, Valeria; De Silvestri, Annalisa; Scudeller, Luigia; Abele, Paola; Topuz, Funda; Curti, Moreno

    2016-12-23

    Novel bibliometric indexes (commonly known as altmetrics) are gaining interest within the scientific community and might represent an important alternative measure of research quality and output. We evaluate how these new metrics correlate with established bibliometric indexes such as the impact factor (IF), currently used as a measure of scientific production as well as a criterion for scientific research funding, and how they might be helpful in assessing the impact of research. We calculated altmetrics scores for all the articles published at our institution during a single year and examined the correlation between altmetrics scores and IFs as a measure of research quality and impact in all departments. For all articles from the various departments published in a single year, the altmetrics score and the sum of all IFs showed a strong and significant correlation (Spearman's rho 0.88). The correlation was also significant when the major components of altmetrics, including Facebook, Twitter and Mendeley, were analyzed. The implementation of altmetrics has been found to be easy and effective at both the researcher and librarian levels. The novel bibliometric index altmetrics is consistent and reliable and can complement or be considered a valid alternative to standard bibliometric indexes to benchmark the output and quality of research for academic and funding purposes.
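
    For illustration only: a minimal sketch of the kind of rank correlation reported above (Spearman's rho between department-level altmetrics totals and summed impact factors), using invented department names and values rather than the study's data.

```python
# Hypothetical example: Spearman correlation between per-department altmetrics
# totals and summed journal impact factors. All values are made up.
from scipy.stats import spearmanr

departments = ["Cardiology", "Oncology", "Neurology", "Surgery", "Pediatrics"]
altmetric_sum = [412.0, 655.5, 198.0, 87.5, 301.0]   # sum of altmetrics scores per department
impact_factor_sum = [96.4, 142.7, 55.1, 23.9, 78.2]  # sum of journal IFs per department

rho, p_value = spearmanr(altmetric_sum, impact_factor_sum)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```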

  11. Evaluation of university scientific research ability based on the output of sci-tech papers: A D-AHP approach.

    PubMed

    Zong, Fan; Wang, Lifang

    2017-01-01

    University scientific research ability is an important indicator of the strength of universities. In this paper, the evaluation of university scientific research ability is investigated based on the output of sci-tech papers. Four university alliances from North America, the UK, Australia, and China are selected as case studies for the evaluation. Data from Thomson Reuters InCites are collected to support the evaluation. The work contributes a new framework to the evaluation of university scientific research ability. First, we establish a hierarchical structure to show the factors that affect the evaluation of university scientific research ability. Then, a new MCDM method called the D-AHP model is used to implement the evaluation and ranking of the different university alliances, in which a data-driven approach is proposed to automatically generate the D numbers preference relations. Next, a sensitivity analysis is given to show the impact of the weights of factors and sub-factors on the evaluation result. Finally, the results obtained using different methods are compared and discussed to verify the effectiveness and reasonability of this study, and some suggestions are given to promote China's scientific research ability.
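
    As a rough illustration of the family of methods involved: the sketch below implements only classical AHP priority weighting from a pairwise comparison matrix, not the paper's D-AHP model with D numbers preference relations; the factors and judgment values are invented.

```python
# Classical AHP priority weights via the principal eigenvector of a pairwise
# comparison matrix (Saaty 1-9 scale). Factors and judgments are hypothetical.
import numpy as np

# Hypothetical pairwise comparisons for three factors
# (e.g., paper output vs. citation impact vs. collaboration).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)               # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                     # normalized priority weights
print("priority weights:", np.round(weights, 3))

# Consistency check: CI = (lambda_max - n) / (n - 1); RI = 0.58 for n = 3
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("consistency ratio:", round(ci / 0.58, 3))
```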

  12. Research designs for studies evaluating the effectiveness of change and improvement strategies.

    PubMed

    Eccles, M; Grimshaw, J; Campbell, M; Ramsay, C

    2003-02-01

    The methods of evaluating change and improvement strategies are not well described. The design and conduct of a range of experimental and non-experimental quantitative designs are considered. Such study designs should usually be used in a context where they build on appropriate theoretical, qualitative and modelling work, particularly in the development of appropriate interventions. A range of experimental designs are discussed including single and multiple arm randomised controlled trials and the use of more complex factorial and block designs. The impact of randomisation at both group and individual levels and three non-experimental designs (uncontrolled before and after, controlled before and after, and time series analysis) are also considered. The design chosen will reflect both the needs (and resources) in any particular circumstances and also the purpose of the evaluation. The general principle underlying the choice of evaluative design is, however, simple: those conducting such evaluations should use the most robust design possible to minimise bias and maximise generalisability.
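
    To make one of the non-experimental designs mentioned above concrete, here is a minimal, hypothetical sketch of a time series (segmented regression) analysis on simulated monthly data; all variable names and values are invented.

```python
# Segmented regression for an interrupted time series: estimates baseline trend,
# level change at the intervention, and post-intervention trend change.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(24)                   # months 0..23, intervention after month 11
post = (months >= 12).astype(float)      # 1 in the post-intervention period
time_since = np.where(post == 1, months - 12, 0)

# Simulated outcome: baseline trend + level drop + slope change + noise
y = 50 + 0.2 * months - 6 * post - 0.3 * time_since + rng.normal(0, 1.5, 24)

X = sm.add_constant(np.column_stack([months, post, time_since]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # [intercept, baseline trend, level change, trend change]
```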

  13. Interpreting Outcomes: Using Focus Groups in Evaluation Research

    ERIC Educational Resources Information Center

    Ansay, Sylvia J.; Perkins, Daniel F.; Nelson, John

    2004-01-01

    Although focus groups continue to gain popularity in marketing and social science research, their use in program evaluation has been limited. Here we demonstrate how focus groups can benefit evaluators, program staff, policy makers and administrators by providing an in-depth understanding of program effectiveness from the perspective of…

  14. Evaluation of WYDOT's research center and research program.

    DOT National Transportation Integrated Search

    2008-03-01

    This study examined multiple aspects of the Wyoming Department of Transportation's Research Program. It provides numerous observations of the overall program and the research investment portfolio as well as guidance for developing a strategic res...

  15. Teaching Research and Practice Evaluation Skills to Graduate Social Work Students

    ERIC Educational Resources Information Center

    Wong, Stephen E.; Vakharia, Sheila P.

    2012-01-01

    Objective: The authors examined outcomes of a graduate course on evaluating social work practice that required students to use published research, quantitative measures, and single-system designs in a simulated practice evaluation project. Method: Practice evaluation projects from a typical class were analyzed for the number of research references…

  16. Integrating knowledge generation with knowledge diffusion and utilization: a case study analysis of the Consortium for Applied Research and Evaluation in Mental Health.

    PubMed

    Vingilis, Evelyn; Hartford, Kathleen; Schrecker, Ted; Mitchell, Beth; Lent, Barbara; Bishop, Joan

    2003-01-01

    Knowledge diffusion and utilization (KDU) have become a key focus in the health research community because of the limited success to date of research findings to inform health policies, programs and services. Yet, evidence indicates that successful KDU is often predicated on the early involvement of potential knowledge users in the conceptualization and conduct of the research and on the development of a "partnership culture". This study describes the integration of KDU theory with practice via a case study analysis of the Consortium for Applied Research and Evaluation in Mental Health (CAREMH). This qualitative study, using a single-case design, included a number of data sources: proposals, meeting minutes, presentations, publications, reports and curricula vitae of CAREMH members. CAREMH has adopted the following operational strategies to increase KDU capacity: 1) viewing research as a means and not as an end; 2) bringing the university and researcher to the community; 3) using participatory research methods; 4) embracing transdisciplinary research and interactions; and 5) using connectors. Examples of the iterative process between researchers and potential knowledge users in their contribution to knowledge generation, diffusion and utilization are provided. This case study supports the importance of early and ongoing involvement of relevant potential knowledge users in research to enhance its utilization potential. It also highlights the need for re-thinking research funding approaches.

  17. Designing and evaluating a STEM teacher learning opportunity in the research university.

    PubMed

    Hardré, Patricia L; Ling, Chen; Shehab, Randa L; Herron, Jason; Nanny, Mark A; Nollert, Matthias U; Refai, Hazem; Ramseyer, Christopher; Wollega, Ebisa D

    2014-04-01

    This study examines the design and evaluation strategies for a year-long teacher learning and development experience, including their effectiveness, efficiency and recommendations for strategic redesign. Design characteristics include programmatic features and outcomes: cognitive, affective and motivational processes; interpersonal and social development; and performance activities. Program participants were secondary math and science teachers, partnered with engineering faculty mentors, in a research university-based education and support program. Data from multiple sources demonstrated strengths and weaknesses in design of the program's learning environment, including: face-to-face and via digital tools; on-site and distance community interactions; and strategic evaluation tools and systems. Implications are considered for the strategic design and evaluation of similar grant-funded research experiences intended to support teacher learning, development and transfer. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Exploring perceived barriers, drivers, impacts and the need for evaluation of public involvement in health and social care research: a modified Delphi study

    PubMed Central

    Snape, D; Kirkham, J; Britten, N; Froggatt, K; Gradinger, F; Lobban, F; Popay, Jennie; Wyatt, K; Jacoby, Ann

    2014-01-01

    Objective To explore areas of consensus and conflict in relation to perceived public involvement (PI) barriers and drivers, perceived impacts of PI and ways of evaluating PI approaches in health and social care research. Background Internationally and within the UK the recognition of potential benefits of PI in health and social care research is gathering momentum and PI is increasingly identified by organisations as a prerequisite for funding. However, there is relatively little examination of the impacts of PI and how those impacts might be measured. Design Mixed method, three-phase, modified Delphi technique, conducted as part of a larger MRC multiphase project. Sample Clinical and non-clinical academics, members of the public, research managers, commissioners and funders. Findings This study found high levels of consensus about the most important barriers and drivers to PI. There was acknowledgement that tokenism was common in relation to PI; and strong support for the view that demonstrating the impacts and value of PI was made more difficult by tokenistic practice. PI was seen as having intrinsic value; nonetheless, there was clear support for the importance of evaluating its impact. Research team cohesion and appropriate resources were considered essential to effective PI implementation. Panellists agreed that PI can be challenging, but can be facilitated by clear guidance, together with models of good practice and measurable standards. Conclusions This study is the first to present empirical evidence of the opinions voiced by key stakeholders on areas of consensus and conflict in relation to perceived PI barriers and drivers, perceived impacts of PI and the need to evaluate PI. As such it further contributes to debate around best practice in PI, the potential for tokenism and how best to evaluate the impacts of PI. These findings have been used in the development of the Public Involvement Impact Assessment Framework (PiiAF), an online resource which offers

  19. Academic research groups: evaluation of their quality and quality of their evaluation

    NASA Astrophysics Data System (ADS)

    Berche, Bertrand; Holovatch, Yuri; Kenna, Ralph; Mryglod, Olesya

    2016-02-01

    In recent years, evaluation of the quality of academic research has become an increasingly important and influential business. It determines, often to a large extent, the amount of research funding flowing into universities and similar institutes from governmental agencies and it impacts upon academic careers. Policy makers are becoming increasingly reliant upon, and influenced by, the outcomes of such evaluations. In response, university managers are increasingly attracted to simple metrics as guides to the dynamics of the positions of their various institutions in league tables. However, these league tables are invariably drawn up by inexpert bodies such as newspapers and magazines, using arbitrary measures and criteria. Terms such as “critical mass” and “h-index” are bandied about without understanding of what they actually mean. Rather than accepting the rise and fall of universities, departments and individuals on a turbulent sea of arbitrary measures, we suggest it is incumbent upon the scientific community itself to clarify their nature. Here we report on recent attempts to do that by properly defining critical mass and showing how group size influences research quality. We also examine currently predominant metrics and show that these fail as reliable indicators of group research quality.

  20. Some Observations on the Relationships Between Research Productivity and Student Evaluations of Courses and Teaching.

    ERIC Educational Resources Information Center

    Stallings, William A.; Singhal, Sushila

    "Does the good researcher tend to be a good teacher, and vice versa?" University administrators contend that teaching and research are equally important, though students claim that researchers neglect teaching and professors claim that only their research efforts are rewarded. In this study, course and instructor evaluations were defined…

  1. Mediation Effect of Research Skills Proficiency on the Core Self-Evaluations--Research Engagement Relationship among Master of Education Students in Uganda

    ERIC Educational Resources Information Center

    Atibuni, Dennis Zami; Olema, David Kani; Ssenyonga, Joseph; Karl, Steffens; Kibanja, Grace Milly

    2017-01-01

    This study investigated the mediation effect of research skills proficiency on the relationship between core self-evaluations and research engagement among Master of Education students in Uganda. Questionnaire surveys including closed ended questions were administered to two cohorts of the students, 2011/2012 and 2012/2013, (N = 102). Results…

  2. Evaluating International Research Ethics Capacity Development: An Empirical Approach

    PubMed Central

    Ali, Joseph; Kass, Nancy E.; Sewankambo, Nelson K.; White, Tara D.; Hyder, Adnan A.

    2014-01-01

    The US National Institutes of Health, Fogarty International Center (NIH-FIC) has, for the past 13 years, been a leading funder of international research ethics education for resource-limited settings. Nearly half of the NIH-FIC funding in this area has gone to training programs that train individuals from sub-Saharan Africa. Identifying the impact of training investments, as well as the potential predictors of post-training success, can support curricular decision-making, help establish funding priorities, and recognize the ultimate outcomes of trainees and training programs. Comprehensive evaluation frameworks and targeted evaluation tools for bioethics training programs generally, and for international research ethics programs in particular, are largely absent from published literature. This paper shares an original conceptual framework, data collection tool, and detailed methods for evaluating the inputs, processes, outputs, and outcomes of research ethics training programs serving individuals in resource-limited settings. This paper is part of a collection of papers analyzing the Fogarty International Center’s International Research Ethics Education and Curriculum Development program. PMID:24782071

  3. Current Research Studies

    MedlinePlus

    The Crohn’s & Colitis Foundation ... conducted online. Learn more about IBD Partners. The Clinical Research Alliance is a network ...

  4. Enhancing Research Capacity for Global Health: Evaluation of a Distance-Based Program for International Study Coordinators

    ERIC Educational Resources Information Center

    Wilson, Lynda Law; Rice, Marti; Jones, Carolynn T.; Joiner, Cynthia; LaBorde, Jennifer; McCall, Kimberly; Jester, Penelope M.; Carter, Sheree C.; Boone, Chrissy; Onwuzuligbo, Uzoma; Koneru, Alaya

    2013-01-01

    Introduction: Due to the increasing number of clinical trials conducted globally, there is a need for quality continuing education for health professionals in clinical research manager (CRM) roles. This article describes the development, implementation, and evaluation of a distance-based continuing education program for CRMs working outside the…

  5. Evaluating the design and reporting of pragmatic trials in osteoarthritis research.

    PubMed

    Ali, Shabana Amanda; Kloseck, Marita; Lee, Karen; Walsh, Kathleen Ellen; MacDermid, Joy C; Fitzsimmons, Deborah

    2018-01-01

    Among the challenges in health research is translating interventions from controlled experimental settings to clinical and community settings where chronic disease is managed daily. Pragmatic trials offer a method for testing interventions in real-world settings but are seldom used in OA research. The aim of this study was to evaluate the literature on pragmatic trials in OA research up to August 2016 in order to identify strengths and weaknesses in the design and reporting of these trials. We used established guidelines to assess the degree to which 61 OA studies complied with pragmatic trial design and reporting. We assessed design according to the pragmatic-explanatory continuum indicator summary and reporting according to the pragmatic trials extension of the CONsolidated Standards of Reporting Trials guidelines. None of the pragmatic trials met all 11 criteria evaluated and most of the trials met between 5 and 8 of the criteria. Criteria most often unmet pertained to practitioner expertise (by requiring specialists) and criteria most often met pertained to primary outcome analysis (by using intention-to-treat analysis). Our results suggest a lack of highly pragmatic trials in OA research. We identify this as a point of opportunity to improve research translation, since optimizing the design and reporting of pragmatic trials can facilitate implementation of evidence-based interventions for OA care. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  6. Using the Nine Common Themes of Good Practice checklist as a tool for evaluating the research priority setting process of a provincial research and program evaluation program.

    PubMed

    Mador, Rebecca L; Kornas, Kathy; Simard, Anne; Haroun, Vinita

    2016-03-23

    Given the context-specific nature of health research prioritization and the obligation to effectively allocate resources to initiatives that will achieve the greatest impact, evaluation of priority setting processes can refine and strengthen such exercises and their outcomes. However, guidance is needed on evaluation tools that can be applied to research priority setting. This paper describes the adaptation and application of a conceptual framework to evaluate a research priority setting exercise operating within the public health sector in Ontario, Canada. The Nine Common Themes of Good Practice checklist, described by Viergever et al. (Health Res Policy Syst 8:36, 2010) was used as the conceptual framework to evaluate the research priority setting process developed for the Locally Driven Collaborative Projects (LDCP) program in Ontario, Canada. Multiple data sources were used to inform the evaluation, including a review of selected priority setting approaches, surveys with priority setting participants, document review, and consultation with the program advisory committee. The evaluation assisted in identifying improvements to six elements of the LDCP priority setting process. The modifications were aimed at improving inclusiveness, information gathering practices, planning for project implementation, and evaluation. In addition, the findings identified that the timing of priority setting activities and level of control over the process were key factors that influenced the ability to effectively implement changes. The findings demonstrate the novel adaptation and application of the 'Nine Common Themes of Good Practice checklist' as a tool for evaluating a research priority setting exercise. The tool can guide the development of evaluation questions and enables the assessment of key constructs related to the design and delivery of a research priority setting process.

  7. Research and Evaluation in Operational Competency-Based Teacher Education Programs.

    ERIC Educational Resources Information Center

    Dickson, George E., Ed.

    1975-01-01

    This is a collection of papers presented at a 1974 conference on research and evaluation in operational competency-based teacher education (CBTE) programs. Two conceptual models for research and evaluation of CBTE activities were presented at the conference and the presentations of these models are the first two chapters of this collection: "A…

  8. Integrating and evaluating sex and gender in health research.

    PubMed

    Day, Suzanne; Mason, Robin; Lagosky, Stephanie; Rochon, Paula A

    2016-10-10

    Both sex (biological factors) and gender (socio-cultural factors) shape health. To produce the best possible health research evidence, it is essential to integrate sex and gender considerations throughout the research process. Despite growing recognition of the importance of these factors, progress towards sex and gender integration as standard practice has been both slow and uneven in health research. In this commentary, we examine the challenges of integrating sex and gender from the research perspective, as well as strategies that can be used by researchers, funders and journal editors to address these challenges. Barriers to the integration of sex and gender in health research include problems with inconsistent terminology, difficulties in applying the concepts of sex and gender, failure to recognise the impact of sex and gender, and challenges with data collection and datasets. We analyse these barriers as strategic points of intervention for improving the integration of sex and gender at all stages of the research process. To assess the relative success of these strategies in any given study, researchers, funders and journal editors would benefit from a tool to evaluate the quality of sex and gender integration in order to establish benchmarks in research excellence. These assessment tools are needed now amidst growing institutional recognition that both sex and gender are necessary elements for advancing the quality and utility of health research evidence.

  9. Evaluation of a 'virtual' approach to commissioning health research

    PubMed Central

    McCourt, Christine A; Morgan, Philip A; Youll, Penny

    2006-01-01

    Background The objective of this study was to evaluate the implementation of a 'virtual' (computer-mediated) approach to health research commissioning. This had been introduced experimentally in a DOH programme – the 'Health of Londoners Programme' – in order to assess whether it could enhance the accessibility, transparency and effectiveness of commissioning health research. The study described here was commissioned to evaluate this novel approach, addressing these key questions. Methods A naturalistic-experimental approach was combined with principles of action research. The different commissioning groups within the programme were randomly allocated to either the traditional face-to-face mode or the novel 'virtual' mode. Mainly qualitative data were gathered including observation of all (virtual and face-to-face) commissioning meetings; semi-structured interviews with a purposive sample of participants (n = 32/66); structured questionnaires and interviews with lead researchers of early commissioned projects. All members of the commissioning groups were invited to participate in collaborative enquiry groups which participated actively in the analysis process. Results The virtual process functioned as intended, reaching timely and relatively transparent decisions that participants had confidence in. Despite the potential for greater access using a virtual approach, few differences were found in practice. Key advantages included physical access, a more flexible and extended time period for discussion, reflection and information gathering and a more transparent decision-making process. Key challenges were the reduction of social cues available in a computer-mediated medium that require novel ways of ensuring appropriate dialogue, feedback and interaction. However, in both modes, the process was influenced by a range of factors and was not technology driven. Conclusion There is potential for using computer-mediated communication within the research commissioning

  10. Evaluations of Sexual Assault Prevention Programs in Military Settings: A Synthesis of the Research Literature.

    PubMed

    Orchowski, Lindsay M; Berry-Cabán, Cristóbal S; Prisock, Kara; Borsari, Brian; Kazemi, Donna M

    2018-03-01

    The prevention of sexual assault (SA) in the U.S. military is a significant priority. This study applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to a literature search that identified research evaluating SA prevention programs conducted within military settings. Only six studies published between 2005 and 2016 met criteria for inclusion in the review. Studies demonstrated high heterogeneity in: (1) the conceptual framework of the prevention approach; (2) the target population and timing of administration; (3) study recruitment methods; (4) methodological design; (5) method of delivery, program dosage and theory of change; and (6) outcome administration and efficacy. Scientific rigor according to the Oxford Centre for Evidence-Based Medicine was also variable. Several gaps in the research base were identified. Specifically, research evaluating SA prevention programs has only been conducted among U.S. Army and U.S. Navy samples. Most studies did not examine whether program participation was associated with reductions in rates of sexual violence. Studies also lacked long-term follow-up. Additionally, studies did not reflect the types of SA prevention programs currently being implemented in military settings. Taken together, further research is needed to enhance the evidence base for SA prevention in the military, and to evaluate the effectiveness of the approaches currently being conducted with service members.

  11. Evaluating Community-Based Participatory Research to Improve Community-Partnered Science and Community Health

    PubMed Central

    Hicks, Sarah; Duran, Bonnie; Wallerstein, Nina; Avila, Magdalena; Belone, Lorenda; Lucero, Julie; Magarati, Maya; Mainer, Elana; Martin, Diane; Muhammad, Michael; Oetzel, John; Pearson, Cynthia; Sahota, Puneet; Simonds, Vanessa; Sussman, Andrew; Tafoya, Greg; Hat, Emily White

    2013-01-01

    Background Since 2007, the National Congress of American Indians (NCAI) Policy Research Center (PRC) has partnered with the Universities of New Mexico and Washington to study the science of community-based participatory research (CBPR). Our goal is to identify facilitators and barriers to effective community–academic partnerships in American Indian and other communities, which face health disparities. Objectives We have described herein the scientific design of our National Institutes of Health (NIH)-funded study (2009–2013) and lessons learned by having a strong community partner leading the research efforts. Methods The research team is implementing a mixed-methods study involving a survey of principal investigators (PIs) and partners across the nation and in-depth case studies of CBPR projects. Results We present preliminary findings on methods and measures for community-engaged research and eight lessons learned thus far regarding partnership evaluation, advisory councils, historical trust, research capacity development of community partner, advocacy, honoring each other, messaging, and funding. Conclusions Study methodologies and lessons learned can help community–academic research partnerships translate research in communities. PMID:22982842

  12. Meta-evaluation of published studies on evaluation of health disaster preparedness exercises through a systematic review.

    PubMed

    Sheikhbardsiri, Hojjat; Yarmohammadian, Mohammad H; Khankeh, Hamid Reza; Nekoei-Moghadam, Mahmoud; Raeisi, Ahmad Reza

    2018-01-01

    Exercise evaluation is one of the most important, and sometimes neglected, steps in designing and conducting exercises; at this stage, related information is systematically identified, gathered, and interpreted to indicate how well an exercise has fulfilled its objectives. The present study aimed to assess the most important evaluation techniques applied in evaluating health exercises for emergencies and disasters. This was a meta-evaluation study conducted through a systematic review. We searched for papers using specific and relevant keywords in research databases including ISI Web of Science, PubMed, Scopus, Science Direct, Ovid, ProQuest, Wiley, Google Scholar, and the Persian databases ISC and SID. The search keywords and strategies were as follows: "simulation," "practice," "drill," "exercise," "instrument," "tool," "questionnaire," "measurement," "checklist," "scale," "test," "inventory," "battery," "evaluation," "assessment," "appraisal," "emergency," "disaster," "crisis," "hazard," "catastrophe," "hospital," "prehospital," "health centers," and "treatment centers," used in combination with the Boolean operators OR and AND. The findings indicate that there are different techniques and methods of data collection for evaluating the performance exercises of health centers and affiliated organizations in disasters and emergencies, including debriefing inventories, self-report, questionnaires, interviews, observation, video recording and photography, and electronic equipment, which can be used individually or in combination depending on the exercise objectives. Conducting exercises in the health sector is an important step in the preparation and implementation of disaster risk management programs. This study can thus be utilized to improve the preparedness of different sectors of the health system according to the latest available evaluation techniques and methods for better implementation of disaster exercise evaluation stages.

  13. Evaluating a Research Training Programme for People with Intellectual Disabilities Participating in Inclusive Research: The Views of Participants.

    PubMed

    Fullana, Judit; Pallisera, Maria; Català, Elena; Puyalto, Carolina

    2017-07-01

    This article presents the results of evaluating a research training programme aimed at developing the skills of people with intellectual disabilities to actively participate in inclusive research. The present authors opted for a responsive approach to evaluation, using a combination of interviews, questionnaires and focus groups to gather information on the views of students, trainers and members of the research team regarding how the programme progressed, the learning achieved and participants' satisfaction with the programme. The evaluation showed that most of the participants were satisfied with the programme and provided guidelines for planning contents and materials, demonstrating the usefulness of these types of programme in constructing the research group and empowering people with intellectual disabilities to participate in research. The evaluation revealed that the programme had been a positive social experience that fostered interest in lifelong learning for people with intellectual disabilities. © 2016 John Wiley & Sons Ltd.

  14. Evaluation of university scientific research ability based on the output of sci-tech papers: A D-AHP approach

    PubMed Central

    Wang, Lifang

    2017-01-01

    University scientific research ability is an important indicator of the strength of universities. In this paper, the evaluation of university scientific research ability is investigated based on the output of sci-tech papers. Four university alliances from North America, the UK, Australia, and China are selected as case studies for the evaluation. Data from Thomson Reuters InCites are collected to support the evaluation. The work contributes a new framework to the evaluation of university scientific research ability. First, we establish a hierarchical structure to show the factors that affect the evaluation of university scientific research ability. Then, a new MCDM method called the D-AHP model is used to implement the evaluation and ranking of the different university alliances, in which a data-driven approach is proposed to automatically generate the D numbers preference relations. Next, a sensitivity analysis is given to show the impact of the weights of factors and sub-factors on the evaluation result. Finally, the results obtained using different methods are compared and discussed to verify the effectiveness and reasonability of this study, and some suggestions are given to promote China’s scientific research ability. PMID:28212446

  15. Applying Generalizability Theory To Evaluate Treatment Effect in Single-Subject Research.

    ERIC Educational Resources Information Center

    Lefebvre, Daniel J.; Suen, Hoi K.

    An empirical investigation of methodological issues associated with evaluating treatment effect in single-subject research (SSR) designs is presented. This investigation: (1) conducted a generalizability (G) study to identify the sources of systematic and random measurement error (SRME); (2) used an analytic approach based on G theory to integrate…

  16. Study of Methods for Assessing Research Topic Elicitation and pRioritization (SMARTER): Study Protocol to Compare Qualitative Research Methods and Advance Patient Engagement in Research.

    PubMed

    Lavallee, Danielle C; Comstock, Bryan; Scott, Mary R; Avins, Andrew L; Nerenz, David R; Edwards, Todd C; Patrick, Donald L; Lawrence, Sarah O; Bauer, Zoya; Truitt, Anjali R; Jarvik, Jeffrey G

    2017-09-07

    Involving patients as partners in research is a defining characteristic of patient-centered outcomes research (PCOR). While patients' experiential knowledge of a health condition or treatment may yield research priorities not reflected by researchers and policy makers, the methods for identifying and effectively collaborating with patients are still evolving. Patient registries and crowdsourcing may offer ease of access and convenience to both researchers and patients. Surveys and focus groups, including online modalities, have been described for prioritizing research topics. However, little is known about how these different methods compare in producing consistent priorities and similar perceptions of engagement quality among participants. The aims of this study are (1) to compare how different engagement methods used to elicit patient priorities for research perform as measured by rankings for priorities generated and participant satisfaction; and (2) to determine characteristics of individuals choosing to participate in research prioritization activities. Participants in the Back pain Outcomes using Longitudinal Data (BOLD) patient registry, established to evaluate the natural history of back pain among individuals 65 years and older, and participants on the Amazon Mechanical Turk (MTurk) crowdsourcing platform are invited to provide input on priorities for research via a questionnaire. For BOLD participants, we subsequently randomize interested respondents to 1 of 3 interactive prioritization activities to further develop priorities: a Delphi panel, an online crowd voting activity, or an in-person facilitated prioritization activity using nominal group technique (NGT). Participants involved in each activity complete a survey to evaluate the quality of the experience and a subset of these participants discuss their experience further in an interview. Descriptive statistics are used to characterize the rankings produced by each method and compare the top 5

  17. Integrity in Biomedical Research: A Systematic Review of Studies in China.

    PubMed

    Yi, Nannan; Nemery, Benoit; Dierickx, Kris

    2018-05-02

    Recent empirical evidence has demonstrated that research misconduct occurs to a substantial degree in biomedical research. It has been suggested that scientific integrity is also of concern in China, but this seems to be based largely on anecdotal evidence. We, therefore, sought to explore the Chinese situation, by making a systematic review of published empirical studies on biomedical research integrity in China. One of our purposes was also to summarize the existing body of research published in Chinese. We searched the China National Knowledge Infrastructure, Wanfang Data, PubMed and Web of Science for potentially relevant studies, and included studies meeting our inclusion criteria, i.e. mainly those presenting empirically obtained data about the practice of research in China. All the data was extracted and synthesized using an inductive approach. Twenty-one studies were included for review. Two studies used qualitative methods (interviews) and nineteen studies used quantitative methods (questionnaires). Studies involved mainly medical postgraduates and nurses and they investigated awareness, attitudes, perceptions and experiences of research integrity and misconduct. Most of the participants in these 21 studies reported that research integrity is of great importance and that they obey academic norms during their research. Nevertheless, the occurrence of research misbehaviors, such as fabrication, falsification, plagiarism, improper authorship and duplicate submission was also reported. Strengthening research integrity training, developing the governance system and improving the scientific evaluation system were areas of particular attention in several studies. Our review demonstrates that a substantial number of articles have been devoted to research integrity in China, but only a few studies provide empirical evidence. With more safeguard measures of research integrity being taken in China, it would be crucial to conduct more research to explore researchers

  18. Emerging Methodologies in Pediatric Palliative Care Research: Six Case Studies

    PubMed Central

    Nelson, Katherine E.; Gerhardt, Cynthia A.; Rosenberg, Abby R.; Widger, Kimberley; Faerber, Jennifer A.; Feudtner, Chris

    2018-01-01

    Given the broad focus of pediatric palliative care (PPC) on the physical, emotional, and spiritual needs of children with potentially life-limiting illnesses and their families, PPC research requires creative methodological approaches. This manuscript, written by experienced PPC researchers, describes issues encountered in our own areas of research and the novel methods we have identified to target them. Specifically, we discuss potential approaches to: assessing symptoms among nonverbal children, evaluating medical interventions, identifying and treating problems related to polypharmacy, addressing missing data in longitudinal studies, evaluating longer-term efficacy of PPC interventions, and monitoring for inequities in PPC service delivery. PMID:29495384

  19. How to Critically Evaluate Case Studies in Social Work

    ERIC Educational Resources Information Center

    Lee, Eunjung; Mishna, Faye; Brennenstuhl, Sarah

    2010-01-01

    The purpose of this article is to develop guidelines to assist practitioners and researchers in evaluating and developing rigorous case studies. The main concern in evaluating a case study is to accurately assess its quality and ultimately to offer clients social work interventions informed by the best available evidence. To assess the quality of…

  20. Indicators as Judgment Devices: An Empirical Study of Citizen Bibliometrics in Research Evaluation

    ERIC Educational Resources Information Center

    Hammarfelt, Björn; Rushforth, Alexander D.

    2017-01-01

    A researcher's number of publications has been a fundamental merit in the competition for academic positions since the late 18th century. Today, the simple counting of publications has been supplemented with a whole range of bibliometric indicators, which supposedly measure not only the volume of research but also its impact. In this study, we…

  1. Research on evaluation techniques for immersive multimedia

    NASA Astrophysics Data System (ADS)

    Hashim, Aslinda M.; Romli, Fakaruddin Fahmi; Zainal Osman, Zosipha

    2013-03-01

    Immersive Multimedia is now used in a tremendous range of areas, such as healthcare/surgery, the military, architecture, art, entertainment, education, business, media, sport, rehabilitation/treatment and training. Moreover, because the ability of Immersive Multimedia to directly meet the needs of end-users, clients and customers across this diversity of features and purposes depends on the assembly of multiple elements that drive effective system design, evaluation techniques are crucial for Immersive Multimedia environments. A brief overview of the virtual environment (VE) context and the 'realism' concept that underpin Immersive Multimedia environments is then provided. This is followed by a concise summary of the elements of the VE assessment technique applied in Immersive Multimedia system design, which outlines a classification space for evaluation techniques for Immersive Multimedia environments and gives an overview of the types of results reported. A particular focus is placed on the implications of these evaluation techniques in relation to the elements of the VE assessment technique, which is the primary purpose of this research. The paper concludes with an extensive overview of the recommendations emanating from the research.

  2. Why, and how, mixed methods research is undertaken in health services research in England: a mixed methods study.

    PubMed

    O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon

    2007-06-14

    Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study--often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods--particularly surveys and individual interviews--but used methods in a wide range of roles. Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and

  3. Why, and how, mixed methods research is undertaken in health services research in England: a mixed methods study

    PubMed Central

    O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon

    2007-01-01

    Background Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study – often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Methods Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. Results 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods – particularly surveys and individual interviews – but used methods in a wide range of roles. Conclusion Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single

  4. Integrating Public Health Policy, Practice, Evaluation, Surveillance, and Research: The School Health Action Planning and Evaluation System

    PubMed Central

    Cameron, Roy; Manske, Stephen; Brown, K. Stephen; Jolin, Mari Alice; Murnaghan, Donna; Lovato, Chris

    2007-01-01

    The Canadian Cancer Society and the National Cancer Institute of Canada have charged their Centre for Behavioral Research and Program Evaluation with contributing to the development of the country’s systemic capacity to link research, policy, and practice related to population-level interventions. Local data collection and feedback systems are integral to this capacity. Canada’s School Health Action Planning and Evaluation System (SHAPES) allows data to be collected from all of a school’s students, and these data are used to produce computer-generated school “health profiles.” SHAPES is being used for intervention planning, evaluation, surveillance, and research across Canada. Strong demand and multipartner investment suggest that SHAPES is adding value in all of these domains. Such systems can contribute substantially to evidence-informed public health practice, public engagement, participatory action research, and relevant, timely population intervention research. PMID:17329662

  5. Visual soil evaluation - future research requirements

    NASA Astrophysics Data System (ADS)

    Emmet-Booth, Jeremy; Forristal, Dermot; Fenton, Owen; Ball, Bruce; Holden, Nick

    2017-04-01

    A review of Visual Soil Evaluation (VSE) techniques (Emmet-Booth et al., 2016) highlighted their established utility for soil quality assessment, though some limitations were identified. (1) The examination of aggregate size, visible intra-porosity and shape forms a key assessment criterion in almost all methods, thus limiting evaluation to structural form. The addition of criteria that examine structure more holistically may be desirable. For example, structural stability can be indicated using dispersion tests or by examining soil surface crusting, while the assessment of soil colour may indirectly indicate soil organic matter content, a contributor to stability. Organic matter assessment may also indicate structural resilience, along with rooting, earthworm numbers or shrinkage cracking. (2) Soil texture may influence results or impede method deployment. Modification of procedures to account for extreme texture variation is desirable. For example, evidence of compaction in sandy or single-grain soils differs greatly from that in clayey soils. Some procedures incorporate separate classification systems or adjust deployment based on texture. (3) Research into the impact of soil moisture content on VSE evaluation criteria is required. Criteria such as rupture resistance and shape may be affected by moisture content. It is generally recommended that methods are deployed on moist soils, and quantification of the influence of moisture variation on results is necessary. (4) Robust sampling strategies for method deployment are required. How spatial variation is handled differs between methods, but where methods can be deployed over large areas, clear instruction on sampling is required. Additionally, as emphasis has been placed on soil for agricultural production, the ability of VSE to explore structural quality in terms of carbon storage, water purification and biodiversity support also requires research. References Emmet-Booth, J.P., Forristal, P.D., Fenton, O., Ball, B

  6. Developmental Advising for Marginalized Community College Students: An Action Research Study

    ERIC Educational Resources Information Center

    Jones, Terrica S.

    2013-01-01

    The purpose of this action research study was to understand, evaluate, and improve the developmental advising practices used at a Washington State community college. This action research study endeavored to strengthen the developmental advising model originally designed to support the college's marginalized students. Guiding questions for the…

  7. Research design issues for evaluating complex multicomponent interventions in neighborhoods and communities.

    PubMed

    Komro, Kelli A; Flay, Brian R; Biglan, Anthony; Wagenaar, Alexander C

    2016-03-01

    Major advances in population health will not occur unless we translate existing knowledge into effective multicomponent interventions, implement and maintain these in communities, and develop rigorous translational research and evaluation methods to ensure continual improvement and sustainability. We discuss challenges and offer approaches to evaluation that are key for translational research stages 3 to 5 to advance optimized adoption, implementation, and maintenance of effective and replicable multicomponent strategies. The major challenges we discuss concern (a) multiple contexts of evaluation/research, (b) complexity of packages of interventions, and (c) phases of evaluation/research questions. We suggest multiple alternative research designs that maintain rigor but accommodate these challenges and highlight the need for measurement systems. Longitudinal data collection and a standardized continuous measurement system are fundamental to the evaluation and refinement of complex multicomponent interventions. To be useful to T3-T5 translational research efforts in neighborhoods and communities, such a system would include assessments of the reach, implementation, effects on immediate outcomes, and effects of the comprehensive intervention package on more distal health outcomes.

  8. An Evaluation of Research Training: The Testing, Research, and Data Processing Unit of the University Counseling Center. Research Report #5-88.

    ERIC Educational Resources Information Center

    Thompson, Chalmer E.; Sedlacek, William E.

    The relative contributions of a research assistantship experience to graduate training programs in counseling psychology and student personnel were evaluated. The following areas were assessed: (1) the extent to which research competencies are enhanced among former research assistants; (2) the extent to which attitudes toward research are enhanced…

  9. Research and evaluation in the transformation of primary care.

    PubMed

    Peek, C J; Cohen, Deborah J; deGruy, Frank V

    2014-01-01

    Across the United States, primary care practices are engaged in demonstration projects and quality improvement efforts aimed at integrating behavioral health and primary care. Efforts to make sustainable changes at the frontline of care have identified new research and evaluation needs. These efforts enable clinics and larger health care communities to learn from demonstration projects regarding what works and what does not when integrating mental health, substance use, and primary care under realistic circumstances. To do this, implementers need to measure their successes and failures to inform local improvement processes, including the efforts of those working on integration in separate but similar settings. We review how new research approaches, beyond the contributions of traditional controlled trials, are needed to inform integrated behavioral health. Illustrating with research examples from the field, we describe how research traditions can be extended to meet these new research and learning needs of frontline implementers. We further suggest that a shared language and set of definitions for the field (not just for a particular study) are critical for the aggregation of knowledge and learning across practices and for policymaking and business modeling.

  10. Evaluating Faculty Work: Expectations and Standards of Faculty Performance in Research Universities

    ERIC Educational Resources Information Center

    Hardre, Patricia; Cox, Michelle

    2009-01-01

    Expectations and the way they are communicated can influence employees' motivation and performance. Previous research has demonstrated individual effects of workplace climate and individual differences on faculty productivity. The present study focused on the characteristics of institutional performance standards, evaluation processes and…

  11. OFFICE OF RESEARCH AND DEVELOPMENT'S FOUR LAB STUDY: TOXICOLOGICAL AND CHEMICAL EVALUATION OF COMPLEX MIXTURES OF DISINFECTION BY-PRODUCTS (DBPS) AND QUALITY ASSURANCE ACTIVITIES FOR A LARGE U.S. EPA MULTILABORATORY STUDY

    EPA Science Inventory

    Office of Research and Development's Four Lab Study: Toxicological and Chemical Evaluation of Complex Mixtures of Disinfection By-Products (DBPs), and Quality Assurance Activities for a Large U.S. EPA Multilaboratory Study

    Thomas J. Hughes, Project and QA Manager, Expe...

  12. Collecting and Utilizing Evaluation Research for Public Good and on Behalf of African American Children

    ERIC Educational Resources Information Center

    Thomas, Veronica G.; McKie, Brooke K.

    2006-01-01

    A study indicates that researchers entrusted with evaluating the educational outcomes of African American children must engage their practice for the public good and on behalf of these students. The Howard University Evaluation Training Institute is used as a guide to describe the steps for conducting quality evaluations, and to highlight the…

  13. EPA Research Evaluating CAFO Impacts on Ground Water Quality

    EPA Science Inventory

    An overview of several projects will be presented on a research program currently underway at ORD’s Ground Water and Ecosystems Restoration Division (GWERD) to evaluate CAFO impacts on ground water quality. The overall research objectives are to characterize the potential for gro...

  14. Insight into Evaluation Practice: A Content Analysis of Designs and Methods Used in Evaluation Studies Published in North American Evaluation-Focused Journals

    ERIC Educational Resources Information Center

    Christie, Christina A.; Fleischer, Dreolin Nesbitt

    2010-01-01

    To describe the recent practice of evaluation, specifically method and design choices, the authors performed a content analysis on 117 evaluation studies published in eight North American evaluation-focused journals for a 3-year period (2004-2006). The authors chose this time span because it follows the scientifically based research (SBR)…

  15. Time-to-event methodology improved statistical evaluation in register-based health services research.

    PubMed

    Bluhmki, Tobias; Bramlage, Peter; Volk, Michael; Kaltheuner, Matthias; Danne, Thomas; Rathmann, Wolfgang; Beyersmann, Jan

    2017-02-01

    Complex longitudinal sampling and the observational structure of patient registers in health services research are associated with methodological challenges regarding data management and statistical evaluation. We exemplify common pitfalls and want to stimulate discussions on the design, development, and deployment of future longitudinal patient registers and register-based studies. For illustrative purposes, we use data from the prospective, observational, German DIabetes Versorgungs-Evaluation register. One aim was to explore predictors for the initiation of a basal insulin supported therapy in patients with type 2 diabetes initially prescribed glucose-lowering drugs alone. Major challenges are missing mortality information, time-dependent outcomes, delayed study entries, different follow-up times, and competing events. We show that time-to-event methodology is a valuable tool for improved statistical evaluation of register data and should be preferred to simple case-control approaches. Patient registers provide rich data sources for health services research. Analyses are accompanied by a trade-off between data availability, clinical plausibility, and statistical feasibility. The Cox proportional hazards model allows for the evaluation of outcome-specific hazards, but prediction of outcome probabilities is compromised by missing mortality information. Copyright © 2016 Elsevier Inc. All rights reserved.
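
    The delayed study entries, differing follow-up times, and competing events described above are standard survival-analysis problems. As a minimal, hedged sketch (the column names and values below are invented for illustration and are not data from the DIVE register), a Cox model with delayed entry could be fitted in Python with the lifelines package:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical register extract (illustrative values only).
# 'entry' is the delayed-entry (left-truncation) time in months,
# 'duration' the time of basal-insulin initiation or censoring,
# 'event' = 1 if basal insulin was initiated, 0 if censored.
df = pd.DataFrame({
    "entry":    [0, 3, 0, 6, 1, 0, 2, 0],
    "duration": [12, 18, 7, 24, 10, 15, 9, 20],
    "event":    [1, 0, 1, 0, 1, 1, 0, 1],
    "age":      [61, 55, 70, 48, 66, 59, 72, 63],
    "hba1c":    [8.1, 7.4, 9.0, 7.0, 8.6, 7.8, 9.2, 8.0],
})

# Cox proportional hazards model with delayed entry handled via entry_col.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event", entry_col="entry")
cph.print_summary()  # hazard ratios for age and HbA1c (illustrative only)
```

    Competing events would typically be handled by fitting a separate cause-specific model per event type; as the authors note, missing mortality information limits prediction of absolute outcome probabilities.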

  16. Delirium diagnosis methodology used in research: a survey-based study.

    PubMed

    Neufeld, Karin J; Nelliot, Archana; Inouye, Sharon K; Ely, E Wesley; Bienvenu, O Joseph; Lee, Hochang Benjamin; Needham, Dale M

    2014-12-01

    To describe methodology used to diagnose delirium in research studies evaluating delirium detection tools. The authors used a survey to address reference rater methodology for delirium diagnosis, including rater characteristics, sources of patient information, and diagnostic process, completed via web or telephone interview according to respondent preference. Participants were authors of 39 studies included in three recent systematic reviews of delirium detection instruments in hospitalized patients. Authors from 85% (N = 33) of the 39 eligible studies responded to the survey. The median number of raters per study was 2.5 (interquartile range: 2-3); 79% were physicians. The raters' median duration of clinical experience with delirium diagnosis was 7 years (interquartile range: 4-10), with 5% having no prior clinical experience. Inter-rater reliability was evaluated in 70% of studies. Cognitive tests and delirium detection tools were used in the delirium reference rating process in 61% (N = 21) and 45% (N = 15) of studies, respectively, with 33% (N = 11) using both and 27% (N = 9) using neither. When patients were too drowsy or declined to participate in delirium evaluation, 70% of studies (N = 23) used all available information for delirium diagnosis, whereas 15% excluded such patients. Significant variability exists in reference standard methods for delirium diagnosis in published research. Increasing standardization by documenting inter-rater reliability, using standardized cognitive and delirium detection tools, incorporating diagnostic expert consensus panels, and using all available information in patients declining or unable to participate with formal testing may help advance delirium research by increasing consistency of case detection and improving generalizability of research results. Copyright © 2014 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  17. Selection, Evaluation, and Modification of a Standard Operating Procedure as a Mechanism for Introducing an Undergraduate Student to Chemical Research: A Case Study

    ERIC Educational Resources Information Center

    Claycomb, Gregory D.; Venable, Frances A.

    2015-01-01

    In an effort to broaden the selection of research opportunities available to a student registered in a one-semester, upper-level independent study course at a primarily undergraduate institution (PUI), a highly motivated student was asked to select, evaluate, and modify a standard operating procedure (SOP). The student gained valuable experience…

  18. Evaluation of the adequacy of information from research on infant mortality in Recife, Pernambuco, Brazil.

    PubMed

    Oliveira, Conceição Maria de; Guimarães, Maria José Bezerra; Bonfim, Cristine Vieira do; Frias, Paulo Germano; Antonino, Verônica Cristina Sposito; Guimarães, Aline Luzia Sampaio; Medeiros, Zulma Maria

    2018-03-01

    This study is an evaluation of infant death investigations in Recife, Pernambuco (PE). It is a cross-sectional study with 120 variables grouped into six dimensions (prenatal, birth, child care, family characteristics, occurrence of death, and conclusions and recommendations), weighted by a consensus technique. The investigations were classified as adequate, partially adequate or inadequate according to a composite indicator assessment (ICA). There was dissension on 11 variables (nine in the prenatal dimension, one in labor and birth, and one in conclusions and recommendations). Of the 568 deaths studied, 56.2% had adequate investigations. The occurrence of death was the best-evaluated dimension and prenatal the poorest. The ICA enables professionals and managers of child health policies to identify bottlenecks in the investigation of infant deaths, allowing better targeting of actions and contributing to the discussion about surveillance in other cities and states.
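
    The composite indicator assessment (ICA) described above is essentially a weighted scoring of investigation variables mapped to an adequacy class. A minimal sketch of that kind of calculation, with invented variable names, weights, and cut-offs (not those used in the Recife study), might look like this:

```python
# Hypothetical weighted composite indicator for one infant-death investigation.
# Each variable is 1 if the information was adequately recorded, 0 otherwise;
# the weights and the classification cut-offs are invented for illustration.
weights = {"prenatal_visits": 3, "birth_weight": 2, "death_certificate": 4,
           "family_interview": 2, "committee_conclusion": 3}

def composite_score(record: dict) -> float:
    achieved = sum(w for var, w in weights.items() if record.get(var, 0) == 1)
    return achieved / sum(weights.values())  # proportion of the weighted maximum

def classify(score: float) -> str:
    if score >= 0.8:
        return "adequate"
    if score >= 0.5:
        return "partially adequate"
    return "inadequate"

example = {"prenatal_visits": 1, "birth_weight": 1, "death_certificate": 1,
           "family_interview": 0, "committee_conclusion": 1}
print(classify(composite_score(example)))  # -> "adequate" (12/14 ≈ 0.86)
```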

  19. Integrating utilization-focused evaluation with business process modeling for clinical research improvement.

    PubMed

    Kagan, Jonathan M; Rosas, Scott; Trochim, William M K

    2010-10-01

    New discoveries in basic science are creating extraordinary opportunities to design novel biomedical preventions and therapeutics for human disease. But the clinical evaluation of these new interventions is, in many instances, being hindered by a variety of legal, regulatory, policy and operational factors, few of which enhance research quality, the safety of study participants or research ethics. With the goal of helping increase the efficiency and effectiveness of clinical research, we have examined how the integration of utilization-focused evaluation with elements of business process modeling can reveal opportunities for systematic improvements in clinical research. Using data from the NIH global HIV/AIDS clinical trials networks, we analyzed the absolute and relative times required to traverse defined phases associated with specific activities within the clinical protocol lifecycle. Using simple median duration and Kaplan-Meier survival analysis, we show how such time-based analyses can provide a rationale for the prioritization of research process analysis and re-engineering, as well as a means for statistically assessing the impact of policy modifications, resource utilization, re-engineered processes and best practices. Successfully applied, this approach can help researchers be more efficient in capitalizing on new science to speed the development of improved interventions for human disease.
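
    The time-based analyses described above, median phase durations and Kaplan-Meier estimates of how long protocols spend in a lifecycle phase, can be sketched with the lifelines package. The phase name and durations below are hypothetical, not data from the NIH HIV/AIDS networks:

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical protocol-lifecycle data: days a protocol spent in regulatory
# review; 'observed' = 1 if it cleared the phase, 0 if still pending (censored).
data = pd.DataFrame({
    "days_in_review": [90, 150, 200, 45, 310, 120, 75, 260],
    "observed":       [1,  1,   0,   1,  1,   1,   1,  0],
})

kmf = KaplanMeierFitter()
kmf.fit(data["days_in_review"], event_observed=data["observed"],
        label="regulatory review")

print(kmf.median_survival_time_)  # estimated median days to clear the phase
kmf.plot_survival_function()      # curves before/after a policy change can be compared
```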

  20. Assessing Student Perception of Practice Evaluation Knowledge in Introductory Research Methods

    ERIC Educational Resources Information Center

    Baker, Lisa R.; Pollio, David E.; Hudson, Ashley

    2011-01-01

    The authors explored the use of the Practice Evaluation Knowledge Scale (PEKS) to assess student perception of acquisition and retention of practice evaluation knowledge from an undergraduate research methods class. The authors sampled 2 semesters of undergraduate social work students enrolled in an introductory research methods course.…

  1. 9 CFR 112.9 - Biological products imported for research and evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 1 2012-01-01 2012-01-01 false Biological products imported for research and evaluation. 112.9 Section 112.9 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION... PACKAGING AND LABELING § 112.9 Biological products imported for research and evaluation. A biological...

  2. 9 CFR 112.9 - Biological products imported for research and evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 1 2013-01-01 2013-01-01 false Biological products imported for research and evaluation. 112.9 Section 112.9 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION... PACKAGING AND LABELING § 112.9 Biological products imported for research and evaluation. A biological...

  3. 9 CFR 112.9 - Biological products imported for research and evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false Biological products imported for research and evaluation. 112.9 Section 112.9 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION... PACKAGING AND LABELING § 112.9 Biological products imported for research and evaluation. A biological...

  4. 9 CFR 112.9 - Biological products imported for research and evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 1 2014-01-01 2014-01-01 false Biological products imported for research and evaluation. 112.9 Section 112.9 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION... PACKAGING AND LABELING § 112.9 Biological products imported for research and evaluation. A biological...

  5. 9 CFR 112.9 - Biological products imported for research and evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Biological products imported for research and evaluation. 112.9 Section 112.9 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION... PACKAGING AND LABELING § 112.9 Biological products imported for research and evaluation. A biological...

  6. An Evaluation Research Model for System-Wide Textbook Selection.

    ERIC Educational Resources Information Center

    Talmage, Harriet; Walberg, Herbert T.

    One component of an evaluation research model for system-wide selection of curriculum materials is reported: implementation of an evaluation design for obtaining data that permits professional and lay persons to base curriculum materials decisions on a "best fit" principle. The design includes teacher characteristics, learning environment…

  7. An Evaluative Case Study: The Influence of Institutional Policies, Procedures, and Practices on Completion of Nontraditional Transfer Students at a Private, Religious-Based, Doctoral Degree-Granting, Moderate Research University

    ERIC Educational Resources Information Center

    Pack, Elizabeth Myra

    2017-01-01

    The purpose of this single, intrinsic, evaluative case study was to examine the problem of nontraditional transfer student completion at a private, religious-based, doctoral degree-granting, moderate research university in North Carolina. The following research questions guided the study: (a) How do institutional policies, procedures, and…

  8. Research capacity building in midwifery: Case study of an Australian Graduate Midwifery Research Intern Programme.

    PubMed

    Hauck, Yvonne L; Lewis, Lucy; Bayes, Sara; Keyes, Louise

    2015-09-01

    Having the research capacity to identify problems, create new knowledge and, most importantly, translate this knowledge into practice is essential within health care. Midwifery, like other health professions in Australia, is challenged in building its research capacity to contribute evidence to inform clinical practice. The aim of this project was to evaluate an innovative Graduate Midwifery Research Intern Programme offered at a tertiary obstetric hospital in Western Australia, to determine what was working well and how the programme could be improved. A case study approach was used to gain feedback from graduate midwives within a Graduate Research Intern (GRI) Programme. In addition, outcomes were compiled for all projects to which the GRI midwives contributed. Six GRI midwives participated in a survey comprising four open-ended questions to provide feedback about the programme. Findings confirm that the GRI programme increased the graduates' understanding of how research works and their capacity to define a problem, generate new knowledge and inform clinical practice. The GRI midwives' feedback suggested the programme opened their thinking to future study and gave them enhanced insight into women's experiences around childbirth. To grow our knowledge as a professional group, midwives must develop and promote programmes to build our pool of research-capable midwives. By sharing our programme evaluation we hope to entice other clinical settings to consider the value in replicating such a programme within their context. Copyright © 2015 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  9. Using Research and Evaluation to Support Comprehensive Reform

    ERIC Educational Resources Information Center

    Brock, Thomas; Mayer, Alexander K.; Rutschow, Elizabeth Zachry

    2016-01-01

    This chapter explores the role that research and evaluation play in supporting comprehensive reform in community colleges, focusing on lessons from two major initiatives: Achieving the Dream and Completion by Design.

  10. Methodology for evaluation of railroad technology research projects

    DOT National Transportation Integrated Search

    1981-04-01

    This Project memorandum presents a methodology for evaluating railroad research projects. The methodology includes consideration of industry and societal benefits, with special attention given to technical risks, implementation considerations, and po...

  11. Planting Healthy Roots: Using Documentary Film to Evaluate and Disseminate Community-Based Participatory Research

    PubMed Central

    Brandt, Heather M.; Freedman, Darcy A.; Friedman, Daniela B.; Choi, Seul Ki; Seel, Jessica S.; Guest, M. Aaron; Khang, Leepao

    2016-01-01

    The study purpose was twofold: (1) to evaluate a documentary film featuring the formation and implementation of a farmers’ market and (2) to assess whether the film affected awareness regarding food access issues in a food desert community with high rates of obesity. The coalition model of filmmaking, a model consistent with a community-based participatory research (CBPR) approach, and personal stories, community profiles, and expert interviews were used to develop a documentary film (Planting Healthy Roots). Evaluation demonstrated high levels of approval and satisfaction with the film and CBPR essence of the film. The documentary film aligned with a CBPR approach to document, evaluate, and disseminate research processes and outcomes. PMID:27536929

  12. Evaluation of bonded concrete overlays over asphalt under accelerated loading : research project capsule.

    DOT National Transportation Integrated Search

    2014-05-01

    The overall objective of this research study is to evaluate the structural performance and load-carrying capacity of bonded concrete overlay pavement structures through accelerated pavement testing and document the experience of mix design and con...

  13. Evaluating a Research Training Programme for People with Intellectual Disabilities Participating in Inclusive Research: The Views of Participants

    ERIC Educational Resources Information Center

    Fullana, Judit; Pallisera, Maria; Català, Elena; Puyalto, Carolina

    2017-01-01

    Background: This article presents the results of evaluating a research training programme aimed at developing the skills of people with intellectual disabilities to actively participate in inclusive research. Methods: The present authors opted for a responsive approach to evaluation, using a combination of interviews, questionnaires and focus…

  14. The Utilization of Evaluation Research in Litigation.

    ERIC Educational Resources Information Center

    Saks, Michael J.

    1980-01-01

    The judicial branches of government have unique needs for evaluative information and limited resources; this must be recognized in order to increase the utilization of applied social research by courts and lawyers. (Available from: Jossey-Bass, Inc., 433 California St., San Francisco, CA 94104, single issue, $6.95.) (GDC)

  15. Taking stock of four decades of quantitative research on stakeholder participation and evaluation use: a systematic map.

    PubMed

    Daigneault, Pierre-Marc

    2014-08-01

    Stakeholder participation and evaluation use have attracted a lot of attention from practitioners, theorists and researchers. A common hypothesis is that participation is positively associated with evaluation use. Whereas the number of empirical studies conducted on this topic is impressive, quantitative research has held a minority position within this scientific production. This study mobilizes systematic review methods to 'map' the empirical literature that has quantitatively studied participation and use. The goal is to take stock and assess the strength of evidence of this literature (but not to synthesize the findings) and, based on this assessment, to provide directions for future research. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. The Role of Formative Evaluation in Implementation Research and the QUERI Experience

    PubMed Central

    Stetler, Cheryl B; Legro, Marcia W; Wallace, Carolyn M; Bowman, Candice; Guihan, Marylou; Hagedorn, Hildi; Kimmel, Barbara; Sharp, Nancy D; Smith, Jeffrey L

    2006-01-01

    This article describes the importance and role of 4 stages of formative evaluation in our growing understanding of how to implement research findings into practice in order to improve the quality of clinical care. It reviews limitations of traditional approaches to implementation research and presents a rationale for new thinking and use of new methods. Developmental, implementation-focused, progress-focused, and interpretive evaluations are then defined and illustrated with examples from Veterans Health Administration Quality Enhancement Research Initiative projects. This article also provides methodologic details and highlights challenges encountered in actualizing formative evaluation within implementation research. PMID:16637954

  17. Ethical Evaluation of Mental Health Social Research: Agreement Between Researchers and Ethics Committees.

    PubMed

    Mondragón Barrios, Liliana; Guarneros García, Tonatiuh; Jiménez Tapia, Alberto

    2017-07-01

    The objective of this article is to compare various ethical issues considered by social scientists and research ethics committees in the evaluation of mental health social research protocols. We contacted 47 social scientists and 10 members of ethics committees in Mexico with two electronic national surveys that requested information from both groups related to the application of ethical principles in mental health social research. The results showed no significant difference between these groups in the value placed on the ethical issues explored. Based on this finding, we make proposals to strengthen the collaboration between the two groups.

  18. The promise and challenge of practice-research collaborations: Guiding principles and strategies for initiating, designing, and implementing program evaluation research.

    PubMed

    Secret, Mary; Abell, Melissa L; Berlin, Trey

    2011-01-01

    The authors present a set of guiding principles and strategies to facilitate the collaborative efforts of social work researchers and practitioners as they initiate, design, and implement outcome evaluations of human service interventions and programs. Beginning with an exploration of the interpersonal barriers to practice-research collaborations, and building on their experiences in successfully completing a community-based research evaluation, the authors identify specific relationship-focused principles and strategies and illustrate how these approaches can guide practice-research teams through the various sequential activities of the evaluation research process. In particular, it is suggested that practice-research collaborations can be formed, strengthened, and sustained by emphasis on a spirit of discovery and shared leadership at the start of the relationship, use of a comprehensive evaluation model to clarify and frame the evaluation and program goals, beginning where the client is when selecting research methodology and measurement tools, commitment to keeping the program first and recording everything during the implementation and data-collection stages, discussion of emerging findings and presentation of findings in graphic format at the data-analysis stage, and a total team approach at the dissemination stage.

  19. Transdisciplinary research for impact: protocol for a realist evaluation of the relationship between transdisciplinary research collaboration and knowledge translation

    PubMed Central

    Archibald, Mandy M; Harvey, Gillian; Kitson, Alison L

    2018-01-01

    Introduction Transdisciplinary teams are increasingly regarded as integral to conducting effective research. Similarly, knowledge translation is often seen as a solution to improving the relevance and benefits of health research. Yet, whether, how, for whom and under which circumstances transdisciplinary research influences knowledge translation is undertheorised, which limits its potential impact. The proposed research aims to identify the contexts and mechanisms by which transdisciplinary research contributes to developing shared understandings and behaviours of knowledge translation between team members. Methods and analysis Using a longitudinal case-study design approach to realist evaluation, we outline a study protocol examining whether, how, if and for whom transdisciplinary collaboration can impact knowledge translation understandings and behaviours within a 5-year transdisciplinary Centre of Research Excellence. Data are being collected between February 2017 and December 2020 over four rounds of theory development, refinement and testing using interviews, observation, document review and visual elicitation as data sources. Ethics and dissemination The Health Research Ethics Committee of the University of Adelaide approved this study. Findings will be communicated with team members at scheduled intervals throughout the study verbally and by means of creative reflective approaches (eg, arts elicitation, journalling). This research will be used to help support optimal team functioning by identifying strategies to support knowledge sharing and communication within and beyond the team to facilitate attainment of research objectives. Academic dissemination will occur through publication and presentations. PMID:29627820

  20. Research on evaluation methods for water regulation ability of dams in the Huai River Basin

    NASA Astrophysics Data System (ADS)

    Shan, G. H.; Lv, S. F.; Ma, K.

    2016-08-01

    Water environment protection is a global and urgent problem that requires correct and precise evaluation. Evaluation methods have been studied for many years; however, there is a lack of research on the methods of assessing the water regulation ability of dams. Currently, evaluating the ability of dams has become a practical and significant research orientation because of the global water crisis, and the lack of effective ways to manage a dam's regulation ability has only compounded this. This paper firstly constructs seven evaluation factors and then develops two evaluation approaches to implement the factors according to the features of the problem. Dams of the Yin Shang ecological control section in the Huai He River basin are selected as an example to demonstrate the method. The results show that the evaluation approaches can produce better and more practical suggestions for dam managers.

  1. Practical Assessment, Research and Evaluation, 2002-2003.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M., Ed.; Schaefer, William D., Ed.

    2000-01-01

    This document consists of the first 10 articles of volume 8 of the electronic journal "Practical Assessment, Research & Evaluation" published in 2002-2003: (1) "Using Electronic Surveys: Advice from Survey Professionals" (David M. Shannon, Todd E. Johnson, Shelby Searcy, and Alan Lott); (2) "Four Assumptions of Multiple Regression That Researchers…

  2. Situated Research Design and Methodological Choices in Formative Program Evaluation

    ERIC Educational Resources Information Center

    Supovitz, Jonathan

    2013-01-01

    Design-based implementation research offers the opportunity to rethink the relationships between intervention, research, and situation to better attune research and evaluation to the program development process. Using a heuristic called the intervention development curve, I describe the rough trajectory that programs typically follow as they…

  3. Consumer and community involvement in health and medical research: evaluation by online survey of Australian training workshops for researchers.

    PubMed

    McKenzie, Anne; Alpers, Kirsten; Heyworth, Jane; Phuong, Cindy; Hanley, Bec

    2016-01-01

    In Australia, since 2009, the Consumer and Community Involvement Program (formerly the Consumer and Community Participation Program) has developed and run workshops to help people working in health and medical research involve more consumers (patients) and community members (the public) in their research. In 2012, workshop attendees were invited to do an online survey to find out the effect, if any, that attending a workshop had on their awareness of and attitudes to consumer and community involvement. They were also asked about changes in their behaviour when it came to the involvement of consumers and the community in their work. The study found that, for people who answered the survey, more than double the number found consumer and community involvement very relevant after attending a workshop, compared with the number who thought that before attending one. Also, amongst those who answered the survey, 94 % thought that the workshop increased their understanding about involvement. Background There is limited evidence of the benefits of providing training workshops for researchers on how to involve consumers (patients) and the community (public) in health and medical research. Australian training workshops were evaluated to contribute to the evidence base. The key objective was to evaluate the impact of the workshops in increasing awareness of consumer and community involvement; changing attitudes to future implementation of involvement activities and influencing behaviour in the methods of involvement used. A secondary objective was to use a formal evaluation survey to build on the anecdotal feedback received from researchers about changes in awareness, attitudes and behaviours. Methods The study used a cross-sectional, online survey of researchers, students, clinicians, administrators and members of non-government organisations who attended Consumer and Community Involvement Program training workshops between 2009 and 2012 to ascertain changes to awareness

  4. Evaluation in health promotion: thoughts from inside a human research ethics committee.

    PubMed

    Allen, Judy; Flack, Felicity

    2015-12-01

    Health promotion research, quality improvement and evaluation are all activities that raise ethical issues. In this paper, the Chair and a member of human research ethics committees provide an insiders' point of view on how to demonstrate ethical conduct in health promotion research and quality improvement. Several common issues raised by health promotion research and evaluation are discussed including researcher integrity, conflicts of interest, use of information, consent and privacy.

  5. Action Research Networks: Role and Purpose in the Evaluation of Research Outcomes and Impacts

    ERIC Educational Resources Information Center

    Zornes, Deborah; Ferkins, Lesley; Piggot-Irvine, Eileen

    2016-01-01

    The focus of this paper is to share thinking about networks in action research (AR) and to consider their role, purpose, and how networks' outcomes and impacts might be evaluated. Networks are often a by-product of AR projects, yet research focused on the network itself as part of a project is rare. The paper is one of several associated with the…

  6. 9 CFR 104.4 - Products for research and evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the producer for a small quantity of such product for in vitro Research and Evaluation tests: Provided... Administrator may require in order to assess the product's impact on the environment. (b)(1) A permit to import... and evaluation anywhere in or from the United States unless authorized by the Administrator in...

  7. 9 CFR 104.4 - Products for research and evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the producer for a small quantity of such product for in vitro Research and Evaluation tests: Provided... Administrator may require in order to assess the product's impact on the environment. (b)(1) A permit to import... and evaluation anywhere in or from the United States unless authorized by the Administrator in...

  8. On the automatic activation of attitudes: a quarter century of evaluative priming research.

    PubMed

    Herring, David R; White, Katherine R; Jabeen, Linsa N; Hinojos, Michelle; Terrazas, Gabriela; Reyes, Stephanie M; Taylor, Jennifer H; Crites, Stephen L

    2013-09-01

    Evaluation is a fundamental concept in psychological science. Limitations of self-report measures of evaluation led to an explosion of research on implicit measures of evaluation. One of the oldest and most frequently used implicit measurement paradigms is the evaluative priming paradigm developed by Fazio, Sanbonmatsu, Powell, and Kardes (1986). This paradigm has received extensive attention in psychology and is used to investigate numerous phenomena ranging from prejudice to depression. The current review provides a meta-analysis of a quarter century of evaluative priming research: 73 studies yielding 125 independent effect sizes from 5,367 participants. Because judgments people make in evaluative priming paradigms can be used to tease apart underlying processes, this meta-analysis examined the impact of different judgments to test the classic encoding and response perspectives of evaluative priming. As expected, evidence for automatic evaluation was found, but the results did not exclusively support either of the classic perspectives. Results suggest that both encoding and response processes likely contribute to evaluative priming but are more nuanced than initially conceptualized by the classic perspectives. Additionally, there were a number of unexpected findings that influenced evaluative priming such as segmenting trials into discrete blocks. We argue that many of the findings of this meta-analysis can be explained with 2 recent evaluative priming perspectives: the attentional sensitization/feature-specific attention allocation and evaluation window perspectives. (c) 2013 APA, all rights reserved.
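
    The pooling of independent effect sizes that underlies a meta-analysis like the one above can be illustrated with a generic fixed-effect, inverse-variance computation (the effect sizes and variances below are invented, and this is not the authors' exact model):

```python
import numpy as np

# Hypothetical standardized effect sizes (d) and their sampling variances
# from a handful of priming studies (illustrative values only).
effects = np.array([0.42, 0.35, 0.58, 0.21, 0.47])
variances = np.array([0.020, 0.015, 0.030, 0.010, 0.025])

# Fixed-effect inverse-variance weighting.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled d = {pooled:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```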

  9. Project Head Start Research and Evaluation Center, Syracuse University Research Institute. Final Report, November 1, 1967.

    ERIC Educational Resources Information Center

    Hall, Vernon; And Others

    This document describes the research activities of the Syracuse University Evaluation and Research Center for the year September 1, 1966 through August 31, 1967. This final report is organized on the basis of six research projects, which have been abstracted under the following titles and numbers: (1) Experiments in Grammatical Processing in…

  10. Government Research Evaluations and Academic Freedom: A UK and Australian Comparison

    ERIC Educational Resources Information Center

    Martin-Sardesai, Ann; Irvine, Helen; Tooley, Stuart; Guthrie, James

    2017-01-01

    Performance management systems have been an inevitable consequence of the development of government research evaluations (GREs) of university research, and have also inevitably affected the working life of academics. The aim of this paper is to track the development of GREs over the past 25 years, by critically evaluating their adoption in the UK…

  11. [Qualitative Research in Health Services Research - Discussion Paper, Part 3: Quality of Qualitative Research].

    PubMed

    Stamer, M; Güthlin, C; Holmberg, C; Karbach, U; Patzelt, C; Meyer, T

    2015-12-01

    The third and final discussion paper of the German Network of Health Services Research's (DNVF) "Qualitative Methods Working Group" presents methods for evaluating the quality of qualitative research in health services research. In this paper we discuss approaches to evaluating qualitative studies, including orientation to the general principles of empirical research, approach-specific procedures, and procedures based on research-process- and criteria-oriented approaches. The quality evaluation of a qualitative study is divided into general and specific aspects, and the central focus of the discussion paper is an extensive examination of the process- and criteria-oriented approaches. The general aspects include the participation of relevant groups in the research process as well as ethical aspects of the research and data protection issues. The more specific aspects in evaluating the quality of qualitative research include considerations about the research interest, the research questions, and the selection of data collection methods and types of analyses. The formulated questions are intended to guide reviewers and researchers in evaluating and developing qualitative research projects appropriately. The intention of this discussion paper is to ensure a transparent research culture and to reflect on and discuss the methodological approach of qualitative studies in health services research. With this paper we aim to initiate a discussion on the high-quality evaluation of qualitative health services research. © Georg Thieme Verlag KG Stuttgart · New York.

  12. Evaluation of resistivity meters for concrete quality assurance : [research summary].

    DOT National Transportation Integrated Search

    2015-07-01

    This research evaluated a series of MoDOT concrete mixtures to verify existing relationships between surface resistivity (SR), rapid chloride permeability (RCP), chloride ion diffusion, and the AASHTO penetrability classes. The research als...

  13. Key success factors of health research centers: A mixed method study.

    PubMed

    Tofighi, Shahram; Teymourzadeh, Ehsan; Heydari, Majid

    2017-08-01

    In order to achieve success in future goals and activities, health research centers need to identify their key success factors. This study aimed to extract and rank the factors affecting the success of research centers at one of the medical universities in Iran. It is a mixed method (qualitative-quantitative) study conducted between May and October 2016 in 22 health research centers. In the qualitative phase, we extracted the factors affecting success in research centers through purposeful interviews with 10 experts from the centers and classified them into themes and sub-themes. In the quantitative phase, we prepared a questionnaire, and 54 participants scored the identified factors, which were then ranked using the Friedman test. Nine themes and 42 sub-themes were identified. The themes, ranked in that order as components of success in research centers, were: strategic orientation, management, human capital, support, projects, infrastructure, communications and collaboration, paradigm, and innovation. Among the 42 identified factors, the 10 ranked as key success factors were: science and technology road map, strategic plan, evaluation indexes, committed human resources, scientific evaluation of members and centers, innovation in research and implementation, financial support, capable researchers, equipment infrastructure, and teamwork. According to the results, strategic orientation was the most important component in the success of research centers. Therefore, managers and authorities of research centers should pay more attention to strategic areas in future planning, including the science and technology road map and the strategic plan.

  14. Key success factors of health research centers: A mixed method study

    PubMed Central

    Tofighi, Shahram; Teymourzadeh, Ehsan; Heydari, Majid

    2017-01-01

    Background In order to achieve success in future goals and activities, health research centers need to identify their key success factors. Objective This study aimed to extract and rank the factors affecting the success of research centers at one of the medical universities in Iran. Methods This is a mixed method (qualitative-quantitative) study conducted between May and October 2016 in 22 health research centers. In the qualitative phase, we extracted the factors affecting success in research centers through purposeful interviews with 10 experts from the centers and classified them into themes and sub-themes. In the quantitative phase, we prepared a questionnaire, and 54 participants scored the identified factors, which were then ranked using the Friedman test. Results Nine themes and 42 sub-themes were identified. The themes, ranked in that order as components of success in research centers, were: strategic orientation, management, human capital, support, projects, infrastructure, communications and collaboration, paradigm, and innovation. Among the 42 identified factors, the 10 ranked as key success factors were: science and technology road map, strategic plan, evaluation indexes, committed human resources, scientific evaluation of members and centers, innovation in research and implementation, financial support, capable researchers, equipment infrastructure, and teamwork. Conclusion According to the results, strategic orientation was the most important component in the success of research centers. Therefore, managers and authorities of research centers should pay more attention to strategic areas in future planning, including the science and technology road map and the strategic plan. PMID:28979733
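
    The ranking step reported in the two records above, respondents rating candidate factors with the results ranked by the Friedman test, can be sketched with SciPy. The ratings below are simulated, and the three factor names are only an illustrative subset:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(0)

# Simulated 1-10 ratings from 54 respondents for three candidate success
# factors (columns); a real analysis would use the full set of 42 factors.
road_map = rng.integers(7, 11, size=54)  # "science and technology road map"
strategy = rng.integers(6, 10, size=54)  # "strategic plan"
teamwork = rng.integers(4, 9, size=54)   # "teamwork"

stat, p = friedmanchisquare(road_map, strategy, teamwork)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Mean rank per factor (higher = rated more important) gives the ordering.
ratings = np.column_stack([road_map, strategy, teamwork])
mean_ranks = np.apply_along_axis(rankdata, 1, ratings).mean(axis=0)
print("mean ranks:", mean_ranks)
```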

  15. Women and the Crossroads of Science: Thoughts on Policy, Research, and Evaluation

    NASA Astrophysics Data System (ADS)

    Dietz, James S.; Anderson, Bernice; Katzenmeyer, Conrad

    In this essay, the authors examine the crosscutting themes of this special issue as they pertain to policy, research, and evaluation of women and science. Past and current research, theory, frameworks, and programs are discussed in the context of challenges and innovations for methods and policy. The authors assert that the crossroads for gender equity studies lies at the intersection of science and society and argue for the need to build a base of cumulative knowledge for policy and practice.

  16. Report of the Anthropology Curriculum Study Project-Research Program.

    ERIC Educational Resources Information Center

    Parsons, T. W.; And Others

    The study evaluated an Anthropology Curriculum Study Project course, "Patterns in Human History," used with high school students in a one year field test situation. Ethnographic and cognitive components of the curriculum were examined. The specific objective of the research was to examine the behavioral effects on students, teachers, and…

  17. Usability Evaluation of a Research Repository and Collaboration Web Site

    ERIC Educational Resources Information Center

    Zhang, Tao; Maron, Deborah J.; Charles, Christopher C.

    2013-01-01

    This article reports results from an empirical usability evaluation of Human-Animal Bond Research Initiative Central as part of the effort to develop an open access research repository and collaboration platform for human-animal bond researchers. By repurposing and altering key features of the original HUBzero system, Human-Animal Bond Research…

  18. [The positioning of nursing research in the academic studies: the origin and development of qualitative and quantitative studies].

    PubMed

    Lu, Pei-Pei; Ting, Shing-Shiang; Chen, Mei-Ling; Tang, Woung-Ru

    2005-12-01

    The purpose of this study is to discuss the historical context of qualitative and quantitative research, to explain the principles of qualitative study, and to examine the positioning of nursing research within academic study as a whole. The paper guides readers through the historical context of empirical science, discusses the influences of qualitative and quantitative research on nursing research, investigates the nature of research paradigms, examines the positioning of nursing research relative to fields such as the natural sciences, the humanities and social studies, and lastly presents the research standards proposed by Yardley in 2000. The research paradigms include positivism, postpositivism, criticism, and constructivism, which can be compared in terms of ontology, epistemology, and methodology. The nature of a paradigm is determined by its ontological, epistemological, and methodological assumptions; the paradigm shapes how the researcher views the world and decides what to answer, how to research, and how to answer. Differences in academic environment are reflected in the long-term dialogue between qualitative and quantitative studies, as well as in the standards for critique. The paper introduces Yardley's criteria for evaluating the quality of qualitative study, namely sensitivity to context, commitment and rigour, transparency and coherence, and impact and importance. The paper is intended to provide a guideline for readers in evaluating the quality of qualitative study.

  19. Evaluating the BK 21 Program. Research Brief

    ERIC Educational Resources Information Center

    Seong, Somi; Popper, Steven W.; Goldman, Charles A.; Evans, David K.; Grammich, Clifford A.

    2008-01-01

    The Brain Korea 21 program (BK21), an effort to improve Korean universities and research, has attracted a great deal of attention in Korea, producing the need to understand how well the program is meeting its goals. RAND developed a logic model for identifying program goals and dynamics, suggested quantitative and qualitative evaluation methods,…

  20. Research on Comprehensive Evaluation Method for Heating Project Based on Analytic Hierarchy Processing

    NASA Astrophysics Data System (ADS)

    Han, Shenchao; Yang, Yanchun; Liu, Yude; Zhang, Peng; Li, Siwei

    2018-01-01

    Changing to distributed heat supply systems is an effective way to reduce winter haze, so studies of comprehensive index systems and scientific evaluation methods for distributed heating projects are essential. Firstly, the influencing factors of different heating modes were examined, and a multi-dimensional index system covering economic, environmental, risk and flexibility aspects was built, with all indexes quantified. Secondly, a comprehensive evaluation method based on AHP was put forward to analyse the proposed index system. Lastly, a case study suggested that supplying heat with electricity has great advantages and promotional value. The comprehensive index system and evaluation method presented in this paper can evaluate distributed heating projects effectively and provide scientific support for choosing among them.
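
    The AHP step described above derives criterion weights from a pairwise comparison matrix via its principal eigenvector, with a consistency check. A minimal sketch, using an invented comparison matrix as a stand-in for the economic, environmental, risk, and flexibility dimensions:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for four criteria:
# economic, environmental, risk, flexibility; A[i, j] = importance of i over j.
A = np.array([
    [1.0, 3.0, 5.0, 4.0],
    [1/3, 1.0, 3.0, 2.0],
    [1/5, 1/3, 1.0, 0.5],
    [1/4, 0.5, 2.0, 1.0],
])

# Criterion weights = normalized principal eigenvector of the matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio; CR < 0.1 is conventionally acceptable (RI = 0.90 for n = 4).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90

print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```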

  1. National Seminar on Research in Evaluation of Occupational Education.

    ERIC Educational Resources Information Center

    North Carolina State Univ., Raleigh. Center for Occupational Education.

    The purpose of this seminar, attended by 21 participants, was to examine issues, problems, and components of models for the evaluation of occupational education. A primary objective was to stimulate interest in evaluation as an object of research effort. Papers presented include: (1) "The Value Structure of Society Toward Work" by Arthur R. Jones,…

  2. Transdisciplinary research for impact: protocol for a realist evaluation of the relationship between transdisciplinary research collaboration and knowledge translation.

    PubMed

    Archibald, Mandy M; Lawless, Michael; Harvey, Gillian; Kitson, Alison L

    2018-04-07

    Transdisciplinary teams are increasingly regarded as integral to conducting effective research. Similarly, knowledge translation is often seen as a solution to improving the relevance and benefits of health research. Yet, whether, how, for whom and under which circumstances transdisciplinary research influences knowledge translation is undertheorised, which limits its potential impact. The proposed research aims to identify the contexts and mechanisms by which transdisciplinary research contributes to developing shared understandings and behaviours of knowledge translation between team members. Using a longitudinal case-study design approach to realist evaluation, we outline a study protocol examining whether, how, if and for whom transdisciplinary collaboration can impact knowledge translation understandings and behaviours within a 5-year transdisciplinary Centre of Research Excellence. Data are being collected between February 2017 and December 2020 over four rounds of theory development, refinement and testing using interviews, observation, document review and visual elicitation as data sources. The Health Research Ethics Committee of the University of Adelaide approved this study. Findings will be communicated with team members at scheduled intervals throughout the study verbally and by means of creative reflective approaches (eg, arts elicitation, journalling). This research will be used to help support optimal team functioning by identifying strategies to support knowledge sharing and communication within and beyond the team to facilitate attainment of research objectives. Academic dissemination will occur through publication and presentations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Scenario studies as a synthetic and integrative research activity for Long-Term Ecological Research

    Treesearch

    Jonathan R. Thompson; Arnim Wiek; Frederick J. Swanson; Stephen R. Carpenter; Nancy Fresco; Teresa Hollingsworth; Thomas A. Spies; David R. Foster

    2012-01-01

    Scenario studies have emerged as a powerful approach for synthesizing diverse forms of research and for articulating and evaluating alternative socioecological futures. Unlike predictive modeling, scenarios do not attempt to forecast the precise or probable state of any variable at a given point in the future. Instead, comparisons among a set of contrasting scenarios...

  4. Exploring Organizational Evaluation Capacity and Evaluation Capacity Building: A Delphi Study of Taiwanese Elementary and Junior High Schools

    ERIC Educational Resources Information Center

    Cheng, Shu-Huei; King, Jean A.

    2017-01-01

    Researchers have conducted numerous empirical studies on evaluation capacity (EC) and evaluation capacity building (ECB) in Western cultural settings. However, little is known about these practices in non-Western contexts. To that end, this study identified the major dimensions of EC and feasible ECB approaches in Taiwanese elementary and junior…

  5. FHWA research and technology evaluation program summary report spring 2016

    DOT National Transportation Integrated Search

    2016-08-01

    This report summarizes the 16 evaluations being conducted by the Volpe National Transportation Systems Center on behalf of FHWA's Research and Technology Program. The FHWA R&T Program furthers the Turner-Fairbank Highway Research Center's goal of...

  6. Evaluating UK research in speech and language therapy.

    PubMed

    Lewison, Grant; Carding, Paul

    2003-01-01

    There has been a steady growth in recent years in British higher-degree training in speech and language therapy. But what is the standing of UK research in the subject and its component areas which should underpin and inform such training? How can such research be evaluated? The intention was to compare UK publications relevant to speech and language therapy with those of other countries, both quantitatively and qualitatively. We sought then to examine the UK papers in more detail to analyse their sources of funding, their geographical distribution and the ways in which they could appropriately be evaluated. Papers were selectively retrieved from the Science Citation Index and the Social Sciences Citation Index for 1991-2000 by means of a filter based on journal names and paper title words. They were subsequently checked to remove many false positives. The papers were classified into one of seven subject areas and by their research level (from clinical to basic). Their importance was estimated through their potential impact on other researchers, as determined by the citation score of their journals, by the numbers of citations they actually received and by the subjective esteem in which the various journals were held by UK speech and language researchers. World output of speech and language therapy papers has averaged 1000 papers per year during the 1990s, and has grown by half over the period. UK output has been about 12% of the total, compared with 10% in biomedicine, and is published in high impact journals relative to the norm for the field, which is quite a low rate compared with biomedicine overall. Almost half the UK papers had no funding acknowledgements, with the private-non-profit and industrial sectors playing less of a role than in other biomedical areas. Papers in seven subject areas showed substantial differences in their performance on the four criteria selected. The state of British speech and language research appears to be satisfactory, with an
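
    The retrieval filter described above, selecting papers by journal name and title words before weeding out false positives, can be sketched as a simple predicate function. The journal and keyword lists below are invented, not the authors' actual filter:

```python
# Illustrative bibliometric filter: keep records whose journal or title words
# suggest speech and language therapy research (both lists are invented).
JOURNALS = {"journal of fluency disorders", "journal of voice"}
TITLE_WORDS = {"aphasia", "dysphagia", "stuttering", "dysarthria", "voice therapy"}

def matches_filter(record: dict) -> bool:
    journal = record.get("journal", "").lower()
    title = record.get("title", "").lower()
    return journal in JOURNALS or any(word in title for word in TITLE_WORDS)

papers = [
    {"title": "Outcomes of voice therapy in teachers", "journal": "Journal of Voice"},
    {"title": "Galaxy rotation curves revisited", "journal": "Astrophysical Journal"},
]
selected = [p for p in papers if matches_filter(p)]
print(len(selected))  # -> 1; retained records would still need manual false-positive checks
```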

  7. An Evaluation of the Research Evidence on the Early Start Denver Model

    ERIC Educational Resources Information Center

    Baril, Erika M.; Humphreys, Betsy P.

    2017-01-01

    The Early Start Denver Model (ESDM) has been gaining popularity as a comprehensive treatment model for children ages 12 to 60 months with autism spectrum disorders (ASD). This article evaluates the research on the ESDM through an analysis of study design and purpose; child participants; setting, intervention agents, and context; density and…

  8. A Conceptual Framework for Graduate Teaching Assistant Professional Development Evaluation and Research

    PubMed Central

    Reeves, Todd D.; Marbach-Ad, Gili; Miller, Kristen R.; Ridgway, Judith; Gardner, Grant E.; Schussler, Elisabeth E.; Wischusen, E. William

    2016-01-01

    Biology graduate teaching assistants (GTAs) are significant contributors to the educational mission of universities, particularly in introductory courses, yet there is a lack of empirical data on how to best prepare them for their teaching roles. This essay proposes a conceptual framework for biology GTA teaching professional development (TPD) program evaluation and research with three overarching variable categories for consideration: outcome variables, contextual variables, and moderating variables. The framework’s outcome variables go beyond GTA satisfaction and instead position GTA cognition, GTA teaching practice, and undergraduate learning outcomes as the foci of GTA TPD evaluation and research. For each GTA TPD outcome variable, key evaluation questions and example assessment instruments are introduced to demonstrate how the framework can be used to guide GTA TPD evaluation and research plans. A common conceptual framework is also essential to coordinating the collection and synthesis of empirical data on GTA TPD nationally. Thus, the proposed conceptual framework serves as both a guide for conducting GTA TPD evaluation at single institutions and as a means to coordinate research across institutions at a national level. PMID:27193291

  9. Research status and evaluation system of heat source evaluation method for central heating

    NASA Astrophysics Data System (ADS)

    Sun, Yutong; Qi, Junfeng; Cao, Yi

    2018-02-01

    The central heating boiler room is the regional heat source of a district heating system. It is also a source of urban environmental pollution and an important area for building energy efficiency. Reviewing domestic and international research on evaluation methods for central heating boiler rooms, this article summarizes the main factors affecting the energy consumption of industrial boilers under stable operating conditions, following the principles for establishing an evaluation index system. The content of such an evaluation index system for centralized heating systems is of great significance for energy saving and environmental protection.

  10. An Investigation of Research-Based Teaching Practices through the Teacher Evaluations in Indiana Public Schools

    ERIC Educational Resources Information Center

    Sargent, Michael Steven

    2014-01-01

    The purpose of this study was to identify if a relationship existed between the implementation of professional evaluation processes and the use of research-based teaching practices, factoring in both perceptions of principals and practicing teachers. The variables of professional development on the evaluation model and the principal's years of…

  11. Evaluating department of transportation's research program : a methodology and case study.

    DOT National Transportation Integrated Search

    2012-06-01

    An effective research program within a transportation organization can be a valuable asset to accomplish the goals of the overall mission. Determining whether a research program is pursuing relevant research projects and obtaining results for the s...

  12. Molasses supplementation of grazing dairy cows: summary of case study, continuous culture fermenter trials, and controlled research farm study

    USDA-ARS?s Scientific Manuscript database

    This fact sheet summarizes the results of a three-tiered research approach (case study, two continuous culture fermenter studies, and a controlled research farm study) to evaluate molasses as an alternative supplement source for grazing dairy cows. A two-year case study of a New York organic dairy f...

  13. An integrated environment for tactical guidance research and evaluation

    NASA Technical Reports Server (NTRS)

    Goodrich, Kenneth H.; Mcmanus, John W.

    1990-01-01

    NASA-Langley's Tactical Guidance Research and Evaluation System (TGRES) constitutes an integrated environment for the development of tactical guidance algorithms and evaluating the effects of novel technologies; the modularity of the system allows easy modification or replacement of system elements in order to conduct evaluations of alternative technologies. TGRES differs from existing systems in its capitalization on AI programming techniques for guidance-logic implementation. Its ability to encompass high-fidelity, six-DOF simulation models will facilitate the analysis of complete aircraft dynamics.

  14. Research priorities in medical education: A national study.

    PubMed

    Tootoonchi, Mina; Yamani, Nikoo; Changiz, Tahereh; Yousefy, Alireza

    2012-01-01

    One preliminary step to strengthen medical education research would be determining the research priorities. The aim of this study was to determine the research priorities of medical education in Iran in 2007-2008. This descriptive study was carried out in two phases. Phase one was performed in 3 stages and used the Delphi technique among academic staff of Isfahan University of Medical Sciences. The three stages included a brainstorming workshop for 140 faculty members and educational experts, resulting in a list of research priorities; in the second and third stages, 99 and 76 questionnaires, respectively, were distributed among faculty members. In the second phase, the final questionnaires were mailed to educational research center managers of type I, II and III universities, and were distributed among 311 academic members and educational experts to rate the items on a numerical scale ranging from 1 to 10. The most important research priorities included faculty members' development methods; faculty members' motives, satisfaction and welfare; criteria and procedures for faculty members' promotion; teaching methods and learning techniques; job descriptions and professional skills of graduates; quality management in education; second language; clinical education; science production in medicine; faculty evaluation; and information technology. This study shows the medical education research priorities at the national level and in different types of medical universities in Iran. It is recommended that faculty members and research administrators consider the needs and requirements of education and plan educational research according to these priorities.

  15. Research priorities in medical education: A national study

    PubMed Central

    Tootoonchi, Mina; Yamani, Nikoo; Changiz, Tahereh; Yousefy, Alireza

    2012-01-01

    BACKGROUND: One preliminary step toward strengthening medical education research is determining research priorities. The aim of this study was to determine the research priorities of medical education in Iran in 2007-2008. METHODS: This descriptive study was carried out in two phases. Phase one was performed in three stages and used the Delphi technique among academic staff of Isfahan University of Medical Sciences: a brainstorming workshop for 140 faculty members and educational experts produced a list of research priorities, and in the second and third stages 99 and 76 questionnaires, respectively, were distributed among faculty members. In the second phase, the final questionnaires were mailed to educational research center managers of type I, II and III universities and distributed among 311 academic members and educational experts, who rated the items on a numerical scale ranging from 1 to 10. RESULTS: The most important research priorities included faculty members’ development methods; faculty members’ motives, satisfaction and welfare; criteria and procedures for faculty members’ promotion; teaching methods and learning techniques; job descriptions and professional skills of graduates; quality management in education; second language; clinical education; science production in medicine; faculty evaluation; and information technology. CONCLUSIONS: This study shows the medical education research priorities at the national level and in different types of medical universities in Iran. It is recommended that faculty members and research administrators consider the needs and requirements of education and plan educational research according to these priorities. PMID:23248661

  16. Research Capacity Building: A Historically Black College/University-Based Case Study of a Peer-to-Peer Mentor Research Team Model

    ERIC Educational Resources Information Center

    Moore, Corey L.; Manyibe, Edward O.; Aref, Fariborz; Washington, Andre L.

    2017-01-01

    Purpose: To evaluate a peer-to-peer mentor research team model (PPMRTM) in building investigators' research skills (i.e., research methods and grant writing) at a historically Black college/university (HBCU) in the United States. Method: Three different theories (i.e., planned change, critical mass, and self-efficacy), contemporary study findings,…

  17. Educational research methods for researching innovations in teaching, learning and assessment: The nursing lecturer as researcher.

    PubMed

    Marks-Maran, Diane

    2015-11-01

    The author, who has had previous experience as a nurse researcher, has been engaged in helping nurse lecturers to undertake evaluation research studies into innovations in their teaching, learning and assessment methods. In order to undertake this work successfully, it was important to move from thinking like a nurse researcher to thinking like an educational researcher and developing the role of the nursing lecturer as researcher of their teaching. This article explores the difference between evaluation and evaluation research and argues for the need to use educational research methods when undertaking evaluation research into innovations in teaching, learning and assessment. A new model for educational evaluation research is presented together with two case examples of the model in use. The model has been tested on over 30 research studies into innovations in teaching, learning and assessment over the past 8 years. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Meta-evaluation of published studies on evaluation of health disaster preparedness exercises through a systematic review

    PubMed Central

    Sheikhbardsiri, Hojjat; Yarmohammadian, Mohammad H; Khankeh, Hamid Reza; Nekoei-Moghadam, Mahmoud; Raeisi, Ahmad Reza

    2018-01-01

    OBJECTIVE: Exercise evaluation is one of the most important, yet sometimes neglected, steps in designing and conducting exercises; at this stage, related information is systematically identified, gathered, and interpreted to indicate how well an exercise has fulfilled its objectives. The present study aimed to assess the most important evaluation techniques applied in evaluating health exercises for emergencies and disasters. METHODS: This was a meta-evaluation study conducted through a systematic review. We searched for papers using specific and relevant keywords in research databases including ISI Web of Science, PubMed, Scopus, Science Direct, Ovid, ProQuest, Wiley, Google Scholar, and the Persian databases ISC and SID. The search keywords “simulation,” “practice,” “drill,” “exercise,” “instrument,” “tool,” “questionnaire,” “measurement,” “checklist,” “scale,” “test,” “inventory,” “battery,” “evaluation,” “assessment,” “appraisal,” “emergency,” “disaster,” “crisis,” “hazard,” “catastrophe,” “hospital,” “prehospital,” “health centers,” and “treatment centers” were used in combination with the Boolean operators OR and AND. RESULTS: The findings indicate that different techniques and methods are available for collecting data to evaluate the performance exercises of health centers and affiliated organizations in disasters and emergencies, including debriefing inventories, self-report, questionnaires, interviews, observation, video recording and photography, and electronic equipment, which can be used individually or in combination depending on exercise objectives. CONCLUSION: Conducting exercises in the health sector is one of the important steps in the preparation and implementation of disaster risk management programs. This study can thus be utilized to improve the preparedness of different sectors of the health system according to the latest
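
    To make the Boolean structure of such a search strategy concrete, the sketch below assembles a database query by OR-ing synonyms within each concept group and AND-ing the groups together. The grouping of terms is illustrative only and is not the exact strategy used in the review.

    ```python
    # Illustrative sketch: synonyms within a concept group are combined with OR,
    # and the concept groups are then combined with AND, mirroring the Boolean
    # strategy described in the abstract. The groupings below are assumptions.
    concept_groups = {
        "exercise":   ["simulation", "practice", "drill", "exercise"],
        "instrument": ["instrument", "tool", "questionnaire", "checklist", "scale"],
        "evaluation": ["evaluation", "assessment", "appraisal"],
        "event":      ["emergency", "disaster", "crisis", "hazard", "catastrophe"],
        "setting":    ["hospital", "prehospital", "health centers", "treatment centers"],
    }

    def build_query(groups):
        """Combine each group's terms with OR, then join the groups with AND."""
        or_blocks = ["(" + " OR ".join(f'"{term}"' for term in terms) + ")"
                     for terms in groups.values()]
        return " AND ".join(or_blocks)

    print(build_query(concept_groups))
    ```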

  19. Academic and Educational Development: Research, Evaluation and Changing Practice in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Macdonald, Ranald; Wisdom, James

    This practice-oriented book brings together research and evaluation approaches and supporting case studies from educational researchers and teachers. The emphasis is on changing practice in higher education and the research that underpins desirable development. Following an introduction, chapter 1 presents Educational Development Changing Practice…

  20. Surgery resident selection and evaluation. A critical incident study.

    PubMed

    Edwards, J C; Currie, M L; Wade, T P; Kaminski, D L

    1993-03-01

    This article reports a study of the process of selecting and evaluating general surgery residents. In personnel psychology terms, a job analysis of general surgery was conducted using the Critical Incident Technique (CIT). The researchers collected 235 critical incidents through structured interviews with 10 general surgery faculty members and four senior residents. The researchers then directed the surgeons in a two-step process of sorting the incidents into categories and naming the categories. The final essential categories of behavior to define surgical competence were derived through discussion among the surgeons until a consensus was formed. Those categories are knowledge/self-education, clinical performance, diagnostic skills, surgical skills, communication skills, reliability, integrity, compassion, organization skills, motivation, emotional control, and personal appearance. These categories were then used to develop an interview evaluation form for selection purposes and a performance evaluation form to be used throughout residency training. Thus a continuum of evaluation was established. The categories and critical incidents were also used to structure the interview process, which has demonstrated increased interview validity and reliability in many other studies. A handbook for structuring the interviews faculty members conduct with applicants was written, and an interview training session was held with the faculty. The process of implementation of the structured selection interviews is being documented currently through qualitative research.

  1. [Methods in health services research. The example of the evaluation of the German disease management programmes].

    PubMed

    Morfeld, M; Wirtz, M

    2006-02-01

    According to Pfaff's established definition, health services research analyses patients' paths through the institutions of the health care system. The focus is on the development, evaluation and implementation of innovative health care measures. By increasing quality, health services research strives to improve the efficacy and efficiency of the health care system. To allow for an appropriate evaluation, it is essential to differentiate between structure, process and outcome quality with respect to (1) the health care system in its entirety, (2) specific health care units and (3) processes of communication in different settings. Health services research draws on a large array of scientific disciplines, such as public health, medicine, the social sciences and social care. To manage its tasks adequately, a particular combination of instruments and methodological procedures is needed. Thus, diverse techniques of evaluation research, as well as special requirements for study designs and assessment procedures, are of vital importance. The example of the German disease management programmes illustrates the methodological requirements for a scientific evaluation.

  2. Cluster-randomized Studies in Educational Research: Principles and Methodological Aspects

    PubMed Central

    Dreyhaupt, Jens; Mayer, Benjamin; Keis, Oliver; Öchsner, Wolfgang; Muche, Rainer

    2017-01-01

    An increasing number of studies are being performed in educational research to evaluate new teaching methods and approaches. These studies could be performed more efficiently and deliver more convincing results if they more strictly applied and complied with recognized standards of scientific studies. Such an approach could substantially increase the quality in particular of prospective, two-arm (intervention) studies that aim to compare two different teaching methods. A key standard in such studies is randomization, which can minimize systematic bias in study findings; such bias may result if the two study arms are not structurally equivalent. If possible, educational research studies should also achieve this standard, although this is not yet generally the case. Some difficulties and concerns exist, particularly regarding organizational and methodological aspects. An important point to consider in educational research studies is that usually individuals cannot be randomized, because of the teaching situation, and instead whole groups have to be randomized (so-called “cluster randomization”). Compared with studies with individual randomization, studies with cluster randomization normally require (significantly) larger sample sizes and more complex methods for calculating sample size. Furthermore, cluster-randomized studies require more complex methods for statistical analysis. The consequence of the above is that a competent expert with respective special knowledge needs to be involved in all phases of cluster-randomized studies. Studies to evaluate new teaching methods need to make greater use of randomization in order to achieve scientifically convincing results. Therefore, in this article we describe the general principles of cluster randomization and how to implement these principles, and we also outline practical aspects of using cluster randomization in prospective, two-arm comparative educational research studies. PMID:28584874

  3. Cluster-randomized Studies in Educational Research: Principles and Methodological Aspects.

    PubMed

    Dreyhaupt, Jens; Mayer, Benjamin; Keis, Oliver; Öchsner, Wolfgang; Muche, Rainer

    2017-01-01

    An increasing number of studies are being performed in educational research to evaluate new teaching methods and approaches. These studies could be performed more efficiently and deliver more convincing results if they more strictly applied and complied with recognized standards of scientific studies. Such an approach could substantially increase the quality in particular of prospective, two-arm (intervention) studies that aim to compare two different teaching methods. A key standard in such studies is randomization, which can minimize systematic bias in study findings; such bias may result if the two study arms are not structurally equivalent. If possible, educational research studies should also achieve this standard, although this is not yet generally the case. Some difficulties and concerns exist, particularly regarding organizational and methodological aspects. An important point to consider in educational research studies is that usually individuals cannot be randomized, because of the teaching situation, and instead whole groups have to be randomized (so-called "cluster randomization"). Compared with studies with individual randomization, studies with cluster randomization normally require (significantly) larger sample sizes and more complex methods for calculating sample size. Furthermore, cluster-randomized studies require more complex methods for statistical analysis. The consequence of the above is that a competent expert with respective special knowledge needs to be involved in all phases of cluster-randomized studies. Studies to evaluate new teaching methods need to make greater use of randomization in order to achieve scientifically convincing results. Therefore, in this article we describe the general principles of cluster randomization and how to implement these principles, and we also outline practical aspects of using cluster randomization in prospective, two-arm comparative educational research studies.
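
    To illustrate why cluster randomization typically requires a (significantly) larger sample size, the sketch below applies the standard design-effect adjustment DEFF = 1 + (m − 1) × ICC for roughly equal cluster sizes; the class size and intraclass correlation values are hypothetical.

    ```python
    # Minimal sketch of the standard design-effect adjustment for cluster
    # randomization, assuming roughly equal cluster sizes. Numbers are hypothetical.
    import math

    def design_effect(cluster_size, icc):
        """DEFF = 1 + (m - 1) * ICC, where m is the average cluster size."""
        return 1.0 + (cluster_size - 1) * icc

    def cluster_adjusted_n(n_individual, cluster_size, icc):
        """Inflate an individually randomized sample size by the design effect."""
        return math.ceil(n_individual * design_effect(cluster_size, icc))

    # Example: 128 students per arm under individual randomization, taught in
    # classes of 25 with an intraclass correlation of 0.05
    print(design_effect(25, 0.05))            # 2.2
    print(cluster_adjusted_n(128, 25, 0.05))  # 282 students per arm
    ```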

  4. Building research capacity and productivity among advanced practice nurses: an evaluation of the Community of Practice model.

    PubMed

    Gullick, Janice G; West, Sandra H

    2016-03-01

    The aim of this study was to evaluate Wenger's Community of Practice as a framework for building research capacity and productivity. While research productivity is an expected domain in influential models of advanced nursing practice, internationally it remains largely unmet. Establishment of nursing research capacity precedes productivity and, consequently, there is a strong imperative to identify successful capacity-building models for nursing-focussed research in busy clinical environments. A prospective, longitudinal, qualitative descriptive design was used in this study. Bruyn's participant observation framed the evaluation of a Community of Practice comprising 25 advanced practice nurses. Data from focus groups, education evaluations, blog/email transcripts and field observations, collected between 2007 and 2014, were analysed using a qualitative descriptive method. The Community of Practice model invited differing levels of participation, allowed for evolution of the research community and created a rhythm of research-related interactions and enduring research relationships. Participants described the value of research for their patients and families and the significance of the developing research culture in providing richness to their practice and visibility of their work to multidisciplinary colleagues. Extensive examples of research dissemination and enrolment in doctoral programmes further confirmed this value. A Community of Practice framework is a powerful model for enabling research capacity and productivity, as evidenced by publication. In developing a solid foundation for a nursing research culture, it should be recognized that research skills, confidence and growth develop over an extended period of time and that success depends on skilled coordination and leadership. © 2015 John Wiley & Sons Ltd.

  5. Qualitative studies. Their role in medical research.

    PubMed Central

    Huston, P.; Rowan, M.

    1998-01-01

    OBJECTIVE: To define qualitative research in terms of its philosophical roots, the questions it addresses, its methods and analyses, and the type of results it can offer. DATA SOURCES: MEDLINE and CINAHL (Cumulative Index to Nursing and Allied Health Literature) databases were searched for the years January 1985 to April 1998. The search strategy consisted of "textword" terms that searched in the "title" field of both databases. Qualitative research and evaluation textbooks in health and the social sciences were also used. QUALITY OF EVIDENCE: The information on qualitative research is based on the most recent and valid evidence from the health and social science fields. MAIN MESSAGE: Qualitative research seeks to understand and interpret personal experience to explain social phenomena, including those related to health. It can address questions that quantitative research cannot, such as why people do not adhere to a treatment regimen or why a certain health care intervention is successful. It uses many methods of data collection, including participant observation, case studies, and interviews, and numerous approaches to data analysis that range from the quasistatistical to the intuitive and inductive. CONCLUSIONS: Qualitative research, a form of research completely different from quantitative research, can provide important insights into health-related phenomena and can enrich further research inquiries. PMID:9839063

  6. Species, habitats, society: an evaluation of research supporting EU's Natura 2000 network.

    PubMed

    Popescu, Viorel D; Rozylowicz, Laurentiu; Niculae, Iulian M; Cucu, Adina L; Hartel, Tibor

    2014-01-01

    The Natura 2000 network is regarded as one of the conservation success stories in the global effort to protect biodiversity. However, significant challenges remain in Natura 2000 implementation, owing to its rapid expansion, and lack of a coherent vision for its future. Scientific research is critical for identifying conservation priorities, setting management goals, and reconciling biodiversity protection and society in the complex political European landscape. Thus, there is an urgent need for a comprehensive evaluation of published Natura 2000 research to highlight prevalent research themes, disciplinary approaches, and spatial entities. We conducted a systematic review of 572 scientific articles and conference proceedings focused on Natura 2000 research, published between 1996 and 2014. We grouped these articles into 'ecological' and 'social and policy' categories. Using a novel application of network analysis of article keywords, we found that Natura 2000 research forms a cohesive small-world network, owing to the emphasis on ecological research (79% of studies, with a strong focus on spatial conservation planning), and the underrepresentation of studies addressing 'social and policy' issues (typically focused on environmental impact assessment, multi-level governance, agri-environment policy, and ecosystem services valuation). 'Ecological' and 'social and policy' research shared only general concepts (e.g., Natura 2000, Habitats Directive) suggesting a disconnection between these disciplines. The UK and the Mediterranean basin countries dominated Natura 2000 research, and there was a weak correlation between number of studies and proportion of national territory protected. Approximately 40% of 'social and policy' research and 26% of 'ecological' studies highlighted negative implications of Natura 2000, while 21% of studies found positive social and biodiversity effects. We emphasize the need for designing inter- and transdisciplinary research in order to

  7. Integrated Technology Rotor/Flight Research Rotor (ITR/FRR) concept definition study

    NASA Technical Reports Server (NTRS)

    Hughes, C. W.

    1983-01-01

    Studies were conducted by Hughes Helicopters, Inc. (HHI) for the Applied Technology Laboratory and Aeromechanics Laboratory, U.S. Army Research and Technology Laboratories (AVRADCOM) and the Ames Research Center, National Aeronautics and Space Administration (NASA). Results of predesign studies of advanced main rotor hubs, including bearingless designs, are presented in this report. In addition, the Government's rotor design goals and specifications were reviewed and evaluated. Hub concepts were designed and qualitatively evaluated in order to select the two most promising concepts for further development. Various flexure designs, control systems, and pitchcase designs were investigated during the initial phases of this study. The two designs selected for additional development were designated the V-strap and flat-strap cruciform hubs. These hubs were designed for a four bladed rotor and were sized for 18,400 pounds gross weight with the same diameter (62 feet) and solidity (23 inch chord) as the existing rotor on the Rotor Systems Research Aircraft (RSRA).

  8. Evaluating the application of research-based guidance to the design of an emergency preparedness leaflet.

    PubMed

    Hellier, E; Edworthy, J; Newbold, L; Titchener, K; Tucker, M; Gabe-Thomas, E

    2014-09-01

    Guidelines for the design of emergency communications were derived from primary research and interrogation of the literature. The guidelines were used to re-design a nuclear emergency preparedness leaflet routinely distributed to households in the local area. Pre-test measures of memory for, and self-reported understanding of, nuclear safety information were collected. The findings revealed high levels of non-receipt of the leaflet, and among those who did receive it, memory for safety advice was poor. Subjective evaluations of the trial leaflet suggested that it was preferred and judged easier to understand than the original. Objective measures of memory for the two leaflets were also recorded, once after the study period, and again one week or four weeks later. Memory for the advice was better, at all time periods, when participants studied the trial leaflet. The findings showcase evaluation of emergency preparedness literature and suggest that extant research findings can be applied to the design of communications to improve memory and understandability. Studies are described that showcase the use of research-based guidelines to design emergency communications and provide both subjective and objective data to support designing emergency communications in this way. In addition, the research evaluates the effectiveness of emergency preparedness leaflets that are routinely distributed to households. This work is of relevance to academics interested in risk communication and to practitioners involved in civil protection and emergency preparedness. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. Research Study

    ERIC Educational Resources Information Center

    Glick, Ashley

    2010-01-01

    Background: Action research conducted in my 2nd grade classroom in the Buffalo School District. I examined three areas of interest and tried to draw some conclusions related to behavior management. Purpose: The purpose of this study is to determine how implementing procedures, rules, and consequences will help improve student behavior. Research Design: Descriptive;…

  10. A Theoretical and Methodological Evaluation of Leadership Research.

    ERIC Educational Resources Information Center

    Lashbrook, Velma J.; Lashbrook, William B.

    This paper isolates some of the strengths and weaknesses of leadership research by evaluating it from both a theoretical and methodological perspective. The seven theories or approaches examined are: great man, trait, situational, style, functional, social influence, and interaction positions. General theoretical, conceptual, and measurement…

  11. Research Project Evaluation-Learnings from the PATHWAYS Project Experience.

    PubMed

    Galas, Aleksander; Pilat, Aleksandra; Leonardi, Matilde; Tobiasz-Adamczyk, Beata

    2018-05-25

    Every research project faces challenges in achieving its goals in a timely and effective manner. The purpose of this paper is to present a project evaluation methodology developed during the implementation of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector project (the EU PATHWAYS Project). The PATHWAYS project involved multiple countries and multi-cultural aspects of re/integrating chronically ill patients into the labor markets of different countries. This paper describes key project evaluation issues, including (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring through a project evaluation tool that assesses structure and resources, process, management and communication, achievements, and outcomes. The project used a mixed evaluation approach that included a Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) (SWOT) analysis. A methodology for the evaluation of longitudinal EU projects is described. The evaluation process made it possible to highlight strengths and weaknesses; it showed good coordination and communication between project partners as well as some key issues, such as the need for a shared glossary covering the areas investigated by the project, problems related to involving stakeholders from outside the project, and issues with timing. Numerical SWOT analysis showed improvement in project performance over time. The proportion of project partners participating in the evaluation varied from 100% to 83.3%. There is a need for a structured evaluation process in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential for every multidisciplinary research project.

  12. The International Endometriosis Evaluation Program (IEEP Study) - A Systematic Study for Physicians, Researchers and Patients.

    PubMed

    Burghaus, S; Fehm, T; Fasching, P A; Blum, S; Renner, S K; Baier, F; Brodkorb, T; Fahlbusch, C; Findeklee, S; Häberle, L; Heusinger, K; Hildebrandt, T; Lermann, J; Strahl, O; Tchartchian, G; Bojahr, B; Porn, A; Fleisch, M; Reicke, S; Füger, T; Hartung, C-P; Hackl, J; Beckmann, M W; Renner, S P

    2016-08-01

    Endometriosis is a heterogeneous disease characterized by a range of different presentations. It is usually diagnosed when patients present with pain and/or infertility, but it has also been diagnosed in asymptomatic patients. Because of the different diagnostic approaches and diverse therapies, time to diagnosis can vary considerably and the definitive diagnosis may be delayed, with some cases not being diagnosed for several years. Endometriosis patients have many unmet needs. A systematic registration and follow-up of endometriosis patients could be useful to obtain an insight into the course of the disease. The validation of biomarkers could contribute to the development of diagnostic and predictive tests which could help select patients for surgical assessment earlier and offer better predictions about patients who might benefit from medical, surgical or other interventions. The aim is also to obtain a better understanding of the etiology, pathogenesis and progression of the disease. To do this, an online multicenter documentation system was introduced to facilitate the establishment of a prospective multicenter case-control study, the IEEP (International Endometriosis Evaluation Program) study. We report here on the first 696 patients with endometriosis included in the program between June 2013 and June 2015. A documentation system was created, and the structure and course of the study were mapped out with regard to data collection and the collection of biomaterials. The documentation system permits the history and clinical data of patients with endometriosis to be recorded. The IEEP combines this information with biomaterials and uses it for scientific studies. The recorded data can also be used to evaluate clinical quality control measures such as the certification parameters used by the EEL (European Endometriosis League) to assess certified endometriosis centers.

  13. Practical Guidelines for Evaluating Sampling Designs in Survey Studies.

    ERIC Educational Resources Information Center

    Fan, Xitao; Wang, Lin

    The popularity of sample surveys in evaluation and research makes it necessary for consumers to tell a good survey from a poor one. Several sources were identified that gave advice on how to evaluate a sample design used in a survey study. The sources are either too limited or too extensive to be useful practically. The purpose of this paper is to…

  14. An Official American Thoracic Society Research Statement: A Research Framework for Pulmonary Nodule Evaluation and Management

    PubMed Central

    Horeweg, Nanda; Jett, James R.; Midthun, David E.; Powell, Charles A.; Wiener, Renda Soylemez; Wisnivesky, Juan P.; Gould, Michael K.

    2015-01-01

    Background: Pulmonary nodules are frequently detected during diagnostic chest imaging and as a result of lung cancer screening. Current guidelines for their evaluation are largely based on low-quality evidence, and patients and clinicians could benefit from more research in this area. Methods: In this research statement from the American Thoracic Society, a multidisciplinary group of clinicians, researchers, and patient advocates reviewed available evidence for pulmonary nodule evaluation, characterized six focus areas to direct future research efforts, and identified fundamental gaps in knowledge and strategies to address them. We did not use formal mechanisms to prioritize one research area over another or to achieve consensus. Results: There was widespread agreement that novel tests (including novel imaging tests and biopsy techniques, biomarkers, and prognostic models) may improve diagnostic accuracy for identifying cancerous nodules. Before they are used in clinical practice, however, better evidence is needed to show that they improve more distal outcomes of importance to patients. In addition, the pace of research and the quality of clinical care would be improved by the development of registries that link demographic and nodule characteristics with patient-level outcomes. Methods to share data from registries are also necessary. Conclusions: This statement may help researchers to develop impactful and innovative research projects and enable funders to better judge research proposals. We hope that it will accelerate the pace and increase the efficiency of discovery to improve the quality of care for patients with pulmonary nodules. PMID:26278796

  15. Involving the public in epidemiological public health research: a qualitative study of public and stakeholder involvement in evaluation of a population-wide natural policy experiment

    PubMed Central

    Nylén, Lotta; Burström, Bo; Whitehead, Margaret

    2018-01-01

    Background Public involvement in research is considered good practice by European funders; however, evidence of its research impact is sparse, particularly in relation to large-scale epidemiological research. Objectives To explore what difference public and stakeholder involvement made to the interpretation of findings from an evaluation of a natural policy experiment to influence the wider social determinants of health: ‘Flexicurity’. Setting Stockholm County, Sweden. Participants Members of the public from different occupational groups represented by blue-collar and white-collar trade union representatives. Also, members of three stakeholder groups: the Swedish national employment agency; an employers’ association and politicians sitting on a national labour market committee. Total: 17 participants. Methods Qualitative study of process and outcomes of public and stakeholder participation in four focused workshops on the interpretation of initial findings from the flexicurity evaluation. Outcome measures New insights from participants benefiting the interpretation of our research findings or conceptualisation of future research. Results Participants sensed more drastic and nuanced change in the Swedish welfare system over recent decades than was evident from our literature reviews and policy analysis. They also elaborated hidden developments in the Swedish labour market that were increasingly leading to ‘insiders’ and ‘outsiders’, with differing experiences and consequences for financial and job security. Their explanation of the differential effects of the various collective agreements for different occupational groups was new and raised further potential research questions. Their first-hand experience provided new insights into how changes to the social protection system were contributing to the increasing trends in poverty among unemployed people with limiting long-standing illness. The politicians provided further reasoning behind some of the

  16. Evaluating Comparative Effectiveness Research Priorities for Care Coordination in Chronic Obstructive Pulmonary Disease: A Community-Based eDelphi Study

    PubMed Central

    Alber, Julia; Paige, Samantha; Castro, Daniela; Singh, Briana

    2015-01-01

    .92, SD 1.67); (3) pulmonary rehabilitation as a model for care (mean 3.72; SD 1.93); (4) quality of care coordination (mean 4.12, SD 2.41); and (5) comprehensive COPD patient education (mean 4.27, SD 2.38). Stakeholder comments on the relative importance of these care coordination topics primarily addressed the importance of comparing strategies for COPD symptom management and evaluating new methods for patient-provider communication. Approximately one half of the virtual panel assembled indicated that a Web-based stakeholder engagement network could enable more online community meetings (n=19/37, 51%) and facilitate more opportunities to suggest, comment on, and vote for new CER ideas in COPD (n=18/37, 49%). Conclusions Members of this unique virtual advisory board engaged in a structured Web-based communication process that identified the most important community-specific COPD care coordination research topics and questions. Findings from this study support the need for more CER that evaluates quality of care measures used to assess the delivery of treatments and interventions among medically underserved patients with COPD. PMID:26268741

  17. Inter-rater agreement in evaluation of disability: systematic review of reproducibility studies.

    PubMed

    Barth, Jürgen; de Boer, Wout E L; Busse, Jason W; Hoving, Jan L; Kedzia, Sarah; Couban, Rachel; Fischer, Katrin; von Allmen, David Y; Spanjer, Jerry; Kunz, Regina

    2017-01-25

    Objectives: To explore agreement among healthcare professionals assessing eligibility for work disability benefits. Design: Systematic review and narrative synthesis of reproducibility studies. Data sources: Medline, Embase, and PsycINFO searched up to 16 March 2016, without language restrictions, and review of bibliographies of included studies. Eligibility criteria: Observational studies investigating reproducibility among healthcare professionals performing disability evaluations using a global rating of working capacity and reporting inter-rater reliability by a statistical measure or descriptively. Studies could be conducted in insurance settings, where decisions on ability to work include normative judgments based on legal considerations, or in research settings, where decisions on ability to work disregard normative considerations. Teams of paired reviewers identified eligible studies, appraised their methodological quality and generalisability, and abstracted results with pretested forms. As heterogeneity of research designs and findings impeded a quantitative analysis, a descriptive synthesis stratified by setting (insurance or research) was performed. Results: From 4562 references, 101 full text articles were reviewed. Of these, 16 studies conducted in an insurance setting and seven in a research setting, performed in 12 countries, met the inclusion criteria. Studies in the insurance setting were conducted with medical experts assessing claimants who were actual disability claimants or played by actors, hypothetical cases, or short written scenarios. Conditions were mental (n=6, 38%), musculoskeletal (n=4, 25%), or mixed (n=6, 38%). Applicability of findings from studies conducted in an insurance setting to real life evaluations ranged from generalisable (n=7, 44%) and probably generalisable (n=3, 19%) to probably not generalisable (n=6, 37%). Median inter-rater reliability among experts was 0.45 (range intraclass correlation coefficient 0.86 to κ −0.10). Inter-rater reliability was poor in six studies (37
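
    For readers unfamiliar with the agreement statistics quoted above (κ and the intraclass correlation coefficient), the sketch below computes Cohen's kappa for two raters making a binary eligibility decision; the ratings are invented for illustration and do not come from the review.

    ```python
    # Minimal sketch of Cohen's kappa for two raters; the ratings are invented.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters over the same cases."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected (chance) agreement from each rater's marginal frequencies
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        categories = set(rater_a) | set(rater_b)
        expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
        return (observed - expected) / (1 - expected)

    a = ["eligible", "eligible", "not", "not", "eligible", "not", "eligible", "not"]
    b = ["eligible", "not",      "not", "not", "eligible", "not", "not",      "not"]
    print(round(cohens_kappa(a, b), 2))  # 0.5
    ```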

  18. Those Responsible for Approving Research Studies Have Poor Knowledge of Research Study Design: a Knowledge Assessment of Institutional Review Board Members.

    PubMed

    Mhaskar, Rahul; Pathak, Elizabeth Barnett; Wieten, Sarah; Guterbock, Thomas M; Kumar, Ambuj; Djulbegovic, Benjamin

    2015-08-01

    Institutional Review Board (IRB) members have a duty to protect the integrity of the research process, but little is known about their basic knowledge of clinical research study designs. A nationwide sample of IRB members from major US research universities completed a web-based questionnaire consisting of 11 questions focusing on basic knowledge about clinical research study designs. It included questions about randomized controlled trials (RCTs) and other observational research study designs. Potential predictors (age, gender, educational attainment, type of IRB, current IRB membership, years of IRB service, clinical research experience, and self-identification as a scientist) of incorrect answers were evaluated using multivariate logistic regression models. 148 individuals from 36 universities participated. The majority of participants, 68.9% (102/148), were holding a medical or doctoral degree. Overall, only 26.5% (39/148) of participants achieved a perfect score of 11. On the six-question subset addressing RCTs, 46.6% (69/148) had a perfect score. Most individual questions, and the summary model of overall quiz score (perfect vs. not perfect), revealed no significant predictors - indicating that knowledge deficits were not limited to specific subgroups of IRB members. For the RCT knowledge score there was one significant predictor: compared with MDs, IRB members without a doctoral degree were three times as likely to answer at least one RCT question incorrectly (Odds Ratio: 3.00, 95% CI 1.10-8.20). However, even among MD IRB members, 34.1% (14/41) did not achieve a perfect score on the six RCT questions. This first nationwide study of IRB member knowledge about clinical research study designs found significant knowledge deficits. Knowledge deficits were not limited to laypersons or community advocate members of IRBs, as previously suggested. Akin to widespread ethical training requirements for clinical researchers, IRB members should undergo systematic

  19. Research, Teaching and Performance Evaluation in Academia: The Salience of Quality

    ERIC Educational Resources Information Center

    Cadez, Simon; Dimovski, Vlado; Zaman Groff, Maja

    2017-01-01

    The workload of most academics involves two main activities: research and teaching. Despite the dual nature of the work, career advancement usually chiefly depends on research performance. Since academics are rational actors, warnings are beginning to emerge that current predominantly research-based performance evaluation systems may be…

  20. A Participatory Action Research Approach To Evaluating Inclusive School Programs.

    ERIC Educational Resources Information Center

    Dymond, Stacy K.

    2001-01-01

    This article proposes a model for evaluating inclusive schools. Key elements of the model are inclusion of stakeholders in the evaluation process through a participatory action research approach, analysis of program processes and outcomes, use of multiple methods and measures, and obtaining perceptions from diverse stakeholder groups. (Contains…

  1. Sustaining and Improving Study Abroad Experiences Through Comparative Evaluation.

    PubMed

    Johanson, Linda S

    Researchers have related participation in study abroad experiences to many positive outcomes for nursing students; however, educators are faced with the task of not only developing meaningful study abroad opportunities but sustaining and improving them as well. Educators can evaluate repeat study abroad programs by comparing experiences, looking for trends, and conjecturing rationales. To illustrate this process, an example of a study abroad opportunity that has been repeated over 11 years is presented. The first six years have been compared to the most recent five years, revealing three categories of change for evaluation and the resulting course improvements.

  2. A Different Approach to the Evaluation of Research Libraries. Research Brief No. 6.

    ERIC Educational Resources Information Center

    Council on Library and Information Resources, Washington, DC.

    In 1996, the Council on Library and Information Resources (CLIR) supported a project at Rutgers University that applied new economic theories to measuring how well research libraries fulfill their service roles. This summary draws on the original proposal and the final report from the project's directors. The evaluation of library performance is…

  3. Evaluation of Risk Factors for Severe Pneumonia in Children: The Pneumonia Etiology Research for Child Health Study

    PubMed Central

    Deloria-Knoll, Maria; Feikin, Daniel R.; DeLuca, Andrea N.; Driscoll, Amanda J.; Moïsi, Jennifer C.; Johnson, Hope L.; Murdoch, David R.; O’Brien, Katherine L.; Levine, Orin S.; Scott, J. Anthony G.

    2012-01-01

    As a case-control study of etiology, the Pneumonia Etiology Research for Child Health (PERCH) project also provides an opportunity to assess the risk factors for severe pneumonia in hospitalized children at 7 sites. We identified relevant risk factors by literature review and iterative expert consultation. Decisions for inclusion in PERCH were based on comparability to published data, analytic plans, data collection costs and logistic feasibility, including interviewer time and subject fatigue. We aimed to standardize questions at all sites, but significant variation in the economic, cultural, and geographic characteristics of sites made it difficult to achieve this objective. Despite these challenges, the depth of the evaluation of multiple risk factors across the breadth of the PERCH sites should furnish new and valuable information about the major risk factors for childhood severe and very severe pneumonia, including risk factors for pneumonia caused by specific etiologies, in developing countries. PMID:22403226

  4. Evaluation of risk factors for severe pneumonia in children: the Pneumonia Etiology Research for Child Health study.

    PubMed

    Wonodi, Chizoba B; Deloria-Knoll, Maria; Feikin, Daniel R; DeLuca, Andrea N; Driscoll, Amanda J; Moïsi, Jennifer C; Johnson, Hope L; Murdoch, David R; O'Brien, Katherine L; Levine, Orin S; Scott, J Anthony G

    2012-04-01

    As a case-control study of etiology, the Pneumonia Etiology Research for Child Health (PERCH) project also provides an opportunity to assess the risk factors for severe pneumonia in hospitalized children at 7 sites. We identified relevant risk factors by literature review and iterative expert consultation. Decisions for inclusion in PERCH were based on comparability to published data, analytic plans, data collection costs and logistic feasibility, including interviewer time and subject fatigue. We aimed to standardize questions at all sites, but significant variation in the economic, cultural, and geographic characteristics of sites made it difficult to achieve this objective. Despite these challenges, the depth of the evaluation of multiple risk factors across the breadth of the PERCH sites should furnish new and valuable information about the major risk factors for childhood severe and very severe pneumonia, including risk factors for pneumonia caused by specific etiologies, in developing countries.

  5. Inter-rater agreement in evaluation of disability: systematic review of reproducibility studies

    PubMed Central

    Barth, Jürgen; de Boer, Wout E L; Busse, Jason W; Hoving, Jan L; Kedzia, Sarah; Couban, Rachel; Fischer, Katrin; von Allmen, David Y; Spanjer, Jerry

    2017-01-01

    Objectives To explore agreement among healthcare professionals assessing eligibility for work disability benefits. Design Systematic review and narrative synthesis of reproducibility studies. Data sources Medline, Embase, and PsycINFO searched up to 16 March 2016, without language restrictions, and review of bibliographies of included studies. Eligibility criteria Observational studies investigating reproducibility among healthcare professionals performing disability evaluations using a global rating of working capacity and reporting inter-rater reliability by a statistical measure or descriptively. Studies could be conducted in insurance settings, where decisions on ability to work include normative judgments based on legal considerations, or in research settings, where decisions on ability to work disregard normative considerations. Teams of paired reviewers identified eligible studies, appraised their methodological quality and generalisability, and abstracted results with pretested forms. As heterogeneity of research designs and findings impeded a quantitative analysis, a descriptive synthesis stratified by setting (insurance or research) was performed. Results From 4562 references, 101 full text articles were reviewed. Of these, 16 studies conducted in an insurance setting and seven in a research setting, performed in 12 countries, met the inclusion criteria. Studies in the insurance setting were conducted with medical experts assessing claimants who were actual disability claimants or played by actors, hypothetical cases, or short written scenarios. Conditions were mental (n=6, 38%), musculoskeletal (n=4, 25%), or mixed (n=6, 38%). Applicability of findings from studies conducted in an insurance setting to real life evaluations ranged from generalisable (n=7, 44%) and probably generalisable (n=3, 19%) to probably not generalisable (n=6, 37%). Median inter-rater reliability among experts was 0.45 (range intraclass correlation coefficient 0.86 to κ−0

  6. The case for applying an early-lifecycle technology evaluation methodology to comparative evaluation of requirements engineering research

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.

    2003-01-01

    The premise of this paper is that there is a useful analogy between evaluation of proposed problem solutions and evaluation of requirements engineering research itself. Both of these application areas face the challenges of evaluation early in the lifecycle, of the need to consider a wide variety of factors, and of the need to combine inputs from multiple stakeholders in making these evaluations and subsequent decisions.

  7. An Evaluation of h-Index as a Measure of Research Productivity Among Canadian Academic Plastic Surgeons.

    PubMed

    Hu, Jiayi; Gholami, Arian; Stone, Nicholas; Bartoszko, Justyna; Thoma, Achilleas

    2018-02-01

    Evaluation of research productivity among plastic surgeons can be complex. The Hirsch index (h-index) was recently introduced to evaluate both the quality and quantity of one's research activity. It has been proposed to be valuable in assessing promotions and grant funding within academic medicine, including plastic surgery. Our objective is to evaluate research productivity among Canadian academic plastic surgeons using the h-index. A list of Canadian academic plastic surgeons was obtained from websites of academic training programs. The h-index was retrieved using the Scopus database. Relevant demographic and academic factors were collected and their effects on the h-index were analyzed using the t test and Wilcoxon Mann-Whitney U test. Nominal and categorical variables were analyzed using the χ2 test and 1-way analysis of variance. Univariate and multivariate models were built a priori. All P values were 2-sided, and P < .05 was considered to be significant. Our study on Canadian plastic surgeons involved 175 surgeons with an average h-index of 7.6. Over 80% of the surgeons were male. Both univariable and multivariable analyses showed that graduate degree (P < .0001), academic rank (P = .03), and years in practice (P < .0001) were positively correlated with the h-index. Limitations of the study include the fact that the Scopus database and the websites of training programs were not always up to date. The h-index is a novel tool for evaluating research productivity in academic medicine, and this study shows that the h-index can also serve as a useful metric for measuring research productivity in the Canadian plastic surgery community. Plastic surgeons would be wise to familiarize themselves with the h-index concept and should consider using it as an adjunct to existing metrics such as total publication number.
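
    For context, an author's h-index is the largest h such that h of their papers have each been cited at least h times. The sketch below computes it from a list of per-paper citation counts; the counts shown are invented and do not describe any surgeon in the study.

    ```python
    # Minimal sketch: compute an h-index from per-paper citation counts.
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 18, 12, 9, 8, 7, 7, 3, 2, 1, 0]))  # 7
    ```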

  8. Evaluating patient and stakeholder engagement in research: moving from theory to practice.

    PubMed

    Esmail, Laura; Moore, Emily; Rein, Alison

    2015-03-01

    Despite the growing demand for research that engages stakeholders, there is limited evidence in the literature to demonstrate its value - or return on investment. This gap indicates a general lack of evaluation of engagement activities. To adequately inform engagement activities, we need to further investigate the dividends of engaged research, and how to evaluate these effects. This paper synthesizes the literature on hypothesized impacts of engagement, shares what has been evaluated and identifies steps needed to reduce the gap between engagement's promises and the underlying evidence supporting its practice. This assessment provides explicit guidance for better alignment of engagement's promised benefits with evaluation efforts and identifies specific areas for development of evaluative measures and better reporting processes.

  9. A Cluster-Randomized Trial of Restorative Practices: An Illustration to Spur High-Quality Research and Evaluation.

    PubMed

    Acosta, Joie D; Chinman, Matthew; Ebener, Patricia; Phillips, Andrea; Xenakis, Lea; Malone, Patrick S

    2016-01-01

    Restorative Practices in schools lack rigorous evaluation studies. As an example of rigorous school-based research, this paper describes the first randomized control trial of restorative practices to date, the Study of Restorative Practices. It is a 5-year, cluster-randomized controlled trial (RCT) of the Restorative Practices Intervention (RPI) in 14 middle schools in Maine to assess whether RPI impacts both positive developmental outcomes and problem behaviors and whether the effects persist during the transition from middle to high school. The two-year RPI intervention began in the 2014-2015 school year. The study's rationale and theoretical concerns are discussed along with methodological concerns including teacher professional development. The theoretical rationale and description of the methods from this study may be useful to others conducting rigorous research and evaluation in this area.

  10. Compendium of research and evaluations in traffic safety published

    DOT National Transportation Integrated Search

    1996-05-01

    The National Highway Traffic Safety Administration's (NHTSA's) Office of Program Development and Evaluation (OPDE) conducts research projects that investigate human attitudes, behaviors, and failures as they relate to motor vehicle crashes. OPDE focu...

  11. Program Evaluation. AAHE-ERIC/Higher Education Research Report No. 2, 1980.

    ERIC Educational Resources Information Center

    Feasley, Charles E.

    The major distinctions between evaluation and research are examined, the chief differences being the intent and type of criteria against which judgments are made. Conceptualization of the evaluation process in higher education is discussed on two levels. A collection of nine similes for understanding evaluation is examined in terms of major…

  12. Systematically Retrieving Research: A Case Study Evaluating Seven Databases

    ERIC Educational Resources Information Center

    Taylor, Brian; Wylie, Emma; Dempster, Martin; Donnelly, Michael

    2007-01-01

    Objective: Developing the scientific underpinnings of social welfare requires effective and efficient methods of retrieving relevant items from the increasing volume of research. Method: We compared seven databases by running the nearest equivalent search on each. The search topic was chosen for relevance to social work practice with older people.…

  13. A Framework for Evaluating and Enhancing Alignment in Self-Regulated Learning Research

    PubMed Central

    Dent, Amy L.; Hoyle, Rick H.

    2015-01-01

    We discuss the articles of this special issue with reference to an important yet previously only implicit dimension of study quality: alignment across the theoretical and methodological decisions that collectively define an approach to self-regulated learning. Integrating and extending work by leaders in the field, we propose a framework for evaluating alignment in the way self-regulated learning research is both conducted and reported. Within this framework, the special issue articles provide a springboard for discussing methodological promises and pitfalls of increasingly sophisticated research on the dynamic, contingent, and contextualized features of self-regulated learning. PMID:25825589

  14. Using theories of change to design monitoring and evaluation of community engagement in research: experiences from a research institute in Malawi

    PubMed Central

    Gooding, Kate; Makwinja, Regina; Nyirenda, Deborah; Vincent, Robin; Sambakunsi, Rodrick

    2018-01-01

    Background: Evaluation of community and public engagement in research is important to deepen understanding of how engagement works and to enhance its effectiveness. Theories of change have been recommended for evaluating community engagement, for their ability to make explicit intended outcomes and understandings of how engagement activities contribute to these outcomes. However, there are few documented examples of using theories of change for evaluation of engagement. This article reports experience of using theories of change to develop a framework for evaluating community engagement in research at a clinical research organisation in Malawi. We describe the steps used to develop theories of change, and the way theories of change were used to design data collection plans. Based on our experience, we reflect on the advantages and challenges of the theory of change approach. Methods: The theories of change and evaluation framework were developed through a series of workshops and meetings between engagement practitioners, monitoring and evaluation staff, and researchers. We first identified goals for engagement, then used ‘so that’ chains to clarify pathways and intermediate outcomes between engagement activities and goals. Further meetings were held to refine initial theories of change, identify priority information needs, and define feasible evaluation methods. Results: The theory of change approach had several benefits. In particular, it helped to construct an evaluation framework focused on relevant outcomes and not just activities. The process of reflecting on intended goals and pathways also helped staff to review the design of engagement activities. Challenges included practical considerations around time to consider evaluation plans among practitioners (a challenge for evaluation more generally regardless of method), and more fundamental difficulties related to identifying feasible and agreed outcomes. Conclusions: These experiences from Malawi provide

  15. Using theories of change to design monitoring and evaluation of community engagement in research: experiences from a research institute in Malawi.

    PubMed

    Gooding, Kate; Makwinja, Regina; Nyirenda, Deborah; Vincent, Robin; Sambakunsi, Rodrick

    2018-01-01

    Background: Evaluation of community and public engagement in research is important to deepen understanding of how engagement works and to enhance its effectiveness. Theories of change have been recommended for evaluating community engagement, for their ability to make explicit intended outcomes and understandings of how engagement activities contribute to these outcomes. However, there are few documented examples of using theories of change for evaluation of engagement. This article reports experience of using theories of change to develop a framework for evaluating community engagement in research at a clinical research organisation in Malawi. We describe the steps used to develop theories of change, and the way theories of change were used to design data collection plans. Based on our experience, we reflect on the advantages and challenges of the theory of change approach. Methods: The theories of change and evaluation framework were developed through a series of workshops and meetings between engagement practitioners, monitoring and evaluation staff, and researchers. We first identified goals for engagement, then used 'so that' chains to clarify pathways and intermediate outcomes between engagement activities and goals. Further meetings were held to refine initial theories of change, identify priority information needs, and define feasible evaluation methods. Results: The theory of change approach had several benefits. In particular, it helped to construct an evaluation framework focused on relevant outcomes and not just activities. The process of reflecting on intended goals and pathways also helped staff to review the design of engagement activities. Challenges included practical considerations around time to consider evaluation plans among practitioners (a challenge for evaluation more generally regardless of method), and more fundamental difficulties related to identifying feasible and agreed outcomes. Conclusions: These experiences from Malawi provide

  16. Evaluating the ethical acceptability of animal research.

    PubMed

    Bout, Henriëtte J; Fentener van Vlissingen, J Martje; Karssing, Edgar D

    2014-11-01

    The ethical acceptability of animal research is typically evaluated on a case-by-case basis. Legislation such as Directive 2010/63/EU on the protection of animals used for scientific purposes provides guidance for ethical evaluation of animal use proposals but does not dictate the outcome, leaving this determination to the ethical review committees of individual institutions. The authors assess different ethics models and how these are reflected in the guidelines of Directive 2010/63/EU. They also describe a matrix for carrying out harm-benefit analyses of animal use proposals, which they identified by examining the practices of three ethical review committees in the Netherlands. Finally, they discuss how this matrix can be applied by ethical review committees at other institutions.

  17. An Evaluative Study of the Nurse Education Program. Research Report Number 82-1.

    ERIC Educational Resources Information Center

    Capoor, Madan

    An evaluation of the nurse education program at Middlesex County College (MCC) was conducted in response to an increasing dropout rate and a decline in the passing rate of program graduates on the Licensing Board Examination (LBE). The study focused on the relationship between student background and performance and between student performance in…

  18. The Evaluation of an Early Childhood Teacher Preparation Program: An Action Research Project

    ERIC Educational Resources Information Center

    Ragno, Kerry Sullivan

    2013-01-01

    The purpose of this dissertation was to evaluate the effectiveness of an Early Childhood Development Associate of Applied Science (AAS) degree program at one community college as part of an ongoing action research project. Prior to this dissertation study, external and internal barriers prevented the associate degree program stakeholders from…

  19. Integrating Human Factors Engineering and Information Processing Approaches to Facilitate Evaluations in Criminal Justice Technology Research.

    PubMed

    Salvemini, Anthony V; Piza, Eric L; Carter, Jeremy G; Grommon, Eric L; Merritt, Nancy

    2015-06-01

    Evaluations are routinely conducted by government agencies and research organizations to assess the effectiveness of technology in criminal justice. Interdisciplinary research methods are salient to this effort. Technology evaluations face a number of challenges, including (1) the need to facilitate effective communication between social science researchers, technology specialists, and practitioners, (2) the need to better understand procedural and contextual aspects of a given technology, and (3) the need to generate findings that can be readily used for decision making and policy recommendations. Process and outcome evaluations of technology can be enhanced by integrating concepts from human factors engineering and information processing. This systemic approach, which focuses on the interaction between humans, technology, and information, enables researchers to better assess how a given technology is used in practice. Examples are drawn from complex technologies currently deployed within the criminal justice system where traditional evaluations have primarily focused on outcome metrics. Although this evidence-based approach has significant value, it can fail to fully account for the human and structural complexities that shape technology operations. Guiding principles for technology evaluations are described for identifying and defining key study metrics, for facilitating communication within an interdisciplinary research team, and for understanding the interaction between users, technology, and information. The approach posited here can also enable researchers to better assess factors that may facilitate or degrade the operational impact of the technology and answer fundamental questions concerning whether the technology works as intended, at what level, and at what cost. © The Author(s) 2015.

  20. The next Stages in Researching Water Fluoridation: Evaluation and Surveillance

    ERIC Educational Resources Information Center

    Downer, Martin C.; Blinkhorn, Anthony S.

    2007-01-01

    Objective: (1) to provide a commentary on a conference held at the University of Manchester entitled Researching Water Fluoridation: Evaluation and Surveillance; (2) to synthesize from the proceedings of the meeting suggestions for future research and public health surveillance. Method: The main points and problematic issues raised by the speakers…

  1. Re-Mediating Practitioners' Practice for Equity in Higher Education: Evaluating the Effectiveness of Action Research

    ERIC Educational Resources Information Center

    Vines, Erin

    2012-01-01

    This study examines the influence of action research on California community college practitioners' attitudes, beliefs, and behavior using the Center for Urban Education's (CUE) Equity Scorecard tools and process. This developmental evaluation study began March 2011 and concluded April 2012. The pseudonym of the field site studied is Las Flores…

  2. The Research and Evaluation of Serious Games: Toward a Comprehensive Methodology

    ERIC Educational Resources Information Center

    Mayer, Igor; Bekebrede, Geertje; Harteveld, Casper; Warmelink, Harald; Zhou, Qiqi; van Ruijven, Theo; Lo, Julia; Kortmann, Rens; Wenzler, Ivo

    2014-01-01

    The authors present the methodological background to and underlying research design of an ongoing research project on the scientific evaluation of serious games and/or computer-based simulation games (SGs) for advanced learning. The main research questions are: (1) what are the requirements and design principles for a comprehensive social…

  3. Defining and Teaching Evaluative Thinking: Insights From Research on Critical Thinking

    ERIC Educational Resources Information Center

    Buckley, Jane; Archibald, Thomas; Hargraves, Monica; Trochim, William M.

    2015-01-01

    Evaluative thinking (ET) is an increasingly important topic in the field of evaluation, particularly among people involved in evaluation capacity building (ECB). Yet it is a construct in need of clarification, especially if it is to be meaningfully discussed, promoted, and researched. To that end, we propose that ET is essentially critical…

  4. Building research and evaluation capacity in population health: the NSW Health approach.

    PubMed

    Edwards, Barry; Stickney, Beth; Milat, Andrew; Campbell, Danielle; Thackway, Sarah

    2016-02-01

    Issue addressed: An organisational culture that values and uses research and evaluation (R&E) evidence to inform policy and practice is fundamental to improving health outcomes. The 2016 NSW Government Program Evaluation Guidelines recommend investment in training and development to improve evaluation capacity. The purpose of this paper is to outline the approaches taken by the NSW Ministry of Health to develop R&E capacity and assess these against existing models of practice. Method: The Ministry of Health's Centre for Epidemiology and Evidence (CEE) takes an evidence-based approach to building R&E capacity in population health. Strategies are informed by: the NSW Population Health Research Strategy, R&E communities of practice across the Ministry and health Pillar agencies and a review of the published evidence on evaluation capacity building (ECB). An internal survey is conducted biennially to monitor research activity within the Ministry's Population and Public Health Division. One representative from each of the six centres that make up the Division coordinates completion of the survey by relevant staff members for their centre. Results: The review identified several ECB success factors including: implementing a tailored multifaceted approach; an organisational commitment to R&E; and offering experiential training and ongoing technical support to the workforce. The survey of research activity found that the Division funded a mix of research assets, research funding schemes, research centres and commissioned R&E projects. CEE provides technical advice and support services for staff involved in R&E and in 2015, 22 program evaluations were supported. R&E capacity building also includes a series of guides to assist policy makers, practitioners and researchers to commission, undertake and use policy-relevant R&E. Staff training includes workshops on critical appraisal, program logic and evaluation methods. From January 2013 to June 2014 divisional staff published 84

  5. The Practice of Evaluation Research and the Use of Evaluation Results.

    ERIC Educational Resources Information Center

    Van den Berg, G.; Hoeben, W. Th. J. G.

    1984-01-01

    Lack of use of educational evaluation results in the Netherlands was investigated by analyzing 14 curriculum evaluation studies. Results indicated that rational decision making with a technical (empirical) evaluation approach makes utilization of results most likely. Incremental decision making and a conformative approach make utilization least…

  6. Delegations of authority and organization; Center for Biologics Evaluation and Research, Center for Devices and Radiological Health, and Center for Drug Evaluation and Research--FDA. Final rule.

    PubMed

    1991-11-21

    The Food and Drug Administration (FDA) is amending the regulations for delegations of authority relating to premarket approval of products that are or contain a biologic, a device, or a drug. The amendment grants directors, deputy directors, and certain other supervisory personnel in the Center for Biologics Evaluation and Research (CBER), the Center for Devices and Radiological Health (CDRH), and the Center for Drug Evaluation and Research (CDER) reciprocal premarket approval authority to approve such products.

  7. Useful, Used, and Peer Approved: The Importance of Rigor and Accessibility in Postsecondary Research and Evaluation. WISCAPE Viewpoints

    ERIC Educational Resources Information Center

    Vaade, Elizabeth; McCready, Bo

    2012-01-01

    Traditionally, researchers, policymakers, and practitioners have perceived a tension between rigor and accessibility in quantitative research and evaluation in postsecondary education. However, this study indicates that both producers and consumers of these studies value high-quality work and clear findings that can reach multiple audiences. The…

  8. Proceedings IUFRO: Evaluation and planning of forestry research. International Union of Forestry Research Organizations (S6.O6-S6.O6.Ol)

    Treesearch

    Denver P. Burns

    1986-01-01

    Contains 23 papers presented in six technical sessions on forestry research management planning and evaluation. Primary topics focus on nontraditional views and sources of information and emerging technologies affecting forestry research; methods for identifying research needs and strategies required for implementation; and research evaluation at the individual,...

  9. A Study of Developing an Attitude Scale towards Authentic Learning Environments and Evaluation

    ERIC Educational Resources Information Center

    Çetinkaya, Murat

    2018-01-01

    The aim of the research is to develop a valid and reliable attitude scale which identifies science teacher candidates' attitudes towards authentic learning environments and evaluation. The study has been designed around the validity and reliability of the scale developed to evaluate authentic learning environments. The research group is…

  10. Evaluation of the IEP Costing Procedures: A Pilot Study by Six Major Research Universities.

    ERIC Educational Resources Information Center

    Topping, Jim

    The Information Exchange Procedures (IEP) cost study project of the National Center for Higher Education Management Systems is described and its applicability to six major research universities (MRU) is assessed in this pilot study. The IEP enables peer institutions to compare information about their resources, activities, and educational…

  11. [Research about re-evaluation of screening of traditonal Chinese medicine symptoms item of post-marketing medicine Xuezhikang].

    PubMed

    He, Wei; Xie, Yanming; Wang, Yongyan

    2011-10-01

    The purpose of post-marketing re-evaluation of Chinese medicines is to identify their clinical indications, and the scientific and rational design of traditional Chinese medicine (TCM) symptom items is important to the results of symptom re-evaluation. Taking the screening of TCM symptom items for the post-marketing re-evaluation of Xuezhikang as an example, this study drew on dyslipidemia clinical research guidelines, academic dissertations, the Xuezhikang directions for use, and clinical experts' practical experience, standardized the symptom names, and screened 41 common dyslipidemia symptoms. The paper also discusses the principles followed and the points requiring attention when screening symptom items, so as to provide a research approach for constructing patient-reported outcome (PRO) instruments for post-marketing medicine re-evaluation.

  12. The contribution of case study design to supporting research on Clubhouse psychosocial rehabilitation.

    PubMed

    Raeburn, Toby; Schmied, Virginia; Hungerford, Catherine; Cleary, Michelle

    2015-10-01

    Psychosocial Clubhouses provide recovery-focused psychosocial rehabilitation to people with serious mental illness at over 300 sites in more than 30 countries worldwide. To deliver the services involved, Clubhouses employ a complex mix of theory, programs and relationships, with this complexity presenting a number of challenges to those undertaking Clubhouse research. This paper provides an overview of the usefulness of case study designs for Clubhouse researchers; and suggests ways in which the evaluation of Clubhouse models can be facilitated. The paper begins by providing a brief explanation of the Clubhouse model of psychosocial rehabilitation, and the need for ongoing evaluation of the services delivered. This explanation is followed by an introduction to case study design, with consideration given to the way in which case studies have been used in past Clubhouse research. It is posited that case study design provides a methodological framework that supports the analysis of either quantitative, qualitative or a mixture of both types of data to investigate complex phenomena in their everyday contexts, and thereby support the development of theory. As such, case study approaches to research are well suited to the Clubhouse environment. The paper concludes with recommendations for future Clubhouse researchers who choose to employ a case study design. While the quality of case study research that explores Clubhouses has been variable in the past, if applied in a diligent manner, case study design has a valuable contribution to make in future Clubhouse research.

  13. Agreement studies in radiology research.

    PubMed

    Farzin, B; Gentric, J-C; Pham, M; Tremblay-Paquet, S; Brosseau, L; Roy, C; Jamali, S; Chagnon, M; Darsaut, T E; Guilbert, F; Naggara, O; Raymond, J

    2017-03-01

    The goal of this study was to estimate the frequency and the quality of agreement studies published in diagnostic imaging journals. All studies published between January 2011 and December 2012 in four radiology journals were reviewed. Four trained readers evaluated agreement studies using a 24-item form that included the 15 items of the Guidelines for Reporting Reliability and Agreement Studies criteria. Of 2229 source titles, 280 studies (13%) reported agreement. The mean number of patients per study was 81±99 (SD) (range, 0-180). Justification for sample size was found in 9 studies (3%). The number of raters was ≤2 in 226 studies (81%). No intra-observer study was performed in 212 (76%) articles. Confidence intervals and interpretation of statistical estimates were provided in 98 (35%) and 147 (53%) of the studies, respectively. In 168 studies (60%), the agreement study was not mentioned in the discussion section. In 8 studies (3%), reporting of the agreement study was judged to be adequate. Twenty studies (7%) were dedicated to agreement. Agreement studies are preliminary and not adequately reported. Studies dedicated to agreement are infrequent. They are research opportunities that should be promoted. Copyright © 2016 Éditions françaises de radiologie. Published by Elsevier Masson SAS. All rights reserved.
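
    As an illustration of the agreement statistics such studies report, the following Python sketch computes Cohen's kappa for two readers together with a percentile-bootstrap confidence interval. The ratings and helper functions are hypothetical, not drawn from the reviewed papers; a real agreement study would also justify its sample size and include intra-observer data.

      import numpy as np

      def cohen_kappa(r1, r2):
          # Cohen's kappa for two raters' categorical ratings.
          r1, r2 = np.asarray(r1), np.asarray(r2)
          cats = np.union1d(r1, r2)
          po = np.mean(r1 == r2)  # observed proportion of agreement
          pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance agreement
          if pe == 1.0:
              return 1.0  # degenerate case: only one category observed by both raters
          return (po - pe) / (1 - pe)

      def bootstrap_ci(r1, r2, n_boot=2000, alpha=0.05, seed=0):
          # Percentile-bootstrap confidence interval for kappa.
          rng = np.random.default_rng(seed)
          r1, r2 = np.asarray(r1), np.asarray(r2)
          n = len(r1)
          reps = [cohen_kappa(r1[idx], r2[idx])
                  for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
          return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

      # Hypothetical binary ratings (e.g. lesion present/absent) from two readers
      reader1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1]
      reader2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1]
      print(cohen_kappa(reader1, reader2), bootstrap_ci(reader1, reader2))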

  14. An evaluation of a data linkage training workshop for research ethics committees.

    PubMed

    Tan, Kate M; Flack, Felicity S; Bear, Natasha L; Allen, Judy A

    2015-03-04

    In Australia, research projects proposing the use of linked data require approval by a Human Research Ethics Committee (HREC). A sound evaluation of the ethical issues involved requires understanding of the basic mechanics of data linkage, the associated benefits and risks, and the legal context in which it occurs. The rapidly increasing number of research projects utilising linked data in Australia has led to an urgent need for enhanced capacity of HRECs to review research applications involving this emerging research methodology. The training described in this article was designed to respond to an identified need among the data linkage units in the Australian Population Health Research Network (PHRN) and HREC members in Australia. Five one-day face to face workshops were delivered in the study period to a total of 98 participants. Participants in the workshops represented all six categories of HREC membership composition listed in the National Health and Medical Research Council's (NHMRC) National Statement on Ethical Conduct in Human Research. Participants were assessed at three time points, prior to the training (T1), immediately after the training (T2) and 8 to 17 months after the training (T3). Ninety participants completed the pre and post questionnaires; 58 of them completed the deferred questionnaire. Participants reported significant improvements in levels of knowledge, understanding and skills in each of the eight areas evaluated. The training was beneficial for those with prior experience in the area of ethics and data linkage as well as those with no prior exposure. Our preliminary work in this area demonstrates that the provision of intensive face to face ethics training in data linkage is feasible and has a significant impact on participants' confidence in reviewing HREC applications.

  15. Quantity and Quality of Economic Evaluations in U.S. Nursing Research, 1997-2015: A Systematic Review.

    PubMed

    Cook, Wendy A; Morrison, Megan L; Eaton, Linda H; Theodore, Brian R; Doorenbos, Ardith Z

    The United States has a complex healthcare system that is undergoing substantial reformations. There is a need for high-quality, economic evaluations of nursing practice. An updated review of completed economic evaluations relevant to the field of nursing within the U.S. healthcare system is timely and needed. The purpose of this study was to evaluate and describe the quantity and quality of economic evaluations in nursing-relevant research performed in the United States between 1997 and 2015. Four databases were searched. Titles, abstracts, and full-text content were reviewed to identify studies that analyzed both costs and outcomes, relevant to nursing, performed in the United States, and used the quality-adjusted life year to measure effectiveness. For included studies, data were extracted from full-text articles using criteria from U.S. Public Health Service's Panel on Cost-Effectiveness in Health and Medicine. Twenty-eight studies met the inclusion criteria. Most (n = 25, 89%) were published in the last decade of the analysis, from 2006 to 2015. Assessment of quality, based on selected items from the panel guidelines, found that the evaluations did not consistently use the recommended societal perspective, use multiple resource utilization categories, use constant dollars, discount future costs and outcomes, use a lifetime horizon, or include an indication of uncertainty in results. The only resource utilization category consistently included across studies was healthcare resources. Only 28 nursing-related studies meeting the inclusion criteria were identified as meeting robust health economic evaluation methodological criteria, and most did not include all important guideline items. Despite increases in absolute numbers of published studies over the past decade, economic evaluation has been underutilized in U.S. nursing-relevant research in the past two decades.
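
    The guideline item on discounting mentioned above refers to converting future cost and QALY streams to present values before comparing them. A minimal Python sketch under assumed values (the 3% annual rate and the yearly streams are illustrative, not figures from the review):

      def present_value(stream, rate=0.03):
          # Discount a yearly stream (index 0 = current year) to present value.
          return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

      # Hypothetical yearly costs (USD) and QALYs for one intervention arm
      costs_per_year = [12000, 4000, 4000, 4000, 4000]
      qalys_per_year = [0.80, 0.82, 0.82, 0.81, 0.80]

      pv_costs = present_value(costs_per_year)   # discounted at 3% per year
      pv_qalys = present_value(qalys_per_year)
      print(f"Discounted costs: ${pv_costs:,.0f}; discounted QALYs: {pv_qalys:.2f}")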

  16. Research and Evaluation Agenda 1993-94 for AISD 1993-94.

    ERIC Educational Resources Information Center

    Austin Independent School District, TX. Office of Research and Evaluation.

    The research and evaluation agenda for the Austin Independent School District (AISD) (Texas) is determined for each school year, subject to current needs and requests. The evaluations and other major projects for 1993-94 will focus on three major areas. First is providing school support. Testing programs mandated by state law and district policy…

  17. Allied health research positions: a qualitative evaluation of their impact.

    PubMed

    Wenke, Rachel J; Ward, Elizabeth C; Hickman, Ingrid; Hulcombe, Julie; Phillips, Rachel; Mickan, Sharon

    2017-02-06

    Research positions embedded within healthcare settings have been identified as an enabler to allied health professional (AHP) research capacity; however, there is currently limited research formally evaluating their impact. In 2008, a Health Practitioner industrial agreement funded a research capacity building initiative within Queensland Health, Australia, which included 15 new allied health research positions. The present project used a qualitative and realist approach to explore the impact of these research positions, as well as the mechanisms which facilitated or hindered their success within their respective organisations. Forty-four AHP employees from six governmental health services in Queensland, Australia, participated in the study. Individual interviews were undertaken, with individuals in research positions (n = 8) and their reporting line managers (n = 8). Four stakeholder focus groups were also conducted with clinicians, team leaders and professional heads who had engaged with the research positions. Nine key outcomes of the research positions were identified across individual, team/service and organisational/community levels. These outcomes included clinician skill development, increased research activity, clinical and service changes, increased research outputs and collaborations, enhanced research and workplace culture, improved profile of allied health, development of research infrastructure, and professional development of individuals in the research positions. Different mechanisms that influenced these outcomes were identified. These mechanisms were grouped by those related to the (1) research position itself, (2) organisational factors and (3) implementation factors. The present findings highlight the potential value of the research positions for individuals, teams and clinical services across different governmental healthcare services, and demonstrate the impact of the roles on building the internal and external profile of allied health

  18. 40 CFR 26.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Early termination of research support: Evaluation of applications and proposals. 26.123 Section 26.123 Protection of Environment ENVIRONMENTAL... Research Conducted or Supported by EPA § 26.123 Early termination of research support: Evaluation of...

  19. Global research priorities for interpersonal violence prevention: a modified Delphi study.

    PubMed

    Mikton, Christopher R; Tanaka, Masako; Tomlinson, Mark; Streiner, David L; Tonmyr, Lil; Lee, Bandy X; Fisher, Jane; Hegadoren, Kathy; Pim, Joam Evans; Wang, Shr-Jie Sharlenna; MacMillan, Harriet L

    2017-01-01

    The objective of this study was to establish global research priorities for interpersonal violence prevention using a systematic approach. Research priorities were identified in a three-round process involving two surveys. In round 1, 95 global experts in violence prevention proposed research questions to be ranked in round 2. Questions were collated and organized according to the four-step public health approach to violence prevention. In round 2, 280 international experts ranked the importance of research in the four steps, and the various substeps, of the public health approach. In round 3, 131 international experts ranked the importance of detailed research questions on the public health step awarded the highest priority in round 2. In round 2, "developing, implementing and evaluating interventions" was the step of the public health approach awarded the highest priority for four of the six types of violence considered (i.e. child maltreatment, intimate partner violence, armed violence and sexual violence) but not for youth violence or elder abuse. In contrast, "scaling up interventions and evaluating their cost-effectiveness" was ranked lowest for all types of violence. In round 3, research into "developing, implementing and evaluating interventions" that addressed parenting or laws to regulate the use of firearms was awarded the highest priority. The key limitations of the study were response and attrition rates among survey respondents. However, these rates were in line with similar priority-setting exercises. These findings suggest it is premature to scale up violence prevention interventions. Developing and evaluating smaller-scale interventions should be the funding priority.

  20. Development and Evaluation of Reference Standards for Image-based Telemedicine Diagnosis and Clinical Research Studies in Ophthalmology

    PubMed Central

    Ryan, Michael C.; Ostmo, Susan; Jonas, Karyn; Berrocal, Audina; Drenser, Kimberly; Horowitz, Jason; Lee, Thomas C.; Simmons, Charles; Martinez-Castellanos, Maria-Ana; Chan, R.V. Paul; Chiang, Michael F.

    2014-01-01

    Information systems managing image-based data for telemedicine or clinical research applications require a reference standard representing the correct diagnosis. Accurate reference standards are difficult to establish because of imperfect agreement among physicians, and discrepancies between clinical vs. image-based diagnosis. This study is designed to describe the development and evaluation of reference standards for image-based diagnosis, which combine diagnostic impressions of multiple image readers with the actual clinical diagnoses. We show that agreement between image reading and clinical examinations was imperfect (689 [32%] discrepancies in 2148 image readings), as was inter-reader agreement (kappa 0.490-0.652). This was improved by establishing an image-based reference standard defined as the majority diagnosis given by three readers (13% discrepancies with image readers). It was further improved by establishing an overall reference standard that incorporated the clinical diagnosis (10% discrepancies with image readers). These principles of establishing reference standards may be applied to improve robustness of real-world systems supporting image-based diagnosis. PMID:25954463
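
    A minimal Python sketch of the majority-vote idea described above; the reader labels are hypothetical, and the rule for folding in the clinical diagnosis is an assumption made for illustration rather than the paper's exact procedure:

      from collections import Counter

      def majority_diagnosis(diagnoses):
          # Strict-majority label among the readers, or None when no majority exists.
          label, n = Counter(diagnoses).most_common(1)[0]
          return label if n > len(diagnoses) / 2 else None

      def overall_reference(reader_diagnoses, clinical_diagnosis):
          # One possible combination rule (an assumption, not the paper's exact rule):
          # use the readers' majority when it exists, otherwise fall back to the
          # clinical examination diagnosis.
          return majority_diagnosis(reader_diagnoses) or clinical_diagnosis

      print(majority_diagnosis(["plus", "no plus", "plus"]))              # -> plus
      print(overall_reference(["plus", "no plus", "pre-plus"], "plus"))   # no majority -> plus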

  1. Evaluating radiographers' diagnostic accuracy in screen-reading mammograms: what constitutes a quality study?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debono, Josephine C, E-mail: josephine.debono@bci.org.au; Poulos, Ann E; Westmead Breast Cancer Institute, Westmead, New South Wales

    The aim of this study was to first evaluate the quality of studies investigating the diagnostic accuracy of radiographers as mammogram screen-readers and then to develop an adapted tool for determining the quality of screen-reading studies. A literature search was used to identify relevant studies and a quality evaluation tool constructed by combining the criteria for quality of Whiting, Rutjes, Dinnes et al. and Brealey and Westwood. This constructed tool was then applied to the studies and subsequently adapted specifically for use in evaluating quality in studies investigating diagnostic accuracy of screen-readers. Eleven studies were identified and the constructed tool applied to evaluate quality. This evaluation resulted in the identification of quality issues with the studies such as potential for bias, applicability of results, study conduct, reporting of the study and observer characteristics. An assessment of the applicability and relevance of the tool for this area of research resulted in adaptations to the criteria and the development of a tool specifically for evaluating diagnostic accuracy in screen-reading. This tool, with further refinement and rigorous validation, can make a significant contribution to promoting well-designed studies in this important area of research and practice.

  2. Nursing research. Components of a clinical research study.

    PubMed

    Bargagliotti, L A

    1988-09-01

    Nursing research is the systematic collection and analysis of data about clinically important phenomena. While there are norms for conducting research and rules for using certain research procedures, the reader must always filter the research report against his or her nursing knowledge. The most common questions a reader should ask are "Does it make sense? Can I think of any other reasonable explanation for the findings? Do the findings fit what I have observed?" If the answers are reasonable, research findings from carefully conducted studies can provide a basis for making nursing decisions. One of the earliest accounts of nursing research, which indicates the power of making systematic observations, was Florence Nightingale's study. It compared deaths among soldiers in the Crimean War with deaths of soldiers in the barracks of London. Her research demonstrated that soldiers in the barracks had a much higher death rate than did the soldiers at war. On the basis of the study, sanitary conditions in the barracks were changed substantially.

  3. Sustaining patient and public involvement in research: A case study of a research centre

    PubMed Central

    Jinks, Clare; Carter, Pam; Rhodes, Carol; Beech, Roger; Dziedzic, Krysia; Hughes, Rhian; Blackburn, Steven; Ong, Bie Nio

    2013-01-01

    The literature on patient and public involvement (PPI) in research covers a wide range of topics. However, one area of investigation that appears under developed is the sustainability and impact of PPI beyond involvement in time-limited research projects. This paper presents a case study of PPI development in one primary care research centre in England, and its approach to making this sustainable using documentary sources and material from a formal evaluation. We provide narrative accounts of the set-up, operation and main processes of PPI, and its perceived impact. PPI requires a long-term perspective with participation and trust growing over time, and both users and researchers learning what approaches work best. PPI is a complex interplay of clarity of purpose, defined roles and relationships, organised support (paid PPI staff) and a well-funded infrastructure. ‘Soft systems’ are equally important such as flexible and informal approaches to meetings, adapting timetables and environments to meet the needs of lay members and to create spaces for relationships to develop between researchers and lay members that are based on mutual trust and respect. This case study highlights that the right combination of ethos, flexible working practices, leadership, and secure funding goes a long way to embedding PPI beyond ad hoc involvement. This allows PPI in research to be integrated in the infrastructure and sustainable. PMID:26705412

  4. 34 CFR 97.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 1 2014-07-01 2014-07-01 false Early termination of research support: Evaluation of... Protection of Human Research Subjects) § 97.123 Early termination of research support: Evaluation of..., when the department or agency head finds an institution has materially failed to comply with the terms...

  5. 34 CFR 97.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 1 2013-07-01 2013-07-01 false Early termination of research support: Evaluation of... Protection of Human Research Subjects) § 97.123 Early termination of research support: Evaluation of..., when the department or agency head finds an institution has materially failed to comply with the terms...

  6. 34 CFR 97.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 1 2012-07-01 2012-07-01 false Early termination of research support: Evaluation of... Protection of Human Research Subjects) § 97.123 Early termination of research support: Evaluation of..., when the department or agency head finds an institution has materially failed to comply with the terms...

  7. 45 CFR 46.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Early termination of research support: Evaluation of applications and proposals. 46.123 Section 46.123 Public Welfare DEPARTMENT OF HEALTH AND HUMAN... Research Subjects § 46.123 Early termination of research support: Evaluation of applications and proposals...

  8. 34 CFR 97.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 1 2011-07-01 2011-07-01 false Early termination of research support: Evaluation of applications and proposals. 97.123 Section 97.123 Education Office of the Secretary, Department of Education... Protection of Human Research Subjects) § 97.123 Early termination of research support: Evaluation of...

  9. A New Approach to Evaluating the Well-Being of PhD Research Students

    ERIC Educational Resources Information Center

    Juniper, Bridget; Walsh, Elaine; Richardson, Alan; Morley, Bernard

    2012-01-01

    This study describes the development of an assessment to evaluate the well-being of PhD researchers using a clinically approved methodology that places the perceptions and experiences of the subject population at the heart of its construction. It identifies and assesses the range and relative importance of seven distinct dimensions which are shown…

  10. The Design and Evaluation of a Front-End User Interface for Energy Researchers.

    ERIC Educational Resources Information Center

    Borgman, Christine L.; And Others

    1989-01-01

    Reports on the Online Access to Knowledge (OAK) Project, which developed software to support end user access to a Department of Energy database based on the skill levels and needs of energy researchers. The discussion covers issues in development, evaluation, and the study of user behavior in designing an interface tailored to a special…

  11. Improving methods to evaluate the impacts of plant invasions: lessons from 40 years of research

    PubMed Central

    Stricker, Kerry Bohl; Hagan, Donald; Flory, S. Luke

    2015-01-01

    Methods used to evaluate the ecological impacts of biological invasions vary widely from broad-scale observational studies to removal experiments in invaded communities and experimental additions in common gardens and greenhouses. Different methods provide information at diverse spatial and temporal scales with varying levels of reliability. Thus, here we provide a synthetic and critical review of the methods used to evaluate the impacts of plant invasions and provide recommendations for future research. We review the types of methods available and report patterns in methods used, including the duration and spatial scale of studies and plant functional groups examined, from 410 peer-reviewed papers published between 1971 and 2011. We found that there has been a marked increase in papers published on plant invasion impacts since 2003 and that more than half of all studies employed observational methods while <5 % included predictive modelling. Most of the studies were temporally and spatially restricted with 51 % of studies lasting <1 year and almost half of all studies conducted in plots or mesocosms <1 m2. There was also a bias in life form studied: more than 60 % of all studies evaluated impacts of invasive forbs and graminoids while <16 % focused on invasive trees. To more effectively quantify invasion impacts, we argue that longer-term experimental research and more studies that use predictive modelling and evaluate impacts of invasions on ecosystem processes and fauna are needed. Combining broad-scale observational studies with experiments and predictive modelling may provide the most insight into invasion impacts for policy makers and land managers seeking to reduce the effects of plant invasions. PMID:25829379

  12. An evaluation of the interaction of place and community-based participatory research as a research methodology in the implementation of a sexually transmitted infection intervention for Greenlandic youth.

    PubMed

    Rink, Elizabeth

    2016-01-01

    Newly emerging research suggests that the actual physical location of a study and the geographic context in which a study is implemented influences the types of research methods most appropriate to use in a study as well as the study's research outcomes. This article presents a reflection on the extent to which place influenced the use of community-based participatory research (CBPR) as a research methodology in the implementation of an intervention to address sexually transmitted infections in Greenland. An evaluation of the interaction between place and CBPR suggests that the physicality of place influenced the intervention's successes and challenges. Future research that uses CBPR as a research methodology in sexual and reproductive health research in the Arctic warrants situating the research design, implementation and outcomes within the context of place.

  13. Developing nursing and midwifery research priorities: a Health Service Executive (HSE) North West study.

    PubMed

    Parlour, Randal; Slater, Paul

    2014-06-01

    The primary purpose of this study was to identify research priorities for nurses and midwives across the Health Service Executive (HSE) North West region. The rationale for the study was underlined during meetings of HSE North West Directors of Nursing and Midwifery in January 2011. It was agreed that a more strategic approach to generating synergy among nursing and midwifery research, evaluation, and evidence-based practice should be developed through the Nursing and Midwifery Planning and Development Unit. The research design was founded upon collaborative processes for consensus building that included the Delphi technique and nominal group technique. The study sample included a panel of experts. Data were collected between March 2011 and December 2011. Findings from this study validate the efficacy of the research methodology in enabling the effective identification of priority areas for research. These include: (a) an evaluation of the impact of postgraduate nursing and midwifery education programs focusing upon patient, professional, and organizational outcomes; (b) development and evaluation of an effective culture of nurse- and midwife-led audit across all services within a Regional Health Trust in Ireland; (c) an examination of the efficacy of approaches to clinical supervision within the context of the Irish health system; (d) an evaluation of the impact of an Advanced Nurse Practitioner role in supporting the effective management of long-term conditions within the context of Regional Health Trust primary care settings in Ireland; and (e) Supporting and developing an ethical framework for nursing and midwifery research within a Regional Health Trust in Ireland. It is anticipated that future work, outlined within this paper, will lead to important improvements in patient care and outcomes. Furthermore, this study provides evidence that a strong nursing and midwifery research agenda can be established upon genuine collaborations and partnerships across

  14. Development of a School Nursing Research Agenda in Florida: A Delphi Study

    ERIC Educational Resources Information Center

    Gordon, Shirley C.; Barry, Charlotte D.

    2006-01-01

    Research is important to the image, visibility, and viability of school nursing. Each state school nursing association should evaluate member commitment to school nursing research based on their unique set of financial, educational, and organizational resources. A 3-round Delphi study was conducted in which Florida school nurses identified…

  15. Development of a Technology Transfer Score for Evaluating Research Proposals: Case Study of Demand Response Technologies in the Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Estep, Judith

    researcher and recipient relationship, specific to technology transfer. In this research, the evaluation criteria of several research organizations were assessed to understand the extent to which the success attributes that were identified in the literature were considered when reviewing research proposals. While some of the organizations included a few of the success attributes, none of the organizations considered all of the attributes. In addition, none of the organizations quantified the value of the success attributes. The effectiveness of the model relies extensively on expert judgments to complete the model validation and quantification. Subject matter experts ranging from senior executives with extensive experience in technology transfer to principal research investigators from national labs, universities, utilities, and non-profit research organizations were used to ensure a comprehensive and cross-functional validation and quantification of the decision model. The quantified model was validated using a case study involving demand response (DR) technology proposals in the Pacific Northwest. The DR technologies were selected based on their potential to solve some of the region's most prevalent issues. In addition, several sensitivity scenarios were developed to test the model's response to extreme cases, the impact of perturbations in expert responses, and whether it can be applied to technologies other than demand response. In other words, is the model technology agnostic? In addition, the flexibility of the model to be used as a tool for communicating which success attributes in a research proposal are deficient and need strengthening, and how improvements would increase the overall technology transfer score, was assessed. The low-scoring success attributes in the case study proposals (e.g. project meetings) were clearly identified as the areas to be improved for increasing the technology transfer score. As a communication tool, the model could help a research
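
    To illustrate how a quantified decision model of this kind can also act as a communication tool, here is a minimal Python sketch of a weighted technology transfer score; the attributes, weights and ratings are hypothetical, not values from the study:

      # Expert-derived importance weights for success attributes (hypothetical; sum to 1.0)
      weights = {
          "project meetings": 0.15,
          "researcher-recipient relationship": 0.30,
          "dissemination plan": 0.25,
          "demonstration or pilot site": 0.30,
      }

      # How well one proposal addresses each attribute, rated 0-10 (hypothetical)
      ratings = {
          "project meetings": 3,
          "researcher-recipient relationship": 8,
          "dissemination plan": 6,
          "demonstration or pilot site": 7,
      }

      score = sum(weights[a] * ratings[a] for a in weights)
      weakest = min(weights, key=lambda a: weights[a] * ratings[a])
      print(f"Technology transfer score: {score:.2f} out of 10")
      print(f"Attribute to strengthen first: {weakest}")

    Reporting the weakest weighted attribute alongside the total score mirrors the communication use described above: it points a proposer to the area whose improvement would raise the overall score most directly.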

  16. Intervention Research and Program Evaluation in the School Setting: Issues and Alternative Research Designs

    ERIC Educational Resources Information Center

    de Anda, Diane

    2007-01-01

    This article discusses the difficulties in conducting intervention research or evaluating intervention programs in a school setting. In particular, the problems associated with randomization and obtaining control groups are examined. The use of quasi-experimental designs, specifically a paired comparison design using the individual as his or her…

  17. Pasadena City College SIGI Project Research Design. Pilot Study.

    ERIC Educational Resources Information Center

    Risser, John J.; Tulley, John E.

    A pilot study evaluation of SIGI (System of Interactive Guidance and Information) at Pasadena City College in 1974-75 tested the effectiveness of an experimental research design for an expanded field test of the system the following year. (SIGI is a computer based career guidance program designed by Educational Testing Service to assist community…

  18. Evaluative criteria for qualitative research in health care: controversies and recommendations.

    PubMed

    Cohen, Deborah J; Crabtree, Benjamin F

    2008-01-01

    We wanted to review and synthesize published criteria for good qualitative research and develop a cogent set of evaluative criteria. We identified published journal articles discussing criteria for rigorous research using standard search strategies then examined reference sections of relevant journal articles to identify books and book chapters on this topic. A cross-publication content analysis allowed us to identify criteria and understand the beliefs that shape them. Seven criteria for good qualitative research emerged: (1) carrying out ethical research; (2) importance of the research; (3) clarity and coherence of the research report; (4) use of appropriate and rigorous methods; (5) importance of reflexivity or attending to researcher bias; (6) importance of establishing validity or credibility; and (7) importance of verification or reliability. General agreement was observed across publications on the first 4 quality dimensions. On the last 3, important divergent perspectives were observed in how these criteria should be applied to qualitative research, with differences based on the paradigm embraced by the authors. Qualitative research is not a unified field. Most manuscript and grant reviewers are not qualitative experts and are likely to embrace a generic set of criteria rather than those relevant to the particular qualitative approach proposed or reported. Reviewers and researchers need to be aware of this tendency and educate health care researchers about the criteria appropriate for evaluating qualitative research from within the theoretical and methodological framework from which it emerges.

  19. Evaluative Criteria for Qualitative Research in Health Care: Controversies and Recommendations

    PubMed Central

    Cohen, Deborah J.; Crabtree, Benjamin F.

    2008-01-01

    PURPOSE We wanted to review and synthesize published criteria for good qualitative research and develop a cogent set of evaluative criteria. METHODS We identified published journal articles discussing criteria for rigorous research using standard search strategies then examined reference sections of relevant journal articles to identify books and book chapters on this topic. A cross-publication content analysis allowed us to identify criteria and understand the beliefs that shape them. RESULTS Seven criteria for good qualitative research emerged: (1) carrying out ethical research; (2) importance of the research; (3) clarity and coherence of the research report; (4) use of appropriate and rigorous methods; (5) importance of reflexivity or attending to researcher bias; (6) importance of establishing validity or credibility; and (7) importance of verification or reliability. General agreement was observed across publications on the first 4 quality dimensions. On the last 3, important divergent perspectives were observed in how these criteria should be applied to qualitative research, with differences based on the paradigm embraced by the authors. CONCLUSION Qualitative research is not a unified field. Most manuscript and grant reviewers are not qualitative experts and are likely to embrace a generic set of criteria rather than those relevant to the particular qualitative approach proposed or reported. Reviewers and researchers need to be aware of this tendency and educate health care researchers about the criteria appropriate for evaluating qualitative research from within the theoretical and methodological framework from which it emerges. PMID:18626033

  20. Case Studies of Five Teacher Supervision/Evaluation Systems.

    ERIC Educational Resources Information Center

    Patrick, Edward M.; Dawson, Judith A.

    In the 1984-85 school year, the Pennsylvania Department of Education (PDE) began to actively encourage Pennsylvania school districts to reform their teacher supervision/evaluation (TS/E) procedures. To obtain data necessary for developing TS/E models, the PDE commissioned Research for Better Schools (RBS) to conduct a study of five school…

  1. Involving the public in epidemiological public health research: a qualitative study of public and stakeholder involvement in evaluation of a population-wide natural policy experiment.

    PubMed

    Anderson de Cuevas, Rachel; Nylén, Lotta; Burström, Bo; Whitehead, Margaret

    2018-04-20

    Background: Public involvement in research is considered good practice by European funders; however, evidence of its research impact is sparse, particularly in relation to large-scale epidemiological research. Objective: To explore what difference public and stakeholder involvement made to the interpretation of findings from an evaluation of a natural policy experiment to influence the wider social determinants of health: 'Flexicurity'. Setting: Stockholm County, Sweden. Participants: Members of the public from different occupational groups represented by blue-collar and white-collar trade union representatives. Also, members of three stakeholder groups: the Swedish national employment agency; an employers' association and politicians sitting on a national labour market committee. Total: 17 participants. Design: Qualitative study of process and outcomes of public and stakeholder participation in four focused workshops on the interpretation of initial findings from the flexicurity evaluation. Outcome: New insights from participants benefiting the interpretation of our research findings or conceptualisation of future research. Results: Participants sensed more drastic and nuanced change in the Swedish welfare system over recent decades than was evident from our literature reviews and policy analysis. They also elaborated hidden developments in the Swedish labour market that were increasingly leading to 'insiders' and 'outsiders', with differing experiences and consequences for financial and job security. Their explanation of the differential effects of the various collective agreements for different occupational groups was new and raised further potential research questions. Their first-hand experience provided new insights into how changes to the social protection system were contributing to the increasing trends in poverty among unemployed people with limiting long-standing illness. The politicians provided further reasoning behind some of the policy changes and their intended and unintended consequences. These insights fed into

  2. Evaluation of a Multi-Case Participatory Action Research Project: The Case of SOLINSA

    ERIC Educational Resources Information Center

    Home, Robert; Rump, Niels

    2015-01-01

    Purpose: Scholars agree that evaluation of participatory action research is inherently valuable; however there have been few attempts at evaluating across methods and across interventions because the perceived success of a method is affected by context, researcher skills and the aims of the participants. This paper describes the systematic…

  3. Case Study Research Methodology in Nursing Research.

    PubMed

    Cope, Diane G

    2015-11-01

    Through data collection methods using a holistic approach that focuses on variables in a natural setting, qualitative research methods seek to understand participants' perceptions and interpretations. Common qualitative research methods include ethnography, phenomenology, grounded theory, and historic research. Another type of methodology that has a similar qualitative approach is case study research, which seeks to understand a phenomenon or case from multiple perspectives within a given real-world context.

  4. Evaluating care from a care ethical perspective: A pilot study.

    PubMed

    Kuis, Esther E; Goossensen, Anne

    2017-08-01

    Care ethical theories provide an excellent opening for evaluation of healthcare practices since searching for (moments of) good care from a moral perspective is central to care ethics. However, a fruitful way to translate care ethical insights into measurable criteria and how to measure these criteria has as yet been unexplored: this study describes one of the first attempts. To investigate whether the emotional touchpoint method is suitable for evaluating care from a care ethical perspective. An adapted version of the emotional touchpoint interview method was used. Touchpoints represent the key moments to the experience of receiving care, where the patient recalls being touched emotionally or cognitively. Participants and research context: Interviews were conducted at three different care settings: a hospital, mental healthcare institution and care facility for older people. A total of 31 participants (29 patients and 2 relatives) took part in the study. Ethical considerations: The research was found not to be subject to the (Dutch) Medical Research Involving Human Subjects Act. A three-step care ethical evaluation model was developed and described using two touchpoints as examples. A focus group meeting showed that the method was considered of great value for partaking institutions in comparison with existing methods. Reflection and discussion: Considering existing methods to evaluate quality of care, the touchpoint method belongs to the category of instruments which evaluate the patient experience. The touchpoint method distinguishes itself because no pre-defined categories are used but the values of patients are followed, which is an essential issue from a care ethical perspective. The method portrays the insider perspective of patients and thereby contributes to humanizing care. The touchpoint method is a valuable instrument for evaluating care; it generates evaluation data about the core care ethical principle of responsiveness.

  5. An Undergraduate Research Experience on Studying Variable Stars

    NASA Astrophysics Data System (ADS)

    Amaral, A.; Percy, J. R.

    2016-06-01

    We describe and evaluate a summer undergraduate research project and experience by one of us (AA), under the supervision of the other (JP). The aim of the project was to sample current approaches to analyzing variable star data, and topics related to the study of Mira variable stars and their astrophysical importance. This project was done through the Summer Undergraduate Research Program (SURP) in astronomy at the University of Toronto. SURP allowed undergraduate students to explore and learn about many topics within astronomy and astrophysics, from instrumentation to cosmology. SURP introduced students to key skills which are essential for students hoping to pursue graduate studies in any scientific field. Variable stars proved to be an excellent topic for a research project. For beginners to independent research, it introduces key concepts in research such as critical thinking and problem solving, while illuminating previously learned topics in stellar physics. The focus of this summer project was to compare observations with structural and evolutionary models, including modelling the random walk behavior exhibited in the (O-C) diagrams of most Mira stars. We found that the random walk could be modelled by using random fluctuations of the period. This explanation agreed well with observations.
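
    To make the random-walk interpretation concrete, the following Python sketch simulates an (O-C) diagram for a star whose period fluctuates randomly from cycle to cycle; the period, fluctuation size and cycle count are assumed values for illustration, not results from the project:

      import numpy as np

      rng = np.random.default_rng(42)
      mean_period = 332.0   # days; a Mira-like nominal period (assumed value)
      sigma = 1.5           # random period fluctuation per cycle, in days (assumed)
      n_cycles = 200

      cycle_periods = mean_period + rng.normal(0.0, sigma, n_cycles)
      observed_maxima = np.cumsum(cycle_periods)                    # observed times of maximum
      calculated_maxima = mean_period * np.arange(1, n_cycles + 1)  # constant-period ephemeris
      o_minus_c = observed_maxima - calculated_maxima               # accumulates as a random walk

      # The spread of O-C grows roughly as sigma * sqrt(cycle number)
      print(o_minus_c[-1], sigma * np.sqrt(n_cycles))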

  6. Key statistical and analytical issues for evaluating treatment effects in periodontal research.

    PubMed

    Tu, Yu-Kang; Gilthorpe, Mark S

    2012-06-01

    Statistics is an indispensable tool for evaluating treatment effects in clinical research. Due to the complexities of periodontal disease progression and data collection, statistical analyses for periodontal research have been a great challenge for both clinicians and statisticians. The aim of this article is to provide an overview of several basic, but important, statistical issues related to the evaluation of treatment effects and to clarify some common statistical misconceptions. Some of these issues are general, concerning many disciplines, and some are unique to periodontal research. We first discuss several statistical concepts that have sometimes been overlooked or misunderstood by periodontal researchers. For instance, decisions about whether to use the t-test or analysis of covariance, or whether to use parametric tests such as the t-test or its non-parametric counterpart, the Mann-Whitney U-test, have perplexed many periodontal researchers. We also describe more advanced methodological issues that have sometimes been overlooked by researchers. For instance, the phenomenon of regression to the mean is a fundamental issue to be considered when evaluating treatment effects, and collinearity amongst covariates is a conundrum that must be resolved when explaining and predicting treatment effects. Quick and easy solutions to these methodological and analytical issues are not always available in the literature, and careful statistical thinking is paramount when conducting useful and meaningful research. © 2012 John Wiley & Sons A/S.
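
    As a concrete illustration of one of the choices discussed above (a t-test on change scores versus analysis of covariance adjusting for baseline), the following Python sketch runs both analyses on simulated data; the variable names, effect size and noise levels are invented for illustration:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from scipy import stats

      rng = np.random.default_rng(1)
      n = 60
      group = np.repeat(["control", "treatment"], n)
      baseline = rng.normal(5.0, 1.0, 2 * n)               # e.g. baseline probing depth (mm)
      effect = np.where(group == "treatment", -0.8, 0.0)   # simulated treatment effect
      followup = 2.0 + 0.6 * baseline + effect + rng.normal(0.0, 0.8, 2 * n)

      df = pd.DataFrame({"group": group, "baseline": baseline,
                         "followup": followup, "change": followup - baseline})

      # (1) independent-samples t-test on change scores
      t_stat, p_change = stats.ttest_ind(df.loc[df.group == "treatment", "change"],
                                         df.loc[df.group == "control", "change"])

      # (2) ANCOVA: follow-up regressed on group, adjusting for baseline
      ancova = smf.ols("followup ~ group + baseline", data=df).fit()

      print(f"t-test on change scores: p = {p_change:.4f}")
      print(f"ANCOVA adjusted effect: {ancova.params['group[T.treatment]']:.2f} mm "
            f"(p = {ancova.pvalues['group[T.treatment]']:.4f})")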

  7. [Taking evaluation of post-marketing as point of cut-in to promote systematic research of traditional Chinese medicine].

    PubMed

    Wang, Yong-yan; Wang, Zhi-fei; Xie, Yan-ming

    2014-09-01

    Research on post-marketing Chinese medicine should be a systematic study extending from application to mechanism. Clinical evaluation is the basis of mechanism study: clues found in clinical evaluation prompt mechanism studies to identify their causes, and the results are then applied back in the clinic, forming a virtuous circle. To achieve this, research cannot be limited to traditional Chinese medicine alone; multi-disciplinary teams should be formed under the direction of grand science thinking, industry-university-research institute collaboration should be put to full use, and, if necessary, new models of the whole-nation system should be explored. An appropriate operating mechanism is essential.

  8. Critical reflections on methodological challenge in arts and dementia evaluation and research.

    PubMed

    Gray, Karen; Evans, Simon Chester; Griffiths, Amanda; Schneider, Justine

    2017-01-01

    Methodological rigour, or its absence, is often a focus of concern for the emerging field of evaluation and research around arts and dementia. However, this paper suggests that critical attention should also be paid to the way in which individual perceptions, hidden assumptions and underlying social and political structures influence methodological work in the field. Such attention will be particularly important for addressing methodological challenges relating to contextual variability, ethics, value judgement and signification identified through a literature review on this topic. Understanding how, where and when evaluators and researchers experience such challenges may help to identify fruitful approaches for future evaluation.

  9. Application of Experimental and Quasi-Experimental Research Designs to Educational Software Evaluation.

    ERIC Educational Resources Information Center

    Muller, Eugene W.

    1985-01-01

    Develops generalizations for empirical evaluation of software based upon suitability of several research designs--pretest-posttest control group, single-group pretest-posttest, nonequivalent control group, time series, and regression discontinuity--to type of software being evaluated, and on circumstances under which evaluation is conducted. (MBR)

  10. Childhood Obesity Research Demonstration project: Cross-site evaluation method

    USDA-ARS?s Scientific Manuscript database

    The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which th...

  11. The International Endometriosis Evaluation Program (IEEP Study) – A Systematic Study for Physicians, Researchers and Patients

    PubMed Central

    Burghaus, S.; Fehm, T.; Fasching, P. A.; Blum, S.; Renner, S. K.; Baier, F.; Brodkorb, T.; Fahlbusch, C.; Findeklee, S.; Häberle, L.; Heusinger, K.; Hildebrandt, T.; Lermann, J.; Strahl, O.; Tchartchian, G.; Bojahr, B.; Porn, A.; Fleisch, M.; Reicke, S.; Füger, T.; Hartung, C.-P.; Hackl, J.; Beckmann, M. W.; Renner, S. P.

    2016-01-01

    Introduction: Endometriosis is a heterogeneous disease characterized by a range of different presentations. It is usually diagnosed when patients present with pain and/or infertility, but it has also been diagnosed in asymptomatic patients. Because of the different diagnostic approaches and diverse therapies, time to diagnosis can vary considerably and the definitive diagnosis may be delayed, with some cases not being diagnosed for several years. Endometriosis patients have many unmet needs. A systematic registration and follow-up of endometriosis patients could be useful to obtain an insight into the course of the disease. The validation of biomarkers could contribute to the development of diagnostic and predictive tests which could help select patients for surgical assessment earlier and offer better predictions about patients who might benefit from medical, surgical or other interventions. The aim is also to obtain a better understanding of the etiology, pathogenesis and progression of the disease. Material and Methods: To do this, an online multicenter documentation system was introduced to facilitate the establishment of a prospective multicenter case-control study, the IEEP (International Endometriosis Evaluation Program) study. We report here on the first 696 patients with endometriosis included in the program between June 2013 and June 2015. Results: A documentation system was created, and the structure and course of the study were mapped out with regard to data collection and the collection of biomaterials. Conclusion: The documentation system permits the history and clinical data of patients with endometriosis to be recorded. The IEEP combines this information with biomaterials and uses it for scientific studies. The recorded data can also be used to evaluate clinical quality control measures such as the certification parameters used by the EEL (European Endometriosis League) to assess certified endometriosis centers. PMID:27582581

  12. Evaluation of the Project Studies in Social Studies Course of Secondary Schools in Turkey

    ERIC Educational Resources Information Center

    Ibret, B. Unal; Recepoglu, Ergun; Karasu, Emine; Recepoglu, Serpil

    2013-01-01

    The aim of this study is to evaluate project studies in the 6th and 7th grade social studies courses of secondary schools according to the opinions of students. This study is descriptive research in the survey model. The sample is 880 students randomly selected from the 6th and 7th grades of 22 secondary schools in the central province of Kastamonu. As a…

  13. Methodological Reflections on the Contribution of Qualitative Research to the Evaluation of Clinical Ethics Support Services.

    PubMed

    Wäscher, Sebastian; Salloch, Sabine; Ritter, Peter; Vollmann, Jochen; Schildmann, Jan

    2017-05-01

    This article describes a process of developing, implementing and evaluating a clinical ethics support service intervention with the goal of building up a context-sensitive structure of minimal clinical ethics in an oncology department without a prior clinical ethics structure. Scholars from different disciplines have called for an improvement in the evaluation of clinical ethics support services (CESS) for different reasons over several decades. However, while a lot has been said about the concepts and methodological challenges of evaluating CESS up to the present time, relatively few empirical studies have been carried out. The aim of this article is twofold. On the one hand, it describes the process of developing, modifying and evaluating a CESS intervention as part of the ETHICO research project, using the approach of qualitative-formative evaluation. On the other hand, it provides a methodological analysis which specifies the contribution of qualitative empirical methods to the (formative) evaluation of CESS. We conclude with a consideration of the strengths and limitations of qualitative evaluation research with regard to the evaluation and development of context-sensitive CESS. We further discuss our own approach in contrast to rather traditional consult or committee models. © 2017 John Wiley & Sons Ltd.

  14. Global research priorities for interpersonal violence prevention: a modified Delphi study

    PubMed Central

    Tanaka, Masako; Tomlinson, Mark; Streiner, David L; Tonmyr, Lil; Lee, Bandy X; Fisher, Jane; Hegadoren, Kathy; Pim, Joam Evans; Wang, Shr-Jie Sharlenna; MacMillan, Harriet L

    2017-01-01

    Abstract Objective To establish global research priorities for interpersonal violence prevention using a systematic approach. Methods Research priorities were identified in a three-round process involving two surveys. In round 1, 95 global experts in violence prevention proposed research questions to be ranked in round 2. Questions were collated and organized according to the four-step public health approach to violence prevention. In round 2, 280 international experts ranked the importance of research in the four steps, and the various substeps, of the public health approach. In round 3, 131 international experts ranked the importance of detailed research questions on the public health step awarded the highest priority in round 2. Findings In round 2, “developing, implementing and evaluating interventions” was the step of the public health approach awarded the highest priority for four of the six types of violence considered (i.e. child maltreatment, intimate partner violence, armed violence and sexual violence) but not for youth violence or elder abuse. In contrast, “scaling up interventions and evaluating their cost–effectiveness” was ranked lowest for all types of violence. In round 3, research into “developing, implementing and evaluating interventions” that addressed parenting or laws to regulate the use of firearms was awarded the highest priority. The key limitations of the study were response and attrition rates among survey respondents. However, these rates were in line with similar priority-setting exercises. Conclusion These findings suggest it is premature to scale up violence prevention interventions. Developing and evaluating smaller-scale interventions should be the funding priority. PMID:28053363
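
    To make the ranking mechanics concrete, a minimal sketch of how round-2 rankings might be aggregated is shown below; the experts, steps and ranks are invented, and the study itself may have used a different aggregation rule.

    ```python
    import pandas as pd

    # Hypothetical sketch of aggregating round-2 rankings (not the study's data):
    # each expert ranks the four public-health steps (1 = highest priority) for a
    # given violence type; the step with the lowest mean rank is the top priority.
    rankings = pd.DataFrame({
        "expert": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
        "step": ["surveillance", "risk factors", "interventions", "scale-up"] * 3,
        "rank": [2, 3, 1, 4,
                 3, 2, 1, 4,
                 1, 3, 2, 4],
    })

    mean_ranks = rankings.groupby("step")["rank"].mean().sort_values()
    print(mean_ranks)
    print("Highest-priority step:", mean_ranks.index[0])
    ```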

  15. Scientific and Ethical Reflections on Academic Corruption in Universities: On the Science Research Evaluation System in China's Universities

    ERIC Educational Resources Information Center

    Xiaochun, Wu; Dan, Jia

    2007-01-01

    A study of the science research activities in China's institutions of higher learning in recent years indicates that there is a major connection between the current instances of corruption in scientific research at colleges and universities and the evaluation systems for scientific research implemented at many of those colleges and universities.…

  16. Attitudes toward evaluation: An exploratory study of students' and stakeholders' social representations.

    PubMed

    Schultes, Marie-Therese; Kollmayer, Marlene; Mejeh, Mathias; Spiel, Christiane

    2018-06-15

    Positive attitudes toward evaluation among stakeholders are an important precondition for successful evaluation processes. However, empirical studies focusing on stakeholders' attitudes toward evaluation are scarce. The present paper explores the approach of assessing social representations as indicators of people's attitudes toward evaluation. In an exploratory study, two groups were surveyed: University students (n = 60) with rather theoretical knowledge of evaluation and stakeholders (n = 61) who had shortly before taken part in participatory evaluation studies. Both groups were asked to name their free associations with the term "evaluation", which were subsequently analyzed lexicographically. The results indicate different social representations of evaluation in the two groups. The student group primarily saw evaluation as an "appraisal", whereas the stakeholders emphasized the "improvement" resulting from evaluation. Implications for further evaluation research and practice are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
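
    A minimal sketch of the kind of lexicographic tallying described above is given below; the associations are invented for illustration, and the study's actual lexicographic analysis was more elaborate.

    ```python
    from collections import Counter

    # Hypothetical sketch (invented associations, not the study's data): tally
    # free associations with the term "evaluation" separately for each group.
    student_associations = [
        "appraisal", "grades", "appraisal", "control", "feedback", "appraisal",
    ]
    stakeholder_associations = [
        "improvement", "feedback", "improvement", "participation", "improvement",
    ]

    student_counts = Counter(student_associations)
    stakeholder_counts = Counter(stakeholder_associations)

    print("Students:    ", student_counts.most_common(3))
    print("Stakeholders:", stakeholder_counts.most_common(3))
    ```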

  17. Heterogeneity of Human Research Ethics Committees and Research Governance Offices across Australia: An observational study.

    PubMed

    De Smit, Elisabeth; Kearns, Lisa S; Clarke, Linda; Dick, Jonathan; Hill, Catherine L; Hewitt, Alex W

    2016-01-01

    Conducting ethically grounded research is a fundamental facet of all investigations. Nevertheless, the administrative burdens of current ethics review are substantial, and calls have been made for a reduction in research waste. To describe the heterogeneity in administration and documentation required by Human Research Ethics Committees (HRECs) and Research Governance Offices (RGOs) across Australia. In establishing a nationwide study to investigate the molecular aetiology of Giant Cell Arteritis (GCA), for which archived pathological specimens from around Australia are being recruited, we identified variation across separate HREC and RGO requirements. Submission paperwork and correspondence from each collaborating site and its representative office for research were reviewed. These data were interrogated to evaluate differences in current guidelines. Twenty-five pathology departments across seven Australian states collaborated in this study. All states, except Victoria, employed a single ethics review model. There was discrepancy amongst HRECs as to which application process applied to our study: seven requested completion of a "National Ethics Application Form" and three a "Low Negligible Risk" form. Noticeable differences in guidelines included whether electronic submission was sufficient. There was variability in the total number of documents submitted (range five to 22) and panel review turnaround time (range nine to 136 days). We demonstrate the challenges and illustrate the heavy workload involved in obtaining widespread ethics and governance approval across Australia. We highlight the need to simplify, homogenise, and nationalise human ethics review for non-clinical trial studies. Reducing unnecessary administration will enable investigators to achieve research aims more efficiently.

  18. Evaluation of predictive capacities of biomarkers based on research synthesis.

    PubMed

    Hattori, Satoshi; Zhou, Xiao-Hua

    2016-11-10

    The objective of diagnostic or prognostic studies is to evaluate and compare the predictive capacities of biomarkers. Suppose we are interested in the evaluation and comparison of predictive capacities of continuous biomarkers for a binary outcome based on research synthesis. In the analysis of each study, subjects are often classified into two groups, high-expression and low-expression, according to a cut-off value, and statistical analysis is based on a 2 × 2 table defined by the response and the high or low expression of the biomarker. Because the cut-off is study specific, it is difficult to interpret a combined summary measure such as an odds ratio based on standard meta-analysis techniques. The summary receiver operating characteristic curve is a useful method for meta-analysis of diagnostic studies in the presence of heterogeneity of cut-off values to examine the discriminative capacities of biomarkers. We develop a method to estimate positive or negative predictive curves, which are alternatives to the receiver operating characteristic curve, based on information reported in the published papers of each study. These predictive curves provide a useful graphical presentation of pairs of positive and negative predictive values and allow us to compare the predictive capacities of biomarkers measured on different scales in the presence of heterogeneity in cut-off values among studies. Copyright © 2016 John Wiley & Sons, Ltd.
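
    The building blocks of such predictive curves can be illustrated with a short sketch: from the 2 × 2 table reported by each study, sensitivity, specificity and the positive and negative predictive values follow from Bayes' theorem. The counts below are hypothetical, and the function is only a simplified stand-in for the authors' estimation method.

    ```python
    # Illustrative sketch (hypothetical counts): reconstruct sensitivity,
    # specificity and the positive/negative predictive values from the 2x2 table
    # reported by each study, the building blocks of a predictive curve.
    def predictive_values(tp, fp, fn, tn, prevalence=None):
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        # Use the study's own prevalence unless an external one is supplied.
        prev = (tp + fn) / (tp + fp + fn + tn) if prevalence is None else prevalence
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return sens, spec, ppv, npv

    # Two hypothetical studies using different cut-offs for the same biomarker.
    tables = {"study A": (30, 20, 10, 90), "study B": (25, 10, 15, 100)}
    for label, table in tables.items():
        sens, spec, ppv, npv = predictive_values(*table)
        print("%s: sens=%.2f spec=%.2f PPV=%.2f NPV=%.2f"
              % (label, sens, spec, ppv, npv))
    ```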

  19. A Cluster-Randomized Trial of Restorative Practices: An Illustration to Spur High-Quality Research and Evaluation

    PubMed Central

    Acosta, Joie D.; Chinman, Matthew; Ebener, Patricia; Phillips, Andrea; Xenakis, Lea; Malone, Patrick S.

    2017-01-01

    Restorative Practices in schools lack rigorous evaluation studies. As an example of rigorous school-based research, this paper describes the first randomized control trial of restorative practices to date, the Study of Restorative Practices. It is a 5-year, cluster-randomized controlled trial (RCT) of the Restorative Practices Intervention (RPI) in 14 middle schools in Maine to assess whether RPI impacts both positive developmental outcomes and problem behaviors and whether the effects persist during the transition from middle to high school. The two-year RPI intervention began in the 2014–2015 school year. The study’s rationale and theoretical concerns are discussed along with methodological concerns including teacher professional development. The theoretical rationale and description of the methods from this study may be useful to others conducting rigorous research and evaluation in this area. PMID:28936104

  20. A Conceptual Framework for Graduate Teaching Assistant Professional Development Evaluation and Research.

    PubMed

    Reeves, Todd D; Marbach-Ad, Gili; Miller, Kristen R; Ridgway, Judith; Gardner, Grant E; Schussler, Elisabeth E; Wischusen, E William

    2016-01-01

    Biology graduate teaching assistants (GTAs) are significant contributors to the educational mission of universities, particularly in introductory courses, yet there is a lack of empirical data on how to best prepare them for their teaching roles. This essay proposes a conceptual framework for biology GTA teaching professional development (TPD) program evaluation and research with three overarching variable categories for consideration: outcome variables, contextual variables, and moderating variables. The framework's outcome variables go beyond GTA satisfaction and instead position GTA cognition, GTA teaching practice, and undergraduate learning outcomes as the foci of GTA TPD evaluation and research. For each GTA TPD outcome variable, key evaluation questions and example assessment instruments are introduced to demonstrate how the framework can be used to guide GTA TPD evaluation and research plans. A common conceptual framework is also essential to coordinating the collection and synthesis of empirical data on GTA TPD nationally. Thus, the proposed conceptual framework serves as both a guide for conducting GTA TPD evaluation at single institutions and as a means to coordinate research across institutions at a national level. © 2016 T. D. Reeves et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  1. [Analysis of evaluation process of research projects submitted to the Fondo de Investigación Sanitaria, Spain].

    PubMed

    Prieto Carles, C; Gómez-Gerique, J; Gutiérrez Millet, V; Veiga de Cabo, J; Sanz Martul, E; Mendoza Hernández, J L

    2000-10-07

    It is now widely accepted that strengthening research is essential for developing technological innovation, services and patents. However, such strengthening, and the funding that accompanies it, must rest on a fine-grained and rigorous evaluation process to which all research projects submitted to a public or private call for proposals are subjected, so that decisions are coherent with the investment to be made. The main aim of this work was therefore to analyse the evaluation process traditionally used by the Fondo de Investigación Sanitaria (FIS) and to propose appropriate modifications. A sample of 431 research projects submitted in 1998 was analysed. The evaluations produced by the FIS and by the ANEP (National Evaluation and Prospective Agency) were themselves assessed and scored for quality in their main components by three independent evaluators, and the results were compared within the FIS (internal) and between the FIS and the ANEP (external). The FIS evaluation comprised 20 commissions or areas of knowledge. The internal (FIS) analysis clearly showed that evaluation quality was related to the assigned commission (F = 3.71; p < 0.001) and to the proposed duration of the project (F = 3.42; p < 0.05), but not to the evaluator. The quality of the ANEP evaluations, in contrast, depended on all three factors. Overall, the ANEP evaluations were of higher quality than the FIS evaluations for three-year projects, but no significant differences were found for one- or two-year projects. In all cases, evaluations with a negative outcome (funding denied) showed a higher average quality than positive evaluations. These results suggest that some changes to the evaluation structure are advisable and that the set of FIS technical commissions should be reviewed in order to improve the evaluation process.

  2. Evaluating Community College Personnel: A Research Report.

    ERIC Educational Resources Information Center

    Deegan, William L.; And Others

    A statewide survey was conducted of local evaluation policies, procedures, and problems of implementing evaluation programs on the campuses of California community colleges. The following areas were studied: (1) the process of development of the evaluation program; (2) procedures utilized in the first year of implementing Senate Bill 696…

  3. Feminist research or humanistic research? Experiences of studying prostatectomy.

    PubMed

    Pateman, B

    2000-03-01

    This paper highlights issues related to men's health research arising from a small-scale study, carried out by a male researcher, to identify the experience of men following transurethral resection of the prostate (TURP) for benign prostatic hypertrophy (BPH). The intention of this paper is to stimulate methodological debate rather than to be a research report. For the study, an informal interview approach was used within a phenomenological framework, and interview experiences raised issues which have previously been discussed under the rubric of feminist research. The conclusion drawn is that a style of research which attempts to gain a holistic view of patients' experiences is better termed 'humanistic research', because the term 'feminist research' clearly cannot be applied to men studying men's health-related experiences.

  4. The Evaluation and Research of Multi-Project Programs: Program Component Analysis.

    ERIC Educational Resources Information Center

    Baker, Eva L.

    1977-01-01

    It is difficult to base evaluations on concepts irrelevant to state policy making. Evaluation of a multiproject program requires both time and differentiation of method. Data from the California Early Childhood Program illustrate process variables for program component analysis, and research questions for intraprogram comparison. (CP)

  5. Project monitoring and evaluation: an enhancing method for health research system management.

    PubMed

    Djalalinia, Shirin; Owlia, Parviz; Malekafzali, Hossein; Ghanei, Mostafa; Babamahmoodi, Abdolreza; Peykari, Niloofar

    2014-04-01

    Planning, organizing, staffing, leading and monitoring are the basic functional components of management. In the present article, we aim to define project monitoring and evaluation in the health research system (HRS), considering its successes and challenges on the basis of our national experience. In this study, based on the information from the annual evaluation of medical science universities during the last decade, the HRS indicators were scored along three axes corresponding to HRS functions: stewardship, capacity building and knowledge production. In this article, we focus on the results of the HRS evaluation from 2002 to 2010, as well as on its successes and challenges. Overall, the main results are the experience of designing and implementing such a process after pre-project preparation, with all parts supervised with the aims of the HRS evaluation in view. Project management lights the way for the practical application of knowledge, skills, tools and techniques for better HRS evaluation and management. We conclude that, although monitoring and evaluation are an essential part of HRS management and light the way toward improvement, there is still a need to take advantage of new advances in project management.

  6. Women's erotic rape fantasies: an evaluation of theory and research.

    PubMed

    Critelli, Joseph W; Bivona, Jenny M

    2008-01-01

    This article is the first systematic review of the research literature on women's rape fantasies. Current research indicates that between 31% and 57% of women have fantasies in which they are forced into sex against their will, and for 9% to 17% of women these are a frequent or favorite fantasy experience. Erotic rape fantasies are paradoxical: they do not appear to make sense. Why would a person have an erotic and pleasurable fantasy about an event that, in real life, would be abhorrent and traumatic? In this article, the major theories of women's rape fantasies are evaluated both rationally and empirically. These theories explain rape fantasies in terms of masochism, sexual blame avoidance, openness to sexuality, sexual desirability, male rape culture, biological predisposition to surrender, sympathetic physiological activation, and adversary transformation. This article evaluates theory and research, makes provisional judgments as to which theories appear to be most viable, and begins the task of theoretical integration to arrive at a more complete and internally consistent explanation for why many women engage in erotic rape fantasies. Methodological critiques and programs for future research are presented throughout.

  7. Methodological challenges in cross-language qualitative research: a research review.

    PubMed

    Squires, Allison

    2009-02-01

    Cross-language qualitative research occurs when a language barrier is present between researchers and participants. The language barrier is frequently mediated through the use of a translator or interpreter. The purpose of this analysis of cross-language qualitative research was threefold: (1) review the methods literature addressing cross-language research; (2) synthesize the methodological recommendations from the literature into a list of criteria that could evaluate how researchers methodologically managed translators and interpreters in their qualitative studies; (3) test these criteria on published cross-language qualitative studies. A group of 40 purposively selected cross-language qualitative studies found in nursing and health sciences journals. The synthesis of the cross-language methods literature produced 14 criteria to evaluate how qualitative researchers managed the language barrier between themselves and their study participants. To test the criteria, the researcher conducted a summative content analysis framed by discourse analysis techniques of the 40 cross-language studies. The evaluation showed that only 6 out of 40 studies met all the criteria recommended by the cross-language methods literature for the production of trustworthy results in cross-language qualitative studies. Multiple inconsistencies, reflecting disadvantageous methodological choices by cross-language researchers, appeared in the remaining 33 studies. To name a few, these included rendering the translator or interpreter as an invisible part of the research process, failure to pilot test interview questions in the participant's language, no description of translator or interpreter credentials, failure to acknowledge translation as a limitation of the study, and inappropriate methodological frameworks for cross-language research. The finding about researchers making the role of the translator or interpreter invisible during the research process supports studies completed by other

  8. Incidental Findings in Imaging Research: Evaluating Incidence, Benefit and Burden

    PubMed Central

    Orme, Nicholas M.; Fletcher, Joel G.; Siddiki, Hassan A.; Harmsen, W. Scott; O’Byrne, Megan M.; Port, John D.; Tremaine, William J.; Pitot, Henry C.; McFarland, Beth; Robinson, Marguerite E.; Koenig, Barabara A.; King, Bernard F.; Wolf, Susan M.

    2013-01-01

    Context Little information exists concerning the frequency of clinically significant incidental findings (IFs) identified in the course of imaging research across a broad spectrum of imaging modalities and body regions. Objective To estimate the frequency with which research imaging IFs generate further clinical action, and the medical benefit/burden of identifying these IFs. Design, Setting, and Participants Retrospective review of subjects undergoing a research imaging exam that was interpreted by a radiologist for IFs in the first quarter of 2004, with 3-year clinical follow-up. An expert panel reviewed IFs generating clinical action to determine medical benefit/burden based on predefined criteria. Main Outcome Measures Frequency of (1) IFs that generated further clinical action by modality, body part, age, gender, and (2) IFs resulting in clear medical benefit or burden. Results 1376 patients underwent 1426 research imaging studies. 40% (567/1426) of exams had at least one IF (1055 total). Risk of an IF increased significantly with age (OR = 1.5; 95% CI, 1.4–1.7, per decade increase). Abdominopelvic CT generated more IFs than other exams (OR = 18.9 compared with ultrasound; 9.2% with subsequent clinical action), with CT thorax and MR brain next (OR = 11.9 and 5.9; 2.8% and 2.2% with action, respectively). Overall, 6.2% of exams (35/567) with an IF generated clinical action, resulting in clear medical benefit in 1.1% (6/567) and clear medical burden in 0.5% (3/567). In most instances, medical benefit/burden was unclear (4.6%; 26/567). Conclusions The frequency of IFs in imaging research exams varies significantly by imaging modality, body region and age. Research imaging studies at high risk for generating IFs can be identified. Routine evaluation of research images by radiologists may result in identification of IFs in a substantial number of cases, and subsequent clinical action to address them in a much smaller number. Such clinical action can result in medical

  9. A Study on the Role of Web Technology in Enhancing Research Pursuance among University Academia

    ERIC Educational Resources Information Center

    Hussain, Irshad; Durrani, Muhammad Ismail

    2012-01-01

    The purpose of this study was to evaluate the role of web technologies in promoting research pursuance among university teachers, examine the use of web technologies by university teachers in conducting research and identify the problems of university academia in using web technologies for research. The study was delimited to academia of social…

  10. Evaluation to Redesign a Prototype Officer Data Base for Interdisciplinary Research

    DTIC Science & Technology

    1992-01-01

    accommodate cohort longitudinal research and econometric model testing. Recommendations regarding the adoption of the LOADB were presented. Utilization... commission data sets (Younkman, 1987), and the AIMS data set (Ramsey & Younkman, 1989). An analysis of selected standardized tests for ROTC screening was... ARI Research Note 92-16, Evaluation to Redesign a Prototype Officer Data Base for Interdisciplinary Research, Dianne D. Younkman and Lori G. Ramsey

  11. Work Group on American Indian Research and Program Evaluation Methodology, Symposium on Research and Evaluation Methodology: Lifespan Issues Related to American Indians/Alaska Natives with Disabilities (Washington, DC, April 26-27, 2002).

    ERIC Educational Resources Information Center

    Davis, Jamie D., Ed.; Erickson, Jill Shepard, Ed.; Johnson, Sharon R., Ed.; Marshall, Catherine A., Ed.; Running Wolf, Paulette, Ed.; Santiago, Rolando L., Ed.

    This first symposium of the Work Group on American Indian Research and Program Evaluation Methodology (AIRPEM) explored American Indian and Alaska Native cultural considerations in relation to "best practices" in research and program evaluation. These cultural considerations include the importance of tribal consultation on research…

  12. Review of Research Reporting Guidelines for Radiology Researchers.

    PubMed

    Cronin, Paul; Rawson, James V

    2016-05-01

    Prior articles have reviewed reporting guidelines and study evaluation tools for clinical research. However, only some of the many available accepted reporting guidelines at the Enhancing the QUAlity and Transparency Of health Research Network have been discussed in previous reports. In this paper, we review the key Enhancing the QUAlity and Transparency Of health Research reporting guidelines that have not been previously discussed. The study types include diagnostic and prognostic studies, reliability and agreement studies, observational studies, analytical and descriptive, experimental studies, quality improvement studies, qualitative research, health informatics, systematic reviews and meta-analyses, economic evaluations, and mixed methods studies. There are also sections on study protocols, and statistical analyses and methods. In each section, there is a brief overview of the study type, and then the reporting guideline(s) that are most applicable to radiology researchers including radiologists involved in health services research are discussed. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  13. Evaluation of Differentiated Human Bronchial Epithelial Cell Culture Systems for Asthma Research

    PubMed Central

    Stewart, Ceri E.; Torr, Elizabeth E.; Mohd Jamili, Nur H.; Bosquillon, Cynthia; Sayers, Ian

    2012-01-01

    The aim of the current study was to evaluate primary (human bronchial epithelial cells, HBEC) and non-primary (Calu-3, BEAS-2B, BEAS-2B R1) bronchial epithelial cell culture systems as air-liquid interface- (ALI-) differentiated models for asthma research. Ability to differentiate into goblet (MUC5AC+) and ciliated (β-Tubulin IV+) cells was evaluated by confocal imaging and qPCR. Expression of tight junction/adhesion proteins (ZO-1, E-Cadherin) and development of transepithelial electrical resistance (TEER) were assessed. Primary cells showed localised MUC5AC, β-Tubulin IV, ZO-1, and E-Cadherin and developed TEER with, however, a large degree of inter- and intradonor variation. Calu-3 cells developed a more reproducible TEER and a phenotype similar to primary cells although with diffuse β-Tubulin IV staining. BEAS-2B cells did not differentiate or develop tight junctions. These data highlight the challenges in working with primary cell models and the need for careful characterisation and selection of systems to answer specific research questions. PMID:22287976

  14. Facilitating comparative effectiveness research in cancer genomics: evaluating stakeholder perceptions of the engagement process

    PubMed Central

    Deverka, Patricia A; Lavallee, Danielle C; Desai, Priyanka J; Armstrong, Joanne; Gorman, Mark; Hole-Curry, Leah; O’Leary, James; Ruffner, BW; Watkins, John; Veenstra, David L; Baker, Laurence H; Unger, Joseph M; Ramsey, Scott D

    2013-01-01

    Aims The Center for Comparative Effectiveness Research in Cancer Genomics completed a 2-year stakeholder-guided process for the prioritization of genomic tests for comparative effectiveness research studies. We sought to evaluate the effectiveness of engagement procedures in achieving project goals and to identify opportunities for future improvements. Materials & methods The evaluation included an online questionnaire, one-on-one telephone interviews and facilitated discussion. Responses to the online questionnaire were tabulated for descriptive purposes, while transcripts from key informant interviews were analyzed using a directed content analysis approach. Results A total of 11 out of 13 stakeholders completed both the online questionnaire and interview process, while nine participated in the facilitated discussion. Eighty-nine percent of questionnaire items received overall ratings of agree or strongly agree; 11% of responses were rated as neutral with the exception of a single rating of disagreement with an item regarding the clarity of how stakeholder input was incorporated into project decisions. Recommendations for future improvement included developing standard recruitment practices, role descriptions and processes for improved communication with clinical and comparative effectiveness research investigators. Conclusions Evaluation of the stakeholder engagement process provided constructive feedback for future improvements and should be routinely conducted to ensure maximal effectiveness of stakeholder involvement. PMID:23459832

  15. Facilitating comparative effectiveness research in cancer genomics: evaluating stakeholder perceptions of the engagement process.

    PubMed

    Deverka, Patricia A; Lavallee, Danielle C; Desai, Priyanka J; Armstrong, Joanne; Gorman, Mark; Hole-Curry, Leah; O'Leary, James; Ruffner, B W; Watkins, John; Veenstra, David L; Baker, Laurence H; Unger, Joseph M; Ramsey, Scott D

    2012-07-01

    The Center for Comparative Effectiveness Research in Cancer Genomics completed a 2-year stakeholder-guided process for the prioritization of genomic tests for comparative effectiveness research studies. We sought to evaluate the effectiveness of engagement procedures in achieving project goals and to identify opportunities for future improvements. The evaluation included an online questionnaire, one-on-one telephone interviews and facilitated discussion. Responses to the online questionnaire were tabulated for descriptive purposes, while transcripts from key informant interviews were analyzed using a directed content analysis approach. A total of 11 out of 13 stakeholders completed both the online questionnaire and interview process, while nine participated in the facilitated discussion. Eighty-nine percent of questionnaire items received overall ratings of agree or strongly agree; 11% of responses were rated as neutral with the exception of a single rating of disagreement with an item regarding the clarity of how stakeholder input was incorporated into project decisions. Recommendations for future improvement included developing standard recruitment practices, role descriptions and processes for improved communication with clinical and comparative effectiveness research investigators. Evaluation of the stakeholder engagement process provided constructive feedback for future improvements and should be routinely conducted to ensure maximal effectiveness of stakeholder involvement.

  16. Electrification Futures Study: A Technical Evaluation of the Impacts of an

    Science.gov Websites

    Electrification Futures Study: A Technical Evaluation of the Impacts of an Electrified U.S. Energy System. Illustration showing various impacts of widespread electrification in the United States. In addition to NREL, the research team

  17. Bias in Research Grant Evaluation Has Dire Consequences for Small Universities.

    PubMed

    Murray, Dennis L; Morris, Douglas; Lavoie, Claude; Leavitt, Peter R; MacIsaac, Hugh; Masson, Michael E J; Villard, Marc-Andre

    2016-01-01

    Federal funding for basic scientific research is the cornerstone of societal progress, economy, health and well-being. There is a direct relationship between financial investment in science and a nation's scientific discoveries, making it a priority for governments to distribute public funding appropriately in support of the best science. However, research grant proposal success rate and funding level can be skewed toward certain groups of applicants, and such skew may be driven by systemic bias arising during grant proposal evaluation and scoring. Policies to best redress this problem are not well established. Here, we show that funding success and grant amounts for applications to Canada's Natural Sciences and Engineering Research Council (NSERC) Discovery Grant program (2011-2014) are consistently lower for applicants from small institutions. This pattern persists across applicant experience levels, is consistent among three criteria used to score grant proposals, and therefore is interpreted as representing systemic bias targeting applicants from small institutions. When current funding success rates are projected forward, forecasts reveal that future science funding at small schools in Canada will decline precipitously in the next decade, if skews are left uncorrected. We show that a recently-adopted pilot program to bolster success by lowering standards for select applicants from small institutions will not erase funding skew, nor will several other post-evaluation corrective measures. Rather, to support objective and robust review of grant applications, it is necessary for research councils to address evaluation skew directly, by adopting procedures such as blind review of research proposals and bibliometric assessment of performance. Such measures will be important in restoring confidence in the objectivity and fairness of science funding decisions. Likewise, small institutions can improve their research success by more strongly supporting productive

  18. Bias in Research Grant Evaluation Has Dire Consequences for Small Universities

    PubMed Central

    Murray, Dennis L.; Morris, Douglas; Lavoie, Claude; Leavitt, Peter R.; MacIsaac, Hugh; Masson, Michael E. J.; Villard, Marc-Andre

    2016-01-01

    Federal funding for basic scientific research is the cornerstone of societal progress, economy, health and well-being. There is a direct relationship between financial investment in science and a nation’s scientific discoveries, making it a priority for governments to distribute public funding appropriately in support of the best science. However, research grant proposal success rate and funding level can be skewed toward certain groups of applicants, and such skew may be driven by systemic bias arising during grant proposal evaluation and scoring. Policies to best redress this problem are not well established. Here, we show that funding success and grant amounts for applications to Canada’s Natural Sciences and Engineering Research Council (NSERC) Discovery Grant program (2011–2014) are consistently lower for applicants from small institutions. This pattern persists across applicant experience levels, is consistent among three criteria used to score grant proposals, and therefore is interpreted as representing systemic bias targeting applicants from small institutions. When current funding success rates are projected forward, forecasts reveal that future science funding at small schools in Canada will decline precipitously in the next decade, if skews are left uncorrected. We show that a recently-adopted pilot program to bolster success by lowering standards for select applicants from small institutions will not erase funding skew, nor will several other post-evaluation corrective measures. Rather, to support objective and robust review of grant applications, it is necessary for research councils to address evaluation skew directly, by adopting procedures such as blind review of research proposals and bibliometric assessment of performance. Such measures will be important in restoring confidence in the objectivity and fairness of science funding decisions. Likewise, small institutions can improve their research success by more strongly supporting productive

  19. Task-Based Language Learning and Teaching: An Action-Research Study

    ERIC Educational Resources Information Center

    Calvert, Megan; Sheen, Younghee

    2015-01-01

    The creation, implementation, and evaluation of language learning tasks remain a challenge for many teachers, especially those with limited experience with using tasks in their teaching. This action-research study reports on one teacher's experience of developing, implementing, critically reflecting on, and modifying a language learning task…

  20. The Influence of Peer Reviewer Expertise on the Evaluation of Research Funding Applications

    PubMed Central

    Sullivan, Joanne H.; Glisson, Scott R.

    2016-01-01

    Although the scientific peer review process is crucial to distributing research investments, little has been reported about the decision-making processes used by reviewers. One key attribute likely to be important for decision-making is reviewer expertise. Recent data from an experimental blinded review utilizing a direct measure of expertise has found that closer intellectual distances between applicant and reviewer lead to harsher evaluations, possibly suggesting that information is differentially sampled across subject-matter expertise levels and across information type (e.g. strengths or weaknesses). However, social and professional networks have been suggested to play a role in reviewer scoring. In an effort to test whether this result can be replicated in a real-world unblinded study utilizing self-assessed reviewer expertise, we conducted a retrospective multi-level regression analysis of 1,450 individual unblinded evaluations of 725 biomedical research funding applications by 1,044 reviewers. Despite the large variability in the scoring data, the results are largely confirmatory of work from blinded reviews, by which a linear relationship between reviewer expertise and their evaluations was observed—reviewers with higher levels of self-assessed expertise tended to be harsher in their evaluations. However, we also found that reviewer and applicant seniority could influence this relationship, suggesting social networks could have subtle influences on reviewer scoring. Overall, these results highlight the need to explore how reviewers utilize their expertise to gather and weight information from the application in making their evaluations. PMID:27768760
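
    A multi-level regression of the kind described above can be sketched as follows; the data are simulated, the effect size is an assumption, and the model (a random intercept per application, since each application receives two evaluations) is only one plausible specification, not the study's actual analysis.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Illustrative sketch with simulated data (not the study's): a multilevel model
    # of review scores, with a random intercept for each application because every
    # application receives two independent evaluations.
    rng = np.random.default_rng(1)
    n_apps, reviews_per_app = 300, 2

    app_id = np.repeat(np.arange(n_apps), reviews_per_app)
    app_quality = np.repeat(rng.normal(0.0, 0.5, n_apps), reviews_per_app)
    expertise = rng.integers(1, 6, n_apps * reviews_per_app)      # self-rated 1-5
    # Hypothetical generating process: higher expertise -> slightly harsher score.
    score = 3.0 + app_quality - 0.15 * expertise + rng.normal(0.0, 0.4, len(app_id))

    df = pd.DataFrame({"app_id": app_id, "expertise": expertise, "score": score})

    model = smf.mixedlm("score ~ expertise", data=df, groups=df["app_id"]).fit()
    print(model.summary())   # the expertise coefficient should be close to -0.15
    ```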

  1. Intellectual Growth for Undergraduate Students: Evaluation Results from an Undergraduate Research Conference

    ERIC Educational Resources Information Center

    Potter, Sharyn J.; Abrams, Eleanor; Townson, Lisa; Wake, Cameron; Williams, Julie E.

    2010-01-01

    We describe the development and evaluation of the university-wide, weeklong undergraduate research conference at the University of New Hampshire. Despite increases nationally in the number of undergraduate research conferences (URC), there has been little research examining the social and educational impact of these events on student presenters.…

  2. Matriculation Evaluation: Monographs on Designs from the Local Research Options Project.

    ERIC Educational Resources Information Center

    1992

    In November 1989, nine research designs developed to assist local colleges in the evaluation of their matriculation activities were disseminated to the California community colleges. These designs, created as part of the Local Research Options Project, focused on measuring the effects of the colleges' matriculation activities on student…

  3. Beyond the Google Search Bar: Evaluating Source Credibility in Contemporary Research

    ERIC Educational Resources Information Center

    Sorenson, Mary E.

    2016-01-01

    Courses: Research Methods, Public Speaking, Communication Theory, any other course that requires college students to engage in a formal research process. Can be conducted in traditional, online, or hybrid courses. Objectives: In this original single-class activity, students will be able to evaluate source credibility for resources that extend…

  4. Evaluative Intervention Research in Child's Play.

    ERIC Educational Resources Information Center

    Yawkey, Thomas Daniels; Fox, Frank D.

    Evaluative intervention studies which have examined the potential of imaginative play to foster young children's cognitive and social development provide the focus of this literature review. Four criteria were used to select the studies: (1) the study's focus was on the use of imaginative play to foster cognitive growth; (2) the studies were…

  5. Youth Opportunity Fund and Youth Capital Fund: Evaluation Findings from Initial Case-Study Visits. Research Report DCSF-RR004

    ERIC Educational Resources Information Center

    O'Donnell, Lisa; Bielby, Gill; Golden, Sarah; Morris, Marian; Walker, Matthew; Maguire, Sue

    2007-01-01

    The Department for Education and Skills (replaced by the Department for Children, Schools and Families as of June 28, 2007) commissioned the National Foundation for Educational Research (NFER) to conduct an evaluation of the Youth Opportunity Fund and Youth Capital Fund (YOF/YCF). This summary presents the main findings from the interim report of…

  6. Planning for the Evaluation of Teaching. NSPER: 79. A CEDR [Center on Evaluation, Development and Research] Monograph.

    ERIC Educational Resources Information Center

    Duckett, Willard R., Ed.

    This is a series of papers delivered at three National Symposia for Professionals in Evaluation and Research (NSPER) sessions in 1979. The agenda was the same at all sessions. The main topic was "Planning for the Evaluation of Teaching." The sessions were conducted in Charlotte, North Carolina; Milwaukee, Wisconsin; and Albuquerque, New…

  7. The Implementation and Evaluation of Teacher Training in Gaming Instruction for Secondary Science: An Action Research Project

    ERIC Educational Resources Information Center

    Sanders, Veronica

    2016-01-01

    This study implemented and evaluated gaming instruction as a professional development for science teachers at a Georgia high school. It was guided by four research questions that (a) assessed the impact of training in gaming instruction and evaluation of that training on science teachers' ability to use games; (b) examined evidence showing that…

  8. The Value and Feasibility of Evaluation Research on Teacher Development: Contrasting Experiences in Sri Lanka and Mexico

    ERIC Educational Resources Information Center

    Tatto, M. T.

    2002-01-01

    This article discusses the value and feasibility of carrying out evaluation research on teacher development and uses as points of reference the author's experiences in two countries, Sri Lanka and Mexico. In Sri Lanka, an evaluation study was designed to understand the effectiveness and costs of teacher development at the elementary level linking…

  9. Evaluation of occupational health interventions using a randomized controlled trial: challenges and alternative research designs.

    PubMed

    Schelvis, Roosmarijn M C; Oude Hengel, Karen M; Burdorf, Alex; Blatter, Birgitte M; Strijk, Jorien E; van der Beek, Allard J

    2015-09-01

    Occupational health researchers regularly conduct evaluative intervention research for which a randomized controlled trial (RCT) may not be the most appropriate design (eg, effects of policy measures, organizational interventions on work schedules). This article demonstrates the appropriateness of alternative designs for the evaluation of occupational health interventions, which permit causal inferences, formulated along two study design approaches: experimental (stepped-wedge) and observational (propensity scores, instrumental variables, multiple baseline design, interrupted time series, difference-in-difference, and regression discontinuity). For each design, the unique characteristics are presented including the advantages and disadvantages compared to the RCT, illustrated by empirical examples in occupational health. This overview shows that several appropriate alternatives for the RCT design are feasible and available, which may provide sufficiently strong evidence to guide decisions on implementation of interventions in workplaces. Researchers are encouraged to continue exploring these designs and thus contribute to evidence-based occupational health.
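
    As an illustration of one of the observational designs listed above, the sketch below estimates a difference-in-differences effect with an interaction term in ordinary least squares; the data are simulated and the variable names and effect size are assumptions, not taken from the article.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Illustrative difference-in-differences sketch with simulated data (one of the
    # observational designs listed in the article, not its actual analysis):
    # sickness-absence days before/after a workplace intervention in an exposed
    # company and a comparison company.
    rng = np.random.default_rng(7)
    n = 200
    rows = []
    for exposed in (0, 1):
        for post in (0, 1):
            base = 10.0 - 1.0 * exposed - 0.5 * post
            effect = -2.0 if (exposed and post) else 0.0   # hypothetical true effect
            y = base + effect + rng.normal(0.0, 2.0, n)
            rows.append(pd.DataFrame({"absence_days": y,
                                      "exposed": exposed, "post": post}))
    df = pd.concat(rows, ignore_index=True)

    # The coefficient on the interaction term is the difference-in-differences
    # estimate of the intervention effect.
    did = smf.ols("absence_days ~ exposed * post", data=df).fit()
    print(did.params["exposed:post"])   # should be near -2.0
    ```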

  10. Evaluating a community-based program to improve healthcare quality: research design for the Aligning Forces for Quality initiative.

    PubMed

    Scanlon, Dennis P; Alexander, Jeffrey A; Beich, Jeff; Christianson, Jon B; Hasnain-Wynia, Romana; McHugh, Megan C; Mittler, Jessica N; Shi, Yunfeng; Bodenschatz, Laura J

    2012-09-01

    The Aligning Forces for Quality (AF4Q) initiative is the Robert Wood Johnson Foundation's (RWJF's) signature effort to increase the overall quality of healthcare in targeted communities throughout the country. In addition to sponsoring this 16-site, complex program, the RWJF funds an independent scientific evaluation to support objective research on the initiative's effectiveness and contributions to basic knowledge in 5 core programmatic areas. The research design, data, and challenges faced in the evaluation of this 10-year initiative are discussed. A descriptive overview of the evaluation research design for a multi-site, community based, healthcare quality improvement initiative is provided. The multiphase research design employed by the evaluation team is discussed. Evaluation provides formative feedback to the RWJF, participants, and other interested audiences in real time; develops approaches to assess innovative and under-studied interventions; furthers the analysis and understanding of effective community-based collaborative work in healthcare; and helps to differentiate the various facilitators, barriers, and contextual dimensions that affect the implementation and outcomes of community-based health interventions. The AF4Q initiative is arguably the largest community-level healthcare improvement demonstration in the United States to date; it is being implemented at a time of rapid change in national healthcare policy. The implementation of large-scale, multi-site initiatives is becoming an increasingly common approach for addressing problems in healthcare. The evaluation research design for the AF4Q initiative, and the lessons learned from its approach, may be valuable to others tasked with evaluating similar community-based initiatives.

  11. Education research: evaluating the use of podcasting for residents during EEG instruction: a pilot study.

    PubMed

    Bensalem-Owen, Meriem; Chau, Destiny F; Sardam, Sean C; Fahy, Brenda G

    2011-08-23

    Educational methods for residents are shifting toward greater learner independence aided by technological advances. A Web-based program using a podcast was created for resident EEG instruction, replacing conventional didactics. The EEG curriculum also consisted of EEG interpretations under the tutelage of a neurophysiologist. This pilot study aimed to objectively evaluate the effectiveness of the podcast as a new teaching tool. A podcast for resident EEG instruction was implemented on the Web, replacing the traditional lecture. After Institutional Review Board approval, consent was obtained from the participating residents. Using 25-question evaluation tools, participants were assessed at baseline before any EEG instruction, and reassessed after podcasting and after 10 clinical EEG exposures. Each 25-item evaluation tool contained tracings used for clinical EEG interpretations. Scores after podcast training were also compared to scores after traditional didactic training from a previous study among anesthesiology trainees. Ten anesthesiology residents completed the study. The mean scores with standard deviations are 9.50 ± 2.92 at baseline, 13.40 ± 3.31 (p = 0.034) after the podcast, and 16.20 ± 1.87 (p = 0.019) after interpreting 10 EEGs. No differences were noted between the mean educational tool scores for those who underwent podcasting training compared to those who had undergone traditional didactic training. In this pilot study, podcast training was as effective as the prior conventional lecture in meeting the curricular goals of increasing EEG knowledge after 10 EEG interpretations as measured by assessment tools.
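
    The paired comparisons reported above can be reproduced in outline with a few lines of code; the scores below are simulated to resemble the reported means and standard deviations, so the resulting p-values are illustrative only.

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative sketch with simulated scores (n = 10, as in the pilot), not the
    # study's data: paired comparisons of the 25-item assessment at baseline,
    # after the podcast, and after 10 supervised EEG interpretations.
    rng = np.random.default_rng(3)
    baseline = rng.normal(9.5, 2.9, 10)
    after_podcast = baseline + rng.normal(3.9, 2.0, 10)
    after_eegs = after_podcast + rng.normal(2.8, 2.0, 10)

    print("baseline vs podcast :", stats.ttest_rel(after_podcast, baseline))
    print("podcast vs 10 EEGs  :", stats.ttest_rel(after_eegs, after_podcast))
    # With only 10 participants, a non-parametric check is often reported as well.
    print("Wilcoxon (baseline vs podcast):", stats.wilcoxon(after_podcast, baseline))
    ```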

  12. Reported credibility techniques in higher education evaluation studies that use qualitative methods: A research synthesis.

    PubMed

    Liao, Hongjing; Hitchcock, John

    2018-06-01

    This synthesis study examined the reported use of credibility techniques in higher education evaluation articles that use qualitative methods. The sample included 118 articles published in six leading higher education evaluation journals from 2003 to 2012. Mixed methods approaches were used to identify key credibility techniques reported across the articles, document the frequency of these techniques, and describe their use and properties. Two broad sets of techniques were of interest: primary design techniques (i.e., basic), such as sampling/participant recruitment strategies, data collection methods, analytic details, and additional qualitative credibility techniques (e.g., member checking, negative case analyses, peer debriefing). The majority of evaluation articles reported use of primary techniques although there was wide variation in the amount of supporting detail; most of the articles did not describe the use of additional credibility techniques. This suggests that editors of evaluation journals should encourage the reporting of qualitative design details and authors should develop strategies yielding fuller methodological description. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Critiquing qualitative research.

    PubMed

    Beck, Cheryl Tatano

    2009-10-01

    The ability to critique research is a valuable skill that is fundamental to a perioperative nurse's ability to base his or her clinical practice on evidence derived from research. Criteria differ for critiquing a quantitative versus a qualitative study (ie, statistics are evaluated in a quantitative study, but not in a qualitative study). This article provides guidelines for assessing qualitative research. Excerpts from a published qualitative research report are summarized and then critiqued. Questions are provided that help evaluate different sections of a research study (eg, sample, data collection methods, data analysis).

  14. Research-IQ: Development and Evaluation of an Ontology-anchored Integrative Query Tool

    PubMed Central

    Borlawsky, Tara B.; Lele, Omkar; Payne, Philip R. O.

    2011-01-01

    Investigators in the translational research and systems medicine domains require highly usable, efficient and integrative tools and methods that allow for the navigation of and reasoning over emerging large-scale data sets. Such resources must cover a spectrum of granularity from bio-molecules to population phenotypes. Given such information needs, we report upon the initial design and evaluation of an ontology-anchored integrative query tool, Research-IQ, which employs a combination of conceptual knowledge engineering and information retrieval techniques to enable the intuitive and rapid construction of queries, in terms of semi-structured textual propositions, that can subsequently be applied to integrative data sets. Our initial results, based upon both quantitative and qualitative evaluations of the efficacy and usability of Research-IQ, demonstrate its potential to increase clinical and translational research throughput. PMID:21821150

  15. Toward Better Research on--and Thinking about--Evaluation Influence, Especially in Multisite Evaluations

    ERIC Educational Resources Information Center

    Mark, Melvin M.

    2011-01-01

    Evaluation is typically carried out with the intention of making a difference in the understandings and actions of stakeholders and decision makers. The author provides a general review of the concepts of evaluation "use," evaluation "influence," and "influence pathways," with connections to multisite evaluations. The study of evaluation influence…

  16. Evaluating the Potential of NASA's Earth Science Research Results for Improving Future Operational Systems

    NASA Astrophysics Data System (ADS)

    Frederick, M. E.; Cox, E. L.; Friedl, L. A.

    2006-12-01

    NASA's Earth Science Theme is charged with implementing NASA Strategic Goal 3A to "study Earth from space to advance scientific understanding and meet societal needs." In the course of meeting this objective, NASA produces research results, such as scientific observatories, research models, advanced sensor and space system technology, data active archives and interoperability technology, high-performance computing systems, and knowledge products. These research results have the potential to serve society beyond their intended purpose of answering pressing Earth system science questions. NASA's Applied Sciences Program systematically evaluates the potential of the portfolio of research results to serve society by conducting projects in partnership with regional/national-scale operational partners with the statutory responsibility to inform decision makers. These projects address NASA's National Applications and the societal benefit areas under the IEOS and GEOSS. Prototyping methods are used in two ways in NASA's Applied Sciences Program. The first is part of the National Applications program element, referred to as Integrated Systems Solutions (ISS) projects. The approach for these projects is to use high-fidelity prototypes to benchmark the assimilation of NASA research results into our partners' decision support systems. The outcome from ISS projects is a prototype system that has been rigorously tested with the partner to understand the scientific uncertainty and improved value of their modified system. In many cases, these completed prototypes are adopted or adapted for use by the operational partners. The second falls under the Crosscutting Solutions program element, referred to as Rapid Prototyping (RP) experiments. The approach for RP experiments is to use low-fidelity prototypes that are low cost and quickly produced to evaluate the potential of the breadth of NASA research results to serve society. The outcome from the set of RP experiments is an

  17. The Iterative Research Cycle: Process-Based Model Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2014-12-01

    The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.
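
    The abstract describes judging model components against summary metrics of the calibration data rather than the raw series. The sketch below is a toy, hypothetical illustration of that idea in an approximate-Bayesian rejection style: candidate parameters are accepted when the summary metrics of a simulated series match those of the observed series. It is not the author's actual methodology; the model, metrics, and tolerance are assumptions for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def toy_model(decay, n=365):
        """Toy linear-reservoir-like 'streamflow' series driven by random rainfall."""
        rain = rng.exponential(1.0, n)
        q = np.zeros(n)
        for t in range(1, n):
            q[t] = decay * q[t - 1] + (1 - decay) * rain[t]
        return q

    def summaries(q):
        # Summary metrics used instead of the raw data: mean and lag-1 autocorrelation.
        return np.array([q.mean(), np.corrcoef(q[:-1], q[1:])[0, 1]])

    obs_summary = summaries(toy_model(decay=0.7))  # stand-in for field observations

    # Rejection step: keep prior draws whose summary metrics are close to the observed
    # ones. A systematic mismatch in one metric would point to the model component
    # controlling that metric.
    accepted = []
    for _ in range(5000):
        decay = rng.uniform(0.0, 1.0)
        if np.linalg.norm(summaries(toy_model(decay)) - obs_summary) < 0.1:
            accepted.append(decay)

    if accepted:
        print(f"accepted {len(accepted)} draws, posterior mean decay ≈ {np.mean(accepted):.2f}")
    else:
        print("no draws accepted; widen the tolerance")
    ```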

  18. Using Photo-Interviewing as Tool for Research and Evaluation.

    ERIC Educational Resources Information Center

    Dempsey, John V.; Tucker, Susan A.

    Arguing that photo-interviewing yields richer data than that usually obtained from verbal interviewing procedures alone, it is proposed that this method of data collection be added to "standard" methodologies in instructional development research and evaluation. The process, as described in this paper, consists of using photographs of…

  19. Evaluating the High School Lunar Research Projects Program

    NASA Astrophysics Data System (ADS)

    Shaner, A. J.; Shipp, S. S.; Allen, J.; Kring, D. A.

    2012-12-01

    The Center for Lunar Science and Exploration (CLSE), a collaboration between the Lunar and Planetary Institute and NASA's Johnson Space Center, is one of seven member teams of the NASA Lunar Science Institute (NLSI). In addition to research and exploration activities, the CLSE team is deeply invested in education and outreach. In support of NASA's and NLSI's objective to train the next generation of scientists, CLSE's High School Lunar Research Projects program is a conduit through which high school students can actively participate in lunar science and learn about pathways into scientific careers. The objectives of the program are to enhance 1) student views of the nature of science; 2) student attitudes toward science and science careers; and 3) student knowledge of lunar science. In its first three years, approximately 140 students and 28 teachers from across the United States have participated in the program. Before beginning their research, students undertake Moon 101, a guided-inquiry activity designed to familiarize them with lunar science and exploration. Following Moon 101, and guided by a lunar scientist mentor, teams choose a research topic, ask their own research question, and design their own research approach to direct their investigation. At the conclusion of their research, teams present their results to a panel of lunar scientists. This panel selects four posters to be presented at the annual Lunar Science Forum held at NASA Ames. The top scoring team travels to the forum to present their research. Three instruments have been developed or modified to evaluate the extent to which the High School Lunar Research Projects meets its objectives. These three instruments measure changes in student views of the nature of science, attitudes towards science and science careers, and knowledge of lunar science. Exit surveys for teachers, students, and mentors were also developed to elicit general feedback about the program and its impact. The nature of science

  20. Evaluation of an action research project in ophthalmic nursing practice.

    PubMed

    Waterman, Heather; Harker, Rona; MacDonald, Heather; McLaughlan, Rita; Waterman, Christine

    2005-11-01

    This paper reports the evaluation phase of an action research project that promoted face-down posturing of patients following vitreo-retinal surgery for macular hole to enhance patient outcomes. The evaluation phase identified areas of practice needing further development from the perspectives of those involved with the care of patients. To achieve best results following surgical repair of macular hole, patients are required to posture face down for several weeks. As a consequence, patients complain of severe back and neck ache and find it difficult to persist with the posturing. Work to advance nursing practice as surgical developments occur has relevance beyond ophthalmology and the particular context of this project. The first three phases of this action research--problem identification, planning and action--have been reported in another paper. Throughout the project an action research group comprising representatives of key stakeholders was actively involved in researching and changing practice. During the evaluation phase, a qualitative methodology was chosen. Interviews with 17 members of staff from the inpatient area were carried out to elicit their perspectives on the posturing of patients. Qualitative interviews were selected to facilitate comparison with interview data from Phase 1. Data analysis ran concurrently with data collection, so that one could inform the other. Overall, nurses and healthcare support workers felt that patients were more agreeable to posturing and began to posture more quickly after surgery. Communication was still an issue in some instances, patients having urgent as opposed to planned surgery were found to be more difficult to prepare, and the psychological care of patients still posed problems for nursing staff. The evaluation suggests that improvements in the care of this group of patients have occurred. A 10-point plan to promote face-down posturing has been developed which will be of use to practitioners in other

  1. Adding Dimension to Evaluative Research Through the Use of Protocol Material.

    ERIC Educational Resources Information Center

    Tittle, Carol Kehr

    A rationale and illustration of the use of original records or protocol materials in an evaluation research report are described. Records of school observations and audiotape transcripts were selected to represent the concepts or categories which were developed in the process of evaluation. These qualitative data were collected in a project which…

  2. 48 CFR 225.7016 - Restriction on Ballistic Missile Defense research, development, test, and evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Restriction on Ballistic Missile Defense research, development, test, and evaluation. 225.7016 Section 225.7016 Federal Acquisition... Acquisition 225.7016 Restriction on Ballistic Missile Defense research, development, test, and evaluation. [68...

  3. 32 CFR 219.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Early termination of research support: Evaluation of applications and proposals. 219.123 Section 219.123 National Defense Department of Defense....123 Early termination of research support: Evaluation of applications and proposals. (a) The...

  4. 32 CFR 219.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 2 2011-07-01 2011-07-01 false Early termination of research support: Evaluation of applications and proposals. 219.123 Section 219.123 National Defense Department of Defense....123 Early termination of research support: Evaluation of applications and proposals. (a) The...

  5. Linking Teacher Evaluation to Professional Development: Focusing on Improving Teaching and Learning. Research & Policy Brief

    ERIC Educational Resources Information Center

    Goe, Laura; Biggers, Kietha; Croft, Andrew

    2012-01-01

    Recently, teacher evaluation has become a major focus in educational policy debates and research efforts. This increased attention to teacher evaluation has raised questions about the relationship between evaluation and student outcomes. Rivkin, Hanushek, and Kain (2005) and others have demonstrated with value-added research that there are…

  6. Research on Evaluation and Control of Karst Water Resources in a Certain Tunnel of Dalian Subway

    NASA Astrophysics Data System (ADS)

    Wang, Guang Qiang

    2018-05-01

    Taking a certain tunnel of the Dalian Metro as the research object, the development of karst in the study area was evaluated using geophysical prospecting and drilling data. Karst water resources in the study area were evaluated in terms of both quality and quantity: correlations among ion contents were analysed from the results of chemical analyses of the groundwater, and the maximum water inflow in the karst section of the tunnel was calculated using the Oshima Yoshi formula. Based on these evaluations, measures and methods for groundwater control and tube-well dewatering are put forward, which provide guidance for tunnel construction in karst areas.
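
    The record calculates maximum water inflow with the Oshima Yoshi formula, whose exact form is not reproduced there. Purely as an illustrative stand-in, the sketch below estimates steady-state inflow per metre of tunnel with Goodman's classical approximation, Q ≈ 2πkH / ln(2H/r); all parameter values are hypothetical and would need to be replaced with the site data described in the record.

    ```python
    import math

    def tunnel_inflow_per_metre(k, head, radius):
        """Steady-state groundwater inflow per metre of tunnel (m^3/s per m).

        k      -- hydraulic conductivity of the surrounding rock (m/s)
        head   -- groundwater head above the tunnel axis (m)
        radius -- tunnel radius (m)
        """
        return 2.0 * math.pi * k * head / math.log(2.0 * head / radius)

    # Hypothetical karst-section parameters (not taken from the record)
    k = 5e-6      # m/s, fissured limestone
    head = 30.0   # m of head above the tunnel
    radius = 3.0  # m

    q = tunnel_inflow_per_metre(k, head, radius)
    print(f"estimated inflow ≈ {q * 3600:.2f} m^3/h per metre of tunnel")
    ```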

  7. Evaluation of the Treatment of Diabetic Retinopathy A Research Project

    ERIC Educational Resources Information Center

    Kupfer, Carl

    1973-01-01

    Evaluated is the treatment of diabetic retinopathy (blindness due to ruptured vessels of the retina as a side effect of diabetes), and described is a research project comparing two types of photocoagulation treatment. (DB)

  8. Introducing problem-based learning into research methods teaching: student and facilitator evaluation.

    PubMed

    Carlisle, Caroline; Ibbotson, Tracy

    2005-10-01

    The evidence base for the effectiveness of problem-based learning (PBL) has never been substantively established, although PBL is a generally accepted approach to learning in health care curricula. PBL is believed to encourage transferable skills, including problem-solving and team-working. PBL was used to deliver a postgraduate research methods module, and a small evaluation study to explore its efficacy was conducted amongst the students (n = 51) and facilitators (n = 6). The study comprised an evaluation questionnaire, distributed after each themed group of PBL sessions, and a group discussion conducted 4 weeks after the conclusion of the module, which was attended by student representatives and the facilitators. Questionnaire data were analysed using SPSS, and a transcript of the discussion was subjected to content analysis. The results indicated that students felt that a PBL approach helped to make the subject matter more interesting to them, and they believed that they would retain knowledge for a longer period than if their learning had used a more traditional lecture format. Students also perceived that PBL was effective in its ability to enhance students' understanding of the group process. All those involved in the PBL process reinforced the pivotal role of the facilitator. This study indicates that there is potential for PBL to be used beyond the more usual clinical scenarios constructed for health care professional education, and further exploration of its use in areas such as building research capability should be undertaken.

  9. Interdisciplinary research and education in the Vienna Doctoral Programme on Water Resource Systems: a framework for evaluation

    NASA Astrophysics Data System (ADS)

    Bloeschl, G.; Carr, G.; Loucks, D. P.

    2017-12-01

    Greater understanding of how interdisciplinary research and education evolve is critical for identifying and implementing appropriate programme management strategies. We propose a program evaluation framework that is based on social learning processes (individual learning, interdisciplinary research practices, and interaction between researchers with different backgrounds); social capital outcomes (ability to interact, interpersonal connectivity, and shared understanding); and knowledge and human capital outcomes (new knowledge that integrates multiple research fields). The framework is tested on an established case study doctoral program: the Vienna Doctoral Program on Water Resource Systems. Data are collected via mixed qualitative/quantitative methods that include semi-structured interviews, publication co-author analysis, analysis of research proposals, categorisation of the interdisciplinarity of publications, and graduate analysis. Through the evaluation and analysis, several interesting findings about how interdisciplinary research evolves and can be supported are identified. Firstly, different aspects of individual learning seem to contribute to a researcher's ability to interact with researchers from other research fields and work collaboratively. These include learning new material from different research fields, learning how to learn new material, and learning how to integrate different material. Secondly, shared interdisciplinary research practices can be identified that may be common to other programs and support interaction and shared understanding between different researchers. They include clarification and questioning, harnessing differences, and setting defensible research boundaries. Thirdly, intensive interaction between researchers from different backgrounds supports connectivity between the researchers, further enabling cross-disciplinary collaborative work. The case study data suggest that social learning processes and social capital outcomes

  10. Describing the implementation of an innovative intervention and evaluating its effectiveness in increasing research capacity of advanced clinical nurses: using the consolidated framework for implementation research.

    PubMed

    McKee, Gabrielle; Codd, Margaret; Dempsey, Orla; Gallagher, Paul; Comiskey, Catherine

    2017-01-01

    Despite advanced nursing roles having a research competency, participation in research is low. There are many barriers to participation in research, and few interventions have been developed to address these. This paper aims to describe the implementation of an intervention to increase research participation in advanced clinical nursing roles and to evaluate its effectiveness. The intervention was implemented within one hospital site. The evaluation utilised a mixed methods design and an implementation science framework. All staff in advanced nursing roles were invited to take part; all those who were interested and had a project in mind could volunteer to participate in the intervention. The intervention consisted of the development of small research groups working on projects developed by the nurse participant/s and supported by an academic and a research fellow. The main evaluation was through focus groups, and the output was analysed using thematic analysis. In addition, a survey questionnaire was circulated to all participants to ascertain their self-reported research skills before and after the intervention. The results of the survey were analysed using descriptive statistics. Finally, an inventory of research outputs was collated. In the first year, twelve new clinical nurse-led research projects were conducted and reported in six peer-reviewed papers, two non-peer-reviewed papers, and 20 conference presentations. The main strengths of the intervention were its promptness in completing research, publishing, and showcasing clinical innovations. The main barriers identified were time and appropriate support from academics and from peers. The majority of participants had increased experience in scientific writing and data analysis. This study shows that an intervention with minor financial resources; a top-down approach; support of a hands-on research fellow; peer collaboration with academics; strong clinical ownership by the clinical nurse researcher

  11. Evaluation of Nuclear Facility Decommissioning Projects program: a reference research reactor. Project summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumann, B.L.; Miller, R.L.

    1983-10-01

    This document presents, in summary form, generic conceptual information relevant to the decommissioning of a reference research reactor (RRR). All of the data presented were extracted from NUREG/CR-1756 and arranged in a form that will provide a basis for future comparison studies for the Evaluation of Nuclear Facility Decommissioning Projects (ENFDP) program.

  12. Evaluation of Family Planning Programmes, An Example from Botswana. Research for Action No. 2.

    ERIC Educational Resources Information Center

    Cook, Sheila

    Since 1969 the International Planned Parenthood Federation has worked with the government of Botswana in setting up family planning services. An evaluation of the family planning aspects of the program was carried out. This is a summary of three research studies and some general comments. Included is: (1) an introduction to Botswana and the…

  13. [Establish research model of post-marketing clinical safety evaluation for Chinese patent medicine].

    PubMed

    Zheng, Wen-ke; Liu, Zhi; Lei, Xiang; Tian, Ran; Zheng, Rui; Li, Nan; Ren, Jing-tian; Du, Xiao-xi; Shang, Hong-cai

    2015-09-01

    The safety of Chinese patent medicine has become a focus of social concern, and it is necessary to carry out post-marketing clinical safety evaluation for Chinese patent medicine. However, there are no established criteria to guide the related research, so it is urgent to set up a model and methods to guide such studies. Drawing on a series of clinical research projects, we put forward several views: the objective and content of clinical safety evaluation should be made clear and definite, the workflow should be determined, a list of items for the safety evaluation project should be compiled, and a three-level classification of risk control should be adopted. On this basis, we set up a model of post-marketing clinical safety evaluation for Chinese patent medicine. Using this model, the list of items can be used to rank medicine risks, and steps can then be taken for the different risks with the aim of lowering the risk level. Finally, the medicine can be managed through five steps in sequence: collecting risk signals, risk recognition, risk assessment, risk management, and after-effect assessment. We hope this provides new ideas for future research.

  14. Multi-KW dc distribution system technology research study

    NASA Technical Reports Server (NTRS)

    Dawson, S. G.

    1978-01-01

    The Multi-KW DC Distribution System Technology Research Study is the third phase of the NASA/MSFC study program. The purpose of this contract was to complete the design of the integrated technology test facility, provide test planning, support test operations, and evaluate test results. This study is a continuation of that contract. The purpose of this continuation is to study and analyze high-voltage system safety, to determine optimum voltage levels versus power, to identify power distribution system components which require development for higher voltage systems, and finally to determine what modifications must be made to the Power Distribution System Simulator (PDSS) to demonstrate 300 Vdc distribution capability.

  15. 28 CFR 46.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Early termination of research support: Evaluation of applications and proposals. 46.123 Section 46.123 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PROTECTION OF HUMAN SUBJECTS § 46.123 Early termination of research support: Evaluation of...

  16. 49 CFR 11.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Early termination of research support: Evaluation of applications and proposals. 11.123 Section 11.123 Transportation Office of the Secretary of Transportation PROTECTION OF HUMAN SUBJECTS § 11.123 Early termination of research support: Evaluation of...

  17. 28 CFR 46.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Early termination of research support: Evaluation of applications and proposals. 46.123 Section 46.123 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PROTECTION OF HUMAN SUBJECTS § 46.123 Early termination of research support: Evaluation of...

  18. 22 CFR 225.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Early termination of research support: Evaluation of applications and proposals. 225.123 Section 225.123 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT PROTECTION OF HUMAN SUBJECTS § 225.123 Early termination of research support: Evaluation of...

  19. 22 CFR 225.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Early termination of research support: Evaluation of applications and proposals. 225.123 Section 225.123 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT PROTECTION OF HUMAN SUBJECTS § 225.123 Early termination of research support: Evaluation of...

  20. 49 CFR 11.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 1 2011-10-01 2011-10-01 false Early termination of research support: Evaluation of applications and proposals. 11.123 Section 11.123 Transportation Office of the Secretary of Transportation PROTECTION OF HUMAN SUBJECTS § 11.123 Early termination of research support: Evaluation of...

  1. Implementing a Service Learning Model for Teaching Research Methods and Program Evaluation

    ERIC Educational Resources Information Center

    Shannon, Patrick; Kim, Wooksoo; Robinson, Adjoa

    2012-01-01

    In an effort to teach students the basic knowledge of research methods and the realities of conducting research in the context of agencies in the community, faculty developed and implemented a service learning model for teaching research and program evaluation to foundation-year MSW students. A year-long foundation course was designed in which one…

  2. Digital Libraries and Repositories in India: An Evaluative Study

    ERIC Educational Resources Information Center

    Mittal, Rekha; Mahesh, G.

    2008-01-01

    Purpose: The purpose of this research is to identify and evaluate the collections within digital libraries and repositories in India available in the public domain. Design/methodology/approach: The digital libraries and repositories were identified through a study of the literature, as well as internet searching and browsing. The resulting digital…

  3. Methodological Challenges in Cross-Language Qualitative Research: A Research Review

    PubMed Central

    Squires, Allison

    2009-01-01

    Objectives Cross-language qualitative research occurs when a language barrier is present between researchers and participants. The language barrier is frequently mediated through the use of a translator or interpreter. The purpose of this critical review of cross-language qualitative research was threefold: 1) to review the methods literature addressing cross-language research; 2) to synthesize the methodological recommendations from the literature into a list of criteria that could evaluate how researchers methodologically managed translators and interpreters in their qualitative studies; and 3) to test these criteria on published cross-language qualitative studies. Data sources A group of 40 purposively selected cross-language qualitative studies found in nursing and health sciences journals. Review methods The synthesis of the cross-language methods literature produced 14 criteria to evaluate how qualitative researchers managed the language barrier between themselves and their study participants. To test the criteria, the researcher conducted a summative content analysis framed by discourse analysis techniques of the 40 cross-language studies. Results The evaluation showed that only 6 out of 40 studies met all the criteria recommended by the cross-language methods literature for the production of trustworthy results in cross-language qualitative studies. Multiple inconsistencies, reflecting disadvantageous methodological choices by cross-language researchers, appeared in the remaining 33 studies. To name a few, these included rendering the translator or interpreter an invisible part of the research process, failure to pilot test interview questions in the participant’s language, no description of translator or interpreter credentials, failure to acknowledge translation as a limitation of the study, and inappropriate methodological frameworks for cross-language research. Conclusions The finding about researchers making the role of the translator or interpreter

  4. Measuring the Measurements: A Study of Evaluation of Writing: An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Scherer, Darlene Lienau

    Intended to make the educational community aware of how research has defined acceptable practice in writing assessment, this annotated bibliography examines research about writing evaluation. Divided into five sections, the first section of the bibliography surveys some psychological and linguistic studies of the development of students' writing…

  5. A Recess Evaluation with the Players: Taking Steps Toward Participatory Action Research

    PubMed Central

    Ren, Julie Yunyi

    2010-01-01

    This playground study conceptualizes recess as a time and space that belongs to students; their inclusion in this evaluation is a notable difference from other recess/playground research. The goal was to help elementary school students make the changes they felt were needed on their playground. After conducting structured observations and student and recess aide focus groups, a report was presented to all stakeholders, and recess changes were made. We seek to show how the process of being inclusive during the evaluation was not only valuable for determining problem definition and potential interventions, but was also necessary to determine the best methods for solutions, move toward second-order change, and to create a space to facilitate children’s participation and empowerment. PMID:20544270

  6. Evaluating Impact. Education Research Paper.

    ERIC Educational Resources Information Center

    McKay, Veronica, Ed.; Treffgarne, Carew, Ed.

    Papers in this collection address issues related to participatory approaches to assessing impact. The first section, "What Is an Impact Study and How Should We Do It?" contains: (1) "Participatory Impact Assessment" (John Shotton); (2) "Participatory Action Research as an Approach to Impact Assessment" (Victoria…

  7. Institute for Training Minority Group Research and Evaluation Specialists. Final Report.

    ERIC Educational Resources Information Center

    Brown, Roscoe C., Jr.

    The Institute for Training Minority Group Research and Evaluation Specialists comprised 4 programs in 1: (1) a 6-week graduate course at New York University (NYU) during the 1970 summer session for 20 minority group persons that provided training in research design, statistics, data collection and analysis, and report writing; (2) a program of…

  8. A Cluster-Randomized Trial of Restorative Practices: An Illustration to Spur High-Quality Research and Evaluation

    ERIC Educational Resources Information Center

    Acosta, Joie D.; Chinman, Matthew; Ebener, Patricia; Phillips, Andrea; Xenakis, Lea; Malone, Patrick S.

    2016-01-01

    Restorative practices in schools lack rigorous evaluation studies. As an example of rigorous school-based research, this article describes the first randomized control trial of restorative practices to date, the Study of Restorative Practices. It is a 5-year, cluster-randomized controlled trial (RCT) of the Restorative Practices Intervention (RPI)…

  9. 10 CFR 745.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Early termination of research support: Evaluation of... § 745.123 Early termination of research support: Evaluation of applications and proposals. (a) The... finds an institution has materially failed to comply with the terms of this policy. (b) In making...

  10. 49 CFR 11.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 1 2014-10-01 2014-10-01 false Early termination of research support: Evaluation... Transportation PROTECTION OF HUMAN SUBJECTS § 11.123 Early termination of research support: Evaluation of..., when the department or agency head finds an institution has materially failed to comply with the terms...

  11. 49 CFR 11.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 1 2013-10-01 2013-10-01 false Early termination of research support: Evaluation... Transportation PROTECTION OF HUMAN SUBJECTS § 11.123 Early termination of research support: Evaluation of..., when the department or agency head finds an institution has materially failed to comply with the terms...

  12. 49 CFR 11.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 1 2012-10-01 2012-10-01 false Early termination of research support: Evaluation... Transportation PROTECTION OF HUMAN SUBJECTS § 11.123 Early termination of research support: Evaluation of..., when the department or agency head finds an institution has materially failed to comply with the terms...

  13. 10 CFR 745.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Early termination of research support: Evaluation of applications and proposals. 745.123 Section 745.123 Energy DEPARTMENT OF ENERGY PROTECTION OF HUMAN SUBJECTS § 745.123 Early termination of research support: Evaluation of applications and proposals. (a) The...

  14. 10 CFR 745.123 - Early termination of research support: Evaluation of applications and proposals.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Early termination of research support: Evaluation of applications and proposals. 745.123 Section 745.123 Energy DEPARTMENT OF ENERGY PROTECTION OF HUMAN SUBJECTS § 745.123 Early termination of research support: Evaluation of applications and proposals. (a) The...

  15. Leadership behaviours and healthcare research performance: prospective correlational study.

    PubMed

    Patel, Vanash M; Ashrafian, Hutan; Uzoho, Chukwudi; Nikiteas, Nikolaos; Panzarasa, Pietro; Sevdalis, Nick; Darzi, Ara; Athanasiou, Thanos

    2016-05-16

    The aims of the study were to determine whether differences in leadership self-perception/behaviour in healthcare researchers may influence research performance and to evaluate whether certain leadership characteristics are associated with enhanced leadership efficiency in terms of motivation, effectiveness and satisfaction. All Faculty of Medicine Professors at Imperial College London (n=215) were sent the Multifactor Leadership Questionnaire (MLQ) Self form as a means of evaluating self-perception of leadership behaviours. For each professor, we extracted objective research performance measures (total number of publications, total number of citations and h index) from 1 January 2007 to 31 December 2009. The MLQ measured three leadership outcomes, which included motivation, effectiveness and satisfaction. Regression analysis was used to determine associations. A total number of 90 responses were received, which equated to a 42% response rate. There were no significant correlations between transformational, transactional or passive/avoidant leadership behaviours and any of the research performance measures. The five transformational leadership behaviours (ie, idealised attributes (IA), idealised behaviours (IB), inspirational motivation (IM), intellectual stimulation (IS), individual consideration (IC)) were highly significant predictors of leadership outcomes, extra effort (all B>0.404, SE=0.093-0.146, p<0.001), effectiveness (IA, IM, IS, IC B>0.359, SE=0.093-0.146, p<0.001; IB B=0.233, SE=0.103, p=0.026) and satisfaction (IA, IM, IS, IC B>0.483, SE=0.086-0.139, p<0.001; IB B=0.296, SE=0.101, p=0.004). Similarly, contingent reward was a significant predictor of extra effort (B=0.400, SE=0.123, p=0.002), effectiveness (B=0.353, SE=0.113, p=0.002) and satisfaction (B=0.326, SE=0.114, p=0.005). This study demonstrates that transformational leadership and contingent reward positively influence leadership efficiency in healthcare researchers. Although we did not show
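
    The associations above come from regressing MLQ leadership outcomes on leadership behaviour scores. The sketch below is a hypothetical illustration of that kind of analysis in Python; the column names, simulated data, and use of ordinary least squares are assumptions for demonstration and do not reproduce the study's actual models.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 90  # number of questionnaire respondents reported in the record

    # Hypothetical MLQ subscale scores (0-4 scale) and a simulated outcome with a
    # positive association, loosely mirroring "idealised attributes -> extra effort".
    idealised_attributes = rng.normal(3.0, 0.6, n)
    extra_effort = 1.0 + 0.4 * idealised_attributes + rng.normal(0, 0.5, n)

    df = pd.DataFrame({"idealised_attributes": idealised_attributes,
                       "extra_effort": extra_effort})

    X = sm.add_constant(df[["idealised_attributes"]])
    fit = sm.OLS(df["extra_effort"], X).fit()
    print(fit.summary())  # slope (B), standard error, and p-value for the predictor
    ```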

  16. Evaluation: Two Studies of SCSD Schools. Building Systems Information Clearinghouse Research Report Number Two.

    ERIC Educational Resources Information Center

    Building Systems Information Clearinghouse, Menlo Park, CA.

    The first evaluative study is a survey of user -- student and teacher -- response to the School Construction Systems Development (SCSD) schools and to elements of the SCSD building system. Results of the 3,000 person survey are presented both as comparative findings for the 11 SCSD schools involved and as response profiles for each of the schools.…

  17. Repair, Evaluation, Maintenance, and Rehabilitation Research Program. Lock Accident Study

    DTIC Science & Technology

    1990-09-01

    [Abstract not recoverable from the scanned report documentation page; the legible portion notes only that funding was provided by the Repair, Evaluation, Maintenance, and Rehabilitation Research Program.]

  18. Core Domains for Clinical Research in Acute Respiratory Failure Survivors: An International Modified Delphi Consensus Study.

    PubMed

    Turnbull, Alison E; Sepulveda, Kristin A; Dinglas, Victor D; Chessare, Caroline M; Bingham, Clifton O; Needham, Dale M

    2017-06-01

    To identify the "core domains" (i.e., patient outcomes, health-related conditions, or aspects of health) that relevant stakeholders agree are essential to assess in all clinical research studies evaluating the outcomes of acute respiratory failure survivors after hospital discharge. A two-round consensus process, using a modified Delphi methodology, with participants from 16 countries, including patient and caregiver representatives. Prior to voting, participants were asked to review 1) results from surveys of clinical researchers, acute respiratory failure survivors, and caregivers that rated the importance of 19 preliminary outcome domains and 2) results from a qualitative study of acute respiratory failure survivors' outcomes after hospital discharge, as related to the 19 preliminary outcome domains. Participants also were asked to suggest any additional potential domains for evaluation in the first Delphi survey. Web-based surveys of participants representing four stakeholder groups relevant to clinical research evaluating postdischarge outcomes of acute respiratory failure survivors: clinical researchers, clinicians, patients and caregivers, and U.S. federal research funding organizations. None. None. Survey response rates were 97% and 99% in round 1 and round 2, respectively. There were seven domains that met the a priori consensus criteria to be designated as core domains: physical function, cognition, mental health, survival, pulmonary function, pain, and muscle and/or nerve function. This study generated a consensus-based list of core domains that should be assessed in all clinical research studies evaluating acute respiratory failure survivors after hospital discharge. Identifying appropriate measurement instruments to assess these core domains is an important next step toward developing a set of core outcome measures for this field of research.

  19. Evaluation of the medical student research programme in Norwegian medical schools. A survey of students and supervisors

    PubMed Central

    Hunskaar, Steinar; Breivik, Jarle; Siebke, Maje; Tømmerås, Karin; Figenschau, Kristian; Hansen, John-Bjarne

    2009-01-01

    Background The Medical Student Research Programme is a national education and grant scheme for medical students who wish to carry out research in parallel with their studies. The purpose of the programme is to increase recruitment of people with a standard medical degree to medical research. The Research Programme was established in 2002 and underwent a thorough evaluation during the spring of 2007. The evaluation was to investigate whether the programme had fulfilled its objective of increasing recruitment to medical research, as well as the students' and supervisors' satisfaction with the programme and any unwanted differences between the universities. Methods Data were collected from students, supervisors and administrative staff via web-based questionnaires. Information about admission, implementation, results achieved and satisfaction was analysed and compared between the four Norwegian medical schools. In addition, the position of the scheme in relation to the national Quality Reform of Higher Education was analysed. Results At the end of 2006, the Medical Student Research Programme had recruited 265 medical students to research. These consisted of 214 active students, 35 who had completed their studies and only 17 who had dropped out. Both students and supervisors were generally very satisfied with the scheme, including the curriculum, the results achieved and the administrative service. The majority of students wanted to continue their research towards a PhD and, of those who had completed the Medical Student Research Programme, practically all had published one or several scientific papers. The survey showed only small differences between the four medical schools, despite their choice of somewhat different solutions in terms of administration and organisation. The Medical Student Research Programme satisfies the majority of the demands of the Quality Reform; however, as an integrated research programme aimed at a PhD, it presupposes access to PhD courses before the

  20. Incidental findings in imaging research: evaluating incidence, benefit, and burden.

    PubMed

    Orme, Nicholas M; Fletcher, Joel G; Siddiki, Hassan A; Harmsen, W Scott; O'Byrne, Megan M; Port, John D; Tremaine, William J; Pitot, Henry C; McFarland, Elizabeth G; Robinson, Marguerite E; Koenig, Barbara A; King, Bernard F; Wolf, Susan M

    2010-09-27

    Little information exists concerning the frequency and medical significance of incidental findings (IFs) in imaging research. Medical records of research participants undergoing a research imaging examination interpreted by a radiologist during January through March 2004 were reviewed, with 3-year clinical follow-up. An expert panel reviewed all IFs generating clinical action to determine medical benefit/burden on the basis of predefined criteria. The frequency of IFs that generated further clinical action was estimated by modality, body part, age, and sex, along with net medical benefit or burden. Of 1426 research imaging examinations, 567 (39.8%) had at least 1 IF (1055 total). Risk of an IF increased significantly by age (odds ratio [OR], 1.5; 95% confidence interval, 1.4-1.7 per decade increase). Abdominopelvic computed tomography generated more IFs than other examinations (OR, 18.9 vs ultrasonography; 9.2% with subsequent clinical action), with computed tomography of the thorax and magnetic resonance imaging of the head next (OR, 11.9 and 5.9; 2.8% and 2.2% with action, respectively). Of the 567 examinations with an IF, 35 (6.2%) generated clinical action, resulting in clear medical benefit in 1.1% (6 of 567) and clear medical burden in 0.5% (3 of 567). Medical benefit/burden was usually unclear (26 of 567 [4.6%]). Frequency of IFs in imaging research examinations varies significantly by imaging modality, body region, and age. Research imaging studies at high risk for generating IFs can be identified. Routine evaluation of research images by radiologists may result in identification of IFs in a high number of cases and subsequent clinical action to address them in a small but significant minority. Such clinical action can result in medical benefit to a small number of patients.
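
    The age effect above is reported as an odds ratio per decade, which suggests a logistic regression of incidental-finding presence on age. The sketch below shows, with entirely hypothetical data, how such an odds ratio and confidence interval might be estimated; it is not the study's analysis.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1426                               # number of examinations in the record
    age_decades = rng.uniform(2, 9, n)     # hypothetical participant ages, in decades

    # Simulate incidental-finding presence with a positive age effect
    # (true log-odds increase of about 0.4 per decade).
    logit_p = -2.5 + 0.4 * age_decades
    has_if = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    X = sm.add_constant(pd.DataFrame({"age_decades": age_decades}))
    fit = sm.Logit(has_if, X).fit(disp=False)

    or_per_decade = np.exp(fit.params["age_decades"])
    ci_low, ci_high = np.exp(fit.conf_int().loc["age_decades"])
    print(f"OR per decade ≈ {or_per_decade:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
    ```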

  1. Study design and "evidence" in patient-oriented research.

    PubMed

    Concato, John

    2013-06-01

    Individual studies in patient-oriented research, whether described as "comparative effectiveness" or using other terms, are based on underlying methodological designs. A simple taxonomy of study designs includes randomized controlled trials on the one hand, and observational studies (such as case series, cohort studies, and case-control studies) on the other. A rigid hierarchy of these design types is a fairly recent phenomenon, promoted as a tenet of "evidence-based medicine," with randomized controlled trials receiving gold-standard status in terms of producing valid results. Although randomized trials have many strengths, and contribute substantially to the evidence base in clinical care, making presumptions about the quality of a study based solely on category of research design is unscientific. Both the limitations of randomized trials as well as the strengths of observational studies tend to be overlooked when a priori assumptions are made. This essay presents an argument in support of a more balanced approach to evaluating evidence, and discusses representative examples from the general medical as well as pulmonary and critical care literature. The simultaneous consideration of validity (whether results are correct "internally") and generalizability (how well results apply to "external" populations) is warranted in assessing whether a study's results are accurate for patients likely to receive the intervention-examining the intersection of clinical and methodological issues in what can be called a medicine-based evidence approach. Examination of cause-effect associations in patient-oriented research should recognize both the strengths and limitations of randomized trials as well as observational studies.

  2. Research and Evaluations of the Health Aspects of Disasters, Part VI: Interventional Research and the Disaster Logic Model.

    PubMed

    Birnbaum, Marvin L; Daily, Elaine K; O'Rourke, Ann P; Kushner, Jennifer

    2016-04-01

    Disaster-related interventions are actions or responses undertaken during any phase of a disaster to change the current status of an affected community or a Societal System. Interventional disaster research aims to evaluate the results of such interventions in order to develop standards and best practices in Disaster Health that can be applied to disaster risk reduction. Considering interventions as production functions (transformation processes) structures the analyses and cataloguing of interventions/responses that are implemented prior to, during, or following a disaster or other emergency. Since currently it is not possible to conduct randomized, controlled studies of disasters, in order to validate the derived standards and best practices, the results of the studies must be compared and synthesized with results from other studies (ie, systematic reviews). Such reviews will be facilitated by the selected studies being structured using accepted frameworks. A logic model is a graphic representation of the transformation processes of a program [project] that shows the intended relationships between investments and results. Logic models are used to describe a program and its theory of change, and they provide a method for analyzing and evaluating interventions. The Disaster Logic Model (DLM) is an adaptation of a logic model used for the evaluation of educational programs and provides the structure required for the analysis of disaster-related interventions. It incorporates: a definition of the current functional status of a community or Societal System; identification of needs; definition of goals; selection of objectives; implementation of the intervention(s); and evaluation of the effects, outcomes, costs, and impacts of the interventions. It is useful for determining the value of an intervention, and it also provides the structure for analyzing the processes used in providing the intervention according to the Relief/Recovery and Risk-Reduction Frameworks.

  3. A methodological review of qualitative case study methodology in midwifery research.

    PubMed

    Atchan, Marjorie; Davis, Deborah; Foureur, Maralyn

    2016-10-01

    To explore the use and application of case study research in midwifery. Case study research provides rich data for the analysis of complex issues and interventions in the healthcare disciplines; however, a gap in the midwifery research literature was identified. A methodological review of midwifery case study research using recognized templates, frameworks and reporting guidelines facilitated comprehensive analysis. An electronic database search was conducted for the date range January 2005-December 2014 across: Maternal and Infant Care, CINAHL Plus, Academic Search Complete, Web of Knowledge, SCOPUS, Medline, Health Collection (Informit), Cochrane Library, Health Source: Nursing/Academic Edition, Wiley online and ProQuest Central. Narrative evaluation was undertaken. Clearly worded questions reflected the problem and purpose. The application, strengths and limitations of case study methods were identified through a quality appraisal process. The review identified both case study research's applicability to midwifery and its low uptake, especially in clinical studies. Many papers included the necessary criteria to achieve rigour. The included measures of authenticity and methodology were varied. A high standard of authenticity was observed, suggesting authors considered these elements to be routine inclusions. Technical aspects were lacking in many papers, namely a lack of reflexivity and incomplete transparency of processes. This review raises the profile of case study research in midwifery. Midwives will be encouraged to explore whether case study research is suitable for their investigation. The raised profile will demonstrate further applicability and encourage support and wider adoption in the midwifery setting. © 2016 John Wiley & Sons Ltd.

  4. Study on index system of GPS interference effect evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Zeng, Fangling; Zhao, Yuan; Zeng, Ruiqi

    2018-05-01

    Evaluation of satellite navigation interference effects is a key technology in research on navigation countermeasures. To evaluate accurately the degree of interference and the anti-jamming ability of a GPS receiver, this paper builds on existing research results on navigation interference effect evaluation to construct an index system for evaluating GPS receiver effectiveness at four levels (signal acquisition, tracking, demodulation, and positioning/timing) and establishes a model for each index. These indices can describe the interference effect at each level accurately and quantitatively.
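
    The record builds a four-level index system but does not give the individual index models. Purely as a hypothetical illustration of how such a hierarchical index system might be aggregated into one effectiveness score, the sketch below uses a weighted combination of normalised sub-indices; the level weights, sub-index names, and linear aggregation are all assumptions.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Level:
        name: str
        weight: float            # relative importance of this evaluation level
        indices: dict = field(default_factory=dict)  # sub-index -> score in [0, 1]

        def score(self) -> float:
            # Simple mean of normalised sub-indices; a real system would apply the
            # per-index models established for acquisition, tracking, etc.
            return sum(self.indices.values()) / len(self.indices)

    levels = [
        Level("acquisition", 0.25, {"acquisition_probability": 0.90, "mean_acquisition_time": 0.70}),
        Level("tracking", 0.25, {"carrier_lock_ratio": 0.80, "code_tracking_error": 0.75}),
        Level("demodulation", 0.20, {"bit_error_rate": 0.85}),
        Level("positioning_timing", 0.30, {"position_error": 0.60, "timing_error": 0.70}),
    ]

    total_weight = sum(l.weight for l in levels)
    effectiveness = sum(l.weight * l.score() for l in levels) / total_weight
    print(f"overall receiver effectiveness under interference ≈ {effectiveness:.2f}")
    ```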

  5. Challenges in studying the effects of scientific societies on research integrity.

    PubMed

    Levine, Felice J; Iutcovich, Joyce M

    2003-04-01

    Beyond impressionistic observations, little is known about the role and influence of scientific societies on research conduct. Acknowledging that the influence of scientific societies is not easily disentangled from other factors that shape norms and practices, this article addresses how best to study the promotion of research integrity generally as well as the role and impact of scientific societies as part of that process. In setting forth the parameters of a research agenda, the article addresses four issues: (1) how to conceptualize research on scientific societies and research integrity; (2) challenges and complexities in undertaking basic research; (3) strategies for undertaking basic research that is attentive to individual, situational, organizational, and environmental levels of analysis; and (4) the need for evaluation research as integral to programmatic change and to assessment of the impact of activities by scientific societies.

  6. A Participatory Action Research Pilot Study of Urban Health Disparities Using Rapid Assessment Response and Evaluation

    PubMed Central

    Brown, David Richard; Hernández, Agueda; Saint-Jean, Gilbert; Evans, Siân; Tafari, Ida; Brewster, Luther G.; Celestin, Michel J.; Gómez-Estefan, Carlos; Regalado, Fernando; Akal, Siri; Nierenberg, Barry; Kauschinger, Elaine D.; Schwartz, Robert; Page, J. Bryan

    2008-01-01

    Healthy People 2010 made it a priority to eliminate health disparities. We used a rapid assessment response and evaluation (RARE) to launch a program of participatory action research focused on health disparities in an urban, disadvantaged Black community serviced by a major south Florida health center. We formed partnerships with community members, identified local health disparities, and guided interventions targeting health disparities. We describe the RARE structure used to triangulate data sources and guide intervention plans as well as findings and conclusions drawn from scientific literature and epidemiological, historic, planning, clinical, and ethnographic data. Disenfranchisement and socioeconomic deprivation emerged as the principal determinants of local health disparities and the most appropriate targets for intervention. PMID:18048802

  7. Case studies within a mixed methods paradigm: toward a resolution of the alienation between researcher and practitioner in psychotherapy research.

    PubMed

    Dattilio, Frank M; Edwards, David J A; Fishman, Daniel B

    2010-12-01

    This article addresses the long-standing divide between researchers and practitioners in the field of psychotherapy, regarding what really works in treatment and the extent to which interventions should be governed by outcomes generated in a "laboratory atmosphere." This alienation has its roots in a positivist paradigm, which is epistemologically incomplete because it fails to provide for context-based practical knowledge. In other fields of evaluation research, it has been superseded by a mixed methods paradigm, which embraces pragmatism and multiplicity. On the basis of this paradigm, we propose and illustrate new scientific standards for research on the evaluation of psychotherapeutic treatments. These include the requirement that projects should comprise several parallel studies that involve randomized controlled trials, qualitative examinations of the implementation of treatment programs, and systematic case studies. The uniqueness of this article is that it contributes a guideline for involving a set of complementary publications, including a review that offers an overall synthesis of the findings from different methodological approaches. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  8. Being an Insider Researcher while Conducting Case Study Research

    ERIC Educational Resources Information Center

    Unluer, Sema

    2012-01-01

    It is crucial for social researchers to clarify their roles as researchers, especially when utilizing qualitative methodology, in order to make their research credible. The purpose of this paper is to examine the advantages and disadvantages of the insider role the researcher, an instructor, occupied within case study research on the integration of…

  9. Farmers' Attitude towards a Participatory Research Method Used to Evaluate Weed Management Strategies in Bananas

    ERIC Educational Resources Information Center

    Ganpat, Wayne G.; Isaac, Wendy-Ann P.; Brathwaite, Richard A. I.; Bekele, Isaac

    2009-01-01

    In this study, farmers were engaged in a participatory research project and their attitudes evaluated. The purpose was to identify the characteristics of farmers who are favourably predisposed towards meaningful participation in the process. Several cover crops were tested for possible use in the management of watergrass ("Commelina…

  10. Tackling the tensions in evaluating capacity strengthening for health research in low- and middle-income countries

    PubMed Central

    Bates, Imelda; Boyd, Alan; Aslanyan, Garry; Cole, Donald C

    2015-01-01

    Strengthening research capacity in low- and middle-income countries is one of the most effective ways of advancing their health and development, but the complexity and heterogeneity of health research capacity strengthening (RCS) initiatives mean it is difficult to evaluate their effectiveness. Our study aimed to enhance understanding of these difficulties and to make recommendations about how to make health RCS evaluations more effective. Through discussions and surveys of health RCS funders, including the ESSENCE on Health Research initiative, we identified themes that were important to health RCS funders and used these to guide a systematic analysis of their evaluation reports. Eighteen reports, produced between 2000 and 2013 and representing 12 evaluations, were purposively selected from 54 reports provided by the funders to provide maximum variety. Text from the reports was extracted independently by two authors against a pre-designed framework. Information about the health RCS approaches, tensions and suggested solutions was re-constructed into a narrative. Throughout the process, contacts in the health RCS funder agencies were involved in helping us to validate and interpret our results. The focus of the health RCS evaluations ranged from individuals and institutions to national, regional and global levels. Our analysis identified tensions around how much stakeholders should participate in an evaluation, the appropriate balance between measuring and learning, and the balance between a focus on short-term processes versus longer-term impact and sustainability. Suggested solutions to these tensions included early and ongoing stakeholder engagement in planning and evaluating health RCS, modelling of impact pathways, and rapid assimilation of lessons learned for continuous improvement of decision making and programming. The use of developmental approaches could improve health RCS evaluations by addressing common tensions and promoting sustainability. Sharing learning about how to

  11. Appraising the quality of medical education research methods: the Medical Education Research Study Quality Instrument and the Newcastle-Ottawa Scale-Education.

    PubMed

    Cook, David A; Reed, Darcy A

    2015-08-01

    The Medical Education Research Study Quality Instrument (MERSQI) and the Newcastle-Ottawa Scale-Education (NOS-E) were developed to appraise methodological quality in medical education research. The study objective was to evaluate the interrater reliability, normative scores, and between-instrument correlation for these two instruments. In 2014, the authors searched PubMed and Google for articles using the MERSQI or NOS-E. They obtained or extracted data for interrater reliability-using the intraclass correlation coefficient (ICC)-and normative scores. They calculated between-scale correlation using Spearman rho. Each instrument contains items concerning sampling, controlling for confounders, and integrity of outcomes. Interrater reliability for overall scores ranged from 0.68 to 0.95. Interrater reliability was "substantial" or better (ICC > 0.60) for nearly all domain-specific items on both instruments. Most instances of low interrater reliability were associated with restriction of range, and raw agreement was usually good. Across 26 studies evaluating published research, the median overall MERSQI score was 11.3 (range 8.9-15.1, of a possible 18). Across six studies, the median overall NOS-E score was 3.22 (range 2.08-3.82, of a possible 6). Overall MERSQI and NOS-E scores correlated reasonably well (rho 0.49-0.72). The MERSQI and NOS-E are useful, reliable, complementary tools for appraising methodological quality of medical education research. Interpretation and use of their scores should focus on item-specific codes rather than overall scores. Normative scores should be used for relative rather than absolute judgments because different research questions require different study designs.
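
    The statistics this record reports, an intraclass correlation coefficient for interrater reliability and Spearman rho for the between-instrument association, can be illustrated with a short computation. The Python sketch below is a minimal example under invented data: two hypothetical raters score six hypothetical studies, ICC(2,1) is computed from a standard two-way ANOVA decomposition, and scipy's spearmanr gives the between-instrument correlation. The icc_2_1 helper and all scores are illustrative assumptions, not data from the study.

        import numpy as np
        from scipy import stats

        def icc_2_1(ratings):
            """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).
            ratings: array of shape (n_targets, k_raters)."""
            ratings = np.asarray(ratings, dtype=float)
            n, k = ratings.shape
            grand_mean = ratings.mean()
            # Sums of squares for targets (rows), raters (columns), and error
            ss_rows = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()
            ss_cols = n * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()
            ss_error = ((ratings - grand_mean) ** 2).sum() - ss_rows - ss_cols
            ms_rows = ss_rows / (n - 1)
            ms_cols = ss_cols / (k - 1)
            ms_error = ss_error / ((n - 1) * (k - 1))
            return (ms_rows - ms_error) / (
                ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
            )

        # Hypothetical overall MERSQI scores from two raters for six studies
        mersqi_raters = np.array([
            [11.5, 12.0],
            [ 9.0,  9.5],
            [14.5, 14.0],
            [12.0, 12.5],
            [10.5, 10.0],
            [13.0, 13.5],
        ])
        print("Interrater ICC(2,1):", round(icc_2_1(mersqi_raters), 2))

        # Hypothetical consensus scores on both instruments for the same studies
        mersqi = mersqi_raters.mean(axis=1)
        nos_e = np.array([3.4, 2.6, 3.9, 3.1, 2.5, 3.6])
        rho, p_value = stats.spearmanr(mersqi, nos_e)
        print(f"Between-instrument Spearman rho: {rho:.2f} (p = {p_value:.3f})")

    In practice a dedicated statistics package could replace the hand-rolled ICC helper; the point of the sketch is only to show how the reported reliability and correlation figures are derived from rater-by-study score matrices.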

  12. Using an Evaluability Assessment To Select Methods for Evaluating State Technology Development Programs: The Case of the Georgia Research Alliance.

    ERIC Educational Resources Information Center

    Youtie, Jan; Bozeman, Barry; Shapira, Philip

    1999-01-01

    Describes an evaluability assessment of the Georgia Research Alliance (GRA), a technology development program. Presents the steps involved in conducting an evaluability assessment, including development of an understanding of the program and its stakeholders. Analyzes and compares different methods by which the GRA could be evaluated. (SLD)

  13. Primary Care Research Team Assessment (PCRTA): development and evaluation.

    PubMed

    Carter, Yvonne H; Shaw, Sara; Macfarlane, Fraser

    2002-02-01

    The Royal College of General Practitioners (RCGP) developed a scheme to accredit UK general practices undertaking primary care R&D. The pilot began with initial consultation on the development of the process, as well as the standards and criteria for assessment. The resulting assessment schedule allowed for assessment at one of two levels: Collaborative Research Practice (Level I), with little direct experience of gaining project or infrastructure funding; and Established Research Practice (Level II), with more experience of research funding and activity and a sound infrastructure to allow for growth in capacity. The process for assessment of practices involved the assessment of written documentation, followed by a half-day assessment visit by a multidisciplinary team of three assessors. IMPLEMENTATION -- THE PILOT PROJECT: Pilot practices were sampled in two regions. Firstly, in the NHS Executive South West Region, over 150 practices expressed an interest in participating; from these, a purposive sample of 21 practices was selected, providing a range of research and service activity. A further seven practices were identified and included within the project through the East London and Essex Network of Researchers (ELENoR). Many in this latter group received funding, administrative support, and advice from ELENoR in order to prepare written submissions for assessment. Some sample loss was encountered within the pilot project, attributable largely to conflicting demands on participants' time. Indeed, the preparation of written submissions within the South West coincided with the introduction of primary care groups (PCGs) in April 1999, which several practices cited as having a major impact on their participation in the pilot project. A final sample of 15 practices (nine in the South West and six through ELENoR) underwent assessment through the pilot project. A formal evaluation of the Primary Care Research Team Assessment (PCRTA) pilot was undertaken by an independent

  14. Communicating Qualitative Research Study Designs to Research Ethics Review Boards

    ERIC Educational Resources Information Center

    Ells, Carolyn

    2011-01-01

    Researchers using qualitative methodologies appear to be particularly prone to having their study designs called into question by research ethics or funding agency review committees. In this paper, the author considers the issue of communicating qualitative research study designs in the context of institutional research ethics review and offers…

  15. The implementation and evaluation of teacher training in gaming instruction for secondary science: An action research project

    NASA Astrophysics Data System (ADS)

    Sanders, Veronica

    This study implemented and evaluated gaming instruction as professional development for science teachers at a Georgia high school. It was guided by four research questions that (a) assessed the impact of training in gaming instruction and evaluation of that training on science teachers' ability to use games; (b) examined evidence showing that science teachers used games; (c) assessed the impact of the implementation and subsequent evaluation of games-based training on how science teachers instruct their students; and (d) explored the use of change management principles to help teachers transition from traditional to gaming instruction. The study included a purposive sample of 10 volunteer science teachers who received professional development training in gaming instruction and were observed as they used games to instruct their students. Quantitative data were collected from interviews, observations, and reviews of student assignments and teacher plans, and were statistically analyzed to answer the research questions. These same methods were used to obtain qualitative data, which were also analyzed to answer the research questions as well as to understand the meanings, beliefs and experiences behind the numbers. Ultimately, data analysis revealed that the science teachers used gaming instruction, that the training helped them do so, and that they considered gaming instruction a viable instructional methodology. The analysis also showed that change management principles were successfully applied in the study.

  16. Global informetric perspective studies on translational medical research

    PubMed Central

    2013-01-01

    Background: Translational medical research literature has increased rapidly in the last few decades and has played an increasingly important role in the development of medical science. The main aim of this study is to evaluate the global performance of translational medical research during the past few decades. Methods: Bibliometric, social network analysis, and visualization technologies were used to analyze translational medical research performance from the aspects of subject categories, journals, countries, institutes, keywords, and MeSH terms. Meanwhile, co-author, co-word, and cluster analysis methods were also used to trace popular topics in work related to translational medical research. Results: Research output suggested solid development in translational medical research, in terms of increasing scientific production and research collaboration. We identified the core journals, mainstream subject categories, leading countries, and institutions in translational medical research. There was an uneven distribution of publications at authorial, institutional, and national levels. The most commonly used keywords that appeared in the articles were “translational research”, “translational medicine”, “biomarkers”, “stroke”, “inflammation”, “cancer”, and “breast cancer”. Conclusions: The subject categories of “Research & Experimental Medicine”, “Medical Laboratory Technology”, and “General & Internal Medicine” play a key role in translational medical research, both in production and in its networks. Translational medical research and CTS, among others, are core journals of translational research. G7 countries are the leading nations for translational medical research. Some developing countries, such as P.R. China, also play an important role in the communication of translational research. The USA and its institutions play a dominant role in production, collaboration, citations, and high-quality articles. The research trends in
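
    The co-word analysis this record describes amounts to counting how often pairs of keywords appear together on the same article and then clustering or mapping the resulting co-occurrence network. The Python sketch below is a minimal illustration under an invented handful of keyword lists (not the study's bibliographic data); the article lists and keyword strings are assumptions for demonstration only.

        from collections import Counter
        from itertools import combinations

        # Invented keyword lists, one per article, standing in for bibliographic records
        articles = [
            ["translational research", "biomarkers", "cancer"],
            ["translational medicine", "stroke", "inflammation"],
            ["translational research", "biomarkers", "breast cancer"],
            ["translational research", "inflammation", "cancer"],
        ]

        keyword_counts = Counter()
        cooccurrence = Counter()
        for keywords in articles:
            unique = sorted(set(k.lower() for k in keywords))
            keyword_counts.update(unique)
            # Each unordered pair of keywords on the same article counts once
            cooccurrence.update(combinations(unique, 2))

        print("Most frequent keywords:", keyword_counts.most_common(3))
        print("Strongest co-word links:", cooccurrence.most_common(3))

    In a full bibliometric workflow, co-occurrence counts like these would feed a clustering or network-visualization step (for example, a co-word map of MeSH terms), which is what the authors describe at a much larger scale.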

  17. Evaluating QR Code Case Studies Using a Mobile Learning Framework

    ERIC Educational Resources Information Center

    Rikala, Jenni

    2014-01-01

    The aim of this study was to evaluate the feasibility of Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The feasibility was analyzed through a mobile learning framework, which includes the core characteristics of mobile learning. The study is part of a larger research where the aim is to develop a…

  18. Cutting through the noise: an evaluative framework for research communication

    NASA Astrophysics Data System (ADS)

    Strickert, G. E.; Bradford, L. E.; Shantz, S.; Steelman, T.; Orozs, C.; Rose, I.

    2017-12-01

    With an ever-increasing amount of research, there is a parallel challenge to mobilize that research for decision making, policy development, and management actions. The traditional "loading dock" model of moving science to policy is under renovation, replaced by more engaging methods of research communication. Research communication falls on a continuum from passive methods (e.g. reports, social media, infographics) to more active methods (e.g. forum theatre, decision labs, stakeholder planning, and mixed-media installations that blend art, science, and traditional knowledge). Drawing on a five-year water science research program in the Saskatchewan River Basin, an evaluation framework is presented that draws on a wide community of knowledge users, including First Nations and Metis, community organizers, farmers, consultants, researchers, and civil servants. A mixed-methods framework consisting of quantitative surveys, qualitative interviews, focus groups, and q-sorts demonstrates that participants prefer more active means of research communication to draw them into the research, but they also value more traditional and passive methods to provide more in-depth information when needed.

  19. Behavioral Response Research Evaluation Workshop (BRREW)

    DTIC Science & Technology

    2015-09-30

    future research directions, focusing on controlled exposure experiments (captive and free-ranging animals) and observational studies. OBJECTIVES...in key areas including controlled exposure experiments (captive and free-ranging animals) and observational studies on real Navy exercises; 2...include response to simulated sources of Navy sonar (BRS and captive studies), response to real Navy sources (BRS studies, M3R), incidental response

  20. Principles of Research Tissue Banking and Specimen Evaluation from the Pathologist's Perspective.

    PubMed

    McDonald, Sandra A

    2010-12-01

    Human tissue biorepositories have an increasingly visible and important role within industrial enterprises in supporting biomedical research, including the rapidly advancing fields of proteomics, pharmacogenomics, and molecular epidemiology. Pathologists play a vital but often underrecognized role in the operation of these tissue banks. Besides interpreting studies that arise from banked samples, pathologists are needed to characterize tissues for research, to conduct quality assurance programs, to assist with resource allocation decisions, and to serve an educational role for investigators using the tissues. This article describes these key principles and illustrates examples where pathologist involvement is crucial to biorepository management. Of overarching importance, pathologists play a critical role in helping biorepository users understand the principles of specimen evaluation (histologic and structural composition of tissues, and their limitations) so as to optimize the scientific benefit of the tissues. In conclusion, greater involvement of pathologists in research tissue banking will enhance the scientific utility of biorepositories.