Fastre, Greet Mia Jos; van der Klink, Marcel R.; van Merriënboer, Jeroen J. G.
2010-01-01
This study investigated the effect of performance-based versus competence-based assessment criteria on task performance and self-assessment skills among 39 novice secondary vocational education students in the domain of nursing and care. In a performance-based assessment group students are provided with a preset list of performance-based assessment criteria, describing what students should do, for the task at hand. The performance-based group is compared to a competence-based assessment group in which students receive a preset list of competence-based assessment criteria, describing what students should be able to do. The test phase revealed that the performance-based group outperformed the competence-based group on test task performance. In addition, higher performance of the performance-based group was reached with lower reported mental effort during training, indicating a higher instructional efficiency for novice students. PMID:20054648
Performance Assessment as a Diagnostic Tool for Science Teachers
NASA Astrophysics Data System (ADS)
Kruit, Patricia; Oostdam, Ron; van den Berg, Ed; Schuitema, Jaap
2018-04-01
Information on students' development of science skills is essential for teachers to evaluate and improve their own education, as well as to provide adequate support and feedback to the learning process of individual students. The present study explores and discusses the use of performance assessments as a diagnostic tool for formative assessment to inform teachers and guide instruction of science skills in primary education. Three performance assessments were administered to more than 400 students in grades 5 and 6 of primary education. Students performed small experiments using real materials while following the different steps of the empirical cycle. The mutual relationship between the three performance assessments is examined to provide evidence for the value of performance assessments as useful tools for formative evaluation. Differences in response patterns are discussed, and the diagnostic value of performance assessments is illustrated with examples of individual student performances. Findings show that the performance assessments were difficult for grades 5 and 6 students but that much individual variation exists regarding the different steps of the empirical cycle. Evaluation of scores as well as a more substantive analysis of students' responses provided insight into typical errors that students make. It is concluded that performance assessments can be used as a diagnostic tool for monitoring students' skill performance as well as to support teachers in evaluating and improving their science lessons.
ERIC Educational Resources Information Center
Fastre, Greet Mia Jos; van der Klink, Marcel R.; van Merrienboer, Jeroen J. G.
2010-01-01
This study investigated the effect of performance-based versus competence-based assessment criteria on task performance and self-assessment skills among 39 novice secondary vocational education students in the domain of nursing and care. In a performance-based assessment group students are provided with a preset list of performance-based…
Improving Student Performance through Computer-Based Assessment: Insights from Recent Research.
ERIC Educational Resources Information Center
Ricketts, C.; Wilks, S. J.
2002-01-01
Compared student performance on computer-based assessment to machine-graded multiple choice tests. Found that performance improved dramatically on the computer-based assessment when students were not required to scroll through the question paper. Concluded that students may be disadvantaged by the introduction of online assessment unless care is…
Assessing students' performance in software requirements engineering education using scoring rubrics
NASA Astrophysics Data System (ADS)
Mkpojiogu, Emmanuel O. C.; Hussain, Azham
2017-10-01
The study investigates how helpful the use of scoring rubrics is in the performance assessment of software requirements engineering students, and whether its use can lead to improvement in students' performance in developing software requirements artifacts and models. Scoring rubrics were used by two instructors to assess the cognitive performance of a student in the design and development of software requirements artifacts. The study results indicate that the use of scoring rubrics is very helpful in objectively assessing the performance of software requirements or software engineering students. Furthermore, the results revealed that the use of scoring rubrics can also give a clear direction of achievement, showing whether or not a student is improving across repeated or iterative assessments. In a nutshell, its use leads to improvement in students' performance. The results provided some insights for further investigation and will be beneficial to researchers, requirements engineers, system designers, developers and project managers.
[Teaching performance assessment in Public Health employing three different strategies].
Martínez-González, Adrián; Moreno-Altamirano, Laura; Ponce-Rosas, Efrén Raúl; Martínez-Franco, Adrián Israel; Urrutia-Aguilar, María Esther
2011-01-01
The educational system depends upon the quality and performance of its faculty and should therefore be subject to a process of continuous improvement. The objective was to assess the teaching performance of the Public Health professors at the Faculty of Medicine, UNAM, through three strategies. Justification study. The evaluation was conducted under a mediational model through three strategies: students' opinion assessment, self-assessment and students' academic achievement. We applied descriptive statistics, Student's t test, ANOVA and Pearson correlation. Twenty professors from the Public Health department were evaluated, representing 57% of those who teach the subject. Professors rated their own performance higher in the self-assessment than students did in the opinion assessment; statistical analysis confirmed that this difference was significant. The difference amongst the three evaluation strategies became most evident between self-assessment and the scores obtained by students in their academic achievement. The integration of these three strategies offers a more complete view of the quality of teaching performance. Academic achievement appears to be a more objective strategy for teaching performance assessment than students' opinion and self-assessment.
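For readers who want to see the kind of comparison described above in code, a minimal sketch follows. It is not the authors' analysis; the score arrays are invented placeholders. It compares self-assessment ratings with student-opinion ratings for the same professors using a paired Student t test and a Pearson correlation.

    # Hypothetical sketch: comparing professor self-assessment ratings with
    # students' opinion ratings of the same professors. Values are placeholders.
    import numpy as np
    from scipy import stats

    self_assessment = np.array([9.2, 8.8, 9.5, 8.9, 9.1, 9.4, 8.7])  # professors' self-ratings (0-10)
    student_opinion = np.array([8.1, 7.9, 8.6, 8.0, 8.3, 8.5, 7.8])  # students' ratings of the same professors

    t_stat, p_val = stats.ttest_rel(self_assessment, student_opinion)  # paired Student t test
    r, r_p = stats.pearsonr(self_assessment, student_opinion)          # Pearson correlation

    print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")
    print(f"Pearson r = {r:.2f}, p = {r_p:.4f}")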
ERIC Educational Resources Information Center
National Center on Educational Outcomes, 2012
2012-01-01
The performance of special education students on state assessments has been the subject of much discussion and concern. A common belief is that all special education students perform poorly on state assessments. There are many misperceptions about the performance of students with disabilities. It is important for the Race-to-the-Top Assessment…
Practical session assessments in human anatomy: Weightings and performance.
McDonald, Aaron C; Chan, Siew-Pang; Schuijers, Johannes A
2016-07-08
Assessment weighting within a given module can be a motivating factor for students when deciding on their commitment level and time given to study a specific topic. In this study, an analysis of assessment performances of second year anatomy students was performed over four years to determine if (1) students performed better when a higher weighting was given to a set of practical session assessments and (2) whether an improved performance in the practical session assessments had a carry-over effect on other assessment tasks within that anatomy module and/or other anatomy modules that follow. Results showed that increasing the weighting of practical session assessments improved the average mark in that assessment and also improved the percentage of students passing that assessment. Further, it significantly improved performance in the written end-semester examination within the same module and had a carry-over effect on the anatomy module taught in the next teaching period, as students performed better in subsequent practical session assessments as well as subsequent end-semester examinations. It was concluded that the weighting of assessments had significant influences on a student's performance in that, and subsequent, assessments. It is postulated that practical session assessments, designed to develop deep learning skills in anatomy, improved efficacy in student performance in assessments undertaken in that and subsequent anatomy modules when the weighting of these assessments was greater. These deep learning skills were also transferable to other methods of assessing anatomy. Anat Sci Educ 9: 330-336. © 2015 American Association of Anatomists.
Hernick, Marcy
2015-09-25
Objective. To develop a series of active-learning modules that would improve pharmacy students' performance on summative assessments. Design. A series of optional online active-learning modules containing questions with multiple formats for topics in a first-year (P1) course was created using a test-enhanced learning approach. A subset of module questions was modified and included on summative assessments. Assessment. Student performance on module questions improved with repeated attempts and was predictive of student performance on summative assessments. Performance on examination questions was higher for students with access to modules than for those without access to modules. Module use appeared to have the most impact on low performing students. Conclusion. Test-enhanced learning modules with immediate feedback provide pharmacy students with a learning tool that improves student performance on summative assessments and also may improve metacognitive and test-taking skills.
The Effect of Peer Assessment on Project Performance of Students at Different Learning Levels
ERIC Educational Resources Information Center
Li, Lan; Gao, Fei
2016-01-01
Peer assessment has been increasingly integrated in educational settings as a strategy to foster student learning. Yet little has been studied about how students at different learning levels may benefit from peer assessment. This study examined how peer-assessment and students' learning levels influenced students' project performance using a…
Prior academic background and student performance in assessment in a graduate entry programme.
Craig, P L; Gordon, J J; Clark, R M; Langendyk, V
2004-11-01
This study aims to identify whether non-science graduates perform as well as science graduates in Basic and Clinical Sciences (B & CS) assessments during Years 1-3 of a four-year graduate-entry programme at the University of Sydney (the 'USydMP'). Students were grouped into five categories: Health Professions (HP), Biomedical Sciences (BMS), Other Biology (BIOL), Physical Sciences (PHYS) or Non-Science (NONS). We examined the performance rank of students in each of the five groups for single best answer (SBA) and modified essay (MEQ) assessments separately, and also calculated the relative risk of failure in the summative assessments in Years 2 and 3. Students with science-based prior degrees performed better in the SBA assessments. The same occurred initially in the MEQs, but the effect diminished with time. The HP students performed consistently better but converged with other groups over time, particularly in the MEQs. Relative performance by the NONS students improved with time in both assessment formats. Overall, differences between the highest and lowest groups were small and very few students failed to meet the overall standard for the summative assessments. HP and BMS students had the lowest failure rate. NONS students were more likely to fail the assessments in Years 2 and 3, but their pass rates were still high. Female students performed significantly better overall at the end of Year 2 and in Year 3. There were only minor differences between Australian resident and International students. While there are small differences in performance in B & CS early in the programme, these lessen with time. The study results will inform decisions regarding the timing of summative assessments, selection policy, and the provision of additional support to students who need it to minimize their risk of failure. Readers should note that this paper refers to student performance in only one of the four curriculum themes, where health professional and science graduates would be expected to have a significant advantage.
Student Emotions in Conversation-Based Assessments
ERIC Educational Resources Information Center
Lehman, Blair A.; Zapata-Rivera, Diego
2018-01-01
Students can experience a variety of emotions while completing assessments. Some emotions can get in the way of students performing their best (e.g., anxiety, frustration), whereas other emotions can facilitate student performance (e.g., engagement). Many new, non-traditional assessments, such as automated conversation-based assessments (CBA), are…
ERIC Educational Resources Information Center
Urda, Julie; Ramocki, Stephen P.
2015-01-01
This paper is an empirical field study of whether college students' preferences for assessment type correspond to their performance in assessment that tests that particular strength. For example, if students say they prefer assessment that tests their creativity, do they actually perform better on assessment tasks requiring the use of…
ERIC Educational Resources Information Center
Liu, Xiongyi; Li, Lan
2014-01-01
This study examines the impact of an assessment training module on student assessment skills and task performance in a technology-facilitated peer assessment. Seventy-eight undergraduate students participated in the study. The participants completed an assessment training exercise, prior to engaging in peer-assessment activities. During the…
2015-01-01
Objective. To develop a series of active-learning modules that would improve pharmacy students’ performance on summative assessments. Design. A series of optional online active-learning modules containing questions with multiple formats for topics in a first-year (P1) course was created using a test-enhanced learning approach. A subset of module questions was modified and included on summative assessments. Assessment. Student performance on module questions improved with repeated attempts and was predictive of student performance on summative assessments. Performance on examination questions was higher for students with access to modules than for those without access to modules. Module use appeared to have the most impact on low performing students. Conclusion. Test-enhanced learning modules with immediate feedback provide pharmacy students with a learning tool that improves student performance on summative assessments and also may improve metacognitive and test-taking skills. PMID:27168610
Srinivasan, Malathi; Hauer, Karen E; Der-Martirosian, Claudia; Wilkes, Michael; Gesundheit, Neil
2007-09-01
Achieving competence in 'practice-based learning' implies that doctors can accurately self-assess their clinical skills to identify behaviours that need improvement. This study examines the impact of receiving feedback via performance benchmarks on medical students' self-assessment after a clinical performance examination (CPX). The authors developed a practice-based learning exercise at 3 institutions following a required 8-station CPX for medical students at the end of Year 3. Standardised patients (SPs) scored students after each station using checklists developed by experts. Students assessed their own performance immediately after the CPX (Phase 1). One month later, students watched their videotaped performance and reassessed (Phase 2). Some students received performance benchmarks (their scores, plus normative class data) before the video review. Pearson's correlations between self-ratings and SP ratings were calculated for overall performance and specific skill areas (history taking, physical examination, doctor-patient communication) for Phase 1 and Phase 2. The 2 correlations were then compared for each student group (i.e. those who received and those who did not receive feedback). A total of 280 students completed both study phases. Mean CPX scores ranged from 51% to 71% of items correct overall and for each skill area. Phase 1 self-assessment correlated weakly with SP ratings of student performance (r = 0.01-0.16). Without feedback, Phase 2 correlations remained weak (r = 0.13-0.18; n = 109). With feedback, Phase 2 correlations improved significantly (r = 0.26-0.47; n = 171). Low-performing students showed the greatest improvement after receiving feedback. The accuracy of student self-assessment was poor after a CPX, but improved significantly with performance feedback (scores and benchmarks). Videotape review alone (without feedback) did not improve self-assessment accuracy. Practice-based learning exercises that incorporate feedback to medical students hold promise to improve self-assessment skills.
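A minimal sketch of the analysis pattern reported in this abstract is shown below; it is not the authors' code, and the score arrays are fabricated placeholders. It computes the Pearson correlation between self-ratings and standardized-patient (SP) ratings for a no-feedback and a feedback group, then compares the two correlations with a Fisher r-to-z test.

    # Hypothetical sketch: correlating self-assessment with SP ratings in two groups
    # and comparing the correlations via Fisher's r-to-z transformation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Placeholder scores (percent correct) for a no-feedback and a feedback group.
    sp_no_fb = rng.uniform(50, 90, size=109)
    self_no_fb = rng.uniform(50, 90, size=109)
    sp_fb = rng.uniform(50, 90, size=171)
    self_fb = sp_fb + rng.normal(0, 10, size=171)

    r1, _ = stats.pearsonr(self_no_fb, sp_no_fb)   # correlation without feedback
    r2, _ = stats.pearsonr(self_fb, sp_fb)         # correlation with feedback

    # Fisher r-to-z comparison of two independent correlations.
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1 / (len(sp_no_fb) - 3) + 1 / (len(sp_fb) - 3))
    z = (z2 - z1) / se
    p = 2 * stats.norm.sf(abs(z))
    print(f"r(no feedback) = {r1:.2f}, r(feedback) = {r2:.2f}, z = {z:.2f}, p = {p:.4f}")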
Poirier, Therese I; Pailden, Junvie; Jhala, Ray; Ronald, Katie; Wilhelm, Miranda; Fan, Jingyang
2017-04-01
Objectives. To conduct a prospective evaluation for effectiveness of an error disclosure assessment tool and video recordings to enhance student learning and metacognitive skills while assessing the IPEC competencies. Design. The instruments for assessing performance (planning, communication, process, and team dynamics) in interprofessional error disclosure were developed. Student self-assessments of performance before and after viewing the recordings of their encounters were obtained. Faculty used a similar instrument to conduct real-time assessments. An instrument to assess achievement of the Interprofessional Education Collaborative (IPEC) core competencies was developed. Qualitative data was reviewed to determine student and faculty perceptions of the simulation. Assessment. The interprofessional simulation training involved a total of 233 students (50 dental, 109 nursing and 74 pharmacy). Use of video recordings made a significant difference in student self-assessment for communication and process categories of error disclosure. No differences in student self-assessments were noted among the different professions. There were differences among the family member affects for planning and communication for both pre-video and post-video data. There were significant differences between student self-assessment and faculty assessment for all paired comparisons, except communication in student post-video self-assessment. Students' perceptions of achievement of the IPEC core competencies were positive. Conclusion. The use of assessment instruments and video recordings may have enhanced students' metacognitive skills for assessing performance in interprofessional error disclosure. The simulation training was effective in enhancing perceptions on achievement of IPEC core competencies. This enhanced assessment process appeared to enhance learning about the skills needed for interprofessional error disclosure.
A Systematic Review of the Use of Self-Assessment in Preclinical and Clinical Dental Education.
Mays, Keith A; Branch-Mays, Grishondra L
2016-08-01
A desired outcome of dental and dental hygiene programs is the development of students' self-assessment skills. To that end, the Commission on Dental Accreditation states that "graduates must demonstrate the ability to self-assess." However, it is unclear that merely providing opportunity for self-assessment actually leads to the desired outcome. The aim of this study was to systematically review the literature on self-assessment in dental education. A search of English-language articles for the past 25 years (January 1, 1990, to June 30, 2015) was performed using MEDLINE Medical Subject Heading terms. Each abstract and/or article was validated for inclusion. The data collected included student classification, self-assessment environment, faculty assessment, training, faculty calibration, predictive value, and student perceptions. A qualitative analysis was also performed. From an initial list of 258 articles, 19 were selected for inclusion; exclusion criteria included studies that evaluated a non-preclinical or non-clinical exercise or whose subjects were not predoctoral dental or dental hygiene students. The results showed limited information regarding any kind of systematic training of students on how to perform a self-assessment. The majority of the studies also did not specify the impact of self-assessment on student performance. Self-assessment was primarily performed in the second year and in the preclinical environment. Students received feedback through a correlated faculty assessment in 73% of the studies, but 64% did not provide information regarding students' perceptions of self-assessment. There was a trend for students to be better self-assessors in studies in which a grade was connected to the process. In addition, there was a trend for better performing students to underrate themselves and for poorer performing students to overrate themselves and, overall, for students to score themselves higher than did their faculty evaluators. These findings suggest the need for greater attention to systematically teaching self-assessment in dental and dental hygiene curricula and for further research on the impact of self-assessment on desired outcomes.
Lee, Cliff; Kobayashi, Hiro; Lee, Samuel R; Ohyama, Hiroe
2018-04-01
The aim of this study was to determine how dental student self-assessment and faculty assessment of operative preparations compared for conventional visual assessment versus assessment of scanned digital 3D models. In 2016, all third-year students in the Class of 2018 (N=35) at Harvard School of Dental Medicine performed preclinical exams of Class II amalgam preparations (C2AP) and Class III composite preparations (C3CP) and completed self-assessment forms; in 2017, all third-year students in the Class of 2019 (N=34) performed the same exams. Afterwards, the prepared typodont teeth were digitally scanned. Students self-assessed their preparations digitally, and four faculty members graded the preparations conventionally and digitally. The results showed that, overall, the students assessed their preparations higher than the faculty assessments. The mean student-faculty gaps for C2AP and C3CP in the conventional assessments were 11% and 5%, respectively. The mean digital student-faculty gaps for C2AP and C3CP were 8% and 2%, respectively. In the conventional assessments, preclinical performance was negatively correlated with the student-faculty gap (r=-0.47, p<0.001). The correlations were not statistically significant with the digital assessments (p=0.39, p=0.26). Students in the bottom quartile significantly improved their self-assessment accuracy using digital self-assessments over conventional assessments (C2AP 10% vs. 17% and C3CP 3% vs. 10%, respectively). These results suggest that digital assessments offered a significant learning opportunity for students to critically self-assess in operative preclinical dentistry. The lower-performing students benefitted the most, improving their assessment ability to the level of the rest of the class.
Development of self and peer performance assessment on iodometric titration experiment
NASA Astrophysics Data System (ADS)
Nahadi; Siswaningsih, W.; Kusumaningtyas, H.
2018-05-01
This study aims to describe the process of developing a reliable and valid assessment to measure students' performance on iodometric titration, and the effect of self and peer assessment on students' performance. The self- and peer-assessment instrument provides valuable feedback for improving student performance. The developed assessment contains a rubric and tasks for facilitating self and peer assessment. The participants were 24 second-grade students at a vocational high school in Bandung, divided into two groups. The first 12 students were involved in the validity test of the developed assessment, while the remaining 12 students participated in the reliability test. Content validity was evaluated based on expert judgment. The content validity results based on expert judgment show that the developed performance assessment instrument is valid for each task, with reliability classified as very good. Analysis of the impact of implementing self and peer assessment showed that the peer instrument supported the self-assessment.
ERIC Educational Resources Information Center
Perera, Luckmika; Nguyen, Hoa; Watty, Kim
2014-01-01
This paper investigates the effectiveness (measured using assignment and examination performance) of an assessment design incorporating formative feedback through summative tutorial-based assessments to improve student performance, in a second-year Finance course at an Australian university. Data was collected for students who were enrolled in an…
NASA Astrophysics Data System (ADS)
Tonnis, Dorothy Ann
The goals of this interpretive study were to examine selected Wisconsin science teachers' perceptions of teaching and learning science, to describe the scope of classroom performance assessment practices, and to gain an understanding of teachers' personal and professional experiences that influenced their belief systems of teaching, learning and assessment. The study was designed to answer the research questions: (1) How does the integration of performance assessment relate to the teachers' views of teaching and learning? (2) How are the selected teachers integrating performance assessment in their teaching? (3) What past personal and professional experiences have influenced teachers' attitudes and beliefs related to their classroom performance assessment practices? Purposeful sampling was used to select seven Wisconsin elementary, middle and high school science teachers who participated in the WPADP initiative from 1993-1995. Data collection methods included a Teaching Practices Inventory (TPI), semi-structured interviews, teacher developed portfolios, portfolio conferences, and classroom observations. Four themes and multiple categories emerged through data analysis to answer the research questions and to describe the results. Several conclusions were drawn from this research. First, science teachers who appeared to effectively integrate performance assessment demonstrated transformational thinking in their attitudes and beliefs about teaching and learning science. In addition, these teachers viewed assessment and instructional practices as interdependent. Third, transformational teachers generally used well-defined criteria to judge student work and made it public to the students. Transformational teachers provided students with real-world performance assessment tasks that were also learning events. Furthermore, student task responses informed the transformational teachers about the effectiveness of instruction, students' complex thinking skills, quality of assessment instruments, students' creativity, and students' self-assessment skills. Finally, transformational teachers maintained integration of performance assessment practices through sustaining teacher support networks, engaging in professional development programs, and reflecting upon past personal and professional experiences related to teaching, learning and assessment. Salient conflicts overcome or minimized by transformational teachers include the conflict between assessment scoring and grading issues, validity and reliability concerns about the performance assessment tasks used, and the difficulty for teachers to consistently provide public criteria to students before task administration.
NASA Astrophysics Data System (ADS)
Susilaningsih, E.; Khotimah, K.; Nurhayati, S.
2018-04-01
Assessment of laboratory skills generally lacks specific guidelines, and the individual assessment of students' performance and skills in the laboratory is still not observed and measured properly. Performance assessment is an alternative assessment that can be used to measure students' laboratory skills. The purpose of this study was to determine whether the developed performance assessment instrument can be used to assess students' basic laboratory skills. This research was conducted using a Research and Development approach. Data analysis showed that the developed performance assessment instruments are feasible to implement, with a validation result of 62.5 and a 'very good' rating for the laboratory skills observation sheets and for all other components. The procedure comprised a preliminary stage and a development stage. The preliminary stage was divided into two parts, namely field studies and literature studies. The development stage was divided into several parts, namely 1) development of the instrument type, 2) validation by experts, 3) a limited-scale trial, 4) a large-scale trial and 5) implementation of the product. The instrument was categorized as effective because 26 of 29 students demonstrated very high or high laboratory skills. The resulting performance assessment instrument meets the standard and can be used to assess students' basic laboratory skills.
ERIC Educational Resources Information Center
DeNome, Evonne C.
2015-01-01
This quantitative study reviews the impact on student achievement following professional development on the principles of formative assessment. The study compared mathematics and reading performance data from student populations with teachers who received training in formative assessment to performance data from student populations with teachers…
Making the Grade in America's Cities: Assessing Student Achievement in Urban Districts
ERIC Educational Resources Information Center
Blagg, Kristin
2016-01-01
Many US education reform efforts focus on student performance in large, urban school districts. The National Assessment of Educational Progress's Trial Urban District Assessment (TUDA) program provides data on student achievement in these districts, but differences in student characteristics complicate comparisons of district performance. I use…
Interim Outcomes Assessment of the Comprehensive Clinical Performance Grid for Student Evaluation.
ERIC Educational Resources Information Center
Tolls, Dorothy Bazzinotti; Carlson, Nancy; Wilson, Roger; Richman, Jack
2001-01-01
Assessed the viability of the Comprehensive Clinical Performance Grid for Student Evaluation, introduced at The New England College of Optometry in 1996 in clinical student assessment. Analyzed faculty and student feedback and consistency with previous evaluations, between evaluators, and between clinical sites and tracts. Found satisfaction with…
25 CFR 30.115 - Which students' performance data must be included for purposes of AYP?
Code of Federal Regulations, 2011 CFR
2011-04-01
25 CFR 30.115 (2011 edition), Education, Adequate Yearly Progress, Assessing Adequate Yearly Progress: Which students' performance data must be included for purposes of AYP? The performance data of all students assessed pursuant to...
25 CFR 30.115 - Which students' performance data must be included for purposes of AYP?
Code of Federal Regulations, 2010 CFR
2010-04-01
25 CFR 30.115 (2010 edition), Education, Adequate Yearly Progress, Assessing Adequate Yearly Progress: Which students' performance data must be included for purposes of AYP? The performance data of all students assessed pursuant to...
Colbert-Getz, Jorie M; Fleishman, Carol; Jung, Julianna; Shilkofski, Nicole
2013-01-01
Research suggests that medical students are not accurate in self-assessment, but it is not clear whether students over- or underestimate their skills or how certain characteristics correlate with accuracy in self-assessment. The goal of this study was to determine the effect of gender and anxiety on accuracy of students' self-assessment and on actual performance in the context of a high-stakes assessment. Prior to their fourth year of medical school, two classes of medical students at Johns Hopkins University School of Medicine completed a required clinical skills exam in fall 2010 and 2011, respectively. Two hundred two students rated their anxiety in anticipation of the exam and predicted their overall scores in the history taking and physical examination performance domains. A self-assessment deviation score was calculated by subtracting each student's predicted score from his or her score as rated by standardized patients. When students self-assessed their data gathering performance, there was a weak negative correlation between their predicted scores and their actual scores on the examination. Additionally, there was an interaction effect of anxiety and gender on both self-assessment deviation scores and actual performance. Specifically, females with high anxiety were more accurate in self-assessment and achieved higher actual scores compared with males with high anxiety. No differences by gender emerged for students with moderate or low anxiety. Educators should take into account not only gender but also the role of emotion, in this case anxiety, when planning interventions to help improve accuracy of students' self-assessment.
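A brief illustrative sketch of the deviation-score calculation described above follows (not the study's code; the data are fabricated for illustration): the self-assessment deviation is the predicted score minus the standardized-patient score, and its relationship with actual performance can be checked directly.

    # Hypothetical sketch: self-assessment deviation = predicted score - SP-rated score.
    # Positive values indicate overestimation, negative values underestimation.
    import numpy as np
    from scipy import stats

    predicted = np.array([80, 75, 90, 70, 85, 65, 78])   # students' predicted scores (placeholders)
    actual    = np.array([72, 78, 70, 74, 80, 71, 69])   # standardized-patient ratings (placeholders)

    deviation = predicted - actual
    r, p = stats.pearsonr(predicted, actual)   # accuracy check: predicted vs actual scores
    print(f"mean deviation = {deviation.mean():.1f} points, r(predicted, actual) = {r:.2f} (p = {p:.3f})")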
Performance of Students with Visual Impairments on High-Stakes Tests: A Pennsylvania Report Card
ERIC Educational Resources Information Center
Fox, Lynn A.
2012-01-01
Students with disabilities participate in high-stakes assessments to meet NCLB's newer proficiency standards. This study explored performance in reading and math on the Pennsylvania System of School Assessment (PSSA), Pennsylvania's grade-level assessment, to provide a foundational baseline on performance and accommodations used by students with…
Teacher Compliance and Accuracy in State Assessment of Student Motor Skill Performance
ERIC Educational Resources Information Center
Hall, Tina J.; Hicklin, Lori K.; French, Karen E.
2015-01-01
Purpose: The purpose of this study was to investigate teacher compliance with state mandated assessment protocols and teacher accuracy in assessing student motor skill performance. Method: Middle school teachers (N = 116) submitted eighth grade student motor skill performance data from 318 physical education classes to a trained monitoring…
Influence of Strategies-Based Feedback in Students' Oral Performance
ERIC Educational Resources Information Center
Sisquiarco, Angie; Rojas, Santiago Sánchez; Abad, José Vicente
2018-01-01
This article reports on an action research study that assessed the influence of cognitive and metacognitive strategies-based feedback in the oral performance of a group of 6th grade students at a public school in Medellin, Colombia. Researchers analyzed students' oral performance through assessment and self-assessment rubrics, applied inventories…
O'Mara, Deborah A; Canny, Ben J; Rothnie, Imogene P; Wilson, Ian G; Barnard, John; Davies, Llewelyn
2015-02-02
To report the level of participation of medical schools in the Australian Medical Schools Assessment Collaboration (AMSAC); and to measure differences in student performance related to medical school characteristics and implementation methods. Retrospective analysis of data using the Rasch statistical model to correct for missing data and variability in item difficulty. Linear model analysis of variance was used to assess differences in student performance. 6401 preclinical students from 13 medical schools that participated in AMSAC from 2011 to 2013. Rasch estimates of preclinical basic and clinical science knowledge. Representation of Australian medical schools and students in AMSAC more than doubled between 2009 and 2013. In 2013 it included 12 of 19 medical schools and 68% of medical students. Graduate-entry students scored higher than students entering straight from school. Students at large schools scored higher than students at small schools. Although the significance level was high (P < 0.001), the main effect sizes were small (4.5% and 2.3%, respectively). The time allowed per multiple choice question was not significantly associated with student performance. The effect on performance of multiple assessments compared with the test items as part of a single end-of-year examination was negligible. The variables investigated explain only 12% of the total variation in student performance. An increasing number of medical schools are participating in AMSAC to monitor student performance in preclinical sciences against an external benchmark. Medical school characteristics account for only a small part of overall variation in student performance. Student performance was not affected by the different methods of administering test items.
Introducing a design exigency to promote student learning through assessment: A case study.
Grealish, Laurie A; Shaw, Julie M
2018-02-01
Assessment technologies are often used to classify student and newly qualified nurse performance as 'pass' or 'fail', with little attention to how these decisions are achieved. Examining the design exigencies of classification technologies, such as performance assessment technologies, provides opportunities to explore flexibility and change in the process of using those technologies. Evaluate an established assessment technology for nursing performance as a classification system. A case study analysis that is focused on the assessment approach and a priori design exigencies of performance assessment technology, in this case the Australian Nursing Standards Assessment Tool 2016. Nurse assessors are required to draw upon their expertise to judge performance, but that judgement is described as a source of bias, creating confusion. The definition of satisfactory performance is 'ready to enter practice'. To pass, the performance on each criterion must be at least satisfactory, indicating to the student that no further improvement is required. The Australian Nursing Standards Assessment Tool 2016 does not have a third 'other' category, which is usually found in classification systems. Introducing a 'not yet competent' category and creating a two-part, mixed methods assessment process can improve the Australian Nursing Standards Assessment Tool 2016 assessment technology. Using a standards approach in the first part, judgement is valued and can generate learning opportunities across a program. Using a measurement approach in the second part, student performance can be 'not yet competent' but still meet criteria for year level performance and a graded pass. Subjecting the Australian Nursing Standards Assessment Tool 2016 assessment technology to analysis as a classification system provides opportunities for innovation in design. This design innovation has the potential to support students who move between programs and clinicians who assess students from different universities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Poirier, Therese I; Pailden, Junvie; Jhala, Ray; Ronald, Katie; Wilhelm, Miranda; Fan, Jingyang
2017-01-01
Objectives. To conduct a prospective evaluation for effectiveness of an error disclosure assessment tool and video recordings to enhance student learning and metacognitive skills while assessing the IPEC competencies. Design. The instruments for assessing performance (planning, communication, process, and team dynamics) in interprofessional error disclosure were developed. Student self-assessments of performance before and after viewing the recordings of their encounters were obtained. Faculty used a similar instrument to conduct real-time assessments. An instrument to assess achievement of the Interprofessional Education Collaborative (IPEC) core competencies was developed. Qualitative data was reviewed to determine student and faculty perceptions of the simulation. Assessment. The interprofessional simulation training involved a total of 233 students (50 dental, 109 nursing and 74 pharmacy). Use of video recordings made a significant difference in student self-assessment for communication and process categories of error disclosure. No differences in student self-assessments were noted among the different professions. There were differences among the family member affects for planning and communication for both pre-video and post-video data. There were significant differences between student self-assessment and faculty assessment for all paired comparisons, except communication in student post-video self-assessment. Students’ perceptions of achievement of the IPEC core competencies were positive. Conclusion. The use of assessment instruments and video recordings may have enhanced students’ metacognitive skills for assessing performance in interprofessional error disclosure. The simulation training was effective in enhancing perceptions on achievement of IPEC core competencies. This enhanced assessment process appeared to enhance learning about the skills needed for interprofessional error disclosure. PMID:28496274
Portfolios: An Alternative Method of Student and Program Assessment
Hannam, Susan E.
1995-01-01
The use of performance-based evaluation and alternative assessment techniques has become essential for curriculum programs seeking Commission on Accreditation of Allied Health Education Programs (CAAHEP) accreditation. In athletic training education, few assessment models exist to assess student performance over the entire course of their educational program. This article describes a model of assessment: a student athletic training portfolio of “best works.” The portfolio can serve as a method to assess student development and to assess program effectiveness. The goals of the program include purposes specific to the five NATA performance domains. In addition, four types of portfolio evidence are described: artifacts, attestations, productions, and reproductions. Quality assignments and projects completed by students as they progress through a six-semester program are identified relative to the type of evidence and the domain(s) they represent. The portfolio assists with student development, provides feedback for curriculum planning, allows for student/faculty collaboration and “coaching” of the student, and assists with job searching. This information will serve as a useful model for those athletic training programs looking for an alternative method of assessing student and program outcomes. PMID:16558359
Power, Thomas J; Dombrowski, Stefan C; Watkins, Marley W; Mautone, Jennifer A; Eagle, John W
2007-06-01
Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as deficits in homework performance. The sample included 163 students attending two school districts in the Northeast. Parents completed the 36-item Homework Performance Questionnaire - Parent Scale (HPQ-PS). Teachers completed the 22-item teacher scale (HPQ-TS) for each student for whom the HPQ-PS had been completed. A common factor analysis with principal axis extraction and promax rotation was used to analyze the findings. The results of the factor analysis of the HPQ-PS revealed three salient and meaningful factors: student task orientation/efficiency, student competence, and teacher support. The factor analysis of the HPQ-TS uncovered two salient and substantive factors: student responsibility and student competence. The findings of this study suggest that the HPQ is a promising set of measures for assessing student homework functioning and contextual factors that may influence performance. Directions for future research are presented.
Power, Thomas J.; Dombrowski, Stefan C.; Watkins, Marley W.; Mautone, Jennifer A.; Eagle, John W.
2007-01-01
Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as deficits in homework performance. The sample included 163 students attending two school districts in the Northeast. Parents completed the 36-item Homework Performance Questionnaire – Parent Scale (HPQ-PS). Teachers completed the 22-item teacher scale (HPQ-TS) for each student for whom the HPQ-PS had been completed. A common factor analysis with principal axis extraction and promax rotation was used to analyze the findings. The results of the factor analysis of the HPQ-PS revealed three salient and meaningful factors: student task orientation/efficiency, student competence, and teacher support. The factor analysis of the HPQ-TS uncovered two salient and substantive factors: student responsibility and student competence. The findings of this study suggest that the HPQ is a promising set of measures for assessing student homework functioning and contextual factors that may influence performance. Directions for future research are presented. PMID:18516211
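The factor-analytic procedure described in this abstract (common factor analysis with principal axis extraction and promax rotation) can be sketched as follows using the third-party factor_analyzer Python package; this is an illustrative reconstruction with simulated responses and placeholder item names, not the authors' analysis, and the factor labels simply mirror the three factors named in the abstract.

    # Hypothetical sketch: common factor analysis with principal axis extraction and
    # promax (oblique) rotation. Responses below are simulated, not the HPQ-PS data.
    import numpy as np
    import pandas as pd
    from factor_analyzer import FactorAnalyzer   # pip install factor_analyzer

    rng = np.random.default_rng(1)
    items = [f"hpq_ps_item_{i + 1}" for i in range(36)]                       # 36 parent-scale items (placeholder names)
    data = pd.DataFrame(rng.integers(1, 6, size=(163, 36)), columns=items)    # 163 respondents, 1-5 ratings

    fa = FactorAnalyzer(n_factors=3, method="principal", rotation="promax")
    fa.fit(data)

    loadings = pd.DataFrame(fa.loadings_, index=items,
                            columns=["task_orientation", "competence", "teacher_support"])
    print(loadings.round(2).head())   # inspect salient loadings per factor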
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. Agricultural Curriculum Materials Service.
This report contains 26 performance assessments for documenting student employability skills. Each performance assessment consists of the following: a competency; a terminal performance objective (outcome); competency builders and pupil performance objectives (criteria for documenting mastery of the objective); applied academic competencies;…
Accounting for the Performance of Students With Disabilities on Statewide Assessments
ERIC Educational Resources Information Center
Malmgren, Kimber W.; McLaughlin, Margaret J.; Nolet, Victor
2005-01-01
The current study investigates school-level factors that affect the performance of students with disabilities on statewide assessments. Data were collected as part of a larger study examining the effects of education policy reform on students with disabilities. Statewide assessment data for students with disabilities from 2 school districts within…
Deane, Richard P; Joyce, Pauline; Murphy, Deirdre J
2015-10-09
Team Objective Structured Bedside Assessment (TOSBA) is a learning approach in which a team of medical students undertake a set of structured clinical tasks with real patients in order to reach a diagnosis and formulate a management plan and receive immediate feedback on their performance from a facilitator. TOSBA was introduced as formative assessment to an 8-week undergraduate teaching programme in Obstetrics and Gynaecology (O&G) in 2013/14. Each student completed 5 TOSBA sessions during the rotation. The aim of the study was to evaluate TOSBA as a teaching method to provide formative assessment for medical students during their clinical rotation. The research questions were: Does TOSBA improve clinical, communication and/or reasoning skills? Does TOSBA provide quality feedback? A prospective cohort study was conducted over a full academic year (2013/14). The study used 2 methods to evaluate TOSBA as a teaching method to provide formative assessment: (1) an online survey of TOSBA at the end of the rotation and (2) a comparison of the student performance in TOSBA with their performance in the final summative examination. During the 2013/14 academic year, 157 students completed the O&G programme and the final summative examination. Each student completed the required 5 TOSBA tasks. The response rate to the student survey was 68 % (n = 107/157). Students reported that TOSBA was a beneficial learning experience with a positive impact on clinical, communication and reasoning skills. Students rated the quality of feedback provided by TOSBA as high. Students identified the observation of the performance and feedback of other students within their TOSBA team as key features. High-achieving students performed well in both TOSBA and summative assessments. The majority of students who performed poorly in TOSBA subsequently passed the summative assessments (n = 20/21, 95 %). Conversely, the majority of students who failed the summative assessments had satisfactory scores in TOSBA (n = 6/7, 86 %). TOSBA has a positive impact on the clinical, communication and reasoning skills of medical students through the provision of high-quality feedback. The use of structured pre-defined tasks, the observation of the performance and feedback of other students and the use of real patients are key elements of TOSBA. Avoiding student complacency and providing accurate feedback from TOSBA are ongoing challenges.
Satheesh, Keerthana M; Brockmann, Lorraine B; Liu, Ying; Gadbury-Amyot, Cynthia C
2015-12-01
While educators agree that using self-assessment in education is valuable, a major challenge is the poor agreement often found between faculty assessment and student self-assessment. The aim of this study was to determine if use of a predefined grading rubric would improve reliability between faculty and dental student assessment on a periodontal oral competency examination. Faculty members used the grading rubric to assess students' performance on the exam. Immediately after taking the exam, students used the same rubric to self-assess their performance on it. Data were collected from all third- and/or fourth-year students in four classes at one U.S. dental school from 2011 to 2014. Since two of the four classes took the exam in both the third and fourth years, those data were compared to determine if those students' self-assessment skills improved over time. Statistical analyses were performed to determine agreement between the two faculty graders and between the students' and faculty assessments on each criterion in the rubric and the overall grade. Data from the upper and lower performing quartiles of students were sub-analyzed. The results showed that faculty reliability for the overall grades was high (K=0.829) and less so for individual criteria, while student-faculty reliability was weak to moderate for both overall grades (Spearman's rho=0.312) and individual criteria. Students in the upper quartile self-evaluated themselves more harshly than the faculty (p<0.0001), while the lower quartile students overestimated their performance (p=0.0445) compared to faculty evaluation. No significant improvement was found in assessment over time in the students who took the exam in the third and fourth years. This study found only limited support for the hypothesis that a grading rubric used by both faculty and students would increase correspondence between faculty and student assessment and points to a need to reexamine the rubric and instructional strategies to help students improve their ability to self-assess their work.
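The agreement statistics reported above (kappa between faculty graders and Spearman's rho between student and faculty grades) can be computed as in the short sketch below; it is illustrative only, using plain Cohen's kappa and invented grade vectors rather than the study's data.

    # Hypothetical sketch: inter-rater and student-faculty agreement on rubric grades.
    import numpy as np
    from scipy import stats
    from sklearn.metrics import cohen_kappa_score

    faculty_a = ["A", "B", "B", "C", "A", "B", "C", "A"]     # grader 1 overall grades (placeholders)
    faculty_b = ["A", "B", "C", "C", "A", "B", "C", "B"]     # grader 2 overall grades (placeholders)
    kappa = cohen_kappa_score(faculty_a, faculty_b)          # faculty-faculty reliability

    student_scores = np.array([90, 82, 75, 70, 95, 80, 68, 88])   # student self-assessed scores (placeholders)
    faculty_scores = np.array([85, 80, 70, 72, 90, 78, 65, 80])   # faculty-assigned scores (placeholders)
    rho, p = stats.spearmanr(student_scores, faculty_scores)      # student-faculty agreement

    print(f"Cohen's kappa (faculty-faculty) = {kappa:.3f}")
    print(f"Spearman's rho (student-faculty) = {rho:.3f} (p = {p:.3f})")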
Keeping Student Performance Central: The New York Assessment Collection. Studies on Exhibitions.
ERIC Educational Resources Information Center
Allen, David; McDonald, Joseph
This report describes a computer tool used by the state of New York to assess student performance in elementary and secondary grades. Based on the premise that every assessment is a system of interacting elements, the tool examines students on six dimensions: vision, prompt, coaching context, performance, standards, and reflection. Vision, which…
Development and validation of a Clinical Assessment Tool for Nursing Education (CAT-NE).
Skúladóttir, Hafdís; Svavarsdóttir, Margrét Hrönn
2016-09-01
The aim of this study was to develop a valid assessment tool to guide clinical education and evaluate students' performance in clinical nursing education. The development of the Clinical Assessment Tool for Nursing Education (CAT-NE) was based on the theory of nursing as professional caring and the Bologna learning outcomes. Benson and Clark's four steps of instrument development and validation guided the development and assessment of the tool. A mixed-methods approach with individual structured cognitive interviewing and quantitative assessments was used to validate the tool. Supervisory teachers, a pedagogical consultant, clinical expert teachers, clinical teachers, and nursing students at the University of Akureyri in Iceland participated in the process. This assessment tool is valid to assess the clinical performance of nursing students; it consists of rubrics that list the criteria for the students' expected performance. According to the students and their clinical teachers, the assessment tool clarified learning objectives, enhanced the focus of the assessment process, and made evaluation more objective. Training clinical teachers on how to assess students' performances in clinical studies and use the tool enhanced the quality of clinical assessment in nursing education. Copyright © 2016 Elsevier Ltd. All rights reserved.
Carrasco, Gonzalo A; Behling, Kathryn C; Lopez, Osvaldo J
2018-04-01
Student participation is important for the success of active learning strategies, but participation is often linked to the level of preparation. At our institution, we use two types of active learning activities, a modified case-based learning exercise called active learning groups (ALG) and team-based learning (TBL). These strategies have different assessment and incentive structures for participation. Non-cognitive skills are assessed in ALG using a subjective five-point Likert scale. In TBL, assessment of individual student preparation is based on a multiple choice quiz conducted at the beginning of each session. We studied first-year medical student participation and performance in ALG and TBL as well as performance on course final examinations. Student performance in TBL, but not in ALG, was strongly correlated with final examination scores. Additionally, in students who performed in the upper 33rd percentile on the final examination, there was a positive correlation between final examination performance and participation in TBL and ALG. This correlation was not seen in students who performed in the lower 33rd percentile on the final examinations. Our results suggest that assessments of medical knowledge during active learning exercises could supplement non-cognitive assessments and could be good predictors of performance on summative examinations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roxas, R. M.; Monterola, C.; Carreon-Monterola, S. L.
2010-07-28
We probe the effect of seating arrangement, group composition and group-based competition on students' performance in Physics using a teaching technique adopted from Mazur's peer instruction method. Ninety-eight lectures, involving 2339 students, were conducted across nine learning institutions from February 2006 to June 2009. All the lectures were interspersed with student interaction opportunities (SIO), in which students work in groups to discuss and answer concept tests. Two individual assessments were administered before and after the SIO. The ratio of the post-assessment score to the pre-assessment score and the Hake factor were calculated to establish the improvement in student performance. Using actual assessment results and neural network (NN) modeling, an optimal seating arrangement for a class was determined based on student seating location. The NN model also provided a quantifiable method for sectioning students. Lastly, the study revealed that competition-driven interactions increase within-group cooperation and lead to greater improvement in student performance.
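As an illustration of the gain measures mentioned in this abstract, the sketch below computes the post/pre score ratio and the Hake (normalized) gain, assuming the standard formula g = (post - pre) / (max - pre); the scores are placeholders, not the study's data.

    # Hypothetical sketch: post/pre ratio and Hake normalized gain for concept-test scores.
    import numpy as np

    max_score = 100.0
    pre  = np.array([40.0, 55.0, 35.0, 60.0, 48.0])   # pre-assessment scores (placeholders)
    post = np.array([65.0, 80.0, 50.0, 85.0, 70.0])   # post-assessment scores (placeholders)

    ratio = post / pre                                 # simple improvement ratio
    hake_gain = (post - pre) / (max_score - pre)       # normalized gain, g = (post - pre) / (max - pre)

    print(f"mean post/pre ratio = {ratio.mean():.2f}")
    print(f"mean Hake gain      = {hake_gain.mean():.2f}")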
The use of observational diaries in in-training evaluation: student perceptions.
Govaerts, Marjan J B; van der Vleuten, Cees P M; Schuwirth, Lambert W T; Muijtjens, Arno M M
2005-08-01
In health science education, clinical clerkships serve the twofold purpose of guiding student learning and assessing performance. Evidently, both formative and summative assessment procedures are needed in clerkship assessment. In-training evaluation (ITE) has the potential to serve both assessment functions. Implementation of effective ITE, however, has been shown to be problematic, partly because integration of assessment functions may have negative consequences for teaching and learning. This study investigates student perceptions of the impact of an integrated assessment approach, seeking to refine criteria for effective ITE. In the curriculum of Maastricht Midwifery School (MMS), clerkship assessment is based on ITE serving both assessment functions. The ITE model is based on principles of extensive work sampling and frequent documentation of performance. A focus group technique was used to explore student perceptions of the impact of the ITE approach on student learning and supervisor teaching behaviour, and on the usefulness of information for decision making. Results indicate that the assessment approach is effective in guiding student learning. Furthermore, students consider the frequent performance documentation essential in clerkship grading. Acceptance and effectiveness of ITE require a learning environment that is safe and respectful. Transparency of assessment processes is the key to success. Suggestions for improvement focus on variation in evaluation formats, improvement of feedback (narrative, complete) and student involvement in assessment. ITE can fulfill both its formative and summative purposes when some crucial conditions are taken into account. Careful training of both supervisors and students in the use of ITE for student learning and performance measurement is essential.
Impact of Student vs Faculty Facilitators on Motivational Interviewing Student Outcomes.
Widder-Prewett, Rebecca; Draime, Juanita A; Cameron, Ginger; Anderson, Douglas; Pinkerton, Mark; Chen, Aleda M H
2017-08-01
Objective. To determine the impact of student or faculty facilitation on student self-assessed attitudes, confidence, and competence in motivational interviewing (MI) skills; actual competence; and evaluation of facilitator performance. Methods. Second-year pharmacy (P2) students were randomly assigned to a student or faculty facilitator for a four-hour, small-group practice of MI skills. MI skills were assessed in a simulated patient encounter with the mMITI (modified Motivational Interviewing Treatment Integrity) tool. Students completed a pre-post, 6-point, Likert-type assessment addressing the research objectives. Differences were assessed using a Mann-Whitney U test. Results. Student (N=44) post-test attitudes, confidence, perceived or actual competence, and evaluations of facilitator performance did not differ between faculty- and student-facilitated groups. Conclusion. Using pharmacy students as small-group facilitators did not affect student performance, and student facilitators were viewed as favorably as faculty facilitators. Using pharmacy students as facilitators can lessen faculty workload and provide an outlet for students to develop communication and facilitation skills that will be needed in future practice.
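For readers unfamiliar with the test named in the Methods, the group comparison could look like the following SciPy sketch; the ratings below are hypothetical stand-ins for the Likert-type responses, not study data.

```python
# Illustrative only: comparing post-test Likert-type ratings between a
# faculty-facilitated and a student-facilitated group (hypothetical data).
from scipy.stats import mannwhitneyu

faculty_group = [5, 4, 6, 5, 5, 4, 6, 5]
student_group = [5, 5, 4, 6, 5, 4, 5, 6]

u_stat, p_value = mannwhitneyu(faculty_group, student_group, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```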
Fang, Ji-Tseng; Ko, Yu-Shien; Chien, Chu-Chun; Yu, Kuang-Hui
2013-01-01
Since 1994, Taiwanese medical universities have employed the multiple application method comprising "recommendations and screening" and "admission application." The purpose of this study is to examine whether medical students admitted through different admission programs performed differently. To evaluate the six core competencies for medical students proposed by the Accreditation Council for Graduate Medical Education (ACGME), this study employed various assessment tools, including student opinion feedback, multi-source feedback (MSF), course grades, and examination results. MSF comprises a self-assessment scale, a peer assessment scale, a nursing staff assessment scale, a visiting staff assessment scale, and a chief resident assessment scale. In the subscales, the Cronbach's alpha values were higher than 0.90, indicating good reliability. Research participants consisted of 182 students from the School of Medicine at Chang Gung University. Regarding students' average grade for the medical ethics course, the performance of students who were enrolled through school recommendations exceeded that of students who were enrolled through the National College University Entrance Examination (NCUEE) (p = 0.011), and all considered "teamwork" the most important competency. Students from different entry pipelines showed no significant differences on the "communication," "work attitude," "medical knowledge," and "teamwork" assessment scales. The improvement rate of the students who were enrolled through school recommendations was better than that of the students who were enrolled through the NCUEE in the "professional skills," "medical core competencies," "communication," and "teamwork" items of the self-assessment and peer assessment scales. However, the students who were enrolled through the NCUEE were better in the "professional skills," "medical core competencies," "communication," and "teamwork" items of the visiting staff assessment scale and the chief resident assessment scale. Collectively, the performance of the students enrolled through recommendations was slightly better than that of the students enrolled through the NCUEE, although statistical significance was found only in certain grades.
NASA Astrophysics Data System (ADS)
Haydel, Angela Michelle
The purpose of this dissertation was to advance theoretical understanding about fit between the personal resources of individuals and the characteristics of science achievement tasks. Testing continues to be pervasive in schools, yet we know little about how students perceive tests and what they think and feel while they are actually working on test items. This study focused on both the personal (cognitive and motivational) and situational factors that may contribute to individual differences in achievement-related outcomes. 387 eighth grade students first completed a survey including measures of science achievement goals, capability beliefs, efficacy related to multiple-choice items and performance assessments, validity beliefs about multiple-choice items and performance assessments, and other perceptions of these item formats. Students then completed science achievement tests including multiple-choice items and two performance assessments. A sample of students was asked to verbalize both thoughts and feelings as they worked through the test items. These think-alouds were transcribed and coded for evidence of cognitive, metacognitive and motivational engagement. Following each test, all students completed measures of effort, mood, energy level and strategy use during testing. Students reported that performance assessments were more challenging, authentic, interesting and valid than multiple-choice tests. They also believed that comparisons between students were easier using multiple-choice items. Overall, students tried harder, felt better, had higher levels of energy and used more strategies while working on performance assessments. Findings suggested that performance assessments might be more congruent with a mastery achievement goal orientation, while multiple-choice tests might be more congruent with a performance achievement goal orientation. A variable-centered analytic approach including regression analyses provided information about how students, on average, who differed in terms of their teachers' ratings of their science ability, achievement goals, capability beliefs and experiences with science achievement tasks perceived, engaged in, and performed on multiple-choice items and performance assessments. Person-centered analyses provided information about the perceptions, engagement and performance of subgroups of individuals who had different motivational characteristics. Generally, students' personal goals and capability beliefs related more strongly to test perceptions, but not performance, while teacher ratings of ability and test-specific beliefs related to performance.
NASA Astrophysics Data System (ADS)
Nahadi, Firman, Harry; Yulina, Erlis
2016-02-01
The purposes of this study were to develop a performance assessment instrument for assessing the psychomotor competence of high school students on salt hydrolysis concepts. The design used in this study was Research & Development, which consists of three phases: development, testing and application of the instrument. Subjects in this study were 93 high school students in class XI science. In the development phase, seven validators validated the 17-task instrument. In the test phase, 19 students were divided into three groups that conducted the performance test in salt hydrolysis lab work at different times and were observed by six raters. The first, second, and third groups consisted of five, six, and eight students, respectively. In the application phase, two raters observed the performance of 74 students in the salt hydrolysis lab work over several sessions. The results showed that 16 of the 17 tasks of the performance assessment instrument developed can be stated to be valid, with CVR values of 1.00 and 0.714, while the remaining task was not valid, with a CVR value of 0.429, below the critical value (0.622). In the test phase, the reliability values of the instrument were 0.951 for the five-student group, 0.806 for the six-student group and 0.743 for the eight-student group. From the interviews, teachers strongly agreed with the performance instrument developed. They stated that the instrument was feasible to use with a maximum of six students in a single observation.
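The CVR figures quoted above follow Lawshe's content validity ratio, CVR = (n_e - N/2)/(N/2), where n_e is the number of validators judging a task essential and N is the panel size; the minimal sketch below, using the study's panel size of seven, reproduces the reported values.

```python
# Minimal sketch of Lawshe's content validity ratio (CVR).
# With N = 7 validators: 7 essential -> 1.00, 6 -> 0.714, 5 -> 0.429.
def cvr(n_essential: int, n_validators: int) -> float:
    return (n_essential - n_validators / 2) / (n_validators / 2)

for n_e in (7, 6, 5):
    print(n_e, round(cvr(n_e, 7), 3))
```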
Student performance on argumentation task in the Swedish National Assessment in science
NASA Astrophysics Data System (ADS)
Jönsson, Anders
2016-07-01
The aim of this study is to investigate the influence of content knowledge on students' socio-scientific argumentation in the Swedish National Assessment in biology, chemistry and physics for 12-year-olds. In Sweden, the assessment of socio-scientific argumentation has been a major part of the National Assessment during three consecutive years and this study utilizes data on student performance to investigate (a) the relationship between tasks primarily addressing argumentation and tasks addressing primarily content knowledge as well as (b) students' performance on argumentation tasks, which differ in relation to content, subject, aspect of argumentation and assessment criteria. Findings suggest a strong and positive relationship between content knowledge and students' performance on argumentation tasks. The analysis also provides some hypotheses about the task difficulty of argumentation tasks that may be pursued in future investigations.
Education Reforms and Innovations to Improve Student Assessment Performance
ERIC Educational Resources Information Center
McAfee, Wade J.
2014-01-01
International assessments such as the Trends in International Mathematics and Science Study (TIMSS) and the Program for International Student Assessment (PISA) have shown that United States students, specifically in the fourth and eighth grades, are not performing well when compared to their international peers. Educational stakeholders including…
ERIC Educational Resources Information Center
Legg, David E.
2013-01-01
The purpose of this study was to explore the relationship between student performance on Reading Curriculum-Based Measures (R-CBM) and student performance on Alaska's standards-based assessment (SBA) administered to Grade 3 through Grade 5 students in the Studied School District (SSD) as required by…
Impact of self-assessment by students on their learning.
Sharma, Rajeev; Jain, Amit; Gupta, Naveenta; Garg, Sonia; Batta, Meenal; Dhir, Shashi Kant
2016-01-01
Tutor assessment is sometimes also considered an exercise of power by the assessor over the assessees. Student self-assessment is the process by which students gather information about and reflect on their own learning, and it is considered a very important component of learning. The primary objective of this study was to analyze the impact of self-assessment by undergraduate medical students on their subsequent academic performance. The secondary objective was to obtain the perceptions of students and faculty about self-assessment as a tool for enhanced learning. The study was based on the evaluation of two theory tests consisting of both essay-type and short-answer questions, administered to students of the 1st year MBBS (n = 89). Students self-assessed their performance three days after the first test, followed by faculty marking and feedback. A nonidentical theory test on the same topic with the same difficulty level was then conducted after 7 days and assessed by the teachers. Feedback on the perceptions of students and faculty about this intervention was obtained. Significant improvement in academic performance after the process of self-assessment was observed (P < 0.001). There was a significantly positive correlation between student and teacher marking (r = 0.79). Both students and faculty perceived it to be helpful for developing self-directed learning skills. Self-assessment can increase students' interest and motivation in a subject, leading to enhanced learning and better academic performance, and helping them develop critical skills for analysis of their own work.
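A hedged sketch of the two statistics reported here, using made-up marks rather than the study's data: student-teacher agreement as a Pearson correlation and the improvement between the two tests as a paired t-test.

```python
# Illustrative only (hypothetical marks): correlation between student
# self-marking and teacher marking, and a paired t-test on scores from the
# tests before and after the self-assessment exercise.
from scipy.stats import pearsonr, ttest_rel

student_marks = [12, 15, 9, 18, 14, 11, 16]
teacher_marks = [13, 14, 10, 17, 15, 10, 16]
test1_scores  = [40, 55, 38, 62, 50, 45, 58]
test2_scores  = [48, 60, 45, 70, 57, 50, 66]

r, p_corr = pearsonr(student_marks, teacher_marks)
t, p_paired = ttest_rel(test2_scores, test1_scores)
print(f"r = {r:.2f} (p = {p_corr:.3f}); paired t = {t:.2f} (p = {p_paired:.4f})")
```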
The reliability of in-training assessment when performance improvement is taken into account.
van Lohuizen, Mirjam T; Kuks, Jan B M; van Hell, Elisabeth A; Raat, A N; Stewart, Roy E; Cohen-Schotanus, Janke
2010-12-01
During in-training assessment students are frequently assessed over a longer period of time and therefore it can be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time and how this improvement affects the reliability of the overall judgement. In-training assessment results were obtained from 104 students on rotation at our university hospital or at one of the six affiliated hospitals. Generalisability theory was used in combination with multilevel analysis to obtain reliability coefficients and to estimate the number of assessments needed for reliable overall judgement, both including and excluding performance improvement. Students' clinical performance ratings improved significantly from a mean of 7.6 at the start to a mean of 7.8 at the end of their clerkship. When taking performance improvement into account, reliability coefficients were higher. The number of assessments needed to achieve a reliability of 0.80 or higher decreased from 17 to 11. Therefore, when studying reliability of in-training assessment, performance improvement should be considered.
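The authors used generalisability theory combined with multilevel analysis; as a simplified stand-in (not their method), the Spearman-Brown prophecy formula illustrates how the number of assessments needed for a target reliability of 0.80 can be projected from a single-assessment reliability coefficient. The coefficient below is an assumed value, not one from the study.

```python
import math

# Simplified illustration (assumed single-assessment reliability): project the
# number of assessments needed to reach a target overall reliability using the
# Spearman-Brown prophecy formula.
def assessments_needed(r_single: float, r_target: float = 0.80) -> int:
    k = (r_target * (1 - r_single)) / (r_single * (1 - r_target))
    return math.ceil(k)

print(assessments_needed(0.20))  # with r_single = 0.20 -> 16 assessments
```

The same logic underlies the decision-study step in generalisability theory, where variance components replace the single reliability coefficient.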
Enabling performance skills: Assessment in engineering education
NASA Astrophysics Data System (ADS)
Ferrone, Jenny Kristina
Current reform in engineering education is part of a national trend emphasizing student learning as well as accountability in instruction. Assessing student performance to demonstrate accountability has become a necessity in academia. Under newly adopted criteria proposed by the Accreditation Board for Engineering and Technology (ABET), undergraduates are expected to demonstrate proficiency in outcomes considered essential for graduating engineers. The case study was designed as a formative evaluation of freshman engineering students to assess the perceived effectiveness of performance skills in a design laboratory environment. The mixed methodology used both quantitative and qualitative approaches to assess students' performance skills and congruency among the respondents, based on individual, team, and faculty perceptions of team effectiveness in three ABET areas: Communication Skills, Design Skills, and Teamwork. The findings of the research were used to address future use of the assessment tool and process. The study found statistically significant differences in perceptions of Teamwork Skills (p < .05). When groups composed of students and professors were compared, professors were less likely to perceive students' teaming skills as effective. The study indicated the need to: (1) improve non-technical performance skills, such as teamwork, among freshman engineering students; (2) incorporate feedback into the learning process; (3) strengthen the assessment process with a follow-up plan that specifically targets performance skill deficiencies; and (4) integrate the assessment instrument and practice with ongoing curriculum development. The findings generated by this study provide engineering departments engaged in assessment activity with an opportunity to reflect on, refine, and develop their programs as assessment continues. The study also extends research on ABET competencies of engineering students in an under-investigated area: factors correlated with team processes, behavior, and student learning.
ERIC Educational Resources Information Center
Shelton, Angela
2012-01-01
Many United States secondary students perform poorly on standardized summative science assessments. Situated Assessments using Virtual Environments (SAVE) Science is an innovative assessment project that seeks to capture students' science knowledge and understanding by contextualizing problems in a game-based virtual environment called…
Clinical expectations: what facilitators expect from ESL students on clinical placement.
San Miguel, Caroline; Rogan, Fran
2012-03-01
Many nursing students for whom English is a second language (ESL) face challenges related to communication on clinical placement and although clinical facilitators are not usually trained language assessors, they are often in a position of needing to assess ESL students' clinical language performance. Little is known, however, about the particular areas of clinical performance facilitators focus on when they are assessing ESL students. This paper discusses the results of a study of facilitators' written assessment comments about the clinical performance of a small group of ESL nursing students over a two and a half year period. These comments were documented on students' clinical assessment forms at the end of each placement. The results provide a more detailed insight into facilitators' expectations of students' language performance and the particular challenges faced by ESL students and indicate that facilitators have clear expectations of ESL students regarding communication, learning styles and professional demeanour. These findings may help both ESL students and their facilitators better prepare for clinical placement. Copyright © 2011 Elsevier Ltd. All rights reserved.
Hulsman, Robert L; Peters, Joline F; Fabriek, Marcel
2013-09-01
Peer-assessment of communication skills may contribute to mastery of assessment criteria. When students develop the capacity to judge their peers' performance, they might improve their capacity to examine their own clinical performance. In this study peer-assessment ratings are compared to teacher-assessment ratings. The aim of this paper is to explore the impact of personality and social reputation as sources of bias in assessment of communication skills. Second-year students were trained in and then assessed on history-taking communication skills. Peers rated the students' personality and academic and social reputation. Peer-assessment ratings were significantly correlated with teacher ratings in a summative assessment of medical communication. Peers did not provide negative ratings on final scales but did provide negative ratings on subcategories. Peer- and teacher-assessments were both related to the students' personality and academic reputation. Peer-assessment cannot replace teacher-assessment if the assessment should result in high-stakes decisions about students. Our data do not confirm the hypothesis that peers are overly biased by personality and reputation characteristics in peer-assessment of performance. Early introduction of peer-assessment in medical education would facilitate early acceptance of this mode of evaluation and would promote, early on, the habit of critical evaluation of professional clinical performance and acceptance of being evaluated critically by peers. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Validating the Assessment for Measuring Indonesian Secondary School Students Performance in Ecology
NASA Astrophysics Data System (ADS)
Rachmatullah, A.; Roshayanti, F.; Ha, M.
2017-09-01
The aims of this study are to validate the American Association for the Advancement of Science (AAAS) Ecology assessment and to examine the performance of Indonesian secondary school students on the assessment. A total of 611 Indonesian secondary school students (218 middle school students and 393 high school students) participated in the study. Forty-five items of the AAAS assessment on the topic of Interdependence in Ecosystems were divided into two versions, each sharing 21 common items. The linking-item method was used to combine the two versions of the assessment, and Rasch analyses were then used to validate the instrument. An independent samples t-test was also run to compare the performance of Indonesian and American students based on mean item difficulty. We found that, of the 45 items, three were identified as misfitting. We also found that both Indonesian middle and high school students performed significantly lower than American students, with very large and medium effect sizes, respectively. We discuss our findings with regard to validation issues and the connection to Indonesian students' science literacy.
Souchal, Carine; Toczek, Marie-Christine; Darnon, Céline; Smeding, Annique; Butera, Fabrizio; Martinot, Delphine
2014-03-01
Is it possible to reach performance equality between boys and girls in a science class? Given the stereotypes targeting their groups in scientific domains, diagnostic contexts generally lower girls' performance and non-diagnostic contexts may harm boys' performance. The present study tested the effectiveness of a mastery-oriented assessment, allowing both boys and girls to perform at an optimal level in a science class. Participants were 120 boys and 72 girls (all high-school students). Participants attended a science lesson while expecting a performance-oriented assessment (i.e., an assessment designed to compare and select students), a mastery-oriented assessment (i.e., an assessment designed to help students in their learning), or no assessment of this lesson. In the mastery-oriented assessment condition, both boys and girls performed at a similarly high level, whereas the performance-oriented assessment condition reduced girls' performance and the no-assessment condition reduced boys' performance. One way to increase girls' performance on a science test without harming boys' performance is to present assessment as a tool for improving mastery rather than as a tool for comparing performances. © 2013 The British Psychological Society.
The ‘unskilled and unaware’ effect is linear in a real-world setting
Sawdon, Marina; Finn, Gabrielle
2014-01-01
Self-assessment ability in medical students and practising physicians is generally poor, yet essential for academic progress and professional development. The aim of this study was to determine undergraduate medical students' ability to self-assess their exam performance accurately in a real-world, high-stakes exam setting, something not previously investigated. Year 1 and Year 2 medical students (n = 74) participated in a self-assessment exercise. Students predicted their exam grade (%) on the anatomy practical exam. This exercise was completed online immediately after the exam. Students' predicted exam grades were correlated with their actual attained exam grades using a Pearson's correlation. Demographic data were analysed using an independent t-test. A negative correlation was found between students' overall predicted and attained exam grades (P < 0.0001). There was a significant difference between the students' predicted and actual grades in the bottom, third, and top quartiles of participants (P < 0.0001), but not in the second quartile. There was no relationship between students' entry status into medical school and self-assessment ability (Year 1: P = 0.112; Year 2: P = 0.236) or between males and females (Year 1: P = 0.174). However, a relationship was found for gender in Year 2 (P = 0.022). The number of hours of additional self-directed learning undertaken did not influence students' self-assessment in either year. Our results demonstrate the 'unskilled and unaware' phenomenon in a real-world, high-stakes and practice-related setting. Students in all quartiles were unable to self-assess their exam performance accurately, except for a group of mid-range students in the second quartile. Poor performers were shown to overestimate their ability and, conversely, high achievers to underestimate their performance. We present evidence, in a real-world setting, of a strong, significant linear relationship between medical students' ability to self-assess their performance in an anatomy practical exam and their actual performance. Despite the limited ability to self-assess reported in the literature, our results may inform approaches to revalidation, which currently frequently rely on an ability to self-assess. PMID:23781887
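A rough pandas sketch of the quartile pattern described above, on simulated rather than the study's data: grouping students by attained grade and comparing mean predicted versus attained scores makes overestimation by low performers and underestimation by high performers visible.

```python
# Illustrative only: mean predicted vs attained exam grade by performance
# quartile, on simulated data chosen to show the 'unskilled and unaware'
# pattern (low performers overestimate, high performers underestimate).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
attained = rng.uniform(30, 90, size=74)
predicted = 60 + 0.3 * (attained - 60) + rng.normal(0, 5, size=74)

df = pd.DataFrame({"attained": attained, "predicted": predicted})
df["quartile"] = pd.qcut(df["attained"], 4, labels=["bottom", "2nd", "3rd", "top"])
print(df.groupby("quartile", observed=True)[["predicted", "attained"]].mean().round(1))
```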
Azizi, Kourosh; Aghamolaei, Teamur; Parsa, Nader; Dabbaghmanesh, Tahereh
2014-07-01
The present study aimed to compare the self-assessments of faculty members teaching coursework in the school of public health at undergraduate, graduate, and postgraduate levels with students' evaluations of those faculty members' performance. The subjects in this cross-sectional study were the faculty members and students of the School of Public Health and Nutrition, Shiraz University of Medical Sciences, Shiraz, Iran. The data were collected using a socio-demographic information form and the faculty evaluation forms prepared by the Educational Development Center (EDC). The faculty members were assessed by the students in undergraduate and graduate classes. Among the study subjects, 23 faculty members filled out the self-assessment forms and were evaluated by their students. The data were then analyzed using SPSS version 14. A paired t-test was used to compare the students' evaluation of the faculty members' performance with the faculty members' self-assessment. The mean self-assessment score of the faculty members who taught undergraduate courses was 289.7±8.3, while the mean of the students' evaluation was 281.3±16.1; the difference was statistically significant (t=3.56, p=0.001). The mean self-assessment score of the faculty members who taught graduate courses was 269.0±9.7, while the mean of the students' evaluation was 265.7±14.6, but the difference was not statistically significant (t=1.09, p=0.28). Faculty members' perceptions of their teaching performance were closer to graduate students' evaluations than to undergraduate students' evaluations. This may reflect a better understanding of coursework at the graduate level compared to the undergraduate level. Faculty members may need to adjust teaching methods to improve students' performance and understanding, especially at the undergraduate level.
NASA Astrophysics Data System (ADS)
Chu, Man-Wai; Fung, Karen
2018-04-01
Canadian students experience many different assessments throughout their schooling (O'Connor 2011). There are many benefits to using a variety of assessment types, item formats, and science-based performance tasks in the classroom to measure the many dimensions of science education. Although using a variety of assessments is beneficial, it is unclear exactly what types, formats, and tasks are used in Canadian science classrooms. Additionally, since assessments are often administered to help improve student learning, this study identified assessments that may improve student learning as measured using achievement scores on a standardized test. Secondary analyses of the students' and teachers' responses to the questionnaire items in the Pan-Canadian Assessment Program were performed. The results of the hierarchical linear modeling analyses indicated that both students and teachers identified teacher-developed classroom tests or quizzes as the most common type of assessment used. Although this ranking was similar across the country, statistically significant differences among the provinces in the assessments used in science classrooms were also identified. The investigation of which assessment best predicted student achievement scores indicated that minds-on science performance-based tasks significantly explained 4.21% of the variance in student scores. However, mixed results were observed between student and teacher responses regarding tasks that required students to choose their own investigation and design their own experiment or investigation. Additionally, teachers who reported conducting more demonstrations of an experiment or investigation had students with lower scores.
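As a hedged illustration of the kind of two-level model behind such findings (the variable names and simulated data are hypothetical, and the actual PCAP analysis was more elaborate), students can be nested within classrooms with a classroom-level predictor for assessment use.

```python
# Illustrative two-level model (simulated data, hypothetical variable names),
# not the PCAP analysis itself: student science scores predicted by how often
# performance-based tasks are used, with a random intercept per classroom.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_classes, n_students = 40, 25
classroom = np.repeat(np.arange(n_classes), n_students)
task_freq = np.repeat(rng.uniform(0, 4, n_classes), n_students)    # classroom-level predictor
class_effect = np.repeat(rng.normal(0, 5, n_classes), n_students)  # random intercepts
score = 500 + 6 * task_freq + class_effect + rng.normal(0, 20, n_classes * n_students)

df = pd.DataFrame({"score": score, "task_freq": task_freq, "classroom": classroom})
fit = smf.mixedlm("score ~ task_freq", data=df, groups=df["classroom"]).fit()
print(fit.summary())
```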
ERIC Educational Resources Information Center
Klibanov, Olga M.; Dolder, Christian; Anderson, Kevin; Kehr, Heather A.; Woods, J. Andrew
2018-01-01
The impact of distance education via interactive videoconferencing on pharmacy students' performance in a course was assessed after implementation of a distance campus. Students filled out a "Student Demographic Survey" and a "Precourse Knowledge Assessment" at the start of the course and a "Postcourse Knowledge…
Predicting Students' Writing Performance on the NAEP from Student- and State-Level Variables
ERIC Educational Resources Information Center
Mo, Ya; Troia, Gary A.
2017-01-01
This study examines the relationship between students' demographic background and their experiences with writing at school, the alignment between state and National Assessment of Educational Progress (NAEP) direct writing assessments, and students' NAEP writing performance. The study utilizes primary data collection via content analysis of writing…
ERIC Educational Resources Information Center
Haughton, Noela A.; Keil, Virginia L.
2009-01-01
This article discusses the development and implementation of a technology-supported student teacher performance assessment that supports integration with a larger electronic assessment system. The authors spearheaded a multidisciplinary team to develop a comprehensive performance assessment based on the Pathwise framework. The team collaborated…
Impact of Student vs Faculty Facilitators on Motivational Interviewing Student Outcomes
Widder-Prewett, Rebecca; Cameron, Ginger; Anderson, Douglas; Pinkerton, Mark; Chen, Aleda M. H.
2017-01-01
Objective. To determine the impact of student or faculty facilitation on student self-assessed attitudes, confidence, and competence in motivational interviewing (MI) skills; actual competence; and evaluation of facilitator performance. Methods. Second-year pharmacy (P2) students were randomly assigned to a student or faculty facilitator for a four-hour, small-group practice of MI skills. MI skills were assessed in a simulated patient encounter with the mMITI (modified Motivational Interviewing Treatment Integrity) tool. Students completed a pre-post, 6-point, Likert-type assessment addressing the research objectives. Differences were assessed using a Mann-Whitney U test. Results. Student (N=44) post-test attitudes, confidence, perceived or actual competence, and evaluations of facilitator performance did not differ between faculty- and student-facilitated groups. Conclusion. Using pharmacy students as small-group facilitators did not affect student performance, and student facilitators were viewed as favorably as faculty facilitators. Using pharmacy students as facilitators can lessen faculty workload and provide an outlet for students to develop communication and facilitation skills that will be needed in future practice. PMID:28970608
Braend, Anja Maria; Gran, Sarah Frandsen; Frich, Jan C; Lindbaek, Morten
2010-01-01
Formative assessment of medical students' clinical performance during the general practice clerkship is necessary for learning consultation skills. Our aim was to triangulate feedback using patient questionnaires, written self-assessment and teachers' observation-based assessment, and to describe the content of this feedback. We developed StudentPEP, a 15-item version of EUROPEP, a tool for measuring patients' evaluation of quality in general practice. The teacher and student forms consisted of five StudentPEP items and open-ended questions asking what was done well and what needed improvement on four aspects. Quantitative scores were analyzed statistically. Free-text comments were analyzed and categorized into 'specific and concrete' versus 'general and unspecific'. One hundred seventy-three students returned data from 2643 consultations. Mean patient scores for the 15 items were 4.3-4.8 on a five-point Likert scale. Mean teacher scores were 4.4 on five items, while students' mean self-assessments were 3.6-3.8. In an analysis of 380 consultations, students were more specific and concrete in their self-evaluations than teachers were (p < 0.01). Patients scored students' performance high compared with the students' own self-assessments. Teachers' scores were in accordance with patients' scores. Teachers' written evaluations of students were often general. There is a potential for improving teachers' feedback in terms of more specific and concrete comments.
Does Formative Assessment Improve Student Learning and Performance in Soil Science?
ERIC Educational Resources Information Center
Kopittke, Peter M.; Wehr, J. Bernhard; Menzies, Neal W.
2012-01-01
Soil science students are required to apply knowledge from a range of disciplines to unfamiliar scenarios to solve complex problems. To encourage deep learning (with student performance an indicator of learning), a formative assessment exercise was introduced to a second-year soil science subject. For the formative assessment exercise, students…
ERIC Educational Resources Information Center
Wang, Ye; Gushta, Matthew
2013-01-01
The No Child Left Behind Act resulted in increased school-level implementation of assessment-based school interventions that aim to improve student performance. Diagnostic assessments are included among these interventions, designed to help teachers use evidence about student performance to modify and differentiate instruction and improve student…
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Tseng, Kuo-Hung; Lou, Shi-Jer
2012-01-01
This study explored the consistency and difference of teacher-, student self- and peer-assessment in the context of Web-based portfolio assessment. Participants were 72 senior high school students enrolled in a computer application course. Through the assessment system, the students performed portfolio creation, inspection, self- and…
Taglieri, Catherine A; Crosby, Steven J; Zimmerman, Kristin; Schneider, Tulip; Patel, Dhiren K
2017-06-01
Objective. To assess the effect of incorporating virtual patient activities in a pharmacy skills lab on student competence and confidence when conducting real-time comprehensive clinic visits with mock patients. Methods. Students were randomly assigned to a control or intervention group. The control group completed the clinic visit prior to completing the virtual patient activities. The intervention group completed the virtual patient activities prior to the clinic visit. Student proficiency was evaluated in the mock lab. All students completed additional exercises with the virtual patient and were subsequently assessed. Student impressions were assessed via a pre- and post-experience survey. Results. Student performance in conducting clinic visits was higher in the intervention group than in the control group. Overall student performance continued to improve in the subsequent module. There was no change in student confidence from pre- to post-experience. Student ratings of the ease of use and realism of the virtual patient simulation increased; however, ratings of the helpfulness of the virtual patient decreased. Despite the lower ratings of helpfulness, student performance improved. Conclusion. Virtual patient activities enhanced student performance during mock clinic visits. Students felt the virtual patient realistically simulated a real patient. Virtual patients may provide additional learning opportunities for students.
Towards an Operational Definition of Clinical Competency in Pharmacy
2015-01-01
Objective. To estimate the inter-rater reliability and accuracy of ratings of competence in student pharmacist/patient clinical interactions as depicted in videotaped simulations and to compare expert panelist and typical preceptor ratings of those interactions. Methods. This study used a multifactorial experimental design to estimate inter-rater reliability and accuracy of preceptors’ assessment of student performance in clinical simulations. The study protocol used nine 5-10 minute video vignettes portraying different levels of competency in student performance in simulated clinical interactions. Intra-Class Correlation (ICC) was used to calculate inter-rater reliability and Fisher exact test was used to compare differences in distribution of scores between expert and nonexpert assessments. Results. Preceptors (n=42) across 5 states assessed the simulated performances. Intra-Class Correlation estimates were higher for 3 nonrandomized video simulations compared to the 6 randomized simulations. Preceptors more readily identified high and low student performances compared to satisfactory performances. In nearly two-thirds of the rating opportunities, a higher proportion of expert panelists than preceptors rated the student performance correctly (18 of 27 scenarios). Conclusion. Valid and reliable assessments are critically important because they affect student grades and formative student feedback. Study results indicate the need for pharmacy preceptor training in performance assessment. The process demonstrated in this study can be used to establish minimum preceptor benchmarks for future national training programs. PMID:26089563
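As an illustration of the reliability statistic named above (not the study's code; the long-format columns and values are hypothetical), an intraclass correlation can be computed with the pingouin package.

```python
# Illustrative only: inter-rater reliability (ICC) for preceptor ratings of
# video vignettes, using long-format data with hypothetical column names.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "vignette":  ["v1", "v1", "v1", "v2", "v2", "v2", "v3", "v3", "v3"],
    "preceptor": ["p1", "p2", "p3", "p1", "p2", "p3", "p1", "p2", "p3"],
    "score":     [3, 3, 4, 1, 2, 1, 5, 4, 5],
})
icc = pg.intraclass_corr(data=ratings, targets="vignette",
                         raters="preceptor", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```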
It's All about Student Learning: Assessing Teacher Candidates' Ability to Impact P-12 Students
ERIC Educational Resources Information Center
Wise, A. E., Ed.; Ehrenberg, P., Ed.; Leibbrand, J., Ed.
2010-01-01
"It's All About Student Learning Assessing Teacher Candidates' Ability to Impact P-12 Students", provides practical assistance for institutions designing or revising assessment systems or individual assessments for use by units or programs. The publication includes performance assessments currently used by teacher preparation institutions and…
ERIC Educational Resources Information Center
Kelly, Dana; Nord, Christine Winquist; Jenkins, Frank; Chan, Jessica Ying; Kastberg, David
2013-01-01
The Program for International Student Assessment (PISA) is a system of international assessments that allows countries to compare outcomes of learning as students near the end of compulsory schooling. PISA core assessments measure the performance of 15-year-old students in mathematics, science, and reading literacy every 3 years. Coordinated by…
ERIC Educational Resources Information Center
Blom, Diana; Poole, Kim
2004-01-01
This paper discusses a project in which third-year undergraduate Performance majors were asked to assess their second-year peers. The impetus for launching the project came from some stirrings of discontent amongst a few students. Instead of finding the assessment of their peers a manageable task, most students found the breadth of musical focus,…
ERIC Educational Resources Information Center
Baron, Joan Boykoff, Ed.; Wolf, Dennie Palmer, Ed.
1996-01-01
These discussions of performance-based student assessment provide a record of a decade-long exploration of unsettled issues in assessment. The first section provides the elements of a basic rationale for performance-based assessment, while the second section contains accounts of efforts to develop new performance-based systems in one city and four…
The Reliability of In-Training Assessment when Performance Improvement Is Taken into Account
ERIC Educational Resources Information Center
van Lohuizen, Mirjam T.; Kuks, Jan B. M.; van Hell, Elisabeth A.; Raat, A. N.; Stewart, Roy E.; Cohen-Schotanus, Janke
2010-01-01
During in-training assessment students are frequently assessed over a longer period of time and therefore it can be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time and how this improvement affects the reliability of the…
Massachusetts Educational Assessment Program. Science and Ecology 1976-1977.
ERIC Educational Resources Information Center
Massachusetts State Dept. of Education, Boston.
Biology, Chemistry, Earth Science, Physics, and Ecology assessment instruments were administered to approximately 1,800 nine-year-old and 1,800 seventeen-year-old Massachusetts students. The nine-year-old students exceeded the performance of a national and international sampling of students. They equaled the performance of a sampling of students from…
Allen, Joseph; Gregory, Anne; Mikami, Amori; Lun, Janetta; Hamre, Bridget; Pianta, Robert
2017-01-01
Multilevel modeling techniques were used with a sample of 643 students enrolled in 37 secondary school classrooms to predict future student achievement (controlling for baseline achievement) from observed teacher interactions with students in the classroom, coded using the Classroom Assessment Scoring System—Secondary. After accounting for prior year test performance, qualities of teacher interactions with students predicted student performance on end-of-year standardized achievement tests. Classrooms characterized by a positive emotional climate, with sensitivity to adolescent needs and perspectives, use of diverse and engaging instructional learning formats, and a focus on analysis and problem solving were associated with higher levels of student achievement. Effects of higher quality teacher–student interactions were greatest in classrooms with fewer students. Implications for teacher performance assessment and teacher effects on achievement are discussed. PMID:28931966
How Does Student Performance on Formative Assessments Relate to Learning Assessed by Exams?
ERIC Educational Resources Information Center
Smith, Gary
2007-01-01
A retrospective analysis examines the relationships between formative assessments and exam grades in two undergraduate geoscience courses. Pair and group-work grades correlate weakly with individual exam grades. Exam performance correlates to individual, weekly online assessments. Student attendance and use of assessment feedback are also…
ERIC Educational Resources Information Center
Hawthorne, Katrice A.; Bol, Linda; Pribesh, Shana; Suh, Yonghee
2015-01-01
Increased demands for accountability have placed an emphasis on assessment of student learning outcomes. At the post-secondary level, many of the assessments are considered low-stakes, as student performance is linked to few, if any, individual consequences. Given the prevalence of low-stakes assessment of student learning, research that…
ERIC Educational Resources Information Center
Thompson, Andrew R.; Braun, Mark W.; O'Loughlin, Valerie D.
2013-01-01
Curricular reform is a widespread trend among medical schools. Assessing the impact that pedagogical changes have on students is a vital step in review process. This study examined how a shift from discipline-focused instruction and assessment to integrated instruction and assessment affected student performance in a second-year medical school…
Assessment Timing: Student Preferences and Its Impact on Performance
ERIC Educational Resources Information Center
McManus, Richard
2016-01-01
Students on a first year undergraduate economics module were given the choice of when to sit their first assessment in the subject in order to determine both preferences over assessment timing, and the impact of timing on performance. Clear preferences of having this option were shown (only 2% of students stated to be indifferent) with those more…
McAllister, Sue; Lincoln, Michelle; Ferguson, Allison; McAllister, Lindy
2013-01-01
Valid assessment of health science students' ability to perform in the real world of workplace practice is critical for promoting quality learning and ultimately certifying students as fit to enter the world of professional practice. Current practice in performance assessment in the health sciences field has been hampered by multiple issues regarding assessment content and process. Evidence for the validity of scores derived from assessment tools is usually evaluated against traditional validity categories, with reliability evidence privileged over validity, resulting in the paradoxical effect of compromising the assessment validity and the learning processes the assessments seek to promote. Furthermore, the dominant statistical approaches used to validate scores from these assessments fall under the umbrella of classical test theory. This paper reports on the successful national development and validation of measures derived from an assessment of Australian speech pathology students' performance in the workplace. Validation of these measures considered each of Messick's interrelated validity evidence categories and included using evidence generated through Rasch analyses to support score interpretation and related action. This research demonstrated that it is possible to develop an assessment of real, complex, work-based performance of speech pathology students that generates valid measures without compromising the learning processes the assessment seeks to promote. The process described provides a model for other health professional education programs to trial.
Impact of hybrid delivery of education on student academic performance and the student experience.
Congdon, Heather Brennan; Nutter, Douglas A; Charneski, Lisa; Butko, Peter
2009-11-12
To compare student academic performance and the student experience in the first-year doctor of pharmacy (PharmD) program between the main and newly opened satellite campuses of the University of Maryland. Student performance indicators including graded assessments, course averages, cumulative first-year grade point average (GPA), and introductory pharmacy practice experience (IPPE) evaluations were analyzed retrospectively. Student experience indicators were obtained via an online survey instrument and included involvement in student organizations; time-budgeting practices; and stress levels and their perceived effect on performance. Graded assessments, course averages, GPA, and IPPE evaluations were indistinguishable between campuses. Students' time allocation was not different between campuses, except for time spent attending class and watching lecture videos. There was no difference between students' stress levels at each campus. The implementation of a satellite campus to expand pharmacy education yielded academic performance and student engagement comparable to those from traditional delivery methods.
A study of performance assessment task organization in high school optics
NASA Astrophysics Data System (ADS)
Zawicki, Joseph Leo
2002-01-01
This investigation was undertaken to validate three performance assessment tasks in high school physics. The tasks that were studied were developed around three organizational models of performance assessments: integrated, independent and surrogate. The integrated model required students to answer questions, make observations and demonstrate skills related to the index of refraction of a particular material. All of the questions and activities the students completed were related to a sample of a particular plastic that was the focus of this task. The independent model is analogous to the station model that is currently used on three New York State assessments: the Grade 4 Elementary Science Program Evaluation Test, the Intermediate Level Science (ILS) Test, and the Physical Setting: Earth Science Regents Exam. Students took measurements related to the index of refraction of a plastic sample that was the focus of the initial portion of this task; the remaining questions on the assessment were generally related to the concept of the index of refraction but did not refer back to the initial sample. The final task organization followed the surrogate model. In this model, students reviewed data that had been collected and analyzed by other (fictitious) students. The students completing this task were asked to review the work presented on this assessment for errors; they evaluated the conclusions and statements presented on the assessment. Students were also asked to determine whether the student work was acceptable or whether the investigation should be repeated. Approximately 300 students from urban, suburban and rural districts across Western New York State participated in the study. The tasks were administered during the spring semester of the 2000-2001 school year. The participating schools had at least covered the topic of refraction, both in classroom lectures and in laboratory activities. Each student completed only one form of the task: either the integrated, the independent or the surrogate form. A set of ten questions, compiled from past New York State Regents Examinations in Physics, was used as an additional measurement of student conceptual understanding. This question set was identified as the "Optics Baseline Test" (OBT). Additionally, classroom teachers ranked the academic performance of each of the students in their classroom on the outcomes of the physics course; these rankings were compared with student scores on the performance assessment tasks. The process skills incorporated within the individual questions on each task were reviewed by a panel of expert teachers. Student scores on the tasks themselves were examined using a principal component analysis. This analysis provided support for the process skill subtests organized around the general process skills of planning, performing, and reasoning. Scoring guides and inter-rater reliabilities were established for each task. The reliabilities for tasks, subtests and questions were fairly high, indicating adequate task reliability. Correlations between student performance on the individual tasks and the OBT were not significant. Teacher ranking of student achievement in individual classrooms also failed to correlate significantly with student performance on tasks. The lack of correlation could be attributed to several factors, including (among others) a wide range of student opportunities to learn across the seven schools in the sample.
As has been reported in the performance assessment literature, there were no significant differences between the performance of male and female students. (Abstract shortened by UMI.)
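A minimal sketch of the kind of principal component analysis described above, run on a simulated students-by-items score matrix (the data, item count, and component count are hypothetical), to look for components corresponding to planning, performing, and reasoning items.

```python
# Illustrative only: PCA on a hypothetical students-by-items score matrix to
# explore whether items group into process-skill components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
scores = rng.integers(0, 4, size=(300, 12)).astype(float)  # 300 students, 12 items

pca = PCA(n_components=3)
components = pca.fit_transform(StandardScaler().fit_transform(scores))
print(pca.explained_variance_ratio_.round(3))
print(pca.components_.round(2))  # loadings of the 12 items on 3 components
```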
Alnasir, F A; Jaradat, A A
2011-08-01
To graduate good doctors, medical schools should adopt proper procedures for selecting among applicants. When selecting students, many medical colleges focus solely on academic achievement on high school examinations, which does not reflect all important attributes of a student. For several years, the College of Medicine and Medical Sciences of the Arabian Gulf University has administered the AGU-MCAT (Arabian Gulf University Medical College Assessment Test) for screening student applicants. This study aimed to assess the ability of the AGU-MCAT to predict students' performance during their first year of college study, as an example of one school's multi-dimensional admissions screening process. The AGU-MCAT is made up of three parts: a written test on science, a test of students' English language skills, and an interview. In the first part, students' science knowledge is tested with 100 multiple-choice questions. The English exam assesses students' English reading and listening skills. Lastly, students are interviewed by two faculty members and one senior student to assess their personal qualities. The 138 students who passed the AGU-MCAT in September 2008 and matriculated in the school were studied. Their performance during Year One, including their performance on exams in the various disciplines, was compared with their achievement on the three AGU-MCAT components. The AGU-MCAT total mark and its science component had the strongest linear relationship to students' performance in the various disciplines in Year One, and the strongest predictor of students' performance at the end of Year One was the AGU-MCAT science test (R2 = 45.5%). Students' grades in high school did not predict their achievement in Year One. The AGU-MCAT used to screen applicants to the school thus also predicts students' performance during their first year of medical school.
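A rough sketch of the reported prediction (the simulated data and variable names are hypothetical; the R2 of 45.5% comes from the study, not from this code): an ordinary least-squares regression of the Year One average on the AGU-MCAT science score.

```python
# Illustrative only (simulated data, hypothetical variable names): regress the
# Year One average grade on the AGU-MCAT science score and report R-squared.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
science = rng.uniform(50, 95, 138)                     # AGU-MCAT science scores
year_one = 20 + 0.6 * science + rng.normal(0, 6, 138)  # Year One averages

df = pd.DataFrame({"science": science, "year_one": year_one})
fit = smf.ols("year_one ~ science", data=df).fit()
print(f"R-squared: {fit.rsquared:.3f}")
print(fit.params.round(2))
```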
Effect of students' learning styles on classroom performance in problem-based learning.
Alghasham, Abdullah A
2012-01-01
Since problem-based learning (PBL) sessions require a combination of active discussion, group interaction, and inductive and reflective thinking, students with different learning styles can be expected to perform differently in the PBL sessions. Using the Learning Style Inventory Questionnaire, students were divided into separate active and reflective learner groups. Tutors were asked to observe and assess the students' behavioral performance during the PBL sessions for a period of 5 weeks. A questionnaire of 24 items was developed to assess students' behavioral performance in PBL sessions. Active students tended to use multiple activities to obtain the needed information, were better adjusted to the group norms and regulations, and were more skillful in using reasoning and problem-solving skills and in participating in discussion. On the other hand, reflective students used independent study more, listened actively and carefully to others, and used previously acquired information in the discussion more frequently. Formative assessment quizzes did not indicate better performance by either group. There were no significant gender differences in PBL behavioral performance or quiz scores. Active and reflective learners differ in PBL class behavioral performance but not in formative assessment. We recommend that students should be informed about their learning style and should learn strategies to compensate for any deficiencies in PBL sessions through self-study. Also, educational planners should ensure an adequate mix of students with different learning styles in the PBL groups to achieve the desired PBL objectives.
Ahlin, C; Klang-Söderkvist, B; Johansson, E; Björkholm, M; Löfmark, A
2017-03-01
Venepuncture and the insertion of peripheral venous catheters are common tasks in health care, and training in these procedures is included in nursing programmes. Evidence of nursing students' knowledge and skills in these procedures is limited. The main aim of this study was to assess nursing students' knowledge and skills when performing venepuncture and inserting peripheral venous catheters. Potential associations between level of knowledge and skills, self-training, self-efficacy, and demographic characteristics were also investigated. The assessment was performed by lecturers at a university college in Sweden using the two previously tested instruments "Assess Venepuncture" and "Assess Peripheral Venous Catheter Insertion". Between 81% and 100% of steps were carried out correctly by the students. The step with the highest rating was "Uses gloves", and the step with the lowest rating was "Informs the patient about the possibility of obtaining local anaesthesia". Significant correlations between degree of self-training and correct performance were found in the group of students who registered their self-training. No associations between demographic characteristics and correct performance were found. Assessing that students have achieved adequate levels of knowledge and skills in these procedures at different stages of nursing education is important for preventing complications and supporting patient safety. Copyright © 2017 Elsevier Ltd. All rights reserved.
Casey, Petra M; Palmer, Brian A; Thompson, Geoffrey B; Laack, Torrey A; Thomas, Matthew R; Hartz, Martha F; Jensen, Jani R; Sandefur, Benjamin J; Hammack, Julie E; Swanson, Jerry W; Sheeler, Robert D; Grande, Joseph P
2016-04-27
Evidence suggests that poor performance on standardized tests before and early in medical school is associated with poor performance on standardized tests later in medical school and beyond. This study aimed to explore relationships between standardized examination scores (before and during medical school) and test and clinical performance across all core clinical clerkships. We evaluated characteristics of 435 students at Mayo Medical School (MMS) who matriculated from 2000 to 2009 and for whom undergraduate grade point average, Medical College Admission Test (MCAT) score, medical school standardized tests (United States Medical Licensing Examination [USMLE] 1 and 2; National Board of Medical Examiners [NBME] subject examination), and faculty assessments were available. We assessed the correlation between scores and assessments and determined USMLE 1 cutoffs predictive of poor performance (≤10th percentile) on the NBME examinations. We also compared the mean faculty assessment scores of MMS students vs visiting students, and for the NBME, we determined the percentage of MMS students who scored at or below the tenth percentile of first-time national examinees. MCAT scores correlated robustly with USMLE 1 and 2, and USMLE 1 and 2 independently predicted NBME scores in all clerkships. USMLE 1 cutoffs corresponding to poor NBME performance ranged from 220 to 223. USMLE 1 scores were similar among MMS and visiting students. For most academic years and clerkships, NBME scores were similar for MMS students and all first-time examinees. MCAT, USMLE 1 and 2, and subsequent clinical performance parameters were correlated with NBME scores across all core clerkships. Even more interestingly, faculty assessments correlated with NBME scores, affirming patient care as examination preparation. USMLE 1 scores identified students at risk of poor performance on NBME subject examinations, facilitating and supporting implementation of remediation before the clinical years. MMS students were representative of medical students across the nation.
ERIC Educational Resources Information Center
Webb, Noreen M.; Nemer, Kariane Mari; Zuniga, Stephen
2002-01-01
Studied the effects of group ability composition (homogeneous versus heterogeneous) on group processes and outcomes for high-ability students completing science assessments. Results for 83 high ability students show the quality of group functioning serves as the strongest predictor of high-ability students' performance and explained much of the…
ERIC Educational Resources Information Center
Sanchez, Maria Teresa; Ehrlich, Stacy; Midouhas, Emily; O'Dwyer, Laura
2009-01-01
Massachusetts policymakers recently expressed a desire to better understand Hispanic student achievement patterns in their state. Scores on the Massachusetts Comprehensive Assessment System (MCAS) tests have consistently revealed a gap in performance between Hispanic students and students from other subgroups, a gap corresponding to national…
The Use of a Performance Assessment for Identifying Gifted Lebanese Students: Is DISCOVER Effective?
ERIC Educational Resources Information Center
Sarouphim, Ketty M.
2009-01-01
The purpose of this study was to investigate the effectiveness of DISCOVER, a performance-based assessment, in identifying gifted Lebanese students. The sample consisted of 248 students (121 boys, 127 girls) from Grades 3-5 at two private schools in Beirut, Lebanon. Students were administered DISCOVER and the Raven Standard Progressive Matrices…
NASA Astrophysics Data System (ADS)
Cushing, Patrick Ryan
This study compared the performance of high school students on laboratory assessments. Thirty-four high school students who were enrolled in the second semester of a regular biology class or had completed the biology course the previous semester participated in this study. They were randomly assigned to examinations of two formats, performance-task and traditional multiple-choice, from two content areas: use of a compound light microscope, and diffusion. Students were directed to think aloud as they performed the assessments. Additional verbal data were obtained during interviews following the assessment. The tape-recorded narrative data were analyzed for type and diversity of knowledge and skill categories, and percentage of in-depth processing demonstrated. While overall mean scores on the assessments were low, elicited statements provided additional insight into student cognition. Results indicated that a greater diversity of knowledge and skill categories was elicited by the two microscope assessments and by the two performance-task assessments. In addition, statements demonstrating in-depth processing were coded most frequently in narratives elicited during clinical interviews following the diffusion performance-task assessment. This study calls for individual teachers to design authentic assessment practices and apply them to daily classroom routines. Authentic assessment should be an integral part of the learning process and not merely an end result. In addition, teachers are encouraged to explicitly identify and model, through think-aloud methods, desired cognitive behaviors in the classroom.
A Simulated Peer-Assessment Approach to Improving Student Performance in Chemical Calculations
ERIC Educational Resources Information Center
Scott, Fraser J.
2014-01-01
This paper describes the utility of using simulated, rather than real, student solutions to problems within a peer-assessment setting and whether this approach can be used as a means of improving performance in chemical calculations. The study involved a small cohort of students, of two levels, who carried out a simulated peer-assessment as a…
ERIC Educational Resources Information Center
Pholphirul, Piriya
2017-01-01
Several research papers have assessed the long-term benefits of pre-primary education in terms of academic performance and labor market outcomes. This study analyzes data obtained from the Programme for International Student Assessment (PISA) to estimate the effects of preschool enrollment of Thai students on producing long-term benefits in their…
ERIC Educational Resources Information Center
Styles, Irene; Wildy, Helen; Pepper, Vivienne; Faulkner, Joanne; Berman, Ye'Elah
2014-01-01
The assessment of literacy and numeracy skills of students as they enter school for the first time is not yet established nation-wide in Australia. However, a large proportion of primary schools have chosen to assess their starting students on the Performance Indicators in Primary Schools-Baseline Assessment (PIPS-BLA). This series of three…
Evaluating Tasks for Performance-Based Assessments: Advice for Music Teachers
ERIC Educational Resources Information Center
Scott, Sheila
2004-01-01
Performance-based assessments allow teachers to systematically observe skills used or demonstrated by students when they create a product, construct a response, or make a presentation (McMillan 2001). These assessments are grounded in performance-based tasks that elicit students' responses in relation to the outcomes of instruction. The criteria…
Putting the Focus on Student Engagement: The Benefits of Performance-Based Assessment
ERIC Educational Resources Information Center
Barlowe, Avram; Cook, Ann
2016-01-01
For more than two decades, the New York Performance Standards Consortium, a coalition of 38 public high schools, has steered clear of high-stakes testing, which superficially assesses student learning. Instead, the consortium's approach relies on performance-based assessments--essays, research papers, science experiments, and high-level mathematical…
TIMSS 2011 Science Assessment Results: A Review of Ghana's Performance
ERIC Educational Resources Information Center
Buabeng, Isaac; Owusu, Kofi Acheaw; Ntow, Forster Danso
2014-01-01
This paper reviews Ghana's performance in the TIMSS 2011 survey in comparison with other African and some high-performing countries which participated in the TIMSS assessment. Students' achievement in the science content areas assessed was summarized and teacher preparation constructs of teachers of the students who took part in the assessment…
NASA Astrophysics Data System (ADS)
Peterman, Karen; Cranston, Kayla A.; Pryor, Marie; Kermish-Allen, Ruth
2015-11-01
This case study was conducted within the context of a place-based education project that was implemented with primary school students in the USA. The authors and participating teachers created a performance assessment of standards-aligned tasks to examine 6-10-year-old students' graph interpretation skills as part of an exploratory research project. Fifty-five students participated in a performance assessment interview at the beginning and end of a place-based investigation. Two forms of the assessment were created and counterbalanced within class at pre and post. In situ scoring was conducted such that responses were scored as correct versus incorrect during the assessment's administration. Criterion validity analysis demonstrated an age-level progression in student scores. Tests of discriminant validity showed that the instrument detected variability in interpretation skills across each of three graph types (line, bar, dot plot). Convergent validity was established by correlating in situ scores with those from the Graph Interpretation Scoring Rubric. Students' proficiency with interpreting different types of graphs matched expectations based on age and the standards-based progression of graphs across primary school grades. The assessment tasks were also effective at detecting pre-post gains in students' interpretation of line graphs and dot plots after the place-based project. The results of the case study are discussed in relation to the common challenges associated with performance assessment. Implications are presented in relation to the need for authentic and performance-based instructional and assessment tasks to respond to the Common Core State Standards and the Next Generation Science Standards.
ERIC Educational Resources Information Center
Lyons, Susan; Qiu, Yuxi
2017-01-01
This field report from 2017's National Conference on Student Assessment shares possibilities for flexibility and innovation in assessment and accountability made possible by the Every Student Succeeds Act.
ERIC Educational Resources Information Center
McKevitt, Conor Thomas
2016-01-01
Assessment is one of the most important elements of student life and significantly shapes their learning. Consequently, tutors need to ensure that student awareness regarding assessment is promoted. Students should get the opportunity to practise assessing work and receive tutor feedback so that they might improve on both the work and their…
NASA Astrophysics Data System (ADS)
Guzzomi, Andrew L.; Male, Sally A.; Miller, Karol
2017-05-01
Engineering educators should motivate and support students in developing not only technical competence but also professional competence, including commitment to excellence. We developed an authentic assessment to improve students' understanding of the importance of 'perfection' in engineering, whereby '50% good enough' is not acceptable in industry. Subsequently, we aimed to motivate them to practise performing at their best when they practise engineering. Students in a third-year mechanical and mechatronic engineering unit completed a team design project designed with authentic assessment features to replicate industry expectations and a novel marking scheme to encourage the pursuit of excellence. We report mixed responses from students. Students' ratings of their levels of effort on this assessment indicate that many perceived a positive influence on their effort. However, students' comments included several that were consistent with students experiencing the assessment as alienating.
ERIC Educational Resources Information Center
Bain, Lisa Z.
2012-01-01
There are many different delivery methods used by institutions of higher education. These include traditional, hybrid, and online course offerings. The comparisons of these typically use final grade as the measure of student performance. This research study looks behind the final grade and compares student performance by assessment type, core…
ERIC Educational Resources Information Center
Chu, Man-Wai; Babenko, Oksana; Cui, Ying; Leighton, Jacqueline P.
2014-01-01
The study examines the role that perceptions or impressions of learning environments and assessments play in students' performance on a large-scale standardized test. Hierarchical linear modeling (HLM) was used to test aspects of the Learning Errors and Formative Feedback model to determine how much variation in students' performance was explained…
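The study above reports a hierarchical linear modeling (HLM) analysis of how perceptions of learning environments and assessments relate to standardized-test performance. As a rough illustration only, the sketch below fits a two-level random-intercept model (students nested in classrooms) with a simulated dataset; the variable names, the classroom grouping, and the simulated data are assumptions for illustration, not details from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data: students nested in classrooms, a perception score,
# and a test score partly driven by perception and partly by classroom effects.
rng = np.random.default_rng(0)
n_classrooms, n_students = 20, 25
classroom = np.repeat(np.arange(n_classrooms), n_students)
perception = rng.normal(0, 1, size=classroom.size)
class_effect = rng.normal(0, 2, size=n_classrooms)[classroom]
test_score = 60 + 3 * perception + class_effect + rng.normal(0, 5, size=classroom.size)

df = pd.DataFrame({"test_score": test_score, "perception": perception, "classroom": classroom})

# Random intercept for classroom; fixed effect for students' perception of assessments.
model = smf.mixedlm("test_score ~ perception", data=df, groups=df["classroom"])
result = model.fit()
print(result.summary())  # fixed-effect estimate for perception and the classroom variance component
```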
ERIC Educational Resources Information Center
Escudier, M. P.; Newton, T. J.; Cox, M. J.; Reynolds, P. A.; Odell, E. W.
2011-01-01
This study compared higher education dental undergraduate student performance in online assessments with performance in traditional paper-based tests and investigated students' perceptions of the fairness and acceptability of online tests; performance was found to be comparable. The project design involved two parallel cross-over trials, one in…
A Qualitative Analysis of Narrative Preclerkship Assessment Data to Evaluate Teamwork Skills.
Dolan, Brigid M; O'Brien, Celia Laird; Cameron, Kenzie A; Green, Marianne M
2018-04-16
Construct: Students entering the health professions require competency in teamwork. Although many teamwork curricula and assessments exist, studies have not demonstrated robust longitudinal assessment of preclerkship students' teamwork skills and attitudes. Assessment portfolios may serve to fill this gap, but it is unknown how narrative comments within portfolios describe student teamwork behaviors. We performed a qualitative analysis of narrative data in 15 assessment portfolios. Student portfolios were randomly selected from 3 groups stratified by quantitative ratings of teamwork performance gathered from small-group and clinical preceptor assessment forms. Narrative data included peer and faculty feedback from these same forms. Data were coded for teamwork-related behaviors using a constant comparative approach combined with an identification of the valence of the coded statements as either "positive observation" or "suggestion for improvement." Eight codes related to teamwork emerged: attitude and demeanor, information facilitation, leadership, preparation and dependability, professionalism, team orientation, values team member contributions, and nonspecific teamwork comments. The frequency of codes and valence varied across the 3 performance groups, with students in the low-performing group receiving more suggestions for improvement across all teamwork codes. Narrative data from assessment portfolios included specific descriptions of teamwork behavior, with important contributions provided by both faculty and peers. A variety of teamwork domains were represented. Such feedback as collected in an assessment portfolio can be used for longitudinal assessment of preclerkship student teamwork skills and attitudes.
Impact of a Paper vs Virtual Simulated Patient Case on Student-Perceived Confidence and Engagement
Gallimore, Casey E.; Pitterle, Michael; Morrill, Josh
2016-01-01
Objective. To evaluate online case simulation vs a paper case on student confidence and engagement. Design. Students enrolled in a pharmacotherapy laboratory course completed a patient case scenario as a component of an osteoarthritis laboratory module. Two laboratory sections used a paper case (n=53); three sections used an online virtual case simulation (n=81). Student module performance was assessed through a submitted subjective objective assessment plan (SOAP) note. Students completed pre/post surveys to measure self-perceived confidence in providing medication management. The simulation group completed postmodule questions related to realism and engagement of the online virtual case simulation. Group assessments were performed using chi-square and Mann-Whitney tests. Assessment. A significant increase in all 13 confidence items was seen in both student groups following completion of the laboratory module. The simulation group had a greater increase in confidence compared to the paper group in assessing medication efficacy and documenting a thorough assessment. Comparing the online virtual simulation to a paper case, students agreed the learning experience increased interest, enjoyment, relevance, and realism. The simulation group performed better on the subjective SOAP note domain, though no differences in total SOAP note scores were found between the two groups. Conclusion. Virtual case simulations result in increased student engagement and may lead to improved documentation performance in the subjective domain of SOAP notes. However, virtual patient cases may offer limited benefit over paper cases in improving overall student self-confidence to provide medication management. PMID:26941442
de Souza Teixeira, Carla Regina; Kusumota, Luciana; Alves Pereira, Marta Cristiane; Merizio Martins Braga, Fernanda Titareli; Pirani Gaioso, Vanessa; Mara Zamarioli, Cristina; Campos de Carvalho, Emilia
2014-01-01
To compare the level of anxiety and performance of nursing students when performing a clinical simulation through the traditional method of assessment with the presence of an evaluator and through a filmed assessment without the presence of an evaluator. Controlled trial with the participation of 20 students from a Brazilian public university who were randomly assigned to one of two groups: a) assessment through the traditional method with the presence of an evaluator; or b) filmed assessment. The level of anxiety was assessed using the Zung test and performance was measured based on the number of correct answers. Averages of 32 and 27 were obtained on the anxiety scale by the group assessed through the traditional method before and after the simulation, respectively, while the filmed group obtained averages of 33 and 26; the final scores correspond to mild anxiety. Even though there was a statistically significant reduction in the intra-group scores before and after the simulation, there was no difference between the groups. As for the performance assessments in the clinical simulation, the groups obtained similar percentages of correct answers (83% in the traditional assessment and 84% in the filmed assessment) without statistically significant differences. Filming can be used and encouraged as a strategy to assess nursing undergraduate students.
Morton, David A; Colbert-Getz, Jorie M
2017-03-01
The flipped classroom (FC) model has emerged as an innovative solution to improve student-centered learning. However, studies measuring student performance of material in the FC relative to the lecture classroom (LC) have shown mixed results. An aim of this study was to determine if the disparity in results of prior research is due to level of cognition (low or high) needed to perform well on the outcome, or course assessment. This study tested the hypothesis that (1) students in an FC would perform better than students in an LC on an assessment requiring higher cognition and (2) there would be no difference in performance for an assessment requiring lower cognition. To test this hypothesis, performance on 28 multiple-choice anatomy items that were part of a final examination was compared between two classes of first-year medical students at the University of Utah School of Medicine. Items were categorized as requiring knowledge (low cognition), application, or analysis (high cognition). Thirty hours of anatomy content was delivered in LC format to 101 students in 2013 and in FC format to 104 students in 2014. Mann-Whitney tests indicated FC students performed better than LC students on analysis items, U = 4243.00, P = 0.030, r = 0.19, but there were no differences in performance between FC and LC students for knowledge, U = 5002.00, P = 0.720 or application, U = 4990.00, P = 0.700, items. The FC may benefit retention when students are expected to analyze material. Anat Sci Educ 10: 170-175. © 2016 American Association of Anatomists.
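As a minimal sketch of the kind of comparison reported above, the snippet below runs a Mann-Whitney U test on two cohorts' item scores and reports a rank-biserial correlation as a common effect size. The synthetic score arrays and the choice of effect size are illustrative assumptions, not the study's data or exact method.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic placeholder scores on high-cognition ("analysis") items for two cohorts.
rng = np.random.default_rng(0)
fc_scores = rng.integers(0, 11, size=104)  # flipped-classroom cohort
lc_scores = rng.integers(0, 11, size=101)  # lecture-classroom cohort

u_stat, p_value = mannwhitneyu(fc_scores, lc_scores, alternative="two-sided")

# Rank-biserial correlation: a common effect size for the Mann-Whitney U test.
n1, n2 = len(fc_scores), len(lc_scores)
rank_biserial = 1 - (2 * u_stat) / (n1 * n2)
print(f"U = {u_stat:.2f}, p = {p_value:.3f}, rank-biserial r = {rank_biserial:.2f}")
```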
Etheridge, Kierstan; DeLellis, Teresa
2017-01-01
Objective. To describe the redesigned assessment plan for a patient safety and informatics course and assess student pharmacist performance and perceptions. Methods. The final examination of a patient safety course was redesigned from traditional multiple choice and short answer to team-based, open-ended, and case-based. Faculty for each class session developed higher level activities, focused on developing key skills or attitudes deemed essential for practice, for a progressive patient case consisting of nine activities. Student performance and perceptions were analyzed with pre- and post-surveys using 5-point scales. Results. Mean performance on the examination was 93.6%; median scores for each assessed course outcome ranged from 90% to 100%. Eighty-five percent of students completed both surveys. Confidence performing skills and demonstrating attitudes improved for each item on post-survey compared with pre-survey. Eighty-one percent of students indicated the experience of taking the examination was beneficial for their professional development. Conclusion. A team, case-based examination was associated with high student performance and improved self-confidence in performing medication safety-related skills. PMID:28970618
Pfeiffer, Carol A; Palley, Jane E; Harrington, Karen L
2010-07-01
The assessment of clinical competence and the impact of training in ambulatory settings are two issues of importance in the evaluation of medical student performance. This study compares the clinical skills performance of students placed in three types of community preceptors' offices (pediatrics, medicine, family medicine) on yearly clinical skills assessments with standardized patients. Our goal was to see if the site specialty impacted clinical performance. The students in the study were completing a 3-year continuity preceptorship at a site representing one of the disciplines. Their performance on the four clinical skills assessments was compared. There was no significant difference in history taking, physical exam, communication, or clinical reasoning in any year (ANOVA, p ≤ .05). There was a small but significant difference in performance on a measure of interpersonal and interviewing skills during Years 1 and 2. The site specialty of an early clinical experience does not have a significant impact on performance of most of the skills measured by the assessments.
Olmsted, Jodi L
2014-10-01
This ten-year, longitudinal examination of a dental hygiene distance education (DE) program considered student performance on standard benchmark assessments as direct measures of institutional effectiveness. The aim of the study was to determine if students face-to-face in a classroom with an instructor performed differently from their counterparts in a DE program, taking courses through the alternative delivery system of synchronous interactive television (ITV). This study used students' grade point averages and National Board Dental Hygiene Examination scores to assess the impact of ITV on student learning, filling a crucial gap in current evidence. The study's research population consisted of 189 students who graduated from one dental hygiene program between 1997 and 2006. One hundred percent of the institution's data files for these students were used: 117 students were face-to-face with the instructor, and seventy-two received instruction through the ITV system. The results showed that, from a year-by-year perspective, no statistically significant performance differences were apparent between the two student groups when t-tests were used for data analysis. The DE system examined was considered effective for delivering education if similar performance outcomes were the evaluation criteria used for assessment.
Assessing Student Reasoning in Upper-Division Electricity and Magnetism at Oregon State University
ERIC Educational Resources Information Center
Zwolak, Justyna P.; Manogue, Corinne A.
2015-01-01
Standardized assessment tests that allow researchers to compare the performance of students under various curricula are highly desirable. There are several research-based conceptual tests that serve as instruments to assess and identify students' difficulties in lower-division courses. At the upper-division level assessing students' difficulties…
Should Athletic Training Educators Utilize Grades When Evaluating Student Clinical Performance?
ERIC Educational Resources Information Center
Scriber, Kent; Gray, Courtney; Millspaugh, Rose
2010-01-01
Objective: To explore and address some of the challenges for assessing, interpreting, and grading athletic training students' clinical performance and to suggest athletic training educators consider using a more universal assessment method for professional consistency. Background: In years past students learned from teachers or mentors on an…
Interpreting Assessments of Student Learning in the Introductory Physics Classroom and Laboratory
NASA Astrophysics Data System (ADS)
Dowd, Jason Edward
Assessment is the primary means of feedback between students and instructors. However, to effectively use assessment, the ability to interpret collected information is essential. We present insights into three unique, important avenues of assessment in the physics classroom and laboratory. First, we examine students' performance on conceptual surveys. The goal of this research project is to better utilize the information collected by instructors when they administer the Force Concept Inventory (FCI) to students as a pre-test and post-test of their conceptual understanding of Newtonian mechanics. We find that ambiguities in the use of the normalized gain, g, may influence comparisons among individual classes. Therefore, we propose using stratagrams, graphical summaries of the fraction of students who exhibit "Newtonian thinking," as a clearer, more informative method of both assessing a single class and comparing performance among classes. Next, we examine students' expressions of confusion when they initially encounter new material. The goal of this research project is to better understand what such confusion actually conveys to instructors about students' performance and engagement. We investigate the relationship between students' self-assessment of their confusion over material and their performance, confidence in reasoning, pre-course self-efficacy and several other measurable characteristics of engagement. We find that students' expressions of confusion are negatively related to initial performance, confidence and self-efficacy, but positively related to final performance when all factors are considered together. Finally, we examine students' exhibition of scientific reasoning abilities in the instructional laboratory. The goal of this research project is to explore two inquiry-based curricula, each of which proposes a different degree of scaffolding. Students engage in sequences of these laboratory activities during one semester of an introductory physics course. We find that students who participate in the less scaffolded activities exhibit marginally stronger scientific reasoning abilities in distinct exercises throughout the semester, but exhibit no differences in the final, common exercises. Overall, we find that, although students demonstrate some enhanced scientific reasoning skills, they fail to exhibit or retain even some of the most strongly emphasized skills.
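Since the discussion above turns on the normalized gain, g, computed from FCI pre- and post-test scores, the short sketch below shows the class-level definition commonly attributed to Hake, with purely illustrative numbers. This is a generic formula, not the stratagram-based method proposed in the work described above.

```python
def normalized_gain(pre_percent: float, post_percent: float) -> float:
    """Fraction of the possible pre-to-post improvement that was actually realized
    (Hake's class-level normalized gain, for scores expressed as percentages)."""
    return (post_percent - pre_percent) / (100.0 - pre_percent)

# Illustrative example: a class moving from 45% to 70% on the FCI.
print(normalized_gain(45.0, 70.0))  # ~0.45, a medium-gain result in Hake's terms
```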
Mitra, Nilesh Kumar; Barua, Ankur
2015-03-03
The impact of web-based formative assessment practices on performance of undergraduate medical students in summative assessments is not widely studied. This study was conducted among third-year undergraduate medical students of a designated university in Malaysia to compare the effect, on performance in summative assessment, of repeated computer-based formative assessment with automated feedback with that of single paper-based formative assessment with face-to-face feedback. This quasi-randomized trial was conducted among two groups of undergraduate medical students who were selected by a stratified random technique from a cohort undertaking the Musculoskeletal module. The control group C (n = 102) was subjected to a paper-based formative MCQ test. The experimental group E (n = 65) was provided three online formative MCQ tests with automated feedback. The summative MCQ test scores for both these groups were collected after the completion of the module. In this study, no significant difference was observed between the mean summative scores of the two groups. However, Band 1 students from group E with higher entry qualification showed a higher mean score in the summative assessment. A trivial, but significant and positive correlation (r² = 0.328) was observed between the online formative test scores and summative assessment scores of group E. The proportionate increase of performance in group E was found to be almost double that of group C. The use of computer-based formative tests with automated feedback improved the performance of the students with better academic background in the summative assessment. Computer-based formative tests can be explored as an optional addition to the curriculum of a pre-clinical integrated medical program to improve the performance of the students with higher academic ability.
ERIC Educational Resources Information Center
Maryland State Dept. of Education. Baltimore. Div. of Planning, Results and Information Management.
One component of the Maryland School Performance Assessment Program (MSPAP) is the state's performance-based assessments, criterion-referenced tests that require students to apply what they know and can do to solve problems and display other higher-order thinking skills. This document helps parents, teachers, students, and other citizens…
ERIC Educational Resources Information Center
Zebehazy, Kim T.; Zigmond, Naomi; Zimmerman, George J.
2012-01-01
Introduction: This study investigated the use of accommodations and the performance of students with visual impairments and severe cognitive disabilities on the Pennsylvania Alternate System of Assessment (PASA), an alternate performance-based assessment. Methods: Differences in test scores on the most basic level (level A) of the PASA of 286…
Lin, Hsin-Hsin
2015-01-01
It has been noted worldwide that, while learning fundamental skills and facing skills assessments, nursing students seem to experience low confidence and high anxiety levels. Could simulation-based learning help to enhance students' self-efficacy and performance? Its effectiveness is largely unestablished. This study was conducted to provide a shared experience to give nurse educators confidence and an insight into how simulation-based teaching can fit into nursing skills learning. A pilot study was completed with 50 second-year undergraduate nursing students, and the main study included 98 students, using a pretest-posttest design. Data were gathered through four questionnaires and a performance assessment under careful controls such as previous experiences, lecturers' teaching skills, duration of teaching, the procedure of the skills performance assessment, and inter-rater reliability. The results showed that simulation-based learning significantly improved students' self-efficacy regarding skills learning and the skills performance that nurse educators wish students to acquire. However, technology anxiety, examiners' critical attitudes towards students' performance, and their unpredicted verbal and non-verbal expressions were found to be possible confounding factors. Simulation-based learning proved to have a powerful positive effect on students' achievement outcomes. Nursing skills learning is one area that can benefit greatly from this kind of teaching and learning method.
ERIC Educational Resources Information Center
Prat-Sala, Merce; Redford, Paul
2012-01-01
Self-efficacy beliefs have been identified as associated with students' academic performance. The present research assessed the relationship between two new self-efficacy scales (self-efficacy in reading [SER] and self-efficacy in writing [SEW]) and students' writing performance on a piece of assessed written coursework. Using data from first and…
Krasne, Sally; Wimmers, Paul F; Relan, Anju; Drake, Thomas A
2006-05-01
Formative assessments are systematically designed instructional interventions to assess and provide feedback on students' strengths and weaknesses in the course of teaching and learning. Despite their known benefits to student attitudes and learning, medical school curricula have been slow to integrate such assessments into the curriculum. This study investigates how performance on two different modes of formative assessment relate to each other and to performance on summative assessments in an integrated, medical-school environment. Two types of formative assessment were administered to 146 first-year medical students each week over 8 weeks: a timed, closed-book component to assess factual recall and image recognition, and an un-timed, open-book component to assess higher order reasoning including the ability to identify and access appropriate resources and to integrate and apply knowledge. Analogous summative assessments were administered in the ninth week. Models relating formative and summative assessment performance were tested using Structural Equation Modeling. Two latent variables underlying achievement on formative and summative assessments could be identified; a "formative-assessment factor" and a "summative-assessment factor," with the former predicting the latter. A latent variable underlying achievement on open-book formative assessments was highly predictive of achievement on both open- and closed-book summative assessments, whereas a latent variable underlying closed-book assessments only predicted performance on the closed-book summative assessment. Formative assessments can be used as effective predictive tools of summative performance in medical school. Open-book, un-timed assessments of higher order processes appeared to be better predictors of overall summative performance than closed-book, timed assessments of factual recall and image recognition.
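The abstract above describes a structural equation model in which a latent formative-assessment factor predicts a latent summative-assessment factor. The hedged sketch below shows what such a two-factor model with a structural path can look like in lavaan-style syntax, fit with the semopy package on simulated data; the package choice, indicator names, and simulated loadings are all assumptions for illustration, not the study's model or results.

```python
import numpy as np
import pandas as pd
from semopy import Model  # assumed dependency for SEM in Python

# Simulate a latent formative factor, a latent summative factor it predicts,
# and two observed indicators per factor (open- and closed-book scores).
rng = np.random.default_rng(0)
n = 146  # cohort size reported in the abstract
formative = rng.normal(size=n)
summative = 0.8 * formative + rng.normal(scale=0.6, size=n)
data = pd.DataFrame({
    "open_form": 0.9 * formative + rng.normal(scale=0.4, size=n),
    "closed_form": 0.7 * formative + rng.normal(scale=0.6, size=n),
    "open_summ": 0.9 * summative + rng.normal(scale=0.4, size=n),
    "closed_summ": 0.8 * summative + rng.normal(scale=0.5, size=n),
})

description = """
Formative =~ open_form + closed_form
Summative =~ open_summ + closed_summ
Summative ~ Formative
"""

model = Model(description)
model.fit(data)
print(model.inspect())  # factor loadings and the Formative -> Summative path estimate
```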
Kirwin, Jennifer; Greenwood, Kristin Curry; Rico, Janet; Nalliah, Romesh; DiVall, Margarita
2017-02-25
Objective. To design and implement a series of activities focused on developing interprofessional communication skills and to assess the impact of the activities on students' attitudes and achievement of educational goals. Design. Prior to the first pharmacy practice skills laboratory session, pharmacy students listened to a classroom lecture about team communication and viewed short videos describing the roles, responsibilities, and usual work environments of four types of health care professionals. In each of four subsequent laboratory sessions, students interacted with a different standardized health care professional role-played by a pharmacy faculty member who asked them a medication-related question. Students responded in verbal and written formats. Assessment. Student performance was assessed with a three-part rubric. The impact of the exercise was assessed by conducting pre- and post-intervention surveys and analyzing students' performance on relevant Center for the Advancement of Pharmacy Education (CAPE) outcomes. Survey results showed improvement in student attitudes related to team-delivered care. Students' performance on the problem solver and collaborator CAPE outcomes improved, while performance on the educator outcome worsened. Conclusions. The addition of an interprofessional communication activity with standardized health care professionals provided the opportunity for students to develop skills related to team communication. Students felt the activity was valuable and realistic; however, analysis of outcome achievement from the exercise revealed a need for more exposure to team communication skills.
Assessment Criteria for Competency-Based Education: A Study in Nursing Education
ERIC Educational Resources Information Center
Fastré, Greet M. J.; van der Klink, Marcel R.; Amsing-Smit, Pauline; van Merriënboer, Jeroen J.
2014-01-01
This study examined the effects of type of assessment criteria (performance-based vs. competency-based), the relevance of assessment criteria (relevant criteria vs. all criteria), and their interaction on secondary vocational education students' performance and assessment skills. Students on three programmes in the domain of nursing and care…
NASA Astrophysics Data System (ADS)
Apipah, S.; Kartono; Isnarto
2018-03-01
This research aims to analyze the quality of VAK (visual, auditory, kinesthetic) learning with self-assessment with respect to students' mathematical connection ability, and to analyze students' mathematical connection ability by learning style within the VAK learning model with self-assessment. The research applies a mixed-method design of the concurrent embedded type. The subjects were Grade VIII students from State Junior High School 9 Semarang with visual, auditory, and kinesthetic learning styles. Learning-style data were collected using questionnaires, mathematical connection ability data using tests, and self-assessment data using assessment sheets. The quality of learning was assessed qualitatively at the planning, implementation, and evaluation stages. The mathematical connection ability test results were analyzed quantitatively with a mean test, a completeness test, a mean difference test, and a proportional difference test. The results show that the VAK learning model produces good-quality learning from both qualitative and quantitative perspectives. Students with a visual learning style showed the highest mathematical connection ability, students with a kinesthetic learning style showed average mathematical connection ability, and students with an auditory learning style showed the lowest mathematical connection ability.
O'Mahony, Siobhain M; Sbayeh, Amgad; Horgan, Mary; O'Flynn, Siun; O'Tuathaigh, Colm M P
2016-07-08
An improved understanding of the relationship between anatomy learning performance and approaches to learning can lead to the development of a more tailored approach to delivering anatomy teaching to medical students. This study investigated the relationship between learning style preferences, as measured by the Visual, Aural, Read/write, and Kinesthetic (VARK) inventory style questionnaire and Honey and Mumford's learning style questionnaire (LSQ), and anatomy and clinical skills assessment performance at an Irish medical school. Additionally, mode of entry to medical school [undergraduate/direct-entry (DEM) vs. graduate-entry (GEM)] was examined in relation to individual learning style and assessment results. The VARK and LSQ were distributed to first and second year DEM, and first year GEM students. DEM students achieved higher clinical skills marks than GEM students, but anatomy marks did not differ between each group. Several LSQ style preferences were shown to be weakly correlated with anatomy assessment performance in a program- and year-specific manner. Specifically, the "Activist" style was negatively correlated with anatomy scores in DEM Year 2 students (rs = -0.45, P = 0.002). The "Theorist" style demonstrated a weak correlation with anatomy performance in DEM Year 2 (rs = 0.18, P = 0.003). Regression analysis revealed that, among the LSQ styles, the "Activist" was associated with poorer anatomy assessment performance (P < 0.05), while improved scores were associated with students who scored highly on the VARK "Aural" modality (P < 0.05). These data support the contention that individual student learning styles contribute little to variation in academic performance in medical students. Anat Sci Educ 9: 391-399. © 2016 American Association of Anatomists.
Impact of Hybrid Delivery of Education on Student Academic Performance and the Student Experience
Nutter, Douglas A.; Charneski, Lisa; Butko, Peter
2009-01-01
Objectives To compare student academic performance and the student experience in the first-year doctor of pharmacy (PharmD) program between the main and newly opened satellite campuses of the University of Maryland. Methods Student performance indicators including graded assessments, course averages, cumulative first-year grade point average (GPA), and introductory pharmacy practice experience (IPPE) evaluations were analyzed retrospectively. Student experience indicators were obtained via an online survey instrument and included involvement in student organizations; time-budgeting practices; and stress levels and their perceived effect on performance. Results Graded assessments, course averages, GPA, and IPPE evaluations were indistinguishable between campuses. Students' time allocation was not different between campuses, except for time spent attending class and watching lecture videos. There was no difference between students' stress levels at each campus. Conclusions The implementation of a satellite campus to expand pharmacy education yielded academic performance and student engagement comparable to those from traditional delivery methods. PMID:19960080
Utilization of a virtual patient for advanced assessment of student performance in pain management.
Smith, Michael A; Waite, Laura H
2017-09-01
To assess student performance and achievement of course objectives following the integration of a virtual patient case designed to promote active, patient-centered learning in a required pharmacy course. DecisionSim™ (Kynectiv, Inc., Chadsford, PA), a dynamic virtual patient platform, was used to implement an interactive patient case to augment pain management material presented during a didactic session in a pharmacotherapy course. Simulation performance data were collected and analyzed. Student exam performance on pain management questions was compared to student exam performance on nearly identical questions from a prior year when a paper-based case was used instead of virtual patient technology. Students who performed well on the virtual patient case performed better on exam questions related to patient assessment (p = 0.0244), primary pharmacological therapy (p = 0.0001), and additional pharmacological therapy (p = 0.0001). Overall exam performance did not differ between the two groups. However, students with exposure to the virtual patient case demonstrated significantly better performance on higher level Bloom's Taxonomy questions that required them to create pharmacotherapy regimens (p=0.0005). Students in the previous year (exposed only to a paper patient case) performed better in calculating conversions of opioids for patients (p = 0.0001). Virtual patient technology may enhance student performance on high-level Bloom's Taxonomy examination questions. This study adds to the current literature demonstrating the value of virtual patient technology as an active-learning strategy. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Biggam, John
There are many different ways of providing university students with feedback on their assessment performance, ranging from written checklists and handwritten commentaries to individual verbal feedback. Regardless of whether the feedback is summative or formative in nature, it is widely recognized that providing consistent, meaningful written feedback to students on assessment performance is not a simple task, particularly where a module is delivered by a team of staff. Typical student complaints about such feedback include: inconsistency of comment between lecturers; illegible handwriting; difficulty in relating feedback to assessment criteria; and vague remarks. For staff themselves, there is the problem that written comments, to be of any benefit to students, are immensely time-consuming. This paper illustrates, through a case study, the enormous benefits of Automated Assessment Feedback for staff and students. A clear strategy on how to develop an automated assessment feedback system, using the simplest of technologies, is provided.
Comparison of answer-until-correct and full-credit assessments in a team-based learning course.
Farland, Michelle Z; Barlow, Patrick B; Levi Lancaster, T; Franks, Andrea S
2015-03-25
To assess the impact of awarding partial credit to team assessments on team performance and on quality of team interactions using an answer-until-correct method compared to traditional methods of grading (multiple-choice, full-credit). Subjects were students from 3 different offerings of an ambulatory care elective course, taught using team-based learning. The control group (full-credit) consisted of those enrolled in the course when traditional methods of assessment were used (2 course offerings). The intervention group consisted of those enrolled in the course when answer-until-correct method was used for team assessments (1 course offering). Study outcomes included student performance on individual and team readiness assurance tests (iRATs and tRATs), individual and team final examinations, and student assessment of quality of team interactions using the Team Performance Scale. Eighty-four students enrolled in the courses were included in the analysis (full-credit, n=54; answer-until-correct, n=30). Students who used traditional methods of assessment performed better on iRATs (full-credit mean 88.7 (5.9), answer-until-correct mean 82.8 (10.7), p<0.001). Students who used answer-until-correct method of assessment performed better on the team final examination (full-credit mean 45.8 (1.5), answer-until-correct 47.8 (1.4), p<0.001). There was no significant difference in performance on tRATs and the individual final examination. Students who used the answer-until-correct method had higher quality of team interaction ratings (full-credit 97.1 (9.1), answer-until-correct 103.0 (7.8), p=0.004). Answer-until-correct assessment method compared to traditional, full-credit methods resulted in significantly lower scores for iRATs, similar scores on tRATs and individual final examinations, improved scores on team final examinations, and improved perceptions of the quality of team interactions.
Estimating learning outcomes from pre- and posttest student self-assessments: a longitudinal study.
Schiekirka, Sarah; Reinhardt, Deborah; Beißbarth, Tim; Anders, Sven; Pukrop, Tobias; Raupach, Tobias
2013-03-01
Learning outcome is an important measure for overall teaching quality and should be addressed by comprehensive evaluation tools. The authors evaluated the validity of a novel evaluation tool based on student self-assessments, which may help identify specific strengths and weaknesses of a particular course. In 2011, the authors asked 145 fourth-year students at Göttingen Medical School to self-assess their knowledge on 33 specific learning objectives in a pretest and posttest as part of a cardiorespiratory module. The authors compared performance gain calculated from self-assessments with performance gain derived from formative examinations that were closely matched to these 33 learning objectives. Eighty-three students (57.2%) completed the assessment. There was good agreement between performance gain derived from subjective data and performance gain derived from objective examinations (Pearson r=0.78; P<.0001) on the group level. The association between the two measures was much weaker when data were analyzed on the individual level. Further analysis determined a quality cutoff for performance gain derived from aggregated student self-assessments. When using this cutoff, the evaluation tool was highly sensitive in identifying specific learning objectives with favorable or suboptimal objective performance gains. The tool is easy to implement, takes initial performance levels into account, and does not require extensive pre-post testing. By providing valid estimates of actual performance gain obtained during a teaching module, it may assist medical teachers in identifying strengths and weaknesses of a particular course on the level of specific learning objectives.
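The study above compares, across learning objectives, the performance gain derived from student self-assessments with the gain derived from matched formative examinations, reporting a group-level Pearson correlation. The sketch below shows that style of analysis on synthetic placeholder data; the arrays, sample sizes, and scale are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic placeholders: one value per learning objective (33 objectives, as in the study),
# expressed as percentage-point gains.
rng = np.random.default_rng(1)
exam_gain = rng.uniform(0, 40, size=33)                       # gain derived from formative exams
self_assessed_gain = exam_gain + rng.normal(0, 8, size=33)    # self-reported gain, noisy but related

r, p = pearsonr(self_assessed_gain, exam_gain)
print(f"Pearson r = {r:.2f}, p = {p:.4g}")
```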
Zúñiga, Denisse; Mena, Beltrán; Oliva, Rose; Pedrals, Nuria; Padilla, Oslando; Bitran, Marcela
2009-10-01
The study of predictors of academic performance is relevant for medical education. Most studies of academic performance use global ratings as the outcome measure and do not evaluate the influence of the assessment methods. To model, by multivariate analysis, the academic performance of medical students considering, besides academic and demographic variables, the methods used to assess students' learning and their preferred modes of information processing. Two hundred seventy-two students admitted to the medical school of the Pontificia Universidad Católica de Chile from 2000 to 2003. Six groups of variables were studied to model the students' performance in five basic science courses (Anatomy, Biology, Calculus, Chemistry and Physics) and two pre-clinical courses (Integrated Medical Clinic I and II). The assessment methods examined were multiple-choice question tests, the Objective Structured Clinical Examination, and tutor appraisal. The results of the university admission tests (high school grades, mathematics and biology tests), the assessment methods used, the curricular year, and previous application to medical school were predictors of academic performance. The information processing modes influenced academic performance, but only in interaction with other variables: perception (abstract or concrete) interacted with the assessment methods, and information use (active or reflexive) interacted with sex. The correlation between the real and predicted grades was 0.7. In addition to the academic results obtained prior to university entrance, the methods of assessment used in the university and the information processing modes influence the academic performance of medical students in basic and preclinical courses.
ERIC Educational Resources Information Center
Hsia, Lu-Ho; Huang, Iwen; Hwang, Gwo-Jen
2016-01-01
In this paper, a web-based peer-assessment approach is proposed for conducting performing arts activities. A peer-assessment system was implemented and applied to a junior high school performing arts course to evaluate the effectiveness of the proposed approach. A total of 163 junior high students were assigned to an experimental group and a…
Oh, Deborah M; Kim, Joshua M; Garcia, Raymond E; Krilowicz, Beverly L
2005-06-01
There is increasing pressure, both from institutions central to the national scientific mission and from regional and national accrediting agencies, on natural sciences faculty to move beyond course examinations as measures of student performance and to instead develop and use reliable and valid authentic assessment measures for both individual courses and for degree-granting programs. We report here on a capstone course developed by two natural sciences departments, Biological Sciences and Chemistry/Biochemistry, which engages students in an important culminating experience, requiring synthesis of skills and knowledge developed throughout the program while providing the departments with important assessment information for use in program improvement. The student work products produced in the course, a written grant proposal, and an oral summary of the proposal, provide a rich source of data regarding student performance on an authentic assessment task. The validity and reliability of the instruments and the resulting student performance data were demonstrated by collaborative review by content experts and a variety of statistical measures of interrater reliability, including percentage agreement, intraclass correlations, and generalizability coefficients. The high interrater reliability reported when the assessment instruments were used for the first time by a group of external evaluators suggests that the assessment process and instruments reported here will be easily adopted by other natural science faculty.
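The abstract above reports inter-rater reliability via percentage agreement, intraclass correlations, and generalizability coefficients. As a rough sketch of the first two of those measures, the snippet below computes ICCs and a simple percentage agreement on simulated long-format rubric scores; the pingouin dependency, column names, and simulated ratings are assumptions for illustration, not the study's instruments or data.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed dependency for ICC computation

# Simulate three raters scoring 40 student proposals on a 1-5 rubric.
rng = np.random.default_rng(0)
n_students, raters = 40, ["rater1", "rater2", "rater3"]
true_quality = rng.normal(3, 1, size=n_students)
rows = []
for r in raters:
    observed = np.clip(np.round(true_quality + rng.normal(0, 0.5, n_students)), 1, 5)
    rows.append(pd.DataFrame({"student": np.arange(n_students), "rater": r, "score": observed}))
scores = pd.concat(rows, ignore_index=True)

# Intraclass correlation coefficients across the three raters.
icc = pg.intraclass_corr(data=scores, targets="student", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Simple percentage agreement between two raters on the categorical rubric levels.
wide = scores.pivot(index="student", columns="rater", values="score")
agreement = (wide["rater1"] == wide["rater2"]).mean()
print(f"Percent agreement (rater1 vs rater2): {agreement:.1%}")
```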
Student-led tutorials in problem-based learning: educational outcomes and students' perceptions.
Kassab, Salah; Abu-Hijleh, Marwan F; Al-Shboul, Qasim; Hamdy, Hossam
2005-09-01
The aim of this study was to examine the effectiveness of using students as tutors in a problem-based learning (PBL) medical curriculum. Ninety-one third-year medical students were divided into ten tutorial groups. The groups were randomly allocated into student-led tutorials (SLT) (five groups, n = 44 students) and faculty-led tutorials (FLT) (five groups, n = 47 students). Outcome measurements included assessment of students' performance in tutorials individually and as a group, end-unit examinations scores, assessment of tutoring skills and identifying students' perceptions about peer tutoring. Student tutors were perceived better in providing feedback and in understanding the difficulties students face in tutorials. Tutorial atmosphere, decision-making and support for the group leader were better in SLT compared with FLT groups. Self-assessment of student performance in SLT was not different from FLT. Student scores in the written and practical examinations were comparable in both groups. However, SLT groups found difficulties in analysis of problems presented in the first tutorial session. We conclude that the impact of peer tutoring on student performance in tutorials, group dynamics, and student achievement in examinations is positive overall. However, student tutors require special training before adopting this approach in PBL programs.
Madrazo, Lorenzo; Lee, Claire B; McConnell, Meghan; Khamisa, Karima
2018-06-15
Physicians and medical students are generally poor self-assessors. Research suggests that this inaccuracy in self-assessment differs by gender among medical students, whereby females underestimate their performance compared to their male counterparts. However, whether this gender difference in self-assessment is observable in low-stakes scenarios remains unclear. Our study's objective was to determine whether self-assessment differed between male and female medical students when compared to peer-assessment in a low-stakes objective structured clinical examination. Thirty-three (15 males, 18 females) third-year students participated in a 5-station mock objective structured clinical examination. Trained fourth-year student examiners scored their performance on a 6-point Likert-type global rating scale. Examinees also scored themselves using the same scale. To examine gender differences in medical students' self-assessment abilities, mean self-assessment global rating scores were compared with peer-assessment global rating scores using an independent samples t test. Overall, female students' self-assessment scores were significantly lower compared to peer-assessment (p < 0.001), whereas no significant difference was found between self- and peer-assessment scores for male examinees (p = 0.228). This study provides further evidence that underestimation in self-assessment among females is observable even in a low-stakes formative objective structured clinical examination facilitated by fellow medical students.
ERIC Educational Resources Information Center
Leslie, Laura J.; Gorman, Paul C.
2017-01-01
Student engagement is vital in enhancing the student experience and encouraging deeper learning. Involving students in the design of assessment criteria is one way in which to increase student engagement. In 2011, a marking matrix was used at Aston University (UK) for logbook assessment (Group One) in a project-based learning module. The next…
ERIC Educational Resources Information Center
Krolak-Schwerdt, Sabine; Bohmer, Matthias; Grasel, Cornelia
2013-01-01
Research on teachers' judgments of student performance has demonstrated that educational assessments may be biased or may more correctly take the achievements of students into account depending on teachers' motivations while making the judgment. Building on research on social judgment formation the present investigation examined whether the…
Error Ratio Analysis: Alternate Mathematics Assessment for General and Special Educators.
ERIC Educational Resources Information Center
Miller, James H.; Carr, Sonya C.
1997-01-01
Eighty-seven elementary students in grades four, five, and six, were administered a 30-item multiplication instrument to assess performance in computation across grade levels. An interpretation of student performance using error ratio analysis is provided and the use of this method with groups of students for instructional decision making is…
ERIC Educational Resources Information Center
Christensen, William Howard
2013-01-01
In 2010, the federal government increased accountability expectations by placing more emphasis on monitoring teacher performance. Using a model that focuses on the New York State teacher evaluation system, which comprises a rubric for observation, local student assessment scores, and student state assessment scores, this…
Student Engagement: A Framework for On-Demand Performance Assessment Tasks
ERIC Educational Resources Information Center
Taylor, Catherine; Kokka, Kari; Darling-Hammond, Linda; Dieckmann, Jack; Pacheco, Vivian Santana; Sandler, Susan; Bae, Soung
2016-01-01
Engaging students in meaningful applications of their knowledge is a key aspect of both addressing the standards and providing greater access. Not only do the standards emphasize the importance of meaningful engagement in real-world tasks, but evidence shows that engagement is strongly related to student performance on assessment tasks, especially…
Assessing Logo Programming among Jordanian Seventh Grade Students through Turtle Geometry
ERIC Educational Resources Information Center
Khasawneh, Amal A.
2009-01-01
The present study is concerned with assessing Logo programming experiences among seventh-grade students. A formal multiple-choice test and five performance tasks were used to collect data. The results showed that students' performance was better than the score expected by chance, and a very low correlation between their Logo…
ERIC Educational Resources Information Center
Lein, Amy E.; Jitendra, Asha K.; Starosta, Kristin M.; Dupuis, Danielle N.; Hughes-Reid, Cheyenne L.; Star, John R.
2016-01-01
In this study, the authors assessed the contribution of engagement (on-task behavior) to the mathematics problem-solving performance of seventh-grade students after accounting for prior mathematics achievement. A subsample of seventh-grade students in four mathematics classrooms (one high-, two average-, and one low-achieving) from a larger…
ERIC Educational Resources Information Center
Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James
2016-01-01
High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…
Gender-Related Differential Item Functioning on a Middle-School Mathematics Performance Assessment.
ERIC Educational Resources Information Center
Lane, Suzanne; And Others
This study examined gender-related differential item functioning (DIF) using a mathematics performance assessment, the QUASAR Cognitive Assessment Instrument (QCAI), administered to middle school students. The QCAI was developed for the Quantitative Understanding: Amplifying Student Achievement and Reasoning (QUASAR) project, which focuses on…
Thompson, Laura R; Leung, Cynthia G; Green, Brad; Lipps, Jonathan; Schaffernocker, Troy; Ledford, Cynthia; Davis, John; Way, David P; Kman, Nicholas E
2017-01-01
Medical schools in the United States are encouraged to prepare and certify the entrustment of medical students to perform 13 core entrustable professional activities (EPAs) prior to graduation. Entrustment is defined as the informed belief that the learner is qualified to autonomously perform specific patient-care activities. Core EPA-10 is the entrustment of a graduate to care for the emergent patient. The purpose of this project was to design a realistic performance assessment method for evaluating fourth-year medical students on EPA-10. First, we wrote five emergent patient case-scenarios that a medical trainee would likely confront in an acute care setting. Furthermore, we developed high-fidelity simulations to realistically portray these patient case scenarios. Finally, we designed a performance assessment instrument to evaluate the medical student's performance on executing critical actions related to EPA-10 competencies. Critical actions included the following: triage skills, mustering the medical team, identifying causes of patient decompensation, and initiating care. Up to four students were involved with each case scenario; however, only the team leader was evaluated using the assessment instruments developed for each case. A total of 114 students participated in the EPA-10 assessment during their final year of medical school. Most students demonstrated competence in recognizing unstable vital signs (97%), engaging the team (93%), and making appropriate dispositions (92%). Almost 87% of the students were rated as having reached entrustment to manage the care of an emergent patient (99 of 114). Inter-rater reliability varied by case scenario, ranging from moderate to near-perfect agreement. Three of five case-scenario assessment instruments contained items that were internally consistent at measuring student performance. Additionally, the individual item scores for these case scenarios were highly correlated with the global entrustment decision. High-fidelity simulation showed good potential for effective assessment of medical student entrustment of caring for the emergent patient. Preliminary evidence from this pilot project suggests content validity of most cases and associated checklist items. The assessments also demonstrated moderately strong faculty inter-rater reliability.
ERIC Educational Resources Information Center
Maryland State Dept. of Education. Baltimore. Div. of Planning, Results and Information Management.
One component of the Maryland School Performance Assessment Program (MSPAP) is the state's performance-based assessments, criterion-referenced tests that require students to apply what they know and can do to solve problems and display other higher-order thinking skills. This document helps parents, teachers, students, and other citizens…
Tatachar, Amulya; Kominski, Carol
2017-07-01
To compare the impact of a traditional case-based application exercise with a student question creation exercise on a) student exam performance, b) student perceptions of enjoyment, competence, understanding, effort, interest in continuing participation, and interest in the subject. Subjects were 84 second-year pharmacy students in a pharmacotherapy course. The research focus was active learning involving the topic of chronic kidney disease-mineral bone disorder. Student teams were randomly assigned to either case-based or student question creation exercises using PeerWise. Student performance was assessed by a pre- and posttest and on block and final exams. After completion, an online survey assessed student perceptions of both exercises. Statistically significant differences were revealed in favor of the student question creation group on enjoyment and interest in the subject matter. No statistically significant differences were found between the traditional case-based group and the student question creation group on gain score from pre-test to posttest. The student question creation group performed slightly better than the case-based application group on two of the five questions on the block exam, but none of these differences reached statistical significance. Students randomly assigned to groups that created and reviewed questions exhibited slightly improved summative exam performance and reported significantly more positive perceptions than students engaging in a more traditional case-based learning activity. Student question creation has demonstrated potential as a useful learning activity. Despite inherent difficulties in designing studies involving educational research in a controlled environment, students who have submitted, created, rated, and answered peers' questions have overall performed well. Copyright © 2017 Elsevier Inc. All rights reserved.
Formative Assessment in EFL Writing: An Exploratory Case Study
ERIC Educational Resources Information Center
Lee, Icy
2011-01-01
In second-language writing, assessment has traditionally focused on the written products and how well (or badly) students perform in writing. Teachers dominate the assessment process as testers, while students remain passive testees. Assessment is something teachers "do to" rather than "with" students, mainly for…
ERIC Educational Resources Information Center
Walz, Lynn; Thompson, Sandra; Thurlow, Martha; Spicuzza, Richard
This report focuses on the participation and performance of students with disabilities on the initial administration of Minnesota's Comprehensive Assessments (MCAs). The MCAs are criterion-referenced tests used for district accountability purposes and as tools for making decisions about curriculum and instruction. Assessments in mathematics and…
Evaluating the Effect of Learning Style and Student Background on Self-Assessment Accuracy
ERIC Educational Resources Information Center
Alaoutinen, Satu
2012-01-01
This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible…
The Impact of Preceptor and Student Learning Styles on Experiential Performance Measures
Cox, Craig D.; Seifert, Charles F.
2012-01-01
Objectives. To identify preceptors’ and students’ learning styles to determine how these impact students’ performance on pharmacy practice experience assessments. Methods. Students and preceptors were asked to complete a validated Pharmacist’s Inventory of Learning Styles (PILS) questionnaire to identify dominant and secondary learning styles. The significance of “matched” and “unmatched” learning styles between students and preceptors was evaluated based on performance on both subjective and objective practice experience assessments. Results. Sixty-one percent of 67 preceptors and 57% of 72 students who participated reported “assimilator” as their dominant learning style. No differences were found between student and preceptor performance on evaluations, regardless of learning style match. Conclusion. Determination of learning styles may encourage preceptors to use teaching methods to challenge students during pharmacy practice experiences; however, this does not appear to impact student or preceptor performance. PMID:23049100
Impact of a Paper vs Virtual Simulated Patient Case on Student-Perceived Confidence and Engagement.
Barnett, Susanne G; Gallimore, Casey E; Pitterle, Michael; Morrill, Josh
2016-02-25
To evaluate online case simulation vs a paper case on student confidence and engagement. Students enrolled in a pharmacotherapy laboratory course completed a patient case scenario as a component of an osteoarthritis laboratory module. Two laboratory sections used a paper case (n=53); three sections used an online virtual case simulation (n=81). Student module performance was assessed through a submitted subjective objective assessment plan (SOAP) note. Students completed pre/post surveys to measure self-perceived confidence in providing medication management. The simulation group completed postmodule questions related to realism and engagement of the online virtual case simulation. Group assessments were performed using chi-square and Mann-Whitney tests. A significant increase in all 13 confidence items was seen in both student groups following completion of the laboratory module. The simulation group had an increased change of confidence compared to the paper group in assessing medication efficacy and documenting a thorough assessment. Comparing the online virtual simulation to a paper case, students agreed the learning experience increased interest, enjoyment, relevance, and realism. The simulation group performed better on the subjective SOAP note domain, though no differences in total SOAP note scores were found between the two groups. Virtual case simulations result in increased student engagement and may lead to improved documentation performance in the subjective domain of SOAP notes. However, virtual patient cases may offer limited benefit over paper cases in improving overall student self-confidence to provide medication management.
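The group comparisons named in this abstract (chi-square and Mann-Whitney tests) can be sketched in Python as follows. The group sizes follow the abstract (paper case n=53, simulation n=81), but the Likert ratings, the contingency counts, and the outcome labels are hypothetical values made up purely for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated 5-point Likert ratings of post-module confidence for each group.
paper_ratings = rng.integers(2, 6, size=53)
sim_ratings = rng.integers(3, 6, size=81)

# Mann-Whitney test on the ordinal survey ratings.
u, p_u = stats.mannwhitneyu(paper_ratings, sim_ratings, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p_u:.3f}")

# Chi-square test on an invented categorical outcome, e.g. meeting a
# SOAP-note subjective-domain benchmark (rows: paper case, simulation).
table = np.array([[30, 23],
                  [60, 21]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square={chi2:.2f}, df={dof}, p={p_chi:.3f}")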
The Odd Couple: The Australian NAPLAN and Singaporean PSLE
ERIC Educational Resources Information Center
Greenlees, Jane
2013-01-01
The use of high-stakes assessment to measure students' mathematical performance has become commonplace in schools all over the world. Such assessment instruments provide national or international comparisons of student (and potentially teacher performance). Each form of assessment is specialised in nature and is characteristic of the culture and…
Wass, Val; Roberts, Celia; Hoogenboom, Ron; Jones, Roger; Van der Vleuten, Cees
2003-01-01
Objective To assess the effect of ethnicity on student performance in stations assessing communication skills within an objective structured clinical examination. Design Quantitative and qualitative study. Setting A final UK clinical examination consisting of a two day objective structured clinical examination with 22 stations. Participants 82 students from ethnic minorities and 97 white students. Main outcome measures Mean scores for stations (quantitative) and observations made using discourse analysis on selected communication stations (qualitative). Results Mean performance of students from ethnic minorities was significantly lower than that of white students for stations assessing communication skills on days 1 (67.0% (SD 6.8%) and 72.3% (7.6%); P=0.001) and 2 (65.2% (6.6%) and 69.5% (6.3%); P=0.003). No examples of overt discrimination were found in 309 video recordings. Transcriptions showed subtle differences in communication styles in some students from ethnic minorities who performed poorly. Examiners' assumptions about what is good communication may have contributed to differences in grading. Conclusions There was no evidence of explicit discrimination between students from ethnic minorities and white students in the objective structured clinical examination. A small group of male students from ethnic minorities used particularly poorly rated communicative styles, and some subtle problems in assessing communication skills may have introduced bias. Tests need to reflect issues of diversity to ensure that students from ethnic minorities are not disadvantaged. What is already known on this topic: UK medical schools are concerned that students from ethnic minorities may perform less well than white students in examinations; it is important to understand whether our examination system disadvantages them. What this study adds: Mean performance of students from ethnic minorities was significantly lower than that of white students in a final year objective structured clinical examination; two possible reasons for the difference were poor communicative performance of a small group of male students from ethnic minorities and examiners' use of a textbook patient-centred notion of good communication; issues of diversity in test construction and implementation must be addressed to ensure that students from ethnic minorities are not disadvantaged. PMID:12689978
Williams, Reed G; Klamen, Debra L; Mayer, David; Valaski, Maureen; Roberts, Nicole K
2007-10-01
Skill acquisition and maintenance requires spaced deliberate practice. Assessing medical students' physical examination performance ability is resource intensive. The authors assessed the nature and size of physical examination performance samples necessary to accurately estimate total physical examination skill. Physical examination assessment data were analyzed from second year students at the University of Illinois College of Medicine at Chicago in 2002, 2003, and 2004 (N = 548). Scores on subgroups of physical exam maneuvers were compared with scores on the total physical exam, to identify sound predictors of total test performance. Five exam subcomponents were sufficiently correlated to overall test performance and provided adequate sensitivity and specificity to serve as a means to prompt continued student review and rehearsal of physical examination technical skills. Selection and administration of samples of the total physical exam provide a resource-saving approach for promoting and estimating overall physical examination skills retention.
Comparison of patient simulation methods used in a physical assessment course.
Grice, Gloria R; Wenger, Philip; Brooks, Natalie; Berry, Tricia M
2013-05-13
To determine whether there is a difference in student pharmacists' learning or satisfaction when standardized patients or manikins are used to teach physical assessment. Third-year student pharmacists were randomized to learn physical assessment (cardiac and pulmonary examinations) using either a standardized patient or a manikin. Performance scores on the final examination and satisfaction with the learning method were compared between groups. Eighty and 74 student pharmacists completed the cardiac and pulmonary examinations, respectively. There was no difference in performance scores between student pharmacists who were trained using manikins vs standardized patients (93.8% vs 93.5%, p=0.81). Student pharmacists who were trained using manikins indicated that they would have probably learned to perform cardiac and pulmonary examinations better had they been taught using standardized patients (p<0.001) and that they were less satisfied with their method of learning (p=0.04). Standardized patients and manikins are equally effective methods for learning physical assessment, but student pharmacists preferred using standardized patients.
ERIC Educational Resources Information Center
Sarka, Samuel; Lijalem, Tsegay; Shibiru, Tilaye
2017-01-01
The aim of this study was to assess and implement continuous assessment to enhance the academic performance of second-year Animal and Range Sciences department students at Wolaita Sodo University, and to take action (training) to raise academic performance to a desirable state. For the purpose of surveying the students' level of performance…
Incorporating Formative Assessment in Iranian EFL Writing: A Case Study
ERIC Educational Resources Information Center
Naghdipour, Bakhtiar
2017-01-01
Undergraduate students' experience of assessment in universities is usually of summative assessment which provides only limited information to help students improve their performance. By contrast, formative assessment is informative and forward-looking, possessing the leverage to inform students of their day-to-day progress and inform teachers of…
Making the Grade in a Portfolio-Based System: Student Performance and the Student Perspective
Nowacki, Amy S.
2013-01-01
Assessment is such an integral part of the educational system that we rarely reflect on its value and impact. Portfolios have gained in popularity, but much attention has emphasized the end-user and portfolio assessment. Here we focus on the portfolio creator (the student) and examine whether their educational needs are met with such an assessment method. This study aims to investigate how assessment practices influence classroom performance and the learning experience of the student in a graduate education setting. Studied were 33 medical students at the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, a program utilizing a portfolio-based system. The students may elect to simultaneously enroll in a Masters program; however, these programs employ traditional letter grades. This created a unique opportunity to assess 25 portfolio-only (P) students and 8 portfolio-and-grade (PG) students concurrently taking a course that counts for both programs. Classroom performance was measured via a comprehensive evaluation on which the PG students scored modestly better (median total scores, 72% P vs. 76% PG). Additionally, a survey was conducted to gain insight into students' perspectives on how assessment method impacts the learning experience. The students in the PG group (those receiving a grade) reported increased stress but greater affirmation and self-assurance regarding their knowledge and skill mastery. Incorporation of such affirmation remains a challenge for portfolio-based systems and an area for investigation and improvement. PMID:23565103
Tansatit, Tanvaa; Apinuntrum, Prawit; Phetudom, Thavorn
2012-02-01
Preparing students to perform specific procedures on patients presents a challenge of student confidence in performing these tasks. This descriptive study determined the ability of medical students to perform a basic clinical task after a short hands-on training workshop in cadavers. This basic procedural skills training was an attempt to develop conceptual understanding and increase the procedural skills of medical students in endotracheal intubation. The students were trained to perform two different endotracheal intubations: uncomplicated intubation and a traumatic difficult airway scenario. The training session consisted of two methods of endotracheal intubation: oral intubation using direct laryngoscopy (DL) in two cadavers with an uncomplicated airway, and Flexible Snake Scope camera (FSSC)-assisted nasal intubation in two cadavers simulating trauma victims with a difficult airway. In the assessment session, the students performed one timed trial with each device. All four cadavers were changed but the scenarios were the same. Groups of medical students were randomly assigned to perform the tasks in one of the two cadavers for each scenario. Thirty-two medical students participated in this training and assessment. The training session and the assessment lasted five hours and three hours, respectively. No student was asked to perform a second trial. The average time for successful intubation with DL was 32.7 seconds (SD, 13.8 seconds) and for FSSC was 127.0 seconds (SD, 32.6 seconds). The intubation failure rate was 0% for the entire study. The medical students were able to accomplish a basic clinical task after a short hands-on training workshop.
Assessing and Monitoring Student Progress in an E-Learning Personnel Preparation Environment.
ERIC Educational Resources Information Center
Meyen, Edward L.; Aust, Ronald J.; Bui, Yvonne N.; Isaacson, Robert
2002-01-01
Discussion of e-learning in special education personnel preparation focuses on student assessment in e-learning environments. It includes a review of the literature, lessons learned by the authors from assessing student performance in e-learning environments, a literature perspective on electronic portfolios in monitoring student progress, and the…
A Peer-Assessment Mobile Kung Fu Education Approach to Improving Students' Affective Performances
ERIC Educational Resources Information Center
Kuo, Fon-Chu; Chen, Jun-Ming; Chu, Hui-Chun; Yang, Kai-Hsiang; Chen, Yi-Hsuan
2017-01-01
Peer-assessment and video comment-sharing are effective learning strategies for students to receive feedback on their learning. Researchers have emphasized the need for well-designed peer involvement in order to improve students' abilities in the cognitive and affective domains. Although student perceptions of peer-assessment have been studied…
The Effects of Performance Assessment Approach on Democratic Attitude of Students
ERIC Educational Resources Information Center
Yalcinkaya, Elvan
2013-01-01
The aim of the research is to analyze the effects of performance assessment approach on democratic attitude of students. The research model is an experimental design with pretest-posttest control groups. Both quantitative and qualitative techniques are used for gathering of data in this research. 46 students participated in this research, with 23…
Assessment of Teaching Performance of Student-Teachers on Teaching Practice
ERIC Educational Resources Information Center
Oluwatayo, James Ayodele; Adebule, Samuel Olufemi
2012-01-01
The study assessed teaching performance of 222 student-teachers from the Faculty of Education, Ekiti State University, posted to various secondary schools in Ekiti State for a six-week teaching practice during 2010/2011 academic session. The sample included 119 males, 103 females, 78 (300-Level) and 144 (400-Level) students. Data were collected…
ERIC Educational Resources Information Center
Marchand, Gwen C.; Furrer, Carrie J.
2014-01-01
This study explored the relationships among formative curriculum-based measures of reading (CBM-R), student engagement as an extra-academic indicator of student motivation, and summative performance on a high-stakes reading assessment. A diverse sample of third-, fourth-, and fifth-grade students and their teachers responded to questionnaires…
ERIC Educational Resources Information Center
Sanchez, Maria Teresa; Ehrlich, Stacy; Midouhas, Emily; O'Dwyer, Laura
2009-01-01
Massachusetts policymakers have expressed concern about the consistently lower scores of Hispanic students, compared to other subgroups, on the Massachusetts Comprehensive Assessment System (MCAS). This summary describes a larger report that examines Hispanic high school students' performance on the MCAS tests in English language arts and…
ERIC Educational Resources Information Center
Minxuan, Zhang; Lingshuai, Kong
2012-01-01
The outstanding performance of Shanghai students in the 4th Programme for International Student Assessment (PISA 2009) gained widespread attention at home and abroad. In this paper, the authors attribute this outstanding performance to three traditional factors and six modern factors. The traditional factors are high parental expectations, belief…
Learning Styles and Student Performance in Introductory Economics
ERIC Educational Resources Information Center
Brunton, Bruce
2015-01-01
Data from nine introductory microeconomics classes was used to test the effect of student learning style on academic performance. The Kolb Learning Style Inventory was used to assess individual student learning styles. The results indicate that student learning style has no significant effect on performance, undermining the claims of those who…
Changes in College Student Health:Implications for Academic Performance
ERIC Educational Resources Information Center
Ruthig, Joelle C.; Marrone, Sonia; Hladkyj, Steve; Robinson-Epp, Nancy
2011-01-01
This study investigated the longitudinal associations of health perceptions and behaviors with subsequent academic performance among college students. Multiple health perceptions and behaviors were assessed for 203 college students both at the beginning and end of an academic year. Students' academic performance was also measured at the end of the…
The PPST and NTE as Predictors of Student Teacher Performance.
ERIC Educational Resources Information Center
Salzman, Stephanie A.
The Pre-Professional Skills Test (PPST) and the National Teacher Examinations (NTE) were examined as predictors of student classroom performance as measured by three instruments from the "Teacher Performance Assessment Instruments" (TPAIs) of W. Capie et al. (1979). Subjects were 305 teacher education students taking the student teaching…
ERIC Educational Resources Information Center
Lane, Suzanne; And Others
The performance of students from different racial or ethnic subgroups and of students receiving bilingual (Spanish and English) or monolingual (English only) instruction in mathematics was studied using students from schools in the QUASAR (Quantitative Understanding: Amplifying Student Achievement and Reasoning) project, a mathematics education…
NASA Astrophysics Data System (ADS)
Shelton, Angela
Many United States secondary students perform poorly on standardized summative science assessments. Situated Assessments using Virtual Environments (SAVE) Science is an innovative assessment project that seeks to capture students' science knowledge and understanding by contextualizing problems in a game-based virtual environment called Scientopolis. Within Scientopolis, students use an "avatar" to interact with non-player characters (NPCs), artifacts, embedded clues and "sci-tools" in order to help solve the problems of the townspeople. In an attempt to increase students' success on assessments, SAVE Science places students in an environment where they can use their inquiry skills to solve problems instead of reading long passages which attempt to contextualize questions but ultimately cause construct-irrelevant variance. However, within these assessments reading is still required to access the test questions and character interactions. This dissertation explores how students' in-world performances differ when exposed to a Reading Aloud Accommodation (RAA) treatment in comparison to a control group. Student perceptions of the treatment are also evaluated. While an RAA is typically available for students with learning disabilities or English language learners, within this study all students were randomly assigned to either the treatment or control, regardless of any demographic factors or learning barriers. The theories of universal design for learning and brain-based learning advocate for multiple ways for students to engage, comprehend, and illustrate their content knowledge. Further, by providing more ways for students to interact with content, all students should benefit, not just those with learning disabilities. Students in the experimental group listened to the NPCs speak the dialogue that provides them with the problem, clues, and assessment questions, instead of relying on reading skills to gather the information. Overall, students in the treatment group significantly outperformed those in the control group. Student perceptions of using the reading aloud accommodation were generally positive. Ideas for future research are presented to investigate the accommodation further.
Prescott, William Allan; Woodruff, Ashley; Prescott, Gina M; Albanese, Nicole; Bernhardi, Christian; Doloresco, Fred
2016-12-25
Objective. To integrate a blended-learning model into a two-course patient assessment sequence in a doctor of pharmacy (PharmD) program and to assess the academic performance and perceptions of enrolled students. Design. A blended-learning model consisting of a flipped classroom format was integrated into a patient assessment (PA) course sequence. Course grades of students in the blended-learning (intervention) and traditional-classroom (control) groups were compared. A survey was administered to assess student perceptions. Assessment. The mean numeric grades of students in the intervention group were higher than those of students in the traditional group (PA1 course: 92.2±3.1 vs 90.0±4.3; and PA2 course: 90.3±4.9 vs 85.8±4.2). Eighty-six percent of the students in the intervention group agreed that the instructional methodologies used in this course facilitated understanding of the material. Conclusion. The blended-learning model was associated with improved academic performance and was well-received by students.
Logan, Alexandra; Yule, Elisa; Taylor, Michael; Imms, Christine
2018-05-28
Australian accreditation standards for occupational therapy courses require consumer participation in the design, delivery and evaluation of programs. This study investigated whether a mental health consumer - as one of two assessors for an oral assessment in a mental health unit - impacted engagement, anxiety states and academic performance of undergraduate occupational therapy students. Students (n = 131 eligible) self-selected into two groups but were blinded to the group differences (assessor panel composition) until shortly prior to the oral assessment. Control group assessors were two occupational therapy educators, while consumer group assessors included an occupational therapy educator and a mental health consumer. Pre- and post-assessment data were successfully matched for 79 students (overall response rate = 73.1%). No evidence was found of significant differences between the two groups for engagement, anxiety or academic performance (all P values >0.05). Including mental health consumers as assessors did not negatively impact student engagement and academic performance, nor increase student anxiety beyond that typically observed in oral assessment tasks. The findings provide support for expanding the role of mental health consumers in the education and assessment of occupational therapy students. Development of methods to determine the efficacy of consumer involvement remains an area for future research. © 2018 Occupational Therapy Australia.
Velan, Gary M; Jones, Philip; McNeil, H Patrick; Kumar, Rakesh K
2008-11-25
Online formative assessments have a sound theoretical basis, and are prevalent and popular in higher education settings, but data to establish their educational benefits are lacking. This study attempts to determine whether participation and performance in integrated online formative assessments in the biomedical sciences has measurable effects on learning by junior medical students. Students enrolled in Phase 1 (Years 1 and 2) of an undergraduate Medicine program were studied over two consecutive years, 2006 and 2007. In seven consecutive courses, end-of-course (EOC) summative examination marks were analysed with respect to the effect of participation and performance in voluntary online formative assessments. Online evaluation surveys were utilized to gather students' perceptions regarding online formative assessments. Students rated online assessments highly on all measures. Participation in formative assessments had a statistically significant positive relationship with EOC marks in all courses. The mean difference in EOC marks for those who participated in formative assessments ranged from 6.3% (95% confidence intervals 1.6 to 11.0; p = 0.009) in Course 5 to 3.2% (0.2 to 6.2; p = 0.037) in Course 2. For all courses, performance in formative assessments correlated significantly with EOC marks (p < 0.001 for each course). The variance in EOC marks that could be explained by performance in the formative assessments ranged from 21.8% in Course 6 to 4.1% in Course 7. The results support the contention that well designed formative assessments can have significant positive effects on learning. There is untapped potential for use of formative assessments to assist learning by medical students and postgraduate medical trainees.
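A minimal sketch, assuming simulated data, of the kind of analysis reported above: a Pearson correlation between formative assessment performance and end-of-course (EOC) marks, with the variance explained taken as r squared, plus a simple comparison of EOC marks for participants versus non-participants. None of the numbers are the study's; the effect sizes, participation rate, and sample size are arbitrary assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_students = 250

# Whether a student took part in the voluntary formative assessments (assumed 80%).
participated = rng.random(n_students) < 0.8

# Simulated formative assessment scores and EOC marks (percentages); the
# 4-point participation boost is an arbitrary assumption for the sketch.
formative = rng.normal(70, 10, n_students)
eoc = 0.4 * formative + 4 * participated + rng.normal(40, 8, n_students)

r, p = stats.pearsonr(formative, eoc)
print(f"r={r:.2f}, p={p:.4f}, variance explained={100 * r**2:.1f}%")

t, p_t = stats.ttest_ind(eoc[participated], eoc[~participated])
diff = eoc[participated].mean() - eoc[~participated].mean()
print(f"participation effect: mean difference={diff:.1f} points, t={t:.2f}, p={p_t:.3f}")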
The Learning of Compost Practice in University
NASA Astrophysics Data System (ADS)
Agustina, T. W.; Rustaman, N. Y.; Riandi; Purwianingsih, W.
2017-09-01
Compost is one of the topics of the Urban Farming Movement in Bandung, Indonesia. This preliminary study aims to describe the performance capabilities and compost products of students taught with a STREAM (Science-Technology-Religion-Art-Mathematics) approach. The method was an explanatory sequential mixed method. The study was conducted with one class of Biology Education students at one of the universities in Bandung, Indonesia. The sample of 44 students was chosen purposively. The instruments were Student Worksheets, Observation Sheets for Performance and Product Assessment, Rubrics for Performance and Product, and Field Notes. The indicators of the performance assessment rubric include stirring of compost materials and composting technology in accordance with the design; the product assessment rubric covers good composting criteria and compost packaging. The results show that most students performed well. However, the ability to design composting technology, the compost products themselves, and the ability to package compost were still lacking. The implication of the study is that Biology Education students require habituation in the ability to design technology.
Accuracy and reliability of peer assessment of athletic training psychomotor laboratory skills.
Marty, Melissa C; Henning, Jolene M; Willse, John T
2010-01-01
Peer assessment is defined as students judging the level or quality of a fellow student's understanding. No researchers have yet demonstrated the accuracy or reliability of peer assessment in athletic training education. To determine the accuracy and reliability of peer assessment of athletic training students' psychomotor skills. Cross-sectional study. Entry-level master's athletic training education program. First-year (n = 5) and second-year (n = 8) students. Participants evaluated 10 videos of a peer performing 3 psychomotor skills (middle deltoid manual muscle test, Faber test, and Slocum drawer test) on 2 separate occasions using a valid assessment tool. Accuracy of each peer-assessment score was examined through percentage correct scores. We used a generalizability study to determine how reliable athletic training students were in assessing a peer performing the aforementioned skills. Decision studies using generalizability theory demonstrated how the peer-assessment scores were affected by the number of participants and number of occasions. Participants had a high percentage of correct scores: 96.84% for the middle deltoid manual muscle test, 94.83% for the Faber test, and 97.13% for the Slocum drawer test. They were not able to reliably assess a peer performing any of the psychomotor skills on only 1 occasion. However, the φ increased (exceeding the 0.70 minimal standard) when 2 participants assessed the skill on 3 occasions (φ = 0.79) for the Faber test, with 1 participant on 2 occasions (φ = 0.76) for the Slocum drawer test, and with 3 participants on 2 occasions for the middle deltoid manual muscle test (φ = 0.72). Although students did not detect all errors, they assessed their peers with an average of 96% accuracy. Having only 1 student assess a peer performing certain psychomotor skills was less reliable than having more than 1 student assess those skills on more than 1 occasion. Peer assessment of psychomotor skills could be an important part of the learning process and a tool to supplement instructor assessment.
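The decision-study logic referred to above (how the dependability coefficient phi changes with the number of peer raters and occasions) can be illustrated with a small calculation for a fully crossed persons x raters x occasions design. The variance components below are invented for illustration, not estimates from this study; the formula is the standard phi coefficient for absolute decisions.

def phi(var, n_raters, n_occasions):
    # Dependability coefficient for absolute decisions in a p x r x o design:
    # universe-score variance over universe-score variance plus absolute error.
    abs_error = (var["r"] / n_raters
                 + var["o"] / n_occasions
                 + var["pr"] / n_raters
                 + var["po"] / n_occasions
                 + var["ro"] / (n_raters * n_occasions)
                 + var["pro_e"] / (n_raters * n_occasions))
    return var["p"] / (var["p"] + abs_error)

# Hypothetical variance components (p = persons being assessed, r = raters,
# o = occasions); values are assumptions for the sketch only.
components = {"p": 0.50, "r": 0.05, "o": 0.08,
              "pr": 0.10, "po": 0.12, "ro": 0.02, "pro_e": 0.25}

for n_r in (1, 2, 3):
    for n_o in (1, 2, 3):
        print(f"raters={n_r}, occasions={n_o}: phi={phi(components, n_r, n_o):.2f}")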
Gillette, Chris; Rudolph, Michael; Rockich-Winston, Nicole; Blough, Eric R; Sizemore, James A; Hao, Jinsong; Booth, Chris; Broedel-Zaugg, Kimberly; Peterson, Megan; Anderson, Stephanie; Riley, Brittany; Train, Brian C; Stanton, Robert B; Anderson, H Glenn
To characterize student performance on the Pharmacy Curriculum Outcomes Assessment (PCOA) and to determine the significance of specific admissions criteria and pharmacy school performance to predict student performance on the PCOA during the first through third professional years. Multivariate linear regression models were developed to study the relationships between various independent variables and students' PCOA total scores during the first through third professional years. To date, four cohorts have successfully taken the PCOA examination. Results indicate that the Pharmacy College Admissions Test (PCAT), the Health Science Reasoning Test (HSRT), and cumulative pharmacy grade point average were the only consistent significant predictors of higher PCOA total scores across all students who have taken the exam at our school of pharmacy. The school should examine and clarify the role of PCOA within its curricular assessment program. Results suggest that certain admissions criteria and performance in pharmacy school are associated with higher PCOA scores. Copyright © 2016 Elsevier Inc. All rights reserved.
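To make the modelling approach concrete, here is a hedged sketch of a multiple linear regression of PCOA total score on PCAT, HSRT, and cumulative pharmacy GPA, fit to simulated data. The coefficients, sample size, and score scales are assumptions made up for illustration; they are not values from the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300

pcat = rng.normal(60, 15, n)    # PCAT composite (hypothetical scale)
hsrt = rng.normal(25, 4, n)     # Health Science Reasoning Test score
gpa = rng.normal(3.2, 0.4, n)   # cumulative pharmacy GPA

# PCOA total score loosely driven by the three predictors plus noise
# (all weights are arbitrary assumptions).
pcoa = 200 + 0.8 * pcat + 2.0 * hsrt + 25 * gpa + rng.normal(0, 20, n)

X = sm.add_constant(np.column_stack([pcat, hsrt, gpa]))
model = sm.OLS(pcoa, X).fit()
print(model.summary(xname=["const", "PCAT", "HSRT", "GPA"]))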
Pitt, Victoria; Powis, David; Levett-Jones, Tracy; Hunter, Sharyn
2014-05-01
Research conducted primarily with psychology and medical students has highlighted that personal qualities play an important role in students' academic performance. In nursing there has been limited investigation of the relationship between personal qualities and performance. Yet, reports of student incivility and a lack of compassion have prompted appeals to integrate the assessment of personal qualities into pre-registration nursing student selection. Before this can be done research is needed to explore the influence of students' personal qualities on programme performance and progression. This study explores the relationships between students' personal qualities and their academic and clinical performance, behaviours and progression through a pre-registration nursing programme in Australia. This longitudinal descriptive correlational study was undertaken with a sample of Australian pre-registration nursing students (n=138). Students' personal qualities were assessed using three personal qualities assessment (PQA) instruments. Outcome measures included grades in nursing theory and clinical courses, yearly grade point average, final clinical competency, progression (completion), class attendance and levels of life event stress. Significant correlations were found between academic performance and PQA scores for self-control, resilience and traits of aloofness, confidence and involvement. Final clinical competence was predicted by confidence and self-control scores. Students with higher empathy had higher levels of life event stress in their first year and class attendance had a positive correlation with self-control. Completing the programme in three years was weakly predicted by the measure of resilience. No difference was noted between extreme or non-extreme scorers on the PQA scales with respect to performance or progression. This sample of students' personal qualities was found to influence their academic and clinical performance and their ability to complete a pre-registration programme in three years. However, further research is required with larger cohorts to confirm the use of personal qualities assessment during selection. © 2013.
ERIC Educational Resources Information Center
Hewson, C.
2012-01-01
To address concerns raised regarding the use of online course-based summative assessment methods, a quasi-experimental design was implemented in which students who completed a summative assessment either online or offline were compared on performance scores when using their self-reported "preferred" or "non-preferred" modes.…
Carr, Sandra E; Celenza, Antonio; Puddey, Ian B; Lake, Fiona
2014-07-30
Little recent published evidence explores the relationship between academic performance in medical school and performance as a junior doctor. Although many forms of assessment are used to demonstrate a medical student's knowledge or competence, these measures may not reliably predict performance in clinical practice following graduation. This descriptive cohort study explores the relationship between academic performance of medical students and workplace performance as junior doctors, including the influence of age, gender, ethnicity, clinical attachment, assessment type and summary score measures (grade point average) on performance in the workplace as measured by the Junior Doctor Assessment Tool. There were two hundred participants. There were significant correlations between performance as a Junior Doctor (combined overall score) and the grade point average (r = 0.229, P = 0.002), the score from the Year 6 Emergency Medicine attachment (r = 0.361, P < 0.001) and the Written Examination in Year 6 (r = 0.178, P = 0.014). There was no significant effect of any individual method of assessment in medical school, gender or ethnicity on the overall combined score of performance of the junior doctor. Performance on integrated assessments from medical school is correlated to performance as a practicing physician as measured by the Junior Doctor Assessment Tool. These findings support the value of combining undergraduate assessment scores to assess competence and predict future performance.
Faculty Perception and Use of Learning-Centered Strategies to Assess Student Performance
ERIC Educational Resources Information Center
Johnson, Matthew Lynn
2013-01-01
In this study, the researcher explored collegiate faculty use and perception of learning- centered strategies to assess student performance on various learning tasks. Through this study, the researcher identified the assessment strategies that faculty participants most frequently used, as well as the strategies that they perceived to be most…
ERIC Educational Resources Information Center
Bogo, Marion; Regehr, Cheryl; Logie, Carmen; Katz, Ellen; Mylopoulos, Maria; Regehr, Glenn
2011-01-01
The development of standardized, valid, and reliable methods for assessment of students' practice competence continues to be a challenge for social work educators. In this study, the Objective Structured Clinical Examination (OSCE), originally used in medicine to assess performance through simulated interviews, was adapted for social work to…
ERIC Educational Resources Information Center
Nestel, Debra; Kneebone, Roger; Nolan, Carmel; Akhtar, Kash; Darzi, Ara
2011-01-01
Assessment of clinical skills is a critical element of undergraduate medical education. We compare a traditional approach to procedural skills assessment--the Objective Structured Clinical Examination (OSCE) with the Integrated Performance Procedural Instrument (IPPI). In both approaches, students work through "stations" or…
Preszler, Ralph W; Dawe, Angus; Shuster, Charles B; Shuster, Michèle
2007-01-01
With the advent of wireless technology, new tools are available that are intended to enhance students' learning and attitudes. To assess the effectiveness of wireless student response systems in the biology curriculum at New Mexico State University, a combined study of student attitudes and performance was undertaken. A survey of students in six biology courses showed that strong majorities of students had favorable overall impressions of the use of student response systems and also thought that the technology improved their interest in the course, attendance, and understanding of course content. Students in lower-division courses had more strongly positive overall impressions than did students in upper-division courses. To assess the effects of the response systems on student learning, the number of in-class questions was varied within each course throughout the semester. Students' performance was compared on exam questions derived from lectures with low, medium, or high numbers of in-class questions. Increased use of the response systems in lecture had a positive influence on students' performance on exam questions across all six biology courses. Not only do students have favorable opinions about the use of student response systems; increased use of these systems also increases student learning.
Using Student-Produced Video to Validate Head-to-Toe Assessment Performance.
Purpora, Christina; Prion, Susan
2018-03-01
This study explored third-semester baccalaureate nursing students' perceptions of the value of using student-produced video as an approach for learning head-to-toe assessment, an essential clinical nursing skill taught in the classroom. A cognitive apprenticeship model guided the study. The researchers developed a 34-item survey. A convenience sample of 72 students enrolled in an applied assessment and nursing fundamentals course at a university in the western United States provided the data. Most students reported a videotaping process that worked, supportive faculty, valuable faculty review of their work, confidence, a sense of performance independence, the ability to identify normal assessment findings, and few barriers to learning. The results suggested that a student-produced video approach to learning head-to-toe assessment was effective. Further, the study demonstrated how to leverage available instructional technology to provide meaningful, personalized instruction and feedback to students about an essential nursing skill. [J Nurs Educ. 2018;57(3):154-158.]. Copyright 2018, SLACK Incorporated.
Student Engagement in Assessments: What Students and Teachers Find Engaging
ERIC Educational Resources Information Center
Bae, Soung; Kokka, Kari
2016-01-01
Although research has shown that student engagement is strongly related to performance on assessment tasks, especially for traditionally underserved subgroups of students, increasing student engagement has not been the goal of standardized tests of content knowledge. Recent state and federal policies, however, are changing the assessment…
Workplace-based assessment and students' approaches to learning: a qualitative inquiry.
Al-Kadri, Hanan M; Al-Kadi, Mohammed T; Van Der Vleuten, Cees P M
2013-01-01
We performed this research to assess the effect of workplace-based assessment (WBA) practice on medical students' learning approaches. The research was conducted at the King Saud bin Abdulaziz University for Health Sciences, College of Medicine from 1 March to 31 July 2012. We conducted a qualitative, phenomenological study utilizing semi-structured individual interviews with medical students exposed to WBA. The audio-taped interviews were transcribed verbatim and analyzed, and themes were identified. We performed investigators' triangulation and member checking with clinical supervisors, and we triangulated the data with a similar study performed prior to the implementation of WBA. WBA results in variable learning approaches. Based on several affecting factors (clinical supervisors, faculty-given feedback, and assessment function), students may swing between surface, deep, and effort-and-achievement learning approaches. Students' and supervisors' orientation on the process of WBA, utilization of peer feedback, and formative rather than summative assessment facilitate successful implementation of WBA and lead to students' deeper approaches to learning. Interestingly, students and their supervisors have contradictory perceptions of WBA. A change in culture to unify students' and supervisors' perceptions of WBA, more accommodation of formative assessment, and feedback may result in students' deeper approach to learning.
ERIC Educational Resources Information Center
Krasne, Sally; Wimmers, Paul F.; Relan, Anju; Drake, Thomas A.
2006-01-01
Formative assessments are systematically designed instructional interventions to assess and provide feedback on students' strengths and weaknesses in the course of teaching and learning. Despite their known benefits to student attitudes and learning, medical school curricula have been slow to integrate such assessments into the curriculum. This…
The "Mozart Effect" and the Mathematical Connection
ERIC Educational Resources Information Center
Taylor, Judy M.; Rowe, Beverly J.
2012-01-01
Educators are always looking for ways to enhance the performance of students on outcome assessments. There is a growing body of research showing the benefits of music on educational performance. The purpose of this study was to determine if a "Mozart Effect" improves student performance on outcome assessments in mathematics. In this study, during…
ERIC Educational Resources Information Center
Cui, Ying; Gierl, Mark; Guo, Qi
2016-01-01
The purpose of the current investigation was to describe how artificial neural networks (ANNs) can be used to interpret student performance on cognitive diagnostic assessments (CDAs) and to evaluate the performance of ANNs using simulation results. CDAs are designed to measure student performance on problem-solving tasks and provide useful…
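As a rough, hypothetical illustration of the general idea (not the authors' implementation), the sketch below trains a small neural network to recover attribute-mastery profiles from simulated item-response patterns generated under a DINA-like rule. The Q-matrix, slip and guess rates, network size, and all other values are assumptions made up for this example.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_examinees, n_attributes = 2000, 3

# Hypothetical Q-matrix: which attributes each of 8 items requires (rows = items).
Q = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0],
              [1, 0, 1], [0, 1, 1], [1, 1, 1], [1, 0, 0]])

# Random attribute profiles; a correct response requires all needed attributes,
# perturbed by assumed slip (.1) and guess (.2) probabilities.
profiles = rng.integers(0, 2, size=(n_examinees, n_attributes))
eta = (profiles @ Q.T == Q.sum(axis=1)).astype(float)   # 1 if all required attributes mastered
p_correct = np.where(eta == 1, 0.9, 0.2)
responses = (rng.random(p_correct.shape) < p_correct).astype(int)

# Encode each attribute profile as a class label such as "101".
labels = np.array(["".join(map(str, row)) for row in profiles])

X_train, X_test, y_train, y_test = train_test_split(
    responses, labels, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print(f"profile classification accuracy: {net.score(X_test, y_test):.2f}")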
Student performance on practical gross anatomy examinations is not affected by assessment modality.
Meyer, Amanda J; Innes, Stanley I; Stomski, Norman J; Armson, Anthony J
2016-01-01
Anatomical education is becoming modernized, not only in its teaching and learning, but also in its assessment formats. Traditional "steeplechase" examinations are being replaced with online gross anatomy examinations. The aims of this study were to: (1) determine if online anatomy practical examinations are equivalent to traditional anatomy practical examinations; and (2) examine if students' perceptions of the online or laboratory testing environments influenced their performance on the examinations. In phase one, 10 third-year students were interviewed to generate perception items to which five anatomy lecturers assigned content validity. In phase two, students' gross anatomical knowledge was assessed by examinations in two modes and their perceptions were examined using the devised survey instrument. Forty-five second-year chiropractic students voluntarily participated in phase two. The two randomly allocated groups completed the examinations in a sequential cross-over manner. Student performance on the gross anatomy examination was not different between traditional "steeplechase" (mean ± standard deviation (SD): 69 ± 11%) and online (68 ± 15%) modes. The majority of students (87%) agreed that they felt comfortable using computers for gross anatomy examinations. However, fewer students found it easy to orientate images of cadaver specimens online. The majority of students (85%) agreed that they felt comfortable working with cadavers but there was less agreement on the effect of moving around the laboratory during practical examinations. These data will allow anatomists to confidently implement online assessments without fear of jeopardizing academic rigor or student performance. © 2015 American Association of Anatomists.
ERIC Educational Resources Information Center
Marshall, Joy Morgan
2014-01-01
Higher expectations are on all parties to ensure students successfully perform on standardized tests. Specifically in North Carolina agriculture classes, students are given a CTE Post Assessment to measure knowledge gained and proficiency. Prior to students taking the CTE Post Assessment, teachers have access to a test item bank system that…
Unique Considerations for Assessing the Learning Media of Students Who Are Deaf-Blind
ERIC Educational Resources Information Center
McKenzie, Amy R.
2009-01-01
The use of current assessment results is an essential part of the Individualized Education Program (IEP) process for students with disabilities. The results of assessments allow the IEP team to write accurate statements of present levels of performance and thus student-centered goals and objectives. For students with visual impairments, including…
ERIC Educational Resources Information Center
Pennaforte, Antoine
2016-01-01
This paper investigates how student-workers' performance can be assessed through the notion of work-role performance, on the basis of three behavioral-related dimensions (proficiency, adaptivity, and proactivity), and proposes a definition of performance prior to graduation. By taking into account the accumulation of work experience, this article…
ERIC Educational Resources Information Center
Vatne, Stacy Jennifer
2010-01-01
The purpose of my study was to understand undergraduate music performance students' perceptions of their experiences as music performance majors and to assess music student positionality. Music student positionality, music students' perceptions of their place in the university setting, involves music majors' perceptions of their relationships to…
ERIC Educational Resources Information Center
Blom, Diana; Encarnacao, John
2012-01-01
The study investigates criteria chosen by music students for peer and self assessment of both the rehearsal process and performance outcome of their rock groups. The student-chosen criteria and their explanations of these criteria were analysed in relation to Birkett's skills taxonomy of "soft" and "hard" skills. In the rehearsal process, students…
The Effect of Classroom Performance Assessment on EFL Students' Basic and Inferential Reading Skills
ERIC Educational Resources Information Center
El-Koumy, Abdel Salam Abdel Khalek
2009-01-01
The purpose of this study was to investigate the effect of classroom performance assessment on the EFL students' basic and inferential reading skills. A pretest-posttest quasi-experimental design was employed in the study. The subjects of the study consisted of 64 first-year secondary school students in Menouf Secondary School for Boys at Menoufya…
ERIC Educational Resources Information Center
Laitusis, Cara Cahalan; Maneckshana, Behroz; Monfils, Lora; Ahlgrim-Delzell, Lynn
2009-01-01
The purpose of this study was to examine Differential Item Functioning (DIF) by disability groups on an on-demand performance assessment for students with severe cognitive impairments. Researchers examined the presence of DIF for two comparisons. One comparison involved students with severe cognitive impairments who served as the reference group…
ERIC Educational Resources Information Center
Vu, Nu Viet; And Others
1992-01-01
The use of a performance-based assessment of senior medical students' clinical skills utilizing standardized patients was evaluated, with 6,804 student-patient encounters involving 405 students over 6 years. Results provide evidence for test security, content validity, construct validity, reliability, and test ability to discriminate a wide range…
ERIC Educational Resources Information Center
Peterman, Karen; Cranston, Kayla A.; Pryor, Marie; Kermish-Allen, Ruth
2015-01-01
This case study was conducted within the context of a place-based education project that was implemented with primary school students in the USA. The authors and participating teachers created a performance assessment of standards-aligned tasks to examine 6-10-year-old students' graph interpretation skills as part of an exploratory research…
ERIC Educational Resources Information Center
Stone, Elizabeth; Cook, Linda
2009-01-01
Research studies have shown that a smaller percentage of students with learning disabilities participate in state assessments than do their peers without learning disabilities. Furthermore, there is almost always a performance gap between these groups of students on these assessments. It is important to evaluate whether a performance gap on a…
van Dulmen, Sandra; Tromp, Fred; Grosfeld, Frans; ten Cate, Olle; Bensing, Jozien
2007-01-01
Seventy second-year medical students volunteered to participate in a study evaluating the impact of the assessment of simulated bad news consultations on their physiological and psychological stress and communication performance. Measurements were taken of salivary cortisol, systolic and diastolic blood pressure, heart rate, state anxiety, and global stress using a Visual Analogue Scale. The subjects were asked to take three salivary cortisol samples on the assessment day as well as on a quiet control day, and to take all other measures 5 min before and 10 min after conducting the bad news consultation. Consultations were videotaped and analyzed using the information-giving subscale of the Amsterdam Attitude and Communication Scale (AACS), the Roter Interaction Analysis System (RIAS), and the additional non-verbal measures of smiling, nodding, and patient-directed gaze. Repeated-measures MANOVAs were used to test the difference between the cortisol measurements taken on the assessment day and the control day. Linear regression analysis was used to determine the association between physiological and psychological stress measures and the students' communication performance. The analyses were restricted to the sample of 57 students who had complete data records. In anticipation of the communication assessment, cortisol levels remained elevated, indicating a heightened anticipatory stress response. After the assessment, the students' systolic blood pressure, heart rate, globally assessed stress level, and state anxiety diminished. Pre-consultation stress did not appear to be related to the quality of the students' communication performance. Non-verbal communication could be predicted by pre-consultation physiological stress levels: patient-directed gaze occurred more often the higher the students' systolic blood pressure and heart rate. Post-consultation heart rate remained higher the more often the students had looked at the patient and the more information they had provided; however, heart rate diminished the more often the students had reassured the patient. These findings suggest that, in evaluating students' communication performance, there is a need to take their stress levels into account.
ERIC Educational Resources Information Center
Termos, Mohamad Hani
2013-01-01
The Classroom Performance System (CPS) is an instructional technology that increases student performance and promotes active learning. This study assessed the effect of the CPS on student participation, attendance, and achievement in multicultural college-level anatomy and physiology classes, where students' first spoken language is not English.…
Effects of Prompting Multiple Solutions for Modelling Problems on Students' Performance
ERIC Educational Resources Information Center
Schukajlow, Stanislaw; Krug, André; Rakoczy, Katrin
2015-01-01
Prompting students to construct multiple solutions for modelling problems with vague conditions has been found to be an effective way to improve students' performance on interest-oriented measures. In the current study, we investigated the influence of this teaching element on students' performance. To assess the impact of prompting multiple…
Factors Affecting Performance of Undergraduate Students in Construction Related Disciplines
ERIC Educational Resources Information Center
Olatunji, Samuel Olusola; Aghimien, Douglas Omoregie; Oke, Ayodeji Emmanuel; Olushola, Emmanuel
2016-01-01
Academic performance of students in Nigerian institutions has been of much concern to all and sundry, hence the need to assess the factors affecting the performance of undergraduate students in construction-related disciplines in Nigeria. A survey design was employed, with questionnaires administered to students in the department of Quantity Surveying,…
Science Laboratory Environment and Academic Performance
NASA Astrophysics Data System (ADS)
Aladejana, Francisca; Aderibigbe, Oluyemisi
2007-12-01
The study determined how students assess the various components of their science laboratory environment. It also identified how the laboratory environment affects students' learning outcomes. A modified ex-post facto design was used. A sample of 328 randomly selected students was taken from the population of all Senior Secondary School chemistry students in a state in Nigeria. The research instrument, the Science Laboratory Environment Inventory (SLEI), designed and validated by Fraser et al. (Sci Educ 77:1-24, 1993), was administered to the selected students. Data analysis was done using descriptive statistics and product-moment correlation. Findings revealed that students could assess the five components (student cohesiveness, open-endedness, integration, rule clarity, and material environment) of the laboratory environment. Student cohesiveness received the highest assessment, while material environment received the lowest. The results also showed that the five components of the science laboratory environment are positively correlated with students' academic performance. The findings are discussed with a view to improving the quality of the laboratory environment, subsequent academic performance in science, and ultimately the enrolment and retention of learners in science.
Stetzik, Lucas; Deeter, Anthony; Parker, Jamie; Yukech, Christine
2015-06-23
A traditional lecture-based pedagogy conveys information and content while lacking sufficient development of critical thinking and problem-solving skills. A puzzle-based pedagogy creates a broader contextual framework and fosters critical thinking as well as logical reasoning skills that can then be used to improve a student's performance on content-specific assessments. This paper describes a pedagogical comparison of traditional lecture-based teaching and puzzle-based teaching in a Human Anatomy and Physiology II Lab. Using a single-subject/cross-over design, half of the students from seven sections of the course were taught using one type of pedagogy for the first half of the semester, and then taught with a different pedagogy for the second half of the semester. The other half of the students were taught the same material but with the order of the pedagogies reversed. Students' performance on quizzes and exams specific to the course, and on in-class assignments specific to this study, was assessed for learning outcomes (the ability to form the correct conclusion or recall specific information) and for authentic academic performance as described in Am J Educ 104:280-312, 1996. Our findings suggest a significant improvement in students' performance on standard course-specific assessments using a puzzle-based pedagogy versus a traditional lecture-based teaching style. Quiz and test scores improved by 2.1% and 0.4%, respectively, under the puzzle-based pedagogy versus traditional lecture-based teaching. Additionally, the assessments of authentic academic performance may only effectively measure a broader conceptual understanding in a limited set of contexts, and not in the context of a Human Anatomy and Physiology II Lab. In conclusion, a puzzle-based pedagogy, when compared to traditional lecture-based teaching, can effectively enhance the performance of students on standard course-specific assessments, even when the assessments only test a limited conceptual understanding of the material.
Meissner, Theresa M; Kloppe, Cordula; Hanefeld, Christoph
2012-04-14
Immediate bystander cardiopulmonary resuscitation (CPR) significantly improves survival after a sudden cardiopulmonary collapse. This study assessed the basic life support (BLS) knowledge and performance of high school students before and after CPR training. This study included 132 teenagers (mean age 14.6 ± 1.4 years). Students completed a two-hour training course that provided theoretical background on sudden cardiac death (SCD) and a hands-on CPR tutorial. They were asked to perform BLS on a manikin to simulate an SCD scenario before the training. Afterwards, participants encountered the same scenario and completed a questionnaire for self-assessment of their pre- and post-training confidence. Four months later, we assessed the knowledge retention rate of the participants with a BLS performance score. Before the training, 29.5% of students performed chest compressions as compared to 99.2% post-training (P < 0.05). At the four-month follow-up, 99% of students still performed correct chest compressions. The overall improvement, assessed by the BLS performance score, was also statistically significant (median of 4 and 10 pre- and post-training, respectively, P < 0.05). After the training, 99.2% stated that they felt confident about performing CPR, as compared to 26.9% (P < 0.05) before the training. BLS training in high school seems highly effective considering the minimal amount of previous knowledge the students possess. We observed significant improvement and a good retention rate four months after training. Increasing the number of trained students may minimize the reluctance to conduct bystander CPR and increase the number of positive outcomes after sudden cardiopulmonary collapse.
ERIC Educational Resources Information Center
Adkins, Denice
2014-01-01
This paper looks at results from the 2009 Programme for International Student Assessment to examine the effects of school libraries on students' test performance, with specific focus on the average of students' family wealth in a school. The paper documents students' school library use and students' home possessions to indicate how school…
Weeks, Benjamin K; Carty, Christopher P; Horan, Sean A
2012-10-25
The single-leg squat (SLS) is a common test used by clinicians for the musculoskeletal assessment of the lower limb. The aim of the current study was to reveal the kinematic parameters used by experienced and inexperienced clinicians to determine SLS performance and to establish the reliability of such assessments. Twenty-two healthy, young adults (23.8 ± 3.1 years) performed three SLSs on each leg whilst being videoed. Three-dimensional data for the hip and knee were recorded using a 10-camera optical motion analysis system (Vicon, Oxford, UK). SLS performance was rated from video data using a 10-point ordinal scale by experienced musculoskeletal physiotherapists and student physiotherapists. All ratings were undertaken a second time at least two weeks after the first by the same raters. Stepwise multiple regression analysis was performed to determine kinematic predictors of SLS performance scores, and inter- and intra-rater reliability was determined using a two-way mixed model to generate intra-class correlation coefficients (ICC3,1) of consistency. One SLS per leg for each participant was used for analysis, providing 44 SLSs in total. Eight experienced physiotherapists and eight physiotherapy students agreed to rate each SLS. Variance in physiotherapist scores was predicted by peak knee flexion, knee medio-lateral displacement, and peak hip adduction (R2 = 0.64, p = 0.01), while variance in student scores was predicted only by peak knee flexion and knee medio-lateral displacement (R2 = 0.57, p = 0.01). Inter-rater reliability was good for physiotherapists (ICC3,1 = 0.71) and students (ICC3,1 = 0.60), whilst intra-rater reliability was excellent for physiotherapists (ICC3,1 = 0.81) and good for students (ICC3,1 = 0.71). Physiotherapists and students are both capable of reliable assessment of SLS performance. Physiotherapist assessments, however, bear stronger relationships to lower limb kinematics and are more sensitive to hip joint motion than student assessments.
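[Editorial note: the reliability coefficient cited above is the two-way mixed, single-rater, consistency form, ICC(3,1). The sketch below, using only NumPy, shows how that coefficient could be computed from a subjects-by-raters score matrix; the rating matrix is hypothetical and is not data from the study.]

```python
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1): two-way mixed model, single rater, consistency definition.

    scores -- array of shape (n_subjects, k_raters), one rating per cell.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means

    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-raters
    ss_error = ss_total - ss_rows - ss_cols          # residual

    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Hypothetical ratings: 6 squat videos scored by 3 raters on a 10-point scale.
ratings = np.array([
    [7, 8, 7],
    [5, 5, 6],
    [9, 9, 8],
    [4, 5, 4],
    [6, 7, 6],
    [8, 8, 9],
], dtype=float)
print(f"ICC(3,1) = {icc_3_1(ratings):.2f}")
```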
Franklin, Brandon M.; Xiang, Lin; Collett, Jason A.; Rhoads, Megan K.
2015-01-01
Student populations are diverse such that different types of learners struggle with traditional didactic instruction. Problem-based learning has existed for several decades, but there is still controversy regarding the optimal mode of instruction to ensure success at all levels of students' past achievement. The present study addressed this problem by dividing students into the following three instructional groups for an upper-level course in animal physiology: traditional lecture-style instruction (LI), guided problem-based instruction (GPBI), and open problem-based instruction (OPBI). Student performance was measured by three summative assessments consisting of 50% multiple-choice questions and 50% short-answer questions as well as a final overall course assessment. The present study also examined how students of different academic achievement histories performed under each instructional method. When student achievement levels were not considered, the effects of instructional methods on student outcomes were modest; OPBI students performed moderately better on short-answer exam questions than both LI and GPBI groups. High-achieving students showed no difference in performance for any of the instructional methods on any metric examined. In students with low-achieving academic histories, OPBI students largely outperformed LI students on all metrics (short-answer exam: P < 0.05, d = 1.865; multiple-choice question exam: P < 0.05, d = 1.166; and final score: P < 0.05, d = 1.265). They also outperformed GPBI students on short-answer exam questions (P < 0.05, d = 1.109) but not multiple-choice exam questions (P = 0.071, d = 0.716) or final course outcome (P = 0.328, d = 0.513). These findings strongly suggest that typically low-achieving students perform at a higher level under OPBI as long as the proper support systems (formative assessment and scaffolding) are provided to encourage student success. PMID:26628656
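[Editorial note: the effect sizes quoted above are Cohen's d values. A minimal sketch of the pooled-standard-deviation form of d follows; the two score arrays are invented for illustration and do not reproduce the study's data.]

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d for two independent groups, using a pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = group_a.var(ddof=1), group_b.var(ddof=1)
    pooled_sd = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (group_a.mean() - group_b.mean()) / pooled_sd

# Hypothetical short-answer exam scores for two instructional groups.
opbi_scores = np.array([78.0, 84.0, 73.0, 90.0, 81.0, 76.0])
li_scores = np.array([62.0, 70.0, 58.0, 66.0, 64.0, 60.0])
print(f"d = {cohens_d(opbi_scores, li_scores):.2f}")
```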
Greenwood, Kristin Curry; Rico, Janet; Nalliah, Romesh; DiVall, Margarita
2017-01-01
Objective. To design and implement a series of activities focused on developing interprofessional communication skills and to assess the impact of the activities on students’ attitudes and achievement of educational goals. Design. Prior to the first pharmacy practice skills laboratory session, pharmacy students listened to a classroom lecture about team communication and viewed short videos describing the roles, responsibilities, and usual work environments of four types of health care professionals. In each of four subsequent laboratory sessions, students interacted with a different standardized health care professional role-played by a pharmacy faculty member who asked them a medication-related question. Students responded in verbal and written formats. Assessment. Student performance was assessed with a three-part rubric. The impact of the exercise was assessed by conducting pre- and post-intervention surveys and analyzing students’ performance on relevant Center for the Advancement of Pharmacy Education (CAPE) outcomes. Survey results showed improvement in student attitudes related to team-delivered care. Students’ performance on the problem solver and collaborator CAPE outcomes improved, while performance on the educator outcome worsened. Conclusions. The addition of an interprofessional communication activity with standardized health care professionals provided the opportunity for students to develop skills related to team communication. Students felt the activity was valuable and realistic; however, analysis of outcome achievement from the exercise revealed a need for more exposure to team communication skills. PMID:28289305
Innovative assessment paradigm to enhance student learning in engineering education
NASA Astrophysics Data System (ADS)
El-Maaddawy, Tamer
2017-11-01
Incorporation of student self-assessment (SSA) in engineering education offers opportunities to support and encourage learner-led learning. This paper presents an innovative assessment paradigm that integrates formative assessment, summative assessment, and SSA to enhance student learning. The assessment innovation was implemented in a senior-level civil engineering design course. Direct evidence of the impact of employing this innovation on student learning and achievement was derived by monitoring student academic performance in direct assessment tasks throughout the semester. Students' feedback demonstrated the effectiveness of this innovation in improving their understanding of course topics and in building their autonomy, independent judgement, and self-regulated learning skills.
Self-Esteem & Academic Performance among University Students
ERIC Educational Resources Information Center
Arshad, Muhammad; Zaidi, Syed Muhammad Imran Haider; Mahmood, Khalid
2015-01-01
The current study was conducted to assess the self-esteem and academic performance among university students after arising of several behavioral and educational problems. A total number of 80 students, 40 male students and 40 female students were selected through purposive sampling from G. C. University Faisalabad. The participants were…
ERIC Educational Resources Information Center
Blue, Elfreda V.; Alexander, Tammy
2009-01-01
Students with learning disabilities face real reading challenges. Research into the reading performance of culturally diverse students indicates improved reading performance when text matches students' cultural perspective. This quasi-experimental research investigates whether Caucasian and African American students…
Lu, Fletcher; Lemonde, Manon
2013-12-01
The objective of this study was to assess whether online teaching delivery produces student test performance comparable to the traditional face-to-face approach, irrespective of academic aptitude. The study involves a quasi-experimental comparison of student performance in an undergraduate health science statistics course partitioned in two ways. The first partition involves one group of students taught with a traditional face-to-face classroom approach and the other taught through a completely online instructional approach. The second partition categorized the students into higher and lower academically performing groups based on their assignment grades during the course. To reduce the possibility of confounding variables, the same instructor taught both groups, covering the same subject material, using the same assessment methods, and delivering the course over the same period of time. The results of this study indicate that online teaching delivery is as effective as a traditional face-to-face approach in terms of producing comparable student test performance, but only if the student is academically higher performing. For academically lower performing students, the online delivery method produced significantly poorer test results compared with lower performing students taught in a traditional face-to-face environment.
ERIC Educational Resources Information Center
Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion
2016-01-01
This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks were iteratively developed to assess student understanding of an array of physical science concepts, including net force,…
ERIC Educational Resources Information Center
Yeo, Cheok Heng; Ke, Keneth; Chatterjee, Bikram
2014-01-01
The aim of this study is to investigate the relationship between attempting online formative assessments and student performance. The study is motivated by the dearth of research in the area of online formative assessment. The study reports mixed results for this relationship. A relationship was reported between attempting online formative…
ERIC Educational Resources Information Center
Prokop, Kristie L.
2011-01-01
On assessments such as the Trends in International Mathematics and Science Study (TIMSS) (Stigler & Hiebert, 1999) and the Programme for International Student Assessment (PISA) ("PISA 2006 Science Competencies for Tomorrow's World", 2007), students in the United States have not been performing as well in mathematics as students in other countries. In…
Podcasts and Mobile Assessment Enhance Student Learning Experience and Academic Performance
ERIC Educational Resources Information Center
Morris, Neil P.
2010-01-01
The aim of this study was to combine podcasts of lectures with mobile assessments (completed via SMS on mobile telephones) to assess the effect on examination performance. Students (n = 100) on a final year, research-led, module were randomly divided into equal sized control and trial groups. The trial group were given access to podcasts/mobile…
Assessing clinical competency in the health sciences
NASA Astrophysics Data System (ADS)
Panzarella, Karen Joanne
To test the success of integrated curricula in schools of health sciences, meaningful measurements of student performance are required to assess clinical competency. This research project analyzed a new performance assessment tool, the Integrated Standardized Patient Examination (ISPE), for assessing clinical competency: specifically, Doctor of Physical Therapy (DPT) students' clinical competence, defined as the ability to integrate basic science knowledge with clinical communication skills. Thirty-four DPT students performed two ISPE cases, one of a patient who had sustained a stroke and the other of a patient with a herniated lumbar disc. Cases were portrayed by standardized patients (SPs) in a simulated clinical setting. Each case was scored by an expert evaluator in the exam room and then by one investigator and the students themselves via videotape. The SPs scored each student on an overall encounter rubric. Written feedback was obtained from all participants in the study. Acceptable reliability was demonstrated via inter-rater agreement as well as inter-rater correlations on items that used a dichotomous scale, whereas the items requiring the use of the 4-point rubric were somewhat less reliable. For the entire scale, both cases had a significant correlation between the expert-investigator pair of raters: for the CVA case, r = .547, p < .05, and for the HD case, r = .700, p < .01. The SPs scored students higher than the other raters. Students' self-assessments were most closely aligned with the investigator's. Effects due to case were apparent. Evidence of content validity was gathered in the process of developing the cases and patient scenarios used in this study. Evidence of construct validity was obtained from the survey results from the experts and students. Future studies should examine the effect of rater training on reliability. Criterion or predictive validity could be further studied by comparing students' performances on the ISPE with other independent estimates of their competence. The unique integration questions of the ISPE were judged by experts and students to have good content validity, suggesting that integration, a most crucial element of clinical competence, while done in the mind of the student, can be practiced, learned, and assessed.
NASA Astrophysics Data System (ADS)
Septiani, A.; Rustaman, N. Y.
2017-02-01
A descriptive study about the implementation of performance assessment in STEM-based instruction was carried out to investigate tenth-grade vocational school students' science process skills during the teaching-learning process. Tenth-grade agriculture students were involved as research subjects, selected through a cluster random sampling technique (n=35). Performance assessment was planned both for skills demonstrated during the teaching-learning process, through observation, and for the product resulting from their engineering design practice. The procedure conducted in this study included a thinking phase (identifying the problem and sharing ideas), a designing phase, a construction phase, and an evaluation phase. Data were collected through a science process skills (SPS) test, an observation sheet on student activity, and tasks and rubrics for performance assessment during instruction. Research findings show that the implementation of performance assessment in STEM education on planting media could detect students' science process skills better through individual observation than through the SPS test. It was also found that the results of the performance assessment varied when correlated with individual SPS indicators (from weak and positive to strong and positive).
ERIC Educational Resources Information Center
Feldon, David F.; Maher, Michelle A.; Hurst, Melissa; Timmerman, Briana
2015-01-01
Faculty mentorship is thought to be a linchpin of graduate education in STEM disciplines. This mixed-method study investigates agreement between student mentees' and their faculty mentors' perceptions of the students' developing research knowledge and skills in STEM. We also compare both assessments against independent ratings of the students'…
Assessing High School Student Learning on Science Outreach Lab Activities
ERIC Educational Resources Information Center
Thomas, Courtney L.
2012-01-01
The effect of hands-on laboratory activities on secondary student learning was examined. Assessment was conducted over a two-year period, with 262 students participating the first year and 264 students the second year. Students took a prequiz, performed a laboratory activity (gas chromatography of alcohols, or photosynthesis and respiration), and…
Towards an operational definition of pharmacy clinical competency
NASA Astrophysics Data System (ADS)
Douglas, Charles Allen
The scope of pharmacy practice and the training of future pharmacists have undergone a strategic shift over the last few decades. The pharmacy profession recognizes greater pharmacist involvement in patient care activities. Towards this strategic objective, pharmacy schools are training future pharmacists to meet these new clinical demands. Pharmacy students have clerkships called Advanced Pharmacy Practice Experiences (APPEs), and these clerkships account for 30% of the professional curriculum. APPEs provide the only opportunity for students to refine clinical skills under the guidance of an experienced pharmacist. Nationwide, schools of pharmacy need to evaluate whether students have successfully completed APPEs and are ready to treat patients. Schools are left to their own devices to develop assessment programs that demonstrate to the public and regulatory agencies that students are clinically competent prior to graduation. There is no widely accepted method to evaluate whether these assessment programs actually discriminate between competent and non-competent students. The central purpose of this study is to demonstrate a rigorous method to evaluate the validity and reliability of APPE assessment programs. The method introduced in this study is applicable to a wide variety of assessment programs. To illustrate this method, the study evaluated new performance criteria with a novel rating scale. The study had two main phases. In the first phase, a Delphi panel was created to bring together expert opinions. Pharmacy schools nominated exceptional preceptors to join a Delphi panel. Delphi is a method for achieving agreement on complex issues among experts. The principal researcher recruited preceptors representing a variety of practice settings and geographical regions. The Delphi panel evaluated and refined the new performance criteria. In the second phase, the study produced a novel set of video vignettes that portrayed student performances based on recommendations of an expert panel. Pharmacy preceptors assessed the performances with the new performance criteria. Estimates of reliability and accuracy from preceptors' assessments can be used to establish benchmarks for future comparisons. Findings from the first phase suggested that preceptors held a unique perspective, in which APPE assessments are grounded in relevance to clinical activities. The second phase analyzed assessment results from pharmacy preceptors who watched the video simulations. Reliability results were higher for non-randomized compared to randomized video simulations. Accuracy results showed preceptors more readily identified high and low student performances compared to average students. These results indicated the need for pharmacy preceptor training in performance assessment. The study illustrated a rigorous method to evaluate the validity and reliability of APPE assessment instruments.
Kirkup, Michele L; Adams, Brooke N; Meadows, Melinda L; Jackson, Richard
2016-06-01
A traditional summative grading structure, used at Indiana University School of Dentistry (IUSD) for more than 30 years, was identified by faculty as outdated for assessing students' clinical performance. In an effort to change the status quo, a feedback-driven assessment was implemented in 2012 to provide a constructive assessment tool acceptable to both faculty and students. Building on the successful non-graded clinical evaluation employed at Baylor College of Dentistry, IUSD implemented a streamlined electronic formative feedback model (FFM) to assess students' daily clinical performance. An important addition to this evaluation tool was the inclusion of routine student self-assessment opportunities. The aim of this study was to determine faculty and student response to the new assessment instrument. Following training sessions, anonymous satisfaction surveys were examined for the three user groups: clinical faculty (60% response rate), third-year (D3) students (72% response rate), and fourth-year (D4) students (57% response rate). In the results, 70% of the responding faculty members preferred the FFM over the summative model; however, 61.8% of the D4 respondents preferred the summative model, reporting insufficient assessment time and low faculty participation. The two groups of students had different responses to the self-assessment component: 70.2% of the D4 respondents appreciated clinical self-assessment compared to 46% of the D3 respondents. Overall, while some components of the FFM assessment were well received, a phased approach to implementation may have facilitated a transition more acceptable to both faculty and students. Improvements are being made in an attempt to increase overall satisfaction.
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Liang, Chaoyun; Chen, Yi-Hui
2013-01-01
This study explored the reliability and validity of Web-based portfolio self-assessment. Participants were 72 senior high school students enrolled in a computer application course. The students created learning portfolios, viewed peers' work, and performed self-assessment on the Web-based portfolio assessment system. The results indicated: 1)…
ERIC Educational Resources Information Center
Lent, Chad
2012-01-01
Schools and educators have been increasingly educating students with disabilities in the general education setting, while at the same time the level of accountability for achieving positive outcomes on high-stakes assessments for all students has increased. As educators feel pressure for their students to perform well on state assessments and meet…
Palmer, Edward J; Devitt, Peter G
2008-01-01
Background Teachers strive to motivate their students to be self-directed learners. One of the methods used is to provide online formative assessment material. The concept of formative assessment and use of these processes is heavily promoted, despite limited evidence as to their efficacy. Methods Fourth-year medical students, in their first year of clinical work, were divided into four groups. In addition to the usual clinical material, three of the groups were provided with some form of supplementary learning material. For two groups, this was provided as online formative assessment. The amount of time students spent on the supplementary material was measured, their opinion on learning methods was surveyed, and their performance in summative exams at the end of their surgical attachments was measured. Results The performance of students was independent of any educational intervention imposed by this study. Despite its ready availability and promotion, student use of the online formative tools was poor. Conclusion Formative learning is an ideal not necessarily embraced by students. If formative assessment is to work, students need to be encouraged to participate, probably by implementing some form of summative assessment. PMID:18471324
Formats for Assessing Students' Self-Assessment Abilities.
ERIC Educational Resources Information Center
Miller, Maurice; Turner, Tamrah
The paper examines some self-assessment techniques used with handicapped students and discusses the advantages and disadvantages of these techniques. The use of self-rating scales is reviewed, and questionable results are cited. Another method, in which students view an item and estimate whether they can perform it before attempting it…
NASA Astrophysics Data System (ADS)
Fouche, Jaunine
The purpose of this nonequivalent control group design study was to evaluate the effectiveness of metacognitive and self-regulatory strategy use on the assessment achievement of 215 9th-grade, residential physics students from low socioeconomic status (low-SES) backgrounds. Students from low-SES backgrounds often lack the self-regulatory habits and metacognitive strategies to improve academic performance. In an effort to increase these scores and to increase student self-regulation and metacognition with regard to achievement in physics, this study investigated the use of metacognitive and self-regulatory strategies specifically as they apply to students' use of their own assessment data. Traditionally, student performance data is used by adults to inform instructional and curricular decisions. However, students are rarely given or asked to evaluate their own performance data. Moreover, students are not shown how to use this data to plan for or inform their own learning. It was found that students in the overall and algebra-ready treatment groups performed significantly better than their control group peers. These results are favorable for inclusion of strategies involving self-regulation and metacognition in secondary physics classrooms. Although these results may be applicable across residential, impoverished populations, further research is needed with non-residential populations.
Effects of ICT Assisted Real and Virtual Learning on the Performance of Secondary School Students
ERIC Educational Resources Information Center
Deka, Monisha; Jena, Ananta Kumar
2017-01-01
The study aimed to assess the effect of ICT-assisted real and virtual learning on the performance of secondary school students, compared with the traditional approach. A non-equivalent pretest-posttest quasi-experimental design was used to assess and relate the effects of the independent variable (virtual learning) on the dependent variable (learning performance).…
ERIC Educational Resources Information Center
Mitchell, Alison; Baron, Lauren; Macaruso, Paul
2018-01-01
Screening and monitoring student reading progress can be costly and time consuming. Assessment embedded within the context of online instructional programs can capture ongoing student performance data while limiting testing time outside of instruction. This paper presents two studies that examined the validity of using performance measures from a…
ERIC Educational Resources Information Center
Hansen, Michele J.; Meshulam, Susan; Parker, Brooke
2013-01-01
National attention is focused on the persistently high failure rates for students enrolled in math courses, and the search for strategies to change these outcomes is ongoing. This study used a mixed-method research design to assess the effectiveness of a learning community course designed to improve the math performance levels of first-year students.…
ERIC Educational Resources Information Center
Hoyle, Craig D.; O'Dwyer, Laura M.; Chang, Quincy
2011-01-01
The Maine Department of Education wanted to use longitudinal data from its data system to better understand whether and how student and school characteristics are associated with student performance on the state-mandated Maine High School Assessment (MHSA). It was particularly interested in understanding the factors associated with changes in test…
ERIC Educational Resources Information Center
Coleman, Howard D.
2013-01-01
Since the inception of high-stakes standardized testing, schools have been labeled as either succeeding or failing based on student standardized assessment performance. If students perform adequately, the building principal receives acknowledgement for being an effective instructional leader. Conversely, if students perform poorly, the principal…
Assessment of school mathematics: Teachers' perceptions and practices
NASA Astrophysics Data System (ADS)
Pfannkuch, Maxine
2001-12-01
This is the first report of a proposed ten-year interval longitudinal study about teacher assessment practice in Auckland, New Zealand. Interviews with teachers of Year 3, 6, 8, 10, and 13 students are analysed. These interviews indicate that primary teachers are using a variety of assessment strategies in a mastery-based system. Their judgement of mathematical performance is dominated by the belief that all students must feel that they are achieving. The secondary teacher interviews indicate common use of alternative assessment strategies in non-examination classes. Judgement of student performance is benchmarked against national examinations. It is conjectured that an education system effect determines teachers' assessment practices.
Oyebola, D D; Adewoye, O E; Iyaniwura, J O; Alada, A R; Fasanmade, A A; Raji, Y
2000-01-01
This study was designed to compare the performance of medical students in physiology when assessed by multiple-choice questions (MCQs) and short essay questions (SEQs). The study also examined the influence of factors such as age, sex, O' level grades, and JAMB scores on performance in the MCQs and SEQs. A structured questionnaire was administered to 264 medical students four months before the Part I MBBS examination. Apart from personal data of each student, the questionnaire sought information on the JAMB scores and GCE O' level grades of each student in English Language, Biology, Chemistry, Physics, and Mathematics. The physiology syllabus was divided into five parts, and the students were administered separate examinations (tests) on each part. Each test consisted of MCQs and SEQs. Performance in MCQs and SEQs was compared. Also, the effects of JAMB scores and GCE O' level grades on performance in both the MCQs and SEQs were assessed. The results showed that the students performed better in all MCQ tests than in the SEQs. JAMB scores and O' level English Language grade had no significant effect on students' performance in MCQs and SEQs. However, O' level grades in Biology, Chemistry, Physics, and Mathematics had significant effects on performance in MCQs and SEQs. Inadequate knowledge of physiology and inability to present information in a logical sequence are believed to be major factors contributing to the poorer performance in the SEQs compared with MCQs. In view of the significant association between performance in MCQs and SEQs and GCE O' level grades in science subjects and mathematics, it was recommended that both JAMB results and GCE results in the four O' level subjects above be considered when selecting candidates for admission into medical schools.
Students' Metacomprehension Knowledge: Components That Predict Comprehension Performance
ERIC Educational Resources Information Center
Zabrucky, Karen M.; Moore, DeWayne; Agler, Lin-Miao Lin; Cummings, Andrea M.
2015-01-01
In the present study, we assessed students' metacomprehension knowledge and examined the components of knowledge most related to comprehension of expository texts. We used the Revised Metacomprehension Scale (RMCS) to investigate the relations between students' metacomprehension knowledge and comprehension performance. Students who evaluated and…
Assessing students' conceptual knowledge of electricity and magnetism
NASA Astrophysics Data System (ADS)
McColgan, Michele W.; Finn, Rose A.; Broder, Darren L.; Hassel, George E.
2017-12-01
We present the Electricity and Magnetism Conceptual Assessment (EMCA), a new assessment aligned with second-semester introductory physics courses. Topics covered include electrostatics, electric fields, circuits, magnetism, and induction. We have two motives for writing a new assessment. First, we find other assessments such as the Brief Electricity and Magnetism Assessment and the Conceptual Survey on Electricity and Magnetism not well aligned with the topics and content depth of our courses. We want to test introductory physics content at a level appropriate for our students. Second, we want the assessment to yield scores and gains comparable to the widely used Force Concept Inventory (FCI). After five testing and revision cycles, the assessment was finalized in early 2015 and is available online. We present performance results for a cohort of 225 students at Siena College who were enrolled in our algebra- and calculus-based physics courses during the spring 2015 and 2016 semesters. We provide pretest, post-test, and gain analyses, as well as individual question and whole test statistics to quantify difficulty and reliability. In addition, we compare EMCA and FCI scores and gains, and we find that students' FCI scores are strongly correlated with their performance on the EMCA. Finally, the assessment was piloted in an algebra-based physics course at George Washington University (GWU). We present performance results for a cohort of 130 GWU students and we find that their EMCA scores are comparable to the scores of students in our calculus-based physics course.
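[Editorial note: pre/post analyses like the one above are commonly summarized with Hake's normalized gain, g = (post − pre) / (100 − pre), and the FCI comparison rests on a Pearson correlation. The sketch below shows both calculations on invented percentage scores; it is not the EMCA data set.]

```python
import numpy as np

# Hypothetical pretest/post-test percentages for one course section,
# plus matched FCI post-test scores for the same students.
pre = np.array([32.0, 45.0, 28.0, 51.0, 40.0, 36.0])
post = np.array([61.0, 72.0, 55.0, 80.0, 66.0, 70.0])
fci = np.array([58.0, 75.0, 50.0, 82.0, 63.0, 68.0])

# Hake's normalized gain per student, then the class mean.
gain = (post - pre) / (100.0 - pre)
print(f"mean normalized gain = {gain.mean():.2f}")

# Pearson correlation between the post-test scores and the FCI scores.
r = np.corrcoef(post, fci)[0, 1]
print(f"Pearson r = {r:.2f}")
```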
Westby, Carol; Washington, Karla N
2017-07-26
The aim of this tutorial is to support speech-language pathologists' (SLPs') application of the International Classification of Functioning, Disability and Health (ICF) in assessment and treatment practices with children with language impairment. This tutorial reviews the framework of the ICF, describes the implications of the ICF for SLPs, distinguishes between students' capacity to perform a skill in a structured context and the actual performance of that skill in naturalistic contexts, and provides a case study of an elementary school child to demonstrate how the principles of the ICF can guide assessment and intervention. The Scope of Practice and Preferred Practice documents for the American Speech-Language-Hearing Association identify the ICF as the framework for practice in speech-language pathology. This tutorial will facilitate clinicians' ability to identify personal and environmental factors that influence students' skill capacity and skill performance, assess students' capacity and performance, and develop impairment-based and socially based language goals linked to Common Core State Standards that build students' language capacity and their communicative performance in naturalistic contexts.
Romito, Laura M; Eckert, George J
2011-05-01
This study assessed biomedical science content acquisition from problem-based learning (PBL) and its relationship to students' level of group interaction. We hypothesized that learning in preparation for exams results primarily from individual study of post-case learning objectives and that outcomes would be unrelated to students' group involvement. During dental curricular years 1 and 2, student-generated biomedical learning issues (LIs) were identified from six randomly chosen PBL cases. Knowledge and application of case concepts were assessed with quizzes based on the identified LIs prior to dissemination of the learning objectives. Students and facilitators were surveyed on students' level of group involvement for the assessed LI topics. Year 1 students had significantly higher assessment scores (p=0.0001). For both student classes, means were significantly higher for the recall item (Q1) than for the application item (Q2). Q1 scores increased along with the student's reported role for Year 1 (p=0.04). However, there was no relationship between the student's reported role and Q1 for Year 2 (p=0.20). There was no relationship between the student's reported role and Q2 for Year 1 (p=0.09) or Year 2 (p=0.19). This suggests that students' level of group involvement on the biomedical learning issues did not significantly impact students' assessment performance.
The Validation of a Case-Based, Cumulative Assessment and Progressions Examination
Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David
2016-01-01
Objective. To assess content and criterion validity, as well as reliability of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and PCOA with Pearson correlation coefficient established criterion validity. Consistent student performance using Kuder-Richardson coefficient (KR-20) since 2012 reflected reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable using a robust writing-reviewing process and psychometric analyses. PMID:26941435
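[Editorial note: the reliability statistic cited above is the Kuder-Richardson 20 coefficient for dichotomously scored items, KR-20 = k/(k−1) · (1 − Σ p_i q_i / σ²_total). A small sketch follows, using a made-up item-response matrix rather than the examination data.]

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson 20 for a students-by-items matrix of 0/1 scores."""
    n_items = responses.shape[1]
    p = responses.mean(axis=0)                       # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)    # variance of total scores
    return (n_items / (n_items - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical responses: 8 students x 6 dichotomous items.
answers = np.array([
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 1],
], dtype=float)
print(f"KR-20 = {kr20(answers):.2f}")
```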
The Role of Online Reader Experience in Explaining Students' Performance in Digital Reading
ERIC Educational Resources Information Center
Gil-Flores, Javier; Torres-Gordillo, Juan-Jesus; Perera-Rodriguez, Victor-Hugo
2012-01-01
This study explores the relationship between students' extracurricular experiences online and their performance on the Program for International Student Assessment (PISA), focusing specifically on students' competence in digital reading. The study uses a descriptive, correlational, ex post facto design. The data are taken from Spanish students'…
Paul, Fiona
2010-09-01
Cardiopulmonary resuscitation (CPR) is an essential skill taught within undergraduate nursing programmes. At the author's institution, students must pass the CPR objective structured clinical examination (OSCE) before progressing to second year. However, some students have difficulties developing competence in CPR and evidence suggests that resuscitation skills may only be retained for several months. This has implications for practice as nurses are required to be competent in CPR. Therefore, further opportunities for students to develop these skills are necessary. An action research project was conducted with six students who were assessed by an examiner at a video-recorded mock OSCE. Students self-assessed their skills using the video and a checklist. Semi-structured interviews were conducted to compare checklist scores, and explore students' thoughts and experiences of the OSCE. The findings indicate that students may need to repeat this exercise by comparing their previous and current performances to develop both their self-assessment and CPR skills. Although there were some differences between the examiner's and student's checklist scores, all students reported the benefits of participating in this project, e.g. discussion and identification of knowledge and skills deficits, thus emphasising the benefits of formative assessments to prepare students for summative assessments and ultimately clinical practice. (c) 2010 Elsevier Ltd. All rights reserved.
Game-Based Assessment: Investigating the Impact on Test Anxiety and Exam Performance
ERIC Educational Resources Information Center
Mavridis, A.; Tsiatsos, T.
2017-01-01
The aim of this study is to assess the impact of a 3D educational computer game on students' test anxiety and exam performance when used in evaluative situations as compared to the traditional method of examination. The participants of the study were students in tertiary education who were examined using game-based assessment and traditional…
ERIC Educational Resources Information Center
Kerr, Deirdre; Chung, Gregory K. W. K.
2012-01-01
The assessment cycle of "evidence-centered design" (ECD) provides a framework for treating an educational video game or simulation as an assessment. One of the main steps in the assessment cycle of ECD is the identification of the key features of student performance. While this process is relatively simple for multiple choice tests, when…
Science at Age 13. Assessment of Performance Unit. Science Report for Teachers: 3.
ERIC Educational Resources Information Center
Murphy, Patricia; Schofield, Beta
This report presents some of the results of two national surveys which assessed the performance of 13-year-old students in science. It includes an outline of the assessment framework; some of the questions which were written to match it; a description of how well, and how differently, students responded to the questions; and suggests how the…
ERIC Educational Resources Information Center
Ruiz, Jorge G.; Smith, Michael; Rodriguez, Osvaldo; Van Zuilen, Maria H.; Mintzer, Michael J.
2007-01-01
We evaluated the effectiveness of an e-learning tutorial (iPOMA) as a supplement to traditional teaching of the Performance-Oriented Mobility Assessment. Second-year medical students (137) completed the iPOMA, in preparation for a session on fall risk assessment consisting of a lecture, practice with elder volunteers and small group debriefing.…
ERIC Educational Resources Information Center
Bekkink, Marleen Olde; Donders, Rogier; van Muijen, Goos N. P.; Ruiter, Dirk J.
2012-01-01
Until now, positive effects of assessment at a medical curriculum level have not been demonstrated. This study was performed to determine whether an interim assessment, taken during a small group work session of an ongoing biomedical course, results in students' increased performance at the formal course examination. A randomized controlled trial…
Decker, Andrew S; Cipriano, Gabriela C; Tsouri, Gill; Lavigne, Jill E
2016-04-25
Objective. To assess and improve student adherence to hand hygiene indications using radio frequency identification (RFID)-enabled hand hygiene stations and performance report cards. Design. Students volunteered to wear RFID-enabled hospital employee nametags to monitor their adherence to hand hygiene indications. After training in World Health Organization (WHO) hand hygiene methods and indications, students were instructed to treat the classroom as a patient care area. Report cards illustrating individual performance were distributed via e-mail to students at the middle and end of each 5-day observation period. Students were eligible for individual and team prizes consisting of Starbucks gift cards in $5 increments. Assessment. A hand hygiene station with an RFID reader and dispensing sensor recorded the nametag nearest to the station at the time of use. Mean frequency of use per student was 5.41 (range: 2-10). Distance between the student's seat and the dispenser was the only variable significantly associated with adherence. Student satisfaction with the system was assessed by a self-administered survey at the end of the study. Most students reported that the system increased their motivation to perform hand hygiene as indicated. Conclusion. The RFID-enabled hand hygiene system, combined with benchmarking reports and performance incentives, was feasible, reliable, and affordable. Future studies should record video to monitor adherence to the WHO 8-step technique.
Hall, Samuel R; Stephens, Jonny R; Seaby, Eleanor G; Andrade, Matheus Gesteira; Lowry, Andrew F; Parton, Will J C; Smith, Claire F; Border, Scott
2016-10-01
It is important that clinicians are able to adequately assess their level of knowledge and competence in order to be safe practitioners of medicine. The medical literature contains numerous examples of poor self-assessment accuracy amongst medical students over a range of subjects; however, this ability in neuroanatomy has yet to be observed. Second-year medical students attending neuroanatomy revision sessions at the University of Southampton and the competitors of the National Undergraduate Neuroanatomy Competition were asked to rate their level of knowledge in neuroanatomy. The responses from the former group were compared to performance on a ten-item multiple-choice question examination, and the latter group were compared to their performance within the competition. In both cohorts, self-assessments of perceived level of knowledge correlated weakly with performance in the respective objective knowledge assessments (r = 0.30 and r = 0.44). Within the NUNC, this correlation improved when students were instead asked to rate their performance on a specific examination within the competition (spotter, rS = 0.68; MCQ, rS = 0.58). Despite its inherent difficulty, medical student self-assessment accuracy in neuroanatomy is comparable to other subjects within the medical curriculum. Anat Sci Educ 9: 488-495. © 2016 American Association of Anatomists.
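[Editorial note: the spotter and MCQ figures above are Spearman rank correlations (rS), i.e. a Pearson correlation computed on the rank-transformed data. A short sketch with invented self-rating and exam-score data (not the study's) follows; for simplicity the rank function below does not average tied ranks, which the example data avoid.]

```python
import numpy as np

def spearman_r(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman correlation: Pearson correlation of the rank-transformed data."""
    def ranks(a: np.ndarray) -> np.ndarray:
        order = a.argsort()
        r = np.empty(len(a), dtype=float)
        r[order] = np.arange(1, len(a) + 1)   # 1-based ranks; ties not averaged
        return r
    return np.corrcoef(ranks(x), ranks(y))[0, 1]

# Hypothetical self-rated confidence (1-10) and spotter exam scores (%).
confidence = np.array([4.0, 7.0, 5.0, 8.0, 3.0, 6.0, 9.0])
spotter = np.array([48.0, 70.0, 55.0, 81.0, 40.0, 52.0, 88.0])
print(f"Spearman rS = {spearman_r(confidence, spotter):.2f}")
```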
ERIC Educational Resources Information Center
Cassidy-Floyd, Juliet
2017-01-01
Florida, from 1971 to 2014 has used the Florida Comprehensive Assessment Test (FCAT) as a yearly accountability tool throughout the education system in the state (Bureau of K-12 Assessment, 2005). Schools use their own assessments to determine if students are making progress throughout the year. In one school district within Florida, Performance…
Koo, Cathy L; Demps, Elaine L; Farris, Charlotte; Bowman, John D; Panahi, Ladan; Boyle, Paul
2016-03-25
Objective. To determine whether a flipped classroom design would improve student performance and perceptions of the learning experience compared to traditional lecture course design in a required pharmacotherapy course for second-year pharmacy students. Design. Students viewed short online videos about the foundational concepts and answered self-assessment questions prior to face-to-face sessions involving patient case discussions. Assessment. Pretest/posttest and precourse/postcourse surveys evaluated students' short-term knowledge retention and perceptions before and after the redesigned course. The final grades improved after the redesign. Mean scores on the posttest improved from the pretest. Postcourse survey showed 88% of students were satisfied with the redesign. Students reported that they appreciated the flexibility of video viewing and knowledge application during case discussions but some also struggled with time requirements of the course. Conclusion. The redesigned course improved student test performance and perceptions of the learning experience during the first year of implementation.
Alrakaf, Saleh; Anderson, Claire; Coulman, Sion A; John, Dai N; Tordoff, June; Sainsbury, Erica; Rose, Grenville; Smith, Lorraine
2015-04-25
To identify pharmacy students' preferred achievement goals in a multi-national undergraduate population, to investigate achievement goal preferences across comparable degree programs, and to identify relationships between achievement goals, academic performance, and assessment type. The Achievement Goal Questionnaire was administered to second year students in 4 universities in Australia, New Zealand, England, and Wales. Academic performance was measured using total scores, multiple-choice questions, and written answers (short essay). Four hundred eighty-six second year students participated. Students showed an overall preference for the mastery-approach goal orientation across all sites. The predicted relationships between goal orientation and multiple-choice questions, and written answers scores, were significant. This study is the first of its kind to examine pharmacy students' achievement goals at a multi-national level and to differentiate between assessment type and measures of achievement motivation. Students adopting a mastery-approach goal are more likely to gain high scores in assessments that measure understanding and depth of knowledge.
O'Brien, Celia Laird; Thomas, John X; Green, Marianne M
2018-01-01
Medical educators struggle to find effective ways to assess essential competencies such as communication, professionalism, and teamwork. Portfolio-based assessment provides one method of addressing this problem by allowing faculty reviewers to judge performance based on a longitudinal record of student behavior. At the Feinberg School of Medicine, the portfolio system measures behavioral competence using multiple assessments collected over time. This study examines whether a preclerkship portfolio review is a valid method of identifying problematic student behavior that affects later performance in clerkships. The authors divided students into two groups based on a summative preclerkship portfolio review in 2014: students who had concerning behavior in one or more competencies and students progressing satisfactorily. They compared how students in these groups later performed on two clerkship outcomes as of October 2015: final grades in required clerkships, and performance on a clerkship clinical composite score. They used Mann-Whitney tests and multiple linear regression to examine the relationship between portfolio review results and clerkship outcomes. They used USMLE Step 1 scores to control for knowledge acquisition. Students with concerning behavior preclerkship received significantly lower clerkship grades than students progressing satisfactorily (P = .002). They also scored significantly lower on the clinical composite score (P < .001). Regression analysis indicated concerning behavior was associated with lower clinical composite scores, even after controlling for knowledge acquisition. The results show a preclerkship portfolio review can identify behaviors that impact clerkship performance. A comprehensive portfolio system is a valid way to measure behavioral competencies.
Mandatory coursework assignments can be, and should be, eliminated!
NASA Astrophysics Data System (ADS)
Haugan, John; Lysebo, Marius; Lauvas, Per
2017-11-01
Formative assessment can serve as a catalyst for increased student effort and student learning. Yet, many engineering degree programmes are dominated by summative assessment and make limited use of formative assessment. The present case study serves as an example of how formative assessment can be used strategically to increase student effort and improve student learning. Within five courses of an engineering bachelor degree programme in Norway, the mandatory coursework assignments were removed and replaced by formative-only assessment. To facilitate the formative assessment, weekly student peer-assessment sessions were introduced. The main findings include an increase in student study hours and improved student performance on the examinations. Finally, interviews were conducted by an external consultant in an effort to identify the key factors that contributed to the positive outcome.
ERIC Educational Resources Information Center
Dion, G. S.; Kuang, M.; Dresher, A. R.
2008-01-01
In 2007, public school students in Puerto Rico at grades 4 and 8 participated in a Spanish-language version of the National Assessment of Educational Progress (NAEP) in mathematics. A representative sample of approximately 2,800 students from 100 public schools was assessed at each grade. This report contains performance results on NAEP…
Palermo, C; Volders, E; Gibson, S; Kennedy, M; Wray, A; Thomas, J; Hannan-Jones, M; Gallegos, D; Beck, E
2018-02-01
Assessment presents one of the greatest challenges to evaluating health professional trainee performance, as a result of the subjectivity of judgements and variability in assessor standards. The present study aimed to test a moderation procedure for assessment across four independent universities and to explore approaches to assessment and the factors that influence assessment decisions. Assessment tasks designed independently by each of the four universities to assess student readiness for placement were chosen for the present study. Each university provided four student performance recordings for moderation. Eight different academic assessors viewed the student performances and assessed them using the corresponding university assessment instrument. Assessment results were collated and presented back to the assessors, together with the original university assessment results. Results were discussed with assessors to explore variations. The discussion was recorded, transcribed, thematically analysed and presented back to all assessors to achieve consensus on the major emerging findings. Although there were differences in absolute scores, there was consistency (12 out of 16 performances) in overall judgement decisions regarding placement readiness. Proficient communication skills were considered a key factor when determining placement readiness. The discussion revealed: (i) assessment instruments; (ii) assessor factors; and (iii) the subjectivity of judgement as the major factors influencing assessment. Assessment moderation is a useful method for improving the quality of assessment decisions by sharing understanding and aligning standards of performance. © 2017 The British Dietetic Association Ltd.
ERIC Educational Resources Information Center
Smaby, Marlowe H.; Maddux, Cleborne; Packman, Jill; Lepkowski, William J.; Richmond, Aaron S.; LeBeauf, Ireon
2005-01-01
Educators of professionals often want to determine the quality of student performance and its impact on those they serve. Performance assessment in the education of teachers, medical practitioners, and counselors, for example, currently receives much discussion and debate in the literature (Vaugh & Everhart, 2004; Howley, 2003; and, Urbani et al.,…
ERIC Educational Resources Information Center
ACPA College Student Educators International, 2011
2011-01-01
The Assessment Skills and Knowledge (ASK) standards seek to articulate the areas of content knowledge, skill and dispositions that student affairs professionals need in order to perform as practitioner-scholars to assess the degree to which students are mastering the learning and development outcomes the professionals intend. Consistent with…
Investigating ESL Students' Performance on Outcomes Assessments in Higher Education
ERIC Educational Resources Information Center
Lakin, Joni M.; Elliott, Diane Cardenas; Liu, Ou Lydia
2012-01-01
Outcomes assessments are gaining great attention in higher education because of increased demand for accountability. These assessments are widely used by U.S. higher education institutions to measure students' college-level knowledge and skills, including students who speak English as a second language (ESL). For the past decade, the increasing…
Standardized Patients Provide a Reliable Assessment of Athletic Training Students' Clinical Skills
ERIC Educational Resources Information Center
Armstrong, Kirk J.; Jarriel, Amanda J.
2016-01-01
Context: Providing students reliable objective feedback regarding their clinical performance is of great value for ongoing clinical skill assessment. Since a standardized patient (SP) is trained to consistently portray the case, students can be assessed and receive immediate feedback within the same clinical encounter; however, no research, to our…
Research-Based Assessment of Students' Beliefs about Experimental Physics: When Is Gender a Factor?
ERIC Educational Resources Information Center
Wilcox, Bethany R.; Lewandowski, H. J.
2016-01-01
The existence of gender differences in student performance on conceptual assessments and their responses to attitudinal assessments has been repeatedly demonstrated. This difference is often present in students' preinstruction responses and persists in their postinstruction responses. However, one area in which the presence of gender differences…
A Comparison of Self versus Tutor Assessment among Hungarian Undergraduate Business Students
ERIC Educational Resources Information Center
Kun, András István
2016-01-01
This study analyses the self-assessment behaviour and efficiency of 163 undergraduate business students from Hungary. Using various statistical methods, the results support the hypothesis that high-achieving students are more accurate in their pre- and post-examination self-assessments, and also less likely to overestimate their performance, and,…
On-Line vs. Face-to-Face Delivery of Information Technology Courses: Students' Assessment
ERIC Educational Resources Information Center
Said, Hazem; Kirgis, Lauren; Verkamp, Brian; Johnson, Lawrence
2015-01-01
This paper investigates students' assessment of on-line vs face-to-face delivery of lecture-based information technology courses. The study used end-of-course surveys to examine students' ratings of five course quality indicators: Course Organization, Assessment and Grading Procedures, Instructor Performance, Positive Learning Experience, and…
Assessing Students in Human-to-Agent Settings to Inform Collaborative Problem-Solving Learning
ERIC Educational Resources Information Center
Rosen, Yigal
2017-01-01
In order to understand potential applications of collaborative problem-solving (CPS) assessment tasks, it is necessary to examine empirically the multifaceted student performance that may be distributed across collaboration methods and purposes of the assessment. Ideally, each student should be matched with various types of group members and must…
Forming Student Online Teams for Maximum Performance
ERIC Educational Resources Information Center
Olson, Joel D.; Ringhand, Darlene G.; Kalinski, Ray C.; Ziegler, James G.
2015-01-01
What is the best way to assign graduate business students to online team-based projects? Team assignments are frequently made on the basis of alphabet, time zones or previous performance. This study reviews personality as an indicator of student online team performance. The personality assessment IDE (Insights Discovery Evaluator) was administered…
ERIC Educational Resources Information Center
Bogo, Marion; Lee, Barbara; McKee, Eileen; Ramjattan, Roxanne; Baird, Stephanie L.
2017-01-01
To strengthen students' preparation for engaging in field learning, an innovation was implemented to teach and assess foundation-year students' performance prior to entering field education. An Objective Structured Clinical Examination informed the final evaluation of students' performance in two companion courses on practice theory and skills.…
NASA Astrophysics Data System (ADS)
Serevina, V.; Muliyati, D.
2018-05-01
This research aims to develop a valid and reliable student performance assessment instrument, based on a scientific approach, for assessing student performance in a basic physics laboratory on Simple Harmonic Motion (SHM). The study uses the ADDIE model, which consists of the stages Analysis, Design, Development, Implementation, and Evaluation. The student performance assessment developed can be used to measure students' skills in observing, asking, conducting experiments, associating, and communicating experimental results, which are the '5M' stages of the scientific approach. Each assessment item in the instrument was validated by an instrument expert, and all items were judged eligible for use (100% eligibility). The instrument was then rated by a panel of lecturers on the quality of its construction (85%, very good), material (87.5%, very good), and language (83%, very good). A small-group trial yielded an instrument reliability of 0.878, in the high category, against a critical r-table value of 0.707. A large-group trial yielded an instrument reliability of 0.889, also in the high category, against a critical r-table value of 0.320. The instrument was declared valid and reliable at the 5% significance level. Based on these results, it can be concluded that the student performance assessment instrument based on the scientific approach is valid and reliable for assessing student skills in SHM experimental activities.
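The reliability values reported above (0.878 and 0.889, compared against r-table values of 0.707 and 0.320) are consistent with an internal-consistency analysis of the instrument's scores. The sketch below shows one common approach, Cronbach's alpha computed from a students-by-items score matrix; the matrix and the choice of alpha are assumptions for illustration, not necessarily the authors' exact procedure.

```python
# Minimal sketch: internal-consistency reliability (Cronbach's alpha) for a
# performance-assessment score matrix. Rows = students, columns = assessment items.
# The matrix below is hypothetical; the study's actual data are not reproduced here.
import numpy as np

scores = np.array([
    [3, 4, 4, 3, 5],
    [2, 3, 3, 2, 4],
    [4, 4, 5, 4, 5],
    [1, 2, 2, 1, 3],
    [3, 3, 4, 3, 4],
    [4, 5, 5, 4, 5],
], dtype=float)

k = scores.shape[1]                          # number of items
item_var = scores.var(axis=0, ddof=1)        # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)

critical_r = 0.707  # critical value quoted for the small-group trial
print(f"Cronbach's alpha = {alpha:.3f}, exceeds critical r: {alpha > critical_r}")
```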
Assessment of the Effectiveness of an Online Learning System in Improving Student Test Performance
ERIC Educational Resources Information Center
Buttner, E. Holly; Black, Aprille Noe
2014-01-01
Colleges and universities, particularly public institutions, are facing higher enrollments and declining resources from state and federal governments. In this resource-constrained environment, faculty are seeking more efficient and effective teaching strategies to improve student learning and test performance. The authors assessed an online…
Science Rocks! A Performance Assessment for Earth Science
ERIC Educational Resources Information Center
Waters, Melia; Straits, William
2008-01-01
This article presents an activity in which students pool their knowledge and creativity to make a song presenting what they have learned in a unit on the rock cycle. This highly motivating, integrated performance assessment incorporates multiple intelligences, reinforces learning, and is a student favorite in the elementary and middle grades.
The Consequences of Using One Assessment System to Pursue Two Objectives
ERIC Educational Resources Information Center
Neal, Derek
2013-01-01
Education officials often use one assessment system both to create measures of student achievement and to create performance metrics for educators. However, modern standardized testing systems are not designed to produce performance metrics for teachers or principals. They are designed to produce reliable measures of individual student achievement…
Albert, Dara V; Brorson, James R; Amidei, Christina; Lukas, Rimas V
2014-04-22
Using outpatient neurology clinic case logs completed by medical students on neurology clerkships, we examined the impact of outpatient clinical encounter volume per student on outcomes of knowledge assessed by the National Board of Medical Examiners (NBME) Clinical Neurology Subject Examination and clinical skills assessed by the Objective Structured Clinical Examination (OSCE). Data from 394 medical students from July 2008 to June 2012, representing 9,791 patient encounters, were analyzed retrospectively. Pearson correlations were calculated examining the relationship between numbers of cases logged per student and performance on the NBME examination. Similarly, correlations between cases logged and performance on the OSCE, as well as on components of the OSCE (history, physical examination, clinical formulation), were evaluated. There was a correlation between the total number of cases logged per student and NBME examination scores (r = 0.142; p = 0.005) and OSCE scores (r = 0.136; p = 0.007). Total number of cases correlated with the clinical formulation component of the OSCE (r = 0.172; p = 0.001) but not the performance on history or physical examination components. The volume of cases logged by individual students in the outpatient clinic correlates with performance on measures of knowledge and clinical skill. In measurement of clinical skill, seeing a greater volume of patients in the outpatient clinic is related to improved clinical formulation on the OSCE. These findings may affect methods employed in assessment of medical students, residents, and fellows.
NASA Astrophysics Data System (ADS)
Feltham, Nicola F.; Downs, Colleen T.
2002-02-01
The Science Foundation Programme (SFP) was launched in 1991 at the University of Natal, Pietermaritzburg, South Africa in an attempt to equip a selected number of matriculants from historically disadvantaged schools with the skills, resources and self-confidence needed to embark on their tertiary studies. Previous research within the SFP biology component suggests that a major contributor to poor achievement and low retention rates among English second language (ESL) students in the Life Sciences is inadequate background knowledge in natural history. In this study, SFP student background knowledge was assessed along a continuum of language dependency using a set of three probes. Improvement in student performance on each of the respective assessments was used to examine the extent to which a sound natural history background facilitated meaningful learning relative to ESL proficiency. Student profiles and attitudes to biology were also examined. Results indicated that students did not perceive language to be a problem in biology. However, analysis of student performance in the assessment probes indicated that, although the marine course provided the students with the background knowledge that they were initially lacking, they continued to perform better in the drawing and MCQ tools in the post-tests, suggesting that it is their inability to express themselves in the written form that hampers their development. These results have implications for curriculum development within the constructivist framework of the SFP.
Clay, Alison S; Ming, David Y; Knudsen, Nancy W; Engle, Deborah L; Grochowski, Colleen O'Connor; Andolsek, Kathryn M; Chudgar, Saumil M
2017-03-01
Despite the importance of self-directed learning (SDL) in the field of medicine, individuals are rarely taught how to perform SDL or receive feedback on it. Trainee skill in SDL is limited by difficulties with self-assessment and goal setting. Ninety-two graduating fourth-year medical students from Duke University School of Medicine completed an individualized learning plan (ILP) for a transition-to-residency Capstone course in spring 2015 to help foster their skills in SDL. Students completed the ILP after receiving a personalized report from a designated faculty coach detailing strengths and weaknesses on specific topics (e.g., pulmonary medicine) and clinical skills (e.g., generating a differential diagnosis). These were determined by their performance on 12 Capstone Problem Sets of the Week (CaPOWs) compared with their peers. Students used transitional-year milestones to self-assess their confidence in SDL. SDL was successfully implemented in a Capstone course through the development of required clinically oriented problem sets. Coaches provided guided feedback on students' performance to help them identify knowledge deficits. Students' self-assessment of their confidence in SDL increased following course completion. However, students often chose Capstone didactic sessions according to factors other than their CaPOW performance, including perceived relevance to planned specialty and session timing. Future Capstone curriculum changes may further enhance SDL skills of graduating students. Students will receive increased formative feedback on their CaPOW performance and be incentivized to attend sessions in areas of personal weakness.
NASA Astrophysics Data System (ADS)
Lawrence, Michael John
1997-12-01
The problem of science illiteracy has been well documented. The development of critical thinking skills in science education is often sacrificed in favor of content coverage. Opportunities for critical thinking within a context of science have been recommended to promote science literacy (AAAS, 1993). One means of doing this is to have students make and explain predictions involving physical phenomena, observe feedback, and then revise the prediction. A videotaped assessment using this process served as the focus for this study. High school physics students were asked to predict and explain what would happen in situations involving optics. They were then given different feedback treatments. The purpose of this study was to: (a) examine the effect of providing feedback on the quality of responses in making both revisions and subsequent predictions, and (b) examine the relationship between content knowledge and qualitative performance. Sixty-four high-ability students were separated into three treatment groups: no feedback (NF), visual feedback (F), and teacher-explained feedback (TE). These students responded to six items on the Optics Videotape Assessment and ten optics multiple-choice items from the National Physics Exam (NPE). Their teachers had previously attended a professional development institute that emphasized the practice and philosophy of assessments like the Optics Assessment. The assessment responses were categorized by two raters who used a taxonomy that ranged from simple descriptions to complete explanations. NPE performance was compared using one-way ANOVA, Optics Assessment performance was compared using a chi-square test of homogeneity, and a point-biserial correlation was done to compare qualitative and quantitative performance. The study found that students were unable to use feedback to make a significant change in the quality of their responses, whether revision or subsequent prediction. There was no correlation between content knowledge and qualitative performance. It was concluded that for students to succeed on an assessment of this type, their classroom teachers must be given the time to implement the appropriate instruction. Instruction and assessment of this nature are crucial to the development of science literacy.
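The three analyses named above (a one-way ANOVA on NPE scores across the NF, F and TE groups, a chi-square test of homogeneity on categorized Optics Assessment responses, and a point-biserial correlation between quantitative and qualitative performance) map onto standard statistical routines. A minimal sketch follows; all group labels and values are invented for illustration and do not reproduce the study's data.

```python
# Minimal sketch of the three analyses named in the abstract, on hypothetical data.
import numpy as np
from scipy.stats import f_oneway, chi2_contingency, pointbiserialr

# One-way ANOVA: NPE multiple-choice scores for the NF, F, and TE groups.
nf = [4, 5, 6, 5, 7]
f_grp = [5, 6, 6, 7, 8]
te = [6, 7, 7, 8, 9]
F, p_anova = f_oneway(nf, f_grp, te)

# Chi-square test of homogeneity: counts of response-quality categories per group.
# Rows = groups (NF, F, TE), columns = response-quality categories.
counts = np.array([[10, 8, 2],
                   [ 9, 7, 4],
                   [ 7, 8, 5]])
chi2, p_chi2, dof, _ = chi2_contingency(counts)

# Point-biserial correlation: dichotomous item correctness vs. qualitative score.
correct = [0, 1, 1, 0, 1, 1, 0, 1]
quality = [1, 3, 4, 2, 3, 4, 1, 3]
r_pb, p_pb = pointbiserialr(correct, quality)

print(f"ANOVA: F = {F:.2f}, p = {p_anova:.3f}")
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi2:.3f}")
print(f"Point-biserial: r = {r_pb:.2f}, p = {p_pb:.3f}")
```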
Minimal supervision out-patient clinical teaching.
Figueiró-Filho, Ernesto Antonio; Amaral, Eliana; McKinley, Danette; Bezuidenhout, Juanita; Tekian, Ara
2014-08-01
Minimal faculty member supervision of students refers to a method of instruction in which the patient-student encounter is not directly supervised by a faculty member, and presents a feasible solution in clinical teaching. It is unclear, however, how such practices are perceived by patients and how they affect student learning. We aimed to assess patient and medical student perceptions of clinical teaching with minimal faculty member supervision. Questionnaires focusing on the perception of students' performance were administered to patients pre- and post-consultation. Students' self-perceptions on their performance were obtained using a questionnaire at the end of the consultation. Before encounters with students, 22 per cent of the 95 patients were not sure if they would feel comfortable or trust the students; after the consultation, almost all felt comfortable (97%) and relied on the students (99%). The 81 students surveyed agreed that instruction with minimal faculty member supervision encouraged their participation and engagement (86%). They expressed interest in knowing patients' opinions about their performance (94%), and they felt comfortable about being assessed by the patients (86%). The minimal faculty member supervision model was well accepted by patients. Responses from the final-year students support the use of assessments that incorporate feedback from patients in their overall clinical evaluations. © 2014 John Wiley & Sons Ltd.
Vanderlelie, Jessica J; Alexander, Heather G
2016-07-08
Assessment plays a critical role in learning and teaching and its power to enhance engagement and student outcomes is still underestimated in tertiary education. The current project considers the impact of a staged redesign of an assessment strategy that emphasized relevance of learning, formative assessment, student engagement, and feedback on student performance, failure rates and overall engagement in the course. Significant improvements in final grades (p < 0.0001) and written performance (p < 0.0001) in the final examination were noted that coincided with increased lecture attendance and overall engagement in the course. This study reinforces the importance of an integrated approach to assessment that includes well-developed formative tasks and a continuous summative assessment strategy. © 2016 The International Union of Biochemistry and Molecular Biology, 44(4):412-420, 2016.
Assessing the Math Performance of Young ESL Students.
ERIC Educational Resources Information Center
Lee, Fong Yun; Silverman, Fredrick L.; Montoya, Patricia
2002-01-01
Describes proven assessment strategies, which, used separately or in combination, can help young ESL students express their understanding of math concepts while building their English-language skills: Manipulative objects, diagrams, and physical movement. Also describes other assessment techniques including self-assessment, interviewing, and…
Development of Malayalam Handwriting Scale for School Students in Kerala
ERIC Educational Resources Information Center
Gafoor, K. Abdul; Naseer, A. R.
2015-01-01
With a view to support instruction, formative and summative assessment and to provide model handwriting performance for students to compare their own performance, a Malayalam handwriting scale is developed. Data from 2640 school students belonging to Malappuram, Palakkad and Kozhikode districts, sampled by taking 240 students per each grade…
The Effects of Humor on Test Anxiety and Test Performance
ERIC Educational Resources Information Center
Tali, Glenda
2017-01-01
Testing in an academic setting provokes anxiety in all students in higher education, particularly nursing students. When students experience high levels of anxiety, the resulting decline in test performance often does not represent an accurate assessment of students' academic achievement. This quantitative, experimental study examined the effects…
Can We Succeed in Teaching Business Students to Write Effectively?
ERIC Educational Resources Information Center
Pittenger, Khushwant K. S.; Miller, Mary C.; Allison, Jesse
2006-01-01
This article presents the results of a study where business students' writing skills were assessed using an external objective measure in a business communication course. The student performance was disappointing before instructor intervention. After the intervention, student performance improved noticeably. The implications of the study are…
The Rasch Model for Evaluating Italian Student Performance
ERIC Educational Resources Information Center
Camminatiello, Ida; Gallo, Michele; Menini, Tullio
2010-01-01
In 1997 the Organisation for Economic Co-operation and Development (OECD) launched the OECD Programme for International Student Assessment (PISA) for collecting information about 15-year-old students in participating countries. Our study analyses the PISA 2006 cognitive test for evaluating the Italian student performance in mathematics, reading…
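The Rasch model used in such PISA analyses expresses the probability of a correct response as a logistic function of the difference between a student's ability and an item's difficulty. The sketch below shows only that item response function with illustrative parameter values; it is not the estimation procedure applied to the PISA 2006 data.

```python
# Minimal sketch of the Rasch (one-parameter logistic) item response function.
# theta = student ability, b = item difficulty; values below are illustrative.
import numpy as np

def rasch_probability(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

abilities = np.array([-1.0, 0.0, 1.0, 2.0])  # hypothetical student abilities
difficulty = 0.5                             # hypothetical item difficulty

for theta in abilities:
    print(f"theta = {theta:+.1f} -> P(correct) = {rasch_probability(theta, difficulty):.2f}")
```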
Turkish Students' Science Performance and Related Factors in PISA 2006 and 2009
ERIC Educational Resources Information Center
Topçu, Mustafa Sami; Arikan, Serkan; Erbilgin, Evrim
2015-01-01
The OECD's Programme for International Student Assessment (PISA) enables participating countries to monitor 15-year old students' progress in reading, mathematics, and science literacy. The present study investigates persistent factors that contribute to science performance of Turkish students in PISA 2006 and PISA 2009. Additionally, the study…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-13
...: alternative measures of student learning and performance, such as student scores on pre-tests and end-of-course tests; student performance on English language proficiency assessments; and other measures of...
Pierce, Richard; Fox, Jeremy
2012-12-12
To implement a "flipped classroom" model for a renal pharmacotherapy topic module and assess the impact on pharmacy students' performance and attitudes. Students viewed vodcasts (video podcasts) of lectures prior to the scheduled class and then discussed interactive cases of patients with end-stage renal disease in class. A process-oriented guided inquiry learning (POGIL) activity was developed and implemented that complemented, summarized, and allowed for application of the material contained in the previously viewed lectures. Students' performance on the final examination significantly improved compared to performance of students the previous year who completed the same module in a traditional classroom setting. Students' opinions of the POGIL activity and the flipped classroom instructional model were mostly positive. Implementing a flipped classroom model to teach a renal pharmacotherapy module resulted in improved student performance and favorable student perceptions about the instructional approach. Some of the factors that may have contributed to students' improved scores included: student mediated contact with the course material prior to classes, benchmark and formative assessments administered during the module, and the interactive class activities.
Daud-Gallotti, Renata Mahfuz; Morinaga, Christian Valle; Arlindo-Rodrigues, Marcelo; Velasco, Irineu Tadeu; Arruda Martins, Milton; Tiberio, Iolanda Calvo
2011-01-01
INTRODUCTION: Patient safety is seldom assessed using objective evaluations during undergraduate medical education. OBJECTIVE: To evaluate the performance of fifth-year medical students using an objective structured clinical examination focused on patient safety after implementation of an interactive program based on adverse events recognition and disclosure. METHODS: In 2007, a patient safety program was implemented in the internal medicine clerkship of our hospital. The program focused on human error theory, epidemiology of incidents, adverse events, and disclosure. Upon completion of the program, students completed an objective structured clinical examination with five stations and standardized patients. One station focused on patient safety issues, including medical error recognition/disclosure, the patient-physician relationship and humanism issues. A standardized checklist was completed by each standardized patient to assess the performance of each student. The student's global performance at each station and performance in the domains of medical error, the patient-physician relationship and humanism were determined. The correlations between the student performances in these three domains were calculated. RESULTS: A total of 95 students participated in the objective structured clinical examination. The mean global score at the patient safety station was 87.59±1.24 points. Students' performance in the medical error domain was significantly lower than their performance on patient-physician relationship and humanistic issues. Less than 60% of students (n = 54) offered the simulated patient an apology after a medical error occurred. A significant correlation was found between scores obtained in the medical error domains and scores related to both the patient-physician relationship and humanistic domains. CONCLUSIONS: An objective structured clinical examination is a useful tool to evaluate patient safety competencies during the medical student clerkship. PMID:21876976
Clouten, Norene; Homma, Midori; Shimada, Rie
2006-01-01
Clinical education is an integral part of preparation for the profession of physical therapy and the role of the clinical instructor is critical. The purpose of this study was to investigate clinical instructors' expectations of student physical therapists with different ethnic backgrounds and the clinical performance of the students as assessed using a modification of the Generic Abilities Assessment. For this study, individuals with a Caucasian ethnic background who were raised in the United States were considered as the majority. The remaining individuals (minority) were subdivided into five groups: African American, Hispanic, Asian/Pacific Islander, Caucasian from outside the United States, and Other. Clinical instructors reported their experiences with students from different ethnic backgrounds, their expectation of students' performance, and recollections of specific weaknesses in performance. From the 216 surveys distributed, 192 clinical instructors responded. Fifty-seven percent had supervised a minority student, with a mean of three students each. While 4% reported that they expected a higher standard from majority students, 17% noted a difference in performance between majority and minority students. Results from this study suggest that minority students would benefit from further preparation in communication and interpersonal skills but they are stronger than majority students in stress management and the effective use of time and resources.
Villanueva, Idalis; Valladares, Maria; Goodridge, Wade
2016-01-01
Typically, self-reports are used in educational research to assess student response and performance in a classroom activity. Yet biological and physiological measures, such as salivary biomarkers and galvanic skin responses, are rarely included, limiting the wealth of information that can be obtained to better understand student performance. A laboratory protocol to study undergraduate students' responses to classroom events (e.g., exams) is presented. Participants were asked to complete a representative exam for their degree. Before and after the laboratory exam session, students completed an academic achievement emotions self-report and an interview that paralleled these questions, while wearing a galvanic skin response sensor and providing saliva samples for biomarker analysis. Data collected from the three methods provided greater depth of information about students' performance than the self-report alone. The work can expand educational research capabilities through more comprehensive methods for obtaining near real-time student responses to an examination activity. PMID:26891278
Al-Kandari, Fatimah; Vidal, Victoria L
2007-06-01
This descriptive study of 224 nursing students assessed their health-promoting lifestyle profile and correlated it with the levels of enrollment in nursing courses and academic performance. The health-promoting lifestyle profile was measured by Walker's Health-promoting Lifestyle Profile II instrument. Academic performance was measured by assessing the nursing grade point average and general grade point average of the students. The students had positive health-promoting lifestyles with significant differences noted between males and females in the overall profile, physical activity, interpersonal relations, and stress management. Sociodemographic variables, such as age, nationality, and marital status, but not income, showed an association with students' health-promoting lifestyles. A significant correlation was noted between students' nursing enrollment and level of health responsibility. No significant correlation was established between a health-promoting lifestyle and academic performance. This study poses a challenge for nurse educators to provide an effective environment to maximize students' potential to be future vanguards of health.
NASA Astrophysics Data System (ADS)
Howard, Steven J.; Burianová, Hana; Calleia, Alysha; Fynes-Clinton, Samuel; Kervin, Lisa; Bokosmaty, Sahar
2017-08-01
Standardised educational assessments are now widespread, yet their development has given comparatively more consideration to what to assess than how to optimally assess students' competencies. Existing evidence from behavioural studies with children and neuroscience studies with adults suggest that the method of assessment may affect neural processing and performance, but current evidence remains limited. To investigate the impact of assessment methods on neural processing and performance in young children, we used functional magnetic resonance imaging to identify and quantify the neural correlates during performance across a range of current approaches to standardised spelling assessment. Results indicated that children's test performance declined as the cognitive load of assessment method increased. Activation of neural nodes associated with working memory further suggests that this performance decline may be a consequence of a higher cognitive load, rather than the complexity of the content. These findings provide insights into principles of assessment (re)design, to ensure assessment results are an accurate reflection of students' true levels of competency.
Barriers of physical assessment skills among nursing students in Arab Peninsula
Alamri, Majed Sulaiman; Almazan, Joseph U.
2018-01-01
Objective: There is a growing demand for health-care nursing services in several health care institutions. Understanding the barriers to physical assessment among nursing students helps inform the development of quality patient care in nursing practice. This study examined the barriers to physical assessment skills among nursing students in a government university in the Arab Peninsula. Methods: A cross-sectional survey design was used with 206 nursing students and a standardized questionnaire. The questionnaire comprises 7 subscales evaluating the barriers to physical assessment skills in the classroom and in the clinical setting. An independent-samples t-test was used to compare male and female nursing students' mean scores on the barriers to physical assessment. A paired t-test was used to determine differences between perceived barriers to physical assessment in the classroom and in the clinical setting. Results: The subscales 'reliance on others and technology,' 'ward culture,' and 'lack of influence on patient care' showed significant differences between perceived barriers to physical assessment in the classroom setting and in the clinical setting. Conclusion: Although nursing students were oriented and educated about physical assessment in the nursing curriculum, it is not often practiced in clinical settings. The point is that if nursing students perform patient assessment incorrectly, no amount of critical thinking can lead to better clinical decisions. Continuous exposure, together with better planning and promotion, could help nursing students develop the necessary skills. In addition, increasing self-confidence is vital for assessing patients' health status effectively and minimizing the barriers to performing physical assessment. PMID:29896073
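The two comparisons described above (an independent-samples t-test for gender differences and a paired t-test contrasting classroom and clinical-setting barrier scores) correspond to standard two-sample and paired procedures. A minimal sketch follows; the subscale scores are invented for illustration and are not the study's data.

```python
# Minimal sketch of the two tests named in the abstract, on hypothetical scores.
from scipy.stats import ttest_ind, ttest_rel

# Independent-samples t-test: barrier scores for male vs. female students.
male_scores = [3.2, 3.5, 2.9, 3.8, 3.1, 3.4]
female_scores = [3.6, 3.9, 3.3, 4.0, 3.7, 3.5]
t_ind, p_ind = ttest_ind(male_scores, female_scores)

# Paired t-test: the same students' perceived barriers in classroom vs. clinical setting.
classroom = [2.8, 3.1, 3.4, 2.9, 3.6, 3.2]
clinical = [3.5, 3.7, 3.9, 3.4, 4.1, 3.8]
t_rel, p_rel = ttest_rel(classroom, clinical)

print(f"Independent-samples: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"Paired: t = {t_rel:.2f}, p = {p_rel:.3f}")
```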
ERIC Educational Resources Information Center
King, M. Bruce; Schroeder, Jennifer; Chawszczewski, David
This research brief explores the extent to which teacher-designed assessments are authentic in inclusive secondary schools and how students with and without disabilities perform on these assessments. Data come from three high schools that are participating in a 5-year national study conducted by the Research Institute on Secondary Education Reform…
Pharmacy student absenteeism and academic performance.
Hidayat, Levita; Vansal, Sandeep; Kim, Esther; Sullivan, Maureen; Salbu, Rebecca
2012-02-10
To assess the association of pharmacy students' personal characteristics with absenteeism and academic performance. A survey instrument was distributed to first- (P1) and second-year (P2) pharmacy students to gather characteristics including employment status, travel time to school, and primary source of educational funding. In addition, absences from specific courses and reasons for not attending classes were assessed. Participants were divided into "high" and "low" performers based on grade point average. One hundred sixty survey instruments were completed and 135 (84.3%) were included in the study analysis. Low performers were significantly more likely than high performers to have missed more than 8 hours in therapeutics courses. Low performers were significantly more likely than high performers to miss class when the class was held before or after an examination and low performers were significantly more likely to believe that participating in class did not benefit them. There was a negative association between the number of hours students' missed and their performance in specific courses. These findings provide further insight into the reasons for students' absenteeism in a college or school of pharmacy setting.
ERIC Educational Resources Information Center
Lindstrom, Jennifer H.
2010-01-01
The inclusion of students with learning disabilities (LD) in assessment is deemed critical to improve the quality of educational opportunities for these students and to provide meaningful and useful information about student performance. Mandated inclusion and accountability for progress raise many interesting questions regarding how to fairly,…
Stakes Matter: Student Motivation and the Validity of Student Assessments for Teacher Evaluation
ERIC Educational Resources Information Center
Rutkowski, David; Wild, Justin
2015-01-01
In 2011, Indiana lawmakers established a system to evaluate teachers using existing standardized assessments as an indicator of student learning. In this study we examined one component of Indiana's evaluation system to determine whether student knowledge of the test's consequences is predictive of test performance. Using an experimental design,…
ERIC Educational Resources Information Center
Hartley, Michael T.
2012-01-01
This study examined the assessment of resilience in undergraduate college students. Multigroup comparisons of the Connor-Davidson Resilience Scale (CD-RISC; Connor & Davidson, 2003) were performed on general population students and students recruited from campus mental health offices offering college counseling, psychiatric-support, and…
Using contrasting cases to improve self-assessment in physics learning
NASA Astrophysics Data System (ADS)
Jax, Jared Michael
Accurate self-assessment (SA) is widely regarded as a valuable tool for conducting scientific work, although there is growing concern that students have difficulty accurately assessing their own learning. For students, the challenge of accurately self-assessing their work prevents them from effectively critiquing their own knowledge and skills, and from making corrections when necessary to improve their performance. An overwhelming majority of researchers have acknowledged the importance of developing and practicing the reflective skills needed for SA in science, yet SA is rarely a focus of daily instruction, and students typically overestimate their abilities. In an effort to provide a pragmatic approach to overcoming these deficiencies, this study demonstrates the effect of using positive and negative examples of solutions (contrasting cases) on performance and accuracy of SA, compared to students who are shown only positive examples of solutions. The work described here sought, first, to establish the areas of flawed SA that introductory high school physics students experience when studying circuitry, and, second, to examine how giving students content knowledge, in addition to positive and negative examples focused on helping them self-assess, might help overcome these deficiencies. In doing so, this work highlights the positive impact that these types of support have in significantly increasing student performance, SA accuracy, and the ability to evaluate solutions in physics education.
Decker, Andrew S.; Cipriano, Gabriela C.; Tsouri, Gill
2016-01-01
Objective. To assess and improve student adherence to hand hygiene indications using radio frequency identification (RFID) enabled hand hygiene stations and performance report cards. Design. Students volunteered to wear RFID-enabled hospital employee nametags to monitor their adherence to hand-hygiene indications. After training in World Health Organization (WHO) hand hygiene methods and indications, students were instructed to treat the classroom as a patient care area. Report cards illustrating individual performance were distributed via e-mail to students at the middle and end of each 5-day observation period. Students were eligible for individual and team prizes consisting of Starbucks gift cards in $5 increments. Assessment. A hand hygiene station with an RFID reader and dispensing sensor recorded the nametag nearest to the station at the time of use. Mean frequency of use per student was 5.41 (range: 2-10). Distance between the student's seat and the dispenser was the only variable significantly associated with adherence. Student satisfaction with the system was assessed by a self-administered survey at the end of the study. Most students reported that the system increased their motivation to perform hand hygiene as indicated. Conclusion. The RFID-enabled hand hygiene system and benchmarking reports with performance incentives were feasible, reliable, and affordable. Future studies should record video to monitor adherence to the WHO 8-step technique. PMID:27170822
ERIC Educational Resources Information Center
Shotwell, Mary; Apigian, Charles H.
2015-01-01
This study aimed to quantify the influence of student attributes, coursework resources, and online assessments on student learning in business statistics. Surveys were administered to students at the completion of both online and on-ground classes, covering student perception and utilization of internal and external academic resources, as well as…
Authentic Assessment: A Handbook for Educators. Assessment Bookshelf Series.
ERIC Educational Resources Information Center
Hart, Diane
This book reviews the assessment movement, from the history of testing to the practical considerations of enhancing classroom experiences for teachers and students. It explores making time for assessment, tailoring assessment to desired outcomes, and scoring and evaluating student performance. The chapters are: (1) "Where We've Been: Standardized…
Using Performance Assessments To Measure Teachers' Competence in Classroom Assessment.
ERIC Educational Resources Information Center
O'Sullivan, Rita G.; Johnson, Robert L.
The development and pilot testing of a set of performance assessments to determine classroom teachers' measurement competencies in areas covered by "Standards for Teacher Competence in Educational Assessment of Students" (1990) are described. How the use of performance assessments in a graduate-level classroom-assessment course can…
Massey, D; Byrne, J; Higgins, N; Weeks, B; Shuker, M-A; Coyne, E; Mitchell, M; Johnston, A N B
2017-07-01
Objective structured clinical examinations (OSCEs) are designed to assess clinical skill performance and competency of students in preparation for 'real world' clinical responsibilities. OSCEs are commonly used in health professional education and are typically associated with high levels of student anxiety, which may present a significant barrier to performance. Students, including nursing students, have identified that flexible access to exemplar OSCEs might reduce their anxiety and enable them to better prepare for such examinations. To implement and evaluate an innovative approach to preparing students for OSCEs in an undergraduate (registration) acute care nursing course. A set of digitized OSCE exemplars were prepared and embedded in the University-based course website as part of usual course learning activities. Use of the exemplars was monitored, pre and post OSCE surveys were conducted, and qualitative data were collected to evaluate the approach. OSCE grades were also examined. The online OSCE exemplars increased self-rated student confidence, knowledge, and capacity to prepare and provided clarity around assessment expectations. OSCE exemplars were accessed frequently and positively received; but did not impact on performance. Video exemplars aid student preparation for OSCEs, providing a flexible, innovative and clear example of the assessment process. Video exemplars improved self-rated student confidence and understanding of performance expectations, leading to increased engagement and reduced anxiety when preparing for the OSCE, but not overall OSCE performance. Such OSCE exemplars could be used to increase staff capacity and improve the quality of the student learning experience. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Student science achievement and the integration of Indigenous knowledge on standardized tests
NASA Astrophysics Data System (ADS)
Dupuis, Juliann; Abrams, Eleanor
2017-09-01
In this article, we examine how American Indian students in Montana performed on standardized state science assessments when a small number of test items based upon traditional science knowledge from a cultural curriculum, "Indian Education for All", were included. Montana is the first state in the US to mandate the use of a culturally relevant curriculum in all schools and to incorporate this curriculum into a portion of the standardized assessment items. This study compares White and American Indian student test scores on these particular test items to determine how White and American Indian students perform on culturally relevant test items compared to traditional standard science test items. The connections between student achievement on adapted culturally relevant science test items versus traditional items brings valuable insights to the fields of science education, research on student assessments, and Indigenous studies.
ERIC Educational Resources Information Center
Dugger-Roberts, Cherith A.
2014-01-01
The purpose of this quantitative study was to determine if there was a relationship between the TCAP test and Pearson Benchmark assessment in elementary students' reading and language arts and math performance in a northeastern Tennessee school district. This study involved 3rd, 4th, 5th, and 6th grade students. The study focused on the following…
ERIC Educational Resources Information Center
Asirifi, Michael Kwabena; Mensah, Kweku Abeeku; Amoako, Joseph
2015-01-01
The purpose of this research article is to assess how students' different educational backgrounds relate to their performance in engineering mathematics and to the class of award obtained at the Higher National Diploma (HND) level at Cape Coast Polytechnic. A descriptive survey was conducted on students of the Electricals/Electronics Department…
Using cloud-based mobile technology for assessment of competencies among medical students.
Ferenchick, Gary S; Solomon, David
2013-01-01
Valid, direct observation of medical student competency in clinical settings remains challenging and limits the opportunity to promote performance-based student advancement. The rationale for direct observation is to ascertain that students have acquired the core clinical competencies needed to care for patients. Too often student observation results in highly variable evaluations that are skewed by factors other than the student's actual performance. Barriers to effective direct observation and assessment include the lack of effective tools and strategies for assuring that transparent standards are used for judging clinical competency in authentic clinical settings. We developed a web-based content management system under the name Just in Time Medicine (JIT) to address many of these issues. The goals of JIT were fourfold: First, to create a self-service interface allowing faculty with average computing skills to author customizable content and criterion-based assessment tools displayable on internet-enabled devices, including mobile devices; second, to create an assessment and feedback tool capable of capturing learner progress related to hundreds of clinical skills; third, to enable easy access and utilization of these tools by faculty for learner assessment in authentic clinical settings as a means of just in time faculty development; fourth, to create a permanent record of the trainees' observed skills useful for both learner and program evaluation. From July 2010 through October 2012, we implemented a JIT-enabled clinical evaluation exercise (CEX) among 367 third-year internal medicine students. Observers (attending physicians and residents) performed CEX assessments using JIT to guide and document their observations, record their time observing and providing feedback to the students, and their overall satisfaction. Inter-rater reliability and validity were assessed with 17 observers who viewed six videotaped student-patient encounters and by measuring the correlation between student CEX scores and their scores on subsequent standardized-patient OSCE exams. A total of 3567 CEXs were completed by 516 observers. The average number of evaluations per student was 9.7 (±1.8 SD) and the average number of CEXs completed per observer was 6.9 (±15.8 SD). Observers spent less than 10 min on 43-50% of the CEXs and 68.6% on feedback sessions. A majority of observers (92%) reported satisfaction with the CEX. Inter-rater reliability was measured at 0.69 among all observers viewing the videotapes and these ratings adequately discriminated competent from non-competent performance. The measured CEX grades correlated with subsequent student performance on an end-of-year OSCE. We conclude that the use of JIT is feasible in capturing discrete clinical performance data with a high degree of user satisfaction. Our embedded checklists had adequate inter-rater reliability and concurrent and predictive validity.
Yudkowsky, Rachel; Otaki, Junji; Lowenstein, Tali; Riddle, Janet; Nishigori, Hiroshi; Bordage, Georges
2009-08-01
Diagnostic accuracy is maximised by having clinical signs and diagnostic hypotheses in mind during the physical examination (PE). This diagnostic reasoning approach contrasts with the rote, hypothesis-free screening PE learned by many medical students. A hypothesis-driven PE (HDPE) learning and assessment procedure was developed to provide targeted practice and assessment in anticipating, eliciting and interpreting critical aspects of the PE in the context of diagnostic challenges. This study was designed to obtain initial content validity evidence, performance and reliability estimates, and impact data for the HDPE procedure. Nineteen clinical scenarios were developed, covering 160 PE manoeuvres. A total of 66 Year 3 medical students prepared for and encountered three clinical scenarios during required formative assessments. For each case, students listed anticipated positive PE findings for two plausible diagnoses before examining the patient; examined a standardised patient (SP) simulating one of the diagnoses; received immediate feedback from the SP, and documented their findings and working diagnosis. The same students later encountered some of the scenarios during their Year 4 clinical skills examination. On average, Year 3 students anticipated 65% of the positive findings, correctly performed 88% of the PE manoeuvres and documented 61% of the findings. Year 4 students anticipated and elicited fewer findings overall, but achieved proportionally more discriminating findings, thereby more efficiently achieving a diagnostic accuracy equivalent to that of students in Year 3. Year 4 students performed better on cases on which they had received feedback as Year 3 students. Twelve cases would provide a reliability of 0.80, based on discriminating checklist items only. The HDPE provided medical students with a thoughtful, deliberate approach to learning and assessing PE skills in a valid and reliable manner.
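Projections such as "twelve cases would provide a reliability of 0.80" are commonly obtained by extrapolating an observed reliability with the Spearman-Brown prophecy formula (a generalizability analysis is another possibility; the abstract does not specify which was used). The sketch below illustrates that calculation under an assumed single-case reliability of 0.25, chosen only so that the twelve-case projection reaches 0.80.

```python
# Minimal sketch of the Spearman-Brown prophecy formula, used to project test
# reliability when the number of cases (or items) is changed.
# The single-case reliability of 0.25 below is an assumed value for illustration.
def spearman_brown(single_case_reliability, n_cases):
    r = single_case_reliability
    return n_cases * r / (1 + (n_cases - 1) * r)

def cases_needed(single_case_reliability, target):
    r = single_case_reliability
    return target * (1 - r) / (r * (1 - target))

r1 = 0.25  # assumed reliability of a single case
for k in (3, 6, 12):
    print(f"{k} cases -> projected reliability {spearman_brown(r1, k):.2f}")
print(f"Cases needed for reliability 0.80: {cases_needed(r1, 0.80):.1f}")
```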
ERIC Educational Resources Information Center
Baldi, Stephane; Jin, Ying; Green, Patricia J.; Herget, Deborah
2007-01-01
The Program for International Student Assessment (PISA) is a system of international assessments administered by the Organization for Economic Cooperation and Development (OECD) that measures 15-year-olds' performance in reading literacy, mathematics literacy, and science literacy every 3 years. This report focuses on the performance of U.S.…
Assessment Testing: Analysis and Predictions, Spring-Fall 1985.
ERIC Educational Resources Information Center
Harris, Howard L.; Hansson, Claudia J.
During spring and fall 1985, a study was conducted at Cosumnes River College (CRC) to determine how assessment testing scores related to student persistence and performance. The student history files of a random sample of 498 students who had been tested by the CRC Assessment Center during spring and fall 1985 were examined, yielding the following…
A Model for Predicting Student Performance on High-Stakes Assessment
ERIC Educational Resources Information Center
Dammann, Matthew Walter
2010-01-01
This research study examined the use of student achievement on reading and math state assessments to predict success on the science state assessment. Multiple regression analysis was utilized to test the prediction for all students in grades 5 and 8 in a mid-Atlantic state. The prediction model developed from the analysis explored the combined…
What Is the Basis for Self-Assessment of Comprehension When Reading Mathematical Expository Texts?
ERIC Educational Resources Information Center
Österholm, Magnus
2015-01-01
The purpose of this study was to characterize students' self-assessments when reading mathematical texts, in particular regarding what students use as a basis for evaluations of their own reading comprehension. A total of 91 students read two mathematical texts, and for each text, they performed a self-assessment of their comprehension and…
Undergraduate Students' Preferences for Constructed versus Multiple-Choice Assessment of Learning
ERIC Educational Resources Information Center
Mingo, Maya A.; Chang, Hsin-Hui; Williams, Robert L.
2018-01-01
Students (N = 161) in seven sections of an undergraduate educational psychology course rated ten performance-assessment options in collegiate courses. They rated in-class essay exams as their most preferred assessment and multiple-choice exams (in-class and out-of-class) as their least preferred. Also, student ratings of multiple papers and a term…
ERIC Educational Resources Information Center
Howard, Steven J.; Woodcock, Stuart; Ehrich, John; Bokosmaty, Sahar
2017-01-01
Background: A fundamental aim of standardized educational assessment is to achieve reliable discrimination between students differing in the knowledge, skills and abilities assessed. However, questions of the purity with which these tests index students' genuine abilities have arisen. Specifically, literacy and numeracy assessments may also engage…
ERIC Educational Resources Information Center
Crossman, Joanne Marciano
The Oral Communication Competencies Assessment Project was designed to determine student communication competency across the curriculum, transferring skills taught in the communication skills class to authentic classroom performances. The 505 students who were required to make oral presentations across the curriculum during the first term of the…
Cumulative versus end-of-course assessment: effects on self-study time and test performance.
Kerdijk, Wouter; Cohen-Schotanus, Janke; Mulder, B Florentine; Muntinghe, Friso L H; Tio, René A
2015-07-01
Students tend to postpone preparation for a test until the test is imminent, which raises various risks associated with 'cramming' behaviours, including the risk of suboptimal learning. Cumulative assessment utilises spaced testing to stimulate students to study more frequently and to prevent procrastination. This randomised controlled study investigated how cumulative assessment affects time spent on self-study and test performance compared with end-of-course assessment. A total of 78 undergraduate medical students in a Year 2 pre-clinical course were randomly assigned to one of two conditions. Students in the cumulative assessment condition were assessed in weeks 4, 8 and 10. Students in the end-of-course assessment condition were assessed in week 10 only. Each week, students reported the number of hours they spent on self-study. Students in the cumulative assessment condition (n = 25) spent significantly more time on self-study than students in the end-of-course assessment condition (n = 37) in all weeks of the course except weeks 5, 9 and 10. Overall, the cumulative assessment group spent 69 hours more on self-study during the course than did the control group, although the control group spent 7 hours more studying during the final week of the course than did the cumulative assessment group. Students in the cumulative assessment condition scored slightly higher on questions concerning the content of the last part of the course. Cumulative assessment encourages students to distribute their learning activities over a course, which leaves them more opportunity to study the content of the last part of the course prior to the final examination. There was no evidence for a short-term effect of cumulative assessment on overall knowledge gain. We hypothesise that larger positive effects might be found if retention were to be measured in the long term. © 2015 John Wiley & Sons Ltd.
Team-based assessment of professional behavior in medical students.
Raee, Hojat; Amini, Mitra; Momen Nasab, Ameneh; Malek Pour, Abdolrasoul; Jafari, Mohammad Morad
2014-07-01
Self- and peer assessment provide important information about an individual's performance and behavior in all aspects of the professional work environment. The aim of this study was to evaluate professional behavior and performance in medical students using a team-based assessment. In a cross-sectional study, 100 medical students in the 7th year of education were randomly selected and enrolled; for each student, five questionnaires were completed: one self-assessment, two peer assessments and two resident assessments. The scoring system of the questionnaires was based on a seven-point Likert scale. After the questionnaires were completed, the numerical data and written comments provided to the students were collected, analyzed and discussed. Internal consistency (Cronbach's alpha) of the questionnaires was assessed. A p-value < 0.05 was considered significant. Internal consistency was acceptable (Cronbach's alpha 0.83). Interviews revealed that the majority of students and assessors found the method acceptable. The range of scores was 1-6 (Mean±SD=4.39±0.57) for the residents' assessment, 2-6 (Mean±SD=4.49±0.53) for peer assessment, and 3-7 (Mean±SD=5.04±0.32) for self-assessment. There was a significant difference between self-assessment and other methods of assessment. This study demonstrates that a team-based assessment is an acceptable and feasible method for peer and self-assessment of medical students' learning in a clinical clerkship, and has some advantages over traditional assessment methods. Further studies should focus on its strengths and weaknesses.
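A minimal Python sketch of the internal-consistency statistic reported above (Cronbach's alpha), computed on a small invented matrix of Likert responses rather than the study's data:

```python
# Cronbach's alpha for a Likert-type questionnaire: rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([[5, 4, 5, 4], [3, 3, 4, 3], [6, 5, 6, 6], [4, 4, 5, 4], [2, 3, 2, 3]])
print(round(cronbach_alpha(responses), 2))
```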
NASA Astrophysics Data System (ADS)
Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn
2016-07-01
Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills: the Quantitative Skills Assessment of Science Students (QSASS), which examined 10 mathematical and statistical sub-topics. Second, the study established an evidential baseline of students' quantitative skills performance and confidence levels by piloting the QSASS with 187 final-year biosciences students at a research-intensive university. The study is framed within the planned-enacted-experienced curriculum model and contributes to science reform efforts focused on enhancing the quantitative skills of university graduates, particularly in the biosciences. The results showed, on average, weak performance and low confidence on the QSASS, suggesting divergence between academics' intentions and students' experiences of learning quantitative skills. Implications for curriculum design and future studies are discussed.
Mehta, Sagar; Shah, Devesh; Shah, Kushal; Mehta, Sanjiv; Mehta, Neelam; Mehta, Vivek; Mehta, Vijay; Mehta, Vaishali; Motiwala, Smita; Mehta, Naina; Mehta, Devendra
2012-01-01
The objective was to assess the efficacy of a one-year, peer-mediated interventional program consisting of yoga, meditation and play therapy maintained by student volunteers in a school in India. The population consisted of 69 students between the ages of 6 and 11 years, previously identified as having attention deficit hyperactivity disorder (ADHD). A program, known as Climb-Up, was initially embedded in the school twice weekly. Local high school student volunteers were then trained to continue to implement the program weekly over the period of one year. Improvements in ADHD symptoms and academic performance were assessed using Vanderbilt questionnaires completed by both parents and teachers. The performance impairment scores for ADHD students assessed by teachers improved by 6 weeks and were sustained through 12 months in 46 (85%) of the enrolled students. The improvements in their Vanderbilt scores assessed by parents were also seen in 92% (P < 0.0001, Wilcoxon). The Climb-Up program resulted in remarkable improvements in the students' school performances that were sustained throughout the year. These results show promise for a cost-effective program that could easily be implemented in any school. PMID:23316384
Impact of a Web-Based Adaptive Supplemental Digital Resource on Student Mathematics Performance
ERIC Educational Resources Information Center
Sharp, Laurie A.; Hamil, Marc
2018-01-01
Much literature has presented evidence that supplemental digital resources enhance student performance with mathematics. The purpose of this study was to explore the impact of a web-adaptive digital resource, Think Through Math©, on student performance with state-mandated annual standardized mathematics assessments. This study utilized a…
ERIC Educational Resources Information Center
Aleta, Beda T.
2016-01-01
This research study aims to determine the engineering skills self-efficacy sources contributing to the academic performance of AMAIUB engineering students. Thus, a better measure of engineering self-efficacy is needed to adequately assess engineering students' beliefs in their capabilities to perform tasks in their engineering…
Teaching science through literature
NASA Astrophysics Data System (ADS)
Barth, Daniel
2007-12-01
The hypothesis of this study was that a multidisciplinary, activity rich science curriculum based around science fiction literature, rather than a conventional textbook, would increase student engagement with the curriculum and improve student performance on standards-based test instruments. Science fiction literature was chosen on the basis of previous educational research which indicated that science fiction literature was able to stimulate and maintain interest in science. The study was conducted on a middle school campus during the regular summer school session. Students were self-selected from the school's 6th, 7th, and 8th grade populations. The students used the science fiction novel Maurice on the Moon as their only text. Lessons and activities closely followed the adventures of the characters in the book. The students' initial level of knowledge in Earth and space science was assessed by a pretest. After the four week program was concluded, the students took a posttest made up of an identical set of questions. The test included 40 standards-based questions that were based upon concepts covered in the text of the novel and in the classroom lessons and activities. The test also included 10 general knowledge questions that were based upon Earth and space science standards that were not covered in the novel or the classroom lessons or activities. Student performance on the standards-based question set increased an average of 35% for all students in the study group. Every subgroup disaggregated by gender and ethnicity improved by 28-47%. There was no statistically significant change in the performance on the general knowledge question set for any subgroup. Student engagement with the material was assessed by three independent methods, including student self-reports, percentage of classroom work completed, and academic evaluation of student work by the instructor. These assessments of student engagement were correlated with changes in student performance on the standards-based assessment tests. A moderate correlation was found to exist between the level of student engagement with the material and improvement in performance from pretest to posttest.
Zhou, Shaona; Han, Jing; Koenig, Kathleen; Raplinger, Amy; Pi, Yuan; Li, Dan; Xiao, Hua; Fu, Zhao; Bao, Lei
2016-03-01
Scientific reasoning is an important component under the cognitive strand of the 21st century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), which is a core sub-skill of scientific reasoning. The main research question is to investigate the extent to which the existence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning as well as students' abilities to distinguish between testability and causal influences of variables in COV experiments. Data were collected from students in both the USA and China. Students randomly received one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) tend to equate non-influential variables with non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can be used to inform future development of assessment and instruction.
Tucker, Phebe; Jeon-Slaughter, Haekyung; Sener, Ugur; Arvidson, Megan; Khalafian, Andrey
2015-01-01
We explored the theory that measures of medical students' well-being and stress from different types of preclinical curricula are linked with performance on standardized assessments. We hypothesized that self-reported stress and quality of life among sophomore medical students in different types of preclinical curricula would vary in their relationships to USMLE Step 1 scores. Voluntary surveys in 2010 and 2011 compared self-reported stress, physical and mental health, and quality of life with Step 1 scores for beginning sophomore students in the final year of a traditional, discipline-based curriculum and the 1st year of a revised, systems-based curriculum with a changed grading system. Wilcoxon rank sum tests and Spearman rank correlations were used to analyze the data, with significance set at p < .05. New curriculum students reported worse physical health, subjective feelings, leisure activities, social relationships and morale, and more depressive symptoms and life stress than traditional curriculum students. However, among curriculum-related stressors, few differences emerged; revised curriculum sophomores reported less stress working with real and standardized patients than traditional students. There were no class differences in respondents' Step 1 scores. Among emotional and physical health measures, only feelings of morale correlated negatively with Step 1 performance. Revised curriculum students' Step 1 scores correlated negatively with stress from difficulty of coursework. Although revised curriculum students reported worse quality of life, general stress, and health and less stress from patient interactions than traditional students, few measures were associated with performance differences on Step 1. Moreover, curriculum type did not appear to either hinder or help students' Step 1 performance. To identify and help students at risk for academic problems, future assessments of correlates of Step 1 performance should be repeated after the new curriculum is well established, relating them also to performance on other standardized assessments of communication skills, professionalism, and later clinical evaluations in clerkships or internships.
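A short Python sketch of the two analyses named above, a Wilcoxon rank-sum comparison between curricula and a Spearman rank correlation with Step 1 scores, using invented numbers only:

```python
# (1) Wilcoxon rank-sum test comparing a well-being measure between the two curricula.
# (2) Spearman rank correlation between a stress measure and Step 1 scores.
# All values are illustrative, not data from the study.
from scipy.stats import ranksums, spearmanr

traditional = [72, 68, 75, 70, 66, 74]    # e.g. quality-of-life scores, old curriculum
revised     = [61, 64, 59, 66, 62, 60]    # same measure, new curriculum
stat, p = ranksums(traditional, revised)
print(f"rank-sum test: statistic = {stat:.2f}, p = {p:.3f}")

stress = [12, 18, 9, 22, 15, 11]          # curriculum-related stress ratings
step1  = [230, 214, 238, 205, 221, 233]   # USMLE Step 1 scores
rho, p = spearmanr(stress, step1)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```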
Making Performance Assessments a Part of Accountability
ERIC Educational Resources Information Center
Haun, Billy
2018-01-01
The purpose of this commentary is to describe recent efforts in Virginia to develop and use performance assessments, including the challenges that emerged during this process and key considerations for states that integrate performance assessment into their systems. Performance assessments can play an important role in preparing students for…
Online feedback assessments in physiology: effects on students' learning experiences and outcomes.
Marden, Nicole Y; Ulman, Lesley G; Wilson, Fiona S; Velan, Gary M
2013-06-01
Online formative assessments have become increasingly popular; however, formal evidence supporting their educational benefits is limited. This study investigated the impact of online feedback quizzes on the learning experiences and outcomes of undergraduate students enrolled in an introductory physiology course. Four quiz models were tested, which differed in the amount of credit available, the number of attempts permitted, and whether the quizzes were invigilated or unsupervised, timed or untimed, or open or closed book. All quizzes were composed of multiple-choice questions and provided immediate individualized feedback. Summative end-of-course examination marks were analyzed with respect to performance in quizzes and were also compared with examination performance in the year before the quizzes were introduced. Online surveys were conducted to gather students' perceptions regarding the quizzes. The vast majority of students perceived online quizzes as a valuable learning tool. For all quiz models tested, there was a significant relationship between performance in quizzes and end-of-course examination scores. Importantly, students who performed poorly in quizzes were more likely to fail the examination, suggesting that formative online quizzes may be a useful tool to identify students in need of assistance. Of the four quiz models, only one quiz model was associated with a significant increase in mean examination performance. This model had the strongest formative focus, allowing multiple unsupervised and untimed attempts. This study suggests that the format of online formative assessments is critical in achieving the desired impact on student learning. Specifically, such assessments are most effective when they are low stakes.
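The observation that students who performed poorly in quizzes were more likely to fail the examination suggests a binary-outcome model. A hedged Python sketch with invented data; logistic regression is one plausible way to express this relationship, not necessarily the authors' method:

```python
# Modeling examination failure (1 = failed) as a function of formative quiz performance.
import numpy as np
import statsmodels.api as sm

quiz_pct = np.array([35, 42, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95], dtype=float)
failed   = np.array([ 1,  1,  1,  0,  1,  0,  0,  0,  0,  0,  0,  0])

X = sm.add_constant(quiz_pct)          # intercept + quiz score
model = sm.Logit(failed, X).fit(disp=0)
print(model.params)                    # a negative slope means higher quiz scores lower the odds of failing
```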
Joseph, Nitin; Rai, Sharada; Madi, Deepak; Bhat, Kamalakshi; Kotian, Shashidhar M; Kantharaju, Supriya
2016-01-01
Knowledge of community medicine is essential for health care professionals to function as efficient primary health care physicians. Medical students learning Community Medicine as a subject are expected to be competent in critical thinking and generic skills so as to analyze community health problems better. However, current teaching by didactic lectures fails to develop these essential skills. Problem-based learning (PBL) could be an effective strategy in this respect. This study was therefore conducted to compare the academic performance of students who were taught Community Medicine by the PBL method with that of students taught by traditional methods, to assess the generic skills of students taught in a PBL environment, and to assess students' perception of the PBL methodology. This study was conducted among seventh-semester final-year medical students between June and November 2014. PBL was introduced to a randomly chosen group of students, and their performance in an assessment exam at the end of postings was compared with that of the remaining students. Generic skills and perceptions of PBL were also assessed using standardized questionnaires. A total of 77 students took part in the brainstorming session of PBL. The correlation between the self-assigned scores of the participants and those assigned by the tutor in the brainstorming session of PBL was significant (r = 0.266, P = 0.05). Of the 54 students who took part in the presentation session, almost all (53; 98.1%) had a good perception of PBL. Demotivational scores were found to be significantly higher among males (P = 0.024). The academic performance of students (P < 0.001) and success rates (P = 0.05) in the examination were higher among students who took part in PBL compared with controls. PBL improved students' knowledge in comparison with those exposed only to didactic lectures. Because PBL enabled students to identify the gaps in their knowledge and enhanced their group functioning and generic skills, we recommend PBL sessions, as they would help optimize training in Community Medicine at medical schools. The good correlation of tutor and self-assessment scores of participants in the brainstorming session suggests that the role of tutors could be restricted to assessment in presentation sessions alone. Demotivation, which hinders group performance in PBL, needs to be corrected by counselling and timely feedback by the tutors.
A mentor-based portfolio program to evaluate pharmacy students' self-assessment skills.
Kalata, Lindsay R; Abate, Marie A
2013-05-13
Objective. To evaluate pharmacy students' self-assessment skills with an electronic portfolio program using mentor evaluators. Design. First-year (P1) and second-year (P2) pharmacy students used online portfolios that required self-assessments of specific graded class assignments. Using a rubric, faculty and alumni mentors evaluated students' self-assessments and provided feedback. Assessment. Eighty-four P1 students, 74 P2 students, and 59 mentors participated in the portfolio program during 2010-2011. Both student groups performed well overall, with only a small number of resubmissions required. P1 students showed significant improvements across semesters for 2 of the self-assessment questions; P2 students' scores did not differ significantly. The P1 scores were significantly higher than P2 scores for 3 questions during spring 2011. Mentors and students had similar levels of agreement with the extent to which students put forth their best effort on the self-assessments. Conclusion. An electronic portfolio using mentors based inside and outside the school provided students with many opportunities to practice their self-assessment skills. This system represents a useful method of incorporating self-assessments into the curriculum that allows for feedback to be provided to the students.
Celma Vicente, Matilde; Ajuria-Imaz, Eloisa; Lopez-Morales, Manuel; Fernandez-Marín, Pilar; Menor-Castro, Alicia; Cano-Caballero Galvez, Maria Dolores
2015-01-01
This paper shows the utility of the standardized Nursing Interventions Classification (NIC) language for assessing the extent of nursing students' skills during the Practicum in surgical units. The objectives were to identify the NIC interventions that students can learn to perform in surgical units and to determine the level of difficulty in learning interventions, depending on which week of the clinical placement rotation the student is in. This was a qualitative study using the Delphi consensus technique, involving nurses with teaching experience who work in the hospital surgical units where students undertake the Practicum. The results were triangulated through a questionnaire to tutors about their degree of agreement. A consensus was reached about the interventions that students can achieve in surgical units and the frequency with which they can be performed. The level of difficulty of each intervention and the number of weeks of practice that students need to reach the expected level of competence were also determined. The results should enable us to design better rotations matched to student needs. Knowing the frequency of each intervention that is performed in each unit determines the chances of learning it, as well as the indicators for its assessment. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
The Relationship between Language Literacy and ELL Student Academic Performance in Mathematics
ERIC Educational Resources Information Center
Lawon, Molly A.
2017-01-01
This quantitative study used regression analysis to investigate the correlation of limited language proficiency and the performance of English Language Learner (ELL) students on two commonly used math assessments, namely the Smarter Balanced Assessment Consortium (SBAC) and the Measures of Academic Progress (MAP). Scores were analyzed for eighth…
Distorted Perceptions of Competence and Incompetence Are More than Regression Effects
ERIC Educational Resources Information Center
Albanese, M.; Dottl, S.; Mejicano, G.; Zakowski, L.; Seibert, C.; Van Eyck, S.; Prucha, C.
2006-01-01
Students inaccurately assess their own skills, especially high- or low-performers on exams. This study assessed whether regression effects account for this observation. After completing the Infection and Immunity course final exam (IIF), second year medical students (N = 143) estimated their performance on the IIF in terms of percent correct and…
Assessing Students' Performances in Decision-Making: Coping Strategies of Biology Teachers
ERIC Educational Resources Information Center
Steffen, Benjamin; Hößle, Corinna
2017-01-01
Decision-making in socioscientific issues (SSI) constitutes a real challenge for both biology teachers and learners. The assessment of students' performances in SSIs constitutes a problem, especially for biology teachers. The study at hand was conducted in Germany and uses a qualitative approach following the research procedures of grounded theory…
Improving Performance: Leading from the Bottom. PISA in Focus. No. 2
ERIC Educational Resources Information Center
OECD Publishing (NJ1), 2011
2011-01-01
Since the PISA (Programme for International Student Assessment) 2000 and 2009 surveys both focused on reading, one can track in detail how student reading performance has changed over that period. Among the 26 OECD (Organisation for Economic Cooperation and Development) countries with comparable results in both assessments, Chile, Germany,…
Ali, Madiha; Asim, Hamna; Edhi, Ahmed Iqbal; Hashmi, Muhammad Daniyal; Khan, Muhammad Shahjahan; Naz, Farah; Qaiser, Kanza Noor; Qureshi, Sidra Masud; Zahid, Mohammad Faizan; Jehan, Imtiaz
2015-01-01
Stress induced by academic pressures is on the rise among medical students in Pakistan and other parts of the world. Our study examined the relationship between two different systems employed to assess academic performance and the levels of stress among students at two different medical schools in Karachi, Pakistan. A sample consisting of 387 medical students enrolled in pre-clinical years was taken from two universities, one employing the semester examination system with grade point average (GPA) scores (a tiered system) and the other employing an annual examination system with only pass/fail grading. A pre-designed, self-administered questionnaire was distributed. Test anxiety levels were assessed by the Westside Test Anxiety Scale (WTAS). Overall stress was evaluated using the Perceived Stress Scale (PSS). There were 82 males and 301 females, while four did not respond to the gender question. The mean age of the entire cohort was 19.7 ± 1.0 years. A total of 98 participants were from the pass/fail assessment system while 289 were from the GPA system. There was a higher proportion of females in the GPA system (85% vs. 59%; p < 0.01). Students in the pass/fail assessment system had a lower score on the WTAS (2.4 ± 0.8 vs. 2.8 ± 0.7; p = 0.01) and the PSS (17.0 ± 6.7 vs. 20.3 ± 6.8; p < 0.01), indicating lower levels of test anxiety and overall stress than students enrolled in the GPA assessment system. More students in the pass/fail system were satisfied with their performance than those in the GPA system. Based on the present study, we suggest that governing bodies revise and adopt a uniform assessment system for all medical colleges to improve student academic performance and, at the same time, reduce stress levels. Our results indicate that the pass/fail assessment system accomplishes these objectives.
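A small Python sketch showing how a between-group comparison of the kind reported above can be reproduced from summary statistics alone; the means, SDs and group sizes follow the WTAS figures quoted in the abstract, but the choice of Student's t-test is an assumption, not stated in the source:

```python
# Two-sample t-test computed directly from summary statistics (mean, SD, n).
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=2.4, std1=0.8, nobs1=98,    # pass/fail system
                            mean2=2.8, std2=0.7, nobs2=289)   # GPA system
print(f"t = {t:.2f}, p = {p:.4f}")
```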
Morton, David A.; Pippitt, Karly; Lamb, Sara; Colbert-Getz, Jorie M.
2016-01-01
Problem: Effectively solving problems as a team under stressful conditions is central to medical practice; however, because summative examinations in medical education must test individual competence, they are typically solitary assessments. Approach: Using two-stage examinations, in which students first answer questions individually (Stage 1) and then discuss them in teams prior to resubmitting their answers (Stage 2), is one method for rectifying this discordance. On the basis of principles of social constructivism, the authors hypothesized that two-stage examinations would lead to better retention of, specifically, items answered incorrectly at Stage 1. In fall 2014, they divided 104 first-year medical students into two groups of 52 students. Groups alternated each week between taking one- and two-stage examinations such that each student completed 6 one-stage and 6 two-stage examinations. The authors reassessed 61 concepts on a final examination and, using Wilcoxon signed-rank tests, compared performance for all concepts and for just those concepts students initially missed, between Stages 1 and 2. Outcomes: Final examination performance on all previously assessed concepts was not significantly different between the one- and two-stage conditions (P = .77); however, performance on only concepts that students initially answered incorrectly on a prior examination improved by 12% for the two-stage condition relative to the one-stage condition (P = .02, r = 0.17). Next Steps: Team assessment may be most useful for assessing concepts students find difficult, as opposed to all content. More research is needed to determine whether these results apply to all medical school topics and student cohorts. PMID:27049544
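A hedged Python sketch of a paired Wilcoxon signed-rank comparison with an effect size r recovered from the normal approximation, the convention behind the r value reported above; the score vectors are invented:

```python
# Paired Wilcoxon signed-rank test on per-student proportion correct for concepts
# first seen under the one-stage vs. two-stage conditions (values invented),
# with effect size r = Z / sqrt(N) derived from the two-sided p-value.
import numpy as np
from scipy.stats import wilcoxon, norm

one_stage = np.array([0.55, 0.62, 0.48, 0.70, 0.51, 0.66, 0.58, 0.61])
two_stage = np.array([0.68, 0.70, 0.55, 0.74, 0.63, 0.71, 0.66, 0.69])

stat, p = wilcoxon(one_stage, two_stage)
z = norm.isf(p / 2)               # |Z| implied by the two-sided p-value
r = z / np.sqrt(len(one_stage))   # effect size r
print(f"W = {stat}, p = {p:.3f}, r = {r:.2f}")
```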
Assessing Motor Skill Competency in Elementary School Students: A Three-Year Study.
Chen, Weiyun; Mason, Steve; Hypnar, Andrew; Bennett, Austin
2016-03-01
This study examined how well fourth- and fifth-grade students demonstrated motor skill competency as assessed with selected PE Metrics assessment rubrics (2009). Fourth- and fifth-grade students (n = 1,346-1,926) were assessed on their performance of three manipulative skills using the PE Metrics Assessment Rubrics during the pre-intervention year, post-intervention year 1, and post-intervention year 2. Descriptive statistics, independent t-tests, ANOVA, and follow-up comparisons were conducted for data analysis. The results indicated that the post-intervention year 2 cohort performed significantly more competently than the pre-intervention cohort and the post-intervention year 1 cohort on the three manipulative skill assessments. The post-intervention year 1 cohort significantly outperformed the pre-intervention cohort on the soccer dribbling, passing, and receiving and the striking skill assessments, but not on the throwing skill assessment. Although the boys in the three cohorts performed significantly better than the girls on all three skills, the girls showed substantial improvement on the overhand throwing and the soccer skills from baseline to post-intervention year 1 and post-intervention year 2. However, the girls, in particular, need to improve their striking skill. The CATCH PE curriculum was conducive to improving fourth- and fifth-grade students' motor skill competency in the three manipulative skills. This study suggests that PE Metrics assessment rubrics are feasible tools for PE teachers to assess levels of students' demonstration of motor skill competency during a regular PE lesson. Key points: CATCH PE is an empirically evidenced quality PE curriculum that is conducive to improving students' manipulative skill competency. Boys significantly outperformed girls in all three manipulative skills. Girls need to improve motor skill competency in the striking skill. PE Metrics are feasible assessment rubrics that can be easily used by trained physical education teachers to assess students' manipulative skill competency.
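A minimal Python sketch of the ANOVA step mentioned above, comparing a skill score across the three cohorts with invented values:

```python
# One-way ANOVA across the pre-intervention, post-intervention year 1 and
# post-intervention year 2 cohorts on an invented skill score.
from scipy.stats import f_oneway

pre    = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0]
post_1 = [2.6, 2.9, 2.4, 3.0, 2.7, 2.5]
post_2 = [3.1, 3.3, 2.8, 3.4, 3.0, 3.2]

F, p = f_oneway(pre, post_1, post_2)
print(f"F = {F:.2f}, p = {p:.4f}")   # follow-up pairwise comparisons would come next
```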
Graders' Mathematics Achievement
ERIC Educational Resources Information Center
Bond, John B.; Ellis, Arthur K.
2013-01-01
The purpose of this experimental study was to investigate the effects of metacognitive reflective assessment instruction on student achievement in mathematics. The study compared the performance of 141 students who practiced reflective assessment strategies with students who did not. A posttest-only control group design was employed, and results…
Investigating ESL Students' Academic Performance in Tenses
ERIC Educational Resources Information Center
Javed, Muhammad; Ahmad, Atezaz
2013-01-01
The present study intends to assess the ESL students' performance in tenses at secondary school level. Grade 10 students were the target population of the study. A sample of 396 students (255 male and 141 female) was selected through convenience sampling technique from the District of Bahawalnagar, Pakistan. A test focusing on five different types…
Career-Oriented Performance Tasks: Effects on Students' Interest in Chemistry
ERIC Educational Resources Information Center
Espinosa, Allen A.; Monterola, Sheryl Lyn C.; Punzalan, Amelia E.
2013-01-01
The study was conducted to assess the effectiveness of Career-Oriented Performance Task (COPT) approach against the traditional teaching approach (TTA) in enhancing students' interest in Chemistry. Specifically, it sought to find out if students exposed to COPT have higher interest in Chemistry than those students exposed to the traditional…
ERIC Educational Resources Information Center
Collings, David; Garrill, Ashley; Johnston, Lucy
2018-01-01
Universities have a long-established tradition of granting students special consideration when circumstances beyond their control negatively affect performance in assessments. Typically, such situations affect only one student (e.g. medical emergencies) but we consider the impact of a natural disaster that led to all students being eligible for…
Casagrand, Janet; Semsar, Katharine
2017-06-01
Here we describe a 4-yr course reform and its outcomes. The upper-division neurophysiology course gradually transformed from a traditional lecture in 2004 to a more student-centered course in 2008, through the addition of evidence-based active learning practices, such as deliberate problem-solving practice on homework and peer learning structures, both inside and outside of class. Due to the incremental nature of the reforms and the absence of pre-reform learning assessments, we needed a way to retrospectively assess the effectiveness of our efforts. To do this, we first looked at performance on 12 conserved exam questions. Students performed significantly better post-reform both on questions requiring lower-level cognitive skills and on those requiring higher-level cognitive skills. Furthermore, student performance on conserved questions was higher post-reform in both the top and bottom quartiles of students, although lower-quartile student performance did not improve until after the first exam. To examine student learning more broadly, we also used Bloom's taxonomy to quantify a significant increase in the Bloom's level of exams, with students performing equally well post-reform on exams that had over twice as many questions at higher cognitive skill levels. Finally, we believe that four factors contributed critically to the success of the course reform: transformation efforts across multiple course components, alignment between formative and evaluative course materials, student buy-in to course instruction, and instructional support. This reform demonstrates both the effectiveness of incorporating student-centered, active learning into our course, and the utility of using Bloom's level as a metric to assess course reform. Copyright © 2017 the American Physiological Society.
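One simple way to quantify "the Bloom's level of exams", as described above, is to average the Bloom's level (1-6) assigned to each question; the question tallies below are hypothetical, not the exams from the study:

```python
# Average Bloom's taxonomy level of an exam, given one assigned level (1-6) per question.
def mean_blooms_level(question_levels):
    """Average Bloom's level of an exam."""
    return sum(question_levels) / len(question_levels)

exam_pre_reform  = [1, 1, 2, 2, 2, 3, 1, 2, 3, 2]   # mostly recall/comprehension
exam_post_reform = [2, 3, 3, 4, 3, 4, 2, 4, 3, 5]   # more application/analysis
print(mean_blooms_level(exam_pre_reform), mean_blooms_level(exam_post_reform))
```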
The Role of Student Growth Percentiles in Monitoring Learning and Predicting Learning Outcomes
ERIC Educational Resources Information Center
Seo, Daeryong; McGrane, Joshua; Taherbhai, Husein
2015-01-01
Most formative assessments rely on the performance status of a student at a particular time point. However, such a method does not provide any information on the "propensity" of the student to achieve a predetermined target score or whether the student is performing as per the expectations from identical students with the same history of…
Clinical Observed Performance Evaluation: A Prospective Study in Final Year Students of Surgery
ERIC Educational Resources Information Center
Markey, G. C.; Browne, K.; Hunter, K.; Hill, A. D.
2011-01-01
We report a prospective study of clinical observed performance evaluation (COPE) for 197 medical students in the pre-qualification year of clinical education. Psychometric quality was the main endpoint. Students were assessed in groups of 5 in 40-min patient encounters, with each student the focus of evaluation for 8 min. Each student had a series…
NASA Astrophysics Data System (ADS)
Dori, Yehudit J.
2003-01-01
Matriculation 2000 was a 5-year project aimed at moving from the nationwide traditional examination system in Israel to a school-based alternative embedded assessment. Encompassing 22 high schools from various communities in the country, the Project aimed at fostering deep understanding, higher-order thinking skills, and students' engagement in learning through alternative teaching and embedded assessment methods. This article describes research conducted during the fifth year of the Project at 2 experimental and 2 control schools. The research objective was to investigate students' learning outcomes in chemistry and biology in the Matriculation 2000 Project. The assumption was that alternative embedded assessment has some effect on students' performance. The experimental students scored significantly higher than their control group peers on low-level assignments and more so on assignments that required higher-order thinking skills. The findings indicate that given adequate support and teachers' consent and collaboration, schools can transfer from nationwide or statewide standardized testing to school-based alternative embedded assessment.
Schlegel, Elisabeth F M; Selfridge, Nancy J
2014-05-01
Formative assessments are tools for assessing content retention, providing valuable feedback to students and teachers. In medical education, information technology-supported games can accommodate large classes divided into student teams while fostering active engagement. The aim was to establish an innovative, stimulating approach to formative assessment for large classes that furthers collaborative skills and promotes learning and student engagement linked to improved academic performance. Using audience response technology, a fast-paced, competitive, interactive quiz game involving dermatology was developed. This stimulating setting, provided on the last day of class, prepares students for the high-stakes exams needed to continue their medical education while training collaborative skills, as supported by survey outcomes and average class scores. Educational game competitions provide formative assessments and feedback for students and faculty alike, enhancing learning and teaching processes. In this study, we show an innovative approach to accommodating a large class divided into competing teams, furthering collaborative skills as reflected in academic performance.
Students' Motivations for Data Handling Choices and Behaviors: Their Explanations of Performance
Keiler, Leslie; Woolnough, Brian
2003-01-01
Cries for increased accountability through additional assessment are heard throughout the educational arena. However, as demonstrated in this study, to make a valid assessment of teaching and learning effectiveness, educators must determine not only what students do, but also why they do it, as the latter significantly affects the former. This study describes and analyzes 14- to 16-year-old students' explanations for their choices and performances during science data handling tasks. The study draws heavily on case-study methods for the purpose of seeking an in-depth understanding of classroom processes in an English comprehensive school. During semistructured scheduled and impromptu interviews, students were asked to describe, explain, and justify the work they did with data during their science classes. These student explanations fall within six categories, labeled 1) implementing correct procedures, 2) following instructions, 3) earning marks, 4) doing what is easy, 5) acting automatically, and 6) working within limits. Each category is associated with distinct outcomes for learning and assessment, with some motivations resulting in inflated performances while others mean that learning was underrepresented. These findings illuminate the complexity of student academic choices and behaviors as mediated by an array of motivations, casting doubt on the current understanding of student performance. PMID:12822035
Student-written single-best answer questions predict performance in finals.
Walsh, Jason; Harris, Benjamin; Tayyaba, Saadia; Harris, David; Smith, Phil
2016-10-01
Single-best answer (SBA) questions are widely used for assessment in medical schools; however, often clinical staff have neither the time nor the incentive to develop high-quality material for revision purposes. A student-led approach to producing formative SBA questions offers a potential solution. Cardiff University School of Medicine students created a bank of SBA questions through a previously described staged approach, involving student question-writing, peer-review and targeted senior clinician input. We arranged questions into discrete tests and posted these online. Student volunteer performance on these tests from the 2012/13 cohort of final-year medical students was recorded and compared with the performance of these students in medical school finals (knowledge and objective structured clinical examinations, OSCEs). In addition, we compared the performance of students that participated in question-writing groups with the performance of the rest of the cohort on the summative SBA assessment. Performance in the end-of-year summative clinical knowledge SBA paper correlated strongly with performance in the formative student-written SBA test (r ≈ 0.60, p < 0.01). There was no significant correlation between summative OSCE scores and formative student-written SBA test scores. Students who wrote and reviewed questions scored higher than average in the end-of-year summative clinical knowledge SBA paper. Student-written SBAs predict performance in end-of-year SBA examinations, and therefore can provide a potentially valuable revision resource. There is potential for student-written questions to be incorporated into summative examinations. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Pine, Jerome; Aschbacher, Pamela; Roth, Ellen; Jones, Melanie; McPhee, Cameron; Martin, Catherine; Phelps, Scott; Kyle, Tara; Foley, Brian
2006-05-01
A large number of American elementary school students are now studying science using the hands-on inquiry curricula developed in the 1990s: Insights; Full Option Science System (FOSS); and Science and Technology for Children (STC). A goal of these programs, echoed in the National Science Education Standards, is that children should gain abilities to do scientific inquiry and understanding about scientific inquiry. We have studied the degree to which students can do inquiries by using four hands-on performance assessments, which required one or three class periods. To be fair, the assessments avoided content that is studied in depth in the hands-on programs. For a sample of about 1000 fifth grade students, we compared the performance of students in hands-on curricula with an equal number of students with textbook curricula. The students were from 41 classrooms in nine school districts. The results show little or no curricular effect. There was a strong dependence on students' cognitive ability, as measured with a standard multiple-choice instrument. There was no significant difference between boys and girls. Also, there was no difference on a multiple-choice test, which used items released from the Trends in International Mathematics and Science Study (TIMSS). It is not completely clear whether the lack of difference on the performance assessments was a consequence of the assessments, the curricula, and/or the teaching.
ERIC Educational Resources Information Center
Berkeley-Jones, Catherine Spotswood
2012-01-01
The purpose of this study was to examine teacher Levels of Technology Implementation (LoTi) self-ratings and student Texas Assessment of Knowledge and Skills (TAKS) scores. The study assessed the relationship between LoTi ratings and TAKS scores of 6th, 7th, and 8th grade students as reported in student records at Alamo Heights Independent School…
ERIC Educational Resources Information Center
Al-Tayib Umar, Abdul Majeed
2018-01-01
This study tries to identify the effect of assessment for learning on a group of Sudanese pre-medical students' performance in English for Specific Purposes (ESP). The study also attempts to identify students' perception and attitudes towards this type of assessment. The sample of the study is composed of 53 subjects from the Pre-medical students…
Weak self-directed learning skills hamper performance in cumulative assessment.
Tio, René A; Stegmann, Mariken E; Koerts, Janke; van Os, Titus W D P; Cohen-Schotanus, Janke
2016-01-01
Self-regulated learning is an important determinant of academic performance. Previous research has shown that cumulative assessment encourages students to work harder and improve their results. However, not all students seem to respond as intended. We investigated the influence of students' behavioral traits on their responsiveness to a cumulative assessment strategy. The cumulative test results of a third-year integrated ten-week course unit were analyzed. The test was divided into three parts delivered at 4, 8 and 10 weeks. Low starters (below the median) with low or high improvement (below or above the median) were identified and compared regarding their behavioral traits (assessed with the Temperament and Character Inventory questionnaire). A total of 295 students filled out the questionnaire. Seventy percent of the students who scored below the median on the first two test parts improved during the final part. Students who were less responsive in improving their test results scored lower only on the TCI scale "self-directedness" (t = 2.49; p = 0.011). Behavioral traits appear to influence student reactions to feedback on test results, with students with low self-directedness scores being particularly at risk. Such students can thus be identified and should receive special attention from student counselors.
NASA Astrophysics Data System (ADS)
Ishimoto, Michi; Thornton, Ronald K.; Sokoloff, David R.
2014-12-01
This study assesses the Japanese translation of the Force and Motion Conceptual Evaluation (FMCE). Researchers are often interested in comparing the conceptual ideas of students with different cultural backgrounds. The FMCE has been useful in identifying the concepts of English-speaking students from different backgrounds. To identify effectively the conceptual ideas of Japanese students and to compare them to those of their English-speaking counterparts, more work is required. Because of differences between the Japanese and English languages, and between the Japanese and American educational systems, it is important to assess the Japanese translation of the FMCE, a conceptual evaluation originally developed in English for American students. To assess its appropriateness, we examined the performance of a large sample of students on the translated version of the FMCE and then compared the results to those of English-speaking students. The data comprise the pretest results of 1095 students, most of whom were first-year students at a midlevel engineering school between 2003 and 2012. Basic statistics and the classical test theory indices of the translated FMCE indicate that its reliability and discrimination are appropriate to assess Japanese students' concepts about force and motion. In general, the preconcepts of Japanese students assessed with the Japanese translation of the FMCE are quite similar to those of American students assessed with the FMCE, thereby supporting the validity of the translated version. However, our findings do show (1) that only a small percentage of Japanese students grasped Newtonian concepts and (2) that the percentage of Japanese students who used two different concept models together to answer some questions seems to be higher than that of American students.
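A short Python sketch of the classical test theory indices mentioned above, item difficulty and discrimination, computed on an invented 0/1 response matrix:

```python
# Classical test theory indices: item difficulty (proportion correct) and item
# discrimination as a corrected item-total (point-biserial) correlation.
import numpy as np

responses = np.array([              # rows = students, columns = items (1 = correct)
    [1, 0, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 0, 1],
], dtype=float)

difficulty = responses.mean(axis=0)        # proportion answering each item correctly
total = responses.sum(axis=1)
discrimination = [
    np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]  # item vs. rest-of-test score
    for i in range(responses.shape[1])
]
print("difficulty:", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
```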
Weekly active-learning activities in a drug information and literature evaluation course.
Timpe, Erin M; Motl, Susannah E; Eichner, Samantha F
2006-06-15
The objective was to incorporate learning activities into the weekly 2-hour Drug Information and Literature Evaluation class sessions to improve student ability and confidence in performing course objectives, as well as to assess student perception of the value of these activities. In-class activities that emphasized content and skills taught within class periods were created and implemented. Three different surveys assessing student ability and confidence in completing drug information and literature retrieval and evaluation tasks were administered prior to and following the appropriate class sessions. At the completion of the course, an additional evaluation was administered to assess the students' impressions of the value of the learning activities. Students reported increased ability and confidence in all course objectives. Students also reported that the teaching activities were useful in their learning of the material. Incorporation of weekly learning activities resulted in an improvement in student ability and confidence to perform course objectives. Students considered these activities to be beneficial and to contribute to the completion of course objectives.
ERIC Educational Resources Information Center
Meyen, Ed; Poggio, John; Seok, Soonhwa; Smith, Sean
2006-01-01
One of the most significant challenges facing policy makers in education today is to ensure that state assessments designed to measure student performance across specified grade-level curriculum content standards will allow all students to demonstrate what they have learned. This challenge is made complex by the varied attributes of students with…
Student Conceptions of Feedback: Impact on Self-Regulation, Self-Efficacy, and Academic Achievement
ERIC Educational Resources Information Center
Brown, Gavin T. L.; Peterson, Elizabeth R.; Yao, Esther S.
2016-01-01
Background: Lecturers give feedback on assessed work in the hope that students will take it on board and use it to help regulate their learning for the next assessment. However, little is known about how students' conceptions of feedback relate to students' self-regulated learning and self-efficacy beliefs and academic performance. Aims: This…
ERIC Educational Resources Information Center
Murphy, Karen; Barry, Shane
2016-01-01
Presentation feedback can be limited in its feed-forward value, as students do not have their actual presentation available for review whilst reflecting upon the feedback. This study reports on students' perceptions of the learning and feed-forward value of an oral presentation assessment. Students self-marked their performance immediately after…
ERIC Educational Resources Information Center
Freeman, Sarah Reives
2013-01-01
The main focus of this study is to determine the effect of test design on the academic performance of students with disabilities participating in the NCEXTEND2 modified assessment program during the 2010-2011 school year. Participation of all students in state and federal accountability measures is required by No Child Left Behind (2001) and the…
ERIC Educational Resources Information Center
Chang, Chi-Cheng; Wu, Bing-Hong
2012-01-01
This study explored the reliability and validity of teacher assessment in a Web-based portfolio assessment environment (or Web-based teacher portfolio assessment). Participants were 72 eleventh graders taking the "Computer Application" course. The students performed portfolio creation, inspection, and self- and peer-assessment using the Web-based…
Fifteen years of portfolio assessment of dental hygiene student competency: lessons learned.
Gadbury-Amyot, Cynthia C; Bray, Kimberly Krust; Austin, Kylie J
2014-10-01
Adoption of portfolio assessment in the educational environment is gaining attention as a means to incorporate self-assessment into the curriculum and to use evidence to support learning outcomes and to demonstrate competency. Portfolios provide a medium for students to demonstrate and document their personal and professional growth across the curriculum. The purpose of this literature review is to discuss the drivers for portfolio education, the benefits to both students and program faculty/administrators, the barriers associated with portfolio use, and suggested solutions that have been determined through several years of "lessons learned." The Division of Dental Hygiene at the University of Missouri-Kansas City School of Dentistry has been utilizing portfolio assessment for over 15 years and has collected data related to portfolio performance since 2001. Results from correlational statistics calculated on the 312 dental hygiene students who graduated from 2001 to 2013 demonstrate a positive and significant relationship between portfolio performance and overall GPA, as well as between portfolio performance and NBDHE scores. Copyright © 2014 The American Dental Hygienists’ Association.
Abebe, Mesfin G; Tariku, Mebit K; Yitaferu, Tadele B; Shiferaw, Ephrem D; Desta, Firew A; Yimer, Endris M; Akassa, Kefyalew M; Thompson, Elizabeth C
2017-04-01
To assess the level of nutrition-sensitive agriculture competencies of graduating midlevel animal and plant sciences students in Ethiopia and identify factors associated with the attainment of competencies. A cross-sectional study design using structured skills observation checklists, objective written questions, and structured questionnaires was employed. Two agriculture technical vocational education and training colleges in 2 regions of Ethiopia. A total of 145 students were selected using stratified random sampling techniques from a population of 808 students, with a response rate of 93%. Nutrition-sensitive agriculture competency (knowledge and skills attributes) of graduating students. Bivariate and multivariable statistical analyses were used to examine the association between students' gender, age, department, institutional ownership, and perception of the learning environment and their performance in nutrition competency. Combined scores showed that 49% of students demonstrated mastery of nutrition competencies. Gender and institutional ownership were associated with the performance of students (P < .001); male students and students at a federal institution performed better. The study showed low performance of students in nutrition competency and suggested the need for strengthening the curriculum, building tutors' capacity, and providing additional support to female students and regional colleges. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
2012-01-01
Background The relationship between the sleep/wake habits and the academic performance of medical students is insufficiently addressed in the literature. This study aimed to assess the relationship of sleep habits and sleep duration with academic performance in medical students. Methods This study was conducted between December 2009 and January 2010 at the College of Medicine, King Saud University, and included a systematic random sample of healthy medical students in the first (L1), second (L2) and third (L3) academic levels. A self-administered questionnaire was distributed to assess demographics, sleep/wake schedule, sleep habits, and sleep duration. Daytime sleepiness was evaluated using the Epworth Sleepiness Scale (ESS). School performance was stratified as “excellent” (GPA ≥3.75/5) or “average” (GPA <3.75/5). Results The final analysis included 410 students (males: 67%). One hundred fifteen students (28%) had “excellent” performance, and 295 students (72%) had “average” performance. The “average” group had a higher ESS score and a higher percentage of students who felt sleepy during class. In contrast, the “excellent” group had an earlier bedtime and increased total sleep time (TST) during weekdays. A subjective feeling of obtaining sufficient sleep and non-smoking were the only independent predictors of “excellent” performance. Conclusion Decreased nocturnal sleep time, late bedtimes during weekdays and weekends, and increased daytime sleepiness are negatively associated with academic performance in medical students. PMID:22853649
Assessment of Student Professional Outcomes for Continuous Improvement
ERIC Educational Resources Information Center
Keshavarz, Mohsen; Baghdarnia, Mostafa
2013-01-01
This article describes a method for the assessment of professional student outcomes (performance-type outcomes or soft skills). The method is based upon group activities, research on modern electrical engineering topics by individual students, classroom presentations on chosen research topics, final presentations, and technical report writing.…
School Accountability and Assessment: Should We Put the Roof up First?
ERIC Educational Resources Information Center
Klinger, Don A.; Maggi, Stefania; D'Angiulli, Amedeo
2011-01-01
School accountability and student assessment are closely associated in educational jurisdictions' attempts to monitor student achievement, focus instruction, and improve subsequent student and school performance. The research reported in this article examines the School Effectiveness Framework in Ontario, Canada, exploring the foundations upon…
The Impact of Poverty and School Size on the 2015-16 Kansas State Assessment Results
ERIC Educational Resources Information Center
Carter, Ted
2017-01-01
Schools with higher percentages of students in poverty have lower student assessment results on the 2015-16 Kansas Math and ELA assessments, and larger schools have lower student achievement results than smaller schools. In addition, higher poverty schools are likely to have larger gaps in performance based on special education status and possibly…
ERIC Educational Resources Information Center
Bulunuz, Nermin; Bulunuz, Mizrap; Karagoz, Funda; Tavsanli, Omer Faruk
2016-01-01
The present study has two aims. Firstly, it aims to determine eighth grade students' conceptual understanding of floating and sinking through formative assessment probes. Secondly, it aims to determine whether or not there is a significant difference between students' performance in formative assessment probes and their achievement in the…
ERIC Educational Resources Information Center
Cai, Jinfa, And Others
1996-01-01
Presents a conceptual framework for analyzing students' mathematical understanding, reasoning, problem solving, and communication. Analyses of student responses indicated that the tasks appear to measure the complex thinking and reasoning processes that they were designed to assess. Concludes that the QUASAR assessment tasks can capture changes in…
Yilmaz Soylu, Meryem; Zeleny, Mary G.; Zhao, Ruomeng; Bruning, Roger H.; Dempsey, Michael S.; Kauffman, Douglas F.
2017-01-01
The two studies reported here explored the factor structure of the newly constructed Writing Achievement Goal Scale (WAGS) and examined relationships among secondary students' writing achievement goals, writing self-efficacy, affect for writing, and writing achievement. In the first study, 697 middle school students completed the WAGS. A confirmatory factor analysis revealed a good fit for these data with a three-factor model corresponding to mastery, performance approach, and performance avoidance goals. The results of Study 1 supported moving forward with Study 2, which included 563 high school students. The secondary students completed the WAGS, as well as the Self-efficacy for Writing Scale and the Liking Writing Scale. Students also self-reported grades for writing and for language arts courses. Approximately 6 weeks later, students completed a statewide writing assessment. We tested a theoretical model representing relationships among Study 2 variables using structural equation modeling, including students' responses to the study scales and students' scores on the statewide assessment. Results from Study 2 revealed a good fit between a model depicting proposed relationships among the constructs and the data. Findings are discussed relative to achievement goal theory and writing. PMID:28878707
Zhou, Shaona; Han, Jing; Koenig, Kathleen; Raplinger, Amy; Pi, Yuan; Li, Dan; Xiao, Hua; Fu, Zhao
2015-01-01
Scientific reasoning is an important component under the cognitive strand of the 21st century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), which is a core sub-skill of scientific reasoning. The main research question is to investigate the extent to which the existence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning as well as students’ abilities to distinguish between testability and causal influences of variables in COV experiments. Data were collected from students in both the USA and China. Students received randomly one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) tend to equate non-influential variables with non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can be used to inform future development of assessment and instruction. PMID:26949425
ERIC Educational Resources Information Center
Gulikers, Judith T. M.; Bastiaens, Theo J.; Kirschner, Paul A.; Kester, Liesbeth
2006-01-01
This article examines the relationships between perceptions of authenticity and alignment on study approach and learning outcome. Senior students of a vocational training program performed an authentic assessment and filled in a questionnaire about the authenticity of various assessment characteristics and the alignment between the assessment and…
The APU and Assessment in the Middle Years.
ERIC Educational Resources Information Center
Marjoram, D. T. E.
1978-01-01
Because of student characteristics in the middle years, several types of assessment are needed. The national monitoring system being developed by the Assessment of Performance Unit (APU) may prove a useful assessment framework for individual schools, providing developmental data for program comparisons and student movement between schools. (SJL)
The "pHunger Games": Manuscript Review to Assess Graduating Chemistry Majors
ERIC Educational Resources Information Center
Gorin, David J.; Jamieson, Elizabeth R.; Queeney, K. T.; Shea, Kevin M.; Spray, Carrie G. Read
2016-01-01
Numerous options exist to assess student performance using standardized, multiple-choice exams at the course and department levels. This paper describes the development and implementation of an alternative department-level assessment for graduating chemistry majors. The assessment detailed here evaluates students' ability to transfer chemical…
Bloomington Writing Assessment 1977; Student Exercise, Teacher Directions, Scoring.
ERIC Educational Resources Information Center
Bloomington Public Schools, MN.
This booklet contains the 14 exercises that are used in the Bloomington, Minnesota, school system's writing assessment program. Depending on their applicability, the exercises may be used to assess the writing performance of fourth-, eighth-, or eleventh-grade students. Thirteen of the exercises are from the National Assessment of Educational…
Measuring What High-Achieving Students Know and Can Do on Entry to School: PIPS 2002-2008
ERIC Educational Resources Information Center
Wildy, Helen; Styles, Irene
2011-01-01
Anecdotal evidence from teachers in Western Australia suggested that increasing numbers of on-entry students have been performing at high levels over recent years on the Performance Indicators in Primary Schools Baseline Assessment (PIPS-BLA). This paper reports the results of an investigation into the performance of high-scoring students. Data…
ERIC Educational Resources Information Center
Wong, Caroline; Delante, Nimrod Lawsin; Wang, Pengji
2017-01-01
This study examines the effectiveness of Post-Entry English Language Assessment (PELA) as a predictor of international business students' English writing performance and academic performance. An intervention involving the implementation of contextualised English writing workshops was embedded in a specific business subject targeted at students who…
Technology-Supported Performance Assessments for Middle School Geoscience
NASA Astrophysics Data System (ADS)
Zalles, D. R.; Quellmalz, E.; Rosenquist, A.; Kreikemeier, P.
2002-12-01
Under funding from the World Bank, the U.S. Department of Education, the National Science Foundation, and the Federal Government's Global Learning and Observations to Benefit the Environment Program (GLOBE), SRI International has developed and piloted web-accessible performance assessments that measure K-12 students' abilities to use learning technologies to reason with scientific information and communicate evidence-based conclusions to scientific problems. This presentation will describe the assessments that pertain to geoscience at the middle school level. They are the GLOBE Assessments and EPA Phoenix, an instantiation of SRI's model of assessment design known as Integrative Performance Assessments in Technology (IPAT). All are publicly-available on the web. GLOBE engages students in scientific data collection and observation about the environment. SRI's classroom assessments for GLOBE provide sample student assessment tools and frameworks that allow teachers and students to assess how well students can use the data in scientific inquiry projects. Teachers can use classroom assessment tools on the site to develop integrated investigations for assessing GLOBE within their particular science curricula. Rubrics are provided for measuring students' GLOBE-related skills, and alignments are made to state, national, and international science standards. Sample investigations are provided about atmosphere, hydrology, landcover, soils, earth systems, and visualizations. The IPAT assessments present students with engaging problems rooted in science or social science content, plus sets of tasks and questions that require them to gather relevant information on the web, use reasoning strategies to analyze and interpret the information, use spreadsheets, word processors, and other productivity tools, and communicate evidence-based findings and recommendations. In the process of gathering information and drawing conclusions, students are assessed on how well they can operate the technology as well as reason with the information made available through its use. In EPA Phoenix, students are asked to examine different representations of air quality data on the EPA website, as well as national weather data, in order to judge whether Phoenix would be a good site for holding certain athletic events. The students are assessed on how well they can interpret the data, synthesize it, and develop and communicate their conclusions. With the exception of formulating Web searches, results from piloting indicated that students were better at operating technology and interpreting single data sources than they were with synthesizing data from multiple sources and communicating cohesive evidence-based conclusions. Under the aegis of NSF and the International Association for the Evaluation of Educational Achievement, SRI is developing more IPAT assessments in science for a comparative international research study about student achievement in information and communication technology. These assessments will add other technologies into the mix such as dynamic modeling tools and geographic information systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thakur, Gautam; Olama, Mohammed M; McNair, Wade
Data-driven assessments and adaptive feedback are becoming a cornerstone of research in educational data analytics and involve developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the students and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present our efforts in using data analytics that enable educationists to design novel data-driven assessment and feedback mechanisms. In order to achieve this objective, we investigate the temporal stability of students' grades and perform predictive analytics on academic data collected from 2009 through 2013 in one of the most commonly used learning management systems, Moodle. First, we identified the data features useful for assessment and for predicting student outcomes, such as students' scores on homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total Grade Point Average (GPA) in the term they enrolled in the course. Second, time series models in both the frequency and time domains are applied to characterize the progression as well as overall projections of the grades. In particular, the models analyzed the stability as well as the fluctuation of grades among students across the collegiate years (from freshman to senior) and disciplines. Third, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. The time series analysis indicates that assessments and continuous feedback are more critical for freshmen and sophomores (even in easy courses) than for seniors, and those assessments may be provided using the predictive models. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy. Our results show that there are strong ties associated with the first few weeks of coursework, and these have an impact on the design and distribution of individual modules.
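As a hedged illustration of the kind of early-warning model the preceding abstract describes, the sketch below fits a logistic regression that scores a student's probability of failing from a handful of coursework features. The feature names, the synthetic data, and the outcome definition are assumptions made for the example; this is not the authors' Moodle pipeline or dataset.

```python
# Illustrative early-warning sketch: logistic regression on a few hypothetical
# coursework features (names and data invented, not the authors' Moodle set).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 100, n),    # homework score (%)
    rng.uniform(0, 100, n),    # quiz score (%)
    rng.poisson(5, n),         # forum posts in the first weeks
    rng.uniform(1.0, 4.0, n),  # GPA at enrollment
])
# Synthetic outcome: failing is more likely with low scores and low GPA
logit = (-4 + 0.02 * (100 - X[:, 0]) + 0.02 * (100 - X[:, 1])
         - 0.1 * X[:, 2] - 0.5 * (X[:, 3] - 2.5))
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = failed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # estimated probability of failing
print("AUC on held-out students:", round(roc_auc_score(y_test, risk), 3))
```

In practice, the predicted probabilities would be thresholded or ranked to flag students for early advising, which is the intervention use the abstract emphasizes.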
Wettstein, Richard B; Wilkins, Robert L; Gardner, Donna D; Restrepo, Ruben D
2011-03-01
Critical thinking is an important characteristic to develop in respiratory care students. We used the short-form Watson-Glaser Critical Thinking Appraisal instrument to measure critical-thinking ability in 55 senior respiratory care students in a baccalaureate respiratory care program. We calculated the Pearson correlation coefficient to assess the relationships between critical-thinking score, age, and student performance on the clinical-simulation component of the national respiratory care boards examination. We used chi-square analysis to assess the association between critical-thinking score and educational background. There was no significant relationship between critical-thinking score and age, or between critical-thinking score and student performance on the clinical-simulation component. There was a significant (P = .04) positive association between a strong science-course background and critical-thinking score, which might be useful in predicting a student's ability to perform in areas where critical thinking is of paramount importance, such as clinical competencies, and to guide candidate-selection for respiratory care programs.
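For readers who want to see the two analyses named above in code, the following minimal sketch runs a Pearson correlation between critical-thinking and simulation scores and a chi-square test of association between science background and critical-thinking level. All values are invented for illustration; they are not the study's data.

```python
# Illustrative sketch of the two analyses named above, on invented numbers.
import numpy as np
from scipy.stats import pearsonr, chi2_contingency

# Hypothetical critical-thinking scores and clinical-simulation scores
ct_score  = np.array([24, 28, 31, 22, 35, 27, 30, 26, 33, 29])
sim_score = np.array([68, 72, 75, 60, 80, 70, 74, 66, 79, 71])
r, p = pearsonr(ct_score, sim_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Hypothetical 2x2 table: science background (rows) by high/low CT score (columns)
table = np.array([[18, 7],
                  [9, 21]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```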
Utilizing a Meals on Wheels program to teach falls risk assessment to medical students.
Demons, Jamehl L; Chenna, Swapna; Callahan, Kathryn E; Davis, Brooke L; Kearsley, Linda; Sink, Kaycee M; Watkins, Franklin S; Williamson, Jeff D; Atkinson, Hal H
2014-01-01
Falls are a critical public health issue for older adults, and falls risk assessment is an expected competency for medical students. The aim of this study was to design an innovative method to teach falls risk assessment using community-based resources and limited geriatrics faculty. The authors developed a Fall Prevention Program through a partnership with Meals-on-Wheels (MOW). A 3rd-year medical student accompanies a MOW client services associate to a client's home and performs a falls risk assessment including history of falls, fear of falling, medication review, visual acuity, a Get Up and Go test, a Mini-Cog, and a home safety evaluation, reviewed in a small group session with a faculty member. During the 2010 academic year, 110 students completed the in-home falls risk assessment, rating it highly. One year later, 63 students voluntarily completed a retrospective pre/postsurvey, and the proportion of students reporting moderate to very high confidence in performing falls risk assessments increased from 30.6% to 87.3% (p < .001). Students also reported using most of the skills learned in subsequent clerkships. A single educational intervention in the MOW program effectively addressed geriatrics competencies with minimal faculty effort and could be adopted by many medical schools.
SOAP Methodology in General Practice/Family Medicine Teaching in Practical Context.
Santiago, Luiz Miguel; Neto, Isabel
2016-12-30
Medical records in General Practice/Family Medicine are an essential source of information on the health status of the patient and a communication document between health professionals. The development of competencies in General Practice/Family Medicine during pre-graduation must include the ability to make adequate medical records in a practical context. Since 2012, medical students at the University of Beira Interior have been performing visits using the Subjective, Objective, Assessment and Plan (SOAP) methodology, with a performance evaluation of each visit, with the aim of checking on which SOAP aspects students reveal the most difficulties in order to define improvement techniques and to correlate the patient's grade with the tutor's evaluation. We analysed the evaluation data for the 2015-2016 school year for the General Practice/Family Medicine visit carried out by fourth-year medical students, comparing the averages of each item in the SOAP checklist and the patient evaluation. In the SOAP assessment, 29.7% of students were in the best grade quartile, 37.1% in the best competencies quartile, and 27.2% in the best patient-grade quartile. 'Evolution was verified/noted' received the worst grades in Subjective, 'Record of physical examination focused on the problem of the visit' received the worst grades in Objective, 'Notes of diagnostic reasoning/differential diagnosis' received the worst grades in Assessment, and 'Negotiation of aims to achieve' received the worst grades in Plan. The best tutor evaluation was found in 'communication'. Only one previous study evaluated students' performance under examination during a visit, with results similar to the present ones, and none addressed the patient's evaluation. Students revealed good performance in using the SOAP methodology. The findings represent the beginning of the introduction of SOAP to the students. This evaluation breaks ground towards better ways to teach the most difficult aspects.
Portfolio: a comprehensive method of assessment for postgraduates in oral and maxillofacial surgery.
Kadagad, Poornima; Kotrashetti, S M
2013-03-01
Postgraduate learning and assessment is an important responsibility of an academic oral and maxillofacial surgeon. The current methods of assessment for postgraduate training include formative evaluation in the form of seminars, case presentations, log books, and infrequently conducted end-of-year theory exams. The end-of-course theory and practical examination is a summative evaluation that awards the degree to the student based on the grades obtained. Oral and maxillofacial surgery is mainly a skill-based specialty, and deliberate practice enhances skill. But the traditional system of assessment of postgraduates emphasizes their performance on the summative exam, which fails to evaluate an integral picture of the student throughout the course. Emphasis on competency and holistic growth of the postgraduate student during training in recent years has led to research and evaluation of assessment methods to quantify students' progress during training. The portfolio method of assessment has been proposed as a potentially functional method for postgraduate evaluation. It is defined as a collection of papers and other forms of evidence that learning has taken place. It allows the collation and integration of evidence on competence and performance from different sources to gain a comprehensive picture of everyday practice. The benefits of portfolio assessment in health professions education are twofold: its potential to assess performance and its potential to assess outcomes, such as attitudes and professionalism, that are difficult to assess using traditional instruments. This paper is an endeavor toward the development of a portfolio method of assessment for postgraduate students in oral and maxillofacial surgery.
Evaluation of an interactive, case-based review session in teaching medical microbiology.
Blewett, Earl L; Kisamore, Jennifer L
2009-08-27
Oklahoma State University-Center for Health Sciences (OSU-CHS) has replaced its microbiology wet laboratory with a variety of tutorials, including a case-based interactive session called Microbial Jeopardy!. The question remains whether the time spent by students and faculty in the interactive case-based tutorial is worthwhile. This study was designed to address this question by analyzing both student performance data and students' perceptions regarding the tutorial. Both quantitative and qualitative data were used in the current study. Part One of the study involved assessing student performance using archival records of seven case-based exam questions used in the 2004, 2005, 2006, and 2007 OSU-CHS Medical Microbiology course. Two-sample t-tests for proportions were used to test for significant differences related to tutorial usage. Part Two used both quantitative and qualitative means to assess students' perceptions of the Microbial Jeopardy! session. First, a retrospective survey was administered to students who were enrolled in Medical Microbiology in 2006 or 2007. Second, responses to open-ended items from the 2008 course evaluations were reviewed for comments regarding the Microbial Jeopardy! session. Both student performance and student perception data support continued use of the tutorials. Quantitative and qualitative data converge to suggest that students like and learn from the interactive, case-based session. The case-based tutorial appears to improve student performance on case-based exam questions. Additionally, students perceived the tutorial as helpful in preparing for exam questions and reviewing the course material. The time commitment for use of the case-based tutorial appears to be justified.
Evaluation of an interactive, case-based review session in teaching medical microbiology
Blewett, Earl L; Kisamore, Jennifer L
2009-01-01
Background Oklahoma State University-Center for Health Sciences (OSU-CHS) has replaced its microbiology wet laboratory with a variety of tutorials, including a case-based interactive session called Microbial Jeopardy!. The question remains whether the time spent by students and faculty in the interactive case-based tutorial is worthwhile. This study was designed to address this question by analyzing both student performance data and students' perceptions regarding the tutorial. Methods Both quantitative and qualitative data were used in the current study. Part One of the study involved assessing student performance using archival records of seven case-based exam questions used in the 2004, 2005, 2006, and 2007 OSU-CHS Medical Microbiology course. Two-sample t-tests for proportions were used to test for significant differences related to tutorial usage. Part Two used both quantitative and qualitative means to assess students' perceptions of the Microbial Jeopardy! session. First, a retrospective survey was administered to students who were enrolled in Medical Microbiology in 2006 or 2007. Second, responses to open-ended items from the 2008 course evaluations were reviewed for comments regarding the Microbial Jeopardy! session. Results Both student performance and student perception data support continued use of the tutorials. Quantitative and qualitative data converge to suggest that students like and learn from the interactive, case-based session. Conclusion The case-based tutorial appears to improve student performance on case-based exam questions. Additionally, students perceived the tutorial as helpful in preparing for exam questions and reviewing the course material. The time commitment for use of the case-based tutorial appears to be justified. PMID:19712473
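The comparison described in the two records above, testing for a difference in the proportion of correct answers between cohorts with and without the tutorial, can be sketched with a two-proportion z-test. The counts below are invented, and the z-test is used here as a stand-in for the paper's "two sample t-tests for proportions".

```python
# Illustrative two-proportion z-test on invented counts; it stands in for the
# paper's two-sample tests for proportions comparing tutorial cohorts.
from statsmodels.stats.proportion import proportions_ztest

correct    = [82, 61]    # students answering a case-based item correctly
n_students = [110, 105]  # cohort sizes: with tutorial, without tutorial
z, p = proportions_ztest(correct, n_students)
print(f"z = {z:.2f}, p = {p:.4f}")
```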
Predicting academic performance of dental students using perception of educational environment.
Al-Ansari, Asim A; El Tantawi, Maha M A
2015-03-01
Greater emphasis on student-centered education means that students' perception of their educational environment is important. The ultimate proof of this importance is its effect on academic performance. The aim of this study was to assess the predictability of dental students' grades, as an indicator of academic performance, from their perceptions of the educational environment. The Dundee Ready Educational Environment Measure (DREEM) questionnaire was used to assess dental students' perceptions of their educational environment at the University of Dammam, Saudi Arabia, in academic year 2012-13. Aggregate grades in courses were collected at the end of the semester and related to levels of perception of the five DREEM domains using regression analysis. The response rate was 87.1% among all students in Years 2-6. As the number of students perceiving excellence in learning increased, the number of students with A grades increased. Perception of an environment with problems in the atmosphere and social life increased the number of students with D and F grades. There was no relation between any of the DREEM domains and past academic performance as measured by GPA. This study concludes that these students' academic performance was affected by various aspects of their perception of the educational environment. Improved perception of learning increased the number of high achievers, whereas increased perception of problems in atmosphere and social life increased the number of low achievers and failing students.
Gadbois, Shannon A; Sturgeon, Ryan D
2011-06-01
Academic self-handicapping (ASH) tendencies, strategies students employ that increase their chances of failure on assessments while protecting self-esteem, are correlated with classroom goal structures and with learners' general self-perceptions and learning strategies. In particular, greater ASH is related to poorer academic performance but has yet to be examined with respect to learners' performance across a series of tests. This research was designed to examine the relationship between students' ASH tendencies and their self-concept clarity, learning strategies, and performance on a series of tests in a university course. A total of 209 (153 female; 56 male) Canadian university psychology students participated in this study. Participants' ASH tendencies, self-concept clarity, approaches to learning, and self-regulatory learning strategies were assessed, along with expected grades and hours of study in the course from which they were recruited. Finally, students' grades were obtained for the three tests in the course from which they were recruited. Students reporting greater self-handicapping tendencies reported lower self-concept clarity, lower academic self-efficacy, greater test anxiety, and more superficial learning strategies, and scored lower on all tests in the course. The relationships of ASH scores and learner variables with performance varied across the three performance indices. In particular, ASH scores were more strongly related to the second and third tests when prior performances were accounted for. ASH scores accounted for a relatively small but significant proportion of variance for all three tests. These results show that ASH is a unique contributing factor in student performance outcomes and may be particularly important after students complete the initial assessment in a course. ©2010 The British Psychological Society.
Effects of a rater training on rating accuracy in a physical examination skills assessment
Weitz, Gunther; Vinzentius, Christian; Twesten, Christoph; Lehnert, Hendrik; Bonnemeier, Hendrik; König, Inke R.
2014-01-01
Background: The accuracy and reproducibility of medical skills assessment is generally low. Rater training has little or no effect. Our knowledge in this field, however, relies on studies involving video ratings of overall clinical performances. We hypothesised that a rater training focussing on the frame of reference could improve accuracy in grading the curricular assessment of a highly standardised physical head-to-toe examination. Methods: Twenty-one raters assessed the performance of 242 third-year medical students. Eleven raters had been randomly assigned to undergo a brief frame-of-reference training a few days before the assessment. 218 encounters were successfully recorded on video and re-assessed independently by three additional observers. Accuracy was defined as the concordance between the raters' grade and the median of the observers' grade. After the assessment, both students and raters filled in a questionnaire about their views on the assessment. Results: Rater training did not have a measurable influence on accuracy. However, trained raters rated significantly more stringently than untrained raters, and their overall stringency was closer to the stringency of the observers. The questionnaire indicated a higher awareness of the halo effect in the trained raters group. Although the self-assessment of the students mirrored the assessment of the raters in both groups, the students assessed by trained raters felt more discontent with their grade. Conclusions: While training had some marginal effects, it failed to have an impact on the individual accuracy. These results in real-life encounters are consistent with previous studies on rater training using video assessments of clinical performances. The high degree of standardisation in this study was not suitable to harmonize the trained raters’ grading. The data support the notion that the process of appraising medical performance is highly individual. A frame-of-reference training as applied does not effectively adjust the physicians' judgement on medical students in real-life assessments. PMID:25489341
Effects of a rater training on rating accuracy in a physical examination skills assessment.
Weitz, Gunther; Vinzentius, Christian; Twesten, Christoph; Lehnert, Hendrik; Bonnemeier, Hendrik; König, Inke R
2014-01-01
The accuracy and reproducibility of medical skills assessment is generally low. Rater training has little or no effect. Our knowledge in this field, however, relies on studies involving video ratings of overall clinical performances. We hypothesised that a rater training focussing on the frame of reference could improve accuracy in grading the curricular assessment of a highly standardised physical head-to-toe examination. Twenty-one raters assessed the performance of 242 third-year medical students. Eleven raters had been randomly assigned to undergo a brief frame-of-reference training a few days before the assessment. 218 encounters were successfully recorded on video and re-assessed independently by three additional observers. Accuracy was defined as the concordance between the raters' grade and the median of the observers' grade. After the assessment, both students and raters filled in a questionnaire about their views on the assessment. Rater training did not have a measurable influence on accuracy. However, trained raters rated significantly more stringently than untrained raters, and their overall stringency was closer to the stringency of the observers. The questionnaire indicated a higher awareness of the halo effect in the trained raters group. Although the self-assessment of the students mirrored the assessment of the raters in both groups, the students assessed by trained raters felt more discontent with their grade. While training had some marginal effects, it failed to have an impact on the individual accuracy. These results in real-life encounters are consistent with previous studies on rater training using video assessments of clinical performances. The high degree of standardisation in this study was not suitable to harmonize the trained raters' grading. The data support the notion that the process of appraising medical performance is highly individual. A frame-of-reference training as applied does not effectively adjust the physicians' judgement on medical students in real-life assessments.
ERIC Educational Resources Information Center
Wolf, Mikyung Kim; Kim, Jinok; Kao, Jenny
2012-01-01
Glossary and reading aloud test items are commonly allowed in many states' accommodation policies for English language learner (ELL) students for large-scale mathematics assessments. However, little research is available regarding the effects of these accommodations on ELL students' performance. Further, no research exists that examines how…
Tutor versus Peer Group Assessment of Student Performance in a Simulation Training Exercise.
ERIC Educational Resources Information Center
Kwan, Kam-por; Leung, Roberta
1996-01-01
Performance in a simulation exercise of 96 third-year college students studying the hotel and tourism industries was assessed separately by teacher and peers using an identical checklist. Although results showed some agreement between teacher and peers, when averaged marks were converted into grades, agreement occurred in under half the cases.…
ERIC Educational Resources Information Center
Reese, De'borah Reese
2017-01-01
The purpose of this quantitative comparative study was to determine the existence or nonexistence of performance pass rate differences of special education middle school students on standardized assessments between pre and post co-teaching eras disaggregated by subject area and school. Co-teaching has altered classroom environments in many ways.…
ERIC Educational Resources Information Center
Grace, Christine Cooper
2017-01-01
This paper explores the potential of incorporating constructs of distributive justice and procedural justice into summative assessment of student learning in higher education. I systematically compare the process used by managers to evaluate employee performance in organizations--performance appraisal (PA)--with processes used by professors to…
ERIC Educational Resources Information Center
Lane, Suzanne; And Others
1995-01-01
Over 5,000 students participated in a study of the dimensionality and stability of the item parameter estimates of a mathematics performance assessment developed for the Quantitative Understanding: Amplifying Student Achievement and Reasoning (QUASAR) Project. Results demonstrate the test's dimensionality and illustrate ways to examine use of the…
ERIC Educational Resources Information Center
Gallant, Dorinda J.
2013-01-01
Early childhood professional organizations support teachers as the best assessors of students' academic, social, emotional, and physical development. This study investigates the predictive nature of teacher ratings of first-grade students' performance on a standards-based curriculum-embedded performance assessment within the context of a state…
ERIC Educational Resources Information Center
Cho, Hyun-Jeong; Kingston, Neal
2013-01-01
The purpose of this case study was to determine teachers' rationales for assigning students with mild disabilities to alternate assessment based on alternate achievement standards (AA-AAS). In interviews, special educators stated that their primary considerations in making the assignments were low academic performance, student use of extended…
The Impact of Participating in a Peer Assessment Activity on Subsequent Academic Performance
ERIC Educational Resources Information Center
Jhangiani, Rajiv S.
2016-01-01
The present study investigates the impact of participation in a peer assessment activity on subsequent academic performance. Students in two sections of an introductory psychology course completed a practice quiz 1 week prior to each of three course exams. Students in the experimental group participated in a five-step double-blind peer assessment…
Terregino, Carol A.; Saks, Norma S.
2010-01-01
Introduction A novel assessment of systems-based practice and practice-based learning and improvement learning objectives, implemented in a first-year patient-centered medicine course, is qualitatively described. Methods Student learning communities were asked to creatively demonstrate a problem and solution for health care delivery. Skits, filmed performances, plays, and documentaries were chosen by the students. Video recordings were reviewed for themes and the presence of course competencies. Results All performances demonstrated not only the index competencies of teamwork and facilitation of the learning of others, but also many other core objectives of the course. The assignment was rated positively by both the faculty and the students, and has been added to the assessment modalities of the course. PMID:20174597
Terregino, Carol A; Saks, Norma S
2010-02-15
A novel assessment of systems-based practice and practice-based learning and improvement learning objectives, implemented in a first-year patient-centered medicine course, is qualitatively described. Student learning communities were asked to creatively demonstrate a problem and solution for health care delivery. Skits, filmed performances, plays, and documentaries were chosen by the students. Video recordings were reviewed for themes and the presence of course competencies. All performances demonstrated not only the index competencies of teamwork and facilitation of the learning of others, but also many other core objectives of the course. The assignment was rated positively by both the faculty and the students, and has been added to the assessment modalities of the course.
Dubosh, Nicole M; Fisher, Jonathan; Lewis, Jason; Ullman, Edward A
2017-06-01
Clerkship directors routinely evaluate medical students using multiple modalities, including faculty assessment of clinical performance and written examinations. Both forms of evaluation often play a prominent role in the final clerkship grade. The degree to which these modalities correlate in an emergency medicine (EM) clerkship is unclear. We sought to correlate faculty clinical evaluations with medical student performance on a written, standardized EM examination of medical knowledge. This is a retrospective study of fourth-year medical students in a 4-week EM elective at one academic medical center. EM faculty performed end-of-shift evaluations of students via a blinded online system using a 5-point Likert scale for 8 domains: data acquisition, data interpretation, medical knowledge base, professionalism, patient care and communication, initiative/reliability/dependability, procedural skills, and overall evaluation. All students completed the National EM M4 Examination in EM. Means, medians, and standard deviations for end-of-shift evaluation scores were calculated, and correlations with examination scores were assessed using Spearman's rank correlation coefficient. Thirty-nine medical students with 224 discrete faculty evaluations were included. The median number of evaluations completed per student was 6. The mean score (±SD) on the examination was 78.6% ± 6.1%. The examination score correlated poorly with faculty evaluations across all 8 domains (ρ = 0.074-0.316). Faculty evaluations of medical students across multiple domains of competency correlate poorly with written examination performance during an EM clerkship. Educators need to consider the limitations of examination scores in assessing students' ability to provide quality patient clinical care. Copyright © 2016 Elsevier Inc. All rights reserved.
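A minimal sketch of the correlation analysis named in the preceding abstract follows, relating mean faculty evaluation scores to written examination scores with Spearman's rank correlation. The numbers are illustrative only, not the study's data.

```python
# Illustrative Spearman rank correlation between mean faculty ratings and
# written exam scores; the values are invented, not the study's data.
import numpy as np
from scipy.stats import spearmanr

faculty_mean = np.array([3.8, 4.1, 4.5, 3.9, 4.7, 4.2, 4.0, 4.4, 3.6, 4.3])
exam_pct     = np.array([74,  81,  79,  70,  85,  76,  83,  72,  78,  80])
rho, p = spearmanr(faculty_mean, exam_pct)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```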
Naughton, Cynthia A; Friesner, Daniel L
2012-05-10
To determine whether a correlation exists between third-year PharmD students' perceived pharmacy knowledge and actual pharmacy knowledge as assessed by the Pharmacy Curricular Outcomes Assessment (PCOA). In 2010 and 2011, the PCOA was administered in a low-stakes environment to third-year pharmacy students at North Dakota State University College of Pharmacy, Nursing, and Allied Sciences (COPNAS). A survey instrument was also administered on which students self-assessed their perceived competencies in each of the core areas covered by the PCOA examination. The pharmacy students rated their competencies slightly higher than average. Performance on the PCOA was similar to but slightly higher than national averages. Correlations between each of the 4 content areas (basic biomedical sciences, pharmaceutical sciences, social/administrative sciences, and clinical sciences) mirrored those reported nationally by the National Association of Boards of Pharmacy (NABP). Student performance on the basic biomedical sciences portion of the PCOA was significantly correlated with students' perceived competencies in the biomedical sciences. No other correlations between actual and perceived competencies were significant. A lack of correlation exists between what students perceive they know and what they actually know in the areas of pharmaceutical science; social, behavioral, and administrative science; and clinical science. Therefore, additional standardized measures are needed to assess curricular effectiveness and provide comparisons among pharmacy programs.
Seizing the Opportunity for Performance Assessment: Resources and State Perspectives
ERIC Educational Resources Information Center
Gutmann, Laura; Jean, Christina; Hunziker, Joey
2017-01-01
This article reports from Stanford University's Innovative Assessments Institute on the development of performance assessment at scale, along with implementation recommendations. An accountability system built on the implementation of performance assessments has the potential to foster deeper and more authentic learning for students and more…
ERIC Educational Resources Information Center
Owusu-Acheaw, M.; Larson, Agatha Gifty
2015-01-01
The study sought to assess students' use of social media and its effect on the academic performance of tertiary institution students in Ghana, with a focus on Koforidua Polytechnic students. A questionnaire was used to collect data. Out of one thousand five hundred and seventy-eight copies of the questionnaire distributed, one thousand five hundred…
ERIC Educational Resources Information Center
Khanehkeshi, Ali; Basavarajappa
2011-01-01
This paper investigates the relationship of academic stress with aggression, depression, and academic performance of college students. Using a random sampling technique, 60 students consisting of boys and girls were selected as students having academic stress. The scale for assessing academic stress (Sinha, Sharma and Mahendra, 2001); the Buss-Perry…
The Relationship between Student Engagement and Academic Performance: Is It a Myth or Reality?
ERIC Educational Resources Information Center
Lee, Jung-Sook
2014-01-01
The author examined the relationship between student engagement and academic performance, using U.S. data of the Program for International Student Assessment 2000. The sample comprised 3,268 fifteen-year-old students from 121 U.S. schools. Multilevel analysis showed that behavioral engagement (defined as effort and perseverance in learning) and…
Where Immigrant Students Succeed: A Comparative Review of Performance and Engagement in PISA 2003
ERIC Educational Resources Information Center
Schleicher, Andreas
2006-01-01
This report examines how immigrant students performed, mainly in mathematics and reading, but also in science and problem-solving skills in the PISA 2003 assessment, both in comparison with native students in their adopted country and relative to other students across all countries covered in the report (the "case countries"). In…
Acting, Accidents and Performativity: Challenging the Hegemonic Good Student in Secondary Schools
ERIC Educational Resources Information Center
Thompson, Greg
2010-01-01
Current educational practice tends to ascribe a limiting vision of the good student as one who is well behaved, performs well in assessments and demonstrates values in keeping with dominant expectations. This paper argues that this vision of the good student is antithetical to the lived experience of students as they negotiate their positionality…
ERIC Educational Resources Information Center
Gok, Tolga; Gok, Ozge
2016-01-01
The aim of this research was to investigate the effects of peer instruction on learning strategies, problem solving performance, and conceptual understanding of college students in a general chemistry course. The research was performed with students enrolled in experimental and control groups of a chemistry course. Students in the…
ERIC Educational Resources Information Center
Arikan, Serkan
2014-01-01
There are many studies that focus on factors affecting achievement. However, there is limited research that used student characteristics indices reported by the Programme for International Student Assessment (PISA). Therefore, this study investigated the predictive effects of student characteristics on mathematics performance of Turkish students.…
ERIC Educational Resources Information Center
Cresswell, John
2004-01-01
The primary focus of this report is to examine the effect that immigrant status and home language background may have on the performance of Australian students who participated in the OECD/Programme for International Student Assessment (PISA 2000). Approximately 5,477 students from 231 schools across Australia participated in the study. In this…
Media Matter: The Effect of Medium of Presentation on Student's Recognition of Histopathology.
Telang, Ajay; Jong, Nynke De; Dalen, Jan Van
2016-12-01
Pathology teaching has undergone transformation with the introduction of virtual microscopy as a teaching and learning tool. The aims were to assess whether dental students can identify histopathology irrespective of the medium of presentation and whether the medium affects students' oral pathology case-based learning scores. The perception of students towards a "hybrid" approach to teaching and learning histopathology in oral pathology was also assessed. A controlled experiment was conducted on year 4 and year 5 dental student groups using a performance test and a questionnaire survey. A response rate of 81% was noted for the performance test as well as the questionnaire survey. Results show a significant effect of the medium on student performance, with virtual microscopy bringing out the best performance across all student groups in case-based learning scenarios. The order of preference for media was found to be virtual microscopy, followed by photomicrographs and light microscopy. However, 94% of students still prefer the present hybrid system for teaching and learning of oral pathology. The study shows that identification of histopathology by students is dependent on the medium and that the type of medium has a significant effect on performance. Virtual microscopy is strongly perceived as a useful tool for learning and thus brings out the best performance; however, the hybrid approach still remains the most preferred approach for histopathology learning.
A Student Assessment Tool for Standardized Patient Simulations (SAT-SPS): Psychometric analysis.
Castro-Yuste, Cristina; García-Cabanillas, María José; Rodríguez-Cornejo, María Jesús; Carnicer-Fuentes, Concepción; Paloma-Castro, Olga; Moreno-Corral, Luis Javier
2018-05-01
The evaluation of the level of clinical competence acquired by the student is a complex process that must meet various requirements to ensure its quality. The psychometric analysis of the data collected by the assessment tools used is a fundamental aspect to guarantee the student's competence level. To conduct a psychometric analysis of an instrument which assesses clinical competence in nursing students at simulation stations with standardized patients in OSCE-format tests. The construct of clinical competence was operationalized as a set of observable and measurable behaviors, measured by the newly-created Student Assessment Tool for Standardized Patient Simulations (SAT-SPS), which was comprised of 27 items. The categories assigned to the items were 'incorrect or not performed' (0), 'acceptable' (1), and 'correct' (2). 499 nursing students. Data were collected by two independent observers during the assessment of the students' performance at a four-station OSCE with standardized patients. Descriptive statistics were used to summarize the variables. The difficulty levels and floor and ceiling effects were determined for each item. Reliability was analyzed using internal consistency and inter-observer reliability. The validity analysis was performed considering face validity, content and construct validity (through exploratory factor analysis), and criterion validity. Internal reliability and inter-observer reliability were higher than 0.80. The construct validity analysis suggested a three-factor model accounting for 37.1% of the variance. These three factors were named 'Nursing process', 'Communication skills', and 'Safe practice'. A significant correlation was found between the scores obtained and the students' grades in general, as well as with the grades obtained in subjects with clinical content. The assessment tool has proven to be sufficiently reliable and valid for the assessment of the clinical competence of nursing students using standardized patients. This tool has three main components: the nursing process, communication skills, and safety management. Copyright © 2018 Elsevier Ltd. All rights reserved.
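As a hedged illustration of the internal-consistency analysis reported above, the sketch below computes Cronbach's alpha for a simulated 27-item instrument scored 0, 1, or 2 per item. The simulated scores are assumptions for the example and are not the SAT-SPS data.

```python
# Illustrative internal-consistency check (Cronbach's alpha) for a 27-item,
# 0/1/2-scored instrument; the item scores are simulated, not SAT-SPS data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_students, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(499, 1))                    # latent competence
noise = rng.normal(scale=0.8, size=(499, 27))
scores = np.clip(np.round(1 + ability + noise), 0, 2)  # 27 items scored 0-2
print("Cronbach's alpha:", round(cronbach_alpha(scores), 2))
```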
Effect of Computer-Assisted Learning on Students' Dental Anatomy Waxing Performance.
Kwon, So Ran; Hernández, Marcela; Blanchette, Derek R; Lam, Matthew T; Gratton, David G; Aquilino, Steven A
2015-09-01
The aim of this study was to evaluate the impact of computer-assisted learning on first-year dental students' waxing abilities and self-evaluation skills. Additionally, this study sought to determine how well digital evaluation software performed compared to faculty grading with respect to students' technical scores on a practical competency examination. First-year students at one U.S. dental school were assigned to one of three groups: control (n=40), E4D Compare (n=20), and Sirona prepCheck (n=19). Students in the control group were taught by traditional teaching methodologies, and the technology-assisted groups received both traditional training and supplementary feedback from the corresponding digital system. Five outcomes were measured: visual assessment score, self-evaluation score, and digital assessment scores at 0.25 mm, 0.30 mm, and 0.35 mm tolerance. The scores from visual assessment and self-evaluation were examined for differences among groups using the Kruskal-Wallis test. Correlation between the visual assessment and digital scores was measured using Pearson and Spearman rank correlation coefficients. At completion of the course, students were asked to complete a survey on the use of these digital technologies. All 79 students in the first-year class participated in the study, for a 100% response rate. The results showed that the visual assessment and self-evaluation scores did not differ among groups (p>0.05). Overall correlations between visual and digital assessment scores were modest though statistically significant (5% level of significance). Analysis of survey responses completed by students in the technology groups showed that profiles for the two groups were similar and not favorable towards digital technology. The study concluded that technology-assisted training did not affect these students' waxing performance or self-evaluation skills and that visual scores given by faculty and digital assessment scores correlated moderately.
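A brief sketch of the Kruskal-Wallis comparison mentioned above, applied to fabricated waxing scores for the three groups (control, E4D Compare, prepCheck). Group sizes and values are invented for illustration and do not reflect the study's results.

```python
# Illustrative Kruskal-Wallis comparison of waxing scores across three groups;
# the scores are fabricated, not the study's data.
from scipy.stats import kruskal

control   = [78, 82, 75, 90, 68, 85, 73, 88]
e4d       = [80, 79, 91, 84, 77, 86]
prepcheck = [74, 83, 88, 72, 81, 79]
h, p = kruskal(control, e4d, prepcheck)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")
```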
Hirokawa, Randy Y; Daub, Katharyn; Lovell, Eileen; Smith, Sarah; Davis, Alice; Beck, Christine
2012-11-01
This study examined the relationship between communication and nursing students' team performance by determining whether variations in team performance are related to differences in communication regarding five task-relevant functions: assessment, diagnosis, planning, implementation, and evaluation. The study results indicate a positive relationship between nursing students' team performance and comments focused on the implementation of treatment(s) and the evaluation of treatment options. A negative relationship between nursing students' team performance and miscellaneous comments made by team members was also observed. Copyright 2012, SLACK Incorporated.
The Impact of Feedback as Formative Assessment on Student Performance
ERIC Educational Resources Information Center
Owen, Leanne
2016-01-01
This article provides an evaluation of the redesign of a research methods course intended to enhance students' learning for understanding and transfer. Drawing on principles of formative assessment from the existing academic literature, the instructor introduced a number of increasingly complex low-stakes assignments for students to complete prior…
Cardiovascular Risk Factors among College Students: Knowledge, Perception, and Risk Assessment
ERIC Educational Resources Information Center
Tran, Dieu-My T.; Zimmerman, Lani M.; Kupzyk, Kevin A.; Shurmur, Scott W.; Pullen, Carol H.; Yates, Bernice C.
2017-01-01
Objective: To assess college students' knowledge and perception of cardiovascular risk factors and to screen for their cardiovascular risks. Participants: The final sample that responded to recruitment consisted of 158 college students from a midwestern university. Methods: A cross-sectional, descriptive study was performed using convenience…
ERIC Educational Resources Information Center
Bouck, Emily C.; Joshi, Gauri S.; Johnson, Linley
2013-01-01
This study assessed whether students with and without disabilities used calculators (four-function, scientific, or graphing) to solve mathematics assessment problems and whether using calculators improved their performance. Participants were sixth- and seventh-grade students educated with either National Science Foundation (NSF)-funded or traditional…
Formative Assessment and Writing: A Meta-Analysis
ERIC Educational Resources Information Center
Graham, Steve; Hebert, Michael; Harris, Karen R.
2015-01-01
To determine whether formative writing assessments that are directly tied to everyday classroom teaching and learning enhance students' writing performance, we conducted a meta-analysis of true and quasi-experiments conducted with students in grades 1 to 8. We found that feedback to students about writing from adults, peers, self, and computers…
ERIC Educational Resources Information Center
Guzzomi, Andrew L.; Male, Sally A.; Miller, Karol
2017-01-01
Engineering educators should motivate and support students in developing not only technical competence but also professional competence including commitment to excellence. We developed an authentic assessment to improve students' understanding of the importance of "perfection" in engineering--whereby 50% good enough will not be…
ERIC Educational Resources Information Center
Barlow, Dudley
2004-01-01
The Michigan Educational Assessment Program (MEAP) test scores for 2004 have been announced. The tests, administered to juniors, attempt to assess how well students perform in mathematics, reading, writing, science, and social studies. There is a good deal at stake here. For students, it means money: Any student who meets or exceeds the state's…
ERIC Educational Resources Information Center
Pekrun, Reinhard; Goetz, Thomas; Frenzel, Anne C.; Barchfeld, Petra; Perry, Raymond P.
2011-01-01
Aside from test anxiety scales, measurement instruments assessing students' achievement emotions are largely lacking. This article reports on the construction, reliability, internal validity, and external validity of the Achievement Emotions Questionnaire (AEQ) which is designed to assess various achievement emotions experienced by students in…
Exploratory Evaluation of Audio Email Technology in Formative Assessment Feedback
ERIC Educational Resources Information Center
Macgregor, George; Spiers, Alex; Taylor, Chris
2011-01-01
Formative assessment generates feedback on students' performance, thereby accelerating and improving student learning. Anecdotal evidence gathered by a number of evaluations has hypothesised that audio feedback may be capable of enhancing student learning more than other approaches. In this paper we report on the preliminary findings of a…
ERIC Educational Resources Information Center
National Center for Education Statistics, 2011
2011-01-01
This report provides a detailed portrait of Hispanic and White academic achievement gaps and how students' performance has changed over time at both the national and state levels. The report presents achievement gaps using reading and mathematics assessment data from the National Assessment of Educational Progress (NAEP) for the 4th- and 8th-grade…
ERIC Educational Resources Information Center
Shivraj, Pooja
2017-01-01
The Programme for International Student Assessment (PISA) has been administered to 15-year-olds every three years since 2000. Since then, the U.S. has performed below average in mathematics, with no significant changes in performance. The objective of this study was to examine the alignment of the content students in the U.S. are assessed on in…
Adeniyi, Olasupo Stephen; Ogli, Sunday Adakole; Ojabo, Cecelia Omaile; Musa, Danladi Ibrahim
2013-01-01
Background: This study was carried out to assess the relationship between the various assessment parameters, viz. continuous assessment (CA), multiple-choice questions (MCQ), essay, practical, and oral, and overall performance in the first professional examination in Physiology. Materials and Methods: The results of all 244 students who sat for the examination over 4 years were used. The CA, MCQ, essay, practical, oral, and overall performance scores were obtained. All scores were scaled to 100% to give each parameter equal weighting. Results: Analysis showed that the average overall performance was 50.8 ± 5.3. The best average performance was in practical (55.5 ± 9.1), while the lowest was in MCQ (44.1 ± 7.8). In the study, 81.1% of students passed orals, 80.3% passed practical, 72.5% passed CA, 58.6% passed essay, 22.5% passed MCQ, and 71.7% of students passed on overall performance. All assessment parameters correlated significantly with overall performance. Continuous assessment had the strongest correlation (r = 0.801, P = 0.000), while oral had the weakest correlation (r = 0.277, P = 0.000) with overall performance. Essay was the best predictor of overall performance (β = 0.421, P = 0.000), followed by MCQ (β = 0.356, P = 0.000), while practical was the weakest predictor of performance (β = 0.162, P = 0.000). Conclusion: We suggest that the department should uphold the principle of continuous assessment and that more effort be made in the design of MCQs so that performance can improve. PMID:24403705
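A sketch of the kind of analysis reported above, using synthetic data rather than the cohort's actual records: standardizing the component scores and regressing overall performance on them yields beta weights comparable to those quoted, alongside the zero-order correlations. Variable names and simulated values are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 244  # sample size from the abstract
df = pd.DataFrame({
    "ca":        rng.normal(55, 10, n),
    "mcq":       rng.normal(44, 8, n),
    "essay":     rng.normal(50, 9, n),
    "practical": rng.normal(55, 9, n),
    "oral":      rng.normal(56, 10, n),
})
# Overall score as an illustrative combination of the components plus noise
df["overall"] = df.mean(axis=1) + rng.normal(0, 3, n)

# Zero-order correlations of each component with overall performance
print(df.corr()["overall"].round(3))

# Standardize everything so regression coefficients are beta weights
z = (df - df.mean()) / df.std(ddof=0)
X = sm.add_constant(z[["ca", "mcq", "essay", "practical", "oral"]])
print(sm.OLS(z["overall"], X).fit().params.round(3))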
Progress and Promise: Results from the Boston Pilot Schools
ERIC Educational Resources Information Center
Tung, Rosann; Ouimette, Monique; Rugen, Leah
2006-01-01
New research conducted by Boston's Center for Collaborative Education documents significant achievement by students who attend the city's Pilot Schools. Pilot School students are performing better than the district averages across every indicator of student engagement and performance, including the statewide standardized assessment (MCAS). In…
Monitoring Trends in Student Satisfaction
ERIC Educational Resources Information Center
Grebennikov, Leonid; Shah, Mahsood
2013-01-01
Over the last decade, the assessment of student experience has gained significant prominence in Australian higher education. Universities conduct internal surveys in order to identify which of their services students rate higher or lower on importance and performance. Thus, institutions can promote highly performing areas and work on those needing…
Communicating Learning Outcomes and Student Performance through the Student Transcript
ERIC Educational Resources Information Center
Kenyon, George; Barnes, Cynthia
2010-01-01
The university accreditation process now puts more emphasis on self assessment. This change requires universities to identify program objectives, performance indicators, and areas for improvement. Many accrediting institutions are requiring that institutions communicate clearly to constituents: 1) what learning outcomes were achieved by students,…
ERIC Educational Resources Information Center
Pereira, Maria A.
2011-01-01
This study examined the strength and the direction of the relationships between student (i.e., socioeconomic status, attendance, and gender) and school variables (i.e., formative assessment usage and ASI classification) found in the extant literature to influence student achievement in language arts and mathematics. Analyses were conducted using…
ERIC Educational Resources Information Center
Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn
2016-01-01
Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills. The Quantitative Skills Assessment of Science Students (QSASS) was the result,…
The learning of aquaponics practice in university
NASA Astrophysics Data System (ADS)
Agustina, T. W.; Rustaman, N. Y.; Riandi; Purwianingsih, W.
2018-05-01
This study aims to describe students' performance in building aquaponic technology and to assess the product and packaging of the harvested kale. The aquaponic practice used a STREAM (Science, Technology, Religion, Art, Mathematics) approach. The method was an explanatory sequential mixed method. The research was conducted with one sixth-semester class of Biology Education students; the sample of 49 students was chosen purposively. The study instruments were a student worksheet, observation sheets, performance and product assessment rubrics, interview sheets, and field notes. The performance rubric for constructing the aquaponic system covered the product itself, the cultivation criteria, and the packing method for the kale. The interview rubric addressed the constraints students faced in constructing the aquaponic system. Based on the results, most students' performance in designing the technology was categorized as sufficient to good. Almost all students produced a very good kale harvest. Most students produced kale packaging that was categorized as sufficient. The implication of this research is that learning aquaponics with the STREAM approach can develop students' performance and product capabilities.
David, Michael C; Eley, Diann S; Schafer, Jennifer; Davies, Leo
2016-01-01
The primary aim of this study was to assess the predictive validity of cumulative grade point average (GPA) for performance in the International Foundations of Medicine (IFOM) Clinical Science Examination (CSE). A secondary aim was to develop a strategy for identifying students at risk of performing poorly in the IFOM CSE as determined by the National Board of Medical Examiners' International Standard of Competence. Final year medical students from an Australian university medical school took the IFOM CSE as a formative assessment. Measures included overall IFOM CSE score as the dependent variable, cumulative GPA as the predictor, and the factors age, gender, year of enrollment, international or domestic status of student, and language spoken at home as covariates. Multivariable linear regression was used to measure predictor and covariate effects. Optimal thresholds of risk assessment were based on receiver-operating characteristic (ROC) curves. Cumulative GPA (nonstandardized regression coefficient [B]: 81.83; 95% confidence interval [CI]: 68.13 to 95.53) and international status (B: -37.40; 95% CI: -57.85 to -16.96) from 427 students were found to be statistically significant predictors of IFOM CSE performance, with higher GPA associated with higher scores and international status with lower scores. Cumulative GPAs of 5.30 (area under ROC [AROC]: 0.77; 95% CI: 0.72 to 0.82) and 4.90 (AROC: 0.72; 95% CI: 0.66 to 0.78) were identified as being thresholds of significant risk for domestic and international students, respectively. Using cumulative GPA as a predictor of IFOM CSE performance and accommodating for differences in international status, it is possible to identify students who are at risk of failing to satisfy the National Board of Medical Examiners' International Standard of Competence.
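The two-step approach described above (a multivariable linear model for IFOM CSE scores, then ROC analysis to pick a GPA threshold for at-risk students) might look roughly like the following sketch. All numbers, the pass standard of 680, and the variable names are hypothetical; only the sample size and the direction of the GPA and international-status effects echo the abstract.

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
n = 427  # sample size from the abstract
gpa = rng.uniform(4.0, 7.0, n)          # hypothetical cumulative GPA scale
intl = rng.integers(0, 2, n)            # 1 = international student
ifom = 300 + 80 * gpa - 37 * intl + rng.normal(0, 60, n)  # synthetic scores

# Multivariable linear regression of IFOM CSE score on GPA and status
X = sm.add_constant(np.column_stack([gpa, intl]))
print(sm.OLS(ifom, X).fit().params)     # intercept, GPA effect, status effect

# Flag students below a hypothetical pass standard, then pick a GPA cut-off
at_risk = (ifom < 680).astype(int)
fpr, tpr, thresholds = roc_curve(at_risk, -gpa)   # lower GPA -> higher risk
print("AUC:", round(roc_auc_score(at_risk, -gpa), 2))
best = np.argmax(tpr - fpr)                       # Youden's J statistic
print("Suggested GPA threshold:", round(-thresholds[best], 2))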
A yoga intervention for music performance anxiety in conservatory students.
Stern, Judith R S; Khalsa, Sat Bir S; Hofmann, Stefan G
2012-09-01
Music performance anxiety can adversely affect musicians. There is a need for additional treatment strategies, especially those that might be more acceptable to musicians than existing therapies. This pilot study examined the effectiveness of a 9-week yoga practice on reducing music performance anxiety in undergraduate and graduate music conservatory students, including both vocalists and instrumentalists. The intervention consisted of fourteen 60-minute yoga classes approximately twice a week and a brief daily home practice. Of the 24 students enrolled in the study, 17 attended the post-intervention assessment. Participants who completed the measures at both pre- and post-intervention assessments showed large decreases in music performance anxiety as well as in trait anxiety. Improvements were sustained at 7- to 14-month follow-up. Participants generally provided positive comments about the program and its benefits. This study suggests that yoga is a promising intervention for music performance anxiety in conservatory students and therefore warrants further research.
Chang, Anna; Boscardin, Christy; Chou, Calvin L; Loeser, Helen; Hauer, Karen E
2009-10-01
The purpose was to determine which assessment measures identify medical students at risk of failing a clinical performance examination (CPX). The study used a retrospective, multiyear case-control design with contingency table analysis (n = 149). We identified two predictors of CPX failure in patient-physician interaction skills: low clerkship ratings (odds ratio 1.79, P = .008) and student progress review for communication or professionalism concerns (odds ratio 2.64, P = .002). No assessments predicted CPX failure in clinical skills. Performance concerns in communication and professionalism identify students at risk of failing the patient-physician interaction portion of a CPX. This correlation suggests that both faculty and standardized patients can detect noncognitive traits predictive of failing performance. Early identification of these students may allow for development of a structured supplemental curriculum with increased opportunities for practice and feedback. The lack of predictors in the clinical skills portion suggests limited faculty observation or feedback.
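A contingency-table odds ratio of the sort reported above can be computed from a 2x2 table; the counts below are made up for illustration and do not come from the study.

import numpy as np
from scipy import stats

# Rows: flagged by progress review vs. not; columns: failed vs. passed the CPX
table = np.array([[12, 38],    # flagged for communication/professionalism concerns
                  [ 9, 90]])   # not flagged (all counts are hypothetical)
odds_ratio, p_value = stats.fisher_exact(table)
print(f"Odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")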
Evaluation of a Modified Debate Exercise Adapted to the Pedagogy of Team-Based Learning
Yang, Haoshu; Gupta, Vasudha
2018-01-01
Objective. To assess the impact of a debate exercise on self-reported evidence of student learning in literature evaluation, evidence-based decision making, and oral presentation. Methods. Third-year pharmacy students in a required infectious disease therapeutics course participated in a modified debate exercise that included a reading assignment and readiness assessment tests consistent with team-based learning (TBL) pedagogy. Peer and faculty assessment of student learning was accomplished with a standardized rubric. A pre- and post-debate survey was used to assess self-reported perceptions of abilities to perform skills outlined by the learning objectives. Results. The average individual readiness assessment score was 93.5% and all teams scored 100% on their team readiness assessments. Overall student performance on the debates was also high with an average score of 88.2% prior to extra credit points. Of the 95 students, 88 completed both pre- and post-surveys (93% participation rate). All learning objectives were associated with a statistically significant difference between pre- and post-debate surveys with the majority of students reporting an improvement in self-perceived abilities. Approximately two-thirds of students enjoyed the debates exercise and believed it improved their ability to make and defend clinical decisions. Conclusion. A debate format adapted to the pedagogy of TBL was well-received by students, documented high achievement in assessment of skills, and improved students’ self-reported perceptions of abilities to evaluate the literature, develop evidence-based clinical decisions, and deliver an effective oral presentation.
Trends in computer applications in science assessment
NASA Astrophysics Data System (ADS)
Kumar, David D.; Helgeson, Stanley L.
1995-03-01
Seven computer applications to science assessment are reviewed. Conventional test administration includes record keeping, grading, and managing test banks. Multiple-choice testing involves forced selection of an answer from a menu, whereas constructed-response testing involves options for students to present their answers within a set standard deviation. Adaptive testing attempts to individualize the test to minimize the number of items and time needed to assess a student's knowledge. Figural response testing assesses science proficiency in pictorial or graphic mode and requires the student to construct a mental image rather than selecting a response from a multiple-choice menu. Simulations have been found useful for performance assessment on a large-scale basis in part because they make it possible to independently specify different aspects of a real experiment. An emerging approach to performance assessment is solution pathway analysis, which permits the analysis of the steps a student takes in solving a problem. Virtually all computer-based testing systems improve the quality and efficiency of record keeping and data analysis.
Implementing Performance Assessment in the Classroom.
ERIC Educational Resources Information Center
Brualdi, Amy
1999-01-01
Provides advice on implementing performance assessment in the classroom. Outlines the basic steps from defining the purpose of the assessment to giving the student feedback. Advice is also given about scoring rubrics. (SLD)
Smeding, Annique; Darnon, Céline; Souchal, Carine; Toczek-Capelle, Marie-Christine; Butera, Fabrizio
2013-01-01
In spite of official intentions to reduce inequalities at University, students' socio-economic status (SES) is still a major determinant of academic success. The literature on the dual function of University suggests that University serves not only an educational function (i.e., to improve students' learning), but also a selection function (i.e., to compare people, and orient them towards different positions in society). Because current assessment practices focus on the selection more than on the educational function, their characteristics fit better with norms and values shared by dominant high-status groups and may favour high-SES students over low-SES students in terms of performances. A focus on the educational function (i.e., mastery goals), instead, may support low-SES students' achievement, but empirical evidence is currently lacking. The present research set out to provide such evidence and tested, in two field studies and a randomised field experiment, the hypothesis that focusing on University's educational function rather than on its selection function may reduce the SES achievement gap. Results showed that a focus on learning, mastery-oriented goals in the assessment process reduced the SES achievement gap at University. For the first time, empirical data support the idea that low-SES students can perform as well as high-SES students if they are led to understand assessment as part of the learning process, a way to reach mastery goals, rather than as a way to compare students to each other and select the best of them, resulting in performance goals. This research thus provides a theoretical framework to understand the differential effects of assessment on the achievement of high and low-SES students, and paves the way toward the implementation of novel, theory-driven interventions to reduce the SES-based achievement gap at University.
Exploring the Utility of a Virtual Performance Assessment
ERIC Educational Resources Information Center
Clarke-Midura, Jody; Code, Jillianne; Zap, Nick; Dede, Chris
2011-01-01
With funding from the Institute of Education Sciences (IES), the Virtual Performance Assessment project at the Harvard Graduate School of Education is developing and studying the feasibility of immersive virtual performance assessments (VPAs) to assess scientific inquiry of middle school students as a standardized component of an accountability…
ERIC Educational Resources Information Center
Hillstrom, Crowley
2013-01-01
The Minnesota Department of Education has collected Minnesota Comprehensive Assessments (MCA) results on every American Indian student who has taken the tests. This information has been made available so communities and parents can assess how their districts, schools, and students are performing based upon MCA proficiency criteria. Prior to this…
ERIC Educational Resources Information Center
Cawthon, Stephanie; Leppo, Rachel
2013-01-01
The authors conducted a qualitative meta-analysis of the research on assessment accommodations for students who are deaf or hard of hearing. There were 16 identified studies that analyzed the impact of factors related to student performance on academic assessments across different educational settings, content areas, and types of assessment…
ERIC Educational Resources Information Center
Deplazes, Svetlana P.
2014-01-01
The purpose of this study was to examine the overall level of student achievement on the 2012 Kansas History-Government Assessment in Grades 6, 8, and high school, with major emphasis on the subject area of economics. It explored four specific research questions in order to: (1) determine the level of student knowledge of assessed economic…
Hammoud, Maya M; Morgan, Helen K; Edwards, Mary E; Lyon, Jennifer A; White, Casey
2012-01-01
Purpose: To determine if video review of student performance during patient encounters is an effective tool for medical student learning. Methods: Multiple bibliographic databases that include medical, general health care, education, psychology, and behavioral science literature were searched for the following terms: medical students, medical education, undergraduate medical education, education, self-assessment, self-evaluation, self-appraisal, feedback, videotape, video recording, televised, and DVD. The authors examined all abstracts resulting from this search and reviewed the full text of the relevant articles as well as additional articles identified in the reference lists of the relevant articles. Studies were classified by year of student (preclinical or clinical) and study design (controlled or non-controlled). Results: A total of 67 articles met the final search criteria and were fully reviewed. Most studies were non-controlled and performed in the clinical years. Although the studies were quite variable in quality, design, and outcomes, in general video recording of performance and subsequent review by students with expert feedback had positive outcomes in improving feedback and ultimate performance. Video review with self-assessment alone was not found to be generally effective, but when linked with expert feedback it was superior to traditional feedback alone. Conclusion: There are many methods for integrating effective use of video-captured performance into a program of learning. We recommend combining student self-assessment with feedback from faculty or other trained individuals for maximum effectiveness. We also recommend additional research in this area. PMID:23761999
ERIC Educational Resources Information Center
Garcia, Lucy; Nussbaum, Miguel; Preiss, David D.
2011-01-01
The main purpose of this study was to assess whether seventh-grade students use of information and communication technology (ICT) was related to performance on working memory tasks. In addition, the study tested whether the relationship between ICT use and performance on working memory tasks interacted with seventh-grade students' socioeconomic…
ERIC Educational Resources Information Center
Arikan, Serkan; van de Vijver, Fons J. R.; Yagmur, Kutlay
2018-01-01
We examined Differential Item Functioning (DIF) and the size of cross-cultural performance differences in the Programme for International Student Assessment (PISA) 2012 mathematics data before and after application of propensity score matching. The mathematics performance of Indonesian, Turkish, Australian, and Dutch students on released items was…
ERIC Educational Resources Information Center
Andreou, Georgia; Riga, Asimina; Papayiannis, Nikolaos
2016-01-01
The present study investigates whether the use of ICTs improves the writing performance of students with ADHD (Attention Deficit Hyperactivity Disorder). It also examines whether gender affects performance. A number of ADHD students were selected and were assessed for their use of a combination of distinct educational tools. Divided into two…
Lemma, Seblewengel; Berhane, Yemane; Worku, Alemayehu; Gelaye, Bizu; Williams, Michelle A
2014-05-01
This study assessed the association of sleep quality with academic performance among university students in Ethiopia. This cross-sectional study of 2,173 college students (471 female and 1,672 male) was conducted in two universities in Ethiopia. Students were selected into the study using a multistage sampling procedure, and data were collected through a self-administered questionnaire. Sleep quality was assessed using the Pittsburgh Sleep Quality Index, and academic performance was based on self-reported cumulative grade point average. Student's t-test, analysis of variance, and multiple linear regression were used to evaluate associations. We found that students with better sleep quality scores achieved better academic performance (P = 0.001), while sleep duration was not associated with academic performance in the final model. Our study underscores the importance of sleep quality for academic performance. Future studies should identify possible factors that influence sleep quality beyond the academic environment repeatedly reported in other literature. It is imperative to design and implement appropriate interventions to improve sleep quality in light of the current body of evidence to enhance academic success in the study setting.
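A minimal sketch, with simulated values and hypothetical variable names, of a multiple linear regression of self-reported GPA on the Pittsburgh Sleep Quality Index (PSQI) global score and covariates, in the spirit of the analysis summarized above.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2173  # sample size from the abstract
data = pd.DataFrame({
    "psqi": rng.integers(0, 19, n),          # global PSQI score; higher = poorer sleep
    "sleep_hours": rng.normal(6.5, 1.2, n),
    "female": rng.integers(0, 2, n),
})
data["gpa"] = 3.2 - 0.02 * data["psqi"] + rng.normal(0, 0.4, n)  # synthetic GPA

fit = smf.ols("gpa ~ psqi + sleep_hours + female", data=data).fit()
print(fit.summary().tables[1])   # coefficient table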
Lakshminarayan, Nagesh; Potdar, Shrudha; Reddy, Siddana Goud
2013-04-01
Procrastination, generally defined as a voluntary, irrational delay of behavior, is a prevalent phenomenon among college students throughout the world and occurs at alarmingly high rates. For this study, a survey was conducted of 209 second-, third-, and fourth-year undergraduate dental students of Bapuji Dental College and Hospital, Davangere, India, to identify the relationship between their level of procrastination and academic performance. A sixteen-item questionnaire was used to assess the level of procrastination among these students. Data related to their academic performance were also collected. Spearman's correlation coefficient test was used to assess the relationship between procrastination and academic performance. It showed a negative correlation of -0.63 with a significance level of p<0.01 (two-tailed test), indicating that students who showed high procrastination scores performed below average in their academics. In addition, analysis with the Mann-Whitney U test found a significant difference in procrastination scores between the two gender groups (p<0.05). Hence, among the Indian undergraduate dental students evaluated in this study, it appeared that individuals with above average and average academic performance had lower scores of procrastination and vice versa.
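The two tests named above (Spearman's rank correlation and the Mann-Whitney U test) are easy to reproduce on simulated data; the scores below are hypothetical and only the sample size follows the abstract.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 209  # sample size from the abstract
procrastination = rng.integers(16, 81, n)   # total on a hypothetical 16-item scale
marks = 75 - 0.4 * procrastination + rng.normal(0, 6, n)  # synthetic marks
gender = rng.integers(0, 2, n)              # 0 = male, 1 = female (hypothetical coding)

rho, p_rho = stats.spearmanr(procrastination, marks)
u_stat, p_u = stats.mannwhitneyu(procrastination[gender == 0],
                                 procrastination[gender == 1])
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
print(f"Mann-Whitney U = {u_stat:.0f} (p = {p_u:.3g})")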
Does learning style influence academic performance in different forms of assessment?
Wilkinson, Tracey; Boohan, Mairead; Stevenson, Michael
2014-03-01
Educational research on learning styles has been conducted for some time, initially within the field of psychology. Recent research has widened to include more diverse disciplines, with greater emphasis on application. Although there are numerous instruments available to measure several different dimensions of learning style, it is generally accepted that styles differ, although the qualities of more than one style may be inherent in any one learner. But do these learning styles have a direct effect on student performance in examinations, specifically in different forms of assessment? For this study, hypotheses were formulated suggesting that academic performance is influenced by learning style. Using the Honey and Mumford Learning Style Questionnaire, learning styles of a cohort of first year medical and dental students at Queen's University Belfast were assessed. Pearson correlation was performed between the score for each of the four learning styles and the student examination results in a variety of subject areas (including anatomy) and in different types of assessments - single best answer, short answer questions and Objective Structured Clinical Examinations. In most of the analyses, there was no correlation between learning style and result and in the few cases where the correlations were statistically significant, they generally appeared to be weak. It seems therefore from this study that although the learning styles of students vary, they have little effect on academic performance, including in specific forms of assessment.
Johnsen, David C; Lipp, Mitchell J; Finkelstein, Michael W; Cunningham-Ford, Marsha A
2012-12-01
Patient-centered care involves an inseparable set of knowledge, abilities, and professional traits on the part of the health care provider. For practical reasons, health professions education is segmented into disciplines or domains like knowledge, technical skills, and critical thinking, and the culture of dental education is weighted toward knowledge and technical skills. Critical thinking, however, has become a growing presence in dental curricula. To guide student learning and assess performance in critical thinking, guidelines have been developed over the past several decades in the educational literature. Prominent among these guidelines are the following: engage the student in multiple situations/exercises reflecting critical thinking; for each exercise, emulate the intended activity for validity; gain agreement of faculty members across disciplines and curriculum years on the learning construct, application, and performance assessment protocol for reliability; and use the same instrument to guide learning and assess performance. The purposes of this article are 1) to offer a set of concepts from the education literature potentially helpful to guide program design or corroborate existing programs in dental education; 2) to offer an implementation model consolidating these concepts as a guide for program design and execution; 3) to cite specific examples of exercises and programs in critical thinking in the dental education literature analyzed against these concepts; and 4) to discuss opportunities and challenges in guiding student learning and assessing performance in critical thinking for dentistry.
Riese, Alison; Rappaport, Leah; Alverson, Brian; Park, Sangshin; Rockney, Randal M
2017-06-01
Clinical performance evaluations are major components of medical school clerkship grades. But are they sufficiently objective? This study aimed to determine whether student and evaluator gender is associated with assessment of overall clinical performance. This was a retrospective analysis of 4,272 core clerkship clinical performance evaluations by 829 evaluators of 155 third-year students, within the Alpert Medical School grading database for the 2013-2014 academic year. Overall clinical performance, assessed on a three-point scale (meets expectations, above expectations, exceptional), was extracted from each evaluation, as well as evaluator gender, age, training level, department, student gender and age, and length of observation time. Hierarchical ordinal regression modeling was conducted to account for clustering of evaluations. Female students were more likely to receive a better grade than males (adjusted odds ratio [AOR] 1.30, 95% confidence interval [CI] 1.13-1.50), and female evaluators awarded lower grades than males (AOR 0.72, 95% CI 0.55-0.93), adjusting for department, observation time, and student and evaluator age. The interaction between student and evaluator gender was significant (P = .03), with female evaluators assigning higher grades to female students, while male evaluators' grading did not differ by student gender. Students who spent a short time with evaluators were also more likely to get a lower grade. A one-year examination of all third-year clerkship clinical performance evaluations at a single institution revealed that male and female evaluators rated male and female students differently, even when accounting for other measured variables.
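A rough sketch of an ordinal (proportional-odds) logistic model with a student-by-evaluator gender interaction, fit to simulated data. Unlike the hierarchical model in the study, this simplification ignores clustering of evaluations within evaluators; the coefficients and variable names are illustrative assumptions.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(5)
n = 4272  # number of evaluations in the abstract
student_female = rng.integers(0, 2, n)
evaluator_female = rng.integers(0, 2, n)
obs_time = rng.integers(1, 11, n)            # hypothetical weeks of observation

# Simulate a latent propensity toward a higher grade, then cut into 3 levels
latent = (0.26 * student_female - 0.33 * evaluator_female
          + 0.30 * student_female * evaluator_female
          + 0.05 * obs_time + rng.logistic(size=n))
grade = np.digitize(latent, np.quantile(latent, [0.5, 0.85]))  # 0, 1, 2

exog = pd.DataFrame({
    "student_female": student_female,
    "evaluator_female": evaluator_female,
    "student_x_evaluator": student_female * evaluator_female,
    "obs_time": obs_time,
})
res = OrderedModel(grade, exog, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())                          # coefficients and category thresholds
print("Odds ratios:", np.exp(res.params[:4])) # for the four predictors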
The Use of Video Technology in Science Teaching: A Vehicle for Alternative Assessment.
ERIC Educational Resources Information Center
Lawrence, Michael
1994-01-01
A secondary physics teacher used video assessments in science as an economical assessment form that required students to use the scientific method, explanation, feedback, critical thinking, and metacognition. When using video assessment in optics, he found his scoring was not biased and that students improved their performance following video…
Web-Based Portfolio Assessment: Validation of an Open Source Platform
ERIC Educational Resources Information Center
Collins, Regina; Elliot, Norbert; Klobucar, Andrew; Deek, Fadi P.
2013-01-01
Assessment of educational outcomes through purchased tests is commonplace in the evaluation of individual student ability and of educational programs. Focusing on the assessment of writing performance in a longitudinal study of first-time, full-time students (n = 598), this research describes the design, use, and assessment of an open-source…
Providing Formative Feedback From a Summative Computer-aided Assessment
Sewell, Robert D. E.
2007-01-01
Objectives: To examine the effectiveness of providing formative feedback for summative computer-aided assessment. Design: Two groups of first-year undergraduate life science students in pharmacy and neuroscience who were studying an e-learning package in a common pharmacology module were presented with a computer-based summative assessment. A sheet with individualized feedback derived from each of the 5 results sections of the assessment was provided to each student. Students were asked via a questionnaire to evaluate the form and method of feedback. Assessment: The students were able to reflect on their performance and use the feedback provided to guide their future study or revision. There was no significant difference between the responses from pharmacy and neuroscience students. Students' responses on the questionnaire indicated a generally positive reaction to this form of feedback. Conclusions: Findings suggest that additional formative assessment conveyed by this style and method would be appreciated and valued by students. PMID:17533442
The association between higher body mass index and poor school performance in high school students.
Tonetti, L; Fabbri, M; Filardi, M; Martoni, M; Natale, V
2016-12-01
This study aimed to examine the association between body mass index (BMI) and school performance in high school students by controlling for relevant mediators such as sleep quality, sleep duration and socioeconomic status. Thirty-seven high school students (mean age: 18.16 ± 0.44 years) attending the same school type, i.e. 'liceo scientifico' (science-based high school), were enrolled. Students' self-reported weight and height were used to calculate BMI. Participants wore an actigraph to objectively assess the quality and duration of sleep. School performance was assessed through the actual grade obtained at the final school-leaving exam, in which higher grades indicate higher performance. BMI, get-up time, mean motor activity, wake after sleep onset and number of awakenings were negatively correlated with the grade, while sleep efficiency was positively correlated. When performing a multiple regression analysis, BMI proved the only significant (negative) predictor of grade. When controlling for sleep quality, sleep duration and socioeconomic status, a higher BMI is associated with a poorer school performance in high school students.
A Quantitative Assessment of Student Performance and Examination Format
ERIC Educational Resources Information Center
Davison, Christopher B.; Dustova, Gandzhina
2017-01-01
This research study describes the correlations between student performance and examination format in a higher education teaching and research institution. The researchers employed a quantitative, correlational methodology utilizing linear regression analysis. The data was obtained from undergraduate student test scores over a three-year time span.…
Potential Predictors of Student Teaching Performance: Considering Emotional Intelligence
ERIC Educational Resources Information Center
Hall, P. Cougar; West, Joshua H.
2011-01-01
Efforts to increase teacher quality have focused on increasing both the admission and graduation standards required for students entering the profession. This study examined the relationship between common standards, such as college GPA, ACT scores, and Praxis exam scores, with student teacher performance as measured by an assessment rubric based…
Concept Inventories: Predicting the Wrong Answer May Boost Performance
ERIC Educational Resources Information Center
Talanquer, Vincente
2017-01-01
Several concept inventories have been developed to elicit students' alternative conceptions in chemistry. It is suggested that heuristic reasoning may bias students' answers in these types of assessments toward intuitively appealing choices. If this is the case, one could expect students to improve their performance by engaging in more analytical…
45 CFR 2522.700 - How does evaluation differ from performance measurement?
Code of Federal Regulations, 2010 CFR
2010-10-01
... progress, evaluation uses scientifically-based research methods to assess the effectiveness of programs by... the reading ability of students in a program over time to a similar group of students not... example, a performance measure for a literacy program may include the percentage of students receiving...
High-Stakes Accountability: Student Anxiety and Large-Scale Testing
ERIC Educational Resources Information Center
von der Embse, Nathaniel P.; Witmer, Sara E.
2014-01-01
This study examined the relationship between student anxiety about high-stakes testing and their subsequent test performance. The FRIEDBEN Test Anxiety Scale was administered to 1,134 11th-grade students, and data were subsequently collected on their statewide assessment performance. Test anxiety was a significant predictor of test performance…
Making Sense of the Performance (Dis)Advantage for Immigrant Students across Canada
ERIC Educational Resources Information Center
Volante, Louis; Klinger, Don; Bilgili, Özge; Siegel, Melissa
2017-01-01
International achievement measures such as the Programme for International Student Assessment (PISA) have traditionally reported a significant gap between non-migrant and immigrant student groups--a result that is often referred to as the "immigrant performance disadvantage". This article examines first- and second-generation immigrant…
ERIC Educational Resources Information Center
McAdams, Charles R.; Foster, Victoria A.
2007-01-01
Ethical standards for counselor training require remediation of students with professional performance deficiencies. However, standards fail to specify the type or extent of remediation necessary to safeguard students' legal rights or justify dismissal if remediation is unsuccessful. Critical assessment of remedial practices in counselor…
ERIC Educational Resources Information Center
Touchton, Michael
2015-01-01
I administer a quasi-experiment using undergraduate political science majors in statistics classes to evaluate whether "flipping the classroom" (the treatment) alters students' applied problem-solving performance and satisfaction relative to students in a traditional classroom environment (the control). I also assess whether general…
Singapore Students' Performance on Australian and Singapore Assessment Items
ERIC Educational Resources Information Center
Ho, Siew Yin; Lowrie, Tom
2012-01-01
This study describes Singapore students' (N = 607) performance on a recently developed Mathematics Processing Instrument (MPI). The MPI comprised tasks sourced from Australia's NAPLAN and Singapore's PSLE. In addition, the MPI had a corresponding question which encouraged students to describe how they solved the respective tasks. In particular,…
Mori, Brenda; Brooks, Dina; Norman, Kathleen E; Herold, Jodi; Beaton, Dorcas E
2015-08-01
To develop the first draft of a Canadian tool to assess physiotherapy (PT) students' performance in clinical education (CE). Phase 1: to gain consensus on the items within the new tool, the number and placement of the comment boxes, and the rating scale; Phase 2: to explore the face and content validity of the draft tool. Phase 1 used the Delphi method; Phase 2 used cognitive interviewing methods with recent graduates and clinical instructors (CIs) and detailed interviews with clinical education and measurement experts. Consensus was reached on the first draft of the new tool by round 3 of the Delphi process, which was completed by 21 participants. Interviews were completed with 13 CIs, 6 recent graduates, and 7 experts. Recent graduates and CIs were able to interpret the tool accurately, felt they could apply it to a recent CE experience, and provided suggestions to improve the draft. Experts provided salient advice. The first draft of a new tool to assess PT students in CE, the Canadian Physiotherapy Assessment of Clinical Performance (ACP), was developed and will undergo further development and testing, including national consultation with stakeholders. Data from Phase 2 will contribute to developing an online education module for CIs and students.
Student Perceptions of Online Lectures and WebCT in an Introductory Drug Information Course
Freeman, Maisha Kelly; Schrimsher, Robert H.; Kendrach, Michael G.
2006-01-01
Objectives: To determine student perceptions regarding online lectures and quizzes during an introductory drug information course for first-year professional doctor of pharmacy students. Design: Formal and online lectures, online quizzes, written semester projects, a practice-based examination, a careers in pharmacy exercise, and a final examination were used to deliver the course content and assess performance. A multiple-choice survey instrument was used to evaluate student perceptions of WebCT and online lectures. Assessment: More than 47% of students reported that online lectures helped them learn the material better, 77% reported that lectures would be used to study for the final examination, and 59% reported that they would use WebCT lectures for future classes. Approximately 40% of students agreed that online lectures should be used in future courses. Conclusion: Students reported that WebCT was easy to use; however, the majority of students preferred in-class lectures to online lectures. A positive correlation was observed between performance on the online quizzes and performance on the final examination. PMID:17332852
The Workshop Program on Authentic Assessment for Science Teachers
NASA Astrophysics Data System (ADS)
Rustaman, N. Y.; Rusdiana, D.; Efendi, R.; Liliawati, W.
2017-02-01
A study on implementing an authentic assessment program through a workshop was conducted to investigate improvements in science teachers' competence in designing performance assessments for real-life, school-level situations. A number of junior high school science teachers and students participated. Data were collected through a questionnaire, observation sheets, and pre- and post-tests during the four-day workshop, which gave the teachers direct experience with seventh-grade junior high school students during a tryout. The science teachers worked in groups of four and communicated with each other using think-pair-share in a cooperative learning approach. The findings show that, in general, the science teachers' involvement and their competence in authentic assessment improved. Their knowledge about the nature of assessment in relation to the nature of science and its instruction improved, but they still had problems integrating their performance assessment designs into their lesson plans. The seventh-grade students enjoyed participating in the science activities and performed the scientific processes planned by the teacher groups well. The teachers' response to the workshop was positive: they were able to design tasks and rubrics for science activities and to revise them after trying them out with students. By participating in the workshop, the teachers gained direct experience of designing and trying out assessments within their professional community, in a real situation with real junior high school students.
Tackling student neurophobia in neurosciences block with team-based learning.
Anwar, Khurshid; Shaikh, Abdul A; Sajid, Muhammad R; Cahusac, Peter; Alarifi, Norah A; Al Shedoukhy, Ahlam
2015-01-01
Traditionally, neuroscience is perceived as a difficult course in undergraduate medical education, with the literature suggesting use of the term "neurophobia" (fear of neurology among medical students). Instructional strategies employed for the teaching of neurosciences in undergraduate curricula traditionally include a combination of lectures, demonstrations, practical classes, problem-based learning and clinico-pathological conferences. Recently, team-based learning (TBL), a student-centered instructional strategy, has increasingly been regarded by many undergraduate medical courses as an effective method to assist student learning. In this study, 156 students in the year-three neurosciences block were divided into seven male and seven female groups, comprising 11-12 students in each group. TBL was introduced during the 6 weeks of this block, and a total of eight TBL sessions were conducted. We evaluated the effect of TBL on student learning and correlated it with students' performance in the summative assessment. Moreover, students' perceptions of the TBL process were assessed by an online survey. We found that students who attended TBL sessions performed better in the summative examinations as compared to those who did not. Furthermore, students performed better in team activities than in individual testing, with male students performing better and showing a more favorable impact on their grades in the summative examination. There was an increase in the number of students achieving higher grades (grade B and above) in this block when compared to the previous block (51.7% vs. 25%). Moreover, the number of students at risk for lower grades (Grade B- and below) decreased in this block when compared to the previous block (30.6% vs. 55%). Students generally responded favorably to the TBL process, expressed satisfaction with the content covered, and felt that such activities improved their communication and interpersonal skills. We conclude that implementing the TBL strategy increased students' responsibility for their own learning and helped them bridge gaps in their cognitive knowledge to tackle 'neurophobia' in a difficult neurosciences block, as evidenced by their improved performance in the summative assessment.
Koo, Cathy L.; Demps, Elaine L.; Bowman, John D.; Panahi, Ladan; Boyle, Paul
2016-01-01
Objective. To determine whether a flipped classroom design would improve student performance and perceptions of the learning experience compared to traditional lecture course design in a required pharmacotherapy course for second-year pharmacy students. Design. Students viewed short online videos about the foundational concepts and answered self-assessment questions prior to face-to-face sessions involving patient case discussions. Assessment. Pretest/posttest and precourse/postcourse surveys evaluated students’ short-term knowledge retention and perceptions before and after the redesigned course. The final grades improved after the redesign. Mean scores on the posttest improved from the pretest. Postcourse survey showed 88% of students were satisfied with the redesign. Students reported that they appreciated the flexibility of video viewing and knowledge application during case discussions but some also struggled with time requirements of the course. Conclusion. The redesigned course improved student test performance and perceptions of the learning experience during the first year of implementation. PMID:27073286
ERIC Educational Resources Information Center
Eliason, Norma Lynn
2014-01-01
The effects of incorporating an online social networking platform, hosted through Wikispace, as a method to potentially improve the performance of middle school students on standardized math assessments were investigated in this study. A principal strategy for any educational setting may provide an instructional approach that improves the delivery of…
ERIC Educational Resources Information Center
Nevels, Nevels
2012-01-01
The dissertation study reported here describes various policies and strategies used by school districts that impact student performance on the Missouri Algebra 1 End-of- Course (EOC) assessment. Analysis of state testing data, teacher survey data, and interview data were used to describe policies and strategies used by 42 teachers and…
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. National Center for Research in Vocational Education.
This second in a series of six learning modules on instructional evaluation is designed to give secondary and postsecondary vocational teachers help in assessing student performance as it relates to knowledge of the facts, data, related information, and procedures taught in their vocational courses. The terminal objective for the module is to…
ERIC Educational Resources Information Center
Center for Research on Evaluation, Standards, and Student Testing, Los Angeles, CA.
Information was gathered about current state interest, activity, and concerns related to performance assessment for students. The Center for Research on Evaluation, Standards, and Student Testing of the University of California (Los Angeles) conducted telephone interviews with directors of testing in each of the 50 states in the spring of 1990.…
Four Studies on Aspects of Assessing Computational Performance. Technical Report No. 297.
ERIC Educational Resources Information Center
Romberg, Thomas A., Ed.
The four studies reported in this document deal with aspects of assessing students' performance on computational skills. The first study grew out of a need for an instrument to measure students' speed at recalling addition facts. This had seemed to be a very easy task, but it proved to be much more difficult than anticipated. The second study grew…
ERIC Educational Resources Information Center
Tuckwiller, Brenda L.
2012-01-01
The purpose of this study was to investigate career and technical education teachers' level of knowledge and use of performance based student assessment practices in West Virginia's secondary and post-secondary career education centers. In addition, this study sought to determine what relationships, if any, exist between levels of knowledge and…
ERIC Educational Resources Information Center
Rolka, Christine; Remshagen, Anja
2015-01-01
Contextualized learning is considered beneficial for student success. In this article, we assess the impact of context-based learning tools on student grade performance in an introductory computer science course. In particular, we investigate two central questions: (1) does the use of context-based learning tools, robots and animations, affect…
ERIC Educational Resources Information Center
Bolona Lopez, Maria del Carmen; Ortiz, Margarita Elizabeth; Allen, Christopher
2015-01-01
This paper describes a project to use mobile devices and video conferencing technology in the assessment of student English as a Foreign Language (EFL) teacher performance on teaching practice in Ecuador. With the increasing availability of mobile devices with video recording facilities, it has become easier for trainers to capture teacher…
ERIC Educational Resources Information Center
Georgia Univ., Athens. Div. of Vocational Education.
This booklet lists tasks and functions the student in the transportation cluster should be able to do upon entering an employment situation or a postsecondary school. (Listings are also available for the areas of allied health occupations/practical nursing and cosmetology.) Tasks are coded to correspond to those on the Student Performance Record,…
ERIC Educational Resources Information Center
Stanger-Hall, Kathrin F.; Wenner, Julianne A.
2014-01-01
We assessed the performance of students with a self-reported conflict between their religious belief and the theory of evolution in two sections of a large introductory biology course (N = 373 students). Student performance was measured through pretest and posttest evolution essays and multiple-choice (MC) questions (evolution-related and…
ERIC Educational Resources Information Center
Rueda, Robert; And Others
The study examined performance of limited-English proficient Hispanic students on a battery of psychometric instruments designed to appropriately assess linguistic minority students. Subjects consisted of three groups: 44 nonhandicapped, 45 learning-disabled, and 39 mildly mentally retarded elementary-level students. Instruments included the…
ERIC Educational Resources Information Center
Warner, Zachary B.
2013-01-01
This study compared an expert-based cognitive model of domain mastery with student-based cognitive models of task performance for Integrated Algebra. Interpretations of student test results are limited by experts' hypotheses of how students interact with the items. In reality, the cognitive processes that students use to solve each item may be…
ERIC Educational Resources Information Center
Alegre, Alberto A.
2014-01-01
The aim of this research was to determine the relationship between academic self-efficacy, self-regulated learning and academic performance of first-year university students in the Metropolitan Lima area. An assessment was made of 284 students (138 male and 146 female students) admitted to a private university of Lima for the 2013-2 term by using…
The Impact of Middle-School Students' Feedback Choices and Performance on Their Feedback Memory
ERIC Educational Resources Information Center
Cutumisu, Maria; Schwartz, Daniel L.
2016-01-01
This paper presents a novel examination of the impact of students' feedback choices and performance on their feedback memory. An empirical study was designed to collect the choices to seek critical feedback from a hundred and six Grade 8 middle-school students via Posterlet, a digital assessment game in which students design posters. Upon…
ERIC Educational Resources Information Center
Doherty, Frank J.; Vaughan, George B.
A study was conducted to assess the academic performance of students who transferred from Piedmont Virginia Community College (PVCC) to the University of Virginia (UVA), using information on student application and acceptance rates, test scores, grade point averages (GPA's), and graduation rates. Data supplied by UVA on 291 students who…
School Closures in New York City: Did Students Do Better after Their High Schools Were Closed?
ERIC Educational Resources Information Center
Kemple, James J.
2016-01-01
Much has been written about the controversy surrounding performance-based school closures, but there has been no rigorous assessment of their impact on student achievement. Does the closure process harm students who are enrolled in a school while it is being phased out? Are future students better-off because a low-performing option has been…
ERIC Educational Resources Information Center
Petscher, Yaacov; Kershaw, Sarah; Koon, Sharon; Foorman, Barbara R.
2014-01-01
Districts and schools use progress monitoring to assess student progress, to identify students who fail to respond to intervention, and to further adapt instruction to student needs. Researchers and practitioners often use progress monitoring data to estimate student achievement growth (slope) and evaluate changes in performance over time for…
ERIC Educational Resources Information Center
Clemens, Nathan H.; Davis, John L.; Simmons, Leslie E.; Oslund, Eric L.; Simmons, Deborah C.
2015-01-01
Standardized measures are often used as an index of students' reading comprehension and scores have important implications, particularly for students who perform below expectations. This study examined secondary-level students' patterns of responding and the prevalence and impact of non-attempted items on a timed, group-administered,…
Park, Hyung-Ran; Kim, Chun-Ja; Park, Jee-Won; Park, Eunyoung
2015-01-01
The purpose of this study was to examine the effectiveness of team-based learning (a well-recognized learning and teaching strategy), applied in a health assessment subject, on nursing students' perceived teamwork (team-efficacy and team skills) and academic performance (individual and team readiness assurance tests, and examination scores). A prospective, one-group, pre- and post-test design enrolled a convenience sample of 74 second-year nursing students at a university in Suwon, Korea. Team-based learning was applied in a 2-credit health assessment subject over a 16-week semester. All students received written material one week before each class for readiness preparation. After administering individual- and team-readiness assurance tests consecutively, the subject instructor gave immediate feedback and delivered a mini-lecture to the students. Finally, students carried out skill based application exercises. The findings showed significant improvements in the mean scores of students' perceived teamwork after the introduction of team-based learning. In addition, team-efficacy was associated with team-adaptability skills and team-interpersonal skills. Regarding academic performance, team readiness assurance tests were significantly higher than individual readiness assurance tests over time. Individual readiness assurance tests were significantly related with examination scores, while team readiness assurance tests were correlated with team-efficacy and team-interpersonal skills. The application of team-based learning in a health assessment subject can enhance students' perceived teamwork and academic performance. This finding suggests that team-based learning may be an effective learning and teaching strategy for improving team-work of nursing students, who need to collaborate and effectively communicate with health care providers to improve patients' health.
Investigating the Effects of Exam Length on Performance and Cognitive Fatigue
Jensen, Jamie L.; Berry, Dane A.; Kummer, Tyler A.
2013-01-01
This study examined the effects of exam length on student performance and cognitive fatigue in an undergraduate biology classroom. Exams tested higher-order thinking skills. To test our hypothesis, we administered standard- and extended-length high-level exams to two populations of non-majors biology students. We gathered exam performance data between conditions as well as performance on the first and second halves of exams within conditions. We showed that lengthier exams led to better performance on assessment items shared between conditions, possibly lending support to spreading activation theory. They also led to greater performance on the final exam, lending support to the testing effect in creative problem solving. Lengthier exams did not result in lower performance due to fatiguing conditions, although students perceived subjective fatigue. Implications of these findings are discussed with respect to assessment practices. PMID:23950918
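The design described above rests on two comparisons: shared-item performance between the standard- and extended-length conditions, and first-half versus second-half performance within a condition (a simple check for fatigue). The sketch below is a hypothetical illustration of those two comparisons only; the score arrays are invented and the particular tests used (Welch and paired t-tests) are assumptions, not necessarily the authors' analysis.

    # Hypothetical illustration of the two comparisons described above.
    # All data are invented.
    import numpy as np
    from scipy import stats

    # (1) Shared-item accuracy, per student, in each exam-length condition.
    shared_standard = np.array([0.72, 0.65, 0.80, 0.70, 0.68])
    shared_extended = np.array([0.78, 0.74, 0.85, 0.77, 0.73])
    t, p = stats.ttest_ind(shared_extended, shared_standard, equal_var=False)
    print(f"Between conditions (shared items): t = {t:.2f}, p = {p:.3f}")

    # (2) First-half vs. second-half accuracy within one condition.
    first_half  = np.array([0.80, 0.75, 0.83, 0.79, 0.77])
    second_half = np.array([0.78, 0.76, 0.81, 0.80, 0.75])
    t, p = stats.ttest_rel(second_half, first_half)
    print(f"Within condition (2nd vs 1st half): t = {t:.2f}, p = {p:.3f}")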
Design and assessment of an interactive physics tutoring environment
NASA Astrophysics Data System (ADS)
Scott, Lisa Ann
2001-07-01
The application of scientific principles is an extremely important skill taught in undergraduate introductory science courses, yet many students emerge from such courses unable to reliably apply the scientific principles they have ostensibly learned. In an attempt to address this problem, the knowledge and thought processes needed to apply an important principle in introductory physics (Newton's law) were carefully analyzed. Reliable performance requires not only declarative knowledge but also corresponding procedural knowledge and the basic cognitive functions of deciding, implementing, and assessing. Computer programs called guided-practice PALs (Personal Assistants for Learning) were developed to teach explicitly the knowledge and thought processes needed to apply Newton's law to solve problems. These programs employ a modified form of Palincsar and Brown's reciprocal-teaching strategy (1984) in which students and computers alternately coach each other, taking turns making decisions, implementing them, and assessing them. The computer programs make it practically feasible to provide students with individual guidance and feedback ordinarily unavailable in most courses. In a pilot study, the guided-practice PALs were found to be nearly as effective as individual tutoring by expert teachers and significantly more effective than the instruction provided in a well-taught physics course. This guided practice, however, is not sufficient to ensure that students develop the ability to perform independently. Accordingly, independent-performance PALs were developed which require students to work independently, receiving only the minimal feedback necessary to complete the task successfully. These independent-performance PALs are interspersed with guided-practice PALs to create an instructional environment that facilitates a gradual transition to independent performance. In a study designed to assess the efficacy of the PAL instruction, students in the PAL group used only guided-practice PALs and students in the PAL+ group used both guided-practice and independent-performance PALs. The performance of the PAL and PAL+ groups was compared to the performance of a control group which received traditional instruction. The addition of the independent-performance PALs proved to be at least as effective as the guided-practice PALs alone, and both forms of PAL instruction were significantly more effective than traditional instruction.
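The turn-taking structure described above, in which student and computer alternately take the coaching role for the deciding, implementing, and assessing steps of each problem, can be pictured with a small sketch. The code below is a hypothetical illustration of that reciprocal pattern, not the actual PAL software; the step names, the role-swapping rule, and all identifiers are assumptions made for illustration only.

    # Hypothetical sketch of a reciprocal-teaching loop in the spirit of the
    # guided-practice PALs described above: the coaching role alternates
    # between the computer tutor and the student across steps and problems.
    from dataclasses import dataclass

    STEPS = ["decide on a principle", "implement the solution", "assess the result"]

    @dataclass
    class Turn:
        step: str
        coach: str       # who guides this step: "tutor" or "student"
        performer: str   # who carries the step out

    def plan_session(n_problems: int) -> list[Turn]:
        """Alternate the coaching role problem by problem, step by step."""
        turns = []
        for p in range(n_problems):
            for s, step in enumerate(STEPS):
                # Swap roles each step so both sides practice deciding,
                # implementing, and assessing.
                coach = "tutor" if (p + s) % 2 == 0 else "student"
                performer = "student" if coach == "tutor" else "tutor"
                turns.append(Turn(step=step, coach=coach, performer=performer))
        return turns

    if __name__ == "__main__":
        for t in plan_session(n_problems=2):
            print(f"{t.coach:7s} coaches while the {t.performer} tries to {t.step}")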
Loeding, B L; Greenan, J P
1998-12-01
The study examined the validity and reliability of assessments covering four skill domains, with three instruments per domain. Domains included generalizable mathematics, communication, interpersonal relations, and reasoning skills. Participants were deaf, legally blind, or visually impaired students enrolled in vocational classes at residential secondary schools. The researchers estimated the internal consistency reliability, test-retest reliability, and construct validity correlations of the three subinstruments: student self-ratings, teacher ratings, and performance assessments. The data suggest that these instruments are highly internally consistent measures of generalizable vocational skills. The four performance assessments had high-to-moderate test-retest reliability estimates and were generally considered to possess acceptable validity and reliability.
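For readers unfamiliar with the reliability statistics named in this abstract, the sketch below is a generic illustration of two of them: Cronbach's alpha for internal consistency and a correlation between two administrations for test-retest reliability. It is not the authors' analysis or data; the simulated scores, sample size, and item count are invented purely for demonstration.

    # Generic illustration of internal consistency (Cronbach's alpha) and
    # test-retest reliability (correlation across two occasions).
    # All data below are simulated for demonstration.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    true_score = rng.normal(size=(30, 1))
    items = true_score + rng.normal(scale=0.5, size=(30, 6))   # 6 correlated items
    print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

    # Test-retest: correlate total scores from two occasions.
    time1 = items.sum(axis=1)
    time2 = time1 + rng.normal(scale=1.0, size=30)
    r = np.corrcoef(time1, time2)[0, 1]
    print(f"Test-retest reliability: r = {r:.2f}")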