Science.gov

Sample records for 10-question multiple-choice test

  1. Accommodations for Multiple Choice Tests

    ERIC Educational Resources Information Center

    Trammell, Jack

    2011-01-01

    Students with learning or learning-related disabilities frequently struggle with multiple choice assessments due to difficulty discriminating between items, filtering out distracters, and framing a mental best answer. This Practice Brief suggests accommodations and strategies that disability service providers can utilize in conjunction with…

  2. Constructive Multiple-Choice Testing System

    ERIC Educational Resources Information Center

    Park, Jooyong

    2010-01-01

    The newly developed computerized Constructive Multiple-choice Testing system is introduced. The system combines short answer (SA) and multiple-choice (MC) formats by asking examinees to respond to the same question twice, first in the SA format, and then in the MC format. This manipulation was employed to collect information about the two…

  3. Formatting Issues in Multiple Choice Tests.

    ERIC Educational Resources Information Center

    Siskind, Theresa G.; And Others

    The purposes of the present study were to ascertain whether or not two particular formatting characteristics influenced seventh grade student performance on math tests. One research question focused on the effects of decimal positioning in multiple choice options for computational items. In a second question, labeling format was compared with…

  4. Comparative Reliabilities and Validities of Multiple Choice and Complex Multiple Choice Nursing Education Tests.

    ERIC Educational Resources Information Center

    Dryden, Russell E.; Frisbie, David A.

    The purpose of this study was to compare certain characteristics of multiple-choice (MC) and complex multiple-choice (CMC) achievement tests designed to measure knowledge in medical-surgical nursing. Each of 268 junior and senior nursing students from four midwestern schools responded to one of four test forms. MC items were developed by…

  5. The Positive and Negative Consequences of Multiple-Choice Testing

    ERIC Educational Resources Information Center

    Roediger, Henry L.; Marsh, Elizabeth J.

    2005-01-01

    Multiple-choice tests are commonly used in educational settings but with unknown effects on students' knowledge. The authors examined the consequences of taking a multiple-choice test on a later general knowledge test in which students were warned not to guess. A large positive testing effect was obtained: Prior testing of facts aided final…

  6. Reducing the Need for Guesswork in Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Bush, Martin

    2015-01-01

    The humble multiple-choice test is very widely used within education at all levels, but its susceptibility to guesswork makes it a suboptimal assessment tool. The reliability of a multiple-choice test is partly governed by the number of items it contains; however, longer tests are more time consuming to take, and for some subject areas, it can be…

  7. Optimizing Multiple-Choice Tests as Learning Events

    ERIC Educational Resources Information Center

    Little, Jeri Lynn

    2011-01-01

    Although generally used for assessment, tests can also serve as tools for learning--but different test formats may not be equally beneficial. Specifically, research has shown multiple-choice tests to be less effective than cued-recall tests in improving the later retention of the tested information (e.g., see meta-analysis by Hamaker, 1986),…

  8. Valuing Assessment in Teacher Education - Multiple-Choice Competency Testing

    ERIC Educational Resources Information Center

    Martin, Dona L.; Itter, Diane

    2014-01-01

    When our focus is on assessment, educators should work to value the nature of assessment. This paper presents a new approach to multiple-choice competency testing in mathematics education. The instrument discussed here reflects student competence, encourages self-regulatory learning behaviours and links content with current curriculum documents and…

  9. Guessing, Partial Knowledge, and Misconceptions in Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Lau, Paul Ngee Kiong; Lau, Sie Hoe; Hong, Kian Sam; Usop, Hasbee

    2011-01-01

    The number right (NR) method, in which students pick one option as the answer, is the conventional method for scoring multiple-choice tests that is heavily criticized for encouraging students to guess and failing to credit partial knowledge. In addition, computer technology is increasingly used in classroom assessment. This paper investigates the…

  10. DIATEST, A System for Programme Control of Multiple Choice Tests.

    ERIC Educational Resources Information Center

    Eriksson, Christer

    The DIATEST responder system is a control system for fully programed running of diagnostic tests of multiple-choice type. The system makes use of the control unit earlier developed at the Institute of Technology for programed four-screen slide projection and the electronic response analyser (ESAU). Presentation of a question is done audiovisually,…

  11. A Comparison of a Multiple Choice and an Essay Test of Writing Skills.

    ERIC Educational Resources Information Center

    Culpepper, Marilyn Mayer; Ramsdell, Rae

    1982-01-01

    The test scores of college freshmen given both a multiple-choice test and an essay test of writing skills were compared to assess the validity of the multiple-choice test relative to the essay test. (HOD)

  12. Effects of Test Expectation on Multiple-Choice Performance and Subjective Ratings

    ERIC Educational Resources Information Center

    Balch, William R.

    2007-01-01

    Undergraduates studied the definitions of 16 psychology terms, expecting either a multiple-choice (n = 132) or short-answer (n = 122) test. All students then received the same multiple-choice test, requiring them to recognize the definitions as well as novel examples of the terms. Compared to students expecting a multiple-choice test, those…

  13. The Testing Methods and Gender Differences in Multiple-Choice Assessment

    NASA Astrophysics Data System (ADS)

    Ng, Annie W. Y.; Chan, Alan H. S.

    2009-10-01

    This paper provides a comprehensive review of multiple-choice assessment over the past two decades to help people conduct effective testing in various subject areas. It was revealed that a variety of multiple-choice test methods, viz. conventional multiple-choice, liberal multiple-choice, elimination testing, confidence marking, probability testing, and the order-of-preference scheme, are available for use in assessing subjects' knowledge and decision ability. However, the best multiple-choice test method for use has not yet been identified. The review also indicated that the existence of gender differences in multiple-choice task performance might be due to the test area, instruction/scoring condition, and item difficulty.

  14. Measures of Partial Knowledge and Unexpected Responses in Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Chang, Shao-Hua; Lin, Pei-Chun; Lin, Zih-Chuan

    2007-01-01

    This study investigates differences in the partial scoring performance of examinees in elimination testing and conventional dichotomous scoring of multiple-choice tests implemented on a computer-based system. Elimination testing that uses the same set of multiple-choice items rewards examinees with partial knowledge over those who are simply…

  15. The Effect of Type and Timing of Feedback on Learning from Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Butler, Andrew C.; Karpicke, Jeffrey D.; Roediger, Henry L., III

    2007-01-01

    Two experiments investigated how the type and timing of feedback influence learning from a multiple-choice test. First, participants read 12 prose passages, which covered various general knowledge topics (e.g., The Sun) and ranged between 280 and 300 words in length. Next, they took an initial six-alternative, multiple-choice test on information…

  16. Are Multiple Choice Tests Fair to Medical Students with Specific Learning Disabilities?

    ERIC Educational Resources Information Center

    Ricketts, Chris; Brice, Julie; Coombes, Lee

    2010-01-01

    The purpose of multiple choice tests of medical knowledge is to estimate as accurately as possible a candidate's level of knowledge. However, concern is sometimes expressed that multiple choice tests may also discriminate in undesirable and irrelevant ways, such as between minority ethnic groups or by sex of candidates. There is little literature…

  17. On the Equivalence of Constructed-Response and Multiple-Choice Tests.

    ERIC Educational Resources Information Center

    Traub, Ross E.; Fisher, Charles W.

    Two sets of mathematical reasoning and two sets of verbal comprehension items were cast into each of three formats--constructed response, standard multiple-choice, and Coombs multiple-choice--in order to assess whether tests with identical content but different formats measure the same attribute, except for possible differences in error variance…

  18. On the Equivalence of Constructed-Response and Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Traub, Ross E.; Fisher, Charles W.

    1977-01-01

    Two sets of mathematical reasoning and two sets of verbal comprehension items were cast into each of three formats--constructed response, standard multiple-choice, and Coombs multiple-choice--in order to assess whether tests with identical content but different formats measure the same attribute. (Author/CTM)

  19. Multiple-Choice Test Items: What Are Textbook Authors Telling Teachers?

    ERIC Educational Resources Information Center

    Ellsworth, Randy A.

    1990-01-01

    Analysis of educational psychology textbooks identified textbook authors' guidelines for teachers to follow when writing multiple-choice test items. Selected guidelines were used to evaluate multiple-choice items (N=1,080) from 18 college instructor guides to educational psychology texts. Results indicated that approximately 60 percent of the…

  20. Testing Collective Memory: Representing the Soviet Union on Multiple-Choice Questions

    ERIC Educational Resources Information Center

    Reich, Gabriel A.

    2011-01-01

    This article tests the assumption that state-mandated multiple-choice history exams are a cultural tool for disseminating an "official" collective memory. Findings from a qualitative study of a collection of multiple-choice questions that relate to the history of the Soviet Union are presented. The 263 questions all come from New York State…

  1. Evidence-Based Decision about Test Scoring Rules in Clinical Anatomy Multiple-Choice Examinations

    ERIC Educational Resources Information Center

    Severo, Milton; Gaio, A. Rita; Povo, Ana; Silva-Pereira, Fernanda; Ferreira, Maria Amélia

    2015-01-01

    In theory the formula scoring methods increase the reliability of multiple-choice tests in comparison with number-right scoring. This study aimed to evaluate the impact of the formula scoring method in clinical anatomy multiple-choice examinations, and to compare it with that from the number-right scoring method, hoping to achieve an…
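
    For reference, the classical correction-for-guessing rule usually meant by "formula scoring" (the exact variant used in the study is not given in this snippet) can be written, for a test whose items have k options each, as

        S_{\mathrm{NR}} = R, \qquad S_{\mathrm{FS}} = R - \frac{W}{k-1},

    where R is the number of right answers and W the number of wrong answers, with omitted items counting as neither. Under blind guessing the expected per-item gain is 1/k - (1/(k-1))((k-1)/k) = 0, so formula scoring removes the expected reward for guessing that inflates number-right scores.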

  2. Are multiple choice tests fair to medical students with specific learning disabilities?

    PubMed

    Ricketts, Chris; Brice, Julie; Coombes, Lee

    2010-05-01

    The purpose of multiple choice tests of medical knowledge is to estimate as accurately as possible a candidate's level of knowledge. However, concern is sometimes expressed that multiple choice tests may also discriminate in undesirable and irrelevant ways, such as between minority ethnic groups or by sex of candidates. There is little literature to establish whether multiple choice tests may also discriminate against students with specific learning disabilities (SLDs), in particular those with a diagnosis of dyslexia, and whether the commonly-used accommodations allow such students to perform up to their capability. We looked for evidence to help us determine whether multiple choice tests could be relied upon to test all medical students fairly, regardless of disability. We analyzed the mean scores of over 900 undergraduate medical students on eight multiple-choice progress tests containing 1,000 items using a repeated-measures analysis of variance. We included disability, gender and ethnicity as possible explanatory factors, as well as year group. There was no significant difference between mean scores of students with an SLD who had test accommodations and students with no SLD and no test accommodation. Virtually all students were able to complete the tests within the allowed time. There were no significant differences between the mean scores of known minority ethnic groups or between the genders. We conclude that properly-designed multiple-choice tests of medical knowledge do not systematically discriminate against medical students with specific learning disabilities. PMID:19763855

  3. Quality Multiple-Choice Test Questions: Item-Writing Guidelines and an Analysis of Auditing Testbanks.

    ERIC Educational Resources Information Center

    Hansen, James D.; Dexter, Lee

    1997-01-01

    Analysis of test item banks in 10 auditing textbooks found that 75% of questions violated one or more guidelines for multiple-choice items. In comparison, 70% of a certified public accounting exam bank had no violations. (SK)

  4. A Comparison of Multiple-Choice Tests and True-False Tests Used in Evaluating Student Progress

    ERIC Educational Resources Information Center

    Tasdemir, Mehmet

    2010-01-01

    This study aims at comparing the difficulty levels, discrimination powers and achievement-testing powers of multiple-choice tests and true-false tests, and thus at testing the commonly held hypothesis that multiple-choice tests do not have the same properties as true-false tests. The research was performed with…

  5. Basic Item Analysis for Multiple-Choice Tests. ERIC/AE Digest.

    ERIC Educational Resources Information Center

    Kehoe, Jerard

    This digest presents a list of recommendations for writing multiple-choice test items, based on psychometrics. Item analysis statistics are typically provided by a measurement or test-scoring service where tests are machine scored, or by testing software packages. Test makers can capitalize on the fact that "bad" items can be differentiated from "good" items…
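
    As a concrete illustration of the two statistics such item analyses typically report, the Python sketch below computes item difficulty (proportion correct) and a corrected item-total (point-biserial) discrimination from a 0/1 score matrix. It is illustrative only: the function name and the demo data are invented here and are not part of the digest or of any scoring service's software.

        # Illustrative classical item analysis for a 0/1-scored response matrix
        # (rows = examinees, columns = items); not tied to any particular package.
        import numpy as np

        def item_analysis(scores):
            scores = np.asarray(scores, dtype=float)
            difficulty = scores.mean(axis=0)            # proportion answering each item correctly
            totals = scores.sum(axis=1)
            discrimination = np.empty(scores.shape[1])
            for j in range(scores.shape[1]):
                rest = totals - scores[:, j]            # total score with item j removed
                discrimination[j] = np.corrcoef(scores[:, j], rest)[0, 1]
            return difficulty, discrimination

        # Fabricated demo: 5 examinees, 4 items.
        demo = np.array([[1, 1, 0, 1],
                         [1, 0, 0, 1],
                         [0, 1, 1, 0],
                         [1, 1, 1, 1],
                         [0, 0, 0, 1]])
        p_values, point_biserials = item_analysis(demo)
        print(p_values, point_biserials)

    Items with very low or negative discrimination values are the "bad" items the digest refers to.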

  6. TESTER: A Computer Program to Produce Individualized Multiple Choice Tests.

    ERIC Educational Resources Information Center

    Hamer, Robert; Young, Forrest W.

    1978-01-01

    TESTER, a computer program which produces individualized objective tests from a pool of items, is described. Available in both PL/1 and FORTRAN, TESTER may be executed either interactively or in batch. (Author/JKS)

  7. Multiple Choices: A Tale of Testing in Four Voices.

    ERIC Educational Resources Information Center

    Cameron, Ann; Durham, Nedra; Long, Yvette; Noffke, Susan E.

    2001-01-01

    Describes a "mistake" on the newly developed Illinois State Achievement Test for Third Grade reading comprehension involving the substitution of illustrations of a White family for the African-American family members in a story. Tells the story of how a group of third-graders discovered the mistake and the reactions and events which took place…

  8. Comparing Assessments of Students' Knowledge by Computerized Open-Ended and Multiple-Choice Tests.

    ERIC Educational Resources Information Center

    Anbar, Michael

    1991-01-01

    Interactive computerized tests accepting unrestricted natural-language input were used to assess knowledge of clinical biophysics at the State University of New York at Buffalo. Comparison of responses to open-ended sequential questions and multiple-choice questions on the same material found the two formats test different aspects of competence.…

  9. The Relationship of Headings, Questions, and Locus of Control to Multiple-Choice Test Performance.

    ERIC Educational Resources Information Center

    Wilhite, Stephen C.

    This experiment examined the effects of headings and adjunct questions embedded in expository text on the delayed multiple-choice test performance of college students. Subjects in the headings-present group performed significantly better on the retention test than did the subjects in the headings-absent group. The main effect of adjunct questions…

  10. Cognitive Diagnostic Models for Tests with Multiple-Choice and Constructed-Response Items

    ERIC Educational Resources Information Center

    Kuo, Bor-Chen; Chen, Chun-Hua; Yang, Chih-Wei; Mok, Magdalena Mo Ching

    2016-01-01

    Traditionally, teachers evaluate students' abilities via their total test scores. Recently, cognitive diagnostic models (CDMs) have begun to provide information about the presence or absence of students' skills or misconceptions. Nevertheless, CDMs are typically applied to tests with multiple-choice (MC) items, which provide less diagnostic…

  11. Predictive Validity of a Multiple-Choice Test for Placement in a Community College

    ERIC Educational Resources Information Center

    Verbout, Mary F.

    2013-01-01

    Multiple-choice tests of punctuation and usage are used throughout the United States to assess the writing skills of new community college students in order to place them in either a basic writing course or first-year composition. To determine whether using the COMPASS Writing Test (CWT) is valid for placement at a community college, student test…

  12. Wrong Answers on Multiple-Choice Achievement Tests: Blind Guesses or Systematic Choices?

    ERIC Educational Resources Information Center

    Powell, J. C.

    A multi-faceted model for the selection of answers for multiple-choice tests was developed from the findings of a series of exploratory studies. This model implies that answer selection should be curvilinear. A series of models were tested for fit using the chi square procedure. Data were collected from 359 elementary school students ages 9-12.…

  13. Can Multiple-Choice Testing Induce Desirable Difficulties? Evidence from the Laboratory and the Classroom.

    PubMed

    Bjork, Elizabeth Ligon; Soderstrom, Nicholas C; Little, Jeri L

    2015-01-01

    The term desirable difficulties (Bjork, 1994) refers to conditions of learning that, though often appearing to cause difficulties for the learner and to slow down the process of acquisition, actually improve long-term retention and transfer. One known desirable difficulty is testing (as compared with restudy), although typically it is tests that clearly involve retrieval--such as free and cued recall tests--that are thought to induce these learning benefits and not multiple-choice tests. Nonetheless, multiple-choice testing is ubiquitous in educational settings and many other high-stakes situations. In this article, we discuss research, in both the laboratory and the classroom, exploring whether multiple-choice testing can also be fashioned to promote the type of retrieval processes known to improve learning, and we speculate about the necessary properties that multiple-choice questions must possess, as well as the metacognitive strategy students need to use in answering such questions, to achieve this goal. PMID:26255442

  14. A Practical Methodology for the Systematic Development of Multiple Choice Tests.

    ERIC Educational Resources Information Center

    Blumberg, Phyllis; Felner, Joel

    Using Guttman's facet design analysis, four parallel forms of a multiple-choice test were developed. A mapping sentence, logically representing the universe of content of a basic cardiology course, specified the facets of the course and the semantic structural units linking them. The facets were: cognitive processes, disease priority, specific…

  15. Multiple-Choice Tests and Student Understanding: What Is the Connection?

    ERIC Educational Resources Information Center

    Simkin, Mark G.; Kuechler, William L.

    2005-01-01

    Instructors can use both "multiple-choice" (MC) and "constructed response" (CR) questions (such as short answer, essay, or problem-solving questions) to evaluate student understanding of course materials and principles. This article begins by discussing the advantages and concerns of using these alternate test formats and…

  16. Multiple-Choice Question Tests: A Convenient, Flexible and Effective Learning Tool? A Case Study

    ERIC Educational Resources Information Center

    Douglas, Mercedes; Wilson, Juliette; Ennis, Sean

    2012-01-01

    The research presented in this paper is part of a project investigating assessment practices, funded by the Scottish Funding Council. Using established principles of good assessment and feedback, the use of online formative and summative multiple-choice tests (MCTs) was piloted to support independent and self-directed learning and improve…

  17. A FORTRAN IV Program for Multiple-choice Tests with Predetermined Minimal Acceptable Performance Levels

    ERIC Educational Resources Information Center

    Noe, Michael J.

    1976-01-01

    A Fortran IV multiple-choice test scoring program for an IBM 370 computer is described that computes minimally acceptable performance levels and compares student scores to these levels. The program accommodates up to 500 items with no more than nine alternatives from a group of examinees numbering fewer than 10,000. (Author)

  18. A New Method for Administering and Scoring Multiple-Choice Tests: Theoretical Considerations and Empirical Results.

    ERIC Educational Resources Information Center

    Cross, Lawrence H.; And Others

    A new scoring procedure for multiple choice tests attempts to assess partial knowledge and to restrict guessing. It is a variant of Coombs' elimination scoring method, adapted for use with the carbon-shield answer sheets commonly used with answer-until-correct scoring. Examinees are directed to erase the carbon shields of choices they are certain…
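
    The snippet does not reproduce the scoring rule itself; for orientation, the Coombs elimination rule that such variants build on is commonly stated, for an item with k options, as

        s_i = e_i - (k-1)\,c_i,

    where e_i is the number of incorrect options eliminated on item i and c_i is 1 if the correct option was eliminated and 0 otherwise. Per-item scores therefore run from -(k-1) to k-1, and eliminating a single option at random has an expected score of zero, which is how the method credits partial knowledge without rewarding blind guessing.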

  19. Expected Multiple-Choice Test Item Scores Under Ordinal Response Modes.

    ERIC Educational Resources Information Center

    Frary, Robert B.

    Ordinal response modes for multiple-choice tests are those under which the examinee marks one or more choices in an effort to identify the correct choice, or to include it in a proper subset of the choices. Two ordinal response modes, answer-until-correct and Coombs' elimination of choices that examinees identify as wrong, were analyzed for scoring…

  20. A Case Study on Multiple-Choice Testing in Anatomical Sciences

    ERIC Educational Resources Information Center

    Golda, Stephanie DuPont

    2011-01-01

    Objective testing techniques, such as multiple-choice examinations, are a widely accepted method of assessment in gross anatomy. In order to deter cheating on these types of examinations, instructors often design several versions of an examination to distribute. These versions usually involve the rearrangement of questions and their corresponding…

  1. The "None of the Above" Option in Multiple-Choice Testing: An Experimental Study

    ERIC Educational Resources Information Center

    DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda

    2014-01-01

    The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…

  2. Problem Solving Questions for Multiple Choice Tests: A Method for Analyzing the Cognitive Demands of Items.

    ERIC Educational Resources Information Center

    Simpson, Deborah E.; Cohen, Elsa B.

    This paper reports a multi-method approach for examining the cognitive level of multiple-choice items used in a medical pathology course at a large midwestern medical school. Analysis of the standard item analysis data and think-out-loud reports of a sample of students completing a 66 item examination were used to test assumptions related to the…

  3. Application of a Multidimensional Nested Logit Model to Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Bolt, Daniel M.; Wollack, James A.; Suh, Youngsuk

    2012-01-01

    Nested logit models have been presented as an alternative to multinomial logistic models for multiple-choice test items (Suh and Bolt in "Psychometrika" 75:454-473, 2010) and possess a mathematical structure that naturally lends itself to evaluating the incremental information provided by attending to distractor selection in scoring. One potential…

  4. Effects of Mayfield's Four Questions (M4Q) on Nursing Students' Self-Efficacy and Multiple-Choice Test Scores

    ERIC Educational Resources Information Center

    Mayfield, Linda Riggs

    2010-01-01

    This study examined the effects of being taught the Mayfield's Four Questions multiple-choice test-taking strategy on the perceived self-efficacy and multiple-choice test scores of nursing students in a two-year associate degree program. Experimental and control groups were chosen by stratified random sampling. Subjects completed the 10-statement…

  5. The Relationship of Deep and Surface Study Approaches on Factual and Applied Test-Bank Multiple-Choice Question Performance

    ERIC Educational Resources Information Center

    Yonker, Julie E.

    2011-01-01

    With the advent of online test banks and large introductory classes, instructors have often turned to textbook publisher-generated multiple-choice question (MCQ) exams in their courses. Multiple-choice questions are often divided into categories of factual or applied, thereby implicating levels of cognitive processing. This investigation examined…

  6. Test of understanding of vectors: A reliable multiple-choice vector concept test

    NASA Astrophysics Data System (ADS)

    Barniol, Pablo; Zavala, Genaro

    2014-06-01

    In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended problems in which a total of 2067 students participated. Using this taxonomy, we then designed a 20-item multiple-choice test [Test of understanding of vectors (TUV)] and administered it in English to 423 students who were completing the required sequence of introductory physics courses at a large private Mexican university. We evaluated the test's content validity, reliability, and discriminatory power. The results indicate that the TUV is a reliable assessment tool. We also conducted a detailed analysis of the students' understanding of the vector concepts evaluated in the test. The TUV is included in the Supplemental Material as a resource for other researchers studying vector learning, as well as instructors teaching the material.

  7. The use of multiple-choice tests in anatomy: common pitfalls and how to avoid them.

    PubMed

    Vahalia, K V; Subramaniam, K; Marks, S C; De Souza, E J

    1995-01-01

    Multiple-choice questions (MCQ) are widely used to evaluate students in the health sciences, including anatomy. Unusual responses in 90 simple MCQ examinations have been identified and classified as to cause, including a number of illustrated examples. About one-quarter of these errors were attributable to the teacher and could have been avoided by a critical analysis of the questions before use. The increasing use of sophisticated formats of the MCQ in medical education indicates that teachers need to analyze their questions more carefully before and after actual tests to minimize errors. PMID:7697515

  8. Mechanical waves conceptual survey: Its modification and conversion to a standard multiple-choice test

    NASA Astrophysics Data System (ADS)

    Barniol, Pablo; Zavala, Genaro

    2016-06-01

    In this article we present several modifications of the mechanical waves conceptual survey, the most important test to date that has been designed to evaluate university students' understanding of four main topics in mechanical waves: propagation, superposition, reflection, and standing waves. The most significant changes are (i) modification of several test questions that had some problems in their original design, (ii) standardization of the number of options for each question to five, (iii) conversion of the two-tier questions to multiple-choice questions, and (iv) modification of some questions to make them independent of others. To obtain a final version of the test, we administered both the original and modified versions several times to students at a large private university in Mexico. These students were completing a course that covers the topics tested by the survey. The final modified version of the test was administered to 234 students. In this study we present the modifications for each question, and discuss the reasons behind them. We also analyze the results obtained by the final modified version and offer a comparison between the original and modified versions. In the Supplemental Material we present the final modified version of the test. It can be used by teachers and researchers to assess students' understanding of, and learning about, mechanical waves.

  9. Multiple-Choice Cloze Exercises: Textual Domain, Science. SPPED Test Development Notebook, Form 81-S [and] Answer Key for Multiple-Choice Cloze Exercises: Textual Domain, Science. SPPED Test Development Notebook, Form 85-S. Revised.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Div. of Research.

    The "Test Development Notebook" is a resource designed for the preparation of tests of literal comprehension for students in grades 1 through 12. This volume contains 200 multiple-choice cloze exercises taken from textbooks in science, and the accompanying answer key. Each exercise carries the code letter of the section to which it belongs. The…

  10. The Impact of Escape Alternative Position Change in Multiple-Choice Test on the Psychometric Properties of a Test and Its Items Parameters

    ERIC Educational Resources Information Center

    Hamadneh, Iyad Mohammed

    2015-01-01

    This study aimed at investigating the impact of changing the escape alternative position in a multiple-choice test on the psychometric properties of the test and its item parameters (difficulty, discrimination & guessing), and on the estimation of examinee ability. To achieve the study objectives, a 4-alternative multiple choice type achievement test…

  11. The Development and Validation of a Multiple-Choice Cloze Test for Non-Native College Freshmen.

    ERIC Educational Resources Information Center

    El-Banna, Adel I.

    Procedures for developing and validating a modified cloze test are described. The multiple-choice cloze procedure was developed to aid in the rapid placement of college freshmen who are students of English as a second language. It was designed to overcome some problems in the original cloze test, to reduce anxiety, and to be machine scored. The…

  12. Set of Criteria for Efficiency of the Process Forming the Answers to Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2013-01-01

    A set of criteria is offered for assessing the efficiency of the process of forming answers to multiple-choice test items. To increase the accuracy of computer-assisted testing results, it is suggested that the dynamics of the process of forming the final answer be assessed using the following factors: a loss-of-time factor and a correct-choice factor. The model…

  13. Beyond the Bubble: How to Use Multiple-Choice Tests to Improve Math Instruction, Grades 4-5

    ERIC Educational Resources Information Center

    Wickett, Maryann; Hendrix-Martin, Eunice

    2011-01-01

    Multiple-choice testing is an educational reality. Rather than complain about the negative impact these tests may have on teaching and learning, why not use them to better understand your students' true mathematical knowledge and comprehension? Maryann Wickett and Eunice Hendrix-Martin show teachers how to move beyond the student's answer--right…

  14. Force Concept Inventory-based multiple-choice test for investigating students' representational consistency

    NASA Astrophysics Data System (ADS)

    Nieminen, Pasi; Savinainen, Antti; Viiri, Jouni

    2010-07-01

    This study investigates students’ ability to interpret multiple representations consistently (i.e., representational consistency) in the context of the force concept. For this purpose we developed the Representational Variant of the Force Concept Inventory (R-FCI), which makes use of nine items from the 1995 version of the Force Concept Inventory (FCI). These original FCI items were redesigned using various representations (such as motion map, vectorial and graphical), yielding 27 multiple-choice items concerning four central concepts underpinning the force concept: Newton’s first, second, and third laws, and gravitation. We provide some evidence for the validity and reliability of the R-FCI; this analysis is limited to the student population of one Finnish high school. The students took the R-FCI at the beginning and at the end of their first high school physics course. We found that students’ (n=168) representational consistency (whether scientifically correct or not) varied considerably depending on the concept. On average, representational consistency and scientifically correct understanding increased during the instruction, although in the post-test only a few students performed consistently both in terms of representations and scientifically correct understanding. We also compared students’ (n=87) results of the R-FCI and the FCI, and found that they correlated quite well.

  15. Effectiveness of Guided Multiple Choice Objective Questions Test on Students' Academic Achievement in Senior School Mathematics by School Location

    ERIC Educational Resources Information Center

    Igbojinwaekwu, Patrick Chukwuemeka

    2015-01-01

    This study investigated, using pretest-posttest quasi-experimental research design, the effectiveness of guided multiple choice objective questions test on students' academic achievement in Senior School Mathematics, by school location, in Delta State Capital Territory, Nigeria. The sample comprised 640 Students from four coeducation secondary…

  16. The Effect of Luck and Misinformation on the Discrepancy between Multiple-Choice Test Scores and True Ability.

    ERIC Educational Resources Information Center

    Lowry, Stephen R.

    The effects of luck and misinformation on ability of multiple-choice test scores to estimate examinee ability were investigated. Two measures of examinee ability were defined. Misinformation was shown to have little effect on ability of raw scores and a substantial effect on ability of corrected-for-guessing scores to estimate examinee ability.…

  17. Validation of a Simplified Method for Determining Passing Scores for Criterion-Referenced, Multiple-Choice Tests.

    ERIC Educational Resources Information Center

    Meredith, John B., Jr.

    The complexity of defining accurate passing scores with a minimum classification error when evaluating criterion-referenced, multiple-choice tests has been a major problem for classroom teachers. Therefore, a practical procedure in which the instructor determines the plausibility of each item option for the minimally acceptable examinee is…

  18. An Investigation of Non-Independence of Components of Scores on Multiple-Choice Tests. Final Report.

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.; Burkheimer, Graham J., Jr.

    Investigation is continued into various effects of non-independent error introduced into multiple-choice test scores as a result of chance guessing success. A model is developed in which the concept of theoretical components of scores is not introduced and in which, therefore, no assumptions regarding any relationship between such components need…

  19. Testing Historical Knowledge: Standards, Multiple-Choice Questions and Student Reasoning

    ERIC Educational Resources Information Center

    Reich, Gabriel A.

    2009-01-01

    This article explores the reasoning employed by high school students to answer a set of multiple-choice history questions. The questions come from New York State's Global History and Geography Regents exam. The Regents exams, together with a particularly well-regarded and ambitious set of content standards, are the cornerstone of the state's…

  20. Assertion-Reason Multiple-Choice Testing as a Tool for Deep Learning: A Qualitative Analysis

    ERIC Educational Resources Information Center

    Williams, Jeremy B.

    2006-01-01

    This paper reflects on the ongoing debate surrounding the usefulness (or otherwise) of multiple-choice questions (MCQ) as an assessment instrument. The context is a graduate school of business in Australia where an experiment was conducted to investigate the use of assertion-reason questions (ARQ), a sophisticated form of MCQ that aims to…

  1. Test-Taking Strategies of Arab EFL Learners on Multiple Choice Tests

    ERIC Educational Resources Information Center

    Al Fraidan, Abdullah; Al-Khalaf, Khadija

    2012-01-01

    Many studies have focused on the function of learners' strategies in a variety of EFL domains. However, research on test-taking strategies (TTSs) has been limited, even though such strategies might influence test scores and, as a result, test validity. Motivated by this fact and in light of our own experience as EFL test-makers, this article will…

  2. Why Is Performance on Multiple-Choice Tests and Constructed-Response Tests Not More Closely Related? Theory and an Empirical Test

    ERIC Educational Resources Information Center

    Kuechler, William L.; Simkin, Mark G.

    2010-01-01

    Both professional certification and academic tests rely heavily on multiple-choice questions, despite the widespread belief that alternate, constructed-response questions are superior measures of a test taker's understanding of the underlying material. Empirically, the search for a link between these two assessment metrics has met with limited…

  3. Test of Understanding of Vectors: A Reliable Multiple-Choice Vector Concept Test

    ERIC Educational Resources Information Center

    Barniol, Pablo; Zavala, Genaro

    2014-01-01

    In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of…

  4. Measuring the Consistency in Change in Hepatitis B Knowledge among Three Different Types of Tests: True/False, Multiple Choice, and Fill in the Blanks Tests.

    ERIC Educational Resources Information Center

    Sahai, Vic; Demeyere, Petra; Poirier, Sheila; Piro, Felice

    1998-01-01

    The recall of information about Hepatitis B demonstrated by 180 seventh graders was tested with three test types: (1) short-answer; (2) true/false; and (3) multiple-choice. Short answer testing was the most reliable. Suggestions are made for the use of short-answer tests in evaluating student knowledge. (SLD)

  5. Post-Graduate Student Performance in "Supervised In-Class" vs. "Unsupervised Online" Multiple Choice Tests: Implications for Cheating and Test Security

    ERIC Educational Resources Information Center

    Ladyshewsky, Richard K.

    2015-01-01

    This research explores differences in multiple choice test (MCT) scores in a cohort of post-graduate students enrolled in a management and leadership course. A total of 250 students completed the MCT in either a supervised in-class paper and pencil test or an unsupervised online test. The only statistically significant difference between the nine…

  6. A Multiple-Choice Mushroom: Schools, Colleges Rely More than Ever on Standardized Tests.

    ERIC Educational Resources Information Center

    Hawkins, B. Denise

    1995-01-01

    This discussion of college entrance examinations reviews differences between the Scholastic Assessment Test (SAT) and the American College Test. It then focuses on the SAT, discussing numbers of students taking the tests, changes in test construction to recognize contributions of women and minorities, involvement of African Americans in…

  7. Multiple Choice and True/False Tests: Reliability Measures and Some Implications of Negative Marking

    ERIC Educational Resources Information Center

    Burton, Richard F.

    2004-01-01

    The standard error of measurement usefully provides confidence limits for scores in a given test, but is it possible to quantify the reliability of a test with just a single number that allows comparison of tests of different format? Reliability coefficients do not do this, being dependent on the spread of examinee attainment. Better in this…

  8. The Effect of Misinformation, Partial Information, and Guessing on Expected Multiple-Choice Test Item Scores.

    ERIC Educational Resources Information Center

    Frary, Robert B.

    1980-01-01

    Six scoring methods for assigning weights to right or wrong responses according to various instructions given to test takers are analyzed with respect to expected chance scores and the effect of various levels of information and misinformation. Three of the methods provide feedback to the test taker. (Author/CTM)

  9. Validation of a Standardized Multiple-Choice Multicultural Competence Test: Implications for Training, Assessment, and Practice

    ERIC Educational Resources Information Center

    Gillem, Angela R.; Bartoli, Eleonora; Bertsch, Kristin N.; McCarthy, Maureen A.; Constant, Kerra; Marrero-Meisky, Sheila; Robbins, Steven J.; Bellamy, Scarlett

    2016-01-01

    The Multicultural Counseling and Psychotherapy Test (MCPT), a measure of multicultural counseling competence (MCC), was validated in 2 phases. In Phase 1, the authors administered 451 test items derived from multicultural guidelines in counseling and psychology to 32 multicultural experts and 30 nonexperts. In Phase 2, the authors administered the…

  10. Criterion Validation of a Written Multiple-Choice Test of Spanish/English Bilingual Skills.

    ERIC Educational Resources Information Center

    Doyle, Teresa F.; Lin, Thung-Rung

    Supervisory performance appraisals may be of limited utility in the validation of bilingual tests because incumbents are often hired to be the only employee in a unit who possesses the skills necessary to do the job. In an effort to provide criterion-related validity for four equivalent forms of a Spanish/English bilingual test for school district…

  11. Grade 9 English Language Arts Achievement Test. Part B: Reading (Multiple Choice). Readings Booklet. 1986 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 9 English Language Arts Achievement Test in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains eight short reading selections taken from fiction, nonfiction, and poetry, including the following: "Thieving Raffles" (Eric Nicol); "Flight of the Roller Coaster"…

  12. Memory-Context Effects of Screen Color in Multiple-Choice and Fill-In Tests

    ERIC Educational Resources Information Center

    Prestera, Gustavo E.; Clariana, Roy; Peck, Andrew

    2005-01-01

    In this experimental study, 44 undergraduates completed five computer-based instructional lessons and either two multiple-choice tests or two fill-in-the-blank tests. Color-coded borders were displayed during the lesson, adjacent to the screen text and illustrations. In the experimental condition, corresponding border colors were shown at posttest…

  13. Influences of Item Content and Format on the Dimensionality of Tests Combining Multiple-Choice and Open-Response Items: An Application of the Poly-DIMTEST Procedure.

    ERIC Educational Resources Information Center

    Perkhounkova, Yelena; Dunbar, Stephen B.

    The DIMTEST statistical procedure was used in a confirmatory manner to explore the dimensionality structures of three kinds of achievement tests: multiple-choice tests, constructed-response tests, and tests combining both formats. The DIMTEST procedure is based on estimating conditional covariances of the responses to the item pairs. The analysis…

  14. How to Assess Student Performance in History: Going beyond Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Edmunds, Julie

    2006-01-01

    This paper addresses some real assessment challenges that teachers have identified: (1) Figuring out what really is important for students to know and be able to do in history; (2) Teaching the skills of "doing history" in a world of testing that often seems to value only factual knowledge; (3) Identifying and using assessments that provide…

  15. Mechanical Waves Conceptual Survey: Its Modification and Conversion to a Standard Multiple-Choice Test

    ERIC Educational Resources Information Center

    Barniol, Pablo; Zavala, Genaro

    2016-01-01

    In this article we present several modifications of the mechanical waves conceptual survey, the most important test to date that has been designed to evaluate university students' understanding of four main topics in mechanical waves: propagation, superposition, reflection, and standing waves. The most significant changes are (i) modification of…

  16. Does Linking Mixed-Format Tests Using a Multiple-Choice Anchor Produce Comparable Results for Male and Female Subgroups? Research Report. ETS RR-11-44

    ERIC Educational Resources Information Center

    Kim, Sooyeon; Walker, Michael E.

    2011-01-01

    This study examines the use of subpopulation invariance indices to evaluate the appropriateness of using a multiple-choice (MC) item anchor in mixed-format tests, which include both MC and constructed-response (CR) items. Linking functions were derived in the nonequivalent groups with anchor test (NEAT) design using an MC-only anchor set for 4…

  17. Beyond the Bubble, Grades 2-3: How to Use Multiple-Choice Tests to Improve Math Instruction, Grades 2-3

    ERIC Educational Resources Information Center

    Wickett, Maryann; Hendrix-Martin, Eunice

    2011-01-01

    Multiple-choice testing is an educational reality. Rather than complain about the negative impact these tests may have on teaching and learning, why not use them to better understand your students' true mathematical knowledge and comprehension? Maryann Wickett and Eunice Hendrix-Martin show teachers how to move beyond the student's answer--right…

  18. A Clarification of the Effects of Rapid Guessing on Coefficient [Alpha]: A Note on Attali's "Reliability of Speeded Number-Right Multiple-Choice Tests"

    ERIC Educational Resources Information Center

    Wise, Steven L.; DeMars, Christine E.

    2009-01-01

    Attali (2005) recently demonstrated that Cronbach's coefficient [alpha] estimate of reliability for number-right multiple-choice tests will tend to be deflated by speededness, rather than inflated as is commonly believed and taught. Although the methods, findings, and conclusions of Attali (2005) are correct, his article may inadvertently invite a…
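
    For readers who want the coefficient itself in view, a minimal sketch of Cronbach's alpha for an examinees-by-items score matrix is given below; it is illustrative only and does not reproduce Attali's or the authors' analyses. Speededness enters through how not-reached items are scored, since that choice changes both the item variances and the total-score variance in the formula.

        # Minimal Cronbach's alpha for an examinees x items score matrix (illustrative sketch).
        import numpy as np

        def cronbach_alpha(scores):
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]                              # number of items
            item_variances = scores.var(axis=0, ddof=1)      # per-item variances
            total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
            return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

        # To see the speededness issue, score not-reached items as 0 (or as chance-level
        # imputations) before calling the function and compare the resulting alpha values.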

  19. The Impact of Item Position in Multiple-Choice Test on Student Performance at the Basic Education Certificate Examination (BECE) Level

    ERIC Educational Resources Information Center

    Ollennu, Sam Nii Nmai; Etsey, Y. K. A.

    2015-01-01

    The study investigated the impact of item position in multiple-choice test on student performance at the Basic Education Certificate Examination (BECE) level in Ghana. The sample consisted of 810 Junior Secondary School (JSS) Form 3 students selected from 12 different schools. A quasi-experimental design was used. The instrument for the project…

  20. Multiple-Choice Testing Using Immediate Feedback--Assessment Technique (IF AT®) Forms: Second-Chance Guessing vs. Second-Chance Learning?

    ERIC Educational Resources Information Center

    Merrel, Jeremy D.; Cirillo, Pier F.; Schwartz, Pauline M.; Webb, Jeffrey A.

    2015-01-01

    Multiple choice testing is a common but often ineffective method for evaluating learning. A newer approach, however, using Immediate Feedback Assessment Technique (IF AT®, Epstein Educational Enterprise, Inc.) forms, offers several advantages. In particular, a student learns immediately if his or her answer is correct and, in the case of an…

  1. Making the Most of Multiple Choice

    ERIC Educational Resources Information Center

    Brookhart, Susan M.

    2015-01-01

    Multiple-choice questions draw criticism because many people perceive they test only recall or atomistic, surface-level objectives and do not require students to think. Although this can be the case, it does not have to be that way. Susan M. Brookhart suggests that multiple-choice questions are a useful part of any teacher's questioning repertoire…

  2. Evaluation of the Randomized Multiple Choice Format.

    ERIC Educational Resources Information Center

    Harke, Douglas James

    Each physics problem used in evaluating the effectiveness of Randomized Multiple Choice (RMC) tests was stated in the conventional manner and was followed by several multiple choice items corresponding to the steps in a written solution but presented in random order. Students were instructed to prepare a written answer and to use it to answer the…

  3. C-Test vs. Multiple-Choice Cloze Test as Tests of Reading Comprehension in Iranian EFL Context: Learners' Perspective

    ERIC Educational Resources Information Center

    Ajideh, Parviz; Mozaffarzadeh, Sorayya

    2012-01-01

    Cloze tests have been widely used for measuring reading comprehension since their introduction to the testing world by Taylor in 1953. In 1982, however, Klein-Braley criticized the cloze procedure, mostly for its deletion and scoring problems, and introduced a newly developed testing procedure, the C-test, which was an evolved form of cloze tests without…

  4. A Systematic Assessment of "None of the Above" on Multiple Choice Tests in a First Year Psychology Classroom

    ERIC Educational Resources Information Center

    Pachai, Matthew V.; DiBattista, David; Kim, Joseph A.

    2015-01-01

    Multiple choice writing guidelines are decidedly split on the use of "none of the above" (NOTA), with some authors discouraging and others advocating its use. Moreover, empirical studies of NOTA have produced mixed results. Generally, these studies have utilized NOTA as either the correct response or a distractor and assessed its effect…

  5. Science Library of Test Items. Volume Seventeen. A Collection of Multiple Choice Test Items Relating Mainly to Biology.

    ERIC Educational Resources Information Center

    New South Wales Dept. of Education, Sydney (Australia).

    As one in a series of test item collections developed by the Assessment and Evaluation Unit of the Directorate of Studies, items are made available to teachers for the construction of unit tests or term examinations or as a basis for class discussion. Each collection was reviewed for content validity and reliability. The test items meet syllabus…

  6. Science Library of Test Items. Volume Twenty. A Collection of Multiple Choice Test Items Relating Mainly to Physics, 1.

    ERIC Educational Resources Information Center

    New South Wales Dept. of Education, Sydney (Australia).

    As one in a series of test item collections developed by the Assessment and Evaluation Unit of the Directorate of Studies, items are made available to teachers for the construction of unit tests or term examinations or as a basis for class discussion. Each collection was reviewed for content validity and reliability. The test items meet syllabus…

  7. Science Library of Test Items. Volume Nineteen. A Collection of Multiple Choice Test Items Relating Mainly to Geology.

    ERIC Educational Resources Information Center

    New South Wales Dept. of Education, Sydney (Australia).

    As one in a series of test item collections developed by the Assessment and Evaluation Unit of the Directorate of Studies, items are made available to teachers for the construction of unit tests or term examinations or as a basis for class discussion. Each collection was reviewed for content validity and reliability. The test items meet syllabus…

  8. Multiple-Choice Exams and Guessing: Results from a One-Year Study of General Chemistry Tests Designed to Discourage Guessing

    ERIC Educational Resources Information Center

    Campbell, Mark L.

    2015-01-01

    Multiple-choice exams, while widely used, are necessarily imprecise due to the contribution of guessing to the final student score. This past year at the United States Naval Academy, the construction and grading scheme for the department-wide general chemistry multiple-choice exams were revised with the goal of decreasing the contribution of…

  9. Investigating Administered Essay and Multiple-Choice Tests in the English Department of Islamic Azad University, Hamedan Branch

    ERIC Educational Resources Information Center

    Karimi, Lotfollah; Mehrdad, Ali Gholami

    2012-01-01

    This study has attempted to investigate the written tests administered in the language department of the Islamic Azad University of Hamedan, Iran, from the points of view of validity, practicality and reliability. To this end two steps were taken. First, in examining 112 tests, we found that the face validity of 50 tests had been threatened, 9 tests lacked…

  10. Comparability of Computer- and Paper-Administered Multiple-Choice Tests for K-12 Populations: A Synthesis

    ERIC Educational Resources Information Center

    Kingston, Neal M.

    2009-01-01

    There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study) the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…

  11. Item Order, Response Format, and Examinee Sex and Handedness and Performance on a Multiple-Choice Test.

    ERIC Educational Resources Information Center

    Kleinke, David J.

    Four forms of a 36-item adaptation of the Stanford Achievement Test were administered to 484 fourth graders. External factors potentially influencing test performance were examined, namely: (1) item order (easy-to-difficult vs. uniform); (2) response location (left column vs. right column); (3) handedness which may interact with response location;…

  12. catcher: A Software Program to Detect Answer Copying in Multiple-Choice Tests Based on Nominal Response Model

    ERIC Educational Resources Information Center

    Kalender, Ilker

    2012-01-01

    catcher is a software program designed to compute the [omega] index, a common statistical index for the identification of collusions (cheating) among examinees taking an educational or psychological test. It requires (a) responses and (b) ability estimations of individuals, and (c) item parameters to make computations and outputs the results of…
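
    The [omega] index is not defined in the snippet; in its commonly cited form it is a standardized difference between the observed number of response matches for a suspected copier-source pair and the number expected under a nominal response model. The sketch below shows only that general form, with the model-based match probabilities assumed to be supplied by other code (estimating them from item parameters and ability is the part that programs such as catcher automate); it is not the catcher implementation.

        # Generic omega-style answer-copying statistic (sketch of the general form only).
        import numpy as np

        def omega_statistic(copier_responses, source_responses, match_probabilities):
            # match_probabilities[i]: model-based probability that the copier would give
            # the source's observed response on item i (e.g., from a nominal response
            # model evaluated at the copier's estimated ability).
            c = np.asarray(copier_responses)
            s = np.asarray(source_responses)
            p = np.asarray(match_probabilities, dtype=float)
            observed = np.sum(c == s)            # observed matches
            expected = p.sum()                   # expected matches under the model
            sd = np.sqrt(np.sum(p * (1.0 - p)))
            return (observed - expected) / sd    # large positive values flag possible copying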

  13. The Effect of Misinformation on Item Discrimination Indices and Estimation Priorities of Multiple-Choice Test Scores.

    ERIC Educational Resources Information Center

    Lowry, Stephen R.

    A specially designed answer format was used for three tests in a college level agriculture class of 19 students to record responses to three things about each item: (1) the student's choice of the best answer; (2) the degree of certainty with which the answer was chosen; and (3) all the answer choices which the student was certain were incorrect.…

  14. Strategies for the Meaningful Evaluation of Multiple-Choice Assessments

    ERIC Educational Resources Information Center

    Chesbro, Robert

    2010-01-01

    Too many multiple-choice tests are administered without an evaluative component. Teachers often return student assessments or Scantron cards--computerized bubble forms--without review, assuming that the printing of the correct answer will suffice. However, a more constructivist approach to following up multiple-choice tests can make for more…

  15. The Role of Computer-Aided Assessment in Health Professional Education: A Comparison of Student Performance in Computer-Based and Paper-and-Pen Multiple-Choice Tests.

    ERIC Educational Resources Information Center

    Lee, Gary; Weerakoon, Patricia

    2001-01-01

    Investigates the differences in student performance between computerized and paper-and-pen multiple-choice tests and assesses the extent and effect of "computer anxiety." Uses the LXR.TEST 5.1 software for computerized tests and suggests that computer-based assessment could be used with confidence for the purposes of ranking students, but care…

  16. Quantitative Comparisons of Difficulty, Discrimination and Reliability of Machine-Scored Completion Items and Tests (in the MDT Un-Cued Answer-Bank Format) in Contrast with Statistics from Comparable Multiple Choice Questions: The First Round of Results.

    ERIC Educational Resources Information Center

    Anderson, Paul S.; Hyers, Albert D.

    Three descriptive statistics (difficulty, discrimination, and reliability) of multiple-choice (MC) test items were compared to those of a new (1980s) format of machine-scored questions. The new method, answer-bank multi-digit testing (MDT), uses alphabetized lists of up to 1,000 alternatives and approximates the completion style of assessment…

  17. Exploring Clinical Reasoning Strategies and Test-Taking Behaviors During Clinical Vignette Style Multiple-Choice Examinations: A Mixed Methods Study

    PubMed Central

    Heist, Brian Sanjay; Gonzalo, Jed David; Durning, Steven; Torre, Dario; Elnicki, David Michael

    2014-01-01

    Background Clinical vignette multiple-choice questions (MCQs) are widely used in medical education, but clinical reasoning (CR) strategies employed when approaching these questions have not been well described. Objectives The aims of the study were (1) to identify CR strategies and test-taking (TT) behaviors of physician trainees while solving clinical vignette MCQs; and (2) to examine the relationships between CR strategies and behaviors, and performance on a high-stakes clinical vignette MCQ examination. Methods Thirteen postgraduate year–1 level trainees completed 6 clinical vignette MCQs using a think-aloud protocol. Thematic analysis employing elements of grounded theory was performed on data transcriptions to identify CR strategies and TT behaviors. Participants' CR strategies and TT behaviors were then compared with their US Medical Licensing Examination Step 2 Clinical Knowledge scores. Results Twelve CR strategies and TT behaviors were identified. Individuals with low performance on Step 2 Clinical Knowledge demonstrated increased premature closure and increased faulty knowledge, and showed comparatively less ruling out of alternatives or admission of knowledge deficits. High performers on Step 2 Clinical Knowledge demonstrated increased ruling out of alternatives and admission of knowledge deficits, and less premature closure, faulty knowledge, or closure prior to reading the alternatives. Conclusions Different patterns of CR strategies and TT behaviors may be used by high and low performers during high-stakes clinical vignette MCQ examinations. PMID:26140123

  18. The Effect of Test Instructions, Test Anxiety, Defensiveness, and Confidence in Judgment on Guessing Behavior in Multiple-Choice Test Situations

    ERIC Educational Resources Information Center

    Bauer, David H.

    1971-01-01

    The results of this study suggest that instructions are a source of information to students about the testing environment that modifies their test-taking behavior. Individual students interpret the same instructions in different ways, and these differences, in turn, result in variations in behavior reflected in test scores. (Author)

  19. Simulation-based multiple-choice test assessment of clinical competence for large groups of medical students: a comparison of auscultation sound identification either with or without clinical context

    PubMed Central

    Nguyen, Diem Quyen; Patenaude, Jean Victor; Gagnon, Robert; Deligne, Benoit; Bouthillier, Isabelle

    2015-01-01

    Background Although simulation-based teaching is popular, high-fidelity, high-cost approaches may be unsuitable or unavailable for use with large groups. We designed a multiple-choice test for large groups of medical students to explore a low-cost approach in assessing clinical competence. We tested two different scenarios in assessing students' ability to identify heart and lung sounds: by hearing the sounds alone, or in an enhanced scenario where sounds are incorporated into clinical vignettes to give clinical context. Method The two-section test consists of multiple-choice questions with one best answer. In the first section, the student must identify 25 auscultation sounds from amongst a choice of 14 heart sounds and 11 lung sounds. The second section integrates these same sounds into clinical vignettes to provide clinical context. Students must either identify the illness or the next clinical step, choosing from four possible answers. Performances of 859 students were evaluated. Results The alpha coefficient of reliability is 0.54 and 0.76 respectively for the first and the second section. In the latter section there is a significant difference between the scores of first-, second-, and fourth-year students and residents, in contrast to the first-section scores. Conclusions A multiple-choice test to assess clinical competence based on simulated auscultation sounds incorporated into clinical vignettes allows us to differentiate between training levels and seems to be a valid assessment method suitable for a large-group format. PMID:26451229
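
    The reliability figures quoted above are alpha coefficients, which can be computed directly from an item-by-examinee score matrix. A minimal sketch with invented 0/1 data (not the study's data or code):

        # Hedged sketch: Cronbach's alpha for a dichotomously scored response
        # matrix (rows = examinees, columns = items). The toy data are invented.
        import numpy as np

        def cronbach_alpha(scores):
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]                          # number of items
            item_vars = scores.var(axis=0, ddof=1)       # per-item variances
            total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        toy = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 1]]
        print(round(cronbach_alpha(toy), 2))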

  20. Social Attribution TestMultiple Choice (SAT-MC) in Schizophrenia: Comparison with Community Sample and Relationship to Neurocognitive, Social Cognitive and Symptom Measures

    PubMed Central

    Bell, Morris D.; Fiszdon, Joanna M.; Greig, Tamasine C.; Wexler, Bruce E.

    2010-01-01

    This is the first report on the use of the Social Attribution Task – Multiple Choice (SAT-MC) to assess social cognitive impairments in schizophrenia. The SAT-MC was originally developed for autism research, and consists of a 64-second animation showing geometric figures enacting a social drama, with 19 multiple choice questions about the interactions. Responses from 85 community-dwelling participants and 66 participants with SCID confirmed schizophrenia or schizoaffective disorders (Scz) revealed highly significant group differences. When the two samples were combined, SAT-MC scores were significantly correlated with other social cognitive measures, including measures of affect recognition, theory of mind, self-report of egocentricity and the Social Cognition Index from the MATRICS battery. Using a cut-off score, 53% of Scz were significantly impaired on SAT-MC compared with 9% of the community sample. Most Scz participants with impairment on SAT-MC also had impairment on affect recognition. Significant correlations were also found with neurocognitive measures but with less dependence on verbal processes than other social cognitive measures. Logistic regression using SAT-MC scores correctly classified 75% of both samples. Results suggest that this measure may have promise, but alternative versions will be needed before it can be used in pre-post or longitudinal designs. PMID:20400268

  1. Social attribution test--multiple choice (SAT-MC) in schizophrenia: comparison with community sample and relationship to neurocognitive, social cognitive and symptom measures.

    PubMed

    Bell, Morris D; Fiszdon, Joanna M; Greig, Tamasine C; Wexler, Bruce E

    2010-09-01

    This is the first report on the use of the Social Attribution Task - Multiple Choice (SAT-MC) to assess social cognitive impairments in schizophrenia. The SAT-MC was originally developed for autism research, and consists of a 64-second animation showing geometric figures enacting a social drama, with 19 multiple choice questions about the interactions. Responses from 85 community-dwelling participants and 66 participants with SCID confirmed schizophrenia or schizoaffective disorders (Scz) revealed highly significant group differences. When the two samples were combined, SAT-MC scores were significantly correlated with other social cognitive measures, including measures of affect recognition, theory of mind, self-report of egocentricity and the Social Cognition Index from the MATRICS battery. Using a cut-off score, 53% of Scz were significantly impaired on SAT-MC compared with 9% of the community sample. Most Scz participants with impairment on SAT-MC also had impairment on affect recognition. Significant correlations were also found with neurocognitive measures but with less dependence on verbal processes than other social cognitive measures. Logistic regression using SAT-MC scores correctly classified 75% of both samples. Results suggest that this measure may have promise, but alternative versions will be needed before it can be used in pre-post or longitudinal designs. PMID:20400268

  2. [Continuing medical education: how to write multiple choice questions].

    PubMed

    Soler Fernández, R; Méndez Díaz, C; Rodríguez García, E

    2013-06-01

    Evaluating professional competence in medicine is a difficult but indispensable task because it makes it possible to evaluate, at different times and from different perspectives, the extent to which the knowledge, skills, and values required for exercising the profession have been acquired. Tests based on multiple choice questions have been and continue to be among the most useful tools for objectively evaluating learning in medicine. When these tests are well designed and correctly used, they can stimulate learning and even measure higher cognitive skills. Designing a multiple choice test is a difficult task that requires knowledge of the material to be tested and of the methodology of test preparation as well as time to prepare the test. The aim of this article is to review what can be evaluated through multiple choice tests, the rules and guidelines that should be taken into account when writing multiple choice questions, the different formats that can be used, the most common errors in elaborating multiple choice tests, and how to analyze the results of the test to verify its quality. PMID:23489769

  3. A Close Look at the Relationship between Multiple Choice Vocabulary Test and Integrative Cloze Test of Lexical Words in Iranian Context

    ERIC Educational Resources Information Center

    Ajideh, Parviz

    2009-01-01

    In spite of the various definitions provided for it, language proficiency has always been a difficult concept to define and realize. However, what all the definitions of this elusive concept have in common is that language tests should seek to test the learners' ability to use real-life language. The best type of test to show such ability is…

  4. Science Library of Test Items. Volume Twenty-One. A Collection of Multiple Choice Test Items Relating Mainly to Physics, 2.

    ERIC Educational Resources Information Center

    New South Wales Dept. of Education, Sydney (Australia).

    As one in a series of test item collections developed by the Assessment and Evaluation Unit of the Directorate of Studies, items are made available to teachers for the construction of unit tests or term examinations or as a basis for class discussion. Each collection was reviewed for content validity and reliability. The test items meet syllabus…

  5. Making consent more informed: preliminary results from a multiple-choice test among probation-referred marijuana users entering a randomized clinical trial.

    PubMed

    Rounsaville, Daniel B; Hunkele, Karen; Easton, Caroline J; Nich, Charla; Carroll, Kathleen M

    2008-01-01

    Although individuals who use illicit drugs are a potentially vulnerable population, there have been no objective evaluations of the effectiveness of standard informed consent procedures in assuring that prospective participants entering drug abuse treatment trials fully understand the nature of the research and treatments in which they have agreed to participate. Young, marijuana-dependent adults referred by the criminal justice system who were enrolling in a randomized treatment trial were asked to complete a multiple-choice quiz concerning basic elements of the trial before providing written informed consent. Participants were assigned to standard drug counseling or motivational interviewing/skills-building therapy, delivered alone or with incentives for attending sessions and submitting marijuana-free urine specimens. Only 55 percent of the 130 participants correctly answered all four questions, and 20 percent incorrectly answered a question concerning their right to refuse to participate. An unexpected finding was that quiz scores were modestly associated with marijuana use outcome measures. These preliminary findings highlight the importance of systematically evaluating the understanding of research participants, particularly those in vulnerable populations, of their rights and key aspects of the trials in which they agree to participate. PMID:18802184

  6. Tests of Anaphoric Reference--Multiple Choice Format (TAR-MC) and Tests of Anaphoric Reference--Cloze Format (TAR-C).

    ERIC Educational Resources Information Center

    Miller, Larry A.

    Designed to investigate how the beginning reader understands the antecedent/anaphora relationship in written discourse, this test contains four stories which were drawn from basal readers and modified so that the categories of pronouns were represented in proportion to their occurrence in the basal readers. Stories were further modified so that…

  7. Multiple choice questions: their value as an assessment tool.

    PubMed

    Moss, E

    2001-12-01

    Multiple choice questions are a well-established, reliable method of assessing knowledge and are used widely in postgraduate examinations in anaesthesiology. Like other methods of assessment they have their strengths and weaknesses. With the drive for revalidation and changes in undergraduate medical education much work has been done on devising valid, reliable and feasible methods of assessment of clinical practice including the need for the use of several different methods. Different multiple choice question formats have been devised and the importance of well-written multiple choice questions with clear assessment objectives recognized. There is controversy about the use of number-right as opposed to negative marking but, provided that the candidates know which marking system is being used, either method is satisfactory. The pass standard should be determined using criterion-based rather than norm-based referencing. Multiple choice questions could be used to validate continuing education and professional development from reading, possibly using web-based technology. For as long as there is a need to test knowledge in the assessment of doctors and medical undergraduates multiple choice questions will have a part to play, but only as one component of the assessment of clinical competence. PMID:17019162

  8. Approaches to data analysis of multiple-choice questions

    NASA Astrophysics Data System (ADS)

    Ding, Lin; Beichner, Robert

    2009-12-01

    This paper introduces five commonly used approaches to analyzing multiple-choice test data. They are classical test theory, factor analysis, cluster analysis, item response theory, and model analysis. Brief descriptions of the goals and algorithms of these approaches are provided, together with examples illustrating their applications in physics education research. We minimize mathematics, instead placing emphasis on data interpretation using these approaches.

  9. Scores Based on Dangerous Responses to Multiple-Choice Items.

    ERIC Educational Resources Information Center

    Grosse, Martin E.

    1986-01-01

    Scores based on the number of correct answers were compared with scores based on dangerous responses to items in the same multiple choice test developed by the American Board of Orthopaedic Surgery. Results showed construct validity for both sets of scores. However, the two scores were redundant when evaluated by correlation coefficients. (Author/JAZ)

  10. Further Support for Changing Multiple-Choice Answers.

    ERIC Educational Resources Information Center

    Fabrey, Lawrence J.; Case, Susan M.

    1985-01-01

    The effect on test scores of changing answers to multiple-choice questions was studied and compared to earlier research. The current setting was a nationally administered, in-training, specialty examination for medical residents in obstetrics and gynecology. Both low and high scorers improved their scores when they changed answers. (SW)

  11. Using Multiple Choice Examination Items To Measure Teachers' Content-Specific Pedagogical Knowledge.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Renfrow, Donata D.

    The use of multiple-choice test items measuring content-specific pedagogical knowledge (C-P) as a viable method of increasing the validity of teacher tests is described. The purposes of the paper are to: (1) present examples of multiple-choice test items used for the assessment of C-P and contrast these items with items used for assessing content…

  12. A Diagnostic Study of Pre-Service Teachers' Competency in Multiple-Choice Item Development

    ERIC Educational Resources Information Center

    Asim, Alice E.; Ekuri, Emmanuel E.; Eni, Eni I.

    2013-01-01

    Large class size is an issue in testing at all levels of education. As a panacea to this, multiple choice test formats have become very popular. This case study was designed to diagnose pre-service teachers' competency in constructing questions (IQT); direct questions (DQT); and best answer (BAT) varieties of multiple choice items. Subjects were 88…

  13. Development and Application of a Two-Tier Multiple-Choice Diagnostic Test for High School Students' Understanding of Cell Division and Reproduction

    ERIC Educational Resources Information Center

    Sesli, Ertugrul; Kara, Yilmaz

    2012-01-01

    This study involved the development and application of a two-tier diagnostic test for measuring students' understanding of cell division and reproduction. The instrument development procedure had three general steps: defining the content boundaries of the test, collecting information on students' misconceptions, and instrument development.…

  14. Using Multiple-Choice Questions to Evaluate In-Depth Learning of Economics

    ERIC Educational Resources Information Center

    Buckles, Stephen; Siegfried, John J.

    2006-01-01

    Multiple-choice questions are the basis of a significant portion of assessment in introductory economics courses. However, these questions, as found in course assessments, test banks, and textbooks, often fail to evaluate students' abilities to use and apply economic analysis. The authors conclude that multiple-choice questions can be used to…

  15. Mind the Red Herrings--Deliberate Distraction of Pupil's Strategies Solving Multiple Choice Questions in Chemistry.

    ERIC Educational Resources Information Center

    Schmidt, Hans-Jurgen

    This study assumes that multiple choice test items generally provide the testee with several solutions, one of which is correct and the others of which are wrong. If pupils are unable to answer a question, one would expect that the wrong choices have equal chances of being selected. In many multiple choice items on stoichiometric calculation which…

  16. The Effects of Item Preview on Video-Based Multiple-Choice Listening Assessments

    ERIC Educational Resources Information Center

    Koyama, Dennis; Sun, Angela; Ockey, Gary J.

    2016-01-01

    Multiple-choice formats remain a popular design for assessing listening comprehension, yet no consensus has been reached on how multiple-choice formats should be employed. Some researchers argue that test takers must be provided with a preview of the items prior to the input (Buck, 1995; Sherman, 1997); others argue that a preview may decrease the…

  17. The Impact of Judges' Consensus on the Accuracy of Anchor-Based Judgmental Estimates of Multiple-Choice Test Item Difficulty: The Case of the NATABOC Examination

    ERIC Educational Resources Information Center

    DiBartolomeo, Matthew

    2010-01-01

    Multiple factors have influenced testing agencies to more carefully consider the manner and frequency in which pretest item data are collected and analyzed. One potentially promising development is the use of judges' estimates of item difficulty. Accurate estimates of item difficulty may be used to reduce pretest sample sizes, supplement insufficient…

  18. End-of-Course (EOC) Multiple-Choice Test Results, 2009-10. Measuring Up. E&R Report No. 10.21

    ERIC Educational Resources Information Center

    Haynie, Glenda

    2011-01-01

    End-of-Course (EOC) tests are given statewide in selected courses typically taken in high school. Results for 2009-10 (and prior years, where available) are reported in terms of both average scale scores and the percentage of students scoring proficient. For the first time in 2009-10, all students who scored at Level II on EOCs were retested.…

  19. Components of Answers to Multiple-Choice Questions on a Published Reading Comprehension Test: An Application of the Hanna-Oaster Approach.

    ERIC Educational Resources Information Center

    Entin, Eileen B.; Klare, George B.

    1980-01-01

    An approach to assessing context dependence was applied to data from the Nelson-Denny Reading Test. The results suggest that scores on the difficult passages are inflated because the examinees can answer the questions without having to comprehend the passage. (MKM)

  20. Using a Classroom Response System to Improve Multiple-Choice Performance in AP® Physics

    NASA Astrophysics Data System (ADS)

    Bertrand, Peggy

    2009-04-01

    Participation in rigorous high school courses such as Advanced Placement (AP®) Physics increases the likelihood of college success, especially for students who are traditionally underserved. Tackling difficult multiple-choice exams should be part of any AP program because well-constructed multiple-choice questions, such as those on AP exams and on the Force Concept Inventory, are particularly good at rooting out common and persisting student misconceptions. Additionally, there are barriers to multiple-choice performance that have little to do with content mastery. For example, a student might fail to read the question thoroughly, forget to apply a reasonableness test to the answer, or simply work too slowly.

  1. Genetic Algorithms for Multiple-Choice Problems

    NASA Astrophysics Data System (ADS)

    Aickelin, Uwe

    2010-04-01

    This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.
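
    The thesis itself is not reproduced here, so the sketch below is only a toy illustration of one idea the abstract names, a repair scheme that restores feasibility after crossover and mutation, applied to an invented multiple-choice assignment problem (choose one option per slot, no option used more than CAP times, maximise total value). Data, operators, and parameters are all illustrative, not the thesis's actual encoding.

        # Hedged sketch of a genetic algorithm with a repair scheme for a toy
        # multiple-choice assignment problem. Everything here is illustrative.
        import random

        random.seed(1)
        N, K, CAP = 8, 4, 3
        value = [[random.randint(1, 9) for _ in range(K)] for _ in range(N)]

        def repair(chrom):
            # Move choices away from over-used options to the best feasible one.
            counts = [chrom.count(k) for k in range(K)]
            for i, k in enumerate(chrom):
                if counts[k] > CAP:
                    feasible = [j for j in range(K) if counts[j] < CAP]
                    best = max(feasible, key=lambda j: value[i][j])
                    counts[k] -= 1
                    counts[best] += 1
                    chrom[i] = best
            return chrom

        def fitness(chrom):
            return sum(value[i][k] for i, k in enumerate(chrom))

        def crossover(a, b):
            cut = random.randrange(1, N)
            return repair(a[:cut] + b[cut:])

        def mutate(chrom):
            chrom[random.randrange(N)] = random.randrange(K)
            return repair(chrom)

        pop = [repair([random.randrange(K) for _ in range(N)]) for _ in range(30)]
        for _ in range(50):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]
            pop = parents + [mutate(crossover(random.choice(parents),
                                              random.choice(parents)))
                             for _ in range(20)]
        print(max(map(fitness, pop)), max(pop, key=fitness))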

  2. Auditory Comprehension: Is Multiple Choice Really Good Enough?

    ERIC Educational Resources Information Center

    Breese, Elisabeth L.; Hillis, Argye E.

    2004-01-01

    Auditory comprehension is commonly measured with multiple choice tasks. The sensitivity of these tasks in identifying deficits, however, is limited by credit given for correct guesses by forced choice. In this study, we compare performance on the multiple choice task to an alternative word/picture verification task, in 122 subjects with acute left…

  3. Making sense of students' answers to multiple-choice questions

    NASA Astrophysics Data System (ADS)

    Dufresne, Robert J.; Leonard, William J.; Gerace, William J.

    2002-03-01

    A detailed example is used to illustrate the difficulties of making sense of students' answers to multiple-choice questions. We explore how correct answers can be false indicators of student knowledge and understanding. We caution against excessive reliance on multiple-choice questions for instructional decisions.

  4. Evaluating Multiple-Choice Exams in Large Introductory Physics Courses

    ERIC Educational Resources Information Center

    Scott, Michael; Stelzer, Tim; Gladding, Gary

    2006-01-01

    The reliability and validity of professionally written multiple-choice exams have been extensively studied for exams such as the SAT, graduate record examination, and the force concept inventory. Much of the success of these multiple-choice exams is attributed to the careful construction of each question, as well as each response. In this study,…

  5. Nested Logit Models for Multiple-Choice Item Response Data

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Bolt, Daniel M.

    2010-01-01

    Nested logit item response models for multiple-choice data are presented. Relative to previous models, the new models are suggested to provide a better approximation to multiple-choice items where the application of a solution strategy precedes consideration of response options. In practice, the models also accommodate collapsibility across all…

  6. Benford's Law: textbook exercises and multiple-choice testbanks.

    PubMed

    Slepkov, Aaron D; Ironside, Kevin B; DiBattista, David

    2015-01-01

    Benford's Law describes the finding that the distribution of leading (or leftmost) digits of innumerable datasets follows a well-defined logarithmic trend, rather than an intuitive uniformity. In practice this means that the most common leading digit is 1, with an expected frequency of 30.1%, and the least common is 9, with an expected frequency of 4.6%. Currently, the most common application of Benford's Law is in detecting number invention and tampering such as found in accounting-, tax-, and voter-fraud. We demonstrate that answers to end-of-chapter exercises in physics and chemistry textbooks conform to Benford's Law. Subsequently, we investigate whether this fact can be used to gain advantage over random guessing in multiple-choice tests, and find that while testbank answers in introductory physics closely conform to Benford's Law, the testbank is nonetheless secure against such a Benford's attack for banal reasons. PMID:25689468
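
    Benford's expected frequency for leading digit d is log10(1 + 1/d), which gives the 30.1% and 4.6% figures quoted above. A small sketch of the kind of leading-digit tally involved, using invented answer values rather than any textbook's:

        # Hedged sketch: compare leading-digit frequencies of a set of numeric
        # answers against Benford's Law, P(d) = log10(1 + 1/d). The sample
        # answers below are invented, not taken from any textbook.
        import math
        from collections import Counter

        def leading_digit(x):
            s = f"{abs(x):e}"          # scientific notation, e.g. '3.210000e+02'
            return int(s[0])

        answers = [321.0, 0.0045, 98.1, 1.6e-19, 2.99e8, 7.2, 0.13, 5400.0, 66.0, 1.01]
        counts = Counter(leading_digit(a) for a in answers)

        for d in range(1, 10):
            expected = math.log10(1 + 1 / d)
            observed = counts[d] / len(answers)
            print(d, f"benford={expected:.3f}", f"observed={observed:.3f}")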

  7. Multiple-Choice Exams: An Obstacle for Higher-Level Thinking in Introductory Science Classes

    ERIC Educational Resources Information Center

    Stanger-Hall, Kathrin F.

    2012-01-01

    Learning science requires higher-level (critical) thinking skills that need to be practiced in science classes. This study tested the effect of exam format on critical-thinking skills. Multiple-choice (MC) testing is common in introductory science courses, and students in these classes tend to associate memorization with MC questions and may not…

  8. [Multiple choice questionnaires: use in electronic group teaching].

    PubMed

    Gilles, J L; Detroz, P; Bourguignon, J P

    2000-12-01

    Based on a continuing education session on childhood and adolescent diabetes, the strategy of electronic vote system is presented and illustrated using multiple choice questions given in the previous article. PMID:11205191

  9. Usage of Multiple-Choice Examinations in Chemical Engineering.

    ERIC Educational Resources Information Center

    Sommerfeld, Jude T.

    1981-01-01

    Discusses rationale for and use of multiple choice examinations in material balances, unit operations, reactor design, and process control courses. Describes computer scoring of student reaction to, and future plans for these examinations. (SK)

  10. Multiple-Choice and Alternate-Choice Questions: Description and Analysis.

    ERIC Educational Resources Information Center

    Dowd, Steven B.

    An alternative to multiple-choice (MC) testing is suggested as it pertains to the field of radiologic technology education. General principles for writing MC questions are given and contrasted with a new type of MC question, the alternate-choice (AC) question, in which the answer choices are embedded in the question in a short form that resembles…

  11. A Self-Correcting Approach to Multiple-Choice Exams Improves Students' Learning

    ERIC Educational Resources Information Center

    Grühn, Daniel; Cheng, Yanhua

    2014-01-01

    Montepare suggested the use of a self-correcting approach to multiple-choice tests: Students first take the exam as usual, but are allowed to hand in a self-corrected version afterwards. The idea of this approach is that the additional interaction with the material may foster further learning. To examine whether such an approach actually improves…

  12. Acceptance and Accuracy of Multiple Choice, Confidence-Level, and Essay Question Formats for Graduate Students

    ERIC Educational Resources Information Center

    Swartz, Stephen M.

    2006-01-01

    The confidence level (information-referenced testing; IRT) design is an attempt to improve upon the multiple choice format by allowing students to express a level of confidence in the answers they choose. In this study, the author evaluated student perceptions of the ease of use and accuracy of and general preference for traditional multiple…

  13. A Better Benchmark Assessment: Multiple-Choice versus Project-Based

    ERIC Educational Resources Information Center

    Peariso, Jamon F.

    2006-01-01

    The purpose of this literature review and Ex Post Facto descriptive study was to determine which type of benchmark assessment, multiple-choice or project-based, provides the best indication of general success on the history portion of the CST (California Standards Tests). The result of the study indicates that although the project-based benchmark…

  14. Multiple Choice Questions Can Be Designed or Revised to Challenge Learners' Critical Thinking

    ERIC Educational Resources Information Center

    Tractenberg, Rochelle E.; Gushta, Matthew M.; Mulroney, Susan E.; Weissinger, Peggy A.

    2013-01-01

    Multiple choice (MC) questions from a graduate physiology course were evaluated by cognitive-psychology (but not physiology) experts, and analyzed statistically, in order to test the independence of content expertise and cognitive complexity ratings of MC items. Integration of higher order thinking into MC exams is important, but widely known to…

  15. A Response Model for Multiple Choice Items. Psychometric Technical Report No. 1.

    ERIC Educational Resources Information Center

    Thissen, David; Steinberg, Lynne

    An extension of the Bock-Samejima model for multiple choice items is introduced. The model provides for varying probabilities of the response alternative when the examinee guesses. A marginal maximum likelihood method is devised for estimating the item parameters, and likelihood ratio tests for comparing more and less constrained forms of the…

  16. Pick-N Multiple Choice-Exams: A Comparison of Scoring Algorithms

    ERIC Educational Resources Information Center

    Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R.

    2011-01-01

    To compare different scoring algorithms for Pick-N multiple correct answer multiple-choice (MC) exams regarding test reliability, student performance, total item discrimination and item difficulty. Data from six end-of-term exams in internal medicine taken by third-year medical students between 2005 and 2008 at Munich University were analysed (1,255 students,…
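
    The record does not reproduce the scoring algorithms that were compared. For orientation, two commonly used rules for Pick-N items are an all-or-nothing rule and a proportional partial-credit rule with a penalty for wrong picks; the sketch below illustrates those generic rules, not necessarily the ones analysed in the study.

        # Hedged sketch: two generic scoring rules for a Pick-N item whose
        # correct option set is known. These are common textbook rules, not
        # necessarily the specific algorithms compared in the study.
        def all_or_nothing(selected, correct):
            return 1.0 if set(selected) == set(correct) else 0.0

        def partial_credit(selected, correct, n_options):
            # +1/len(correct) per correct pick, -1/(n_options - len(correct))
            # per wrong pick, floored at zero.
            sel, cor = set(selected), set(correct)
            gain = len(sel & cor) / len(cor)
            loss = len(sel - cor) / (n_options - len(cor))
            return max(0.0, gain - loss)

        print(all_or_nothing({"A", "C"}, {"A", "C"}))                # 1.0
        print(partial_credit({"A", "B"}, {"A", "C"}, n_options=5))   # ~0.17 (0.5 credit - 0.33 penalty)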

  17. Using cognitive models to develop quality multiple-choice questions.

    PubMed

    Pugh, Debra; De Champlain, Andre; Gierl, Mark; Lai, Hollis; Touchie, Claire

    2016-08-01

    With the recent interest in competency-based education, educators are being challenged to develop more assessment opportunities. As such, there is increased demand for exam content development, which can be a very labor-intense process. An innovative solution to this challenge has been the use of automatic item generation (AIG) to develop multiple-choice questions (MCQs). In AIG, computer technology is used to generate test items from cognitive models (i.e. representations of the knowledge and skills that are required to solve a problem). The main advantage yielded by AIG is the efficiency in generating items. Although technology for AIG relies on a linear programming approach, the same principles can also be used to improve traditional committee-based processes used in the development of MCQs. Using this approach, content experts deconstruct their clinical reasoning process to develop a cognitive model which, in turn, is used to create MCQs. This approach is appealing because it: (1) is efficient; (2) has been shown to produce items with psychometric properties comparable to those generated using a traditional approach; and (3) can be used to assess higher order skills (i.e. application of knowledge). The purpose of this article is to provide a novel framework for the development of high-quality MCQs using cognitive models. PMID:26998566
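
    As a toy illustration of the template idea behind AIG, the sketch below fills numeric slots in an item model, computes the key from the intended reasoning, and builds distractors from hypothetical common-error rules. The clinical content, numbers, and error rules are invented purely to show the mechanics, not taken from the article.

        # Hedged sketch of template-based item generation: one item model with
        # numeric slots, a key computed from the intended reasoning, and
        # distractors produced by hypothetical "common error" rules.
        import random

        random.seed(3)
        stem = ("A patient weighing {weight} kg is prescribed a drug at "
                "{dose} mg/kg. What total dose should be given?")

        def generate_item():
            weight = random.choice([50, 60, 70, 80])
            dose = random.choice([2, 4, 5])
            key = weight * dose
            distractors = {weight + dose,      # error rule: added instead of multiplied
                           weight * dose * 2,  # error rule: doubled the dose
                           weight}             # error rule: ignored the mg/kg rate
            options = sorted(distractors | {key})
            return stem.format(weight=weight, dose=dose), options, key

        question, options, key = generate_item()
        print(question)
        print("options:", options, "key:", key)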

  18. Item analysis of in use multiple choice questions in pharmacology

    PubMed Central

    Kaur, Mandeep; Singla, Shweta; Mahajan, Rajiv

    2016-01-01

    Background: Multiple choice questions (MCQs) are a common method of assessment of medical students. The quality of MCQs is determined by three parameters such as difficulty index (DIF I), discrimination index (DI), and distracter efficiency (DE). Objectives: The objective of this study is to assess the quality of MCQs currently in use in pharmacology and discard the MCQs which are not found useful. Materials and Methods: A class test of central nervous system unit was conducted in the Department of Pharmacology. This test comprised 50 MCQs/items and 150 distracters. A correct response to an item was awarded one mark with no negative marking for incorrect response. Each item was analyzed for three parameters such as DIF I, DI, and DE. Results: DIF of 38 (76%) items was in the acceptable range (P = 30–70%), 11 (22%) items were too easy (P > 70%), and 1 (2%) item was too difficult (P < 30%). DI of 31 (62%) items was excellent (d > 0.35), of 12 (24%) items was good (d = 0.20–0.34), and of 7 (14%) items was poor (d < 0.20). A total of 50 items had 150 distracters. Among these, 27 (18%) were nonfunctional distracters (NFDs) and 123 (82%) were functional distracters. Items with one NFD were 11 and with two NFDs were 8. Based on these parameters, 6 items were discarded, 17 were revised, and 27 were kept for subsequent use. Conclusion: Item analysis is a valuable tool as it helps us to retain the valuable MCQs and discard the items which are not useful. It also helps in increasing our skills in test construction and identifies the specific areas of course content which need greater emphasis or clarity. PMID:27563581
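
    The three indices reported above can be computed directly from an item's response vector. The sketch below uses invented responses and common conventions (upper/lower 27% groups for DI, the <5% selection rule for a non-functional distracter); the study's exact computational choices are not given in the record.

        # Hedged sketch: difficulty index (DIF I), discrimination index (DI),
        # and non-functional distracter count for a single item, using common
        # conventions rather than the study's exact rules. Data are invented.
        from collections import Counter

        def item_stats(choices, total_scores, key, options, split=0.27):
            n = len(choices)
            dif = 100.0 * sum(c == key for c in choices) / n
            ranked = sorted(range(n), key=lambda i: total_scores[i])
            g = max(1, int(round(split * n)))
            lower, upper = ranked[:g], ranked[-g:]
            di = (sum(choices[i] == key for i in upper)
                  - sum(choices[i] == key for i in lower)) / g
            counts = Counter(choices)
            nfd = sum(1 for o in options if o != key and counts[o] < 0.05 * n)
            return dif, di, nfd

        choices = list("ABAAACADAABAAACAADAA")       # invented responses
        scores  = [14, 6, 12, 15, 11, 5, 13, 7, 16, 9,
                   4, 12, 18, 10, 6, 17, 11, 8, 15, 13]
        print(item_stats(choices, scores, key="A", options="ABCD"))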

  19. Challenging Multiple-Choice Questions to Engage Critical Thinking

    ERIC Educational Resources Information Center

    Kerkman, Dennis D.; Johnson, Andrew T.

    2014-01-01

    This article examines a technique for engaging critical thinking on multiple-choice exams. University students were encouraged to "challenge" the validity of any exam question they believed to be unfair (e.g., more than one equally correct answer, ambiguous wording, etc.). The number of valid challenges a student wrote was a better…

  20. Analyzing Student Confidence in Classroom Voting with Multiple Choice Questions

    ERIC Educational Resources Information Center

    Stewart, Ann; Storm, Christopher; VonEpps, Lahna

    2013-01-01

    The purpose of this paper is to present results of a recent study in which students voted on multiple choice questions in mathematics courses of varying levels. Students used clickers to select the best answer among the choices given; in addition, they were also asked whether they were confident in their answer. In this paper we analyze data…

  1. Initial Correction versus Negative Marking in Multiple Choice Examinations

    ERIC Educational Resources Information Center

    Van Hecke, Tanja

    2015-01-01

    Optimal assessment tools should measure in a limited time the knowledge of students in a correct and unbiased way. A method for automating the scoring is multiple choice scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: the number right scoring, the initial correction (IC) and…
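
    The record is truncated before the scoring rules are spelled out, but the classical contrast it builds on is easy to illustrate: under number-right scoring a blind guess on a k-option item has positive expected value, whereas under the standard formula-scoring penalty of 1/(k-1) the expectation is zero. The sketch below uses these textbook formulas, not the article's specific initial-correction model.

        # Hedged sketch: expected score per item for a pure guesser under
        # number-right scoring versus classical formula scoring (penalty of
        # 1/(k-1) for a wrong answer). Textbook formulas only.
        def expected_guess_score(k, penalty):
            p_correct = 1.0 / k
            return p_correct * 1.0 + (1 - p_correct) * (-penalty)

        for k in (3, 4, 5):
            nr = expected_guess_score(k, penalty=0.0)            # number right
            fs = expected_guess_score(k, penalty=1.0 / (k - 1))  # formula scoring
            print(f"k={k}: number-right={nr:.3f}, formula-scoring={fs:.3f}")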

  2. Using the Multiple Choice Procedure to Measure College Student Gambling

    ERIC Educational Resources Information Center

    Butler, Leon Harvey

    2010-01-01

    Research suggests that gambling is similar to addictive behaviors such as substance use. In the current study, gambling was investigated from a behavioral economics perspective. The Multiple Choice Procedure (MCP) with gambling as the target behavior was used to assess for relative reinforcing value, the effect of alternative reinforcers, and…

  3. Graded Multiple Choice Questions: Rewarding Understanding and Preventing Plagiarism

    NASA Astrophysics Data System (ADS)

    Denyer, G. S.; Hancock, D.

    2002-08-01

    This paper describes an easily implemented method that allows the generation and analysis of graded multiple-choice examinations. The technique, which uses standard functions in user-end software (Microsoft Excel 5+), can also produce several different versions of an examination that can be employed to prevent the reward of plagiarism. The manuscript also discusses the advantages of having a graded marking system for the elimination of ambiguities, use in multi-step calculation questions, and questions that require extrapolation or reasoning. The advantages of the scrambling strategy, which maintains the same question order, are discussed with reference to student equity. The system provides a non-confrontational mechanism for dealing with cheating in large-class multiple-choice examinations, as well as providing a reward for problem solving over surface learning.

  4. Benford's law and distractors in multiple choice exams

    NASA Astrophysics Data System (ADS)

    Hoppe, Fred M.

    2016-05-01

    Suppose that in a multiple choice examination the leading significant digit of the correct options follows Benford's Law, while the leading digit of the distractors is uniform. Consider a strategy for guessing at answers that selects the option with the lowest leading digit with ties broken at random. We provide an expression for both the probability that this strategy selects the correct option and also the generalization to the probability of selecting the option with the lowest r significant digit string.
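
    The closed-form expression is not reproduced in the record, but the setting it describes is easy to simulate: draw the correct option's leading digit from Benford's distribution, the distractors' from a uniform distribution, and guess the option with the lowest leading digit, breaking ties at random. A Monte Carlo sketch under exactly those stated assumptions:

        # Hedged sketch: Monte Carlo estimate of the success rate of the
        # "pick the option with the lowest leading digit" guessing strategy,
        # assuming a Benford-distributed correct option and uniform distractors
        # (the setting described in the abstract; the closed form is not used).
        import math
        import random

        random.seed(0)
        benford_p = [math.log10(1 + 1 / d) for d in range(1, 10)]

        def trial(n_options=4):
            correct = random.choices(range(1, 10), weights=benford_p)[0]
            distractors = [random.randint(1, 9) for _ in range(n_options - 1)]
            options = [("correct", correct)] + [("wrong", d) for d in distractors]
            low = min(v for _, v in options)
            picked = random.choice([tag for tag, v in options if v == low])
            return picked == "correct"

        n = 100_000
        print(sum(trial() for _ in range(n)) / n)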

  5. Delayed, but not immediate, feedback after multiple-choice questions increases performance on a subsequent short-answer, but not multiple-choice, exam: evidence for the dual-process theory of memory.

    PubMed

    Sinha, Neha; Glass, Arnold Lewis

    2015-01-01

    Three experiments, two performed in the laboratory and one embedded in a college psychology lecture course, investigated the effects of immediate versus delayed feedback following a multiple-choice exam on subsequent short answer and multiple-choice exams. Performance on the subsequent multiple-choice exam was not affected by the timing of the feedback on the prior exam; however, performance on the subsequent short answer exam was better following delayed than following immediate feedback. This was true regardless of the order in which immediate versus delayed feedback was given. Furthermore, delayed feedback only had a greater effect than immediate feedback on subsequent short answer performance following correct, confident responses on the prior exam. These results indicate that delayed feedback cues a student's prior response and increases subsequent recollection of that response. The practical implication is that delayed feedback is better than immediate feedback during academic testing. PMID:25832741

  6. The Effect of the Multiple-Choice Item Format on the Measurement of Knowledge of Language Structure

    ERIC Educational Resources Information Center

    Currie, Michael; Chiramanee, Thanyapa

    2010-01-01

    Noting the widespread use of multiple-choice items in tests in English language education in Thailand, this study compared their effect against that of constructed-response items. One hundred and fifty-two university undergraduates took a test of English structure first in constructed-response format, and later in three, stem-equivalent…

  7. Assessing students' abilities to construct and interpret line graphs: Disparities between multiple-choice and free-response instruments

    NASA Astrophysics Data System (ADS)

    Berg, Craig A.; Smith, Philip

    The author is concerned about the methodology and instrumentation used to assess both graphing abilities and the impact of microcomputer-based laboratories (MBL) on students' graphing abilities for four reasons: (1) the ability to construct and interpret graphs is critical for developing key ideas in science; (2) science educators need to have valid information for making teaching decisions; (3) educators and researchers are heralding the arrival of MBL as a tool for developing graphing abilities; and (4) some of the research which supports using MBL appears to have significant validity problems. In this article, the author will describe the research which challenges the validity of using multiple-choice instruments to assess graphing abilities. The evidence from this research will identify numerous disparities between the results of multiple-choice and free-response instruments. In the first study, 72 subjects in the seventh, ninth, and eleventh grades were administered individual clinical interviews to assess their ability to construct and interpret graphs. A wide variety of graphs and situations were assessed. In three instances during the interview, students drew a graph that would best represent a situation and then explained their drawings. The results of these clinical graphing interviews were very different from similar questions assessed through multiple-choice formats in other research studies. In addition, insights into students' thinking about graphing reveal that some multiple-choice graphing questions from prior research studies and standardized tests do not discriminate between right answers/right reasons, right answers/wrong reasons, and answers scored wrong but correct for valid reasons. These results indicate that in some instances multiple-choice questions are not a valid measure of graphing abilities. In a second study, the researcher continued to pursue the questions raised about the validity of multiple-choice tests to assess graphing

  8. Polytomous versus Dichotomous Scoring on Multiple-Choice Examinations: Development of a Rubric for Rating Partial Credit

    ERIC Educational Resources Information Center

    Grunert, Megan L.; Raker, Jeffrey R.; Murphy, Kristen L.; Holme, Thomas A.

    2013-01-01

    The concept of assigning partial credit on multiple-choice test items is considered for items from ACS Exams. Because the items on these exams, particularly the quantitative items, use common student errors to define incorrect answers, it is possible to assign partial credits to some of these incorrect responses. To do so, however, it becomes…

  9. The Development of Multiple-Choice Items Consistent with the AP Chemistry Curriculum Framework to More Accurately Assess Deeper Understanding

    ERIC Educational Resources Information Center

    Domyancich, John M.

    2014-01-01

    Multiple-choice questions are an important part of large-scale summative assessments, such as the advanced placement (AP) chemistry exam. However, past AP chemistry exam items often lacked the ability to test conceptual understanding and higher-order cognitive skills. The redesigned AP chemistry exam shows a distinctive shift in item types toward…

  10. An Empirical Comparison of DDF Detection Methods for Understanding the Causes of DIF in Multiple-Choice Items

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Talley, Anna E.

    2015-01-01

    This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…

  11. A Cognitive Analysis of Gender Differences on Constructed-Response and Multiple-Choice Assessments in Mathematics.

    ERIC Educational Resources Information Center

    Wilson, Linda Dager; Zhang, Liru

    This study is based on data from a state-wide assessment that included both multiple-choice and constructed-response items. The intent of the study was to see whether item types make a difference in gender results. The items on both tests were categorized according to whether they assessed procedural knowledge, concepts, problem solving, or…

  12. Cost analysis for computer supported multiple-choice paper examinations

    PubMed Central

    Mandel, Alexander; Hörnlein, Alexander; Ifland, Marianus; Lüneburg, Edeltraud; Deckert, Jürgen; Puppe, Frank

    2011-01-01

    Introduction: Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support, or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, the quality, promptness and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. Methods: For three semesters, a computer program for the input and formatting of MC questions in medical and other paper-based examinations has been used and continuously improved at Wuerzburg University. In the winter semester (WS) 2009/10 eleven, in the summer semester (SS) 2010 twelve, and in WS 2010/11 thirteen medical examinations were conducted with the program and automatically evaluated. For the last two semesters the remaining manual workload was recorded. Results: The effort for formatting and subsequent analysis (including adjustments of the analysis) of an average examination with about 140 participants and about 35 questions was 5-7 hours for exams without complications in WS 2009/10, about 2 hours in SS 2010, and about 1.5 hours in WS 2010/11. Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours in WS 2010/11. Discussion: For conventional multiple-choice exams, the computer-based formatting and evaluation of paper-based exams offers a significant time reduction for lecturers in comparison with the manual correction of paper-based exams; compared to purely electronically conducted exams, it needs a much simpler technological infrastructure and fewer staff during the exam. PMID:22205913

  13. Potential Values of Incorporating a Multiple-Choice Question Construction in Physics Experimentation Instruction

    NASA Astrophysics Data System (ADS)

    Yu, Fu-Yun; Liu, Yu-Hsin

    2005-09-01

    The potential value of a multiple-choice question-construction instructional strategy for supporting students' learning of physics experiments was examined in this study. Forty-two university freshmen participated in the study for a whole semester. A constant comparison method, adopted to categorize students' qualitative data, indicated that the influences of multiple-choice question construction were evident in several significant ways (promoting constructive and productive studying habits; reflecting on and previewing course-related materials; increasing in-group communication and interaction; breaking passive learning styles and habits, etc.), which, taken together, not only enhanced students' comprehension and retention of the obtained knowledge, but also helped instill a sense of empowerment and learning community among the participants. Analysis with one-group t-tests, using 3 as the expected mean, on quantitative data further found that students' satisfaction toward their past learning experience, and their perceptions of this strategy's potential for promoting learning, were statistically significant at the 0.0005 level, while learning anxiety was not statistically significant. Suggestions for incorporating question-generation activities in the classroom and topics for future studies were offered.
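
    The quantitative analysis described, one-group t-tests of rating-scale responses against an expected mean of 3, is straightforward to reproduce in outline; the ratings below are invented.

        # Hedged sketch: one-sample t-test of Likert-type ratings against the
        # scale midpoint of 3, mirroring the analysis described above. The
        # ratings are invented.
        from scipy import stats

        ratings = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4, 4, 3, 5]
        result = stats.ttest_1samp(ratings, popmean=3)
        print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4g}")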

  14. The detection of cheating in multiple choice examinations

    NASA Astrophysics Data System (ADS)

    Richmond, Peter; Roehner, Bertrand M.

    2015-10-01

    Cheating in examinations is acknowledged by an increasing number of organizations to be widespread. We examine two different approaches to assess their effectiveness at detecting anomalous results, suggestive of collusion, using data taken from a number of multiple-choice examinations organized by the UK Radio Communication Foundation. Analysis of student pair overlaps of correct answers is shown to give results consistent with more orthodox statistical correlations for which confidence limits as opposed to the less familiar "Bonferroni method" can be used. A simulation approach is also developed which confirms the interpretation of the empirical approach.

  15. Role of the plurality rule in multiple choices

    NASA Astrophysics Data System (ADS)

    Calvão, A. M.; Ramos, M.; Anteneodo, C.

    2016-02-01

    People are often challenged to select one among several alternatives. This situation is present not only in decisions about complex issues, e.g. political or academic choices, but also about trivial ones, such as in daily purchases at a supermarket. We tackle this scenario by means of the tools of statistical mechanics. Following this approach, we introduce and analyse a model of opinion dynamics, using a Potts-like state variable to represent the multiple choices, including the ‘undecided state’, which represents the individuals who do not make a choice. We investigate the dynamics over Erdös–Rényi and Barabási–Albert networks, two paradigmatic classes with the small-world property, and we show the impact of the type of network on the opinion dynamics. Depending on the number of available options q and on the degree distribution of the network of contacts, different final steady states are accessible: from a wide distribution of choices to a state where a given option largely dominates. The abrupt transition between them is consistent with the sudden viral dominance of a given option over many similar ones. Moreover, the probability distributions produced by the model are validated by real data. Finally, we show that the model also contemplates the real situation of overchoice, where a large number of similar alternatives makes the choice process harder and indecision prevail.
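
    The record describes the model only qualitatively, so the sketch below is a deliberately simplified stand-in: q options plus an undecided state on an Erdős–Rényi graph, with each agent repeatedly copying a randomly chosen neighbour. It conveys the flavour of such simulations, not the authors' actual update rule or parameters.

        # Hedged sketch: a highly simplified q-state opinion dynamics on an
        # Erdos-Renyi random graph. State 0 = "undecided"; the update rule
        # (copy a random neighbour) is a generic stand-in, not the model
        # actually analysed in the paper.
        import random
        import networkx as nx

        random.seed(2)
        q, n_agents, steps = 5, 500, 50_000
        g = nx.erdos_renyi_graph(n_agents, p=0.02, seed=2)
        state = {node: random.randint(0, q) for node in g}   # 0..q, 0 = undecided

        for _ in range(steps):
            node = random.randrange(n_agents)
            neighbours = list(g[node])
            if neighbours:
                state[node] = state[random.choice(neighbours)]

        counts = [sum(1 for s in state.values() if s == k) for k in range(q + 1)]
        print("undecided:", counts[0], "options:", counts[1:])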

  16. The Answering Process for Multiple-Choice Questions in Collaborative Learning: A Mathematical Learning Model Analysis

    ERIC Educational Resources Information Center

    Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro

    2014-01-01

    In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…

  17. The Use of a Comprehensive Multiple Choice Final Exam in the Macroeconomics Principles Course: An Assessment.

    ERIC Educational Resources Information Center

    Petrowsky, Michael C.

    This paper analyzes the results of a pilot study at Glendale Community College (Arizona) to assess the effectiveness of a comprehensive multiple choice final exam in the macroeconomic principles course. The "pilot project" involved the administration of a 50-question multiple choice exam to 71 students in three macroeconomics sections. The…

  18. Learning Physics Teaching through Collaborative Design of Conceptual Multiple-Choice Questions

    ERIC Educational Resources Information Center

    Milner-Bolotin, Marina

    2015-01-01

    Increasing student engagement through Electronic Response Systems (clickers) has been widely researched. Its success largely depends on the quality of multiple-choice questions used by instructors. This paper describes a pilot project that focused on the implementation of online collaborative multiple-choice question repository, PeerWise, in a…

  19. Using a Classroom Response System to Improve Multiple-Choice Performance in AP® Physics

    ERIC Educational Resources Information Center

    Bertrand, Peggy

    2009-01-01

    Participation in rigorous high school courses such as Advanced Placement (AP®) Physics increases the likelihood of college success, especially for students who are traditionally underserved. Tackling difficult multiple-choice exams should be part of any AP program because well-constructed multiple-choice questions, such as those on AP exams and…

  20. An Investigation of Multiple-Response-Option Multiple-Choice Items: Item Performance and Processing Demands.

    ERIC Educational Resources Information Center

    Huntley, Renee M.; Plake, Barbara S.

    The combinational-format item (CFI)--multiple-choice item with combinations of alternatives presented as response choices--was studied to determine whether CFIs were different from regular multiple-choice items in item characteristics or in cognitive processing demands. Three undergraduate Foundations of Education classes (consisting of a total of…

  1. Teaching Critical Thinking without (Much) Writing: Multiple-Choice and Metacognition

    ERIC Educational Resources Information Center

    Bassett, Molly H.

    2016-01-01

    In this essay, I explore an exam format that pairs multiple-choice questions with required rationales. In a space adjacent to each multiple-choice question, students explain why or how they arrived at the answer they selected. This exercise builds the critical thinking skill known as metacognition, thinking about thinking, into an exam that also…

  2. The Top 10 Questions for Active Debris Removal

    NASA Technical Reports Server (NTRS)

    Liou, J. -C.

    2010-01-01

    This slide presentation reviews the requirement and issues around removal of debris from the earth orbital environment. The 10 questions discussed are: 1. Which region (LEO/MEO/GEO) has the fastest projected growth rate and the highest collision activities? 2. Can the commonly-adopted mitigation measures stabilize the future environment? 3. What are the objectives of active debris removal (ADR)? 4. How can effective ADR target selection criteria to stabilize the future LEO environment be defined? 5. What are the keys to remediate the future LEO environment? 6. What is the timeframe for ADR implementation? 7. What is the effect of practical/operational constraints? 8. What are the collision probabilities and masses of the current objects? 9. What are the benefits of collision avoidance maneuvers? 10. What is the next step?

  3. Using a Theorem by Andersen and the Dichotomous Rasch Model to Assess the Presence of Random Guessing in Multiple Choice Items

    ERIC Educational Resources Information Center

    Andrich, David; Marais, Ida; Humphry, Stephen

    2012-01-01

    Andersen (1995, 2002) proves a theorem relating variances of parameter estimates from samples and subsamples and shows its use as an adjunct to standard statistical analyses. The authors show an application where the theorem is central to the hypothesis tested, namely, whether random guessing to multiple choice items affects their estimates in the…

  4. Measuring University students' understanding of the greenhouse effect - a comparison of multiple-choice, short answer and concept sketch assessment tools with respect to students' mental models

    NASA Astrophysics Data System (ADS)

    Gold, A. U.; Harris, S. E.

    2013-12-01

    The greenhouse effect comes up in most discussions about climate and is a key concept related to climate change. Existing studies have shown that students and adults alike lack a detailed understanding of this important concept or might hold misconceptions. We studied the effectiveness of different interventions on university-level students' understanding of the greenhouse effect. Introductory-level science students were tested for their prior knowledge of the greenhouse effect using validated multiple-choice questions, short answers and concept sketches. All students participated in a common lesson about the greenhouse effect and were then randomly assigned to one of two lab groups. One group explored an existing simulation about the greenhouse effect (PhET lesson) and the other group worked with absorption spectra of different greenhouse gases (Data lesson) to deepen their understanding of the greenhouse effect. All students completed the same assessment, including multiple choice, short answers and concept sketches, after participation in their lab lesson. In total, 164 students completed all the assessments: 76 completed the PhET lesson, 77 completed the Data lesson, and 11 missed the contrasting lesson. In this presentation we show the comparison between the multiple-choice questions, short answer questions and the concept sketches of students. We explore how well each of these assessment types represents students' knowledge. We also identify items that indicate the level of understanding of the greenhouse effect, as measured by the correspondence of student answers to an expert mental model and expert responses. Preliminary data analysis shows that students who produce concept sketch drawings that come close to expert drawings also choose correct multiple-choice answers. However, correct multiple-choice answers are not necessarily an indicator that a student produces expert-like concept sketch items. Multiple-choice questions that require detailed

  5. Faculty development programs improve the quality of Multiple Choice Questions items' writing

    PubMed Central

    Abdulghani, Hamza Mohammad; Ahmad, Farah; Irshad, Mohammad; Khalil, Mahmoud Salah; Al-Shaikh, Ghadeer Khalid; Syed, Sadiqa; Aldrees, Abdulmajeed Abdurrahman; Alrowais, Norah; Haque, Shafiul

    2015-01-01

    The aim of this study was to assess the utility of long-term faculty development programs (FDPs) in improving the quality of multiple-choice question (MCQ) item writing. This was a quasi-experimental study conducted with newly joined faculty members. MCQ items from test courses of the respiratory, cardiovascular and renal blocks were analyzed for difficulty index, discrimination index, reliability, Bloom's cognitive levels, item-writing flaws (IWFs) and nonfunctioning distractors (NFDs). Significant improvement was found in the difficulty index values from pre- to post-training (p = 0.003). MCQs with moderate difficulty and higher discrimination were more frequent in the post-training tests in all three courses, and easy questions decreased from 36.7 to 22.5%. The discrimination indices also improved, from 92.1 to 95.4%, after training (p = 0.132). More items at the higher cognitive levels of Bloom's taxonomy were reported in the post-training tests (p < 0.0001), and NFDs and IWFs were less frequent in the post-training items (p < 0.02). MCQs written by faculty members who have not participated in FDPs are usually of low quality. This study suggests that newly joined faculty members need to participate actively in FDPs, as these programs support improvement in the quality of MCQ item writing. PMID:25828516

  6. Multiple-Choice Exams: An Obstacle for Higher-Level Thinking in Introductory Science Classes

    PubMed Central

    Stanger-Hall, Kathrin F.

    2012-01-01

    Learning science requires higher-level (critical) thinking skills that need to be practiced in science classes. This study tested the effect of exam format on critical-thinking skills. Multiple-choice (MC) testing is common in introductory science courses, and students in these classes tend to associate memorization with MC questions and may not see the need to modify their study strategies for critical thinking, because the MC exam format has not changed. To test the effect of exam format, I used two sections of an introductory biology class. One section was assessed with exams in the traditional MC format, the other section was assessed with both MC and constructed-response (CR) questions. The mixed exam format was correlated with significantly more cognitively active study behaviors and a significantly better performance on the cumulative final exam (after accounting for grade point average and gender). There was also less gender-bias in the CR answers. This suggests that the MC-only exam format indeed hinders critical thinking in introductory science classes. Introducing CR questions encouraged students to learn more and to be better critical thinkers and reduced gender bias. However, student resistance increased as students adjusted their perceptions of their own critical-thinking abilities. PMID:22949426

  7. Faculty development programs improve the quality of Multiple Choice Questions items' writing.

    PubMed

    Abdulghani, Hamza Mohammad; Ahmad, Farah; Irshad, Mohammad; Khalil, Mahmoud Salah; Al-Shaikh, Ghadeer Khalid; Syed, Sadiqa; Aldrees, Abdulmajeed Abdurrahman; Alrowais, Norah; Haque, Shafiul

    2015-01-01

    The aim of this study was to assess the utility of long-term faculty development programs (FDPs) in improving the quality of multiple-choice question (MCQ) item writing. This was a quasi-experimental study conducted with newly joined faculty members. MCQ items from test courses of the respiratory, cardiovascular and renal blocks were analyzed for difficulty index, discrimination index, reliability, Bloom's cognitive levels, item-writing flaws (IWFs) and nonfunctioning distractors (NFDs). Significant improvement was found in the difficulty index values from pre- to post-training (p = 0.003). MCQs with moderate difficulty and higher discrimination were more frequent in the post-training tests in all three courses, and easy questions decreased from 36.7 to 22.5%. The discrimination indices also improved, from 92.1 to 95.4%, after training (p = 0.132). More items at the higher cognitive levels of Bloom's taxonomy were reported in the post-training tests (p < 0.0001), and NFDs and IWFs were less frequent in the post-training items (p < 0.02). MCQs written by faculty members who have not participated in FDPs are usually of low quality. This study suggests that newly joined faculty members need to participate actively in FDPs, as these programs support improvement in the quality of MCQ item writing. PMID:25828516

  8. Contemplation on marking scheme for Type X multiple choice questions, and an illustration of a practically applicable scheme.

    PubMed

    Siddiqui, Nazeem Ishrat; Bhavsar, Vinayak H; Bhavsar, Arnav V; Bose, Sukhwant

    2016-01-01

    Since their inception a century ago, multiple-choice items have been widely used as a method of assessment. The format has certain inherent limitations, such as an inability to test higher cognitive skills, an element of guesswork while answering, and issues related to marking schemes. Various marking schemes have been proposed in the past, but they are unbalanced, skewed, or complex, being based on mathematical calculations that are typically not within the grasp of medical personnel. Type X questions have many advantages: they are easy to construct, can test multiple concepts, applications, and facets of a topic, can assess cognitive skills at various levels of the hierarchy, and, unlike Type K items, are free from complicated coding. In spite of these advantages, they are not in common use because of their complicated marking schemes. For this reason, we explored methods of evaluating multiple-correct-option multiple-choice questions and arrived at a simple, practically applicable, non-stringent but logical scoring system. The rationale of the illustrated marking scheme is that it takes into consideration the examinee's ability to recognize distracters rather than relying only on the ability to select the correct response. Thus, the examinee's true knowledge is tested, and the examinee is rewarded accordingly for selecting a correct answer and omitting a distracter. The scheme also penalizes failure to recognize a distracter, thereby controlling guessing behavior. It is emphasized that if the illustrated scoring scheme is adopted, Type X questions would come into common use. PMID:27127312
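
    A hedged sketch of a scoring rule in the spirit described above (our reading; the paper's exact weights and rules may differ): credit for each correct option selected and each distracter left unmarked, and a penalty for each distracter selected.

      # Illustrative Type X (multiple-correct-option) scoring; weights are assumptions.
      def score_type_x(selected, correct, options, reward=1.0, penalty=1.0):
          """selected/correct are sets of option labels; options is the full option set."""
          selected, correct, options = set(selected), set(correct), set(options)
          distracters = options - correct
          hits = len(selected & correct)               # correct options chosen
          omitted = len(distracters - selected)        # distracters correctly left out
          false_picks = len(selected & distracters)    # distracters not recognised
          return reward * (hits + omitted) - penalty * false_picks

      # Options A-E with correct answers {A, C}; the examinee marks {A, D}.
      print(score_type_x({"A", "D"}, {"A", "C"}, {"A", "B", "C", "D", "E"}))  # 2.0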

  9. Contemplation on marking scheme for Type X multiple choice questions, and an illustration of a practically applicable scheme

    PubMed Central

    Siddiqui, Nazeem Ishrat; Bhavsar, Vinayak H.; Bhavsar, Arnav V.; Bose, Sukhwant

    2016-01-01

    Since their inception a century ago, multiple-choice items have been widely used as a method of assessment. The format has certain inherent limitations, such as an inability to test higher cognitive skills, an element of guesswork while answering, and issues related to marking schemes. Various marking schemes have been proposed in the past, but they are unbalanced, skewed, or complex, being based on mathematical calculations that are typically not within the grasp of medical personnel. Type X questions have many advantages: they are easy to construct, can test multiple concepts, applications, and facets of a topic, can assess cognitive skills at various levels of the hierarchy, and, unlike Type K items, are free from complicated coding. In spite of these advantages, they are not in common use because of their complicated marking schemes. For this reason, we explored methods of evaluating multiple-correct-option multiple-choice questions and arrived at a simple, practically applicable, non-stringent but logical scoring system. The rationale of the illustrated marking scheme is that it takes into consideration the examinee's ability to recognize distracters rather than relying only on the ability to select the correct response. Thus, the examinee's true knowledge is tested, and the examinee is rewarded accordingly for selecting a correct answer and omitting a distracter. The scheme also penalizes failure to recognize a distracter, thereby controlling guessing behavior. It is emphasized that if the illustrated scoring scheme is adopted, Type X questions would come into common use. PMID:27127312

  10. Benford’s Law: Textbook Exercises and Multiple-Choice Testbanks

    PubMed Central

    Slepkov, Aaron D.; Ironside, Kevin B.; DiBattista, David

    2015-01-01

    Benford’s Law describes the finding that the distribution of leading (or leftmost) digits of innumerable datasets follows a well-defined logarithmic trend, rather than an intuitive uniformity. In practice this means that the most common leading digit is 1, with an expected frequency of 30.1%, and the least common is 9, with an expected frequency of 4.6%. Currently, the most common application of Benford’s Law is in detecting number invention and tampering such as found in accounting-, tax-, and voter-fraud. We demonstrate that answers to end-of-chapter exercises in physics and chemistry textbooks conform to Benford’s Law. Subsequently, we investigate whether this fact can be used to gain advantage over random guessing in multiple-choice tests, and find that while testbank answers in introductory physics closely conform to Benford’s Law, the testbank is nonetheless secure against such a Benford’s attack for banal reasons. PMID:25689468
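
    The expected leading-digit frequencies quoted above follow directly from Benford's Law, P(d) = log10(1 + 1/d); a quick check in code:

      # Benford's Law: P(d) = log10(1 + 1/d) gives 30.1% for d = 1 and 4.6% for d = 9.
      import math

      for d in range(1, 10):
          print(d, f"{100 * math.log10(1 + 1 / d):.1f}%")

      def leading_digit(x):
          """First significant digit of a nonzero number (sign ignored)."""
          return int(f"{abs(x):e}"[0])   # scientific notation, e.g. '3.2e+02' -> 3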

  11. Pathfinding in a large vertebrate axon tract: isotypic interactions guide retinotectal axons at multiple choice points

    PubMed Central

    Pittman, Andrew J.; Law, Mei-Yee; Chien, Chi-Bin

    2008-01-01

    Navigating axons respond to environmental guidance signals, but can also follow axons that have gone before—pioneer axons. Pioneers have been studied extensively in simple systems, but the role of axon-axon interactions remains largely unexplored in large vertebrate axon tracts, where cohorts of identical axons could potentially use isotypic interactions to guide each other through multiple choice points. Furthermore, the relative importance of axon-axon interactions compared to axon-autonomous receptor function has not been assessed. Here we test the role of axon-axon interactions in retinotectal development, by devising a technique to selectively remove or replace early-born retinal ganglion cells (RGCs). We find that early RGCs are both necessary and sufficient for later axons to exit the eye. Furthermore, introducing misrouted axons by transplantation reveals that guidance from eye to tectum relies heavily on interactions between axons, including both pioneer-follower and community effects. We conclude that axon-axon interactions and ligand-receptor signaling have coequal roles, cooperating to ensure the fidelity of axon guidance in developing vertebrate tracts. PMID:18653554

  12. Experiences in adding multiple-choice questions to an objective structured clinical examination (OSCE) in undergraduate dental education.

    PubMed

    Näpänkangas, R; Harila, V; Lahti, S

    2012-02-01

    At the University of Oulu, the competencies of fourth-year dental students have traditionally been assessed with a written examination before they go to work for the first time as dentists outside the Institute of Dentistry. In 2009, the objective structured clinical examination (OSCE), modified with multiple-choice questions, was introduced as a tool for assessing clinical competencies. The aim of the study was to evaluate the validity of the modified OSCE (m-OSCE) by measuring the attitudes of examiners (teachers) and dental students towards the m-OSCE and to evaluate whether the OSCE is preferred to the written examination in the assessment of knowledge and clinical skills. Additionally, the aim was to evaluate the reliability of the multiple-choice examination. Altogether 30 students (86%) and 11 of 12 examiners (92%) responded to the questionnaire. Most of the students considered the multiple-choice questions easy but complained about their complex formulation. The test stations were easy for 87% of the students, but the time allocated was too short. Most of the students (73%) and examiners (91%) preferred the m-OSCE to the written examination, and all students and examiners found the immediate assessment of the tasks good. It could be concluded that both students and examiners preferred the m-OSCE to the purely written examination, which indicates that the m-OSCE had good face validity. Combining multiple methods in the assessment of knowledge and clinical skills, while taking feasibility and available resources into account, provides more valid results. PMID:22251338

  13. Instructor perspectives of multiple-choice questions in summative assessment for novice programmers

    NASA Astrophysics Data System (ADS)

    Shuhidan, Shuhaida; Hamilton, Margaret; D'Souza, Daryl

    2010-09-01

    Learning to program is known to be difficult for novices. High attrition and high failure rates in foundation-level programming courses undertaken at tertiary level in Computer Science programs are commonly reported. A common approach to evaluating novice programming ability is through a combination of formative and summative assessments, with the latter typically represented by a final examination. Preparation of such assessment is driven by instructor perceptions of student learning of programming concepts, which in turn may yield instructor perspectives of summative assessment that do not necessarily correlate with student expectations or abilities. In this article, we present results of our study of instructor perspectives of summative assessment for novice programmers. Both quantitative and qualitative data were obtained via survey responses from programming instructors with varying teaching experience and from novice student responses to targeted examination questions. Our findings highlight that most of the instructors believed that summative assessment is, and is meant to be, a valid measure of a student's ability to program. Most instructors further believed that multiple-choice questions (MCQs) provide a means of testing a low level of understanding; a few added qualitative comments suggesting that MCQs are easy questions, and others refused to use them at all. There was no agreement on the proposition that a question designed to test a low level of skill, or a low level in a hierarchy of a body of knowledge, should or would be found easy by students. To aid our analysis of assessment questions, we introduced four measures: Syntax Knowledge, Semantic Knowledge, Problem Solving Skill, and the Level of Difficulty of the Problem. We applied these measures to selected examination questions, and have identified gaps between the instructor perspectives of what is considered to be an easy question and also in

  14. Validity and Reliability of Scores Obtained on Multiple-Choice Questions: Why Functioning Distractors Matter

    ERIC Educational Resources Information Center

    Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G.

    2016-01-01

    Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on validity and reliability of scores obtained on MCQs. Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…

  15. Improving Educational Assessment: A Computer-Adaptive Multiple Choice Assessment Using NRET as the Scoring Method

    ERIC Educational Resources Information Center

    Sie Hoe, Lau; Ngee Kiong, Lau; Kian Sam, Hong; Bin Usop, Hasbee

    2009-01-01

    Assessment is central to any educational process. Number Right (NR) scoring method is a conventional scoring method for multiple choice items, where students need to pick one option as the correct answer. One point is awarded for the correct response and zero for any other responses. However, it has been heavily criticized for guessing and failure…

  16. Are Faculty Predictions or Item Taxonomies Useful for Estimating the Outcome of Multiple-Choice Examinations?

    ERIC Educational Resources Information Center

    Kibble, Jonathan D.; Johnson, Teresa

    2011-01-01

    The purpose of this study was to evaluate whether multiple-choice item difficulty could be predicted either by a subjective judgment by the question author or by applying a learning taxonomy to the items. Eight physiology faculty members teaching an upper-level undergraduate human physiology course consented to participate in the study. The…

  17. Guide to Developing High-Quality, Reliable, and Valid Multiple-Choice Assessments

    ERIC Educational Resources Information Center

    Towns, Marcy H.

    2014-01-01

    Chemistry faculty members are highly skilled in obtaining, analyzing, and interpreting physical measurements, but often they are less skilled in measuring student learning. This work provides guidance for chemistry faculty from the research literature on multiple-choice item development in chemistry. Areas covered include content, stem, and…

  18. Illusion of Linearity in Geometry: Effect in Multiple-Choice Problems

    ERIC Educational Resources Information Center

    Vlahovic-Stetic, Vesna; Pavlin-Bernardic, Nina; Rajter, Miroslav

    2010-01-01

    The aim of this study was to examine if there is a difference in the performance on non-linear problems regarding age, gender, and solving situation, and whether the multiple-choice answer format influences students' thinking. A total of 112 students, aged 15-16 and 18-19, were asked to solve problems for which solutions based on proportionality…

  19. A Cognitive Diagnosis Model for Cognitively Based Multiple-Choice Options

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2009-01-01

    Cognitive or skills diagnosis models are discrete latent variable models developed specifically for the purpose of identifying the presence or absence of multiple fine-grained skills. However, applications of these models typically involve dichotomous or dichotomized data, including data from multiple-choice (MC) assessments that are scored as…

  20. Multiple-Choice Glosses and Incidental Vocabulary Learning: A Case of an EFL Context

    ERIC Educational Resources Information Center

    Ghahari, Shima; Heidarolad, Meissam

    2015-01-01

    Provision of multiple-choice (MC) glosses, which combines the advantages of glosses and inferring, has recently gained its share of supporters as a potential technique for enhancing L2 texts and increasing word gain for L2 learners. Upon taking an actual TOEFL, the participants underwent a vocabulary pretest to ensure that the target words were…

  1. Clinicians' Explanations of Students' Reasoning while Solving Multiple-Choice Items.

    ERIC Educational Resources Information Center

    Triska, Olive H.; And Others

    A study was conducted to determine whether competently reasoning clinicians (clinical instructors in medical instruction) could identify reasons competently reasoning students would eliminate distractors, and explain how students would reason to select the keyed response when solving multiple-choice items. The think-aloud protocols of clinicians…

  2. An Alternate Multiple-Choice Scoring Procedure in a Macroeconomics Course

    ERIC Educational Resources Information Center

    Bradbard, David A.; Parker, Darrell F.; Stone, Gary L.

    2004-01-01

    In the standard scoring procedure for multiple-choice exams, students must choose exactly one response as correct. Often students may be unable to identify the correct response, but can determine that some of the options are incorrect. This partial knowledge is not captured in the standard scoring format. The Coombs elimination procedure is an…

  3. Cheating on Multiple-Choice Exams: Monitoring, Assessment, and an Optional Assignment

    ERIC Educational Resources Information Center

    Nath, Leda; Lovaglia, Michael

    2009-01-01

    Academic dishonesty is unethical. Exam cheating is viewed as more serious than most other forms (Pincus and Schmelkin 2003). The authors review the general cheating problem, introduce a program to conservatively identify likely cheaters on multiple-choice exams, and offer a procedure for handling likely cheaters. Feedback from students who confess…

  4. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

    ERIC Educational Resources Information Center

    Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández

    2013-01-01

    Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…

  5. Student-Generated Content: Enhancing Learning through Sharing Multiple-Choice Questions

    ERIC Educational Resources Information Center

    Hardy, Judy; Bates, Simon P.; Casey, Morag M.; Galloway, Kyle W.; Galloway, Ross K.; Kay, Alison E.; Kirsop, Peter; McQueen, Heather A.

    2014-01-01

    The relationship between students' use of PeerWise, an online tool that facilitates peer learning through student-generated content in the form of multiple-choice questions (MCQs), and achievement, as measured by their performance in the end-of-module examinations, was investigated in 5 large early-years science modules (in physics, chemistry…

  6. FormScanner: Open-Source Solution for Grading Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Young, Chadwick; Lo, Glenn; Young, Kaisa; Borsetta, Alberto

    2016-01-01

    The multiple-choice exam remains a staple for many introductory physics courses. In the past, people have graded these by hand or even with flaming needles. Today, one usually grades the exams with a form scanner that utilizes optical mark recognition (OMR). Several companies provide these scanners and particular forms, such as the eponymous…

  7. Writing Multiple-Choice Items to Measure Higher-Order Educational Objectives.

    ERIC Educational Resources Information Center

    Aiken, Lewis R.

    1982-01-01

    Five types of multiple-choice items that can be used to assess higher-order educational objectives are examined. The item types do not exhaust the possibilities, but they are standard forms found helpful in writing items to measure more than recognitive memory. (Author/CM)

  8. Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis

    ERIC Educational Resources Information Center

    Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying

    2012-01-01

    This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…

  9. A Method for Imputing Response Options for Missing Data on Multiple-Choice Assessments

    ERIC Educational Resources Information Center

    Wolkowitz, Amanda A.; Skorupski, William P.

    2013-01-01

    When missing values are present in item response data, there are a number of ways one might impute a correct or incorrect response to a multiple-choice item. There are significantly fewer methods for imputing the actual response option an examinee may have provided if he or she had not omitted the item either purposely or accidentally. This…

  10. Differential Daily Writing Contingencies and Performance on Major Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Hautau, Briana; Turner, Haley C.; Carroll, Erin; Jaspers, Kathryn; Parker, Megan; Krohn, Katy; Williams, Robert L.

    2006-01-01

    On 4 of 7 days in each unit of an undergraduate human development course, students responded in writing to specific questions related to instructor notes previously made available to them. The study compared the effects of three writing contingencies on the quality of student writing and performance on major multiple-choice exams in the course. …

  11. A Participatory Learning Approach to Biochemistry Using Student Authored and Evaluated Multiple-Choice Questions

    ERIC Educational Resources Information Center

    Bottomley, Steven; Denny, Paul

    2011-01-01

    A participatory learning approach, combined with both a traditional and a competitive assessment, was used to motivate students and promote a deep approach to learning biochemistry. Students were challenged to research, author, and explain their own multiple-choice questions (MCQs). They were also required to answer, evaluate, and discuss MCQs…

  12. Potential Values of Incorporating a Multiple-Choice Question Construction in Physics Experimentation Instruction

    ERIC Educational Resources Information Center

    Yu, Fu-Yun; Liu, Yu-Hsin

    2005-01-01

    The potential value of a multiple-choice question-construction instructional strategy for the support of students' learning of physics experiments was examined in the study. Forty-two university freshmen participated in the study for a whole semester. A constant comparison method adopted to categorize students' qualitative data indicated that the…

  13. Using the Distractor Categories of Multiple-Choice Items to Improve IRT Linking

    ERIC Educational Resources Information Center

    Kim, Jee-Seon

    2006-01-01

    Simulation and real data studies are used to investigate the value of modeling multiple-choice distractors on item response theory linking. Using the characteristic curve linking procedure for Bock's (1972) nominal response model presented by Kim and Hanson (2002), all-category linking (i.e., a linking based on all category characteristic curves…

  14. Does Correction for Guessing Reduce Students' Performance on Multiple-Choice Examinations? Yes? No? Sometimes?

    ERIC Educational Resources Information Center

    Betts, Lucy R.; Elder, Tracey J.; Hartley, James; Trueman, Mark

    2009-01-01

    Multiple-choice (MC) examinations are becoming increasingly popular in higher education because they can be used effectively to assess breadth of knowledge in large cohorts of students. This present research investigated psychology students' performance on, and experiences of, MC examinations with and without correction for guessing. In Study 1,…

  15. Meta-Evaluation in Clinical Anatomy: A Practical Application of Item Response Theory in Multiple Choice Examinations

    ERIC Educational Resources Information Center

    Severo, Milton; Tavares, Maria A. Ferreira

    2010-01-01

    The nature of anatomy education has changed substantially in recent decades, though the traditional multiple-choice written examination remains the cornerstone of assessing students' knowledge. This study sought to measure the quality of a clinical anatomy multiple-choice final examination using item response theory (IRT) models. One hundred…

  16. Application of Item Analysis to Assess Multiple-Choice Examinations in the Mississippi Master Cattle Producer Program

    ERIC Educational Resources Information Center

    Parish, Jane A.; Karisch, Brandi B.

    2013-01-01

    Item analysis can serve as a useful tool in improving multiple-choice questions used in Extension programming. It can identify gaps between instruction and assessment. An item analysis of Mississippi Master Cattle Producer program multiple-choice examination responses was performed to determine the difficulty of individual examinations, assess the…

  17. Sustainable Assessment for Large Science Classes: Non-Multiple Choice, Randomised Assignments through a Learning Management System

    ERIC Educational Resources Information Center

    Schultz, Madeleine

    2011-01-01

    This paper reports on the development of a tool that generates randomised, non-multiple choice assessment within the BlackBoard Learning Management System interface. An accepted weakness of multiple-choice assessment is that it cannot elicit learning outcomes from upper levels of Biggs' SOLO taxonomy. However, written assessment items require…

  18. Sex Differences in the Relationship of Advanced Placement Essay and Multiple-Choice Scores to Grades in College Courses.

    ERIC Educational Resources Information Center

    Bridgeman, Brent; Lewis, Charles

    Essay and multiple-choice scores from Advanced Placement (AP) examinations in American History, European History, English Language and Composition, and Biology were matched with freshman grades in a sample of 32 colleges. Multiple-choice scores from the American History and Biology examinations were superior to essays for predicting overall grade…

  19. Influence of experience on intake and feeding behavior of dairy sheep when offered forages from woody plants in a multiple-choice situation.

    PubMed

    Meier, J S; Liesegang, A; Rischkowsky, B; Louhaichi, M; Zaklouta, M; Kreuzer, M; Marquardt, S

    2013-10-01

    A satisfactory intake of novel low-quality forages by ruminants may require previous experience with this feed. Therefore, this study tested in sheep whether experience with forages from woody plants had an influence on feed intake, feeding behavior, and nutrient supply when offered in a multiple-choice arrangement. Two sheep experiments were conducted, 1 in Syria (Mediterranean region; Exp. 1) and the other in Switzerland (Central Europe; Exp. 2), that investigated 5 and 6 woody test plants, respectively. In Exp. 1, the test plants were Artemisia herba-alba, Atriplex leucoclada, Haloxylon articulatum, Noaea mucronata, and Salsola vermiculata. In Exp. 2, Betula pendula, Castanea sativa, and Juglans regia were used in addition to A. leucoclada, H. articulatum, and S. vermiculata (the plants most consumed in Exp. 1). In each experiment, 12 lactating sheep (Awassi sheep in Exp. 1 and East Friesian Milk sheep in Exp. 2) were allocated to 2 groups ("experienced" and "naïve"). Experienced sheep subsequently were familiarized with each test plant during a learning period of binary choices (1 test plant vs. barley straw) for 4 h in the morning for 7 d each. The naïve group received only straw. During the rest of the day, a basal diet composed of barley straw (ad libitum) and concentrate was offered to both groups. For the 2 wk following the learning period, the sheep were subjected to feeding of the basal diet to avoid carryover effects of the last offered test plant. In the following multiple-choice period, both groups were allowed to select from all test plants during 4 h in the morning for 14 d. Forage intake after 4 and 24 h and feeding behavior during the first 30 min of the test feeding were assessed. Milk yield and composition were measured at the end of the multiple-choice period. Nutrient intake was calculated using feed intake measurements and compositional analyses. Only in Exp. 2, group differences (P < 0.05) were found on d 1 of the multiple-choice period

  20. [A factor analysis method for contingency table data with unlimited multiple choice questions].

    PubMed

    Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie

    2016-02-01

    The purpose of this study is to propose a factor analysis method for contingency tables developed from unlimited multiple-choice question data. The method assumes that each cell of the contingency table follows a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual, the standardized difference between the sample and model-implied proportions, is used to select items. The proposed method was applied to real product-impression research data on advertised chips and energy drinks. The results showed that the method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method for psychological studies using unlimited multiple-choice questions. PMID:26964368
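
    In our notation (an assumption; the article may parameterize the model differently), the model described above can be written as

      n_{jk} \sim \mathrm{Binomial}(N_j,\; p_{jk}), \qquad
      \operatorname{logit}(p_{jk}) \;=\; \mu_k + \sum_{f=1}^{F} \lambda_{kf}\, \theta_{jf},

    where n_{jk} is the count in cell (j, k), N_j the number of respondents in row j, \mu_k an item intercept, \lambda_{kf} the factor loadings, and \theta_{jf} the factor scores.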

  1. FormScanner: Open-Source Solution for Grading Multiple-Choice Exams

    NASA Astrophysics Data System (ADS)

    Young, Chadwick; Lo, Glenn; Young, Kaisa; Borsetta, Alberto

    2016-01-01

    The multiple-choice exam remains a staple for many introductory physics courses. In the past, people have graded these by hand or even with flaming needles. Today, one usually grades the exams with a form scanner that utilizes optical mark recognition (OMR). Several companies provide these scanners and particular forms, such as the eponymous "Scantron." OMR scanners combine hardware and software—a scanner and OMR program—to read and grade student-filled forms.

  2. Examen en Vue du Diplome Douzieme Annee, Langue et Litterature 30. Partie B: Lecture (Choix Multiples). Livret de Questions (Examination for the Twelfth Grade Diploma, Language and Literature 30. Part B: Reading--Multiple Choice. Questions Booklet). June 1988 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    As part of an examination required by the Alberta (Canada) Department of Education in order for 12th grade students to receive a diploma in French, this booklet contains the 80 multiple choice questions portion of Part B, the language and literature component of the June 1988 tests. Representing the genres of poetry, short story, the novel, and…

  3. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

    NASA Astrophysics Data System (ADS)

    Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández

    2013-08-01

    Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, Eyegrade, a system for automatic grading of multiple choice exams is presented. While most current solutions are based on expensive scanners, Eyegrade offers a truly low-cost solution requiring only a regular off-the-shelf webcam. Additionally, Eyegrade performs both mark recognition as well as optical character recognition of handwritten student identification numbers, which avoids the use of bubbles in the answer sheet. When compared with similar webcam-based systems, the user interface in Eyegrade has been designed to provide a more efficient and error-free data collection procedure. The tool has been validated with a set of experiments that show the ease of use (both setup and operation), the reduction in grading time, and an increase in the reliability of the results when compared with conventional, more expensive systems.
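
    A toy illustration of the optical-mark-recognition idea (our sketch, unrelated to Eyegrade's actual implementation): a bubble counts as marked when enough of its region is dark after thresholding. Coordinates, sizes, and the fill threshold are assumptions.

      # Toy OMR check with OpenCV; the fill_ratio threshold is an assumption.
      import cv2

      def bubble_is_marked(gray_image, x, y, w, h, fill_ratio=0.4):
          """True if at least fill_ratio of the bubble's bounding box is dark."""
          roi = gray_image[y:y + h, x:x + w]
          _, binary = cv2.threshold(roi, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          return cv2.countNonZero(binary) / float(w * h) >= fill_ratio

      # Usage: gray = cv2.cvtColor(cv2.imread("sheet.png"), cv2.COLOR_BGR2GRAY)
      #        bubble_is_marked(gray, 120, 340, 24, 24)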

  4. The Social Attribution Task-Multiple Choice (SAT-MC): A Psychometric and Equivalence Study of an Alternate Form.

    PubMed

    Johannesen, Jason K; Lurie, Jessica B; Fiszdon, Joanna M; Bell, Morris D

    2013-01-01

    The Social Attribution Task-Multiple Choice (SAT-MC) uses a 64-second video of geometric shapes set in motion to portray themes of social relatedness and intentions. Considered a test of "Theory of Mind," the SAT-MC assesses implicit social attribution formation while reducing verbal and basic cognitive demands required of other common measures. We present a comparability analysis of the SAT-MC and the new SAT-MC-II, an alternate form created for repeat testing, in a university sample (n = 92). Score distributions and patterns of association with external validation measures were nearly identical between the two forms, with convergent and discriminant validity supported by association with affect recognition ability and lack of association with basic visual reasoning. Internal consistency of the SAT-MC-II was superior (alpha = .81) to the SAT-MC (alpha = .56). Results support the use of SAT-MC and new SAT-MC-II as equivalent test forms. Demonstrating relatively higher association to social cognitive than basic cognitive abilities, the SAT-MC may provide enhanced sensitivity as an outcome measure of social cognitive intervention trials. PMID:23864984

  5. The Social Attribution Task-Multiple Choice (SAT-MC): A Psychometric and Equivalence Study of an Alternate Form

    PubMed Central

    Johannesen, Jason K.; Lurie, Jessica B.; Fiszdon, Joanna M.; Bell, Morris D.

    2013-01-01

    The Social Attribution Task-Multiple Choice (SAT-MC) uses a 64-second video of geometric shapes set in motion to portray themes of social relatedness and intentions. Considered a test of “Theory of Mind,” the SAT-MC assesses implicit social attribution formation while reducing verbal and basic cognitive demands required of other common measures. We present a comparability analysis of the SAT-MC and the new SAT-MC-II, an alternate form created for repeat testing, in a university sample (n = 92). Score distributions and patterns of association with external validation measures were nearly identical between the two forms, with convergent and discriminant validity supported by association with affect recognition ability and lack of association with basic visual reasoning. Internal consistency of the SAT-MC-II was superior (alpha = .81) to the SAT-MC (alpha = .56). Results support the use of SAT-MC and new SAT-MC-II as equivalent test forms. Demonstrating relatively higher association to social cognitive than basic cognitive abilities, the SAT-MC may provide enhanced sensitivity as an outcome measure of social cognitive intervention trials. PMID:23864984

  6. The assessment of critical thinking skills in anatomy and physiology students who practice writing higher order multiple choice questions

    NASA Astrophysics Data System (ADS)

    Shaw, Jason

    Critical thinking is a complex abstraction that defies homogeneous interpretation. This means that no operational definition is universal and no critical-thinking measurement tool is all-encompassing. Instructors will likely find evidence-based strategies to facilitate thinking skills only as numerous research efforts from multiple disciplines accumulate. This study focuses on a question-writing exercise designed to help anatomy and physiology students. Students were asked to design multiple-choice questions that combined course concepts in novel ways. Instructions and examples were provided on how to construct these questions, and student attempts were sorted into levels one through three of Bloom's Cognitive Taxonomy (Bloom et al. 1956). Students submitted their question designs weekly and received individual feedback on how they might improve. Eight course examinations were created to contain questions that modeled the Bloom's Cognitive Taxonomy levels that students were attempting. Students were assessed on their course examination performance as well as their performance on a discipline-independent critical-thinking test, the California Critical Thinking Skills Test (CCTST). The performance of students in this study was compared to students from two previous years who took the same course but did not have the question-writing activity. Results suggest that students do not improve their ability to answer critical-thinking multiple-choice questions when they practice creating such problems. The effect of class level on critical thinking was examined, and it appears that the longer a student has attended college, the better the performance on both discipline-specific and discipline-independent critical-thinking questions. The data were also used to analyze students who improved their course examination grades in the second semester of this course. There is a pattern to suggest that students who improve their performance on course examinations

  7. A set partitioning reformulation for the multiple-choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Voß, Stefan; Lalla-Ruiz, Eduardo

    2016-05-01

    The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions for solving it or being partially included in algorithmic schemes is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
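
    For reference, the standard MMKP can be written as the following 0-1 program (our notation, not the article's set partitioning reformulation):

      \max \sum_{i=1}^{n} \sum_{j \in N_i} p_{ij}\, x_{ij}
      \quad \text{s.t.} \quad
      \sum_{j \in N_i} x_{ij} = 1 \;\; (i = 1,\dots,n), \qquad
      \sum_{i=1}^{n} \sum_{j \in N_i} w_{ijk}\, x_{ij} \le c_k \;\; (k = 1,\dots,m), \qquad
      x_{ij} \in \{0,1\},

    where N_i is the i-th group of mutually exclusive objects, p_{ij} the profit of object j in group i, w_{ijk} its consumption of resource k, and c_k the capacity of resource k.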

  8. Reliability, validity and efficiency of multiple choice question and patient management problem item formats in assessment of clinical competence.

    PubMed

    Norcini, J J; Swanson, D B; Grosso, L J; Webster, G D

    1985-05-01

    Despite a lack of face validity, there continues to be heavy reliance on objective paper-and-pencil measures of clinical competence. Among these measures, the most common item formats are patient management problems (PMPs) and three types of multiple choice questions (MCQs): one-best-answer (A-types); matching questions (M-types); and multiple true/false questions (X-types). The purpose of this study is to compare the reliability, validity and efficiency of these item formats with particular focus on whether MCQs and PMPs measure different aspects of clinical competence. Analyses revealed reliabilities of 0.72 or better for all item formats; the MCQ formats were most reliable. Similarly, efficiency analyses (reliability per unit of testing time) demonstrated the superiority of MCQs. Evidence for validity obtained through correlations of both programme directors' ratings and criterion group membership with item format scores also favoured MCQs. More important, however, is whether MCQs and PMPs measure the same or different aspects of clinical competence. Regression analyses of the scores on the validity measures (programme directors' ratings and criterion group membership) indicated that MCQs and PMPs seem to be measuring predominantly the same thing. MCQs contribute a small unique variance component over and above PMPs, while PMPs make the smallest unique contribution. As a whole, these results indicate that MCQs are more efficient, reliable and valid than PMPs. PMID:4010571
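
    The "reliability per unit of testing time" comparison mentioned above can be approximated with the Spearman–Brown prophecy formula; the sketch below is our illustration with hypothetical numbers, not the authors' exact analysis.

      # Project each format's reliability to a common testing time (Spearman-Brown).
      def spearman_brown(rho, k):
          """Reliability after scaling test length (testing time) by factor k."""
          return k * rho / (1 + (k - 1) * rho)

      # Hypothetical: a 60-min MCQ section (rho = 0.85) vs a 120-min PMP section
      # (rho = 0.72), both projected to 60 minutes of testing time.
      print(round(spearman_brown(0.85, 60 / 60), 2))   # 0.85
      print(round(spearman_brown(0.72, 60 / 120), 2))  # 0.56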

  9. An Online National Archive of Multiple-Choice Questions for Astro 101 and the Development of the Question Complexity Rubric

    NASA Astrophysics Data System (ADS)

    Cormier, S.; Prather, E.; Brissenden, G.

    2011-09-01

    We are developing a national archive of multiple-choice questions for use in the Astronomy 101 classroom. These questions are intended to supplement an instructor's implementation of Think-Pair-Share or for their assessment purposes (i.e., exams and homework). We are also developing the Question Complexity Rubric (QCR) to guide members of the Astro 101 teaching and learning community in assisting us with hierarchically ranking questions in this archive based on their conceptual complexity. Using the QCR, a score is assigned to differentiate each question based on the cognitive steps necessary to comprehensively explain the reasoning pathway to the correct answer. The lowest QCR score is given to questions with a reasoning pathway requiring only declarative knowledge. The highest QCR score is given to questions with a reasoning pathway that requires multiple connected cognitive steps. When completed, the online question archive will provide users with the utility to 1) use the QCR to score questions 2) search for and download questions based on topic and/or QCR score, and 3) add their own questions to the archive. Stop by our poster to test your skills at determining question complexity by trying out the QCR with our sample questions.

  10. A Comparison of Two Methods of Assessing Partial Knowledge on Multiple-Choice Tests.

    ERIC Educational Resources Information Center

    Tollefson, Nona; Chung, Jing-Mei

    Procedures for correcting for guessing and for assessing partial knowledge (correction-for-guessing, three-decision scoring, elimination/inclusion scoring, and confidence or probabilistic scoring) are discussed. Mean scores and internal consistency reliability estimates were compared across three administration and scoring procedures for…
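
    For reference, the classic correction-for-guessing (formula-scoring) rule named above is usually written score = R - W/(k - 1); a minimal version in code (the study's exact scoring variants may differ):

      # Textbook correction for guessing: R right, W wrong (omits excluded), k options.
      def formula_score(right, wrong, n_options):
          return right - wrong / (n_options - 1)

      print(formula_score(right=30, wrong=8, n_options=4))   # 30 - 8/3 = 27.33...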

  11. Force Concept Inventory-Based Multiple-Choice Test for Investigating Students' Representational Consistency

    ERIC Educational Resources Information Center

    Nieminen, Pasi; Savinainen, Antti; Viiri, Jouni

    2010-01-01

    This study investigates students' ability to interpret multiple representations consistently (i.e., representational consistency) in the context of the force concept. For this purpose we developed the Representational Variant of the Force Concept Inventory (R-FCI), which makes use of nine items from the 1995 version of the Force Concept Inventory…

  12. A comparative study of students' performance in preclinical physiology assessed by multiple choice and short essay questions.

    PubMed

    Oyebola, D D; Adewoye, O E; Iyaniwura, J O; Alada, A R; Fasanmade, A A; Raji, Y

    2000-01-01

    This study was designed to compare the performance of medical students in physiology when assessed by multiple choice questions (MCQs) and short essay questions (SEQs). The study also examined the influence of factors such as age, sex, O/level grades and JAMB scores on performance in the MCQs and SEQs. A structured questionnaire was administered to 264 medical students four months before the Part I MBBS examination. Apart from personal data of each student, the questionnaire sought information on the JAMB scores and GCE O' Level grades of each student in English Language, Biology, Chemistry, Physics and Mathematics. The physiology syllabus was divided into five parts and the students were administered separate examinations (tests) on each part. Each test consisted of MCQs and SEQs. The performance in MCQs and SEQs was compared. Also, the effects of JAMB scores and GCE O/level grades on the performance in both the MCQs and SEQs were assessed. The results showed that the students performed better in all MCQ tests than in the SEQs. JAMB scores and O' level English Language grade had no significant effect on students' performance in MCQs and SEQs. However, O' level grades in Biology, Chemistry, Physics and Mathematics had significant effects on performance in MCQs and SEQs. Inadequate knowledge of physiology and inability to present information in a logical sequence are believed to be major factors contributing to the poorer performance in the SEQs compared with MCQs. In view of the finding of significant association between performance in MCQs and SEQs and GCE O/level grades in science subjects and mathematics, it was recommended that both JAMB results and the GCE results in the four O/level subjects above may be considered when selecting candidates for admission into the medical schools. PMID:11713989

  13. The Question Complexity Rubric: Development and Application for a National Archive of Astro 101 Multiple-Choice Questions

    NASA Astrophysics Data System (ADS)

    Cormier, Sebastien; Prather, E. E.; Brissenden, G.; Collaboration of Astronomy Teaching Scholars CATS

    2011-01-01

    For the last two years we have been developing an online national archive of multiple-choice questions for use in the Astro 101 classroom. These questions are intended to either supplement an instructor's implementation of Think-Pair-Share or be used for assessment purposes (i.e. exams and homework). In this talk we will describe the development, testing and implementation of the Question Complexity Rubric (QCR), which is designed to guide the ranking of questions in this archive based on their conceptual complexity. Using the QCR, a score is assigned to differentiate each question based on the cognitive steps necessary to comprehensively explain the reasoning pathway to the correct answer. The lowest QCR score is given to questions with a reasoning pathway requiring only declarative knowledge whereas the highest QCR score is given to questions that require multiple pathways of multi-step reasoning. When completed, the online question archive will provide users with the utility to 1) search for and download questions based on subject and average QCR score, 2) use the QCR to score questions, and 3) add their own questions to the archive. We will also discuss other potential applications of the QCR, such as how it informs our work in developing and testing of survey instruments by allowing us to calibrate the range of question complexity. This material is based upon work supported by the National Science Foundation under Grant No. 0715517, a CCLI Phase III Grant for the Collaboration of Astronomy Teaching Scholars (CATS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

  14. A participatory learning approach to biochemistry using student authored and evaluated multiple-choice questions.

    PubMed

    Bottomley, Steven; Denny, Paul

    2011-01-01

    A participatory learning approach, combined with both a traditional and a competitive assessment, was used to motivate students and promote a deep approach to learning biochemistry. Students were challenged to research, author, and explain their own multiple-choice questions (MCQs). They were also required to answer, evaluate, and discuss MCQs written by their peers. The technology used to support this activity was PeerWise--a freely available, innovative web-based system that supports students in the creation of an annotated question repository. In this case study, we describe students' contributions to, and perceptions of, the PeerWise system for a cohort of 107 second-year biomedical science students from three degree streams studying a core biochemistry subject. Our study suggests that the students are eager participants and produce a large repository of relevant, good quality MCQs. In addition, they rate the PeerWise system highly and use higher order thinking skills while taking an active role in their learning. We also discuss potential issues and future work using PeerWise for biomedical students. PMID:21948507

  15. The Development and Validation of a Two-Tiered Multiple-Choice Instrument to Identify Alternative Conceptions in Earth Science

    ERIC Educational Resources Information Center

    Mangione, Katherine Anna

    2010-01-01

    This study sought to determine the reliability and validity of a two-tiered, multiple-choice instrument designed to identify alternative conceptions in earth science. Additionally, this study sought to identify alternative conceptions in earth science held by preservice teachers, to investigate relationships between self-reported confidence scores and…

  16. Estimating the Effect on Grades of Using Multiple-Choice versus Constructive-Response Questions: Data from the Classroom

    ERIC Educational Resources Information Center

    Hickson, Stephen; Reed, W. Robert; Sander, Nicholas

    2012-01-01

    This study investigates the degree to which grades based solely on constructed-response (CR) questions differ from grades based solely on multiple-choice (MC) questions. If CR questions are to justify their higher costs, they should produce different grade outcomes than MC questions. We use a data set composed of thousands of observations on…

  17. So Many Choices, So Little Time: Strategies for Understanding and Taking Multiple-Choice Exams in History

    ERIC Educational Resources Information Center

    Blackey, Robert

    2009-01-01

    Learning as much as possible about what to expect and how best to select among response choices enables students to improve their scores so as to reflect more fully their knowledge and abilities, whether taking the SAT, ACT, AP history exams, multiple-choice exams in college courses, the GRE, or any other standardized examinations that include…

  18. Developing an Array Binary Code Assessment Rubric for Multiple-Choice Questions Using Item Arrays and Binary-Coded Responses

    ERIC Educational Resources Information Center

    Haro, Elizabeth K.; Haro, Luis S.

    2014-01-01

    The multiple-choice question (MCQ) is the foundation of knowledge assessment in K-12, higher education, and standardized entrance exams (including the GRE, MCAT, and DAT). However, standard MCQ exams are limited with respect to the types of questions that can be asked when there are only five choices. MCQs offering additional choices more…

  19. Diagnosing Secondary Students' Misconceptions of Photosynthesis and Respiration in Plants Using a Two-Tier Multiple Choice Instrument.

    ERIC Educational Resources Information Center

    Haslam, Filocha; Treagust, David F.

    1987-01-01

    Describes a multiple-choice instrument that reliably and validly diagnoses secondary students' understanding of photosynthesis and respiration in plants. Highlights the consistency of students' misconceptions across secondary levels and indicates a high percentage of students have misconceptions regarding plant physiology. (CW)

  20. A One-Day Dental Faculty Workshop in Writing Multiple-Choice Questions: An Impact Evaluation.

    PubMed

    AlFaris, Eiad; Naeem, Naghma; Irfan, Farhana; Qureshi, Riaz; Saad, Hussain; Al Sadhan, Ra'ed; Abdulghani, Hamza Mohammad; Van der Vleuten, Cees

    2015-11-01

    Long training workshops on the writing of exam questions have been shown to be effective; however, the effectiveness of short workshops needs to be demonstrated. The aim of this study was to evaluate the impact of a one-day, seven-hour faculty development workshop at the College of Dentistry, King Saud University, Saudi Arabia, on the quality of multiple-choice questions (MCQs). Kirkpatrick's four-level evaluation model was used. Participants' satisfaction (Kirkpatrick's Level 1) was evaluated with a post-workshop questionnaire. A quasi-experimental, randomized separate sample, pretest-posttest design was used to assess the learning effect (Kirkpatrick's Level 2). To evaluate transfer of learning to practice (Kirkpatrick's Level 3), MCQs created by ten faculty members as a result of the training were assessed. To assess Kirkpatrick's Level 4 regarding institutional change, interviews with three key leaders of the school were conducted, coded, and analyzed. A total of 72 course directors were invited to and attended some part of the workshop; all 52 who attended the entire workshop completed the satisfaction form; and 22 of the 36 participants in the experimental group completed the posttest. The results showed that all 52 participants were highly satisfied with the workshop, and significant positive changes were found in the faculty members' knowledge and the quality of their MCQs with effect sizes of 0.7 and 0.28, respectively. At the institutional level, the interviews demonstrated positive structural changes in the school's assessment system. Overall, this one-day item-writing faculty workshop resulted in positive changes at all four of Kirkpatrick's levels; these effects suggest that even a short training session can improve a dental school's assessment of its students. PMID:26522635

  1. Exploring problem solving strategies on multiple-choice science items: Comparing native Spanish-speaking English Language Learners and mainstream monolinguals

    NASA Astrophysics Data System (ADS)

    Kachchaf, Rachel Rae

    The purpose of this study was to compare how English language learners (ELLs) and monolingual English speakers solved multiple-choice items administered with and without a new form of testing accommodation: vignette illustration (VI). By incorporating theories from second language acquisition, bilingualism, and sociolinguistics, this study was able to gain more accurate and comprehensive insight into the ways students interacted with items. This mixed-methods study used verbal protocols to elicit the thinking processes of 36 native Spanish-speaking ELLs and 36 native English-speaking non-ELLs when solving multiple-choice science items. Results from both qualitative and quantitative analyses show that ELLs used a wider variety of actions oriented to making sense of the items than non-ELLs. In contrast, non-ELLs used more problem-solving strategies than ELLs. There were no statistically significant differences in student performance based on the interaction of presence of illustration and linguistic status, or on the main effect of presence of illustration. However, there were significant differences based on the main effect of linguistic status. An interaction between the characteristics of the students, the items, and the illustrations indicates considerable heterogeneity in the ways in which students from both linguistic groups think about and respond to science test items. The results of this study speak to the need for more research involving ELLs in the process of test development, to create test items that do not require ELLs to carry out significantly more actions to make sense of the item than monolingual students.

  2. Improving multiple-choice questions to better assess dental student knowledge: distractor utilization in oral and maxillofacial pathology course examinations.

    PubMed

    McMahan, C Alex; Pinckard, R Neal; Prihoda, Thomas J; Hendricson, William D; Jones, Anne Cale

    2013-12-01

    How many incorrect response options (known as distractors) to use in multiple-choice questions has been the source of considerable debate in the assessment literature, especially relative to its influence on the likelihood of students' guessing the correct answer. This study compared distractor use by second-year dental students in three successive oral and maxillofacial pathology classes that had three different examination question formats and scoring, resulting in different levels of academic performance. One class was given all multiple-choice questions; the other two were given half multiple-choice questions, with and without formula scoring, and half un-cued short-answer questions. A cutoff of use by at least 1 percent of the students was found to identify functioning distractors better than higher cutoffs. The average number of functioning distractors differed among the three classes and did not always correspond to differences in class scores. Increased numbers of functioning distractors were associated with higher question discrimination and greater question difficulty. Fewer functioning distractors fostered more effective student guessing and overestimation of academic achievement. Appropriate identification of functioning distractors is essential for improving examination quality and better estimating actual student knowledge through retrospective use of formula scoring, in which the amount subtracted for incorrect answers is based on the harmonic mean number of functioning distractors. PMID:24319131
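
    The retrospective formula scoring described above can be sketched as follows; the 1 percent cutoff and the harmonic-mean penalty come from the abstract, while the remaining details (function names, data layout) are assumptions made for illustration:

        # Sketch of retrospective formula scoring; details beyond the abstract are assumptions.
        from statistics import harmonic_mean

        def functioning_distractors(choice_counts, n_students, cutoff=0.01):
            """Count distractors chosen by at least `cutoff` (e.g. 1%) of students.
            choice_counts: mapping of each incorrect option to how many students chose it."""
            return sum(1 for n in choice_counts.values() if n / n_students >= cutoff)

        def formula_score(n_right, n_wrong, distractor_counts_per_item, n_students):
            # Harmonic mean number of functioning distractors across the exam's items.
            k = harmonic_mean(
                [max(functioning_distractors(c, n_students), 1)
                 for c in distractor_counts_per_item]
            )
            # Classical correction for guessing: subtract wrong / (number of distractors).
            return n_right - n_wrong / k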

  3. Delay and probability discounting of multiple commodities in smokers and never-smokers using multiple-choice tasks.

    PubMed

    Poltavski, Dmitri V; Weatherly, Jeffrey N

    2013-12-01

    The purpose of the present study was to investigate temporal and probabilistic discounting in smokers and never-smokers, across a number of commodities, using a multiple-choice method. One hundred and eighty-two undergraduate university students, of whom 90 had never smoked, 73 were self-reported light smokers (<10 cigarettes/day), and 17 were heavy smokers (10+ cigarettes/day), completed computerized batteries of delay and probability discounting questions, administered in a multiple-choice format and covering a total of eight commodities. In addition to cigarettes, monetary rewards, and health outcomes, the tasks included novel commodities such as an ideal dating partner and retirement income. The results showed that heavy smokers probability discounted commodities at a significantly shallower rate than never-smokers, suggesting greater risk-taking. No effect of smoking status was observed for delay discounting questions. The only commodity that was probability discounted significantly less than the others was 'finding an ideal dating partner'. The results suggest that probability discounting tasks using the multiple-choice format can discriminate between non-abstaining smokers and never-smokers and could be further explored in the context of behavioral and drug addictions. PMID:24196025
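
    The abstract does not state which discounting models were fitted, but single-parameter hyperbolic forms are standard in this literature; a minimal sketch, assuming that parameterisation:

        # Standard single-parameter discounting models (an assumption for illustration;
        # the authors' exact analysis is not given in the abstract).

        def hyperbolic_delay_value(amount, delay, k):
            """Subjective value of a delayed reward: V = A / (1 + k*D)."""
            return amount / (1.0 + k * delay)

        def hyperbolic_probability_value(amount, p, h):
            """Subjective value of a probabilistic reward: V = A / (1 + h*theta),
            where theta = (1 - p) / p is the odds against receiving it.
            Smaller h means shallower probability discounting, i.e. greater risk-taking."""
            theta = (1.0 - p) / p
            return amount / (1.0 + h * theta)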

  4. Treatment of burns in the first 24 hours: simple and practical guide by answering 10 questions in a step-by-step form

    PubMed Central

    2012-01-01

    Residents in training, medical students, and other staff in the surgical department, emergency room (ER), intensive care unit (ICU), or burn unit face a multitude of questions regarding burn care. Treatment of burns is not always straightforward. Furthermore, national and international guidelines differ from one region to another. On one hand, it is important to understand the pathophysiology, classification of burns, surgical treatment, and the latest updates in burn science. On the other hand, the clinical situation when treating these cases requires clear guidelines covering every single aspect of the treatment procedure. Thus, 10 questions have been organised and discussed in a step-by-step form in order to achieve excellence of education and optimal treatment of burn injuries in the first 24 hours. These 10 questions clearly address referral criteria to the burn unit, primary and secondary survey, estimation of the total burned surface area (%TBSA) and the degree of burns, the resuscitation process, routine interventions, laboratory tests, indications for bronchoscopy and special considerations for inhalation trauma, immediate consultations and referrals, emergency surgery, and admission orders. Understanding and answering the 10 questions will not only cover the management of burns during the first 24 hours but also serve as a clear, interactive guide for educational purposes. PMID:22583548
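
    The abstract refers to %TBSA estimation and the resuscitation process without giving a formula; one widely taught calculation (not necessarily the one adopted in this guide) is the Parkland formula, sketched here:

        # Parkland formula, shown only as a common reference point, not as this paper's guideline:
        # 4 mL x body weight (kg) x %TBSA of crystalloid over 24 h, half in the first 8 h from injury.

        def parkland_24h_volume_ml(weight_kg, tbsa_percent):
            total = 4.0 * weight_kg * tbsa_percent
            return {"total_24h_ml": total,
                    "first_8h_ml": total / 2,   # counted from the time of injury
                    "next_16h_ml": total / 2}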

  5. The Relationship between Student Learning Style and Performance on Various Test Question Formats.

    ERIC Educational Resources Information Center

    Holley, Joyce H.; Jenkins, Elizabeth K.

    1993-01-01

    The relationship between performance on four test formats (multiple-choice theory, multiple-choice quantitative, open-ended theory, open-ended quantitative) and scores on the Kolb Learning Style Inventory was investigated for 49 accounting students. Learning style was significant for all formats except multiple-choice quantitative. (SK)

  6. The Effect of Using Different Weights for Multiple-Choice and Free-Response Item Sections

    ERIC Educational Resources Information Center

    Hendrickson, Amy; Patterson, Brian; Melican, Gerald

    2008-01-01

    Presented at the annual meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weighting can affect the effective weights, validity coefficients, and test reliability of composite scores among test takers.

  7. Do Multiple-Choice Options Inflate Estimates of Vocabulary Size on the VST?

    ERIC Educational Resources Information Center

    Stewart, Jeffrey

    2014-01-01

    Validated under a Rasch framework (Beglar, 2010), the Vocabulary Size Test (VST) (Nation & Beglar, 2007) is an increasingly popular measure of decontextualized written receptive vocabulary size in the field of second language acquisition. However, although the validation indicates that the test has high internal reliability, still unaddressed…

  8. Modified Multiple-Choice Items for Alternate Assessments: Reliability, Difficulty, and Differential Boost

    ERIC Educational Resources Information Center

    Kettler, Ryan J.; Rodriguez, Michael C.; Bolt, Daniel M.; Elliott, Stephen N.; Beddow, Peter A.; Kurz, Alexander

    2011-01-01

    Federal policy on alternate assessment based on modified academic achievement standards (AA-MAS) inspired this research. Specifically, an experimental study was conducted to determine whether tests composed of modified items would have the same level of reliability as tests composed of original items, and whether these modified items helped reduce…

  9. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet. June 1988 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the English 30 Grade 12 Diploma Examinations in Alberta, Canada, this test (to be administered along with a questions booklet) contains the reading selections portion of Part B, the reading component of the June 1988 tests. Representing the genres of fiction, nonfiction, poetry, and drama, the 10 selections consist of:…

  10. Data-mining to build a knowledge representation store for clinical decision support. Studies on curation and validation based on machine performance in multiple choice medical licensing examinations.

    PubMed

    Robson, Barry; Boray, Srinidhi

    2016-06-01

    Extracting medical knowledge by structured data mining of many medical records and from unstructured data mining of natural language source text on the Internet will become increasingly important for clinical decision support. Output from these sources can be transformed into large numbers of elements of knowledge in a Knowledge Representation Store (KRS), here using the notation and to some extent the algebraic principles of the Q-UEL Web-based universal exchange and inference language described previously, rooted in Dirac notation from quantum mechanics and linguistic theory. In a KRS, semantic structures or statements about the world of interest to medicine are analogous to natural language sentences seen as formed from noun phrases separated by verbs, prepositions and other descriptions of relationships. A convenient method of testing and better curating these elements of knowledge is by having the computer use them to take the test of a multiple choice medical licensing examination. It is a venture which perhaps tells us almost as much about the reasoning of students and examiners as it does about the requirements for Artificial Intelligence as employed in clinical decision making. It emphasizes the role of context and of contextual probabilities as opposed to the more familiar intrinsic probabilities, and of a preliminary form of logic that we call presyllogistic reasoning. PMID:27089305

  11. The development and validation of a two-tiered multiple-choice instrument to identify alternative conceptions in earth science

    NASA Astrophysics Data System (ADS)

    Mangione, Katherine Anna

    This study was conducted to determine the reliability and validity of a two-tiered, multiple-choice instrument designed to identify alternative conceptions in earth science. Additionally, this study sought to identify alternative conceptions in earth science held by preservice teachers, to investigate relationships between self-reported confidence scores and understanding of earth science concepts, and to describe relationships between content knowledge, alternative conceptions, and planning instruction in the science classroom. Eighty-seven preservice teachers enrolled in the MAT program participated in this study. Sixty-eight participants were female, twelve were male, and seven chose not to answer. Forty-seven participants were in the elementary certification program, five were in the middle school certification program, and twenty-nine were pursuing secondary certification. Results indicate that the two-tiered, multiple-choice format can be a reliable and valid method for identifying alternative conceptions. Preservice teachers in all certification areas who participated in this study may possess common alternative conceptions previously identified in the literature. Alternative conceptions included: all rivers flow north to south, the shadow of the Earth covers the Moon causing lunar phases, the Sun is always directly overhead at noon, weather can be predicted by animal coverings, and seasons are caused by the Earth's proximity to the Sun. Statistical analyses indicated differences, though not all were significant, among subgroups according to gender and certification area. Generally, males outperformed females, and preservice teachers pursuing middle school certification had the highest scores on the questionnaire, followed by those obtaining secondary certification. Elementary preservice teachers scored the lowest. Additionally, self-reported scores of confidence in one's answers and understanding of the earth science concept in question were analyzed. There was a…

  12. A Novel Multiple Choice Question Generation Strategy: Alternative Uses for Controlled Vocabulary Thesauri in Biomedical-Sciences Education.

    PubMed

    Lopetegui, Marcelo A; Lara, Barbara A; Yen, Po-Yin; Çatalyürek, Ümit V; Payne, Philip R O

    2015-01-01

    Multiple choice questions play an important role in training and evaluating biomedical science students. However, the resource-intensive nature of question generation limits their open availability, restricting their contribution mainly to evaluation purposes. Although applied-knowledge questions require a complex formulation process, the creation of concrete-knowledge questions (i.e., definitions, associations) could be assisted by the use of informatics methods. We envisioned a novel and simple algorithm that exploits validated knowledge repositories and generates concrete-knowledge questions by leveraging concepts' relationships. In this manuscript we present the development and validation of a prototype which successfully produced meaningful concrete-knowledge questions, opening new applications for existing knowledge repositories and potentially benefiting students of all biomedical sciences disciplines. PMID:26958222
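
    The prototype's algorithm is only summarised above; a minimal sketch of the general idea, with a hypothetical thesaurus layout in which distractors are drawn from the definitions of sibling concepts:

        # Illustrative only: the authors' actual algorithm and thesaurus schema are not given
        # in the abstract, so the data layout and names here are hypothetical.
        import random

        def generate_definition_mcq(thesaurus, concept, n_distractors=3):
            """thesaurus: dict mapping concept -> {"definition": str, "siblings": [concepts]}."""
            entry = thesaurus[concept]
            distractor_pool = [thesaurus[s]["definition"] for s in entry["siblings"]
                               if s in thesaurus and s != concept]
            n = min(n_distractors, len(distractor_pool))
            options = random.sample(distractor_pool, n) + [entry["definition"]]
            random.shuffle(options)
            return {"stem": f"Which of the following best defines '{concept}'?",
                    "options": options,
                    "answer": entry["definition"]}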

  13. A Novel Multiple Choice Question Generation Strategy: Alternative Uses for Controlled Vocabulary Thesauri in Biomedical-Sciences Education

    PubMed Central

    Lopetegui, Marcelo A.; Lara, Barbara A.; Yen, Po-Yin; Çatalyürek, Ümit V.; Payne, Philip R.O.

    2015-01-01

    Multiple choice questions play an important role in training and evaluating biomedical science students. However, the resource intensive nature of question generation limits their open availability, reducing their contribution to evaluation purposes mainly. Although applied-knowledge questions require a complex formulation process, the creation of concrete-knowledge questions (i.e., definitions, associations) could be assisted by the use of informatics methods. We envisioned a novel and simple algorithm that exploits validated knowledge repositories and generates concrete-knowledge questions by leveraging concepts’ relationships. In this manuscript we present the development and validation of a prototype which successfully produced meaningful concrete-knowledge questions, opening new applications for existing knowledge repositories, potentially benefiting students of all biomedical sciences disciplines. PMID:26958222

  14. Grade 12 Diploma Examination, English 33. Part B: Reading (Multiple Choice). Readings Booklet.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 33 in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains short reading selections taken from fiction, nonfiction, poetry, and drama, including the following: an excerpt from "Catch-22" (Joseph Heller); "School Thief" (Dennis Potter); "In…

  15. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 30 in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains 10 short reading selections taken from fiction, nonfiction, poetry, and drama, including the following: an excerpt from "Where Did You Go?" "Out." (Robert Paul Smith); "Lines on…

  16. Changing Answers on Multiple-Choice Examinations Taken by Baccalaureate Nursing Students.

    ERIC Educational Resources Information Center

    Nieswiadomy, Rose M.; Arnold, Wilda K.; Garza, Chris

    2001-01-01

    The answer sheets of 122 nursing students showed that 119 changed at least 1 answer; 93.3% of those who changed answers either gained or did not lose points by changing; changing answers on psychiatric nursing exams made more difference than on medical-surgical tests. However, those who made the smallest number of changes tended to have higher…

  17. A Study of Three-option and Four-option Multiple Choice Exams.

    ERIC Educational Resources Information Center

    Cooper, Terence H.

    1988-01-01

    Describes a study used to determine differences in exam reliability, difficulty, and student evaluations. Indicates that when a fourth option was added to the three-option items, the exams became more difficult. Includes methods, results discussion, and tables on student characteristics, whole test analyses, and selected items. (RT)

  18. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 30 in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains 10 short reading selections taken from fiction, nonfiction, poetry and drama, including the following: "At the Age at Which Mozart Was Dead Already" (Ellen Goodman); "Embassy" (W.…

  19. We Don't Live in a Multiple-Choice World: Inquiry and the Common Core

    ERIC Educational Resources Information Center

    Jaeger, Paige

    2012-01-01

    The Common Core raises the bar for states struggling to decide what should be taught or tested. As low-performing schools strive to improve instruction, the blueprint has been defined. The Common Core defines the curriculum in enough detail and specifies ways to teach that content creatively and innovatively, to produce graduates who are problem…

  20. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet. 1986 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 30 in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains 10 short reading selections taken from fiction, nonfiction, poetry, and drama, including the following: "My Magical Metronome" (Lewis Thomas); "Queen Street Trolley" (Dale…

  1. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet. 1988 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examinations in English 30 in Alberta, Canada, this reading test (to be administered along with a questions booklet) includes the following nine short selections taken from fiction, nonfiction, poetry, and drama: "The Biggest Liar in the World" (Harry Mark Petrakis); "Victorian Grandmother" (Margo…

  2. A Comparison of Several Multiple-Choice, Linguistic-Based Item Writing Algorithms.

    ERIC Educational Resources Information Center

    Roid, Gale; Haladyna, Tom

    The technology of transforming sentences from prose instruction into test questions was examined by comparing two methods of selecting sentences (keyword vs. rare singleton), two types of question words (nouns vs. adjectives), and two foil construction methods (writer's choice vs. algorithmic). Four item writers created items using each…

  3. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet. 1987 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 30 in Alberta, Canada, this reading test (to be administered along with a questions booklet) includes the following 10 short selections taken from fiction, nonfiction, poetry, and drama: "Parents as People (with Children)" (Ellen Goodman); "Everybody Knows about the Arctic" (Jim…

  4. Grade 12 Diploma Examination, English 33. Part B: Reading. (Multiple Choice). Readings Booklet. 1988 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking Grade 12 Diploma Examinations in English 33 in Alberta, Canada, this reading test is designed to be administered with a questions booklet. The following short selections taken from fiction, nonfiction, poetry, drama, and day-to-day functional materials are included: (1) "M is for Mother" (Marjorie Riddle); (2) "It's…

  5. Rasch analysis for the evaluation of rank of student response time in multiple choice examinations.

    PubMed

    Thompson, James J; Yang, Tong; Chauvin, Sheila W

    2013-01-01

    The availability of computerized testing has broadened the scope of person assessment beyond the usual accuracy-ability domain to include response time analyses. Because there are contexts in which speed is important, e.g. medical practice, it is important to develop tools by which individuals can be evaluated for speed. In this paper, the ability of Rasch measurement to convert ordinal nonparametric rankings of speed to measures is examined and compared to similar measures derived from parametric analysis of response times (pace) and semi-parametric logarithmic time-scaling procedures. Assuming that similar spans of the measures were used, non-parametric methods of raw ranking or percentile-ranking of persons by questions gave statistically acceptable person estimates of speed virtually identical to the parametric or semi-parametric methods. Because no assumptions were made about the underlying time distributions with ranking, generality of conclusions was enhanced. The main drawbacks of the non-parametric ranking procedures were the lack of information on question duration and the overall assignment by the model of variance to the person by question interaction. PMID:24064578
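
    For reference, the Rasch models underlying such analyses can be written as below; the exact parameterisation used for the ranked speed data is not given in the abstract, so this is a generic sketch:

        # Generic Rasch forms (an assumption for reference; not the paper's exact specification).
        import math

        def rasch_dichotomous(theta, delta):
            """P(success) for person measure theta and item parameter delta."""
            return 1.0 / (1.0 + math.exp(-(theta - delta)))

        def partial_credit_probs(theta, delta, thresholds):
            """Partial credit model for ordered categories 0..m, as typically used for ranked data.
            thresholds: step parameters tau_1..tau_m."""
            logits = [0.0]
            cum = 0.0
            for tau in thresholds:
                cum += theta - delta - tau
                logits.append(cum)
            exps = [math.exp(x) for x in logits]
            z = sum(exps)
            return [e / z for e in exps]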

  6. Comparing delay discounting rates when using the fill-in-the-blank and multiple-choice methods.

    PubMed

    Weatherly, Jeffrey N; Derenne, Adam

    2011-01-01

    Several methods have been devised to measure delay discounting. The present study recruited university students to complete a delay-discounting task involving five different outcomes (finding a dating partner, free cigarettes, winning $100,000, being owed $100,000, and obtaining one's ideal body image) that was administered using either the fill-in-the-blank (FITB) or multiple-choice (MC) method. Results showed that the different administration methods sometimes produced significantly different rates of discounting, the direction of which differed by outcome. Hyperbolic discounting and the area under the discounting curve were nearly always significantly correlated when the FITB method was used but were never significantly correlated when the MC method was used. Discounting across the five outcomes produced a two-factor solution when the FITB data were factor analyzed. The MC data were described by a one-factor solution. The present results illustrate that procedural variables have a potentially profound impact on delay-discounting data, and generalizing from studies on delay discounting should be done with caution until those variables are fully understood. PMID:24836568
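
    The two discounting summaries compared above, a fitted hyperbolic rate and the area under the discounting curve (AUC), are commonly computed as follows; the AUC sketch assumes the standard normalised trapezoid procedure, which the abstract does not spell out:

        # Area under the discounting curve: normalise delays and subjective values to [0, 1],
        # then apply the trapezoid rule. Assumes delays are in ascending order and include
        # delay 0 with value equal to the full amount.
        def discounting_auc(delays, values, amount):
            xs = [d / max(delays) for d in delays]
            ys = [v / amount for v in values]
            return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
                       for i in range(len(xs) - 1))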

  7. Web-MCQ: a set of methods and freely available open source code for administering online multiple choice question assessments.

    PubMed

    Hewson, Claire

    2007-08-01

    E-learning approaches have received increasing attention in recent years. Accordingly, a number of tools have become available to assist the nonexpert computer user in constructing and managing virtual learning environments, and implementing computer-based and/or online procedures to support pedagogy. Both commercial and free packages are now available, with new developments emerging periodically. Commercial products have the advantage of being comprehensive and reliable, but tend to require substantial financial investment and are not always transparent to use. They may also restrict pedagogical choices due to their predetermined ranges of functionality. With these issues in mind, several authors have argued for the pedagogical benefits of developing freely available, open source e-learning resources, which can be shared and further developed within a community of educational practitioners. The present paper supports this objective by presenting a set of methods, along with supporting freely available, downloadable, open source programming code, to allow administration of online multiple choice question assessments to students. PMID:17958158
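
    Web-MCQ's own code is not reproduced in the abstract; purely as a generic illustration of scoring one submitted online MCQ assessment (all names here are hypothetical and unrelated to the package's API):

        # Generic illustration only; this is NOT the Web-MCQ implementation.
        def score_submission(answer_key, responses):
            """answer_key / responses: dicts mapping question id -> chosen option."""
            correct = sum(1 for qid, key in answer_key.items()
                          if responses.get(qid) == key)
            return {"correct": correct,
                    "total": len(answer_key),
                    "percent": 100.0 * correct / len(answer_key)}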

  8. Seizure metaphors in children with epilepsy: A study based on a multiple-choice self-report questionnaire.

    PubMed

    D'Angelosante, Valentina; Tommasi, Marco; Casadio, Claudia; Verrotti, Alberto

    2015-05-01

    The advantages of metaphorical representation have been pointed out in many fields of clinical research (e.g., cancer, HIV, psychogenic nonepileptic seizures). This study aimed at offering a novel contribution showing how children with epilepsy describe the symptomatology of their seizure experiences by means of particular kinds of cognitive metaphors. Twenty-three children with idiopathic generalized epilepsy and thirty-one healthy children were recruited for this study and interviewed with a multiple-choice questionnaire asking them to describe their epileptic seizures by means of suitable metaphors. A psychologist blinded to medical diagnosis assessed and categorized all metaphors. Considering the 89 metaphors produced by the children with epilepsy and the 147 produced by the healthy controls, Agent/Force was the metaphor most often chosen by children with epilepsy, followed by Event/Situation as the second preference. Moreover, comparing the results of the control group with those of the subjects with epilepsy, it was found that controls were oriented towards selecting exogenous forces, while subjects with epilepsy tended to select endogenous forces. In particular, children with epilepsy showed a peculiar preference for an endogenous force resembling the waggle metaphor, which is similar to the effect of a quake's shaking (earthquake or seaquake). The metaphors identified by this research are a useful resource to better understand the seizure experiences of patients with epilepsy, helping to improve clinical treatment. PMID:25934584

  9. Development and Application of a Two-Tier Multiple Choice Diagnostic Instrument To Assess High School Students' Understanding of Inorganic Chemistry Qualitative Analysis.

    ERIC Educational Resources Information Center

    Tan, Kim Chwee Daniel; Goh, Ngoh Khang; Chia, Lian Sai; Treagust, David F.

    2002-01-01

    Describes the development and application of a two-tier multiple choice diagnostic instrument to assess high school students' understanding of inorganic chemistry qualitative analysis. Shows that the Grade 10 students had difficulty understanding the reactions involved in the identification of cations and anions, for example, double decomposition…

  10. A Stratified Study of Students' Understanding of Basic Optics Concepts in Different Contexts Using Two-Tier Multiple-Choice Items

    ERIC Educational Resources Information Center

    Chu, Hye-Eun; Treagust, David F.; Chandrasegaran, A. L.

    2009-01-01

    A large scale study involving 1786 year 7-10 Korean students from three school districts in Seoul was undertaken to evaluate their understanding of basic optics concepts using a two-tier multiple-choice diagnostic instrument consisting of four pairs of items, each of which evaluated the same concept in two different contexts. The instrument, which…

  11. College Students' Ratings of Student Effort, Student Ability and Teacher Input as Correlates of Student Performance on Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Williams, Robert L.; Clark, Lloyd

    2004-01-01

    In the class session following feedback regarding their scores on multiple-choice exams, undergraduate students in a large human development course rated the strength of possible contributors to their exam performance. Students rated items related to their personal effort in preparing for the exam (identified as student effort in the paper), their…

  12. A Comparison of the Performance on Three Multiple Choice Question Papers in Obstetrics and Gynecology Over a Period of Three Years Administered at Five London Medical Schools

    ERIC Educational Resources Information Center

    Stevens, J. M.; And Others

    1977-01-01

    Five of the medical schools in the University of London collaborated in administering one multiple choice question paper in obstetrics and gynecology, and results showed differences in performance between the five schools on questions and alternatives within questions. The rank order of the schools may result from differences in teaching methods.…

  13. Incorporating Multiple-Choice Questions into an AACSB Assurance of Learning Process: A Course-Embedded Assessment Application to an Introductory Finance Course

    ERIC Educational Resources Information Center

    Santos, Michael R.; Hu, Aidong; Jordan, Douglas

    2014-01-01

    The authors offer a classification technique to make a quantitative skills rubric more operational, with the groupings of multiple-choice questions to match the student learning levels in knowledge, calculation, quantitative reasoning, and analysis. The authors applied this classification technique to the mid-term exams of an introductory finance…

  14. How Are the Form and Magnitude of DIF Effects in Multiple-Choice Items Determined by Distractor-Level Invariance Effects?

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2011-01-01

    This article explores how the magnitude and form of differential item functioning (DIF) effects in multiple-choice items are determined by the underlying differential distractor functioning (DDF) effects, as modeled under the nominal response model. The results of a numerical investigation indicated that (a) the presence of one or more nonzero DDF…

  15. Assessing Understanding of the Concept of Function: A Study Comparing Prospective Secondary Mathematics Teachers' Responses to Multiple-Choice and Constructed-Response Items

    ERIC Educational Resources Information Center

    Feeley, Susan Jane

    2013-01-01

    The purpose of this study was to determine whether multiple-choice and constructed-response items assessed prospective secondary mathematics teachers' understanding of the concept of function. The conceptual framework for the study was the Dreyfus and Eisenberg (1982) Function Block. The theoretical framework was Sierpinska's (1992, 1994)…

  16. Difficulty and Discriminability of Introductory Psychology Test Items.

    ERIC Educational Resources Information Center

    Scialfa, Charles; Legare, Connie; Wenger, Larry; Dingley, Louis

    2001-01-01

    Analyzes multiple-choice questions provided in test banks for introductory psychology textbooks. Study 1 offered a consistent picture of the objective difficulty of multiple-choice tests for introductory psychology students, while both studies 1 and 2 indicated that test items taken from commercial test banks have poor psychometric properties…

  17. Another kind of 'BOLD Response': answering multiple-choice questions via online decoded single-trial brain signals.

    PubMed

    Sorger, Bettina; Dahmen, Brigitte; Reithler, Joel; Gosseries, Olivia; Maudoux, Audrey; Laureys, Steven; Goebel, Rainer

    2009-01-01

    The term 'locked-in' syndrome (LIS) describes a medical condition in which the persons concerned are severely paralyzed and at the same time fully conscious and awake. The resulting anarthria makes it impossible for these patients to communicate naturally, which results in diagnostic as well as serious practical and ethical problems. Therefore, developing alternative, muscle-independent means of communication is of prime importance. Such communication means can be realized via brain-computer interfaces (BCIs) circumventing the muscular system by using brain signals associated with preserved cognitive, sensory, and emotional brain functions. Primarily, BCIs based on electrophysiological measures have been developed and applied with remarkable success. Recently, blood flow-based neuroimaging methods, such as functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS), have also been explored in this context. After reviewing recent literature on the development of especially hemodynamically based BCIs, we introduce a highly reliable and easy-to-apply communication procedure that enables untrained participants to motor-independently and relatively effortlessly answer multiple-choice questions based on intentionally generated single-trial fMRI signals that can be decoded online. Our technique takes advantage of the participants' capability to voluntarily influence certain spatio-temporal aspects of the blood oxygenation level-dependent (BOLD) signal: source location (by using different mental tasks), signal onset, and offset. We show that healthy participants are capable of hemodynamically encoding at least four distinct information units on a single-trial level without extensive pretraining and with little effort. Moreover, real-time data analysis based on simple multi-filter correlations allows for automated answer decoding with a high accuracy (94.9%), demonstrating the robustness of the presented method. Following our 'proof of concept', the…
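
    The 'multi-filter correlation' decoding is only summarised above; a simplified template-matching sketch, assuming each answer option is encoded by a distinct expected BOLD time course (the actual filters and pipeline are not given in the abstract):

        # Simplified template-matching decoder, shown as an assumption-laden illustration.
        import numpy as np

        def decode_answer(observed_signal, templates):
            """observed_signal: 1-D BOLD time course from one trial.
            templates: dict mapping answer label -> expected time course of equal length,
            differing e.g. in which region responds and in onset/offset of the response.
            Returns the answer whose template correlates best with the data."""
            scores = {label: np.corrcoef(observed_signal, t)[0, 1]
                      for label, t in templates.items()}
            return max(scores, key=scores.get)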

  18. Use of a multiple choice questionnaire to assess UK prescribing channels' knowledge of helminthology and best practice surrounding anthelmintic use in livestock and horses.

    PubMed

    Easton, Stephanie; Bartley, David J; Hotchkiss, Emily; Hodgkinson, Jane E; Pinchbeck, Gina L; Matthews, Jacqueline B

    2016-06-01

    Grazing livestock and equines are at risk of infection from a variety of helminths, for which the primary method of control has long been the use of anthelmintics. Anthelmintic resistance is now widespread in a number of helminth species across the globe, so it is imperative that best practice control principles be adopted to delay the further spread of resistance. It is the responsibility of all who prescribe anthelmintics (in the UK, veterinarians, suitably qualified persons (SQPs) and pharmacists) to provide adequate information on best practice approaches to parasite control at the point of purchase. Poor uptake of best practice guidelines at farm level has been documented; this could be due to a lack of, or inappropriate, advice at the point of anthelmintic purchase. Therefore, the aim here was to evaluate levels of basic knowledge of helminthology, best practice guidelines, and dispensing legislation among veterinarians and SQPs in the UK through a Multiple Choice Question (MCQ) test that was distributed online via targeted emails and social media sites. For each respondent, the percentage correct was determined (for the MCQ test overall and for subsections) and the results analysed initially using parametric and non-parametric statistics to compare differences between prescribing channels. The results showed that both prescribing channels generally performed well; veterinarians achieved a mean total percentage correct of 79.7% (range 34.0-100%) and SQPs a mean total percentage correct of 75.8% (range 38.5-100%) (p=0.051). The analysis indicated that veterinarians performed better in terms of knowledge of basic helminthology (p=0.001), whilst the SQP group performed better on legislation-type questions (p=0.032). There was no significant difference in knowledge of best practice between the two channels. Multivariable linear regression analysis showed that veterinarians and those answering equine questions only performed significantly better than those…

  19. Higher Retention after a New Take-Home Computerised Test

    ERIC Educational Resources Information Center

    Park, Jooyong; Choi, Byung-Chul

    2008-01-01

    A new computerised testing system was used at home to promote learning and also to save classroom instruction time. The testing system combined the features of short-answer and multiple-choice formats. The questions of the multiple-choice problems were presented without the options so that students had to generate answers for themselves; they…

  20. Reliability and Validity of the Second Level English Proficiency Tests.

    ERIC Educational Resources Information Center

    Stansfield, Charles

    1984-01-01

    Describes the development of the Secondary Level English Proficiency (SLEP) Test specifications and the performance of each item type during administration of the test in other countries. Innovative formats such as multiple-choice cloze and multiple-choice dictation are discussed and described. In addition, the findings of a validity study…

  1. The Design of the Test Format for Tablet Computers in Blended Learning Environments: A Study of the Test Approach-Avoidance Tendency of University Students

    ERIC Educational Resources Information Center

    Kitazawa, Takeshi

    2013-01-01

    This study analyzed effective test formats that utilized tablets for tests in university information basic subjects in blended learning environments. Specifically, three types of test were created: (1) multiple-choice, (2) fill-in-the-blank, and (3) a mixture of multiple-choice and fill-in-the-blank. An analysis focusing on university students'…

  2. Reliability and Levels of Difficulty of Objective Test Items in a Mathematics Achievement Test: A Study of Ten Senior Secondary Schools in Five Local Government Areas of Akure, Ondo State

    ERIC Educational Resources Information Center

    Adebule, S. O.

    2009-01-01

    This study examined the reliability and difficulty indices of multiple choice (MC) and true or false (TF) objective test items in a Mathematics Achievement Test (MAT). The instruments used were two variants of a 50-item mathematics achievement test, based on the multiple choice and true or false test formats respectively. A total of five hundred (500)…

  3. Do Sequentially-Presented Answer Options Prevent the Use of Testwiseness Cues on Continuing Medical Education Tests?

    ERIC Educational Resources Information Center

    Willing, Sonja; Ostapczuk, Martin; Musch, Jochen

    2015-01-01

    Testwiseness--that is, the ability to find subtle cues towards the solution by the simultaneous comparison of the available answer options--threatens the validity of multiple-choice (MC) tests. Discrete-option multiple-choice (DOMC) has recently been proposed as a computerized alternative testing format for MC tests, and presumably allows for a…

  4. The Potential Use of the Discouraging Random Guessing (DRG) Approach in Multiple-Choice Exams in Medical Education.

    ERIC Educational Resources Information Center

    Friedman, Miriam; And Others

    1987-01-01

    Test performances of sophomore medical students on a pretest and final exam (under guessing and no-guessing instructions) were compared. Discouraging random guessing produced test information with improved test reliability and less distortion of item difficulty. More able examinees were less compliant than less able examinees. (Author/RH)

  5. Developing Form Assembly Specifications for Exams with Multiple Choice and Constructed Response Items: Balancing Reliability and Validity Concerns

    ERIC Educational Resources Information Center

    Hendrickson, Amy; Patterson, Brian; Ewing, Maureen

    2010-01-01

    The psychometric considerations and challenges associated with including constructed response items on tests are discussed along with how these issues affect the form assembly specifications for mixed-format exams. Reliability and validity, security and fairness, pretesting, content and skills coverage, test length and timing, weights, statistical…

  6. A univariate analysis of variance design for multiple-choice feeding-preference experiments: A hypothetical example with fruit-eating birds

    NASA Astrophysics Data System (ADS)

    Larrinaga, Asier R.

    2010-01-01

    I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design, for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays each containing a known weight of artificial fruits (red, blue, black, or green) are introduced into the cage, while four equivalent trays are left outside the cage, to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.
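
    A sketch of the proposed univariate analysis under assumed column names, correcting each tray's weight loss by its matched control tray outside the cage (the full design also includes the trial structure described above):

        # Sketch only: column names and the correction step are assumptions for illustration.
        import pandas as pd                      # df below is a pandas DataFrame
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        def preference_anova(df):
            """df columns: species, colour, eaten_g (tray weight loss inside the cage),
            control_loss_g (weight loss of the matched tray left outside the cage)."""
            df = df.assign(consumed_g=df["eaten_g"] - df["control_loss_g"])
            model = smf.ols("consumed_g ~ C(species) * C(colour)", data=df).fit()
            return anova_lm(model, typ=2)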

  7. Development and application of a two-tier multiple choice diagnostic instrument to assess high school students' understanding of inorganic chemistry qualitative analysis

    NASA Astrophysics Data System (ADS)

    Tan, Kim Chwee Daniel; Khang Goh, Ngoh; Sai Chia, Lian; Treagust, David F.

    2002-04-01

    This article describes the development and application of a two-tier multiple choice diagnostic instrument to assess high school students' understanding of inorganic chemistry qualitative analysis. The development of the diagnostic instrument was guided by the framework outlined by Treagust. The instrument was administered to 915 Grade 10 students (15 to 17 years old) from 11 schools after they had learned the theory involved in qualitative analysis and after a series of qualitative analysis practical sessions. The Cronbach alpha reliability of the instrument was .68, the facility indices ranged from .17 to .48, and the discrimination indices ranged from .20 to .53. The study showed that the Grade 10 students had difficulty understanding the reactions involved in the identification of cations and anions, for example, double decomposition reactions, the formation and reaction of complex salts, and thermal decomposition. The findings of the study and literature on practical work were used to develop a qualitative analysis teaching package.
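
    The facility and discrimination indices reported above are commonly computed as the proportion of students answering an item correctly and the difference in that proportion between high- and low-scoring groups; a sketch under that assumption (the study's exact computation may differ):

        # Common item statistics: facility (difficulty) and upper-minus-lower discrimination.
        def item_statistics(item_correct, total_scores, group_fraction=0.27):
            """item_correct: list of 0/1 for one item; total_scores: test totals, same order."""
            n = len(item_correct)
            facility = sum(item_correct) / n
            ranked = [c for _, c in sorted(zip(total_scores, item_correct))]
            g = max(1, int(round(group_fraction * n)))
            lower, upper = ranked[:g], ranked[-g:]
            discrimination = sum(upper) / g - sum(lower) / g
            return facility, discrimination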

  8. The Role of Essay Tests Assessment in e-Learning: A Japanese Case Study

    ERIC Educational Resources Information Center

    Nakayama, Minoru; Yamamoto, Hiroh; Santiago, Rowena

    2010-01-01

    e-Learning has some restrictions on how learning performance is assessed. Online testing is usually in the form of multiple-choice questions, without any essay type of learning assessment. Major reasons for employing multiple-choice tasks in e-learning include ease of implementation and ease of managing learner's responses. To address this…

  9. Exploring Secondary Students' Knowledge and Misconceptions about Influenza: Development, validation, and implementation of a multiple-choice influenza knowledge scale

    NASA Astrophysics Data System (ADS)

    Romine, William L.; Barrow, Lloyd H.; Folk, William R.

    2013-07-01

    Understanding infectious diseases such as influenza is an important element of health literacy. We present a fully validated knowledge instrument called the Assessment of Knowledge of Influenza (AKI) and use it to evaluate knowledge of influenza, with a focus on misconceptions, in Midwestern United States high-school students. A two-phase validation process was used. In phase 1, an initial factor structure was calculated based on 205 students of grades 9-12 at a rural school. In phase 2, one- and two-dimensional factor structures were analyzed from the perspectives of classical test theory and the Rasch model using structural equation modeling and principal components analysis (PCA) on Rasch residuals, respectively. Rasch knowledge measures were calculated for 410 students from 6 school districts in the Midwest, and misconceptions were verified through the χ2 test. Eight items measured knowledge of flu transmission, and seven measured knowledge of flu management. While alpha reliability measures for the subscales were acceptable, Rasch person reliability measures and PCA on residuals advocated for a single-factor scale. Four misconceptions were found, which have not been previously documented in high-school students. The AKI is the first validated influenza knowledge assessment, and can be used by schools and health agencies to provide a quantitative measure of impact of interventions aimed at increasing understanding of influenza. This study also adds significantly to the literature on misconceptions about influenza in high-school students, a necessary step toward strategic development of educational interventions for these students.

  10. The undergraduate curriculum of Faculty of Medicine and Health Sciences, Universiti Malaysia Sarawak in terms of Harden's 10 questions.

    PubMed

    Malik, Alam Sher; Malik, Rukhsana Hussain

    2002-11-01

    The curriculum of the Faculty of Medicine and Health Sciences (FMHS) is designed particularly to cater for the health needs of the State of Sarawak, Malaysia. The framework of the curriculum is built on four strands: biological knowledge, clinical skills, behavioural and population aspects. The training is community based and a graduate of FMHS is expected to possess the ability to deal with many ethnic groups with different cultures and beliefs; expertise in tropical infectious diseases; skills to deal with emergencies such as snakebite and near drowning; qualities of an administrator, problem-solver and community leader; and proficiency in information and communication technology. The content of the curriculum strives for commitment to lifelong learning and professional values. The FMHS has adopted a 'mixed economy' of education strategies and a 'mixed menu approach' to test a wide range of curriculum outcomes. The FMHS fosters intellectual and academic pursuits, encourages friendliness and a sense of social responsibility and businesslike efficiency. PMID:12623455

  11. Do Students Know What They Know and What They Don't Know? Using a Four-Tier Diagnostic Test to Assess the Nature of Students' Alternative Conceptions

    ERIC Educational Resources Information Center

    Caleon, Imelda S.; Subramaniam, R.

    2010-01-01

    This study reports on the development and application of a four-tier multiple-choice (4TMC) diagnostic instrument, which has not been reported in the literature. It is an enhanced version of the two-tier multiple-choice (2TMC) test. As in 2TMC tests, its answer and reason tiers measure students' content knowledge and explanatory knowledge,…

  12. General Chemistry Students' Understanding of the Chemistry Underlying Climate Science and the Development of a Two-Tiered Multiple-Choice Diagnostic Instrument

    NASA Astrophysics Data System (ADS)

    Versprille, A.; Towns, M.; Mahaffy, P.; Martin, B.; McKenzie, L.; Kirchhoff, M.

    2013-12-01

    As part of the NSF-funded Visualizing the Chemistry of Climate Change (VC3) project, we have developed a chemistry-of-climate-science diagnostic instrument for use in general chemistry courses, based on twenty-four student interviews. We based our interview protocol on misconceptions identified in the research literature and on the essential principles of climate change outlined in the CCSP document that pertain to chemistry (CCSP, 2009). The undergraduate student interviews elicited their understanding of the greenhouse effect, global warming, climate change, greenhouse gases, climate, and weather, and the findings from these interviews informed and guided the development of the multiple-choice diagnostic instrument. Our analysis and findings from the interviews indicate that students tend to confuse the greenhouse effect, global warming, and the ozone layer; in terms of chemistry concepts, students lack a particulate-level understanding of greenhouse gases, which prevents them from fully conceptualizing the greenhouse effect and climate change. Details of the findings from the interviews, the development of the diagnostic instrument, and preliminary findings from its full implementation will be shared.

  13. Linking neuroscientific research on decision making to the educational context of novice students assigned to a multiple-choice scientific task involving common misconceptions about electrical circuits

    PubMed Central

    Potvin, Patrice; Turmel, Élaine; Masson, Steve

    2014-01-01

    Functional magnetic resonance imaging was used to identify the brain-based mechanisms of uncertainty and certainty associated with answers to multiple-choice questions involving common misconceptions about electric circuits. Twenty-two scientifically novice participants (humanities and arts college students) were asked, in an fMRI study, whether or not they thought the light bulbs in images presenting electric circuits were lighted up correctly, and if they were certain or uncertain of their answers. When participants reported that they were unsure of their responses, analyses revealed significant activations in brain areas typically involved in uncertainty (anterior cingulate cortex, anterior insula cortex, and superior/dorsomedial frontal cortex) and in the left middle/superior temporal lobe. Certainty was associated with large bilateral activations in the occipital and parietal regions usually involved in visuospatial processing. Correct-and-certain answers were associated with activations that suggest a stronger mobilization of visual attention resources when compared to incorrect-and-certain answers. These findings provide insights into brain-based mechanisms of uncertainty that are activated when common misconceptions, identified as such by science education research literature, interfere in decision making in a school-like task. We also discuss the implications of these results from an educational perspective. PMID:24478680

  14. 10 Questions about Independent Reading

    ERIC Educational Resources Information Center

    Truby, Dana

    2012-01-01

    Teachers know that establishing a robust independent reading program takes more than giving kids a little quiet time after lunch. But how do they set up a program that will maximize their students' gains? Teachers have to know their students' reading levels inside and out, help them find just-right books, and continue to guide them during…

  15. Stimulus Seeks Enriched Tests

    ERIC Educational Resources Information Center

    Sawchuk, Stephen

    2009-01-01

    No matter where teachers, state officials, and testing experts stand on the debate about school accountability, they generally agree that the United States' current multiple-choice-dominated K-12 tests are, to use language borrowed from the No Child Left Behind (NCLB) Act, "in need of improvement." Now, federal officials are signaling that…

  16. Test Design Project: Studies in Test Bias. Annual Report.

    ERIC Educational Resources Information Center

    McArthur, David

    Item bias in a multiple-choice test can be detected by appropriate analyses of the persons x items scoring matrix. This permits comparison of groups of examinees tested with the same instrument. The test may be biased if it is not measuring the same thing in comparable groups, if groups are responding to different aspects of the test items, or if…

  17. Emergency physician’s perception of cultural and linguistic barriers in immigrant care: results of a multiple-choice questionnaire in a large Italian urban emergency department

    PubMed Central

    Numeroso, Filippo; Benatti, Mario; Pizzigoni, Caterina; Sartori, Elisabetta; Lippi, Giuseppe; Cervellin, Gianfranco

    2015-01-01

    BACKGROUND: Poor communication with immigrants can lead to inappropriate use of healthcare services, greater risk of misdiagnosis, and lower compliance with treatment. As precise information about communication between emergency physicians (EPs) and immigrants is lacking, we analyzed difficulties in communicating with immigrants in the emergency department (ED) and their possible associations with demographic data, geographical origin and clinical characteristics. METHODS: In an ED with approximately 85 000 visits per year, a multiple-choice questionnaire was given to the EPs 4 months after discharge of each immigrant in 2011. RESULTS: Linguistic comprehension was optimal or partial in the majority of patients. Significant barriers were noted in nearly one-fourth of patients, only half of whom had compatriots available who were able to translate. Linguistic barriers were mainly found in older and sicker patients; they were also frequently seen in patients coming from western Africa and southern Europe. Non-linguistic barriers were perceived by EPs in a minority of patients, more frequently in the elderly and in frequent attenders. Factors independently associated with poor final comprehension were linguistic barriers, non-linguistic obstacles, the absence of intermediaries, and the presence of patient fear and hostility; the latter is probably a consequence, not a cause, of poor comprehension. CONCLUSION: Linguistic and non-linguistic barriers, although quite infrequent, are the main factors that compromise communication with immigrants in the ED, with negative effects especially on elderly and more seriously ill patients, as well as on physician satisfaction and the appropriate use of services. PMID:26056541

  18. Handbook for Driving Knowledge Testing.

    ERIC Educational Resources Information Center

    Pollock, William T.; McDole, Thomas L.

    Materials intended for driving knowledge test development for use by operational licensing and education agencies are presented. A pool of 1,313 multiple choice test items is included, consisting of sets of specially developed and tested items covering principles of safe driving, legal regulations, and traffic control device knowledge pertinent to…

  19. Test Pool Questions, Area III.

    ERIC Educational Resources Information Center

    Sloan, Jamee Reid

    This manual contains multiple choice questions to be used in testing students on nurse training objectives. Each test includes several questions covering each concept. The concepts in section A, medical surgical nursing, are diseases of the following systems: musculoskeletal; central nervous; cardiovascular; gastrointestinal; urinary and male…

  20. Relationships among Testing Medium, Test Performance, and Testing Time of High School Students Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Erin, Jane N.; Hong, Sunggye; Schoch, Christina; Kuo, YaJu

    2006-01-01

    This study compared the test scores and time required by high school students who are blind, sighted, or have low vision to complete tests administered in written and oral formats. The quantitative results showed that the blind students performed better on multiple-choice tests in braille and needed more time while taking tests in braille. The…

  1. The Discourse Co-operation Test.

    ERIC Educational Resources Information Center

    Friel, Michael

    1984-01-01

    Proposes a new type of oral test, based on Grice's Co-operative Principle and its dependent maxims, to measure communicative competence. Describes the design, administration, and scoring of this timed, multiple-choice test and gives sample test materials. Discusses applications to testing languages for special purposes and ways of presenting the…

  2. Pursuing the Qualities of a "Good" Test

    ERIC Educational Resources Information Center

    Coniam, David

    2014-01-01

    This article examines the issue of the quality of teacher-produced tests, limiting itself in the current context to objective, multiple-choice tests. The article investigates a short, two-part 20-item English language test. After a brief overview of the key test qualities of reliability and validity, the article examines the two subtests in terms…

  3. [Nursing] Test Pool Questions. Area II.

    ERIC Educational Resources Information Center

    Watkins, Nettie; Patton, Bob

    This manual consists of area 2 test pool questions which are designed to assist instructors in selecting appropriate questions to help prepare practical nursing students for the Oklahoma state board exam. Multiple choice questions are utilized to facilitate testing of nursing 2 curriculum objectives. Each test contains questions covering each…

  4. [Nursing] Test Pool Questions. Area I.

    ERIC Educational Resources Information Center

    Watkins, Nettie; Patton, Bob

    This manual consists of area 1 test pool questions which are designed to assist instructors in selecting appropriate questions to help prepare practical nursing students for the Oklahoma state board exam. Multiple choice questions are utilized to facilitate testing of nursing 1 curriculum objectives. Each test contains questions covering each…

  5. Paying for Better Test Scores

    ERIC Educational Resources Information Center

    Eisenkopf, Gerald

    2011-01-01

    The paper investigates if the provision of financial incentives has an impact on the performance of students in educational tests. The analysis is based on data from an experiment with high school students who answered multiple-choice items from the Third International Mathematics and Science Study (TIMSS). As in TIMSS, the setup did not…

  6. Equating of Multi-Facet Tests across Administrations

    ERIC Educational Resources Information Center

    Lunz, Mary; Suanthong, Surintorn

    2011-01-01

    The desirability of test equating to maintain the same criterion standard from test administration to test administration has long been accepted for multiple choice tests. The same consistency of expectations is desirable for performance tests, especially if they are part of a licensure or certification process or used for other high stakes…

  7. Roofing Workbook and Tests: Rigid Roofing.

    ERIC Educational Resources Information Center

    Klingensmith, Robert, Ed.

    This document is one of a series of nine individual units of instruction for use in roofing apprenticeship classes in California. The unit consists of a workbook and test. Eight topics are covered in the workbook and corresponding multiple-choice tests. For each topic, objectives and information sheets are provided. Information sheets are…

  8. Item Analysis in Introductory Economics Testing.

    ERIC Educational Resources Information Center

    Tinari, Frank D.

    1979-01-01

    Computerized analysis of multiple choice test items is explained. Examples of item analysis applications in the introductory economics course are discussed with respect to three objectives: to evaluate learning; to improve test items; and to help improve classroom instruction. Problems, costs and benefits of the procedures are identified. (JMD)

  9. Standardized Reading Tests and the Postsecondary Reading Curriculum.

    ERIC Educational Resources Information Center

    Wood, Nancy V.

    To help college reading teachers develop an awareness of what standardized reading tests do and do not reveal about students' reading abilities, a study examined the testing of reading and criticized four major standardized tests. Results indicated that reading is tested through (1) reading passages accompanied by multiple choice questions, (2)…

  10. Test Scores Count! A Handbook for Teaching Test-Taking Skills.

    ERIC Educational Resources Information Center

    Koenigs, Sharon

    This handbook provides guidelines for teaching test-taking skills to students of all grade levels to help the students raise their standardized test scores. Topics covered include: understanding instructions and following directions, efficient use of time, intelligent guessing, and application of special strategies for multiple-choice and…

  11. Development of Achievement Test: Validity and Reliability Study for Achievement Test on Matter Changing

    ERIC Educational Resources Information Center

    Kara, Filiz; Celikler, Dilek

    2015-01-01

    For "Matter Changing" unit included in the Secondary School 5th Grade Science Program, it is intended to develop a test conforming the gains described in the program, and that can determine students' achievements. For this purpose, a multiple-choice test of 48 questions is arranged, consisting of 8 questions for each gain included in the…

  12. Test Your Sodium Smarts

    MedlinePlus

    ... You may be surprised to learn how much sodium is in many foods. Sodium, including sodium chloride ... foods with little or no salt. Test your sodium smarts by answering these 10 questions about which ...

  13. A Method for Writing Open-Ended Curved Arrow Notation Questions for Multiple-Choice Exams and Electronic-Response Systems

    ERIC Educational Resources Information Center

    Ruder, Suzanne M.; Straumanis, Andrei R.

    2009-01-01

    A critical stage in the process of developing a conceptual understanding of organic chemistry is learning to use curved arrow notation. From this stems the ability to predict reaction products and mechanisms beyond the realm of memorization. Since evaluation (i.e., testing) is known to be a key driver of student learning, it follows that a new…

  14. ACER Chemistry Test Item Collection. ACER Chemtic Year 12.

    ERIC Educational Resources Information Center

    Australian Council for Educational Research, Hawthorn.

    The chemistry test item bank contains 225 multiple-choice questions suitable for diagnostic and achievement testing; a three-page teacher's guide; an answer key with item facilities; an answer sheet; and a 45-item sample achievement test. Although written for the new grade 12 chemistry course in Victoria, Australia, the items are widely applicable.…

  15. Test of Logical Thinking in Science. Appendix B.

    ERIC Educational Resources Information Center

    Moodie, Allan G.; Robinson, T. E.

    This 20-item science test (18 multiple choice and 2 essay) is hand scored and has a 30 minute time limit. Analysis of raw scores by "t" test is provided. There has been limited field testing. See TM 000 959. (DLG)

  16. Development of a Test of Experimental Problem-Solving Skills.

    ERIC Educational Resources Information Center

    Ross, John A.; Maynes, Florence J.

    1983-01-01

    Multiple-choice tests were constructed for seven problem-solving skills using learning hierarchies based on expert-novice differences and refined in three phases of field testing. Includes test reliabilities (sufficient for making judgments of group performance but insufficient in single-administration for individual assessment), validity, and…

  17. Construction of Valid and Reliable Test for Assessment of Students

    ERIC Educational Resources Information Center

    Osadebe, P. U.

    2015-01-01

    The study was carried out to construct a valid and reliable test in Economics for secondary school students. Two research questions were drawn to guide the establishment of validity and reliability for the Economics Achievement Test (EAT). It is a multiple choice objective test of five options with 100 items. A sample of 1000 students was randomly…

  18. Writing ConcepTests for a Multivariable Calculus Class.

    ERIC Educational Resources Information Center

    Schlatter, Mark D.

    2002-01-01

    Discusses one way of addressing the difficulty of mastering a large number of concepts through the use of ConcepTests; that is, multiple choice questions given in a lecture that test understanding as opposed to calculation. Investigates various types of ConcepTests and the material they can cover. (Author/KHR)

  19. Project Physics Tests 6, The Nucleus.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 6 are presented in this booklet. Included are 70 multiple-choice and 24 problem-and-essay questions. Nuclear physics fundamentals are examined with respect to the shell model, isotopes, neutrons, protons, nuclides, charge-to-mass ratios, alpha particles, Becquerel's discovery, gamma rays, cyclotrons,…

  20. Project Physics Tests 4, Light and Electromagnetism.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 4 are presented in this booklet. Included are 70 multiple-choice and 22 problem-and-essay questions. Concepts of light and electromagnetism are examined on charges, reflection, electrostatic forces, electric potential, speed of light, electromagnetic waves and radiations, Oersted's and Faraday's work,…

  1. Accountability Is More than a Test Score

    ERIC Educational Resources Information Center

    Turnipseed, Stephan; Darling-Hammond, Linda

    2015-01-01

    The number one quality business leaders look for in employees is creativity, and yet the U.S. education system undermines the development of the higher-order skills that promote creativity by its dogged focus on multiple-choice tests. Stephan Turnipseed and Linda Darling-Hammond discuss the kind of rich accountability system that will help students…

  2. The Cloze Procedure as a Progress Test.

    ERIC Educational Resources Information Center

    Stansfield, Charles

    This paper reports on a pilot study conducted to determine the possible use of the cloze procedure as a substitute measure of achievement in a second-year Spanish culture and civilization class. Twenty students enrolled in a third-semester Spanish class at the University of Colorado were simultaneously given multiple choice tests and cloze tests…

  3. Project Physics Tests 1, Concepts of Motion.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 1 are presented in this booklet, consisting of 70 multiple-choice and 20 problem-and-essay questions. Concepts of motion are examined with respect to velocities, acceleration, forces, vectors, Newton's laws, and circular motion. Suggestions are made for time consumption in answering some items. Besides…

  4. Student Assessment System. Domain Referenced Tests. Allied Health Occupations/Practical Nursing. Volume II: Theory.

    ERIC Educational Resources Information Center

    Campbell, Gene, Comp.; Simpson, Bruce, Comp.

    These written domain referenced tests (DRTs) for the area of allied health occupations/practical nursing test cognitive abilities or knowledge of theory. Introductory materials describe domain referenced testing and test development. Each multiple choice test includes a domain statement, describing the behavior and content of the domain, and a…

  5. A Retrospective and an Analysis of Roles of Mandated Testing in Education Reform.

    ERIC Educational Resources Information Center

    Archbald, Douglas A.; Porter, Andrew C.

    The role and influence of mandated testing in educational reform are reviewed. Mandated testing refers to large-scale (districtwide or statewide) multiple-choice testing programs used for policy purposes of evaluation and accountability, which includes nationally normed standardized achievement tests and tests custom-developed to reflect state and…

  6. Student Assessment System. Domain Referenced Tests. Transportation/Automotive Mechanics. Volume II: Theory. Georgia Vocational Education Program Articulation.

    ERIC Educational Resources Information Center

    Watkins, James F., Comp.

    These written domain referenced tests (DRTs) for the area of transportation/automotive mechanics test cognitive abilities or knowledge of theory. Introductory materials describe domain referenced testing and test development. Each multiple choice test includes a domain statement, describing the behavior and content of the domain, and a test item…

  7. ACER Chemistry Test Item Collection (ACER CHEMTIC Year 12 Supplement).

    ERIC Educational Resources Information Center

    Australian Council for Educational Research, Hawthorn.

    This publication contains 317 multiple-choice chemistry test items related to topics covered in the Victorian (Australia) Year 12 chemistry course. It allows teachers access to a range of items suitable for diagnostic and achievement purposes, supplementing the ACER Chemistry Test Item Collection--Year 12 (CHEMTIC). The topics covered are: organic…

  8. Food Service Worker. Dietetic Support Personnel Achievement Test.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater.

    This guide contains a series of multiple-choice items and guidelines to assist instructors in composing criterion-referenced tests for use in the food service worker component of Oklahoma's Dietetic Support Personnel training program. Test items addressing each of the following occupational duty areas are provided: human relations; personal…

  9. Food Production Worker. Dietetic Support Personnel Achievement Test.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater.

    This guide contains a series of multiple-choice items and guidelines to assist instructors in composing criterion-referenced tests for use in the food production worker component of Oklahoma's Dietetic Support Personnel training program. Test items addressing each of the following occupational duty areas are provided: human relations; hygiene and…

  10. My Child Doesn't Test Well. Carnegie Perspectives

    ERIC Educational Resources Information Center

    Bond, Lloyd

    2007-01-01

    The writer examines a variety of reasons why test performance may not always be a valid measure of a person's competence or potential. Citing that a sizable percentage of students perform well in their schoolwork but poorly on standardized, multiple-choice tests, Bond defines and discusses four candidates as source factors for the phenomenon: (1)…

  11. Can Relevant Grammatical Cues Result in Invalid Test Items?

    ERIC Educational Resources Information Center

    Plake, Barbara S.; Huntley, Renee M.

    1984-01-01

    Two studies examined the effect of making the correct answer of a multiple choice test item grammatically consistent with the item. American College Testing Assessment experimental items were constructed to investigate grammatical compliance for plural-singular and vowel-consonant agreement. Results suggest…

  12. Reducing Listening Test Anxiety through Various Forms of Listening Support

    ERIC Educational Resources Information Center

    Chang, Anna Ching-Shyang; Read, John

    2008-01-01

    Foreign language learners typically experience considerable anxiety about taking listening tests. This study investigated how four forms of listening support (pre-teaching of content and vocabulary, question preview, and repeated input) affect the anxiety levels of college students in Taiwan taking a multiple-choice achievement test, which counts…

  13. Ethnic DIF in Reading Tests with Mixed Item Formats

    ERIC Educational Resources Information Center

    Taylor, Catherine S.; Lee, Yoonsun

    2011-01-01

    This article presents a study of ethnic Differential Item Functioning (DIF) for 4th-, 7th-, and 10th-grade reading items on a state criterion-referenced achievement test. The tests, administered 1997 to 2001, were composed of multiple-choice and constructed-response items. Item performance by focal groups (i.e., students from Asian/Pacific Island,…

  14. Use of the Talking Tactile Tablet in Mathematics Testing.

    ERIC Educational Resources Information Center

    Landau, Steven; Russell, Michael; Gourgey, Karen; Erin, Jane N.; Cowan, Jennifer

    2003-01-01

    This article describes an experimental system for administering multiple-choice math tests to students who are visually impaired or have other print disabilities. Using a new audio-tactile computer peripheral device called the Talking Tactile Tablet, a preliminary version of a self-voicing test was created for eight participants. (Contains…

  15. Investigation of Response Changes in the GRE Revised General Test

    ERIC Educational Resources Information Center

    Liu, Ou Lydia; Bridgeman, Brent; Gu, Lixiong; Xu, Jun; Kong, Nan

    2015-01-01

    Research on examinees' response changes on multiple-choice tests over the past 80 years has yielded some consistent findings, including that most examinees make score gains by changing answers. This study expands the research on response changes by focusing on a high-stakes admissions test--the Verbal Reasoning and Quantitative Reasoning measures…

  16. Food Service Supervisor. Dietetic Support Personnel Achievement Test.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater.

    This guide contains a series of multiple-choice items and guidelines to assist instructors in composing criterion-referenced tests for use in the food service supervisor component of Oklahoma's Dietetic Support Personnel training program. Test items addressing each of the following occupational duty areas are provided: human relations; nutrient…

  17. Australian Chemistry Test Item Bank: Years 11 & 12. Volume 1.

    ERIC Educational Resources Information Center

    Commons, C., Ed.; Martin, P., Ed.

    Volume 1 of the Australian Chemistry Test Item Bank, consisting of two volumes, contains nearly 2000 multiple-choice items related to the chemistry taught in Year 11 and Year 12 courses in Australia. Items which were written during 1979 and 1980 were initially published in the "ACER Chemistry Test Item Collection" and in the "ACER Chemistry Test…

  18. Electronics. Criterion-Referenced Test (CRT) Item Bank.

    ERIC Educational Resources Information Center

    Davis, Diane, Ed.

    This document contains 519 criterion-referenced multiple choice and true or false test items for a course in electronics. The test item bank is designed to work with both the Vocational Instructional Management System (VIMS) and the Vocational Administrative Management System (VAMS) in Missouri. The items are grouped into 15 units covering the…

  19. Item Writing for Domain-Based Tests of Prose Learning.

    ERIC Educational Resources Information Center

    Roid, Gale; And Others

    Differences among test item writers and among different rules for writing multiple choice items were investigated. Items testing comprehension of a prose passage were varied according to five factors: (1) information density of the passage; (2) item writer; (3) deletion of nouns, as opposed to adjectives, from the sentence in order to construct…

  20. Auto Mechanics. Criterion-Referenced Test (CRT) Item Bank.

    ERIC Educational Resources Information Center

    Tannehill, Dana, Ed.

    This document contains 546 criterion-referenced multiple choice and true or false test items for a course in auto mechanics. The test item bank is designed to work with both the Vocational Instructional Management System (VIMS) and Vocational Administrative Management System (VAMS) in Missouri. The items are grouped into 35 units covering the…

  1. Cooperative Industrial/Vocational Education. Test Items and Assessment Techniques.

    ERIC Educational Resources Information Center

    Smith, Clifton L.; Elias, Julie Whitaker

    This document contains multiple-choice test items and assessment techniques in the form of instructional management plans for Missouri's cooperative industrial-vocational education core curriculum. The test items and techniques are relevant to these 15 occupational duties: (1) career research and planning; (2) computer awareness; (3) employment…

  2. An examination of factors contributing to a reduction in subgroup differences on a constructed-response paper-and-pencil test of scholastic achievement.

    PubMed

    Edwards, Bryan D; Arthur, Winfred

    2007-05-01

    The authors investigated subgroup differences on a multiple-choice and constructed-response test of scholastic achievement in a sample of 197 African American and 258 White test takers. Although both groups had lower mean scores on the constructed-response test, the results showed a 39% reduction in subgroup differences compared with the multiple-choice test. The results demonstrate that the lower subgroup differences were explained by more favorable test perceptions for African Americans on the constructed-response test. In addition, the two test formats displayed comparable levels of criterion-related validity. The results suggest that the constructed-response test format may be a viable alternative to the traditional multiple-choice test format in efforts to simultaneously use valid predictors of performance and minimize subgroup differences in high-stakes testing. PMID:17484558

  3. Using Multigroup Confirmatory Factor Analysis to Test Measurement Invariance in Raters: A Clinical Skills Examination Application

    ERIC Educational Resources Information Center

    Kahraman, Nilufer; Brown, Crystal B.

    2015-01-01

    Psychometric models based on structural equation modeling framework are commonly used in many multiple-choice test settings to assess measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…

  4. The Positive and Negative Effects of Science Concept Tests on Student Conceptual Understanding

    ERIC Educational Resources Information Center

    Chang, Chun-Yen; Yeh, Ting-Kuang; Barufaldi, James P.

    2010-01-01

    This study explored the phenomenon of testing effect during science concept assessments, including the mechanism behind it and its impact upon a learner's conceptual understanding. The participants consisted of 208 high school students, in either the 11th or 12th grade. Three types of tests (traditional multiple-choice test, correct concept test,…

  5. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937

  6. Multiple Choice: Trends in Dining Design.

    ERIC Educational Resources Information Center

    Swanquist, Barry

    1999-01-01

    Examines changes in the traditional school dining hall and the prevailing trends in food-service design. Explores dining-hall space flexibility and multi-functionality and the need to cater to student preferences for brand names and choice. (GR)

  7. The Multiple Choices of Sex Education

    ERIC Educational Resources Information Center

    Hamilton, Rashea; Sanders, Megan; Anderman, Eric M.

    2013-01-01

    Sex education in middle and high school health classes is critically important because it frequently comprises the primary mechanism for conveying information about sexual health to adolescents. Deliver evidence-based information on HIV and pregnancy prevention practices and they will be less likely to engage in risky sexual behaviors, the theory…

  8. Ontology-based multiple choice question generation.

    PubMed

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937

  9. 10 Questions: Approaches to Research Funding.

    PubMed

    2016-07-01

    We asked four stem cell scientists who recently started their labs or expanded their research programs to share their insights and approaches to obtaining funding. We present highlights from their interview responses here. PMID:27392222

  10. 32 CFR 287.10 - Questions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... INFORMATION ACT PROGRAM DEFENSE INFORMATION SYSTEMS AGENCY FREEDOM OF INFORMATION ACT PROGRAM § 287.10..., faxes, and electronic mail. FOIA requests should be addressed as follows: Defense Information...

  11. 32 CFR 287.10 - Questions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... INFORMATION ACT PROGRAM DEFENSE INFORMATION SYSTEMS AGENCY FREEDOM OF INFORMATION ACT PROGRAM § 287.10..., faxes, and electronic mail. FOIA requests should be addressed as follows: Defense Information...

  12. Constructed-Response Test Questions: Why We Use Them; How We Score Them. R&D Connections. Number 11

    ERIC Educational Resources Information Center

    Livingston, Samuel A.

    2009-01-01

    To many people, standardized testing means multiple-choice testing. However, some tests contain questions that require the test taker to produce the answer, rather than simply choosing it from a list. The required response can be as simple as the writing of a single word or as complex as the design of a laboratory experiment to test a scientific…

  13. American Sign Language Comprehension Test: A Tool for Sign Language Researchers

    ERIC Educational Resources Information Center

    Hauser, Peter C.; Paludneviciene, Raylene; Riddle, Wanda; Kurz, Kim B.; Emmorey, Karen; Contreras, Jessica

    2016-01-01

    The American Sign Language Comprehension Test (ASL-CT) is a 30-item multiple-choice test that measures ASL receptive skills and is administered through a website. This article describes the development and psychometric properties of the test based on a sample of 80 college students including deaf native signers, hearing native signers, deaf…

  14. An Explanatory Item Response Theory Approach for a Computer-Based Case Simulation Test

    ERIC Educational Resources Information Center

    Kahraman, Nilüfer

    2014-01-01

    Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…

  15. Two-Dimensional, Implicit Confidence Tests as a Tool for Recognizing Student Misconceptions

    ERIC Educational Resources Information Center

    Klymkowsky, Michael W.; Taylor, Linda B.; Spindler, Shana R.; Garvin-Doxas, R. Kathy

    2006-01-01

    The misconceptions that students bring with them, or that arise during instruction, are a critical barrier to learning. Implicit-confidence tests, a simple modification of the multiple-choice test, can be used as a strategy for recognizing student misconceptions. An important issue, however, is whether such tests are gender-neutral. We analyzed…

  16. Developing Information Skills Test for Malaysian Youth Students Using Rasch Analysis

    ERIC Educational Resources Information Center

    Karim, Aidah Abdul; Shah, Parilah M.; Din, Rosseni; Ahmad, Mazalah; Lubis, Maimun Aqhsa

    2014-01-01

    This study explored the psychometric properties of a locally developed information skills test for youth students in Malaysia using Rasch analysis. The test was a combination of 24 structured and multiple choice items with a 4-point grading scale. The test was administered to 72 technical college students and 139 secondary school students. The…

  17. CDA (Child Development Associate) Instructional Materials. Assessing Competency: Tests for CDA Competencies (Experimental Edition). Book 7.

    ERIC Educational Resources Information Center

    Hotvedt, Kathleen J.; Hotvedt, Martyn O.

    This book of tests is designed to assess the competencies of the Child Development Associate (CDA) trainee: both what the trainee knows and how well the trainee works with children. The tests are designed as posttests to be administered after the trainee's completion of the relevant learning module. Each test consists of multiple choice questions,…

  18. A Teacher's Dream Come True - A Simple Program for Writing Tests.

    ERIC Educational Resources Information Center

    Vittitoe, Ted W.; Bradley, James V.

    1984-01-01

    Describes a test writing program for a 48K memory Apple microcomputer with lower-case capability. The program permits the production of any number of different tests and also different forms of the same multiple-choice or essay test. (JN)

  19. Regulating Accuracy on University Tests with the Plurality Option

    ERIC Educational Resources Information Center

    Higham, Philip A.

    2013-01-01

    A single experiment is reported in which introductory psychology students were administered a multiple-choice test on psychology with either 4 (n = 78) or 5 alternatives (n = 92) prior to any lectures being delivered. Two answers were generated for each question: a small answer consisting of their favorite alternative, and a large answer…

  20. Use of the Coombs Elimination Procedure in Classroom Tests.

    ERIC Educational Resources Information Center

    Bradbard, David A.; Green, Samuel B.

    1986-01-01

    The effectiveness of the Coombs elimination procedure was evaluated with 29 college students enrolled in a statistics course. Five multiple-choice tests were employed and scored using the Coombs procedure. Results suggest that the Coombs procedure decreased guessing, and this effect increased over the grading period. (Author/LMO)

  1. Project Physics Tests 2, Motion in the Heavens.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 2 are presented in this booklet. Included are 70 multiple-choice and 22 problem-and-essay questions. Concepts of motion in the heavens are examined for planetary motions, heliocentric theory, forces exerted on the planets, Kepler's laws, gravitational force, Galileo's work, satellite orbits, Jupiter's…

  2. Test Anxiety and the Immediate Feedback Assessment Technique

    ERIC Educational Resources Information Center

    DiBattista, David; Gosse, Leanne

    2006-01-01

    The authors examined the relationship between the reactions of undergraduate students to using the Immediate Feedback Assessment Technique (IFAT), an answer form that provides immediate feedback on multiple-choice questions, for the first time on a major examination and their levels of test anxiety and trait anxiety. They also assessed whether…

  3. Project Physics Tests 5, Models of the Atom.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 5 are presented in this booklet. Included are 70 multiple-choice and 23 problem-and-essay questions. Concepts of atomic model are examined on aspects of relativistic corrections, electron emission, photoelectric effects, Compton effect, quantum theories, electrolysis experiments, atomic number and mass,…

  4. Evaluation of an Interactive Tutorial for Teaching Hypothesis Testing Concepts

    ERIC Educational Resources Information Center

    Aberson, Christopher L.; Berger, Dale E.; Healy, Michael R.; Romero, Victoria L.

    2003-01-01

    In this article, we describe and evaluate a Web-based interactive tutorial used to present hypothesis testing concepts. The tutorial includes multiple-choice questions with feedback, an interactive applet that allows students to draw samples and evaluate null hypotheses, and follow-up questions suitable for grading. Students either used the…

  5. Advanced Marketing Core Curriculum. Test Items and Assessment Techniques.

    ERIC Educational Resources Information Center

    Smith, Clifton L.; And Others

    This document contains duties and tasks, multiple-choice test items, and other assessment techniques for Missouri's advanced marketing core curriculum. The core curriculum begins with a list of 13 suggested textbook resources. Next, nine duties with their associated tasks are given. Under each task appears one or more citations to appropriate…

  6. Fundamentals of Marketing Core Curriculum. Test Items and Assessment Techniques.

    ERIC Educational Resources Information Center

    Smith, Clifton L.; And Others

    This document contains multiple choice test items and assessment techniques for Missouri's fundamentals of marketing core curriculum. The core curriculum is divided into these nine occupational duties: (1) communications in marketing; (2) economics and marketing; (3) employment and advancement; (4) human relations in marketing; (5) marketing…

  7. The Analysis of Dichotomous Test Data Using Nonmetric Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Koch, William R.

    The technique of nonmetric multidimensional scaling (MDS) was applied to real item response data obtained from a multiple-choice achievement test of unknown dimensionality. The goal was to classify the 50 items into the various subtests from which they were drawn originally, the latter being unknown to the investigator. Issues addressed in the…
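
    As a rough illustration of the kind of analysis described here (not the study's actual procedure), the sketch below builds an item-by-item dissimilarity matrix from a simulated dichotomous response matrix and embeds the items with scikit-learn's nonmetric MDS. The Jaccard dissimilarity, the simulated data, and the two-dimensional solution are assumptions made only for the example.

```python
# Minimal sketch: nonmetric MDS on item-item dissimilarities derived from
# a dichotomous (0/1) response matrix. Dissimilarity metric and dimensionality
# are illustrative choices, not those of the cited study.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(200, 50))   # 200 examinees x 50 items, simulated

# Item-by-item dissimilarity: rows of the transposed matrix are items.
item_dissim = squareform(pdist(responses.T, metric="jaccard"))

# Nonmetric (ordinal) MDS on the precomputed dissimilarity matrix.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0, n_init=4)
item_coords = mds.fit_transform(item_dissim)     # 50 x 2 configuration

print(item_coords[:5])  # coordinates could then be clustered into candidate subtests
```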

  8. Project Physics Tests 3, The Triumph of Mechanics.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 3 are presented in this booklet. Included are 70 multiple-choice and 20 problem-and-essay questions. Concepts of mechanics are examined on energy, momentum, kinetic theory of gases, pulse analyses, "heat death," water waves, power, conservation laws, normal distribution, thermodynamic laws, and wave…

  9. Percentile Norms for the AAHPER Cooperative Physical Education Tests. Research Report.

    ERIC Educational Resources Information Center

    Moodie, Allan G.

    Percentile scores for Vancouver students in grades 9, 10, 11 and 12 on the AAHPER Cooperative Physical Education Tests are presented. Two of the six forms of the tests were used in these administrations. Every form consists of 60 multiple-choice questions to be completed in 40 minutes. A single score, based on the number of questions answered…

  10. The Performance of IRT Model Selection Methods with Mixed-Format Tests

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2012-01-01

    When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…

  11. Learning From Tests: Facilitation of Delayed Recall by Initial Recognition Alternatives.

    ERIC Educational Resources Information Center

    Whitten, William B., II; Leonard, Janet Mauriello

    1980-01-01

    Two experiments were designed to determine the effects of multiple-choice recognition test alternatives on subsequent memory for the correct answers. Results of both experiments are interpreted as demonstrations of the principle that long-term retention is facilitated when memory evaluation occurs during initial recognition tests. (Author/RD)

  12. Teacher-made Test Items in American History: Emphasis Junior High School. Bulletin Number 40.

    ERIC Educational Resources Information Center

    Kurfman, Dana

    Designed originally for use in junior-high-school classes, this bulletin provides an extensive file of 420 multiple-choice test questions in American history. The test items are intended to measure substantive understandings as well as such abilities as interpretation, analysis, synthesis, evaluation, and application. The initial questions were…

  13. The Development and Analysis of a Grade Eight Physical Science Test.

    ERIC Educational Resources Information Center

    Singh, Balwant; And Others

    This 46-question multiple-choice test deals with the physical and chemical properties of matter, wave motion and types of energy, simple machines, equipment safety and measurement. The test is meant for administration to grade 8 students before and after instruction. Item analysis of the pre/post data is included, as are reliability estimates…

  14. Developing a Test of Pragmatics of Japanese as a Foreign Language

    ERIC Educational Resources Information Center

    Itomitsu, Masayuki

    2009-01-01

    This dissertation reports development and validation studies of a Web-based standardized test of Japanese as a foreign language (JFL), designed to measure learners' off-line grammatical and pragmatic knowledge in multiple-choice format. Targeting Japanese majors in the U.S. universities and colleges, the test is designed to explore possible…

  15. Australian Chemistry Test Item Bank: Years 11 and 12. Volume 2.

    ERIC Educational Resources Information Center

    Commons, C., Ed.; Martin, P., Ed.

    The second volume of the Australian Chemistry Test Item Bank, consisting of two volumes, contains nearly 2000 multiple-choice items related to the chemistry taught in Year 11 and Year 12 courses in Australia. Items which were written during 1979 and 1980 were initially published in the "ACER Chemistry Test Item Collection" and in the "ACER…

  16. The Disaggregation of Value-Added Test Scores to Assess Learning Outcomes in Economics Courses

    ERIC Educational Resources Information Center

    Walstad, William B.; Wagner, Jamie

    2016-01-01

    This study disaggregates posttest, pretest, and value-added or difference scores in economics into four types of economic learning: positive, retained, negative, and zero. The types are derived from patterns of student responses to individual items on a multiple-choice test. The micro and macro data from the "Test of Understanding in College…
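
    Reading the four learning types as the natural pretest/posttest response patterns (my reading, since the abstract is truncated before defining them), a per-item classification might look like the following sketch.

```python
# Hedged sketch: classify each student-item pair into one of the four learning
# types named in the abstract, assuming the usual pretest/posttest mapping:
#   positive = incorrect on pretest, correct on posttest
#   retained = correct on both
#   negative = correct on pretest, incorrect on posttest
#   zero     = incorrect on both
def learning_type(pre_correct: bool, post_correct: bool) -> str:
    if not pre_correct and post_correct:
        return "positive"
    if pre_correct and post_correct:
        return "retained"
    if pre_correct and not post_correct:
        return "negative"
    return "zero"

# Example: one student's responses across a 4-item test.
pre  = [0, 1, 1, 0]
post = [1, 1, 0, 0]
print([learning_type(bool(p), bool(q)) for p, q in zip(pre, post)])
# ['positive', 'retained', 'negative', 'zero']
```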

  17. Cue-Free Computerized Interactive Tests--Computer Emulation of Oral Examinations.

    ERIC Educational Resources Information Center

    Anbar, Michael

    This discussion of the use of microcomputer software in a medical school class for the purposes of emulating oral examinations begins by stating the three major goals of testing in medical school. The limitations of multiple choice tests and oral examinations are then discussed, and the use and suitability of computers to administer and to avoid…

  18. Dividing the Force Concept Inventory into Two Equivalent Half-Length Tests

    ERIC Educational Resources Information Center

    Han, Jing; Bao, Lei; Chen, Li; Cai, Tianfang; Pi, Yuan; Zhou, Shaona; Tu, Yan; Koenig, Kathleen

    2015-01-01

    The Force Concept Inventory (FCI) is a 30-question multiple-choice assessment that has been a building block for much of the physics education research done today. In practice, there are often concerns regarding the length of the test and possible test-retest effects. Since many studies in the literature use the mean score of the FCI as the…

  19. Measuring Student Learning Using Initial and Final Concept Test in an STEM Course

    ERIC Educational Resources Information Center

    Kaw, Autar; Yalcin, Ali

    2012-01-01

    Effective assessment is a cornerstone in measuring student learning in higher education. For a course in Numerical Methods, a concept test was used as an assessment tool to measure student learning and its improvement during the course. The concept test comprised 16 multiple choice questions and was given in the beginning and end of the class for…

  20. Comparisons among Designs for Equating Mixed-Format Tests in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Kim, Sooyeon; Walker, Michael E.; McHale, Frederick

    2010-01-01

    In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…

  1. Integrated Testlets: A New Form of Expert-Student Collaborative Testing

    ERIC Educational Resources Information Center

    Shiell, Ralph C.; Slepkov, Aaron D.

    2015-01-01

    Integrated testlets are a new assessment tool that encompasses the procedural benefits of multiple-choice testing, the pedagogical advantages of free-response-based tests, and the collaborative aspects of a viva voce or defence examination format. The result is a robust assessment tool that provides a significant formative aspect for students.…

  2. An Investigation of the Relationship between Item Arrangement and Test Performance.

    ERIC Educational Resources Information Center

    Chissom, Brad; Chukabarah, Prince C. O.

    The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…

  3. Performance-Based Assessment of Software Skills Proficiency: A Demonstration of the "Judd Tests."

    ERIC Educational Resources Information Center

    Roberts, David C.

    The differences between multiple-choice, simulated, and concurrent tests of software-skills proficiency are discussed. For three basic human-resource functions, the advantages of concurrent tests (i.e., those that use the actual application software) include true performance-based assessment, unconstrained response alternatives, and increased job…

  4. High-Stakes Testing for Students with Mathematics Difficulty: Response Format Effects in Mathematics Problem Solving

    ERIC Educational Resources Information Center

    Powell, Sarah R.

    2012-01-01

    Students with disabilities are frequently granted accommodations for high-stakes standardized tests to provide them an opportunity to demonstrate their academic knowledge without interference from their disability. One type of possible accommodation, test response format, concerns whether students respond in multiple-choice or constructed-response…

  5. Inferring Cross Sections of 3D Objects: A New Spatial Thinking Test

    ERIC Educational Resources Information Center

    Cohen, Cheryl A.; Hegarty, Mary

    2012-01-01

    A new spatial ability test was administered online to 223 undergraduate students enrolled in introductory science courses. The 30-item multiple choice test measures individual differences in ability to identify the two-dimensional cross section of a three-dimensional geometric solid, a skill that has been identified as important in science,…

  6. Delaware Student Testing Program Item Sampler: Released Items for Social Studies, Grades 8 and 11.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    The Delaware Student Testing Program (DSTP) is designed to assess progress toward the Delaware Content Standards. Every year a certain number of items are removed from the test and then selected for public release. This booklet contains multiple-choice and short-answer (constructed response) items released from the 2000 or 2001 administration of…

  7. Changing World Patterns of Machine-Scored Objective Testing: The Expected Impact of the Multi-Digit Method.

    ERIC Educational Resources Information Center

    Anderson, Paul S.; Saliba, Alcyone

    The use of optical scanners and computers in educational testing is common where objective testing methods (such as true-false, matching, and multiple-choice items) are well-established means of evaluating educational achievement. Where non-objective testing methods (such as fill-in-the-blank, short-answer, and essay items) have been more common,…

  8. Investigating the Comparability of School Scores across Test Forms that Are Not Parallel. Technical Guidelines for Performance Assessment.

    ERIC Educational Resources Information Center

    Fitzpatrick, Anne R.

    This study, one of a series designed to answer practical questions about performance based assessment, examined the comparability of school scores on short, nonparallel test forms. The data were obtained from mathematics tests with both multiple choice (MC) and performance assessment (PA) items. The tests were administered in a statewide testing…

  9. Effects of an Oral Testing Accommodation on the Mathematics Performance of Secondary Students with and without Learning Disabilities

    ERIC Educational Resources Information Center

    Elbaum, Batya

    2007-01-01

    This study compared the performance of students with and without learning disabilities (LD) on a mathematics test using a standard administration procedure and a read-aloud accommodation. Analyses were conducted on the test scores of 625 middle and high school students (n = 388 with LD) on two equivalent 30-item multiple-choice tests. Whereas mean…

  10. Computer-Managed Instruction in the Navy: IV. The Effects of Test Item Format on Learning and Knowledge Retention.

    ERIC Educational Resources Information Center

    Lockhart, Kathleen A.; And Others

    The relative effectiveness of multiple-choice (MC) and constructed-response (CR) test formats in computer-managed instruction (CMI) was compared using four test groups of 30 trainees each who were assigned nonsystematically from the basics course at the Propulsion Engineering School, Great Lakes Naval Training Center. Group A took module tests in…

  11. Development and Application of a Three-Tier Diagnostic Test to Assess Secondary Students' Understanding of Waves

    ERIC Educational Resources Information Center

    Caleon, Imelda; Subramaniam, R.

    2010-01-01

    This study focused on the development and application of a three-tier multiple-choice diagnostic test (or three-tier test) on the nature and propagation of waves. A question in a three-tier test comprises the "content tier", which measures content knowledge; the "reason tier", which measures explanatory knowledge; and the "confidence tier", which…
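
    For readers unfamiliar with the format, a three-tier item as described here pairs a content question with a reason question and a confidence rating. The sketch below is a hypothetical data-structure illustration; the field names and the interpretation rule are assumptions of mine, not the authors' instrument.

```python
# Hypothetical sketch of a three-tier diagnostic response: content answer,
# reason answer, and confidence rating. The classification rule is one common
# reading of such tests, included only as an illustrative assumption.
from dataclasses import dataclass

@dataclass
class ThreeTierResponse:
    content_correct: bool   # tier 1: answer to the content question
    reason_correct: bool    # tier 2: answer to the reason (explanation) question
    confident: bool         # tier 3: self-reported confidence

    def classification(self) -> str:
        if self.content_correct and self.reason_correct:
            return "scientific conception" if self.confident else "correct but unsure"
        return "misconception" if self.confident else "lack of knowledge"

print(ThreeTierResponse(content_correct=False, reason_correct=False, confident=True).classification())
# misconception
```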

  12. Scratching Where They Itch: Evaluation of Feedback on a Diagnostic English Grammar Test for Taiwanese University Students

    ERIC Educational Resources Information Center

    Yin, Muchun; Sims, James; Cothran, Daniel

    2012-01-01

    Feedback to the test taker is a defining characteristic of diagnostic language testing (Alderson, 2005). This article reports on a study that investigated how much and in what ways students at a Taiwan university perceived the feedback to be useful on an online multiple-choice diagnostic English grammar test, both in general and by students of…

  13. National Conference on Critical Issues in Competency-Based Testing for Vocational-Technical Education (Nashville, Tennessee, April 11-13, 1988). Conference Notebook.

    ERIC Educational Resources Information Center

    Vocational Technical Education Consortium of States, Decatur, GA.

    This notebook contains the following conference presentations: "Identifying and Validating Task Lists by Business and Industry for Test/Test Item Development" (Charles Losh); "Conducting a Task Analysis for Competency-Based Test Item Development" (Brenda Hattaway); "Writing and Reviewing Test Items: Multiple Choice, Matching, Performance" (Ora…

  14. State Test Programs Mushroom as NCLB Mandate Kicks in: Nearly Half of States Are Expanding Their Testing Programs to Additional Grades This School Year to Comply with the Federal No Child Left Behind Act

    ERIC Educational Resources Information Center

    Olson, Lynn

    2005-01-01

    Twenty-three states are expanding their testing programs to additional grades this school year to comply with the federal No Child Left Behind Act. In devising the new tests, most states have defied predictions and chosen to go beyond multiple-choice items, by including questions that ask students to construct their own responses. But many state…

  15. The Effect of Differential Weighting of Individual Item Responses on the Predictive Validity and Reliability of an Aptitude Test.

    ERIC Educational Resources Information Center

    Sabers, Darrell L.; White, Gordon W.

    A procedure for scoring multiple-choice tests by assigning different weights to every option of a test item is investigated. The weighting method used was based on that proposed by Davis, which involves taking the upper and lower 27% of a sample, according to some criterion measure, and using the percentages of these groups marking an item option…
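
    The abstract is cut off before giving the weighting formula, so the following is a loose illustration of the idea rather than Davis's published method: each option is weighted by the difference between the endorsement percentages of the upper-27% and lower-27% criterion groups, and an examinee's score is the sum of the weights of the options chosen. All function names and the exact weighting rule are assumptions for the example.

```python
# Loose illustration (assumed reading, not Davis's actual formula): weight each
# option by the upper-27% minus lower-27% endorsement percentage, then score
# examinees as the sum of the weights of their chosen options.
import numpy as np

def option_weights(choices: np.ndarray, criterion: np.ndarray, n_options: int) -> np.ndarray:
    """choices: (n_examinees, n_items) option indices; criterion: (n_examinees,) scores."""
    n = len(criterion)
    k = max(1, int(round(0.27 * n)))
    order = np.argsort(criterion)
    lower, upper = order[:k], order[-k:]
    weights = np.zeros((choices.shape[1], n_options))
    for item in range(choices.shape[1]):
        for opt in range(n_options):
            p_upper = np.mean(choices[upper, item] == opt)
            p_lower = np.mean(choices[lower, item] == opt)
            weights[item, opt] = 100 * (p_upper - p_lower)  # percentage-point difference
    return weights

def weighted_scores(choices: np.ndarray, weights: np.ndarray) -> np.ndarray:
    items = np.arange(choices.shape[1])
    return weights[items, choices].sum(axis=1)

rng = np.random.default_rng(1)
choices = rng.integers(0, 4, size=(300, 20))   # 300 examinees, 20 four-option items
criterion = rng.normal(size=300)               # external criterion measure
w = option_weights(choices, criterion, n_options=4)
print(weighted_scores(choices, w)[:5])
```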

  16. The Design and Development of a Context-Rich, Photo-Based Online Testing to Assess Students' Science Learning

    ERIC Educational Resources Information Center

    Lin, Min-Jin; Guo, Chorng-Jee; Hsu, Chia-Er

    2011-01-01

    This study designed and developed a CP-MCT (content-rich, photo-based multiple choice online test) to assess whether college students can apply the basic light concept to interpret daily light phenomena. One hundred college students volunteered to take the CP-MCT, and the results were statistically analyzed by applying t-test or ANOVA (Analysis of…

  17. Relationships of Teacher-Assigned Grades in High School Chemistry to Taxonomy-Type Objective Test Scores.

    ERIC Educational Resources Information Center

    Even, Alexander

    Reported is a study designed (1) to investigate the relationship between teacher-assigned chemistry grades and the scores obtained on a multiple-choice chemistry test built on taxonomic principles, and (2) to compare the contributions of various predictor variables to the explainable variance of the grades and the total test scores. The sample…

  18. Test-Enhanced Learning in a Middle School Science Classroom: The Effects of Quiz Frequency and Placement

    ERIC Educational Resources Information Center

    McDaniel, Mark A.; Agarwal, Pooja K.; Huelser, Barbie J.; McDermott, Kathleen B.; Roediger, Henry L., III

    2011-01-01

    Typically, teachers use tests to evaluate students' knowledge acquisition. In a novel experimental study, we examined whether low-stakes testing ("quizzing") can be used to foster students' learning of course content in 8th grade science classes. Students received multiple-choice quizzes (with feedback); in the quizzes, some target content that…

  19. Entwicklung eines Einstufungstests fuer Deutsch als Fremdsprache an der Universitaet Bonn (Developing a Placement Test for German as a Foreign Language at the University of Bonn).

    ERIC Educational Resources Information Center

    Kummer, Manfred; And Others

    1978-01-01

    Discusses various test types, and specifically the placement test for German as a foreign language at Bonn University, describing the segments: multiple-choice questions and "fill-in" dictations based on given texts. Test content varies according to students' nationality. Grading procedures are also described. (IFS/WGA)

  20. The Impact of Discourse Features of Science Test Items on ELL Performance

    ERIC Educational Resources Information Center

    Kachchaf, Rachel; Noble, Tracy; Rosebery, Ann; Wang, Yang; Warren, Beth; O'Connor, Mary Catherine

    2014-01-01

    Most research on linguistic features of test items negatively impacting English language learners' (ELLs') performance has focused on lexical and syntactic features, rather than discourse features that operate at the level of the whole item. This mixed-methods study identified two discourse features in 162 multiple-choice items on a standardized…

  1. The Effect of Topic Interest and Gender on Reading Test Types in a Second Language

    ERIC Educational Resources Information Center

    Ay, Sila; Bartan, Ozgur Sen

    2012-01-01

    This study explores how readers' interest, gender, and test types (multiple-choice questions, Yes/No questions, and short-answer formats) affect second language reading comprehension in three different levels and five different categories of topics. A questionnaire was administered to 168 Turkish EFL students to find out the gender-oriented topic…

  2. V-TECS Criterion-Referenced Test Item Bank for Radiologic Technology Occupations.

    ERIC Educational Resources Information Center

    Reneau, Fred; And Others

    This Vocational-Technical Education Consortium of States (V-TECS) criterion-referenced test item bank provides 696 multiple-choice items and 33 matching items for radiologic technology occupations. These job titles are included: radiologic technologist, chief; radiologic technologist; nuclear medicine technologist; radiation therapy technologist;…

  3. The Development and Analysis of a Pictorial Physics Test for Grades Five and Six.

    ERIC Educational Resources Information Center

    Kernaghan, Joseph Barron

    Reported is a study to measure the degree to which children have developed an understanding of basic physics. The instruments contained items in a multiple choice format with a stem picture depicting an actual happening and three choice pictures, one of which was true in relation to the stem. Items for the test were drawn from basic science…

  4. V-TECS Criterion-Referenced Test Item Bank for Ornamental Horticulture Production Occupations.

    ERIC Educational Resources Information Center

    Reneau, Fred; And Others

    This Vocational-Technical Education Consortium of States (V-TECS) criterion-referenced test item bank provides 325 multiple-choice items and 18 matching items for ornamental horticulture production occupations. Job titles covered are specialty grower, plant propagator, and horticultural worker I and II. The following information is provided in…

  5. Effects of Test Format, Self Concept and Anxiety on Item Response Changing Behaviour

    ERIC Educational Resources Information Center

    Afolabi, E. R. I.

    2007-01-01

    The study examined the effects of item format, self-concept and anxiety on response-changing behaviour. Four hundred undergraduate students taking a counseling psychology course at a Nigerian university participated in the study. Students' answers in multiple-choice and true-false formats of an achievement test were observed for response…

  6. Elimination vs. Best Answer Response Modes for M-C Tests.

    ERIC Educational Resources Information Center

    Collet, LeVerne S.

    A critical review of systems for scoring multiple choice tests is presented, and the superiority of a system based upon the elimination method over one based upon the best answer mode is hypothesized. This is discussed in terms of the capacity of the mode to reveal the relationships among decoy options and the effects of partial information,…

  7. Algorithms for Developing Test Questions from Sentences in Instructional Materials: An Extension of an Earlier Study.

    ERIC Educational Resources Information Center

    Roid, Gale H.; And Others

    An earlier study was extended and replicated to examine the feasibility of generating multiple-choice test questions by transforming sentences from prose instructional material. In the first study, a computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were…

  8. The Development of Test Items for the Integrated Science Processes: Formulating Hypotheses and Defining Operationally.

    ERIC Educational Resources Information Center

    Fyffe, Darrel Wayne

    This study focused on developing group test items which measure the science process skills, "Formulating Hypotheses" and "Defining Operationally". Thirty-six multiple choice questions were developed by using behavioral objectives of "Science - A Process Approach" materials. A group of 56 students who had used these materials were given the…

  9. Building the BIKE: Development and Testing of the Biotechnology Instrument for Knowledge Elicitation (BIKE)

    ERIC Educational Resources Information Center

    Witzig, Stephen B.; Rebello, Carina M.; Siegel, Marcelle A.; Freyermuth, Sharyn K.; Izci, Kemal; McClure, Bruce

    2014-01-01

    Identifying students' conceptual scientific understanding is difficult if the appropriate tools are not available for educators. Concept inventories have become a popular tool to assess student understanding; however, traditionally, they are multiple choice tests. International science education standard documents advocate that assessments…

  10. A Study of Teaching and Testing Strategies for a Required Statistics Course for Undergraduate Business Students

    ERIC Educational Resources Information Center

    Lawrence, John A.; Singhania, Ram P.

    2004-01-01

    In this investigation of student performance in introductory business statistics classes, the authors performed two separate controlled studies to compare performance in (a) distance-learning versus traditionally delivered courses and (b) multiple choice versus problem-solving tests. Results of the first study, based on the authors' several…

  11. Facilitating Recognition Memory: The Use of Distinctive Contexts in Study Materials and Tests.

    ERIC Educational Resources Information Center

    Marlin, Carol A.; And Others

    The effects of distinctive background settings on children's recognition memory for subjects and objects of related sentences were examined. As a follow-up to a study by Levin, Ghatala, and Truman (1979), the effects of presenting distinctive background contexts in sentences and multiple-choice tests were separated from the effects of providing…

  12. Separate & Unequal: Use Test Scores To Improve Education--Not To Segregate Poor Learners.

    ERIC Educational Resources Information Center

    Burley, Hansel

    2001-01-01

    Disappointing high-stakes test results matter far less than the type of future citizens that schools produce. Citizenship values (teamwork, leadership, and neighborliness) are not assessed well by multiple-choice exams. Poor performers should not be segregated, data should be reinterpreted, and remediation should stress tutoring interventions, not…

  13. Reliability for the Greek Version of the "Test of Everyday Reasoning (TER)"

    ERIC Educational Resources Information Center

    Malamitsa, Katerina; Kasoutas, Michael; Kokkotas, Panagiotis

    2008-01-01

    The core critical thinking skills, identified in "The Delphi Report" as essential elements for workplace and educational success, are targeted in a standardized 35-item multiple-choice assessment tool entitled the "Test of Everyday Reasoning (TER)", which is designed to provide a representation of a person's overall critical thinking ability. In…

  14. Creative Math Assessment: How the "Fizz & Martina Approach" Helps Prepare Students for the Math Assessment Tests.

    ERIC Educational Resources Information Center

    Vaille, John; Kushins, Harold

    Many school districts around the nation are re-evaluating how they measure student performance in mathematics. Calls have been made for alternative, authentic assessment tools that go beyond simple, and widely ineffective, multiple-choice tests. This book examines how the Fizz & Martina math video series provides students with hands-on practice…

  15. Empirical Validation Studies of Alternate Response Modes for Writing Assessment. Test Design Project.

    ERIC Educational Resources Information Center

    Capell, Frank J.; Quellmalz, Edys S.

    In the area of large scale assessment, there is increasing interest in the measurement of students' written performance. At issue is whether the task demands in writing assessment can be simplified to involve the production of paragraph-length writing samples and/or multiple choice testing, rather than full-length essays. This study considers data…

  16. Examining the Predictive Validity of a Screening Test for Court Interpreters

    ERIC Educational Resources Information Center

    Stansfield, Charles W.; Hewitt, William E.

    2005-01-01

    The United States Court Interpreters Act (US Congress, 1978) requires that interpreters in US federal courts be certified through a criterion-referenced performance test. The Federal Court Interpreter Certification Examination (FCICE) is a two-phase certification battery for federal court interpreters. Phase I is a multiple-choice Written…

  17. Computerized Classification Testing under the One-Parameter Logistic Response Model with Ability-Based Guessing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Huang, Sheng-Yun

    2011-01-01

    The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…

  18. Components of Spatial Thinking: Evidence from a Spatial Thinking Ability Test

    ERIC Educational Resources Information Center

    Lee, Jongwon; Bednarz, Robert

    2012-01-01

    This article introduces the development and validation of the spatial thinking ability test (STAT). The STAT consists of sixteen multiple-choice questions of eight types. The STAT was validated by administering it to a sample of 532 junior high, high school, and university students. Factor analysis using principal components extraction was applied…

  19. Relationships between Inquiry-Based Teaching and Physical Science Standardized Test Scores

    ERIC Educational Resources Information Center

    Tretter, Thomas R.; Jones, M. Gail

    2003-01-01

    This exploratory case study investigates relationships between use of an inquiry-based instructional style and student scores on standardized multiple-choice tests. The study takes the form of a case study of physical science classes taught by one of the authors over a span of four school years. The first 2 years were taught using traditional…

  20. The American Way: Rules, Laws and You. An Assessment of Law-Related Competencies. Primary, Intermediate and Secondary Levels. Tests and Supporting Materials.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Education, Harrisburg.

    The Assessment of Law-Related Competencies includes tests for three different levels: primary, covering grades K-4; intermediate, covering grades 5-8; and secondary, covering grades 9-12. Most of the items in the three multiple choice tests are intended to measure cognitive knowledge of various law-related concepts, including: nature and types of…

  1. A Three-Tier Diagnostic Test to Assess Pre-Service Teachers' Misconceptions about Global Warming, Greenhouse Effect, Ozone Layer Depletion, and Acid Rain

    ERIC Educational Resources Information Center

    Arslan, Harika Ozge; Cigdemoglu, Ceyhan; Moseley, Christine

    2012-01-01

    This study describes the development and validation of a three-tier multiple-choice diagnostic test, the atmosphere-related environmental problems diagnostic test (AREPDiT), to reveal common misconceptions of global warming (GW), greenhouse effect (GE), ozone layer depletion (OLD), and acid rain (AR). The development of a two-tier diagnostic test…

  2. Immediate vs. Delayed Feedback in a Computer-Managed Test: Effects on Long-Term Retention. Technical Report, March 1976-August 1976.

    ERIC Educational Resources Information Center

    Sturges, Persis T.

    This experiment was designed to test the effect of immediate and delayed feedback on retention of learning in an educational situation. Four groups of college undergraduates took a multiple-choice computer-managed test. Three of these groups received informative feedback (the entire item with the correct answer identified) either: (1) immediately…

  3. Measuring Gains in Critical Thinking in Food Science and Human Nutrition Courses: The Cornell Critical Thinking Test, Problem-Based Learning Activities, and Student Journal Entries

    ERIC Educational Resources Information Center

    Iwaoka, Wayne T.; Li, Yong; Rhee, Walter Y.

    2010-01-01

    The Cornell Critical Thinking Test (CCTT) is one of the many multiple-choice tests with validated questions that have been reported to measure general critical thinking (CT) ability. One of the IFT Education Standards for undergraduate degrees in Food Science is the emphasis on the development of critical thinking. While this skill is easy to list…

  4. Controlling Guessing Bias in the Dichotomous Rasch Model Applied to a Large-Scale, Vertically Scaled Testing Program

    ERIC Educational Resources Information Center

    Andrich, David; Marais, Ida; Humphry, Stephen Mark

    2016-01-01

    Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…

  5. A Comparison of Three- and Four-Option English Tests for University Entrance Selection Purposes in Japan

    ERIC Educational Resources Information Center

    Shizuka, Tetsuhito; Takeuchi, Osamu; Yashima, Tomoko; Yoshizawa, Kiyomi

    2006-01-01

    The present study investigated the effects of reducing the number of options per item on psychometric characteristics of a Japanese EFL university entrance examination. A four-option multiple-choice reading test used for entrance screening at a university in Japan was later converted to a three-option version by eliminating the least frequently…

  6. The Nature of Field Independence: Percentiles and Factor Structure of the Finding Embedded Figures Test--Research Edition.

    ERIC Educational Resources Information Center

    Melancon, Janet G.; Thompson, Bruce

    This study investigated the nature of field independence by exploring the structure underlying responses to Forms A and B of a multiple-choice measure of field-independence, the Finding Embedded Figures Test (FEFT). Subjects included 302 students (52.7% male) enrolled in mathematics courses at a university in the southern United States. Students…

  7. Using Two-Tier Test to Identify Primary Students' Conceptual Understanding and Alternative Conceptions in Acid Base

    ERIC Educational Resources Information Center

    Bayrak, Beyza Karadeniz

    2013-01-01

    The purpose of this study was to identify primary students' conceptual understanding and alternative conceptions of acids and bases. For this purpose, a 15-item two-tier multiple-choice test was administered to 56 eighth-grade students in the spring semester of 2009-2010. Data for this study were collected using a conceptual understanding scale prepared to include…

  8. Using a Two-Tier Test to Assess Students' Understanding and Alternative Conceptions of Cyber Copyright Laws

    ERIC Educational Resources Information Center

    Chou, Chien; Chan, Pei-Shan; Wu, Huan-Chueh

    2007-01-01

    The purpose of this study is to explore students' understanding of cyber copyright laws. This study developed a two-tier test with 10 two-level multiple-choice questions. The first tier presented a real-case scenario and asked whether the conduct was acceptable, whereas the second tier provided reasons to justify the conduct. Students in Taiwan…

  9. Examen en Vue du Diplome Douzieme Annee, Langue et Litterature 30. Partie B: Lecture (Choix Multiples). Livret de Textes (Examination for the Twelfth Grade Diploma, Language and Literature 30. Part B: Reading--Multiple Choice. Readings Booklet). June 1988 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    As part of an examination required by the Alberta (Canada) Department of Education in order for 12th grade students to receive a diploma in French, this test, to be accompanied by the questions booklet, contains the reading selections portion of Part B, the language and literature component of the June 1988 tests. Representing the genres of…

  10. Providing Transparency and Credibility: The Selection of International Students for Australian Universities. An Examination of the Relationship between Scores in the International Student Admissions Test (ISAT), Final Year Academic Programs and an Australian University's Foundation Program

    ERIC Educational Resources Information Center

    Lai, Kelvin; Nankervis, Susan; Story, Margot; Hodgson, Wayne; Lewenberg, Michael; Ball, Marita MacMahon

    2008-01-01

    Throughout 2003-04 five cohorts of students in their final year of school studies in various Malaysian colleges and a group of students completing an Australian university foundation year in Malaysia sat the International Student Admissions Test (ISAT). The ISAT is a multiple-choice test of general academic abilities developed for students whose…

  11. SAT Wars: The Case for Test-Optional College Admissions

    ERIC Educational Resources Information Center

    Soares, Joseph A., Ed.

    2011-01-01

    What can a college admissions officer safely predict about the future of a 17-year-old? Are the best and the brightest students the ones who can check off the most correct boxes on a multiple-choice exam? Or are there better ways of measuring ability and promise? In this penetrating and revealing look at high-stakes standardized admissions tests,…

  12. Stereotype threat? Effects of inquiring about test takers' gender on conceptual test performance in physics

    NASA Astrophysics Data System (ADS)

    Maries, Alexandru; Singh, Chandralekha

    2015-12-01

    It has been found that activation of a stereotype, for example by indicating one's gender before a test, typically alters performance in a way consistent with the stereotype, an effect called "stereotype threat." On a standardized conceptual physics assessment, we found that asking test takers to indicate their gender right before taking the test did not deteriorate performance compared to an equivalent group who did not provide gender information. Although a statistically significant gender gap was present on the standardized test whether or not students indicated their gender, no gender gap was observed on the multiple-choice final exam students took, which included both quantitative and conceptual questions on similar topics.

  13. How Do Chinese ESL Learners Recognize English Words during a Reading Test? A Comparison with Romance-Language-Speaking ESL Learners

    ERIC Educational Resources Information Center

    Li, Hongli; Suen, Hoi K.

    2015-01-01

    This study examines how Chinese ESL learners recognize English words while responding to a multiple-choice reading test as compared to Romance-language-speaking ESL learners. Four adult Chinese ESL learners and three adult Romance-language-speaking ESL learners participated in a think-aloud study with the Michigan English Language Assessment…

  14. Use of the NBME Comprehensive Basic Science Examination as a Progress Test in the Preclerkship Curriculum of a New Medical School

    ERIC Educational Resources Information Center

    Johnson, Teresa R.; Khalil, Mohammed K.; Peppler, Richard D.; Davey, Diane D.; Kibble, Jonathan D.

    2014-01-01

    In the present study, we describe the innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) as a progress test during the preclerkship medical curriculum. The main aim of this study was to provide external validation of internally developed multiple-choice assessments in a new medical…

  15. Research and Teaching: Does the Color-Coding of Examination Versions Affect College Science Students' Test Performance? Countering Claims of Bias

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James; Elias, Janet Schexnayder

    2007-01-01

    To circumvent the problem of academic dishonesty through the mass administration of multiple-choice exams in college classrooms, a study was conducted from 2003 to 2005, in which multiple versions of the same examination were color coded during testing in a large-enrollment classroom. Instructors reported that this color-coded exam system appeared…

  16. Testing.

    ERIC Educational Resources Information Center

    Killoran, James, Ed.

    1984-01-01

    This journal issue addresses the issue of testing in the social studies classroom. The first article, "The Role of Testing" (Bragaw), focuses on the need for tests to reflect the objectives of the study completed. The varying functions of pop quizzes, weekly tests, and unit tests are explored. "Testing Thinking Processes" (Killoran, Zimmer, and…

  17. Can we improve on situational judgement tests?

    PubMed

    Affleck, P; Bowman, M; Wardman, M; Sinclair, S; Adams, R

    2016-01-15

    Situational judgement tests (SJTs) are multiple-choice psychological assessments that claim to measure professional attributes such as empathy, integrity, team involvement and resilience. One of their attractions is the ability to rank large numbers of candidates. Last year SJTs formed a major component (50% of the assessment marks) of the selection process for dental foundation training (DFT). However, it is not clear what SJTs are actually assessing. There is also the concern that applicants who have developed ethical reasoning skills may be disadvantaged by such tests. The DFT selection process needs to explicitly recognise the importance of ethical reasoning. PMID:26768457

  18. Virtual test: A student-centered software to measure student's critical thinking on human disease

    NASA Astrophysics Data System (ADS)

    Rusyati, Lilit; Firman, Harry

    2016-02-01

    The study "Virtual Test: A Student-Centered Software to Measure Student's Critical Thinking on Human Disease" is descriptive research. The background is importance of computer-based test that use element and sub element of critical thinking. Aim of this study is development of multiple choices to measure critical thinking that made by student-centered software. Instruments to collect data are (1) construct validity sheet by expert judge (lecturer and medical doctor) and professional judge (science teacher); and (2) test legibility sheet by science teacher and junior high school student. Participants consisted of science teacher, lecturer, and medical doctor as validator; and the students as respondent. Result of this study are describe about characteristic of virtual test that use to measure student's critical thinking on human disease, analyze result of legibility test by students and science teachers, analyze result of expert judgment by science teachers and medical doctor, and analyze result of trial test of virtual test at junior high school. Generally, result analysis shown characteristic of multiple choices to measure critical thinking was made by eight elements and 26 sub elements that developed by Inch et al.; complete by relevant information; and have validity and reliability more than "enough". Furthermore, specific characteristic of multiple choices to measure critical thinking are information in form science comic, table, figure, article, and video; correct structure of language; add source of citation; and question can guide student to critical thinking logically.

  19. Lucky Guess or Knowledge: A Cross-Sectional Study Using the Bland and Altman Analysis to Compare Confidence-Based Testing of Pharmacological Knowledge in 3rd and 5th Year Medical Students

    ERIC Educational Resources Information Center

    Kampmeyer, Daniela; Matthes, Jan; Herzig, Stefan

    2015-01-01

    Multiple-choice-questions are common in medical examinations, but guessing biases assessment results. Confidence-based-testing (CBT) integrates indicated confidence levels. It has been suggested that correctness of and confidence in an answer together indicate knowledge levels thus determining the quality of a resulting decision. We used a CBT…

  20. Simple model for multiple-choice collective decision making.

    PubMed

    Lee, Ching Hua; Lucas, Andrew

    2014-11-01

    We describe a simple model of heterogeneous, interacting agents making decisions between n≥2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E. We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism. PMID:25493831

  1. Switching Answers on Multiple-Choice Questions: Shrewdness or Shibboleth?

    ERIC Educational Resources Information Center

    Skinner, Nicholas F.

    1983-01-01

    Because of a belief that the alternatives they had chosen initially were probably correct, most subjects were reluctant to change answers, and, consequently, did so only when they were highly confident in the change. Results were that more than half the changes were correct. (RM)

  2. Simple model for multiple-choice collective decision making

    NASA Astrophysics Data System (ADS)

    Lee, Ching Hua; Lucas, Andrew

    2014-11-01

    We describe a simple model of heterogeneous, interacting agents making decisions between n ≥ 2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E. We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.

  3. Eternity's Sunrise and Other Multiple Choice Questions (Reading Professional).

    ERIC Educational Resources Information Center

    Neilsen, Lorri

    1992-01-01

    Asserts that William Blake's phrase, "eternity's sunrise," captures the essence of the teaching and learning enterprise. Discusses new horizons in education and teachers who are daily reinventing their teaching. Warns against adopting educational change to be fashionable. (PRA)

  4. Policy considerations based on a cost analysis of alternative test formats in large scale science assessments

    NASA Astrophysics Data System (ADS)

    Lawrenz, Frances; Huffman, Douglas; Welch, Wayne

    2000-08-01

    This article compares the costs of four assessment formats: multiple choice, open ended, laboratory station, and full investigation. The amount of time spent preparing the devices, developing scoring consistency for the devices, and scoring the devices was tracked as the devices were developed. These times are presented by individual item and by complete device. Times are also compared as if 1,000 students completed each assessment. Finally, the times are converted into cost estimates by assuming a potential hourly wage. The data show that a multiple choice item costs the least; an open ended item costs approximately 80 times as much, a content station item 300 times as much, and a full investigation item 500 times as much. These very large discrepancies in costs are used as a basis to raise several policy issues related to the inclusion of alternative assessment formats in large scale science achievement testing.

  5. Wolf Testing: Open Source Testing Software

    NASA Astrophysics Data System (ADS)

    Braasch, P.; Gay, P. L.

    2004-12-01

    Wolf Testing is software for easily creating and editing exams. Wolf Testing allows the user to create an exam from a database of questions, view it on screen, and easily print it along with the corresponding answer guide. The questions can be multiple choice, short answer, long answer, or true and false varieties. This software can be accessed securely from any location, allowing the user to easily create exams from home. New questions, which can include associated pictures, can be added through a web-interface. After adding in questions, they can be edited, deleted, or duplicated into multiple versions. Long-term test creation is simplified, as you are able to quickly see what questions you have asked in the past and insert them, with or without editing, into future tests. All tests are archived in the database. Written in PHP and MySQL, this software can be installed on any UNIX / Linux platform, including Macintosh OS X. The secure interface keeps students out, and allows you to decide who can create tests and who can edit information already in the database. Tests can be output as either html with pictures or rich text without pictures, and there are plans to add PDF and MS Word formats as well. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing incentive to start this project, computers and resources to complete this project, and inspiration for the project's name. We would also like to thank Dr. Ronald Newburgh for his assistance in beta testing.

  6. Test item linguistic complexity and assessments for deaf students.

    PubMed

    Cawthon, Stephanie

    2011-01-01

    Linguistic complexity of test items is one test format element that has been studied in the context of struggling readers and their participation in paper-and-pencil tests. The present article presents findings from an exploratory study on the potential relationship between linguistic complexity and test performance for deaf readers. A total of 64 students completed 52 multiple-choice items, 32 in mathematics and 20 in reading. These items were coded for linguistic complexity components of vocabulary, syntax, and discourse. Mathematics items had higher linguistic complexity ratings than reading items, but there were no significant relationships between item linguistic complexity scores and student performance on the test items. The discussion addresses issues related to the subject area, student proficiency levels in the test content, factors to look for in determining a "linguistic complexity effect," and areas for further research in test item development and deaf students. PMID:21941876

  7. [Development of a proverb test for assessment of concrete thinking problems in schizophrenic patients].

    PubMed

    Barth, A; Küfferle, B

    2001-11-01

    Concretism is considered an important aspect of schizophrenic thought disorder. Traditionally it is measured by proverb interpretation, in which metaphoric proverbs are presented and the subject is asked to explain their meaning. Interpretations are recorded and scored for concretistic tendencies. However, this method has two problems: its reliability is doubtful and it is rather complicated to administer. In this paper, a new version of a multiple choice proverb test is presented which solves these problems in a reliable and economical manner. Using the new test, it has been shown that schizophrenic patients have greater deficits in proverb interpretation than depressive patients. PMID:11758092

  8. Duchenne Muscular Dystrophy: a Survey of Perspectives on Carrier Testing and Communication Within the Family.

    PubMed

    Hayes, Brenna; Hassed, Susan; Chaloner, Jae Lindsay; Aston, Christopher E; Guy, Carrie

    2016-06-01

    Carrier testing is widely available for multiple genetic conditions, and several professional organizations have created practice guidelines regarding appropriate clinical application and the testing of minors. Previous research has focused on carrier screening, predictive testing, and testing for X-linked conditions. However, family perspectives on carrier testing for X-linked lethal diseases have yet to be described. In this study, we explored communication within the family about carrier testing and the perspectives of mothers of sons with an X-linked lethal disease, Duchenne muscular dystrophy (DMD). Twenty-five mothers of sons with DMD participated in an anonymous online survey. Survey questions included multiple choice, Likert scale, and open ended, short answer questions. Analysis of the multiple choice and Likert scale questions revealed that most mothers preferred a gradual style of communication with their daughters regarding risk status. In addition, most participants reported having consulted with a genetic counselor and found it helpful. Comparisons between groups, analyzed using Fisher's exact tests, found no differences in preferred style due to mother's carrier status or having a daughter. Thematic analysis was conducted on responses to open ended questions. Themes identified included the impact of family implications, age and maturity, and a desire for autonomy regarding the decision to discuss and undergo carrier testing with at-risk daughters, particularly timing of these discussions. Implications for genetic counseling practice are discussed. PMID:26482744

  9. ML-PAT. Mohawk Language Picture Association Test.

    ERIC Educational Resources Information Center

    Cole, Glory; And Others

    This picture association test booklet for the Mohawk language has two parts. Part One contains 10 questions and Part Two contains 40 questions. Each item consists of a word in Mohawk and a number of drawings from which the learner is to choose the one that correctly represents the word. (AMH)

  10. Phonetic Intelligibility Testing in Adults with Down Syndrome

    PubMed Central

    Bunton, Kate; Leddy, Mark; Miller, Jon

    2009-01-01

    The purpose of the study was to document speech intelligibility deficits for a group of five adult males with Down syndrome and to use listener-based error profiles to identify the phonetic dimensions underlying reduced intelligibility. Phonetic error profiles were constructed for each speaker using the Kent, Weismer, Kent, and Rosenbek (1989) word intelligibility test. The test was designed to allow for identification of reasons for the intelligibility deficit, quantitative analyses at varied levels, and sensitivity to potential speech deficits across populations. Listener-generated profiles were calculated based on a multiple-choice task and a transcription task. The most disrupted phonetic features, across listening tasks, involved simplification of clusters in both word-initial and word-final position, and contrasts involving tongue posture, control, and timing (e.g., high-low vowel, front-back vowel, and place of articulation for stops and fricatives). Differences between speakers in the ranking of these phonetic features were found; however, the mean error proportion for the six most severely affected features correlated highly with the overall intelligibility score (0.88 for the multiple-choice task, 0.94 for the transcription task). The phonetic feature analyses are an index that may help clarify the suspected motor speech basis for the speech intelligibility deficits seen in adults with Down syndrome and may lead to improved speech management in these individuals. PMID:17692179

  11. The establishment of an achievement test for determination of primary teachers' knowledge level of earthquakes

    NASA Astrophysics Data System (ADS)

    Aydin, Süleyman; Haşiloǧlu, M. Akif; Kunduraci, Ayşe

    2016-04-01

    This study aimed to develop an academic achievement test to determine pre-service teachers' knowledge about earthquakes and ways of protecting oneself from them. The method followed the steps proposed by Webb (1994) for developing an academic achievement test for a unit. A multiple-choice test of 25 questions was prepared to measure pre-service teachers' knowledge of earthquakes and earthquake protection. The test was reviewed by six academics (one from the field of geography and five science educators) and two expert science teachers, and was then administered to 93 pre-service teachers studying in an elementary education department in the 2014-2015 academic year. Following the validity and reliability analyses, the final test consisted of 20 items. The Pearson product-moment split-half reliability coefficient was found to be 0.94; adjusted with the Spearman-Brown formula, the full-test reliability coefficient was 0.97.
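
    The reported move from a split-half coefficient of 0.94 to a full-test value of 0.97 is consistent with the standard Spearman-Brown correction for doubling test length; the calculation below is a reconstruction for clarity, not a formula quoted from the paper.

      r_{\text{full}} = \frac{2\,r_{\text{half}}}{1 + r_{\text{half}}} = \frac{2(0.94)}{1 + 0.94} \approx 0.97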

  12. Classification Accuracy of Mixed Format Tests: A Bi-Factor Item Response Theory Approach

    PubMed Central

    Wang, Wei; Drasgow, Fritz; Liu, Liwen

    2016-01-01

    Mixed format tests (e.g., a test consisting of multiple-choice [MC] items and constructed response [CR] items) have become increasingly popular. However, the latent structure of item pools consisting of the two formats is still equivocal. Moreover, the implications of this latent structure are unclear: For example, do constructed response items tap reasoning skills that cannot be assessed with multiple choice items? This study explored the dimensionality of mixed format tests by applying bi-factor models to 10 tests of various subjects from the College Board's Advanced Placement (AP) Program and compared the accuracy of scores based on the bi-factor analysis with scores derived from a unidimensional analysis. More importantly, this study focused on a practical and important question—classification accuracy of the overall grade on a mixed format test. Our findings revealed that the degree of multidimensionality resulting from the mixed item format varied from subject to subject, depending on the disattenuated correlation between scores from MC and CR subtests. Moreover, remarkably small decrements in classification accuracy were found for the unidimensional analysis when the disattenuated correlations exceeded 0.90. PMID:26973568
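
    For reference, one standard dichotomous bi-factor item response formulation of the kind referred to above has each item loading on a general factor plus a format-specific (MC or CR) factor; the exact models fitted in the study, including any polytomous treatment of CR items, are not given in the abstract.

      P\left(X_{j}=1 \mid \theta_{G}, \theta_{s(j)}\right) = \frac{1}{1 + \exp\left[-\left(a_{jG}\,\theta_{G} + a_{js}\,\theta_{s(j)} + d_{j}\right)\right]}

    Here \theta_{G} is the general ability, \theta_{s(j)} is the specific factor for item j's format, and a_{jG}, a_{js}, d_{j} are discrimination and intercept parameters.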

  13. Electricity and Magnetism Self-Testing and Test Construction Tool

    NASA Astrophysics Data System (ADS)

    Stewart, Gay; Stewart, John

    2011-04-01

    This talk presents an online resource for teaching and evaluating introductory electricity and magnetism classes. The resource contains a library of highly characterized, multiple-choice, conceptual and quantitative electricity and magnetism problems and solutions all linked to a free online textbook. The library contains over 1000 classroom tested problems. Each problem is characterized by the complexity of its solution and by the fundamental intellectual steps found in the solution. Exam construction, administration, and analysis tools are provided through the resource's website. Problems may be downloaded for use in exams or as clicker questions. Instructors may also design and administer assignments online. A self-testing tool is provided for students or instructors, an excellent tool for brushing up on conceptual electricity and magnetism. Conceptual inventory scores produced by the site are normed against the Conceptual Survey in Electricity and Magnetism. There is no cost associated with using any of the facilities of the site and you can begin to use the site immediately. Supported by NSF - DUE 0535928. Site address http://physinfo.uark.edu/physicsonline.

  14. Development of three-tier heat, temperature and internal energy diagnostic test

    NASA Astrophysics Data System (ADS)

    Gurcay, Deniz; Gulbas, Etna

    2015-05-01

    Background: Misconceptions are major obstacles to learning physics, and misconceptions about heat and temperature are among those most commonly encountered in daily life. Therefore, it is important to develop valid and reliable tools to determine students' misconceptions about basic thermodynamics concepts. Three-tier tests are effective assessment tools to determine misconceptions in physics. Although a limited number of three-tier tests about heat and temperature are discussed in the literature, no reports discuss three-tier tests that simultaneously consider heat, temperature and internal energy. Purpose: The aim of this study is to develop a valid and reliable three-tier test to determine students' misconceptions about heat, temperature and internal energy. Sample: The sample consists of 462 11th-grade Anatolian high school students. Of the participants, 46.8% were female and 53.2% were male. Design and methods: This research takes the form of a survey study. Initially, a multiple-choice test was developed. To each multiple-choice question was added one open-ended question asking the students to explain their answers. This test was then administered to 259 high school students and the data were analyzed both quantitatively and qualitatively. The students' answers for each open-ended question were analyzed and used to create the choices for the second-tier questions of the test. Based on those results, a three-tier Heat, Temperature and Internal Energy Diagnostic Test (HTIEDT) was developed by adding a second tier and a certainty response index to each item. This three-tier test was administered to the sample of 462 high school students. Results: The Cronbach alpha reliability for the test was estimated for correct and misconception scores as .75 and .68, respectively. The results of the study suggested that HTIEDT could be used as a valid and reliable test in determining misconceptions about heat, temperature and internal energy concepts.
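
    The Cronbach alpha values quoted above (.75 for correct scores, .68 for misconception scores) presumably follow the standard internal-consistency formula for a k-item test, reproduced here for reference rather than taken from the paper:

      \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{X}^{2}}\right)

    where \sigma_{i}^{2} is the variance of item i and \sigma_{X}^{2} is the variance of total test scores.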

  15. Guide to good practices for the development of test items

    SciTech Connect

    1997-01-01

    While the methodology used in developing test items can vary significantly, to ensure quality examinations, test items should be developed systematically. Test design and development is discussed in the DOE Guide to Good Practices for Design, Development, and Implementation of Examinations. This guide is intended to be a supplement by providing more detailed guidance on the development of specific test items. This guide addresses the development of written examination test items primarily. However, many of the concepts also apply to oral examinations, both in the classroom and on the job. This guide is intended to be used as guidance for the classroom and laboratory instructor or curriculum developer responsible for the construction of individual test items. This document focuses on written test items, but includes information relative to open-reference (open book) examination test items, as well. These test items have been categorized as short-answer, multiple-choice, or essay. Each test item format is described, examples are provided, and a procedure for development is included. The appendices provide examples for writing test items, a test item development form, and examples of various test item formats.

  16. Measuring student learning using initial and final concept test in an STEM course

    NASA Astrophysics Data System (ADS)

    Kaw, Autar; Yalcin, Ali

    2012-06-01

    Effective assessment is a cornerstone of measuring student learning in higher education. For a course in Numerical Methods, a concept test was used as an assessment tool to measure student learning and its improvement during the course. The concept test comprised 16 multiple-choice questions and was given at the beginning and end of the class for three semesters. Values of Hake's gain index, a measure of learning gains from pre- to post-test, ranging from 0.36 to 0.41 were recorded. The validity and reliability of the concept test were checked via standard measures such as Cronbach's alpha, content and criterion-related validity, item characteristic curves, and difficulty and discrimination indices. The performance of various subgroups, such as those defined by pre-requisite grades, transfer status, gender, and age, was also studied.
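
    Hake's normalized gain, the statistic behind the reported values of 0.36 to 0.41, is conventionally computed from class-average pre- and post-test percentages; the definition below is the standard one and is included for clarity rather than quoted from the article.

      \langle g \rangle = \frac{\langle \%\,\text{post} \rangle - \langle \%\,\text{pre} \rangle}{100 - \langle \%\,\text{pre} \rangle}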

  17. Development and application of a two-tier diagnostic test measuring college biology students' understanding of diffusion and osmosis after a course of instruction

    NASA Astrophysics Data System (ADS)

    Odom, Arthur Louis; Barrow, Lloyd H.

    This study involved the development and application of a two-tier diagnostic test measuring college biology students' understanding of diffusion and osmosis after a course of instruction. The development procedure had three general steps: defining the content boundaries of the test, collecting information on students' misconceptions, and instrument development. Misconception data were collected from interviews and multiple-choice questions with free response answers. The data were used to develop 12 two-tier multiple choice items in which the first tier examined content knowledge and the second examined understanding of that knowledge. The conceptual knowledge examined was the particulate and random nature of matter, concentration and tonicity, the influence of life forces on diffusion and osmosis, membranes, kinetic energy of matter, the process of diffusion, and the process of osmosis. The diagnostic instrument was administered to 240 students (123 non-biology majors and 117 biology majors) enrolled in a college freshman biology laboratory course. The students had completed a unit on diffusion and osmosis. The content taught was carefully defined by propositional knowledge statements, and was the same content that defined the content boundaries of the test. The split-half reliability was .74. Difficulty indices ranged from 0.23 to 0.95, and discrimination indices ranged from 0.21 to 0.65. Each item was analyzed to determine student understanding of, and identify misconceptions about, diffusion and osmosis.
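
    The difficulty and discrimination indices quoted above are standard classical test theory statistics; the sketch below shows one common way of computing them from a 0/1 item-response matrix (the function name and the 27% grouping are illustrative assumptions, not details taken from the study).

      # Sketch: classical item difficulty (p) and upper-lower discrimination (D).
      import numpy as np

      def item_statistics(scores, tail=0.27):
          """scores: (n_examinees, n_items) array of 0/1 responses."""
          scores = np.asarray(scores, dtype=float)
          order = np.argsort(scores.sum(axis=1))
          k = max(1, int(round(tail * scores.shape[0])))
          lower, upper = order[:k], order[-k:]
          difficulty = scores.mean(axis=0)  # proportion answering each item correctly
          discrimination = scores[upper].mean(axis=0) - scores[lower].mean(axis=0)
          return difficulty, discrimination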

  18. The Effects of Images on Multiple-Choice Questions in Computer-Based Formative Assessment

    ERIC Educational Resources Information Center

    Martín-SanJosé, Juan Fernando; Juan, M.-Carmen; Vivó, Roberto; Abad, Francisco

    2015-01-01

    Current learning and assessment are evolving into digital systems that can be used, stored, and processed online. In this paper, three different types of questionnaires for assessment are presented. All the questionnaires were filled out online on a web-based format. A study was carried out to determine whether the use of images related to each…

  19. MEDSIRCH: A Computerized System for the Retrieval of Multiple Choice Items.

    ERIC Educational Resources Information Center

    Hazlett, C. B.

    Medsirch (Medical Search) is an information retrieval system designed to aid in preparing examinations for medical students. There are two versions of the system: a sequential access file suitable for shallow indexing with a broad choice of search terms and a random direct access file for deep indexing with a restricted range of choices for search…

  20. Instructor Perspectives of Multiple-Choice Questions in Summative Assessment for Novice Programmers

    ERIC Educational Resources Information Center

    Shuhidan, Shuhaida; Hamilton, Margaret; D'Souza, Daryl

    2010-01-01

    Learning to program is known to be difficult for novices. High attrition and high failure rates in foundation-level programming courses undertaken at tertiary level in Computer Science programs, are commonly reported. A common approach to evaluating novice programming ability is through a combination of formative and summative assessments, with…

  1. Intuitive Judgments Govern Students' Answering Patterns in Multiple-Choice Exercises in Organic Chemistry

    ERIC Educational Resources Information Center

    Graulich, Nicole

    2015-01-01

    Research in chemistry education has revealed that students going through their undergraduate and graduate studies in organic chemistry have a fragmented conceptual knowledge of the subject. Rote memorization, rule-based reasoning, and heuristic strategies seem to strongly influence students' performances. There appears to be a gap between what we…

  2. Self-organization and phase transition in financial markets with multiple choices

    NASA Astrophysics Data System (ADS)

    Zhong, Li-Xin; Xu, Wen-Juan; Huang, Ping; Qiu, Tian; He, Yun-Xin; Zhong, Chen-Yang

    2014-09-01

    Market confidence is essential for successful investing. By incorporating multi-market into the evolutionary minority game, we investigate the effects of investor beliefs on the evolution of collective behaviors and asset prices. It is found that the roles of market confidence are closely related to whether or not there exists another market. When there exists another investment opportunity, different market confidence may lead to the same price fluctuations and the same investment attainment. There are two feedback effects. Being overly optimistic about a particular asset makes an investor become insensitive to losses. A delayed strategy adjustment leads to a decline in wealth and one's runaway from the market. The withdrawal of the agents results in the optimization of the strategy distributions and an increase in wealth. Being overly pessimistic about a particular asset makes an investor over-sensitive to losses. One's too frequent strategy adjustment leads to a decline in wealth. The withdrawal of the agents results in the improvement of the market environment and an increase in wealth.

  3. The Role of Professional Identity in Patterns of Use of Multiple-Choice Assessment Tools

    ERIC Educational Resources Information Center

    Johannesen, Monica; Habib, Laurence

    2010-01-01

    This article uses the notion of professional identity within the framework of actor network theory to understand didactic practices within three faculties in an institution of higher education. The study is based on a series of interviews with lecturers in each faculty and diaries of their didactic practices. The article focuses on the use of a…

  4. The Multiple-Choice Concept Map (MCCM): An Interactive Computer-Based Assessment Method

    ERIC Educational Resources Information Center

    Sas, Ioan Ciprian

    2010-01-01

    This research attempted to bridge the gap between cognitive psychology and educational measurement (Mislevy, 2008; Leighton & Gierl, 2007; Nichols, 1994; Messick, 1989; Snow & Lohman, 1989) by using cognitive theories from working memory (Baddeley, 1986; Miyake & Shah, 1999; Grimley & Banner, 2008), multimedia learning (Mayer, 2001), and cognitive…

  5. An Investigation of the Representativeness Heuristic: The Case of a Multiple Choice Exam

    ERIC Educational Resources Information Center

    Chernoff, Egan J.; Mamolo, Ami; Zazkis, Rina

    2016-01-01

    By focusing on a particular alteration of the comparative likelihood task, this study contributes to research on teachers' understanding of probability. Our novel task presented prospective teachers with multinomial, contextualized sequences and asked them to identify which was least likely. Results demonstrate that determinants of…

  6. Evaluating and improving multiple choice papers: true-false questions in public health medicine.

    PubMed

    Dixon, R A

    1994-09-01

    The quality of a multiple true-false (MTF) examination paper in public health medicine, taken by 149 clinical medical students, was evaluated using predefined performance criteria in order to offer guidelines for improving such papers. There were 35 questions, each with five true-false branches, and the performance of the overall best 25% of candidates on individual items was compared with that of the overall worst 25%. To improve discrimination between the best and worst candidates, 60% of items needed changes, and several indicators were used to identify the kind of change required, most often because the branch was too easy (26%), unpopular (16%) or too hard (10%). A number of guidelines for writing good MTF questions and for improving them are suggested. The paper also illustrates the inequity of marking systems that do not allocate a negative mark for incorrect answers equal in size to the positive mark for correct ones, with zero for unanswered questions or 'don't know' answers. PMID:7845259
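
    The marking-scheme point made above, that a fair scheme pairs each positive mark with an equal negative mark for an incorrect answer and gives zero for unanswered or 'don't know' responses, can be illustrated with a small scoring routine; the response encoding is an assumption for illustration only.

      # Sketch: symmetric scoring of multiple true-false branches
      # (+1 correct, -1 incorrect, 0 for "don't know" or unanswered).
      def score_mtf(responses, key):
          """responses: True/False/None per branch (None = don't know); key: True/False."""
          total = 0
          for answer, correct in zip(responses, key):
              if answer is None:
                  continue  # "don't know" earns zero
              total += 1 if answer == correct else -1
          return total

      # Example: one correct, one incorrect, one "don't know" gives a net score of 0.
      print(score_mtf([True, False, None], [True, True, False]))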

  7. Differential Daily Writing Conditions and Performance on Major Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Hautau, Briana; Turner, Haley C.; Carroll, Erin; Jaspers, Kathryn; Krohn, Katy; Parker, Megan; Williams, Robert L.

    2006-01-01

    Students (N=153) in three equivalent sections of an undergraduate human development course compared pairs of related concepts via either written or oral discussion at the beginning of most class sessions. A writing-for-random-credit section achieved significantly higher ratings on the writing activities than did a writing-for-no-credit section.…

  8. Lipids for intravenous nutrition in hospitalised adult patients: a multiple choice of options.

    PubMed

    Calder, Philip C

    2013-08-01

    Lipids used in parenteral nutrition provide energy, building blocks and essential fatty acids. Traditionally, these lipids have been based on n-6 PUFA-rich vegetable oils, particularly soyabean oil. This may not be optimal because soyabean oil may present an excessive supply of linoleic acid. Alternatives to use of soyabean oil include its partial replacement by medium-chain TAG, olive oil or fish oil, either alone or in combination. Lipid emulsions containing these alternatives are well tolerated without adverse effects in a wide range of hospitalised adult patients. Lipid emulsions that include fish oil have been used in parenteral nutrition in adult patients post-surgery (mainly gastrointestinal). This has been associated with alterations in patterns of inflammatory mediators and in immune function and, in some studies, a reduction in length of intensive care unit and hospital stay. These benefits are emphasised through recent meta-analyses. Perioperative administration of fish oil may be superior to post-operative administration. Parenteral fish oil has been used in critically ill adults. Here, the influence on inflammatory processes, immune function and clinical endpoints is not clear, since there are too few studies and those that are available report contradictory findings. However, some studies found reduced inflammation, improved gas exchange and shorter length of hospital stay in critically ill patients if they receive fish oil. More and better trials are needed in patient groups in which parenteral nutrition is used and where fish oil may offer benefits. PMID:23663322

  9. Top 10 questions to ask when looking at an EHR license agreement.

    PubMed

    Shay, Daniel F

    2006-01-01

    Electronic health records (EHRs) offer medical practices the potential for increased efficiency and profits. The license agreement controls the practice's relationship with the vendor. Understanding the implications of the license agreement will help a practice to evaluate an EHR that the practice may adopt. This article will focus on 10 specific questions to ask while evaluating the EHR license and which portions of the license answer the questions. PMID:17260908

  10. The effects of a test-taking strategy intervention for high school students with test anxiety in advanced placement science courses

    NASA Astrophysics Data System (ADS)

    Markus, Doron J.

    Test anxiety is one of the most debilitating and disruptive factors associated with underachievement and failure in schools (Birenbaum, Menucha, Nasser, & Fadia, 1994; Tobias, 1985). Researchers have suggested that interventions that combine multiple test-anxiety reduction techniques are most effective at reducing test anxiety levels (Ergene, 2003). For the current study, involving 62 public high school students enrolled in advanced placement science courses, the researcher developed a multimodal intervention designed to reduce test anxiety. Analyses were conducted to assess the relationships among test anxiety levels, unit examination scores, and irregular multiple-choice error patterns (error clumping), as well as changes in these measures after the intervention. Results indicate significant, positive relationships between some measures of test anxiety and error clumping, as well as significant, negative relationships between test anxiety levels and student achievement. In addition, results show significant decreases in holistic measures of test anxiety among students with low anxiety levels, as well as decreases in Emotionality subscores of test anxiety among students with high levels of test anxiety. There were no significant changes over time in the Worry subscores of test anxiety. Suggestions for future research include confirming the existence of error clumping and its causal relationship with test anxiety.
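
    One plausible way to quantify "error clumping" is to look at runs of consecutive wrong answers on an answer sheet. The run-length measure below is an assumed operationalization for illustration only; the record does not specify the study's actual definition.

```python
from itertools import groupby

def longest_error_run(item_correct):
    """Length of the longest run of consecutive wrong answers on one test.
    Treating 'error clumping' as long runs of consecutive errors is an
    assumed operationalization for illustration, not the study's measure."""
    runs = [sum(1 for _ in grp) for ok, grp in groupby(item_correct) if not ok]
    return max(runs, default=0)

# 1 = correct, 0 = incorrect, for one student's 40-item multiple-choice exam.
answers = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1,
           1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]
print("longest error run:", longest_error_run(answers))   # prints 4 here
```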

  11. The Positive and Negative Effects of Science Concept Tests on Student Conceptual Understanding

    NASA Astrophysics Data System (ADS)

    Chang, Chun-Yen; Yeh, Ting-Kuang; Barufaldi, James P.

    2010-01-01

    This study explored the phenomenon of testing effect during science concept assessments, including the mechanism behind it and its impact upon a learner's conceptual understanding. The participants consisted of 208 high school students, in either the 11th or 12th grade. Three types of tests (traditional multiple-choice test, correct concept test, and incorrect concept test) related to the greenhouse effect and global warming were developed to explore the mechanisms underlying the testing effect. Interview data analyzed by means of the flow-map method were used to examine the two-week post-test consequences of taking one of these three tests. The results indicated: (1) Traditional tests can affect participants' long-term memory, both positively and negatively; in addition, when students ponder repeatedly and think harder about highly distracting choices during a test, they may gradually develop new conceptions; (2) Students develop more correct conceptions when more true descriptions are provided on the tests; on the other hand, students develop more misconceptions while completing tests in which more false descriptions of choices are provided. Finally, the results of this study revealed a noteworthy phenomenon: tests, if employed appropriately, may also be an effective instrument for assisting students' conceptual understanding.

  12. Explicit versus implicit social cognition testing in autism spectrum disorder.

    PubMed

    Callenmark, Björn; Kjellin, Lars; Rönnqvist, Louise; Bölte, Sven

    2014-08-01

    Although autism spectrum disorder is defined by reciprocal social-communication impairments, several studies have found no evidence for altered social cognition test performance. This study examined explicit (i.e. prompted) and implicit (i.e. spontaneous) variants of social cognition testing in autism spectrum disorder. A sample of 19 adolescents with autism spectrum disorder and 19 carefully matched typically developing controls completed the Dewey Story Test. 'Explicit' (multiple-choice answering format) and 'implicit' (free interview) measures of social cognition were obtained. Autism spectrum disorder participants did not differ from controls regarding explicit social cognition performance. However, the autism spectrum disorder group performed more poorly than controls on implicit social cognition performance in terms of spontaneous perspective taking and social awareness. Findings suggest that social cognition alterations in autism spectrum disorder are primarily implicit in nature and that an apparent absence of social cognition difficulties on certain tests using rather explicit testing formats does not necessarily mean social cognition typicality in autism spectrum disorder. PMID:24104519

  13. Explicit versus implicit social cognition testing in autism spectrum disorder

    PubMed Central

    Callenmark, Björn; Kjellin, Lars; Rönnqvist, Louise

    2014-01-01

    Although autism spectrum disorder is defined by reciprocal social-communication impairments, several studies have found no evidence for altered social cognition test performance. This study examined explicit (i.e. prompted) and implicit (i.e. spontaneous) variants of social cognition testing in autism spectrum disorder. A sample of 19 adolescents with autism spectrum disorder and 19 carefully matched typically developing controls completed the Dewey Story Test. ‘Explicit’ (multiple-choice answering format) and ‘implicit’ (free interview) measures of social cognition were obtained. Autism spectrum disorder participants did not differ from controls regarding explicit social cognition performance. However, the autism spectrum disorder group performed more poorly than controls on implicit social cognition performance in terms of spontaneous perspective taking and social awareness. Findings suggest that social cognition alterations in autism spectrum disorder are primarily implicit in nature and that an apparent absence of social cognition difficulties on certain tests using rather explicit testing formats does not necessarily mean social cognition typicality in autism spectrum disorder. PMID:24104519

  14. FAA Pilot Knowledge Tests: Learning or Rote Memorization?

    NASA Technical Reports Server (NTRS)

    Casner, Stephen M.; Jones, Karen M.; Puentes, Antonio; Irani, Homi

    2004-01-01

    The FAA pilot knowledge test is a multiple-choice assessment tool designed to measure the extent to which applicants for FAA pilot certificates and ratings have mastered a corpus of required aeronautical knowledge. All questions that appear on the test are drawn from a database of questions that is made available to the public. The FAA and others are concerned that releasing test questions may encourage students to focus their study on memorizing test questions. To investigate this concern, we created our own database of questions that differed from FAA questions in four different ways. Our first three question types were derived by modifying existing FAA questions: (1) rewording questions and answers; (2) shuffling answers; and (3) substituting different figures for problems that used figures. Our last question type posed a question about required knowledge for which no FAA question currently exists. Forty-eight student pilots completed one of two paper-and-pencil knowledge tests that contained a mix of these experimental questions. The results indicate significantly lower scores for some question types when compared to unaltered FAA questions to which participants had prior access.

  15. Dividing the Force Concept Inventory into two equivalent half-length tests

    NASA Astrophysics Data System (ADS)

    Han, Jing; Bao, Lei; Chen, Li; Cai, Tianfang; Pi, Yuan; Zhou, Shaona; Tu, Yan; Koenig, Kathleen

    2015-06-01

    The Force Concept Inventory (FCI) is a 30-question multiple-choice assessment that has been a building block for much of the physics education research done today. In practice, there are often concerns regarding the length of the test and possible test-retest effects. Since many studies in the literature use the mean score of the FCI as the primary variable, it would be useful to have shorter tests that can produce FCI-equivalent scores while being quicker to administer and less subject to test-retest effects. In this study, we divide the 1995 version of the FCI into two half-length tests, each containing a different subset of the original FCI questions. The two new tests are shorter, still cover the same set of concepts, and produce mean scores equivalent to those of the FCI. Using a large quantitative data set collected at a large midwestern university, we statistically compare the assessment features of the two half-length tests and the full-length FCI. The results show that the mean error of equivalent scores between any two of the three tests is within 3%. Scores from all tests are well correlated. Based on the analysis, the two half-length tests appear to be a viable option for score-based assessment when tests need to be administered quickly or when short-term gains must be measured and using identical pre- and post-test questions is a concern.
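
    The equivalence claims above (mean error within 3%, well-correlated scores) can be checked on any response matrix with a few lines of code. The simulated responses and the alternate-item split below are illustrative assumptions, not the published half-length forms.

```python
import numpy as np

# Sketch: split a 30-item test into two 15-item halves and compare the mean
# scores and their correlation with the full test.  The simulated response
# matrix (simple logistic model) and the item split are illustrative only.
rng = np.random.default_rng(1)
ability = rng.normal(0, 1, size=500)
difficulty = rng.normal(0, 1, size=30)
prob = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((500, 30)) < prob).astype(int)

half_a, half_b = responses[:, ::2], responses[:, 1::2]   # alternate-item split
full = responses.mean(axis=1)
score_a, score_b = half_a.mean(axis=1), half_b.mean(axis=1)

print("mean score difference between halves:",
      abs(score_a.mean() - score_b.mean()))
print("correlation of half A with full test:", np.corrcoef(score_a, full)[0, 1])
print("correlation of half B with full test:", np.corrcoef(score_b, full)[0, 1])
```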

  16. First Results from the Test Of Astronomy STandards (TOAST) Assessment Instrument

    NASA Astrophysics Data System (ADS)

    Slater, Stephanie

    2009-01-01

    Considerable effort in the astronomy education research over the past several years has focused on developing assessment tools in the form of multiple-choice conceptual diagnostics and content knowledge surveys. This has been critically important in advancing astronomy as a sub-discipline of physics education research, allowing researchers to establish the initial knowledge state of students as well as to attempt to measure some of the impacts of innovative instructional interventions. Before now, few of the existing instruments were constructed upon a solid list of clearly articulated and widely agreed upon learning objectives. Moving beyond the 10-year old Astronomy Diagnostics Test, we have developed and validated a new assessment instrument that is tightly aligned to the consensus learning goals stated by the American Astronomical Society - Chair's Conference on ASTRO 101, the American Association for the Advancement of Science's Project 2061 Benchmarks, and the National Research Council's National Science Education Standards. Researchers from the Cognition in Astronomy, Physics and Earth sciences Research (CAPER) Team at the University of Wyoming's Science and Math Teaching Center (UWYO SMTC) designed a criterion-referenced assessment tool, called the Test Of Astronomy STandards (TOAST). Through iterative development, this multiple-choice instrument has a high degree of reliability and validity for instructors and researchers needing information on students’ initial knowledge state at the beginning of a course and can be used, in aggregate, to help measure the impact of course-length duration instructional strategies for undergraduate science survey courses with learning goals tightly aligned to the consensus goals of the astronomy education community.

  17. [Test your knowledge: contraceptives].

    PubMed

    1998-06-01

    A brief self-administered quiz on contraceptive knowledge is presented. The 7 questions ask the reader to explain the mechanism of action of combined oral contraceptives, and why estrogens are used with progestins, and to indicate the main secondary effects of Depo-Provera and implants and the dosage of the "morning-after pill." A multiple-choice question concerns absolute contraindications to combined OC use. One clinical case involves selection of OCs for a woman with a family history of breast cancer and the other requires development of a strategy for reducing high-risk pregnancies and risk of AIDS. PMID:12321847

  18. Creating a lesson that addresses gender differences in physics testing a specific instructional technique in college level physics education

    NASA Astrophysics Data System (ADS)

    Lincoln, James J.

    Research-based instructional methods are applied in an effort to close the persistent gender gap in physics. Creating a short text on a limited topic using some of these methods could benefit female students specifically. A literature review surveyed research on the gender gap in physics and on instructional methods updated for female students. Two female physics students were interviewed, and observations were conducted at a high-performing all-girls school. A physics lab dialogue between two female physics students was recorded and analyzed, which informed the style and voice of the interactive dialogue lesson. An original written lesson intended to engage female physics students was created and tested on three classes of college-level physics students. The survey data, based on multiple-choice and essay responses, measured the students' opinions of the lesson and their current textbook. Results showed the interactive lesson was preferred over the current text, and some students requested similar lessons.

  19. Correlation of Simulation Examination to Written Test Scores for Advanced Cardiac Life Support Testing: Prospective Cohort Study

    PubMed Central

    Strom, Suzanne L.; Anderson, Craig L.; Yang, Luanna; Canales, Cecilia; Amin, Alpesh; Lotfipour, Shahram; McCoy, C. Eric; Langdorf, Mark I.

    2015-01-01

    Introduction: Traditional Advanced Cardiac Life Support (ACLS) courses are evaluated using written multiple-choice tests. High-fidelity simulation is a widely used adjunct to didactic content, and has been used in many specialties as a training resource as well as an evaluative tool. There are no data to our knowledge that compare simulation examination scores with written test scores for ACLS courses. Objective: To compare and correlate a novel high-fidelity simulation-based evaluation with traditional written testing for senior medical students in an ACLS course. Methods: We performed a prospective cohort study to determine the correlation between simulation-based evaluation and traditional written testing in a medical school simulation center. Students were tested on a standard acute coronary syndrome/ventricular fibrillation cardiac arrest scenario. Our primary outcome measure was correlation of exam results for 19 volunteer fourth-year medical students after a 32-hour ACLS-based Resuscitation Boot Camp course. Our secondary outcome was comparison of simulation-based vs. written outcome scores. Results: The composite average score on the written evaluation was substantially higher (93.6%) than the simulation performance score (81.3%, absolute difference 12.3%, 95% CI [10.6–14.0%], p<0.00005). We found a statistically significant moderate correlation between simulation scenario test performance and traditional written testing (Pearson r=0.48, p=0.04), validating the new evaluation method. Conclusion: Simulation-based ACLS evaluation methods correlate with traditional written testing and demonstrate resuscitation knowledge and skills. Simulation may be a more discriminating and challenging testing method, as students scored higher on written evaluation methods compared to simulation. PMID:26594288
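
    The statistics reported above (a Pearson correlation and a comparison of mean scores) map onto standard library calls. The paired scores below are invented placeholders, and the paired t-test is shown as one common way to test the score gap; the study itself reports a confidence interval.

```python
from scipy import stats

# Sketch of the paired comparison reported above: written vs. simulation
# scores for the same 19 students (all values below are made-up placeholders).
written = [95, 92, 90, 96, 94, 91, 93, 97, 92, 95, 94, 93, 90, 96, 95, 92, 94, 93, 96]
simulation = [84, 78, 75, 88, 80, 79, 82, 90, 77, 85, 83, 81, 72, 86, 84, 79, 83, 80, 87]

r, p_corr = stats.pearsonr(written, simulation)     # strength of association
t, p_diff = stats.ttest_rel(written, simulation)    # is the score gap reliable?

print(f"Pearson r = {r:.2f} (p = {p_corr:.3f})")
print(f"paired t-test for the written-simulation gap: t = {t:.2f}, p = {p_diff:.4f}")
```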

  20. CATGEN: A Computer Assisted Test Generator.

    ERIC Educational Resources Information Center

    McCallum, L. W.

    1985-01-01

    Procedures for generating multiple-choice exams in psychology using the Apple IIe computer and the Applewriter II text editing software are described. The model is simple to use and provides flexibility in sequencing the choice of items from an instructor generated pool. (Author/RM)
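
    The core of an item-pool test generator of the kind described here is only a few lines in a modern language. The pool structure and the option-shuffling scheme below are illustrative assumptions, not CATGEN's actual file format or algorithm.

```python
import random

# Minimal sketch of a multiple-choice test generator that samples items from
# an instructor-built pool and shuffles the answer options for each exam.
pool = [
    {"stem": "Who founded the first psychology laboratory?",
     "options": ["Wundt", "James", "Skinner", "Freud"], "answer": "Wundt"},
    {"stem": "Classical conditioning was first described by:",
     "options": ["Pavlov", "Watson", "Thorndike", "Bandura"], "answer": "Pavlov"},
    # ... more items, possibly tagged by chapter or topic
]

def generate_exam(pool, n_items, seed=None):
    rng = random.Random(seed)
    items = rng.sample(pool, k=min(n_items, len(pool)))
    exam = []
    for item in items:
        options = item["options"][:]
        rng.shuffle(options)                       # randomise option order
        exam.append({"stem": item["stem"],
                     "options": options,
                     "key": options.index(item["answer"]) + 1})
    return exam

for i, q in enumerate(generate_exam(pool, 2, seed=42), start=1):
    print(f"{i}. {q['stem']}")
    for letter, opt in zip("ABCD", q["options"]):
        print(f"   {letter}. {opt}")
```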

  1. Chlamydia Testing

    MedlinePlus

    ... Amplification Test (NAAT); Chlamydia trachomatis Culture; Chlamydia trachomatis DNA Probe. Related tests: Gonorrhea Testing, HIV Antibody and HIV Antigen, Syphilis Tests, Herpes Testing, HPV Test, Trichomonas Testing ...

  2. What State Tests Test.

    ERIC Educational Resources Information Center

    McGee, Glenn W.

    What the Illinois Goal Assessment Program (IGAP) test actually tests and the consequences of these tests for funding decisions were studied with a random sample of 100 school districts in the Cook County suburbs of Chicago. Eighth-grade IGAP scores for reading were obtained from the state report card, a document prepared by each school district…

  3. Use of the NBME Comprehensive Basic Science Examination as a progress test in the preclerkship curriculum of a new medical school.

    PubMed

    Johnson, Teresa R; Khalil, Mohammed K; Peppler, Richard D; Davey, Diane D; Kibble, Jonathan D

    2014-12-01

    In the present study, we describe the innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) as a progress test during the preclerkship medical curriculum. The main aim of this study was to provide external validation of internally developed multiple-choice assessments in a new medical school. The CBSE is a practice exam for the United States Medical Licensing Examination (USMLE) Step 1 and is purchased directly from the NBME. We administered the CBSE five times during the first 2 yr of medical school. Student scores were compared with scores on newly created internal summative exams and to the USMLE Step 1. Significant correlations were observed between almost all our internal exams and CBSE scores over time as well as with USMLE Step 1 scores. The strength of correlations of internal exams to the CBSE and USMLE Step 1 broadly increased over time during the curriculum. Student scores on courses that have strong emphasis on physiology and pathophysiology correlated particularly well with USMLE Step 1 scores. Student progress, as measured by the CBSE, was found to be linear across time, and test performance fell behind the anticipated level by the end of the formal curriculum. These findings are discussed with respect to student learning behaviors. In conclusion, the CBSE was found to have good utility as a progress test and provided external validation of our new internally developed multiple-choice assessments. The data also provide performance benchmarks both for our future students to formatively assess their own progress and for other medical schools to compare learning progression patterns in different curricular models. PMID:25434014

  4. Gonorrhea Test

    MedlinePlus

    ... gonorrhoeae Culture; Neisseria gonorrhoeae Gram Stain; Neisseria gonorrhoeae DNA Probe. Related tests: Chlamydia Testing, HIV Antibody and HIV Antigen, Syphilis Tests, Herpes Testing, HPV Test, Trichomonas Testing ...

  5. Development of a test of suprathreshold acuity in noise in Brazilian Portuguese: a new method for hearing screening and surveillance.

    PubMed

    Vaez, Nara; Desgualdo-Pereira, Liliane; Paglialonga, Alessia

    2014-01-01

    This paper describes the development of a speech-in-noise test for hearing screening and surveillance in Brazilian Portuguese based on the evaluation of suprathreshold acuity performances. The SUN test (Speech Understanding in Noise) consists of a list of intervocalic consonants in noise presented in a multiple-choice paradigm by means of a touch screen. The test provides one out of three possible results: "a hearing check is recommended" (red light), "a hearing check would be advisable" (yellow light), and "no hearing difficulties" (green light) (Paglialonga et al., Comput. Biol. Med. 2014). This novel test was developed in a population of 30 normal hearing young adults and 101 adults with varying degrees of hearing impairment and handicap, including normal hearing. The test had 84% sensitivity and 76% specificity compared to conventional pure-tone screening and 83% sensitivity and 86% specificity to detect disabling hearing impairment. The test outcomes were in line with the degree of self-perceived hearing handicap. The results found here paralleled those reported in the literature for the SUN test and for conventional speech-in-noise measures. This study showed that the proposed test might be a viable method to identify individuals with hearing problems to be referred to further audiological assessment and intervention. PMID:25247181
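
    Sensitivity and specificity figures like those quoted above come from a 2x2 comparison against the reference screen, and the three-colour outcome is a simple thresholding of the test score. The counts below are chosen only to reproduce the quoted percentages, and the cut-offs in the traffic-light function are assumptions, not the SUN test's calibrated thresholds.

```python
# Sensitivity/specificity arithmetic behind figures such as "84% sensitivity,
# 76% specificity".  The counts are invented for illustration, not study data.
true_positive, false_negative = 42, 8     # impaired on the pure-tone reference
true_negative, false_positive = 61, 19    # normal on the pure-tone reference

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")

def traffic_light(score, low=0.5, high=0.7):
    """Map a speech-in-noise score to the three-colour outcome.  The cut-offs
    are illustrative assumptions, not the SUN test's calibrated thresholds."""
    if score < low:
        return "red: a hearing check is recommended"
    if score < high:
        return "yellow: a hearing check would be advisable"
    return "green: no hearing difficulties"

print(traffic_light(0.62))
```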

  6. Testing the Test

    ERIC Educational Resources Information Center

    Berube, Michael

    2009-01-01

    The author, an English professor, shares his experience in retaking the Graduate Record Examination in English literature, 25 years after he entered graduate school at the University of Virginia. He took the practice test instead of the "real" test, for a number of reasons. He wanted to be able to look over the questions afterward; to see what…

  7. Test Architecture, Test Retrofit

    ERIC Educational Resources Information Center

    Fulcher, Glenn; Davidson, Fred

    2009-01-01

    Just like buildings, tests are designed and built for specific purposes, people, and uses. However, both buildings and tests grow and change over time as the needs of their users change. Sometimes, they are also both used for purposes other than those intended in the original designs. This paper explores architecture as a metaphor for language…

  8. American Sign Language Comprehension Test: A Tool for Sign Language Researchers.

    PubMed

    Hauser, Peter C; Paludneviciene, Raylene; Riddle, Wanda; Kurz, Kim B; Emmorey, Karen; Contreras, Jessica

    2016-01-01

    The American Sign Language Comprehension Test (ASL-CT) is a 30-item multiple-choice test that measures ASL receptive skills and is administered through a website. This article describes the development and psychometric properties of the test based on a sample of 80 college students including deaf native signers, hearing native signers, deaf non-native signers, and hearing ASL students. The results revealed that the ASL-CT has good internal reliability (α = 0.834). Discriminant validity was established by demonstrating that deaf native signers performed significantly better than deaf non-native signers and hearing native signers. Concurrent validity was established by demonstrating that test results positively correlated with another measure of ASL ability (r = .715) and that hearing ASL students' performance positively correlated with the level of ASL courses they were taking (r = .726). Researchers can use the ASL-CT to characterize an individual's ASL comprehension skills, to establish a minimal skill level as an inclusion criterion for a study, to group study participants by ASL skill (e.g., proficient vs. nonproficient), or to provide a measure of ASL skill as a dependent variable. PMID:26590608
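
    Cronbach's alpha, reported above as 0.834, is straightforward to compute from a respondents-by-items score matrix. The simulated responses below are illustrative; only the formula itself reflects the standard definition.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated 0/1 responses for 80 test-takers on 30 items (illustrative only).
rng = np.random.default_rng(7)
ability = rng.normal(size=(80, 1))
scores = (rng.random((80, 30)) < 1 / (1 + np.exp(-ability))).astype(int)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```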

  9. Building the BIKE: Development and Testing of the Biotechnology Instrument for Knowledge Elicitation (BIKE)

    NASA Astrophysics Data System (ADS)

    Witzig, Stephen B.; Rebello, Carina M.; Siegel, Marcelle A.; Freyermuth, Sharyn K.; Izci, Kemal; McClure, Bruce

    2014-10-01

    Identifying students' conceptual scientific understanding is difficult if the appropriate tools are not available for educators. Concept inventories have become a popular tool to assess student understanding; however, traditionally, they are multiple choice tests. International science education standard documents advocate that assessments should be reform based, contain diverse question types, and should align with instructional approaches. To date, no instrument of this type targeting student conceptions in biotechnology has been developed. We report here the development, testing, and validation of a 35-item Biotechnology Instrument for Knowledge Elicitation (BIKE) that includes a mix of question types. The BIKE was designed to elicit student thinking and a variety of conceptual understandings, as opposed to testing closed-ended responses. The design phase contained nine steps including a literature search for content, student interviews, a pilot test, as well as expert review. Data from 175 students over two semesters, including 16 student interviews and six expert reviewers (professors from six different institutions), were used to validate the instrument. Cronbach's alpha on the pre/posttest was 0.664 and 0.668, respectively, indicating the BIKE has internal consistency. Cohen's kappa for inter-rater reliability among the 6,525 total items was 0.684 indicating substantial agreement among scorers. Item analysis demonstrated that the items were challenging, there was discrimination among the individual items, and there was alignment with research-based design principles for construct validity. This study provides a reliable and valid conceptual understanding instrument in the understudied area of biotechnology.
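
    Cohen's kappa, reported above as 0.684, is the observed inter-rater agreement corrected for chance agreement. The rating vectors below are invented examples, not the study's 6,525 scored items.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items with nominal codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1 - expected)

# Illustrative scores (0 = incorrect, 1 = partially correct, 2 = correct).
rater_1 = [2, 1, 0, 2, 2, 1, 0, 2, 1, 2, 0, 1, 2, 2, 1]
rater_2 = [2, 1, 0, 2, 1, 1, 0, 2, 1, 2, 0, 2, 2, 2, 1]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.3f}")
```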

  10. Susceptibility Testing

    MedlinePlus

    ... Also known as: Sensitivity Testing; Drug Resistance Testing; Culture and Sensitivity; C & S; Antimicrobial Susceptibility. Formal name: Bacterial and Fungal Susceptibility Testing. Related tests: Urine Culture; ...

  11. Schirmer test

    MedlinePlus

    Tear test; Tearing test; Dry eye test; Basal secretion test; Sjögren - Schirmer; Schirmer's test ... used when the eye doctor suspects you have dry eye. Symptoms include dryness of the eyes or excessive ...

  12. Prenatal Tests

    MedlinePlus

    ... if you’re feeling fine. What are prenatal tests? Prenatal tests are medical tests you get during ...

  13. Pinworm test

    MedlinePlus

    Oxyuriasis test; Enterobiasis test; Tape test ... diagnose this infection is to do a tape test. The best time to do this is in ... to determine if there are eggs. The tape test may need to be done on 3 separate ...

  14. Thyroid Tests

    MedlinePlus

    ... calories and how fast your heart beats. Thyroid tests check how well your thyroid is working. They ... thyroid diseases such as hyperthyroidism and hypothyroidism. Thyroid tests include blood tests and imaging tests. Blood tests ...

  15. Effectiveness of web-based teaching modules: test-enhanced learning in dental education.

    PubMed

    Jackson, Tate H; Hannum, Wallace H; Koroluk, Lorne; Proffit, William R

    2011-06-01

    The purpose of our study was to evaluate the effectiveness of self-tests as a component of web-based self-instruction in predoctoral orthodontics and pediatric dentistry. To this end, the usage patterns of online teaching modules and self-tests by students enrolled in three courses at the University of North Carolina at Chapel Hill School of Dentistry were monitored and correlated to final exam grade and course average. We recorded the frequency of access to thirty relevant teaching modules and twenty-nine relevant self-tests for 157 second- and third-year D.D.S. students during the course of our data collection. There was a statistically significant positive correlation between frequency of accessing self-tests and course performance in one course that was totally based on self-instruction with seminars and multiple-choice examination (Level IV): Spearman correlation between frequency of self-test access and final exam grade, rho=0.23, p=0.044; correlation between frequency of self-test access and course average: rho=0.39, p=0.0004. In the other two courses we monitored, which included content beyond self-instruction with self-tests, the correlations were positive but not statistically significant. The students' use of online learning resources varied significantly from one course (Level I) to the next (Level II): Wilcoxon matched pairs signed-rank tests, S=-515.5, p=.0057 and S=1086, p<0.0001. The data from this study suggest that increased use of web-based self-tests may be correlated with more effective learning in predoctoral dental education by virtue of the testing effect and that dental students' usage of resources for learning changes significantly over the course of their education. PMID:21642523
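
    The Spearman correlations and Wilcoxon signed-rank comparisons reported in this record correspond to standard library calls. The access counts and grades below are simulated placeholders, not the study's records.

```python
import numpy as np
from scipy import stats

# Placeholder data: per-student self-test access counts and course averages
# (simulated, not the study's records).
rng = np.random.default_rng(3)
access_count = rng.poisson(12, size=157)
course_avg = 70 + 0.6 * access_count + rng.normal(0, 8, size=157)

rho, p = stats.spearmanr(access_count, course_avg)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")

# Paired comparison of resource use across two courses for the same students.
use_course1 = rng.poisson(10, size=157)
use_course2 = use_course1 + rng.integers(-2, 6, size=157)
stat, p_wilcoxon = stats.wilcoxon(use_course1, use_course2)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p_wilcoxon:.4f}")
```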

  16. The Validity of Multiple Choice Practical Examinations as an Alternative to Traditional Free Response Examination Formats in Gross Anatomy

    ERIC Educational Resources Information Center

    Shaibah, Hassan Sami; van der Vleuten, Cees P. M.

    2013-01-01

    Traditionally, an anatomy practical examination is conducted using a free response format (FRF). However, this format is resource-intensive, as it requires a relatively large time investment from anatomy course faculty in preparation and grading. Thus, several interventions have been reported where the response format was changed to a selected…

  17. An Analysis of Complex Multiple-Choice Science-Technology-Society Items: Methodological Development and Preliminary Results

    ERIC Educational Resources Information Center

    Vazquez-Alonso, Angel; Manassero-Mas, Maria-Antonia; Acevedo-Diaz, Jose-Antonio

    2006-01-01

    The scarce attention to the assessment and evaluation in science education research has been especially harmful for teaching science-technology-society (STS) issues, due to the dialectical, tentative, value-laden, and polemic nature of most STS topics. This paper tackles the methodological difficulties of the instruments that monitor views related…

  18. Formulation of Multiple Choice Questions as a Revision Exercise at the End of a Teaching Module in Biochemistry

    ERIC Educational Resources Information Center

    Bobby, Zachariah; Radhika, M. R.; Nandeesha, H.; Balasubramanian, A.; Prerna, Singh; Archana, Nimesh; Thippeswamy, D. N.

    2012-01-01

    Graduate medical students often have little opportunity to clarify their doubts and reinforce their concepts after lecture classes. This study assessed the effect of MCQ preparation by graduate medical students as a revision exercise on the topic "Mineral metabolism." At the end of the regular teaching module on the topic "Mineral metabolism,"…

  19. Using Ordered Multiple-Choice Items to Assess Students' Understanding of the Structure and Composition of Matter

    ERIC Educational Resources Information Center

    Hadenfeldt, Jan C.; Bernholt, Sascha; Liu, Xiufeng; Neumann, Knut; Parchmann, Ilka

    2013-01-01

    Helping students develop a sound understanding of scientific concepts can be a major challenge. Lately, learning progressions have received increasing attention as a means to support students in developing understanding of core scientific concepts. At the center of a learning progression is a sequence of developmental levels reflecting an…

  20. Keeping It in Three Dimensions: Measuring the Development of Mental Rotation in Children with the Rotated Colour Cube Test (RCCT)

    PubMed Central

    Lütke, Nikolay; Lange-Küttner, Christiane

    2015-01-01

    This study introduces the new Rotated Colour Cube Test (RCCT) as a measure of object identification and mental rotation using single 3D colour cube images in a matching-to-sample procedure. One hundred 7- to 11-year-old children were tested with aligned or rotated cube models, distracters and targets. While different orientations of distracters made the RCCT more difficult, different colours of distracters had the opposite effect and made the RCCT easier because colour facilitated clearer discrimination between target and distracters. Ten-year-olds performed significantly better than 7- to 8-year-olds. The RCCT significantly correlated with children’s performance on the Raven’s Coloured Progressive Matrices Test (RCPM) presumably due to the shared multiple-choice format, but the RCCT was easier, as it did not require sequencing. Children from families with a high socio-economic status performed best on both tests, with boys outperforming girls on the more difficult RCCT test sections. PMID:27375975

  1. Have the Answers to Common Legal Questions Concerning Nutrition Support Changed Over the Past Decade? 10 Questions for 10 Years.

    PubMed

    Barrocas, Albert; Cohen, Michael L

    2016-06-01

    Clinical nutrition specialists (CNSs) are often confronted with technological, ethical, and legal questions, that is, what can be done technologically, what should be done ethically, and what must be done legally, which conflict at times. The conflict represents a "troubling trichotomy" as discussed in the lead article of this issue of Nutrition in Clinical Practice (NCP). During Clinical Nutrition Week in 2006, a symposium covering these 3 topics was presented, and later that year, an article covering the same topic was published in NCP. In this article, we revisit several legal questions/issues that were raised 10 years ago and discuss current answers and approaches. Some of the answers remain unchanged. Other answers have been modified by additional legislation, court decisions, or regulations. In addition, new questions/issues have arisen. Some of the most common questions regarding nutrition support involve the following: liability, informed consent, medical decisional incapacity vs legal competence, advance directive specificity, surrogate decision making, physician orders for life-sustaining treatment and electronic medical orders for life-sustaining treatment, legal definition of death, patient vs family decision making, the noncompliant patient, and elder abuse obligations. In the current healthcare environment, these questions and issues are best addressed via a transdisciplinary team that focuses on function rather than form. The CNS can play a pivotal role in dealing with these challenges by applying the acronym ACT: being Accountable and Communicating with all stakeholders while actively participating as an integral part of the transdisciplinary Team. PMID:27113077

  2. Predictive Testing

    MedlinePlus

    ... diagnostic testing, direct-to-consumer genetic testing, newborn screening, pharmacogenomic testing ...

  3. Use of the Moodle Platform to Promote an Ongoing Learning When Lecturing General Physics in the Physics, Mathematics and Electronic Engineering Programmes at the University of the Basque Country UPV/EHU

    NASA Astrophysics Data System (ADS)

    López, Gabriel A.; Sáenz, Jon; Leonardo, Aritz; Gurtubay, Idoia G.

    2016-03-01

    The Moodle platform has been used to put into practice an ongoing evaluation of the students' Physics learning process. The evaluation was carried out within the course General Physics, which is lectured during the first year of the Physics, Mathematics and Electronic Engineering Programmes at the Faculty of Science and Technology of the University of the Basque Country (UPV/EHU). A test bank with more than 1000 multiple-choice questions, including conceptual and numerical problems, has been prepared. Throughout the course, the students answer a 10-question multiple-choice test for each of the blocks into which the course is divided, after the corresponding material has been treated in the theoretical lectures and problem-solving sessions. The tests are automatically corrected by Moodle and, under certain criteria, the corresponding mark counts toward the final mark of the course. According to a statistical study of student performance over the last four academic years, there is an actual correlation between the marks obtained in the Moodle tests and the final mark of the course. In addition, students who passed the Moodle tests increased their odds of passing the course by an odds ratio close to 3.
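
    An odds ratio "close to 3" comes from a 2x2 pass/fail table. The counts below are invented purely to show the arithmetic; they are not the course data.

```python
# Odds-ratio arithmetic behind a statement like "passing the Moodle tests
# increases the odds of passing the course by a factor of about 3".
# The 2x2 counts are invented for illustration only.
passed_moodle_passed_course = 120
passed_moodle_failed_course = 30
failed_moodle_passed_course = 45
failed_moodle_failed_course = 35

odds_if_passed = passed_moodle_passed_course / passed_moodle_failed_course
odds_if_failed = failed_moodle_passed_course / failed_moodle_failed_course
odds_ratio = odds_if_passed / odds_if_failed
print(f"odds ratio = {odds_ratio:.2f}")   # about 3.1 with these counts
```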

  5. Coombs test

    MedlinePlus

    Direct antiglobulin test; Indirect antiglobulin test ... No special preparation is necessary for this test. ... There are two types of the Coombs test: Direct Indirect The ... that are stuck to the surface of red blood cells. Many diseases ...

  6. VDRL test

    MedlinePlus

    ... The VDRL test is a screening test for syphilis. It measures substances (proteins), called antibodies, that your ... come in contact with the bacteria that cause syphilis. How the Test is Performed The test is ...

  8. Trichomonas Testing

    MedlinePlus

    ... vaginalis by Amplified Detection; Trichomonas vaginalis by Direct Fluorescent Antibody (DFA) Related tests: Pap Smear , Chlamydia Testing , ... and men. Other methods. These include the direct fluorescent antibody (DFA) test and a test that detects ...

  9. The effect on student performance of scrambling questions and their stems in medical colleges admission tests.

    PubMed

    Khan, Junaid Sarfraz; Tabasum, Saima; Mukhtar, Osama; Iqbal, Maryam

    2013-12-01

    Assessment is an indispensable part of an educational program. Multiple-choice questions (MCQs) are an objective assessment tool, provided cheating is controlled. One method employed to reduce the chance of cheating is to scramble the sequence of the MCQs and their responses across multiple papers with the same content. It is assumed that the performance of students depends mainly on the difficulty of the items and not on the order in which they are placed within the instrument. The marks obtained by 102,211 candidates sitting the Medical Colleges Admission Test (MCAT) from 2008 to 2011, who were given similar-content but scrambled-sequence question paper codes, were analyzed using parametric tests. A significant difference among the mean marks of candidates in the different codes was identified for MCAT 2008 (F = 22.15, p < 0.001) and MCAT 2011 (F = 3.85, p = 0.009). No significant difference was found in the mean marks of the candidates each year for different codes in each centre. PMID:24305000
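
    The F-statistics quoted above come from a one-way ANOVA of candidate marks across the scrambled paper codes. The simulated marks and group sizes below are placeholders, not MCAT data.

```python
import numpy as np
from scipy import stats

# One-way ANOVA across scrambled paper codes, as in the F-statistics quoted
# above.  The simulated marks are placeholders only.
rng = np.random.default_rng(5)
code_a = rng.normal(620, 80, size=400)
code_b = rng.normal(615, 80, size=400)
code_c = rng.normal(605, 80, size=400)
code_d = rng.normal(600, 80, size=400)

f_stat, p_value = stats.f_oneway(code_a, code_b, code_c, code_d)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```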

  10. Relevant Writing Assessment: Instructionally-Sound Alternative Testing.

    ERIC Educational Resources Information Center

    Matter, M. Kevin; And Others

    The writing assessment used in grade 10 of the Cherry Creek, (Colorado) schools is described. Over the past 3 years, the focus of the assessment has changed from an assessment heavily weighted on multiple-choice items, with minimal constructed response tasks, to an assessment based entirely on a written product. Students are provided with multiple…

  11. The benefits of testing for learning on later performance.

    PubMed

    McConnell, Meghan M; St-Onge, Christina; Young, Meredith E

    2015-05-01

    Testing has been shown to enhance retention of learned information beyond simple studying, a phenomenon known as test-enhanced learning (TEL). Research has shown that TEL effects are greater for tests that require the production of responses [e.g., short-answer questions (SAQs)] relative to tests that require the recognition of correct answers [e.g., multiple-choice questions (MCQs)]. High-stakes licensure examinations have recently differentiated MCQs that require the application of clinical knowledge (context-rich MCQs) from MCQs that rely on the recognition of "facts" (context-free MCQs). The present study investigated the influence of different types of educational activities (including studying, SAQs, context-rich MCQs and context-free MCQs) on later performance on a mock licensure examination. Fourth-year medical students (n = 224) from four Quebec universities completed four educational activities: one reading-based activity and three quiz-based activities (SAQs, context-rich MCQs, and context-free MCQs). We assessed the influence of the type of educational activity on students' subsequent performance in a mock licensure examination, which consisted of two types of context-rich MCQs: (1) verbatim replications of previous items and (2) items that tested the same learning objective but were new. Mean accuracy scores on the mock licensure exam were higher when intervening educational activities contained either context-rich MCQs (Mean z-score = 0.40) or SAQs (M = 0.39) compared to context-free MCQs (M = -0.38) or study only items (M = -0.42; all p < 0.001). Higher mean scores were only present for verbatim items (p < 0.001). The benefit of testing was observed when intervening educational activities required either the generation of a response (SAQs) or the application of knowledge (context-rich MCQs); however, this effect was only observed for verbatim test items. These data provide evidence that context-rich MCQs and SAQs enhance learning through testing.

  12. Test Madness

    ERIC Educational Resources Information Center

    Hedrick, Wanda B., Ed.

    2007-01-01

    There's accountability, and then there's the testing craze, an iatrogenic practice that undermines real learning. Hedrick documents the negative effects of testing, giving teachers another weapon in their arsenal against mindless preparation for high-stakes tests.

  13. Thyroid Tests

    MedlinePlus

    ... What is the ... Why do health care providers perform thyroid tests? Health care providers perform thyroid tests to assess ...

  14. IQ testing

    MedlinePlus

    Many IQ tests are used today. Whether they measure actual intelligence or simply certain abilities is controversial. IQ tests measure a specific functioning ability and may not accurately ... any intelligence test may be culturally biased. The more widely ...

  15. Laboratory Tests

    MedlinePlus

    Laboratory tests check a sample of your blood, urine, or body tissues. A technician or your doctor ... compare your results to results from previous tests. Laboratory tests are often part of a routine checkup ...

  16. Laboratory Tests

    MedlinePlus

    ... In Vitro Diagnostics: Lab Tests ... Approved Home and Lab Tests; find all In Vitro Diagnostic Products and Decision Summaries since November 2003 ...

  17. Pap Test

    MedlinePlus

    ... Title: Pap Test. Description: Pap test; drawing shows a side ...

  18. IQ testing

    MedlinePlus

    IQ (intelligence quotient) testing is a series of exams used to determine your general intelligence in relation ... Many IQ tests are used today. Whether they measure actual intelligence or simply certain abilities is controversial. IQ tests ...

  19. Development of the Test Of Astronomy STandards (TOAST) Assessment Instrument

    NASA Astrophysics Data System (ADS)

    Slater, Timothy F.; Slater, S. J.

    2008-05-01

    Considerable effort in the astronomy education research (AER) community over the past several years has focused on developing assessment tools in the form of multiple-choice conceptual diagnostics and content knowledge surveys. This has been critically important in advancing the AER discipline so that researchers could establish the initial knowledge state of students as well as attempt to measure some of the impacts of innovative instructional interventions. Unfortunately, few of the existing instruments were constructed upon a solid list of clearly articulated and widely agreed upon learning objectives. This was not done in oversight, but rather as a result of the relative youth of AER as a discipline. Now that several important science education reform documents exist and are generally accepted by the AER community, we are in a position to develop, validate, and disseminate a new assessment instrument which is tightly aligned to the consensus learning goals stated by the American Astronomical Society - Chair's Conference on ASTRO 101, the American Association for the Advancement of Science's Project 2061 Benchmarks, and the National Research Council's National Science Education Standards. In response, researchers from the Cognition in Astronomy, Physics and Earth sciences Research (CAPER) Team at the University of Wyoming's Science & Math Teaching Center (UWYO SMTC) have designed a criterion-referenced assessment tool, called the Test Of Astronomy STandards (TOAST). Through iterative development, this instrument has a high degree of reliability and validity for instructors and researchers needing information on students’ initial knowledge state at the beginning of a course and can be used, in aggregate, to help measure the impact of course-length duration instructional strategies for courses with learning goals tightly aligned to the consensus goals of our community.

  20. Test Of Astronomy STandards TOAST Survey of K-12 Teachers

    NASA Astrophysics Data System (ADS)

    Slater, Timothy F.; Slater, Stephanie; Stork, Debra J.

    2015-01-01

    Discipline-based education research in astronomy is focused on understanding the underlying mental mechanisms used by students when learning astronomy and teachers when teaching astronomy. Systematic surveys of K-12 teachers' knowledge in the domain of astronomy are conducted periodically in order to better focus and improve professional development. These surveys are most often done when conducting contemporary needs assessments or when new assessment instruments become available. Designed by Stephanie J. Slater of the CAPER Center for Astronomy & Physics Education Research, the 29-item multiple-choice Test Of Astronomy STandards - TOAST is a carefully constructed, criterion-referenced instrument built upon a solid list of clearly articulated and widely agreed upon learning objectives. The targeted learning concepts tightly align with the consensus learning goals stated by the American Astronomical Society - Chair's Conference on ASTRO 101, the American Association for the Advancement of Science's Project 2061 Benchmarks, and the National Research Council's 1996 National Science Education Standards. Without modification, the TOAST is also aligned with the significantly less ambitious 2013 Next Generation Science Standards created by Achieve, Inc., under the auspices of the National Research Council. This latest survey reveals that K-12 teachers still hold many of the same fundamental misconceptions uncovered by earlier surveys. This includes misconceptions about the size, scale, and structure of the cosmos as well as misconceptions about the nature of physical processes at work in astronomy. This suggests that professional development in astronomy is still needed and that modern curriculum materials are best served if they provide substantial support for implementation.