Science.gov

Sample records for 10-question multiple-choice test

  1. Accommodations for Multiple Choice Tests

    ERIC Educational Resources Information Center

    Trammell, Jack

    2011-01-01

    Students with learning or learning-related disabilities frequently struggle with multiple choice assessments due to difficulty discriminating between items, filtering out distracters, and framing a mental best answer. This Practice Brief suggests accommodations and strategies that disability service providers can utilize in conjunction with…

  2. Constructive Multiple-Choice Testing System

    ERIC Educational Resources Information Center

    Park, Jooyong

    2010-01-01

    The newly developed computerized Constructive Multiple-choice Testing system is introduced. The system combines short answer (SA) and multiple-choice (MC) formats by asking examinees to respond to the same question twice, first in the SA format, and then in the MC format. This manipulation was employed to collect information about the two…

  3. Multiple-choice testing in anatomy.

    PubMed

    Nnodim, J O

    1992-07-01

    An analysis of 596 multiple-choice questions (MCQs) on human anatomy given at three First Professional Examinations for medical students is reported. The MCQ paper at each examination was 200 items long and consisted of three item-types: A, K and T/F. Each A-type item comprised a stem and five options, only one of the latter being the correct or best answer. Items of the K-type consisted of a stem and four responses, any number of which may be correct. The T/F items were of the three-response kind, the available options being 'true', 'false' and 'don't know'. Test reliability was computed by internal analysis, using the Kuder-Richardson 20 formula. Measures of concurrent validity were obtained by correlating the scores in the MCQ papers with the overall outcome of the First Professional Examination. Indices of item facility, discrimination and abstention were calculated. The effects of item-type and the availability of the 'don't know' option on examinee performance were also determined. Reliability (alpha) and concurrent validity (Pearson r) coefficients in the ranges of 0.71-0.85 and 0.80-0.93 (P less than 0.05) respectively were recorded. Regression analysis revealed the MCQ papers to be less sensitive predictors of the aggregate performance than the essay papers. The proportion of highly discriminatory and excessively difficult items was highest for the K-type. When the same K-type questions were re-exhibited in the indeterminate format, the examinees performed significantly better. Higher scores were also recorded when candidates were required to respond to all the questions than when they were offered the 'don't know' option and the percentage gain was higher for the low-scoring examinees. The appropriateness of multiple-choice testing as a tool for assessing student achievement in human anatomy is discussed.
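
    The Kuder-Richardson 20 reliability and the item facility index reported in this record are straightforward to compute from a dichotomously scored response matrix. Below is a minimal sketch in Python; the function name and the tiny response matrix are invented for illustration and are not data from the study.

    ```python
    import numpy as np

    def kr20(scores: np.ndarray) -> float:
        """Kuder-Richardson 20 reliability for a 0/1 item-score matrix
        (rows = examinees, columns = items)."""
        k = scores.shape[1]                         # number of items
        p = scores.mean(axis=0)                     # item facility (proportion correct)
        q = 1.0 - p
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of examinees' total scores
        return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

    # Invented 6-examinee, 5-item response matrix, purely for illustration.
    responses = np.array([
        [1, 1, 1, 0, 1],
        [1, 0, 1, 1, 1],
        [0, 1, 0, 0, 1],
        [1, 1, 1, 1, 1],
        [0, 0, 1, 0, 0],
        [1, 1, 0, 1, 1],
    ])
    print("item facility:", responses.mean(axis=0))
    print("KR-20 reliability:", round(kr20(responses), 3))
    ```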

  4. The Positive and Negative Consequences of Multiple-Choice Testing

    ERIC Educational Resources Information Center

    Roediger, Henry L.; Marsh, Elizabeth J.

    2005-01-01

    Multiple-choice tests are commonly used in educational settings but with unknown effects on students' knowledge. The authors examined the consequences of taking a multiple-choice test on a later general knowledge test in which students were warned not to guess. A large positive testing effect was obtained: Prior testing of facts aided final…

  5. Reducing the Need for Guesswork in Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Bush, Martin

    2015-01-01

    The humble multiple-choice test is very widely used within education at all levels, but its susceptibility to guesswork makes it a suboptimal assessment tool. The reliability of a multiple-choice test is partly governed by the number of items it contains; however, longer tests are more time consuming to take, and for some subject areas, it can be…

  6. The memorial consequences of multiple-choice testing.

    PubMed

    Marsh, Elizabeth J; Roediger, Henry L; Bjork, Robert A; Bjork, Elizabeth L

    2007-04-01

    The present article addresses whether multiple-choice tests may change knowledge even as they attempt to measure it. Overall, taking a multiple-choice test boosts performance on later tests, as compared with non-tested control conditions. This benefit is not limited to simple definitional questions, but holds true for SAT II questions and for items designed to tap concepts at a higher level in Bloom's (1956) taxonomy of educational objectives. Students, however, can also learn false facts from multiple-choice tests; testing leads to persistence of some multiple-choice lures on later general knowledge tests. Such persistence appears due to faulty reasoning rather than to an increase in the familiarity of lures. Even though students may learn false facts from multiple-choice tests, the positive effects of testing outweigh this cost.

  7. Detection of Copying on Multiple-Choice Tests: An Update.

    ERIC Educational Resources Information Center

    Bellezza, Francis S.; Bellezza, Suzanne R.

    1995-01-01

    Reviews research on detection of cheating by students on multiple choice tests. Discusses three ideas concerning detecting, deterring, and confronting cheating. Discusses problems confronting teachers attempting to use statistical data to prove cheating. (CFR)

  8. Optimizing Multiple-Choice Tests as Learning Events

    ERIC Educational Resources Information Center

    Little, Jeri Lynn

    2011-01-01

    Although generally used for assessment, tests can also serve as tools for learning--but different test formats may not be equally beneficial. Specifically, research has shown multiple-choice tests to be less effective than cued-recall tests in improving the later retention of the tested information (e.g., see meta-analysis by Hamaker, 1986),…

  9. Statistical Analysis of Multiple Choice Testing

    DTIC Science & Technology

    2001-04-01

    the question to help determine poor distractors (incorrect answers). However, Attali and Fraenkel show that while it is sound to use the Rpbis… heavily on question difficulty. Attali and Fraenkel say that the Biserial is usually preferred as a criterion measure for the correct alternative… (snippet footnotes: …pubs/mcq/scpre.html, p. 6; Renckly, Thomas R., Test Analysis & Development System (TAD), version 5.49, CD-ROM, 1990-2000; Attali, Yigal…)
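
    The point-biserial correlation (Rpbis) and biserial mentioned in this snippet are the usual criterion measures for judging the keyed answer and the distractors: each option's 0/1 selection indicator is correlated with examinees' total scores. A minimal sketch of the point-biserial version (the response vectors below are invented for illustration):

    ```python
    import numpy as np

    def point_biserial(option_chosen: np.ndarray, total_scores: np.ndarray) -> float:
        """Correlation between a 0/1 'chose this option' indicator and total score.
        For a sound item the keyed answer correlates positively with total score
        and distractors correlate negatively (or near zero)."""
        return float(np.corrcoef(option_chosen.astype(float),
                                 total_scores.astype(float))[0, 1])

    # Invented data: option picked by each of 8 examinees on one item, plus totals.
    picked = np.array(["B", "B", "A", "B", "C", "B", "D", "A"])
    totals = np.array([17, 15, 9, 18, 7, 16, 8, 11])

    for opt in "ABCD":
        print(f"option {opt}: Rpbis = {point_biserial(picked == opt, totals):+.2f}")
    ```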

  10. Reliability of Speeded Number-Right Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2005-01-01

    Contrary to common belief, reliability estimates of number-right multiple-choice tests are not inflated by speededness. Because examinees guess on questions when they run out of time, the responses to these questions generally show less consistency with the responses of other questions, and the reliability of the test will be decreased. The…

  11. Guessing, Partial Knowledge, and Misconceptions in Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Lau, Paul Ngee Kiong; Lau, Sie Hoe; Hong, Kian Sam; Usop, Hasbee

    2011-01-01

    The number right (NR) method, in which students pick one option as the answer, is the conventional method for scoring multiple-choice tests that is heavily criticized for encouraging students to guess and failing to credit partial knowledge. In addition, computer technology is increasingly used in classroom assessment. This paper investigates the…

  12. Partial-Credit Scoring Methods for Multiple-Choice Tests.

    ERIC Educational Resources Information Center

    Frary, Robert B.

    1989-01-01

    Multiple-choice response and scoring methods that attempt to determine an examinee's degree of knowledge about each item in order to produce a total test score are reviewed. There is apparently little advantage to such schemes; however, they may have secondary benefits such as providing feedback to enhance learning. (SLD)

  13. A Review of Scoring Algorithms for Multiple-Choice Tests.

    ERIC Educational Resources Information Center

    Kurz, Terri Barber

    Multiple-choice tests are generally scored using a conventional number right scoring method. While this method is easy to use, it has several weaknesses. These weaknesses include decreased validity due to guessing and failure to credit partial knowledge. In an attempt to address these weaknesses, psychometricians have developed various scoring…

  14. Violating Conventional Wisdom in Multiple Choice Test Construction

    ERIC Educational Resources Information Center

    Taylor, Annette Kujawski

    2005-01-01

    This research examined 2 elements of multiple-choice test construction, balancing the key and optimal number of options. In Experiment 1 the 3 conditions included a balanced key, overrepresentation of a and b responses, and overrepresentation of c and d responses. The results showed that error-patterns were independent of the key, reflecting…

  15. Valuing Assessment in Teacher Education - Multiple-Choice Competency Testing

    ERIC Educational Resources Information Center

    Martin, Dona L.; Itter, Diane

    2014-01-01

    When our focus is on assessment, educators should work to value the nature of assessment. This paper presents a new approach to multiple-choice competency testing in mathematics education. The instrument discussed here reflects student competence, encourages self-regulatory learning behaviours and links content with current curriculum documents and…

  16. Multiple Choice Testing and the Retrieval Hypothesis of the Testing Effect

    ERIC Educational Resources Information Center

    Sensenig, Amanda E.

    2010-01-01

    Taking a test often leads to enhanced later memory for the tested information, a phenomenon known as the "testing effect". This memory advantage has been reliably demonstrated with recall tests but not multiple choice tests. One potential explanation for this finding is that multiple choice tests do not rely on retrieval processes to the same…

  17. Computer-Based Confidence Testing: Alternatives to Conventional, Computer-Based Multiple-Choice Testing.

    ERIC Educational Resources Information Center

    Anderson, Richard Ivan

    1982-01-01

    Describes confidence testing methods (confidence weighting, probabilistic marking, multiple alternative selection) as alternative to computer-based, multiple choice tests and explains potential benefits (increased reliability, improved examinee evaluation of alternatives, extended diagnostic information and remediation prescriptions, happier…

  18. Measures of Partial Knowledge and Unexpected Responses in Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Chang, Shao-Hua; Lin, Pei-Chun; Lin, Zih-Chuan

    2007-01-01

    This study investigates differences in the partial scoring performance of examinees in elimination testing and conventional dichotomous scoring of multiple-choice tests implemented on a computer-based system. Elimination testing that uses the same set of multiple-choice items rewards examinees with partial knowledge over those who are simply…
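
    Elimination testing asks examinees to cross out every option they believe is wrong instead of picking a single answer, so partial knowledge earns partial credit. A minimal sketch of one classical elimination-scoring rule (one point per distractor correctly eliminated, a penalty of k-1 points if the keyed answer is eliminated); this is an assumed illustration, not necessarily the exact rule used in the study:

    ```python
    def elimination_score(eliminated: set, key: str, options: int = 4) -> int:
        """One classical elimination-scoring rule: +1 for every distractor the
        examinee eliminates, -(k-1) if the keyed answer is eliminated."""
        if key in eliminated:
            return -(options - 1)
        return len(eliminated)

    # Full knowledge: all three distractors of a 4-option item eliminated -> 3
    print(elimination_score({"B", "C", "D"}, key="A"))
    # Partial knowledge: two distractors eliminated -> 2
    print(elimination_score({"C", "D"}, key="A"))
    # Misconception: the keyed answer eliminated -> -3
    print(elimination_score({"A", "C"}, key="A"))
    ```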

  19. Examination of the Quality of Multiple-Choice Items on Classroom Tests

    ERIC Educational Resources Information Center

    DiBattista, David; Kurzawa, Laura

    2011-01-01

    Because multiple-choice testing is so widespread in higher education, we assessed the quality of items used on classroom tests by carrying out a statistical item analysis. We examined undergraduates' responses to 1198 multiple-choice items on sixteen classroom tests in various disciplines. The mean item discrimination coefficient was +0.25, with…

  20. Are Multiple Choice Tests Fair to Medical Students with Specific Learning Disabilities?

    ERIC Educational Resources Information Center

    Ricketts, Chris; Brice, Julie; Coombes, Lee

    2010-01-01

    The purpose of multiple choice tests of medical knowledge is to estimate as accurately as possible a candidate's level of knowledge. However, concern is sometimes expressed that multiple choice tests may also discriminate in undesirable and irrelevant ways, such as between minority ethnic groups or by sex of candidates. There is little literature…

  1. Evidence-Based Decision about Test Scoring Rules in Clinical Anatomy Multiple-Choice Examinations

    ERIC Educational Resources Information Center

    Severo, Milton; Gaio, A. Rita; Povo, Ana; Silva-Pereira, Fernanda; Ferreira, Maria Amélia

    2015-01-01

    In theory the formula scoring methods increase the reliability of multiple-choice tests in comparison with number-right scoring. This study aimed to evaluate the impact of the formula scoring method in clinical anatomy multiple-choice examinations, and to compare it with that from the number-right scoring method, hoping to achieve an…
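
    The formula-scoring method contrasted with number-right scoring here is, in its classical form, a correction for guessing: one point per correct answer minus 1/(k-1) of a point per wrong answer, with omitted items ignored. A minimal sketch under those standard assumptions (function names and example counts are illustrative, not taken from the article):

    ```python
    def number_right(right: int) -> float:
        """Conventional scoring: only the count of correct answers matters."""
        return float(right)

    def formula_score(right: int, wrong: int, options: int = 5) -> float:
        """Classical correction for guessing: R - W/(k-1) for k-option items;
        omitted items are neither rewarded nor penalized."""
        return right - wrong / (options - 1)

    # Illustrative: 60-item exam with 5 options per item; 42 right, 12 wrong, 6 omitted.
    print(number_right(42))        # 42.0
    print(formula_score(42, 12))   # 42 - 12/4 = 39.0
    ```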

  2. Testing Collective Memory: Representing the Soviet Union on Multiple-Choice Questions

    ERIC Educational Resources Information Center

    Reich, Gabriel A.

    2011-01-01

    This article tests the assumption that state-mandated multiple-choice history exams are a cultural tool for disseminating an "official" collective memory. Findings from a qualitative study of a collection of multiple-choice questions that relate to the history of the Soviet Union are presented. The 263 questions all come from New York…

  3. A Statistical Test for Detecting Answer Copying on Multiple-Choice Tests

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Sotaridona, Leonardo

    2004-01-01

    A statistical test for the detection of answer copying on multiple-choice tests is presented. The test is based on the idea that the answers of examinees to test items may be the result of three possible processes: (1) knowing, (2) guessing, and (3) copying, but that examinees who do not have access to the answers of other examinees can arrive at…

  4. Can Multiple-Choice Testing Induce Desirable Difficulties? Evidence from the Laboratory and the Classroom.

    PubMed

    Bjork, Elizabeth Ligon; Soderstrom, Nicholas C; Little, Jeri L

    2015-01-01

    The term desirable difficulties (Bjork, 1994) refers to conditions of learning that, though often appearing to cause difficulties for the learner and to slow down the process of acquisition, actually improve long-term retention and transfer. One known desirable difficulty is testing (as compared with restudy), although typically it is tests that clearly involve retrieval--such as free and cued recall tests--that are thought to induce these learning benefits and not multiple-choice tests. Nonetheless, multiple-choice testing is ubiquitous in educational settings and many other high-stakes situations. In this article, we discuss research, in both the laboratory and the classroom, exploring whether multiple-choice testing can also be fashioned to promote the type of retrieval processes known to improve learning, and we speculate about the necessary properties that multiple-choice questions must possess, as well as the metacognitive strategy students need to use in answering such questions, to achieve this goal.

  5. A Comparison of Multiple-Choice Tests and True-False Tests Used in Evaluating Student Progress

    ERIC Educational Resources Information Center

    Tasdemir, Mehmet

    2010-01-01

    This study aims at comparing the difficulty levels, discrimination powers and achievement-testing powers of multiple choice tests and true-false tests, and thus at testing the commonly held hypothesis that multiple choice tests do not have the same properties as true-false tests. The research was performed with…

  6. Multiple-Choice and True/False Tests: Myths and Misapprehensions

    ERIC Educational Resources Information Center

    Burton, Richard F.

    2005-01-01

    Examiners seeking guidance on multiple-choice and true/false tests are likely to encounter various faulty or questionable ideas. Twelve of these are discussed in detail, having to do mainly with the effects on test reliability of test length, guessing and scoring method (i.e. number-right scoring or negative marking). Some misunderstandings could…

  7. Sex Differences in the Tendency to Omit Items on Multiple-Choice Tests: 1980-2000

    ERIC Educational Resources Information Center

    von Schrader, Sarah; Ansley, Timothy

    2006-01-01

    Much has been written concerning the potential group differences in responding to multiple-choice achievement test items. This discussion has included references to possible disparities in tendency to omit such test items. When test scores are used for high-stakes decision making, even small differences in scores and rankings that arise from male…

  8. Linguistic and cultural adaptation needs of Mexican American nursing students related to multiple-choice tests.

    PubMed

    Lujan, Josefina

    2008-07-01

    Hispanic nurses represent less than 2% of the current U.S. nursing workforce, even though approximately 14% of the nation's population is Hispanic. There is an urgent need to correct the gross underrepresentation of Mexican Americans, the largest subgroup among Hispanics, in the U.S. nursing workforce to provide culturally concordant care. One solution is to increase the academic success of Mexican American nursing students with English as a second language through improved linguistic and cultural adaptation to multiple-choice tests. This article will discuss these students' linguistic and cultural adaptation needs related to multiple-choice tests and will also present several intervention strategies and a case study.

  9. Piloting a Polychotomous Partial-Credit Scoring Procedure in a Multiple-Choice Test

    ERIC Educational Resources Information Center

    Tsopanoglou, Antonios; Ypsilandis, George S.; Mouti, Anna

    2014-01-01

    Multiple-choice (MC) tests are frequently used to measure language competence because they are quick, economical and straightforward to score. While degrees of correctness have been investigated for partially correct responses in combined-response MC tests, degrees of incorrectness in distractors and the role they play in determining the…

  10. A new scoring system for the Spraings Multiple Choice Bender Gestalt Test.

    PubMed

    Friedman, A F; Wakefield, J A; Sasek, J; Schroeder, D

    1977-01-01

    A new scoring procedure to be used with Spraings' technique for administering the Bender-Gestalt test in a multiple choice format is presented. Scoring weights are used instead of simply scoring each item right or wrong. The evidence presented suggests that this method of scoring would increase the value of Spraings' test in the diagnosis of perceptual deficits.

  11. Reliability of Speeded Number-Right Multiple-Choice Tests. Research Report. RR-04-15

    ERIC Educational Resources Information Center

    Attali, Yigal

    2004-01-01

    Contrary to common belief, reliability estimates of number-right multiple-choice tests are not inflated by speededness. Because examinees guess on questions when they run out of time, the responses to these questions show less consistency with the responses of other questions, and the reliability of the test will be decreased. The surprising…

  12. Difficulty and Discriminating Indices of Three-Multiple Choice Tests Using the Confidence Scoring Procedure

    ERIC Educational Resources Information Center

    Omirin, M. S.

    2007-01-01

    The study investigated the comparison of the difficulty and discrimination indices of three multiple choice tests using the confidence scoring procedure (CSP). The study was also set to determine whether or not the difficulty and discrimination indices would be improved if the tests were scored by the confidence scoring procedure. Two null…

  13. Cognitive Diagnostic Models for Tests with Multiple-Choice and Constructed-Response Items

    ERIC Educational Resources Information Center

    Kuo, Bor-Chen; Chen, Chun-Hua; Yang, Chih-Wei; Mok, Magdalena Mo Ching

    2016-01-01

    Traditionally, teachers evaluate students' abilities via their total test scores. Recently, cognitive diagnostic models (CDMs) have begun to provide information about the presence or absence of students' skills or misconceptions. Nevertheless, CDMs are typically applied to tests with multiple-choice (MC) items, which provide less diagnostic…

  14. Multiple-Choice versus Constructed-Response Tests in the Assessment of Mathematics Computation Skills.

    ERIC Educational Resources Information Center

    Gadalla, Tahany M.

    The equivalence of multiple-choice (MC) and constructed response (discrete) (CR-D) response formats as applied to mathematics computation at grade levels two to six was tested. The difference between total scores from the two response formats was tested for statistical significance, and the factor structure of items in both response formats was…

  15. Predictive Validity of a Multiple-Choice Test for Placement in a Community College

    ERIC Educational Resources Information Center

    Verbout, Mary F.

    2013-01-01

    Multiple-choice tests of punctuation and usage are used throughout the United States to assess the writing skills of new community college students in order to place them in either a basic writing course or first-year composition. To determine whether the COMPASS Writing Test (CWT) is a valid placement instrument at a community college, student test…

  16. Research on the Multiple-Choice Test Item in Japan: Toward the Validation of Mathematical Models.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    Research related to the multiple choice test item is reported, as it is conducted by educational technologists in Japan. Sato's number of hypothetical equivalent alternatives is introduced. The basic idea behind this index is that the expected uncertainty of the m events, or alternatives, be large and the number of hypothetical, equivalent…

  17. The "None of the Above" Option in Multiple-Choice Testing: An Experimental Study

    ERIC Educational Resources Information Center

    DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda

    2014-01-01

    The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…

  18. A Practical Methodology for the Systematic Development of Multiple Choice Tests.

    ERIC Educational Resources Information Center

    Blumberg, Phyllis; Felner, Joel

    Using Guttman's facet design analysis, four parallel forms of a multiple-choice test were developed. A mapping sentence, logically representing the universe of content of a basic cardiology course, specified the facets of the course and the semantic structural units linking them. The facets were: cognitive processes, disease priority, specific…

  19. Multiple-Choice Tests and Student Understanding: What Is the Connection?

    ERIC Educational Resources Information Center

    Simkin, Mark G.; Kuechler, William L.

    2005-01-01

    Instructors can use both "multiple-choice" (MC) and "constructed response" (CR) questions (such as short answer, essay, or problem-solving questions) to evaluate student understanding of course materials and principles. This article begins by discussing the advantages and concerns of using these alternate test formats and…

  20. Application of a Multidimensional Nested Logit Model to Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Bolt, Daniel M.; Wollack, James A.; Suh, Youngsuk

    2012-01-01

    Nested logit models have been presented as an alternative to multinomial logistic models for multiple-choice test items (Suh and Bolt in "Psychometrika" 75:454-473, 2010) and possess a mathematical structure that naturally lends itself to evaluating the incremental information provided by attending to distractor selection in scoring. One potential…

  1. Not Read, but Nevertheless Solved? Three Experiments on PIRLS Multiple Choice Reading Comprehension Test Items

    ERIC Educational Resources Information Center

    Sparfeldt, Jorn R.; Kimmel, Rumena; Lowenkamp, Lena; Steingraber, Antje; Rost, Detlef H.

    2012-01-01

    Multiple-choice (MC) reading comprehension test items comprise three components: text passage, questions about the text, and MC answers. The construct validity of this format has been repeatedly criticized. In three between-subjects experiments, fourth graders (N₁ = 230, N₂ = 340, N₃ = 194) worked on three…

  2. Does the Position of Response Options in Multiple-Choice Tests Matter?

    ERIC Educational Resources Information Center

    Hohensinn, Christine; Baghaei, Purya

    2017-01-01

    In large scale multiple-choice (MC) tests alternate forms of a test may be developed to prevent cheating by changing the order of items or by changing the position of the response options. The assumption is that since the content of the test forms are the same the order of items or the positions of the response options do not have any effect on…

  3. Effects of Mayfield's Four Questions (M4Q) on Nursing Students' Self-Efficacy and Multiple-Choice Test Scores

    ERIC Educational Resources Information Center

    Mayfield, Linda Riggs

    2010-01-01

    This study examined the effects of being taught the Mayfield's Four Questions multiple-choice test-taking strategy on the perceived self-efficacy and multiple-choice test scores of nursing students in a two-year associate degree program. Experimental and control groups were chosen by stratified random sampling. Subjects completed the 10-statement…

  4. Writing multiple-choice test items that promote and measure critical thinking.

    PubMed

    Morrison, S; Free, K W

    2001-01-01

    Faculties are concerned about measurement of critical thinking especially since the National League for Nursing Accrediting Commission cited such measurement as a requirement for accreditation (NLNAC, 1997). Some writers and researchers (Alfaro-LeFevre, 1995; Blat, 1989; McPeck, 1981, 1990) describe the need to measure critical thinking within the context of a specific discipline. Based on McPeck's position that critical thinking is discipline-specific, guidelines for developing multiple-choice test items as a means of measuring critical thinking within the discipline of nursing are discussed. Specifically, criteria described by Morrison, Smith, and Britt (1996) for writing critical-thinking multiple-choice test items are reviewed and explained for promoting and measuring critical thinking.

  5. Test of understanding of vectors: A reliable multiple-choice vector concept test

    NASA Astrophysics Data System (ADS)

    Barniol, Pablo; Zavala, Genaro

    2014-06-01

    In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended problems in which a total of 2067 students participated. Using this taxonomy, we then designed a 20-item multiple-choice test [Test of understanding of vectors (TUV)] and administered it in English to 423 students who were completing the required sequence of introductory physics courses at a large private Mexican university. We evaluated the test's content validity, reliability, and discriminatory power. The results indicate that the TUV is a reliable assessment tool. We also conducted a detailed analysis of the students' understanding of the vector concepts evaluated in the test. The TUV is included in the Supplemental Material as a resource for other researchers studying vector learning, as well as instructors teaching the material.

  6. Feedback-related brain activity predicts learning from feedback in multiple-choice testing.

    PubMed

    Ernst, Benjamin; Steinhauser, Marco

    2012-06-01

    Different event-related potentials (ERPs) have been shown to correlate with learning from feedback in decision-making tasks and with learning in explicit memory tasks. In the present study, we investigated which ERPs predict learning from corrective feedback in a multiple-choice test, which combines elements from both paradigms. Participants worked through sets of multiple-choice items of a Swahili-German vocabulary task. Whereas the initial presentation of an item required the participants to guess the answer, corrective feedback could be used to learn the correct response. Initial analyses revealed that corrective feedback elicited components related to reinforcement learning (FRN), as well as to explicit memory processing (P300) and attention (early frontal positivity). However, only the P300 and early frontal positivity were positively correlated with successful learning from corrective feedback, whereas the FRN was even larger when learning failed. These results suggest that learning from corrective feedback crucially relies on explicit memory processing and attentional orienting to corrective feedback, rather than on reinforcement learning.

  7. Mechanical waves conceptual survey: Its modification and conversion to a standard multiple-choice test

    NASA Astrophysics Data System (ADS)

    Barniol, Pablo; Zavala, Genaro

    2016-06-01

    In this article we present several modifications of the mechanical waves conceptual survey, the most important test to date that has been designed to evaluate university students' understanding of four main topics in mechanical waves: propagation, superposition, reflection, and standing waves. The most significant changes are (i) modification of several test questions that had some problems in their original design, (ii) standardization of the number of options for each question to five, (iii) conversion of the two-tier questions to multiple-choice questions, and (iv) modification of some questions to make them independent of others. To obtain a final version of the test, we administered both the original and modified versions several times to students at a large private university in Mexico. These students were completing a course that covers the topics tested by the survey. The final modified version of the test was administered to 234 students. In this study we present the modifications for each question, and discuss the reasons behind them. We also analyze the results obtained by the final modified version and offer a comparison between the original and modified versions. In the Supplemental Material we present the final modified version of the test. It can be used by teachers and researchers to assess students' understanding of, and learning about, mechanical waves.

  8. Multiple-Choice Cloze Exercises: Textual Domain, Science. SPPED Test Development Notebook, Form 81-S [and] Answer Key for Multiple-Choice Cloze Exercises: Textual Domain, Science. SPPED Test Development Notebook, Form 85-S. Revised.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Div. of Research.

    The "Test Development Notebook" is a resource designed for the preparation of tests of literal comprehension for students in grades 1 through 12. This volume contains 200 multiple-choice cloze exercises taken from textbooks in science, and the accompanying answer key. Each exercise carries the code letter of the section to which it belongs. The…

  9. Some Effects of Changes in Question Structure and Sequence on Performance in a Multiple Choice Chemistry Test.

    ERIC Educational Resources Information Center

    Hodson, D.

    1984-01-01

    Investigated the effect on student performance of changes in question structure and sequence on a GCE 0-level multiple-choice chemistry test. One finding noted is that there was virtually no change in test reliability on reducing the number of options (from five to per test item). (JN)

  10. Quantifying the Effects of Chance in Multiple Choice and True/False Tests: Question Selection and Guessing of Answers.

    ERIC Educational Resources Information Center

    Burton, Richard F.

    2001-01-01

    Describes four measures of test unreliability that quantify effects of question selection and guessing, both separately and together--three chosen for immediacy and one for greater mathematical elegance. Quantifies their dependence on test length and number of answer options per question. Concludes that many multiple choice tests are unreliable…

  11. The Impact of Escape Alternative Position Change in Multiple-Choice Test on the Psychometric Properties of a Test and Its Items Parameters

    ERIC Educational Resources Information Center

    Hamadneh, Iyad Mohammed

    2015-01-01

    This study aimed at investigating the impact of changing the escape alternative position in a multiple-choice test on the psychometric properties of the test and its item parameters (difficulty, discrimination & guessing), and on the estimation of examinee ability. To achieve the study objectives, a 4-alternative multiple choice type achievement test…

  12. Set of Criteria for Efficiency of the Process Forming the Answers to Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2013-01-01

    A set of criteria is offered for assessing the efficiency of the process of forming answers to multiple-choice test items. To increase the accuracy of computer-assisted testing results, it is suggested that the dynamics of the process of forming the final answer be assessed using the following factors: a loss-of-time factor and a correct-choice factor. The model…

  13. Effectiveness of Guided Multiple Choice Objective Questions Test on Students' Academic Achievement in Senior School Mathematics by School Location

    ERIC Educational Resources Information Center

    Igbojinwaekwu, Patrick Chukwuemeka

    2015-01-01

    This study investigated, using pretest-posttest quasi-experimental research design, the effectiveness of guided multiple choice objective questions test on students' academic achievement in Senior School Mathematics, by school location, in Delta State Capital Territory, Nigeria. The sample comprised 640 Students from four coeducation secondary…

  14. An Investigation of Non-Independence of Components of Scores on Multiple-Choice Tests. Final Report.

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.; Burkheimer, Graham J., Jr.

    Investigation is continued into various effects of non-independent error introduced into multiple-choice test scores as a result of chance guessing success. A model is developed in which the concept of theoretical components of scores is not introduced and in which, therefore, no assumptions regarding any relationship between such components need…

  15. A Statistical Analysis of Infrequent Events on Multiple-Choice Tests that Indicate Probable Cheating

    ERIC Educational Resources Information Center

    Sundermann, Michael J.

    2008-01-01

    A statistical analysis of multiple-choice answers is performed to identify anomalies that can be used as evidence of student cheating. The ratio of exact errors in common (EEIC: two students put the same wrong answer for a question) to differences (D: two students get different answers) was found to be a good indicator of cheating under a wide…
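
    The EEIC-to-D ratio described here can be computed for any pair of answer strings: count the items on which both students gave the same wrong answer (EEIC) and the items on which their answers differ (D). A minimal sketch (the answer key and response strings are invented for illustration):

    ```python
    def eeic_to_d_ratio(answers_a: str, answers_b: str, key: str) -> float:
        """Ratio of exact errors in common (same wrong answer on the same item)
        to differences (items on which the two students answered differently)."""
        eeic = sum(a == b != k for a, b, k in zip(answers_a, answers_b, key))
        diff = sum(a != b for a, b in zip(answers_a, answers_b))
        return eeic / diff if diff else float("inf")

    # Invented 10-item example: three identical wrong answers, one differing answer.
    key       = "ABCDABCDAB"
    student_1 = "ABADABBDCB"
    student_2 = "ABADCBBDCB"
    print(eeic_to_d_ratio(student_1, student_2, key))   # 3 / 1 = 3.0
    ```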

  16. Improving Measures via Examining the Behavior of Distractors in Multiple-Choice Tests: Assessment and Remediation

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Tsaousis, Ioannis; Al Harbi, Khaleel

    2017-01-01

    The purpose of the present article was to illustrate, using an example from a national assessment, the value from analyzing the behavior of distractors in measures that engage the multiple-choice format. A secondary purpose of the present article was to illustrate four remedial actions that can potentially improve the measurement of the…

  17. Test-Taking Strategies of Arab EFL Learners on Multiple Choice Tests

    ERIC Educational Resources Information Center

    Al Fraidan, Abdullah; Al-Khalaf, Khadija

    2012-01-01

    Many studies have focused on the function of learners' strategies in a variety of EFL domains. However, research on test-taking strategies (TTSs) has been limited, even though such strategies might influence test scores and, as a result, test validity. Motivated by this fact and in light of our own experience as EFL test-makers, this article will…

  18. Why Is Performance on Multiple-Choice Tests and Constructed-Response Tests Not More Closely Related? Theory and an Empirical Test

    ERIC Educational Resources Information Center

    Kuechler, William L.; Simkin, Mark G.

    2010-01-01

    Both professional certification and academic tests rely heavily on multiple-choice questions, despite the widespread belief that alternate, constructed-response questions are superior measures of a test taker's understanding of the underlying material. Empirically, the search for a link between these two assessment metrics has met with limited…

  19. Test of Understanding of Vectors: A Reliable Multiple-Choice Vector Concept Test

    ERIC Educational Resources Information Center

    Barniol, Pablo; Zavala, Genaro

    2014-01-01

    In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended…

  20. Measuring the Consistency in Change in Hepatitis B Knowledge among Three Different Types of Tests: True/False, Multiple Choice, and Fill in the Blanks Tests.

    ERIC Educational Resources Information Center

    Sahai, Vic; Demeyere, Petra; Poirier, Sheila; Piro, Felice

    1998-01-01

    The recall of information about Hepatitis B demonstrated by 180 seventh graders was tested with three test types: (1) short-answer; (2) true/false; and (3) multiple-choice. Short answer testing was the most reliable. Suggestions are made for the use of short-answer tests in evaluating student knowledge. (SLD)

  1. The 1999-00 Preliminary North Carolina State Testing Results: Multiple Choice, Grade 3 Pretest, End-of-Grade, High School Comprehensive, and End-of-Course Tests.

    ERIC Educational Resources Information Center

    Bazemore, Mildred; Geary, Monica; Barbour, Ken; Barefoot, Angela

    This report presents preliminary results from four sets of tests that are part of the North Carolina state testing program. The Grade 3 Pretest is a multiple choice reading and mathematics test administered to students at the beginning of third grade. This pretest was administered to more than 102,000 students in the 1999-2000 school year, and…

  2. Post-Graduate Student Performance in "Supervised In-Class" vs. "Unsupervised Online" Multiple Choice Tests: Implications for Cheating and Test Security

    ERIC Educational Resources Information Center

    Ladyshewsky, Richard K.

    2015-01-01

    This research explores differences in multiple choice test (MCT) scores in a cohort of post-graduate students enrolled in a management and leadership course. A total of 250 students completed the MCT in either a supervised in-class paper and pencil test or an unsupervised online test. The only statistically significant difference between the nine…

  3. Englische Rechtschreibtests in Multiple-Choice-Form in Hauptschule und Gymnasium (English Spelling Tests in Multiple Choice Form in the Hauptschule and Gymnasium)

    ERIC Educational Resources Information Center

    Pauels, Wolfgang

    1975-01-01

    Hauptschule (practical secondary school) pupils are more readily confused than Gymnasium (university-preparatory secondary school) pupils when confronted with false answers. Spelling tests should be designed with regard to the type of school. Introducing visual guides helps the Hauptschule pupils to better achievement in productive tests. (Text is…

  4. Comparison between three option, four option and five option multiple choice question tests for quality parameters: A randomized study

    PubMed Central

    Vegada, Bhavisha; Shukla, Apexa; Khilnani, Ajeetkumar; Charan, Jaykaran; Desai, Chetna

    2016-01-01

    Background: Most academic teachers use four or five options per item on multiple choice question (MCQ) tests for formative and summative assessment. The optimal number of options in an MCQ item is a matter of considerable debate among academic teachers in various educational fields, and there is a scarcity of published literature on the optimum number of options per MCQ item in medical education. Objectives: To compare three-option, four-option, and five-option MCQ tests on the quality parameters – reliability, validity, item analysis, distracter analysis, and time analysis. Materials and Methods: Participants were 3rd semester M.B.B.S. students, divided randomly into three groups. Each group was randomly given one test set with three, four, or five options per item. Following the marking of the multiple choice tests, the participants’ option selections were analyzed and the mean marks, mean time, validity, reliability, facility value, discrimination index, point biserial value, and distracter behaviour of the three option formats were compared. Results: Students scored higher (P = 0.000) and took less time (P = 0.009) to complete the three-option test as compared to the four-option and five-option groups. Facility value was higher (P = 0.004) in the three-option group than in the four- and five-option groups. There was no significant difference between the three groups in validity, reliability, or item discrimination. Nonfunctioning distracters were more common in the four- and five-option groups than in the three-option group. Conclusion: Assessment based on three-option MCQs can be preferred over four-option and five-option MCQs. PMID:27721545
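
    Two of the quality parameters compared in this study, the facility value and the count of nonfunctioning distracters, can be computed directly from option-selection counts. A minimal sketch, assuming the common convention that a distracter chosen by fewer than 5% of examinees counts as nonfunctioning (the counts below are invented for illustration):

    ```python
    def facility_value(correct_count: int, total_examinees: int) -> float:
        """Proportion of examinees answering the item correctly."""
        return correct_count / total_examinees

    def nonfunctioning_distracters(option_counts: dict, key: str,
                                   threshold: float = 0.05) -> list:
        """Distracters selected by fewer than `threshold` of examinees."""
        n = sum(option_counts.values())
        return [opt for opt, c in option_counts.items()
                if opt != key and c / n < threshold]

    # Invented counts for one 5-option item answered by 100 students (key = "A").
    counts = {"A": 62, "B": 20, "C": 13, "D": 3, "E": 2}
    print("facility value:", facility_value(counts["A"], sum(counts.values())))  # 0.62
    print("nonfunctioning:", nonfunctioning_distracters(counts, key="A"))        # ['D', 'E']
    ```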

  5. A Multiple-Choice Mushroom: Schools, Colleges Rely More than Ever on Standardized Tests.

    ERIC Educational Resources Information Center

    Hawkins, B. Denise

    1995-01-01

    This discussion of college entrance examinations reviews differences between the Scholastic Assessment Test (SAT) and the American College Test. It then focuses on the SAT, discussing numbers of students taking the tests, changes in test construction to recognize contributions of women and minorities, involvement of African Americans in…

  6. Multiple Choice and True/False Tests: Reliability Measures and Some Implications of Negative Marking

    ERIC Educational Resources Information Center

    Burton, Richard F.

    2004-01-01

    The standard error of measurement usefully provides confidence limits for scores in a given test, but is it possible to quantify the reliability of a test with just a single number that allows comparison of tests of different format? Reliability coefficients do not do this, being dependent on the spread of examinee attainment. Better in this…

  7. Evaluating the Psychometric Characteristics of Generated Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André

    2016-01-01

    Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…

  8. Grade 9 English Language Arts Achievement Test. Part B: Reading (Multiple Choice). Readings Booklet. 1986 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 9 English Language Arts Achievement Test in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains eight short reading selections taken from fiction, nonfiction, and poetry, including the following: "Thieving Raffles" (Eric Nicol); "Flight of the…

  9. Memory-Context Effects of Screen Color in Multiple-Choice and Fill-In Tests

    ERIC Educational Resources Information Center

    Prestera, Gustavo E.; Clariana, Roy; Peck, Andrew

    2005-01-01

    In this experimental study, 44 undergraduates completed five computer-based instructional lessons and either two multiple-choice tests or two fill-in-the-blank tests. Color-coded borders were displayed during the lesson, adjacent to the screen text and illustrations. In the experimental condition, corresponding border colors were shown at posttest…

  10. Criterion Validation of a Written Multiple-Choice Test of Spanish/English Bilingual Skills.

    ERIC Educational Resources Information Center

    Doyle, Teresa F.; Lin, Thung-Rung

    Supervisory performance appraisals may be of limited utility in the validation of bilingual tests because incumbents are often hired to be the only employee in a unit who possesses the skills necessary to do the job. In an effort to provide criterion-related validity for four equivalent forms of a Spanish/English bilingual test for school district…

  11. Validation of a Standardized Multiple-Choice Multicultural Competence Test: Implications for Training, Assessment, and Practice

    ERIC Educational Resources Information Center

    Gillem, Angela R.; Bartoli, Eleonora; Bertsch, Kristin N.; McCarthy, Maureen A.; Constant, Kerra; Marrero-Meisky, Sheila; Robbins, Steven J.; Bellamy, Scarlett

    2016-01-01

    The Multicultural Counseling and Psychotherapy Test (MCPT), a measure of multicultural counseling competence (MCC), was validated in 2 phases. In Phase 1, the authors administered 451 test items derived from multicultural guidelines in counseling and psychology to 32 multicultural experts and 30 nonexperts. In Phase 2, the authors administered the…

  12. Mechanical Waves Conceptual Survey: Its Modification and Conversion to a Standard Multiple-Choice Test

    ERIC Educational Resources Information Center

    Barniol, Pablo; Zavala, Genaro

    2016-01-01

    In this article we present several modifications of the mechanical waves conceptual survey, the most important test to date that has been designed to evaluate university students' understanding of four main topics in mechanical waves: propagation, superposition, reflection, and standing waves. The most significant changes are (i) modification of…

  13. The role of Rasch analysis when conducting science education research utilizing multiple-choice tests

    NASA Astrophysics Data System (ADS)

    Boone, William J.; Scantlebury, Kathryn

    2006-03-01

    Recent international studies note that countries whose students perform well on international science assessments report the need to change science education. Some countries use assessments for diagnostic purposes to assist teachers in addressing their students' needs. However, in the United States, standards-based reform has focused the national discussion on documenting students' attainment of high educational standards. Students' science achievement is one of those standards, and in many states, high-stakes tests determine the resultant achievement measures. Policymakers and administrators use those tests to rank school performance, to prevent students' graduation, and to evaluate teachers. With science test measures used in different ways, statistical confidence in the measures' validity and reliability is essential. Using a science achievement test from one state's systemic reform project as an example, this paper discusses the strengths of the Rasch model as a psychometric tool and analysis technique, referring to person item maps, anchoring, differential item functioning, and person item fit. Furthermore, the paper proposes that science educators should carefully inspect the tools they use to measure and document changes in educational systems.
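
    The Rasch model referred to here is the one-parameter logistic item response model: the probability of a correct answer depends only on the difference between a person's ability and an item's difficulty. A minimal sketch of that item response function (the ability and difficulty values are invented for illustration):

    ```python
    from math import exp

    def rasch_p_correct(ability: float, difficulty: float) -> float:
        """Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
        return exp(ability - difficulty) / (1.0 + exp(ability - difficulty))

    # A person of average ability (theta = 0) on an easy, a matched, and a hard item.
    for b in (-1.0, 0.0, 1.5):
        print(f"item difficulty {b:+.1f}: P(correct) = {rasch_p_correct(0.0, b):.2f}")
    ```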

  14. A Meta-Analysis of Test Format Effects on Reading and Listening Test Performance: Focus on Multiple-Choice and Open-Ended Formats

    ERIC Educational Resources Information Center

    In'nami, Yo; Koizumi, Rie

    2009-01-01

    A meta-analysis was conducted on the effects of multiple-choice and open-ended formats on L1 reading, L2 reading, and L2 listening test performance. Fifty-six data sources located in an extensive search of the literature were the basis for the estimates of the mean effect sizes of test format effects. The results using the mixed effects model of…

  15. Factor structure of the Benton Visual retention tests: dimensionalization of the Benton Visual retention test, Benton Visual retention test - multiple choice, and the Visual Form Discrimination Test.

    PubMed

    Lockwood, Courtney A; Mansoor, Yael; Homer-Smith, Elizabeth; Moses, James A

    2011-01-01

    Six sequential experiments were conducted on archival data of 610 U.S. Veterans seen at the Palo Alto Veterans Affairs Hospital, to understand the dimensionalization of the Benton Visual retention test in both the recall (BVRT) and multiple-choice (BVRT-MC) format as well as the Visual Form Discrimination Test (VFDT). These tests were dimensionalized by the Wechsler Adult Intelligence Scale-Revised (WAIS-R), revealing a four-component model that explains 81.04% of the shared variance: the moderately difficult items (BVRT-MC and VFDT items 13-16) loaded with the WAIS-R Perceptual Organization, the easiest items (VFDT items 1-12, BVRT-MC items 1-12, and BVRT items 1-4) loaded separately with both WAIS-R Verbal Comprehension and Freedom from Distractibility, and the most difficult items (BVRT items 3-10) loaded weakly with WAIS-R Perceptual Organization.

  16. Statistical Modelling of Multiple-Choice and True/False Tests: Ways of Considering, and of Reducing, the Uncertainties Attributable To Guessing.

    ERIC Educational Resources Information Center

    Burton, Richard F.; Miller, David J.

    1999-01-01

    Discusses statistical procedures for assessing test unreliability due to guessing in multiple choice and true/false tests. Proposes two new measures of test unreliability: one concerned with the resolution of defined levels of knowledge and the other with the probability of examinees being incorrectly ranked. Both models are based on the binomial…
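
    The binomial treatment of guessing referred to here can be illustrated directly: for an examinee who guesses blindly on every item, the number of correct answers is binomially distributed, so the chance of reaching any given score follows from the binomial tail. A minimal sketch (test length, number of options, and the score threshold are illustrative assumptions, not figures from the article):

    ```python
    from math import comb

    def prob_at_least(score: int, items: int, options: int) -> float:
        """P(at least `score` correct) when blindly guessing on `items` questions
        with `options` equally likely choices each (binomial model)."""
        p = 1.0 / options
        return sum(comb(items, k) * p**k * (1 - p)**(items - k)
                   for k in range(score, items + 1))

    # Chance of scoring 6 or more out of 10 by pure guessing:
    print(f"true/false (2 options): {prob_at_least(6, 10, 2):.3f}")   # ~0.377
    print(f"4-option MCQ:           {prob_at_least(6, 10, 4):.4f}")   # ~0.0197
    ```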

  17. Examining Two Strategies to Link Mixed-Format Tests Using Multiple-Choice Anchors. Research Report. ETS RR-10-18

    ERIC Educational Resources Information Center

    Walker, Michael E.; Kim, Sooyeon

    2010-01-01

    This study examined the use of an all multiple-choice (MC) anchor for linking mixed format tests containing both MC and constructed-response (CR) items, in a nonequivalent groups design. An MC-only anchor could effectively link two such test forms if either (a) the MC and CR portions of the test measured the same construct, so that the MC anchor…

  18. Examining students' understanding of electrical circuits through multiple-choice testing and interviews

    NASA Astrophysics Data System (ADS)

    Engelhardt, Paula Vetter

    Research has shown that both high school and university students have misconceptions about direct current resistive electric circuits. At present, there are no standard diagnostic examinations in electric circuits. Such an instrument would be useful in determining what conceptual problems students have either before or after instruction. The information provided by the exam can be used by classroom instructors to evaluate their instructional methods and the progress and conceptual problems of their students. It can be used to evaluate curricular packages and/or other supplemental materials for their effectiveness in overcoming students' conceptual difficulties. Two versions of a diagnostic instrument known as Determining and Interpreting Resistive Electric circuits Concepts Tests (DIRECT) were developed, each consisting of 29 questions. DIRECT was administered to groups of high school and university students in the United States, Canada and Germany. The students had completed their study of electrostatics and direct current electric circuits prior to taking the exam. Individual interviews were conducted after the administration of version 1.0 to determine how students were interpreting the questions and to uncover their reasoning behind their selections. The analyses indicate that students, especially females, tend to hold multiple misconceptions, even after instruction. The idea that the battery is a constant source of current was used most often in answering the questions. Although students tend to use different misconceptions for each question presented, they do use misconceptions associated with the global objective of the question. Students' definitions of terms used on the exam and their misconceptions were examined. Students tended to confuse terms, especially current. They assigned the properties of current to voltage and/or resistance. One of the major findings from the study was that students were able to translate easily from a "realistic" representation

  19. Does Linking Mixed-Format Tests Using a Multiple-Choice Anchor Produce Comparable Results for Male and Female Subgroups? Research Report. ETS RR-11-44

    ERIC Educational Resources Information Center

    Kim, Sooyeon; Walker, Michael E.

    2011-01-01

    This study examines the use of subpopulation invariance indices to evaluate the appropriateness of using a multiple-choice (MC) item anchor in mixed-format tests, which include both MC and constructed-response (CR) items. Linking functions were derived in the nonequivalent groups with anchor test (NEAT) design using an MC-only anchor set for 4…

  20. The Empirical Power and Type I Error Rates of the GBT and [omega] Indices in Detecting Answer Copying on Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Zopluoglu, Cengiz; Davenport, Ernest C., Jr.

    2012-01-01

    The generalized binomial test (GBT) and [omega] indices are the most recent methods suggested in the literature to detect answer copying behavior on multiple-choice tests. The [omega] index is one of the most studied indices, but there has not yet been a systematic simulation study for the GBT index. In addition, the effect of the ability levels…

  1. The Validity of Pre-Calculus Multiple Choice and Performance-Based Testing as a Predictor of Undergraduate Mathematics and Chemistry Achievement.

    ERIC Educational Resources Information Center

    Fisher, Gwen Laura

    There has been concern over the validity of the Algebra Diagnostic Test (ADT) used to determine the actual level of student preparation for the first quarter of calculus as taught at the University of California, Santa Barbara. It has been hypothesized that performance-based questions, along with the more traditional multiple choice questions,…

  2. A Clarification of the Effects of Rapid Guessing on Coefficient [Alpha]: A Note on Attali's "Reliability of Speeded Number-Right Multiple-Choice Tests"

    ERIC Educational Resources Information Center

    Wise, Steven L.; DeMars, Christine E.

    2009-01-01

    Attali (2005) recently demonstrated that Cronbach's coefficient [alpha] estimate of reliability for number-right multiple-choice tests will tend to be deflated by speededness, rather than inflated as is commonly believed and taught. Although the methods, findings, and conclusions of Attali (2005) are correct, his article may inadvertently invite a…

  3. The Impact of Item Position in Multiple-Choice Test on Student Performance at the Basic Education Certificate Examination (BECE) Level

    ERIC Educational Resources Information Center

    Ollennu, Sam Nii Nmai; Etsey, Y. K. A.

    2015-01-01

    The study investigated the impact of item position in multiple-choice test on student performance at the Basic Education Certificate Examination (BECE) level in Ghana. The sample consisted of 810 Junior Secondary School (JSS) Form 3 students selected from 12 different schools. A quasi-experimental design was used. The instrument for the project…

  4. Multiple-Choice Testing Using Immediate Feedback--Assessment Technique (IF AT®) Forms: Second-Chance Guessing vs. Second-Chance Learning?

    ERIC Educational Resources Information Center

    Merrel, Jeremy D.; Cirillo, Pier F.; Schwartz, Pauline M.; Webb, Jeffrey A.

    2015-01-01

    Multiple choice testing is a common but often ineffective method for evaluating learning. A newer approach, however, using Immediate Feedback Assessment Technique (IF AT®, Epstein Educational Enterprise, Inc.) forms, offers several advantages. In particular, a student learns immediately if his or her answer is correct and, in the case of an…

  5. Dynamic Testing of Analogical Reasoning in 5- to 6-Year-Olds: Multiple-Choice versus Constructed-Response Training Items

    ERIC Educational Resources Information Center

    Stevenson, Claire E.; Heiser, Willem J.; Resing, Wilma C. M.

    2016-01-01

    Multiple-choice (MC) analogy items are often used in cognitive assessment. However, in dynamic testing, where the aim is to provide insight into potential for learning and the learning process, constructed-response (CR) items may be of benefit. This study investigated whether training with CR or MC items leads to differences in the strategy…

  6. C-Test vs. Multiple-Choice Cloze Test as Tests of Reading Comprehension in Iranian EFL Context: Learners' Perspective

    ERIC Educational Resources Information Center

    Ajideh, Parviz; Mozaffarzadeh, Sorayya

    2012-01-01

    Cloze tests have been widely used for measuring reading comprehension since their introduction to the testing world by Taylor in 1953. In 1982, however, Klein-Braley criticized the cloze procedure, mostly for its deletion and scoring problems, and introduced a newly developed testing procedure, the C-test, which was an evolved form of cloze tests without…

  7. A Systematic Assessment of "None of the Above" on Multiple Choice Tests in a First Year Psychology Classroom

    ERIC Educational Resources Information Center

    Pachai, Matthew V.; DiBattista, David; Kim, Joseph A.

    2015-01-01

    Multiple choice writing guidelines are decidedly split on the use of "none of the above" (NOTA), with some authors discouraging and others advocating its use. Moreover, empirical studies of NOTA have produced mixed results. Generally, these studies have utilized NOTA as either the correct response or a distractor and assessed its effect…

  8. Making the Most of Multiple Choice

    ERIC Educational Resources Information Center

    Brookhart, Susan M.

    2015-01-01

    Multiple-choice questions draw criticism because many people perceive they test only recall or atomistic, surface-level objectives and do not require students to think. Although this can be the case, it does not have to be that way. Susan M. Brookhart suggests that multiple-choice questions are a useful part of any teacher's questioning repertoire…

  9. Effect of differing PowerPoint slide design on multiple-choice test scores for assessment of knowledge and retention in a theriogenology course.

    PubMed

    Root Kustritz, Margaret V

    2014-01-01

    Third-year veterinary students in a required theriogenology diagnostics course were allowed to self-select attendance at a lecture in either the evening or the next morning. One group was presented with PowerPoint slides in a traditional format (T group), and the other group was presented with PowerPoint slides in the assertion-evidence format (A-E group), which uses a single sentence and a highly relevant graphic on each slide to ensure attention is drawn to the most important points in the presentation. Students took a multiple-choice pre-test, attended lecture, and then completed a take-home assignment. All students then completed an online multiple-choice post-test and, one month later, a different online multiple-choice test to evaluate retention. Groups did not differ on pre-test, assignment, or post-test scores, and both groups showed significant gains from pre-test to post-test and from pre-test to retention test. However, the T group showed significant decline from post-test to retention test, while the A-E group did not. Short-term differences between slide designs were most likely obscured by the required coursework completed immediately after the lecture, but retention of material was superior with the assertion-evidence slide design.

  10. Science Library of Test Items. Volume Nineteen. A Collection of Multiple Choice Test Items Relating Mainly to Geology.

    ERIC Educational Resources Information Center

    New South Wales Dept. of Education, Sydney (Australia).

    As one in a series of test item collections developed by the Assessment and Evaluation Unit of the Directorate of Studies, items are made available to teachers for the construction of unit tests or term examinations or as a basis for class discussion. Each collection was reviewed for content validity and reliability. The test items meet syllabus…

  11. Science Library of Test Items. Volume Eighteen. A Collection of Multiple Choice Test Items Relating Mainly to Chemistry.

    ERIC Educational Resources Information Center

    New South Wales Dept. of Education, Sydney (Australia).

    As one in a series of test item collections developed by the Assessment and Evaluation Unit of the Directorate of Studies, items are made available to teachers for the construction of unit tests or term examinations or as a basis for class discussion. Each collection was reviewed for content validity and reliability. The test items meet syllabus…

  12. Science Library of Test Items. Volume Twenty. A Collection of Multiple Choice Test Items Relating Mainly to Physics, 1.

    ERIC Educational Resources Information Center

    New South Wales Dept. of Education, Sydney (Australia).

    As one in a series of test item collections developed by the Assessment and Evaluation Unit of the Directorate of Studies, items are made available to teachers for the construction of unit tests or term examinations or as a basis for class discussion. Each collection was reviewed for content validity and reliability. The test items meet syllabus…

  13. Multiple-Choice Exams and Guessing: Results from a One-Year Study of General Chemistry Tests Designed to Discourage Guessing

    ERIC Educational Resources Information Center

    Campbell, Mark L.

    2015-01-01

    Multiple-choice exams, while widely used, are necessarily imprecise due to the contribution of guessing to the final student score. This past year at the United States Naval Academy the construction and grading scheme for the department-wide general chemistry multiple-choice exams were revised with the goal of decreasing the contribution of…

  14. Typeface and Multiple Choice Option Format.

    ERIC Educational Resources Information Center

    Follman, John; And Others

    The effects of typeface and item options arrangement on comprehension as indicated by multiple-choice test performance were investigated. Copies of the Ability to Interpret Reading Materials in the Social Studies, SRA Iowa Tests of Educational Development, Form X-4 were prepared in four typefaces: elite, pica, proportional, and script. For each…

  15. Nonrestricted multiple-choice examination items.

    PubMed

    Kolstad, R; Goaz, P; Kolstad, R

    1982-08-01

    Multiple-choice items are frequently used in objective examinations. The format chosen should conform to the nature of the instruction. Knowledge about cumulative information, such as lists of attributes, can be tested efficiently by means of multiple-choice items that include a variable number of correct answers. In contrast to conventional, single-answer questions, nonrestricted multiple-choice items are capable of including more facts and fewer incorrect responses. In addition, the nonrestricted format is not burdened with the repetitious pattern of one correct answer coupled with several incorrect responses, a cue that may promote successful guessing. Item analyses can be performed on examinations that include both conventional and nonrestricted items. The reliability of one examination constructed totally with nonrestricted items was analyzed by means of the Kuder-Richardson Formula No. 20. The value 0.72 proved this examination to be both discriminating and consistent.
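
    The Kuder-Richardson Formula 20 (KR-20) reliability reported above can be computed directly from a dichotomously scored response matrix. Below is a minimal Python sketch; the response data are invented purely for illustration and are not taken from the study.

      # Minimal sketch: Kuder-Richardson Formula 20 (KR-20) reliability for
      # dichotomously scored items. Rows are examinees, columns are items
      # (1 = correct, 0 = incorrect); the matrix is invented for illustration.

      def kr20(scores):
          n_items = len(scores[0])
          n_examinees = len(scores)
          # proportion correct for each item
          p = [sum(row[j] for row in scores) / n_examinees for j in range(n_items)]
          pq_sum = sum(pj * (1 - pj) for pj in p)
          # population variance of the total scores
          totals = [sum(row) for row in scores]
          mean_total = sum(totals) / n_examinees
          var_total = sum((t - mean_total) ** 2 for t in totals) / n_examinees
          return (n_items / (n_items - 1)) * (1 - pq_sum / var_total)

      responses = [
          [1, 1, 0, 1, 1],
          [1, 0, 0, 1, 0],
          [1, 1, 1, 1, 1],
          [0, 0, 0, 1, 0],
          [1, 1, 0, 0, 1],
      ]
      print(round(kr20(responses), 2))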

  16. Identifying Students' Mathematical Skills from a Multiple-Choice Diagnostic Test Using an Iterative Technique to Minimise False Positives

    ERIC Educational Resources Information Center

    Manning, S.; Dix, A.

    2008-01-01

    There is anecdotal evidence that a significant number of students studying computing related courses at degree level have difficulty with sub-GCE mathematics. Testing of students' skills is often performed using diagnostic tests and a number of computer-based diagnostic tests exist, which work, essentially, by testing one specific diagnostic skill…

  17. Investigating Administered Essay and Multiple-Choice Tests in the English Department of Islamic Azad University, Hamedan Branch

    ERIC Educational Resources Information Center

    Karimi, Lotfollah; Mehrdad, Ali Gholami

    2012-01-01

    This study has attempted to investigate the written tests administered in the language department of Islamic Azad University of Hamedan, Iran, from the points of view of validity, practicality and reliability. To this end, two steps were taken. First, on examining 112 tests, we found that the face validity of 50 tests had been threatened and 9 tests lacked…

  18. This Is Only a Test: A Machine-Graded Improvement to the Multiple-Choice and True-False Examination

    ERIC Educational Resources Information Center

    McAllister, Daniel; Guidice, Rebecca M.

    2012-01-01

    The primary goal of teaching is to successfully facilitate learning. Testing can help accomplish this goal in two ways. First, testing can provide a powerful motivation for students to prepare when they perceive that the effort involved leads to valued outcomes. Second, testing can provide instructors with valuable feedback on whether their…

  19. Item Order, Response Format, and Examinee Sex and Handedness and Performance on a Multiple-Choice Test.

    ERIC Educational Resources Information Center

    Kleinke, David J.

    Four forms of a 36-item adaptation of the Stanford Achievement Test were administered to 484 fourth graders. External factors potentially influencing test performance were examined, namely: (1) item order (easy-to-difficult vs. uniform); (2) response location (left column vs. right column); (3) handedness which may interact with response location;…

  20. catcher: A Software Program to Detect Answer Copying in Multiple-Choice Tests Based on Nominal Response Model

    ERIC Educational Resources Information Center

    Kalender, Ilker

    2012-01-01

    catcher is a software program designed to compute the [omega] index, a common statistical index for the identification of collusions (cheating) among examinees taking an educational or psychological test. It requires (a) responses and (b) ability estimations of individuals, and (c) item parameters to make computations and outputs the results of…

  1. Exploring Clinical Reasoning Strategies and Test-Taking Behaviors During Clinical Vignette Style Multiple-Choice Examinations: A Mixed Methods Study

    PubMed Central

    Heist, Brian Sanjay; Gonzalo, Jed David; Durning, Steven; Torre, Dario; Elnicki, David Michael

    2014-01-01

    Background Clinical vignette multiple-choice questions (MCQs) are widely used in medical education, but clinical reasoning (CR) strategies employed when approaching these questions have not been well described. Objectives The aims of the study were (1) to identify CR strategies and test-taking (TT) behaviors of physician trainees while solving clinical vignette MCQs; and (2) to examine the relationships between CR strategies and behaviors, and performance on a high-stakes clinical vignette MCQ examination. Methods Thirteen postgraduate year–1 level trainees completed 6 clinical vignette MCQs using a think-aloud protocol. Thematic analysis employing elements of grounded theory was performed on data transcriptions to identify CR strategies and TT behaviors. Participants' CR strategies and TT behaviors were then compared with their US Medical Licensing Examination Step 2 Clinical Knowledge scores. Results Twelve CR strategies and TT behaviors were identified. Individuals with low performance on Step 2 Clinical Knowledge demonstrated increased premature closure and increased faulty knowledge, and showed comparatively less ruling out of alternatives or admission of knowledge deficits. High performers on Step 2 Clinical Knowledge demonstrated increased ruling out of alternatives and admission of knowledge deficits, and less premature closure, faulty knowledge, or closure prior to reading the alternatives. Conclusions Different patterns of CR strategies and TT behaviors may be used by high and low performers during high-stakes clinical vignette MCQ examinations. PMID:26140123

  2. Improving Multiple-Choice Questions

    ERIC Educational Resources Information Center

    Torres, Cristina; Lopes, Ana Paula; Babo, Lurdes; Azevedo, Jose

    2011-01-01

    An MC (multiple-choice) question can be defined as a question in which students are asked to select one alternative from a given set of alternatives in response to a question stem. The objective of this paper is to analyse whether MC questions may be considered an interesting alternative for assessing knowledge, particularly in the mathematics area,…

  3. Social attribution test--multiple choice (SAT-MC) in schizophrenia: comparison with community sample and relationship to neurocognitive, social cognitive and symptom measures.

    PubMed

    Bell, Morris D; Fiszdon, Joanna M; Greig, Tamasine C; Wexler, Bruce E

    2010-09-01

    This is the first report on the use of the Social Attribution Task - Multiple Choice (SAT-MC) to assess social cognitive impairments in schizophrenia. The SAT-MC was originally developed for autism research, and consists of a 64-second animation showing geometric figures enacting a social drama, with 19 multiple choice questions about the interactions. Responses from 85 community-dwelling participants and 66 participants with SCID confirmed schizophrenia or schizoaffective disorders (Scz) revealed highly significant group differences. When the two samples were combined, SAT-MC scores were significantly correlated with other social cognitive measures, including measures of affect recognition, theory of mind, self-report of egocentricity and the Social Cognition Index from the MATRICS battery. Using a cut-off score, 53% of Scz were significantly impaired on SAT-MC compared with 9% of the community sample. Most Scz participants with impairment on SAT-MC also had impairment on affect recognition. Significant correlations were also found with neurocognitive measures but with less dependence on verbal processes than other social cognitive measures. Logistic regression using SAT-MC scores correctly classified 75% of both samples. Results suggest that this measure may have promise, but alternative versions will be needed before it can be used in pre-post or longitudinal designs.

  4. A Close Look at the Relationship between Multiple Choice Vocabulary Test and Integrative Cloze Test of Lexical Words in Iranian Context

    ERIC Educational Resources Information Center

    Ajideh, Parviz

    2009-01-01

    In spite of the various definitions provided for it, language proficiency has always been a difficult concept to define and realize. However, the commonality of all the definitions of this elusive concept is that language tests should seek to test learners' ability to use real-life language. The best type of test to show such ability is…

  5. Science Library of Test Items. Volume Twenty-One. A Collection of Multiple Choice Test Items Relating Mainly to Physics, 2.

    ERIC Educational Resources Information Center

    New South Wales Dept. of Education, Sydney (Australia).

    As one in a series of test item collections developed by the Assessment and Evaluation Unit of the Directorate of Studies, items are made available to teachers for the construction of unit tests or term examinations or as a basis for class discussion. Each collection was reviewed for content validity and reliability. The test items meet syllabus…

  6. Approaches to Data Analysis of Multiple-Choice Questions

    ERIC Educational Resources Information Center

    Ding, Lin; Beichner, Robert

    2009-01-01

    This paper introduces five commonly used approaches to analyzing multiple-choice test data. They are classical test theory, factor analysis, cluster analysis, item response theory, and model analysis. Brief descriptions of the goals and algorithms of these approaches are provided, together with examples illustrating their applications in physics…

  7. The North Carolina State Testing Results. Preliminary State - Level Data Only. Multiple-Choice. Grade 3 Pretest. End-of-Grade (Grades 3-8); and End-of-Course Tests. Reporting on the State and 117 Public School Systems and 92 Charter Schools. "The Green Book."

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh. Div. of Accountability.

    This book contains data reported to the North Carolina Department of Public Instruction before August 12, 2002 about state testing results. It contains preliminary 2001-2002 state testing results for: (1) grade 3 pretest, in reading and mathematics; (2) end-of-grade tests at grades 3 through 8, multiple choice tests; (3) alternate assessment…

  8. The North Carolina State Testing Results, 2000-01. Preliminary State-Level Data Only. Multiple-Choice Grade 3 Pretest, End-of-Grade, High School Comprehensive and End-of-Course Tests. Reporting on the State and 117 Public School Systems and 87 Charter Schools. "The Green Book."

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh. Div. of Accountability/Testing.

    This document contains preliminary state-level results for the 2000-2001 North Carolina state testing program. No conclusions about achievement are drawn in this report, although percentages achieving at given Achievement Levels are given for the end-of-grade tests. The Grade 3 Pretest is a multiple-choice reading and mathematics test administered…

  9. Scores Based on Dangerous Responses to Multiple-Choice Items.

    ERIC Educational Resources Information Center

    Grosse, Martin E.

    1986-01-01

    Scores based on the number of correct answers were compared with scores based on dangerous responses to items in the same multiple-choice test developed by the American Board of Orthopaedic Surgery. Results showed construct validity for both sets of scores. However, the correlation coefficient between the two indicated that they were largely redundant. (Author/JAZ)

  10. Further Support for Changing Multiple-Choice Answers.

    ERIC Educational Resources Information Center

    Fabrey, Lawrence J.; Case, Susan M.

    1985-01-01

    The effect on test scores of changing answers to multiple-choice questions was studied and compared to earlier research. The current setting was a nationally administered, in-training, specialty examination for medical residents in obstetrics and gynecology. Both low and high scorers improved their scores when they changed answers. (SW)

  11. A framework for improving the quality of multiple-choice assessments.

    PubMed

    Tarrant, Marie; Ware, James

    2012-01-01

    Multiple-choice questions are frequently used in high-stakes nursing assessments. Many nurse educators, however, lack the necessary knowledge and training to develop these tests. The authors discuss test development guidelines to help nurse educators produce valid and reliable multiple-choice assessments.

  12. A Diagnostic Study of Pre-Service Teachers' Competency in Multiple-Choice Item Development

    ERIC Educational Resources Information Center

    Asim, Alice E.; Ekuri, Emmanuel E.; Eni, Eni I.

    2013-01-01

    Large class size is an issue in testing at all levels of education. As a remedy, multiple-choice test formats have become very popular. This case study was designed to diagnose pre-service teachers' competency in constructing questions (IQT), direct questions (DQT), and best answer (BAT) varieties of multiple-choice items. Subjects were 88…

  13. Development and Application of a Two-Tier Multiple-Choice Diagnostic Test for High School Students' Understanding of Cell Division and Reproduction

    ERIC Educational Resources Information Center

    Sesli, Ertugrul; Kara, Yilmaz

    2012-01-01

    This study involved the development and application of a two-tier diagnostic test for measuring students' understanding of cell division and reproduction. The instrument development procedure had three general steps: defining the content boundaries of the test, collecting information on students' misconceptions, and instrument development.…

  14. The Effects of Item Preview on Video-Based Multiple-Choice Listening Assessments

    ERIC Educational Resources Information Center

    Koyama, Dennis; Sun, Angela; Ockey, Gary J.

    2016-01-01

    Multiple-choice formats remain a popular design for assessing listening comprehension, yet no consensus has been reached on how multiple-choice formats should be employed. Some researchers argue that test takers must be provided with a preview of the items prior to the input (Buck, 1995; Sherman, 1997); others argue that a preview may decrease the…

  15. Sample Selection Effect on AP Multiple-Choice Score to Composite Score Scaling.

    ERIC Educational Resources Information Center

    Yang, Wen-Ling; Dorans, Neil J.; Tateneni, Krishna

    Scores on the multiple-choice sections of alternate forms are equated through anchor-test equating for the Advanced Placement Program (AP) examinations. There is no linkage of free-response sections since different free-response items are given yearly. However, the free-response and multiple-choice sections are combined to produce a composite.…

  16. Mind the Red Herrings--Deliberate Distraction of Pupil's Strategies Solving Multiple Choice Questions in Chemistry.

    ERIC Educational Resources Information Center

    Schmidt, Hans-Jurgen

    This study assumes that multiple choice test items generally provide the testee with several solutions, one of which is correct and the others of which are wrong. If pupils are unable to answer a question, one would expect that the wrong choices have equal chances of being selected. In many multiple choice items on stoichiometric calculation which…

  17. Measuring Strategy Use in Context with Multiple-Choice Items

    ERIC Educational Resources Information Center

    Cromley, Jennifer; Azevedo, Roger

    2011-01-01

    A number of authors have presented data that challenge the validity of self-report of strategy use or choice of strategy. We created a multiple-choice measure of students' strategy use based on the work of Kozminsky, E., and Kozminsky, L. (2001), and tested it with three samples as part of a series of studies testing the fit of the DIME model of…

  18. Cheating Probabilities on Multiple Choice Tests

    NASA Astrophysics Data System (ADS)

    Rizzuto, Gaspard T.; Walters, Fred

    1997-10-01

    This paper is strictly based on mathematical statistics and as such does not depend on prior performance and assumes the probability of each choice to be identical. In a real life situation, the probability of two students having identical responses becomes larger the better the students are. However, the mathematical model is developed for all responses, both correct and incorrect, and provides a baseline for evaluation. David Harpp and coworkers (2, 3) at McGill University have evaluated ratios of exact errors in common (EEIC) to errors in common (EIC) and differences (D). In pairings where the ratio EEIC/EIC was greater than 0.75, the pair had unusually high odds against their answer pattern being random. EEIC/D ratios at values greater than 1.0 indicated that the students in a pair were seated adjacent to one another and had copied from one another. The original papers should be examined for details.
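
    The pairwise indices described above can be tallied directly from two answer strings. The following is a minimal Python sketch; the answer key and student responses are hypothetical, and the 0.75 and 1.0 thresholds quoted above would then be applied to the resulting ratios.

      # Minimal sketch of the Harpp-style pairwise indices mentioned above:
      # EEIC = exact errors in common (same wrong option chosen),
      # EIC  = errors in common (both wrong, regardless of option),
      # D    = number of questions answered differently.
      # The answer key and student responses are hypothetical.

      def pair_indices(key, a, b):
          eeic = eic = d = 0
          for k, x, y in zip(key, a, b):
              if x != y:
                  d += 1
              if x != k and y != k:
                  eic += 1
                  if x == y:
                      eeic += 1
          return eeic, eic, d

      key      = "ABCDABCDABCD"
      student1 = "ABCDABADABCA"
      student2 = "ABCDABADABCB"   # shares one exact wrong answer with student1

      eeic, eic, d = pair_indices(key, student1, student2)
      print("EEIC/EIC =", eeic / eic if eic else float("nan"))
      print("EEIC/D   =", eeic / d if d else float("inf"))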

  19. Do they know too little? An inter-institutional study on the anatomical knowledge of upper-year medical students based on multiple choice questions of a progress test.

    PubMed

    Brunk, Irene; Schauber, Stefan; Georg, Waltraud

    2017-01-01

    The depth of medical students' knowledge of human anatomy is often controversially discussed. In particular, members of surgical disciplines raise concerns regarding deficits in the factual anatomical and topographical knowledge of upper-year students. The question often raised is whether or not medical students have sufficient anatomical and topographical knowledge when they graduate from medical school. Indeed, this question is highly relevant for curricular planners. Therefore, we have addressed it by evaluating the performance of students in the 5th and 6th years of their studies on anatomical multiple choice questions from the Berlin Progress Test Medicine performed at 10 German university medical schools. Results were compared to a reference based on a standard-setting procedure (modified Angoff procedure). The reference was established independently by 5 panels of anatomists at different universities across Germany. As the ratings were independent of the university affiliation, teaching experience, or training of the anatomists, an overall cut-off score could be calculated, which corresponded to 60.4% correct answers for the question set used in this study. In the progress test, on average only 29.9% of the students' answers were correct, reflecting that the performance was significantly below the expected standard. On the basis of the test results it remained unclear whether acquisition or retention of anatomical information was insufficient. Further evaluation by item characteristics revealed that the students had major difficulty in applying their theoretical knowledge to practical problems in the context of a clinical setting. Thus, our results reveal deficits in the anatomical knowledge of medical students in their final years. Therefore, medical curricula should not only focus on enhancing the acquisition and retention of core anatomical knowledge but also aim at improving students' skills in applying this knowledge in a clinical setting.

  20. A Comparison of Student Performances in Answering Essay-Type and Multiple-Choice Questions

    ERIC Educational Resources Information Center

    McCloskey, D. I.; Holland, R. A. B.

    1976-01-01

    Three groups of students were tested on the same material in three different forms of examination. They performed better in multiple-choice and in cued essay questions than in uncued essay questions. (Author/LBH)

  1. Using a Classroom Response System to Improve Multiple-Choice Performance in AP® Physics

    NASA Astrophysics Data System (ADS)

    Bertrand, Peggy

    2009-04-01

    Participation in rigorous high school courses such as Advanced Placement (AP®) Physics increases the likelihood of college success, especially for students who are traditionally underserved. Tackling difficult multiple-choice exams should be part of any AP program because well-constructed multiple-choice questions, such as those on AP exams and on the Force Concept Inventory,2 are particularly good at rooting out common and persisting student misconceptions. Additionally, there are barriers to multiple-choice performance that have little to do with content mastery. For example, a student might fail to read the question thoroughly, forget to apply a reasonableness test to the answer, or simply work too slowly.

  2. Genetic Algorithms for Multiple-Choice Problems

    NASA Astrophysics Data System (ADS)

    Aickelin, Uwe

    2010-04-01

    This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.

  3. Free Response vs. Multiple Choice CUE at Oregon State University

    NASA Astrophysics Data System (ADS)

    Zwolak, Justyna; Manogue, Corinne

    2015-04-01

    Standardized assessment tests that allow researchers to compare the performance of students under various curricula are highly desirable. There are several research-based conceptual tests that serve as instruments to assess and identify students' difficulties in lower-division courses. At the upper-division level, however, assessing students' difficulties is a more challenging task. Although several research groups are currently working on such tests, their reliability and validity are still under investigation. We analyze the results of the Colorado Upper-Division Electrostatics (CUE) diagnostic from Oregon State University and compare it with data from University of Colorado. In particular, we compare students' performance on the Free Response and the Multiple Choice versions of the CUE. Our work complements and extends the previous findings from the University of Colorado by highlighting important differences in student learning that may be related to the curriculum, illuminating difficulties with the rubric for certain problems.

  4. Evaluating Multiple-Choice Exams in Large Introductory Physics Courses

    ERIC Educational Resources Information Center

    Scott, Michael; Stelzer, Tim; Gladding, Gary

    2006-01-01

    The reliability and validity of professionally written multiple-choice exams have been extensively studied for exams such as the SAT, graduate record examination, and the force concept inventory. Much of the success of these multiple-choice exams is attributed to the careful construction of each question, as well as each response. In this study,…

  5. Nested Logit Models for Multiple-Choice Item Response Data

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Bolt, Daniel M.

    2010-01-01

    Nested logit item response models for multiple-choice data are presented. Relative to previous models, the new models are suggested to provide a better approximation to multiple-choice items where the application of a solution strategy precedes consideration of response options. In practice, the models also accommodate collapsibility across all…

  6. Benford's Law: textbook exercises and multiple-choice testbanks.

    PubMed

    Slepkov, Aaron D; Ironside, Kevin B; DiBattista, David

    2015-01-01

    Benford's Law describes the finding that the distribution of leading (or leftmost) digits of innumerable datasets follows a well-defined logarithmic trend, rather than an intuitive uniformity. In practice this means that the most common leading digit is 1, with an expected frequency of 30.1%, and the least common is 9, with an expected frequency of 4.6%. Currently, the most common application of Benford's Law is in detecting number invention and tampering, such as that found in accounting, tax, and voter fraud. We demonstrate that answers to end-of-chapter exercises in physics and chemistry textbooks conform to Benford's Law. Subsequently, we investigate whether this fact can be used to gain advantage over random guessing in multiple-choice tests, and find that while testbank answers in introductory physics closely conform to Benford's Law, the testbank is nonetheless secure against such a Benford's attack for banal reasons.
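
    Under Benford's Law the expected frequency of leading digit d is log10(1 + 1/d), so the conformity of a set of numeric answers can be screened with a simple chi-square statistic. A minimal Python sketch follows; the list of answers is invented for illustration and is not the testbank studied in the article.

      import math

      # Minimal sketch: compare the leading-digit distribution of a set of
      # numeric answers against Benford's Law, P(d) = log10(1 + 1/d).
      # The "answers" list is invented for illustration.

      def leading_digit(x):
          s = f"{abs(x):e}"          # scientific notation, e.g. '3.270000e+02'
          return int(s[0])

      answers = [3.2e4, 0.0017, 9.81, 1.6e-19, 2.99e8, 6.02e23, 1.38e-23,
                 4.19, 101.3, 273.15, 8.314, 1.01, 96485.0, 5.67e-8]

      counts = {d: 0 for d in range(1, 10)}
      for x in answers:
          counts[leading_digit(x)] += 1

      n = len(answers)
      chi_sq = 0.0
      for d in range(1, 10):
          expected = n * math.log10(1 + 1 / d)
          chi_sq += (counts[d] - expected) ** 2 / expected
      print("observed counts:", counts)
      print("chi-square vs. Benford:", round(chi_sq, 2))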

  7. Multiple-Choice Exams: An Obstacle for Higher-Level Thinking in Introductory Science Classes

    ERIC Educational Resources Information Center

    Stanger-Hall, Kathrin F.

    2012-01-01

    Learning science requires higher-level (critical) thinking skills that need to be practiced in science classes. This study tested the effect of exam format on critical-thinking skills. Multiple-choice (MC) testing is common in introductory science courses, and students in these classes tend to associate memorization with MC questions and may not…

  8. How Assessing Reading Comprehension with Multiple-Choice Questions Shapes the Construct: A Cognitive Processing Perspective

    ERIC Educational Resources Information Center

    Rupp, Andre A.; Ferne, Tracy; Choi, Hyeran

    2006-01-01

    This article provides renewed converging empirical evidence for the hypothesis that asking test-takers to respond to text passages with multiple-choice questions induces response processes that are strikingly different from those that respondents would draw on when reading in non-testing contexts. Moreover, the article shows that the construct of…

  9. Multiple Choice Questions Can Be Designed or Revised to Challenge Learners' Critical Thinking

    ERIC Educational Resources Information Center

    Tractenberg, Rochelle E.; Gushta, Matthew M.; Mulroney, Susan E.; Weissinger, Peggy A.

    2013-01-01

    Multiple choice (MC) questions from a graduate physiology course were evaluated by cognitive-psychology (but not physiology) experts, and analyzed statistically, in order to test the independence of content expertise and cognitive complexity ratings of MC items. Integration of higher order thinking into MC exams is important, but widely known to…

  10. The Effect of Images on Item Statistics in Multiple Choice Anatomy Examinations

    ERIC Educational Resources Information Center

    Notebaert, Andrew J.

    2017-01-01

    Although multiple choice examinations are often used to test anatomical knowledge, these often forgo the use of images in favor of text-based questions and answers. Because anatomy is reliant on visual resources, examinations using images should be used when appropriate. This study was a retrospective analysis of examination items that were text…

  11. Pick-N Multiple Choice-Exams: A Comparison of Scoring Algorithms

    ERIC Educational Resources Information Center

    Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R.

    2011-01-01

    To compare different scoring algorithms for Pick-N multiple correct answer multiple-choice (MC) exams regarding test reliability, student performance, total item discrimination and item difficulty. Data from six 3rd year medical students' end of term exams in internal medicine from 2005 to 2008 at Munich University were analysed (1,255 students,…

  12. Multiple Choice Converted to True-False: Comparative Reliabilities and Validities.

    ERIC Educational Resources Information Center

    Green, Kathy

    Forty three-option multiple choice (MC) statements on a midterm examination were converted to 120 true-false (TF) statements, identical in content. Test forms (MC and TF) were randomly administered to 50 undergraduates, to investigate the validity and internal consistency reliability of the two forms. A Kuder-Richardson formula 20 reliability was…

  13. Comparative Reliabilities of the Multiple Choice and True-False Formats.

    ERIC Educational Resources Information Center

    Oosterhof, Albert C.; Glasnapp, Douglas R.

    The present study was concerned with several currently unanswered questions, two of which are: what is an empirically determined ratio of multiple choice to equivalent true-false items which can be answered in a given amount of time?; and for achievement test items administered within a classroom situation, which of the two formats under…

  14. Design and analysis of multiple choice feeding preference data.

    PubMed

    Prince, Jeffrey S; LeBlanc, W G; Maciá, S

    2004-01-01

    Traditional analyses of feeding experiments that test consumer preference for an array of foods suffer from several defects. We have modified the experimental design to incorporate into a multivariate analysis the variance due to autogenic change in control replicates. Our design allows the multiple foods to be physically paired with their control counterparts. This physical proximity of the multiple food choices in control/experimental pairs ensures that the variance attributable to external environmental factors jointly affects all combinations within each replicate. Our variance term, therefore, is not a contrived estimate as is the case for the random pairing strategy proposed by previous studies. The statistical analysis then proceeds using standard multivariate statistical tests. We conducted a multiple choice feeding experiment using our experimental design and utilized a Monte Carlo analysis to compare our results with those obtained from an experimental design that employed the random pairing strategy. Our experimental design allowed detection of moderate differences among feeding means when the random design did not.

  15. Item analysis of in use multiple choice questions in pharmacology

    PubMed Central

    Kaur, Mandeep; Singla, Shweta; Mahajan, Rajiv

    2016-01-01

    Background: Multiple choice questions (MCQs) are a common method of assessment of medical students. The quality of MCQs is determined by three parameters: difficulty index (DIF I), discrimination index (DI), and distracter efficiency (DE). Objectives: The objective of this study is to assess the quality of MCQs currently in use in pharmacology and discard the MCQs which are not found useful. Materials and Methods: A class test of the central nervous system unit was conducted in the Department of Pharmacology. This test comprised 50 MCQs/items and 150 distracters. A correct response to an item was awarded one mark with no negative marking for an incorrect response. Each item was analyzed for the three parameters DIF I, DI, and DE. Results: DIF of 38 (76%) items was in the acceptable range (P = 30–70%), 11 (22%) items were too easy (P > 70%), and 1 (2%) item was too difficult (P < 30%). DI of 31 (62%) items was excellent (d > 0.35), of 12 (24%) items was good (d = 0.20–0.34), and of 7 (14%) items was poor (d < 0.20). A total of 50 items had 150 distracters. Among these, 27 (18%) were nonfunctional distracters (NFDs) and 123 (82%) were functional distracters. There were 11 items with one NFD and 8 items with two NFDs. Based on these parameters, 6 items were discarded, 17 were revised, and 27 were kept for subsequent use. Conclusion: Item analysis is a valuable tool as it helps us to retain the valuable MCQs and discard the items which are not useful. It also helps in increasing our skills in test construction and identifies the specific areas of course content which need greater emphasis or clarity. PMID:27563581
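
    The three indices above can be computed from scored responses to a single item: the difficulty index as the percentage answering correctly, the discrimination index from the difference between upper and lower scoring groups, and distracter efficiency from how often each wrong option is chosen. A minimal Python sketch with invented data follows; the 27% group split and the 5% non-functional-distracter threshold are conventional choices assumed here, since the article's exact cut-offs are not quoted in the abstract.

      # Minimal sketch of common item-analysis indices for one item, from
      # invented data. DIF I = percentage answering correctly; DI uses the
      # top and bottom 27% of total scorers; a distracter is counted as
      # non-functional (NFD) if chosen by fewer than 5% of examinees.

      def item_analysis(choices, key, totals):
          n = len(choices)
          dif = 100.0 * sum(c == key for c in choices) / n

          ranked = sorted(range(n), key=lambda i: totals[i], reverse=True)
          g = max(1, round(0.27 * n))
          upper, lower = ranked[:g], ranked[-g:]
          di = (sum(choices[i] == key for i in upper)
                - sum(choices[i] == key for i in lower)) / g

          distracters = [opt for opt in "ABCD" if opt != key]
          nfd = sum(1 for opt in distracters
                    if sum(c == opt for c in choices) / n < 0.05)
          return dif, di, nfd

      # One item, 20 examinees: their chosen option and their total test scores.
      choices = list("AAABAACAADAABAABCADC")
      totals  = [42, 38, 35, 34, 33, 31, 30, 29, 28, 27,
                 26, 25, 24, 22, 21, 20, 18, 16, 14, 10]
      dif, di, nfd = item_analysis(choices, "A", totals)
      print(f"DIF I = {dif:.0f}%, DI = {di:.2f}, non-functional distracters = {nfd}")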

  16. Correcting Grade Deflation Caused by Multiple-Choice Scoring.

    ERIC Educational Resources Information Center

    Baranchik, Alvin; Cherkas, Barry

    2000-01-01

    Presents a study involving three sections of pre-calculus (n=181) at four-year college where partial credit scoring on multiple-choice questions was examined over an entire semester. Indicates that grades determined by partial credit scoring seemed more reflective of both the quantity and quality of student knowledge than grades determined by…

  17. Using the Multiple Choice Procedure to Measure College Student Gambling

    ERIC Educational Resources Information Center

    Butler, Leon Harvey

    2010-01-01

    Research suggests that gambling is similar to addictive behaviors such as substance use. In the current study, gambling was investigated from a behavioral economics perspective. The Multiple Choice Procedure (MCP) with gambling as the target behavior was used to assess for relative reinforcing value, the effect of alternative reinforcers, and…

  18. Analyzing Student Confidence in Classroom Voting with Multiple Choice Questions

    ERIC Educational Resources Information Center

    Stewart, Ann; Storm, Christopher; VonEpps, Lahna

    2013-01-01

    The purpose of this paper is to present results of a recent study in which students voted on multiple choice questions in mathematics courses of varying levels. Students used clickers to select the best answer among the choices given; in addition, they were also asked whether they were confident in their answer. In this paper we analyze data…

  19. Initial Correction versus Negative Marking in Multiple Choice Examinations

    ERIC Educational Resources Information Center

    Van Hecke, Tanja

    2015-01-01

    Optimal assessment tools should measure in a limited time the knowledge of students in a correct and unbiased way. A method for automating the scoring is multiple choice scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: the number right scoring, the initial correction (IC) and…
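
    Although the abstract is cut off, the first two scoring rules it names can be compared under a simple pure-guessing model: with k options per item, number-right scoring awards one point per correct answer, while classical negative marking deducts 1/(k - 1) per wrong answer so that blind guessing has an expected score of zero. The Python sketch below illustrates this comparison; the parameters are illustrative, and the initial-correction rule discussed in the article is not reproduced because its definition is not given here.

      import math

      # Minimal sketch: expected scores and the probability of reaching a pass
      # mark under pure guessing, for number-right scoring and for classical
      # negative marking (penalty 1/(k-1) per wrong answer). The parameters
      # below (20 items, 4 options, pass mark 50%) are illustrative only.

      def binom_pmf(n, j, p):
          return math.comb(n, j) * p**j * (1 - p)**(n - j)

      n_items, k, pass_mark = 20, 4, 10.0
      p_guess = 1 / k
      penalty = 1 / (k - 1)

      p_pass_nr = p_pass_neg = 0.0
      for j in range(n_items + 1):                 # j = number of lucky guesses
          prob = binom_pmf(n_items, j, p_guess)
          if j >= pass_mark:                       # number-right score = j
              p_pass_nr += prob
          if j - penalty * (n_items - j) >= pass_mark:   # negative-marking score
              p_pass_neg += prob

      print(f"E[score], number right : {n_items * p_guess:.2f}")
      print(f"E[score], neg. marking : {n_items * (p_guess - (1 - p_guess) * penalty):.2f}")
      print(f"P(pass by guessing), number right : {p_pass_nr:.4f}")
      print(f"P(pass by guessing), neg. marking : {p_pass_neg:.6f}")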

  20. Validity of multiple-choice examinations in surgery.

    PubMed

    Stillman, R M

    1984-07-01

    The difficulty of creating new, unambiguous, pertinent multiple-choice questions of a level appropriate to medical students implies that examinations must be compiled from a limited number of items. Furthermore, it is impossible to keep used questions inaccessible to all subsequent students. This study was undertaken to determine if these realities are compatible with examinations that are both valid and reliable. A pool of 480 multiple-choice questions was distributed to 232 students during the surgical clerkship. At the conclusion of each quarter, a 120-item multiple-choice examination that consisted of entirely new questions was administered (group I). These 960 questions were then made available to the next group of 218 students; each subsequent examination consisted of 50% new questions and 50% questions repeated verbatim from the publicized pool (group II). With the available pool now increased to 1200, the next examination consisted of 20% new and 80% repeat questions (group III). Reliability (internal consistency) was measured by the Kuder-Richardson-21 formula. Validity was measured by correlation between the multiple-choice examination and the average score of evaluations of each student by two oral examinations and five faculty members. Despite the expected increase in mean examination score, there is loss of neither reliability nor validity by inclusion of even 80% of items repeated from a large pool of multiple-choice questions that have been distributed to the students. Hence, instead of adding irrelevant, trivial, or inappropriate items or trying in vain to hide old examinations from new students, simple preparation of examinations from a large pool of questions is recommended. To insure fairness to all students, this pool should be made public knowledge.

  1. Assessment of item-writing flaws in multiple-choice questions.

    PubMed

    Nedeau-Cayo, Rosemarie; Laughlin, Deborah; Rus, Linda; Hall, John

    2013-01-01

    This study evaluated the quality of multiple-choice questions used in a hospital's e-learning system. Constructing well-written questions is fraught with difficulty, and item-writing flaws are common. Study results revealed that most items contained flaws and were written at the knowledge/comprehension level. Few items had linked objectives, and no association was found between the presence of objectives and flaws. Recommendations include education for writing test questions.

  2. A practical discussion to avoid common pitfalls when constructing multiple choice questions items

    PubMed Central

    Al-Faris, Eiad A.; Alorainy, Ibrahim A.; Abdel-Hameed, Ahmad A.; Al-Rukban, Mohammed O.

    2010-01-01

    This paper is an attempt to produce a guide for improving the quality of Multiple Choice Questions (MCQs) used in undergraduate and postgraduate assessment. The MCQ is the most frequently used type of assessment worldwide. Well-constructed, context-rich MCQs have a high reliability per hour of testing. Avoidance of technical item flaws is essential to improve the validity evidence of MCQs. Technical item flaws are essentially of two types: (i) those related to testwiseness, and (ii) those related to irrelevant difficulty. A list of such flaws is presented, together with a discussion of each flaw and examples, to make the paper learner friendly. This paper was designed to be interactive, with self-assessment exercises followed by the key answer with explanations. PMID:21359033

  3. The effect of images on item statistics in multiple choice anatomy examinations.

    PubMed

    Notebaert, Andrew J

    2017-01-01

    Although multiple choice examinations are often used to test anatomical knowledge, these often forgo the use of images in favor of text-based questions and answers. Because anatomy is reliant on visual resources, examinations using images should be used when appropriate. This study was a retrospective analysis of examination items that were text based compared to the same questions when a reference image was included with the question stem. Item difficulty and discrimination were analyzed for 15 multiple choice items given across two different examinations in two sections of an undergraduate anatomy course. Results showed that there were some differences in item difficulty, but these were not consistently associated with either text items or items with reference images. Differences in difficulty were mainly attributable to one group of students performing better overall on the examinations. There were no significant differences for item discrimination for any of the analyzed items. This implies that reference images do not significantly alter the item statistics; however, this does not indicate whether these images were helpful to the students when answering the questions. Care should be taken by question writers to analyze item statistics when making changes to multiple choice questions, including ones that are included for the perceived benefit of the students. Anat Sci Educ 10: 68-78. © 2016 American Association of Anatomists.

  4. An Investigation of Explanation Multiple-Choice Items in Science Assessment

    ERIC Educational Resources Information Center

    Liu, Ou Lydia; Lee, Hee-Sun; Linn, Marcia C.

    2011-01-01

    Both multiple-choice and constructed-response items have known advantages and disadvantages in measuring scientific inquiry. In this article we explore the function of explanation multiple-choice (EMC) items and examine how EMC items differ from traditional multiple-choice and constructed-response items in measuring scientific reasoning. A group…

  5. Polytomous versus Dichotomous Scoring on Multiple-Choice Examinations: Development of a Rubric for Rating Partial Credit

    ERIC Educational Resources Information Center

    Grunert, Megan L.; Raker, Jeffrey R.; Murphy, Kristen L.; Holme, Thomas A.

    2013-01-01

    The concept of assigning partial credit on multiple-choice test items is considered for items from ACS Exams. Because the items on these exams, particularly the quantitative items, use common student errors to define incorrect answers, it is possible to assign partial credits to some of these incorrect responses. To do so, however, it becomes…

  6. Asymmetry in Student Achievement on Multiple-Choice and Constructed-Response Items in Reversible Mathematics Processes

    ERIC Educational Resources Information Center

    Sangwin, Christopher J.; Jones, Ian

    2017-01-01

    In this paper we report the results of an experiment designed to test the hypothesis that when faced with a question involving the inverse direction of a reversible mathematical process, students solve a multiple-choice version by verifying the answers presented to them by the direct method, not by undertaking the actual inverse calculation.…

  7. Differences in Reaction to Immediate Feedback and Opportunity to Revise Answers for Multiple-Choice and Open-Ended Questions

    ERIC Educational Resources Information Center

    Attali, Yigal; Laitusis, Cara; Stone, Elizabeth

    2016-01-01

    There are many reasons to believe that open-ended (OE) and multiple-choice (MC) items elicit different cognitive demands of students. However, empirical evidence that supports this view is lacking. In this study, we investigated the reactions of test takers to an interactive assessment with immediate feedback and answer-revision opportunities for…

  8. An Empirical Comparison of DDF Detection Methods for Understanding the Causes of DIF in Multiple-Choice Items

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Talley, Anna E.

    2015-01-01

    This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…

  9. The Development of Multiple-Choice Items Consistent with the AP Chemistry Curriculum Framework to More Accurately Assess Deeper Understanding

    ERIC Educational Resources Information Center

    Domyancich, John M.

    2014-01-01

    Multiple-choice questions are an important part of large-scale summative assessments, such as the advanced placement (AP) chemistry exam. However, past AP chemistry exam items often lacked the ability to test conceptual understanding and higher-order cognitive skills. The redesigned AP chemistry exam shows a distinctive shift in item types toward…

  10. Multiple Choice Neurodynamical Model of the Uncertain Option Task.

    PubMed

    Insabato, Andrea; Pannunzi, Mario; Deco, Gustavo

    2017-01-01

    The uncertain option task has recently been adopted to investigate the neural systems underlying decision confidence. More recently, single-neuron activity has been recorded in the lateral intraparietal cortex of monkeys performing an uncertain option task, in which the subject is allowed to opt for a small but sure reward instead of making a risky perceptual decision. We propose a multiple-choice model implemented in a discrete attractor network. This model is able to reproduce both behavioral and neurophysiological experimental data and therefore provides support to the numerous perspectives that interpret the uncertain option task as a sensory-motor association. The model explains the behavioral and neural data recorded in monkeys as the result of the multistable attractor landscape and produces several testable predictions. One of these predictions may help distinguish our model from a recently proposed continuous attractor model.

  11. Multiple Choice Neurodynamical Model of the Uncertain Option Task

    PubMed Central

    Insabato, Andrea; Pannunzi, Mario; Deco, Gustavo

    2017-01-01

    The uncertain option task has recently been adopted to investigate the neural systems underlying decision confidence. More recently, single-neuron activity has been recorded in the lateral intraparietal cortex of monkeys performing an uncertain option task, in which the subject is allowed to opt for a small but sure reward instead of making a risky perceptual decision. We propose a multiple-choice model implemented in a discrete attractor network. This model is able to reproduce both behavioral and neurophysiological experimental data and therefore provides support to the numerous perspectives that interpret the uncertain option task as a sensory-motor association. The model explains the behavioral and neural data recorded in monkeys as the result of the multistable attractor landscape and produces several testable predictions. One of these predictions may help distinguish our model from a recently proposed continuous attractor model. PMID:28076355

  12. Potential Values of Incorporating a Multiple-Choice Question Construction in Physics Experimentation Instruction

    NASA Astrophysics Data System (ADS)

    Yu, Fu-Yun; Liu, Yu-Hsin

    2005-09-01

    The potential value of a multiple-choice question-construction instructional strategy for the support of students’ learning of physics experiments was examined in the study. Forty-two university freshmen participated in the study for a whole semester. A constant comparison method adopted to categorize students’ qualitative data indicated that the influences of multiple-choice question construction were evident in several significant ways (promoting constructive and productive studying habits; reflecting and previewing course-related materials; increasing in-group communication and interaction; breaking passive learning style and habits, etc.), which, working together, not only enhanced students’ comprehension and retention of the obtained knowledge, but also helped instill a sense of empowerment and learning community within the participants. One-group t-tests on the quantitative data, using 3 as the expected mean, further found that students’ satisfaction with their past learning experience and their perceptions of this strategy’s potential for promoting learning were statistically significant at the 0.0005 level, while learning anxiety was not statistically significant. Suggestions for incorporating question-generation activities within the classroom and topics for future studies are offered.

  13. The detection of cheating in multiple choice examinations

    NASA Astrophysics Data System (ADS)

    Richmond, Peter; Roehner, Bertrand M.

    2015-10-01

    Cheating in examinations is acknowledged by an increasing number of organizations to be widespread. We examine two different approaches to assess their effectiveness at detecting anomalous results, suggestive of collusion, using data taken from a number of multiple-choice examinations organized by the UK Radio Communication Foundation. Analysis of student pair overlaps of correct answers is shown to give results consistent with more orthodox statistical correlations, for which confidence limits, as opposed to the less familiar "Bonferroni method", can be used. A simulation approach is also developed which confirms the interpretation of the empirical approach. The simulation rests on the construction Xi = (1 - Ui) Yi + Ui Z, which yields a system of symmetric (exchangeable) dependent binary variables taking the values 0 and 1 with probability p, whose correlation matrix has constant off-diagonal entries ρij = r; the symmetry requirement forces all pairwise correlations to be non-negative, since ρ12 < 0 and ρ23 < 0 would imply ρ13 > 0, violating exchangeability.
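
    The construction recovered above is easy to simulate. In the Python sketch below, Yi and Z are Bernoulli(p) and the mixing indicators Ui are taken to be Bernoulli(sqrt(r)); that particular choice for Ui is an assumption made here so that the pairwise correlation works out to r, since the paper's own parameterisation is not visible in the abstract.

      import random

      # Minimal sketch of the exchangeable-binary construction above:
      # X_i = (1 - U_i) * Y_i + U_i * Z, with Y_i and Z ~ Bernoulli(p).
      # Taking U_i ~ Bernoulli(sqrt(r)) is an assumption made here so that
      # corr(X_i, X_j) = r; the paper's own parameterisation is not shown
      # in the abstract.

      def correlated_answers(n_items, p, r, rng=random):
          z = 1 if rng.random() < p else 0
          xs = []
          for _ in range(n_items):
              u = 1 if rng.random() < r ** 0.5 else 0
              y = 1 if rng.random() < p else 0
              xs.append(z if u else y)
          return xs

      # Quick empirical check of the pairwise correlation between two items.
      random.seed(0)
      p, r, n_sims = 0.6, 0.3, 200_000
      pairs = [correlated_answers(2, p, r) for _ in range(n_sims)]
      m1 = sum(a for a, _ in pairs) / n_sims
      m2 = sum(b for _, b in pairs) / n_sims
      cov = sum(a * b for a, b in pairs) / n_sims - m1 * m2
      corr = cov / ((m1 * (1 - m1)) ** 0.5 * (m2 * (1 - m2)) ** 0.5)
      print(f"empirical pairwise correlation: {corr:.3f} (target {r})")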

  14. Multiple Choice Knapsack Problem: example of planning choice in transportation.

    PubMed

    Zhong, Tao; Young, Rhonda

    2010-05-01

    Transportation programming, a process of selecting projects for funding given budget and other constraints, is becoming more complex as a result of new federal laws, local planning regulations, and increased public involvement. This article describes the use of an integer programming tool, Multiple Choice Knapsack Problem (MCKP), to provide optimal solutions to transportation programming problems in cases where alternative versions of projects are under consideration. In this paper, optimization methods for use in the transportation programming process are compared and then the process of building and solving the optimization problems is discussed. The concepts about the use of MCKP are presented and a real-world transportation programming example at various budget levels is provided. This article illustrates how the use of MCKP addresses the modern complexities and provides timely solutions in transportation programming practice. While the article uses transportation programming as a case study, MCKP can be useful in other fields where a similar decision among a subset of the alternatives is required.
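
    In the MCKP, items are partitioned into classes (here, alternative versions of the same project), exactly one item is chosen from each class, and the total cost must stay within the budget. A minimal dynamic-programming sketch over integer budgets follows; the project data are invented, and real transportation programming problems would normally be handed to an integer-programming solver.

      # Minimal sketch: Multiple Choice Knapsack Problem (MCKP) solved by dynamic
      # programming over integer budgets. Each "class" holds alternative versions
      # of one project as (cost, benefit) pairs; exactly one version per class is
      # selected. The project data and budget are invented for illustration.

      def solve_mckp(classes, budget):
          NEG = float("-inf")
          # best[b] = max benefit achievable at total cost exactly b (NEG if unreachable)
          best = [0.0] + [NEG] * budget
          for alternatives in classes:
              new = [NEG] * (budget + 1)
              for b in range(budget + 1):
                  if best[b] == NEG:
                      continue
                  for cost, benefit in alternatives:
                      if b + cost <= budget and best[b] + benefit > new[b + cost]:
                          new[b + cost] = best[b] + benefit
              best = new
          return max(v for v in best if v != NEG)

      projects = [
          [(4, 10.0), (6, 13.0), (9, 17.0)],     # project 1: three design alternatives
          [(3, 6.0), (5, 9.5)],                  # project 2: two alternatives
          [(2, 4.0), (7, 12.0), (8, 12.5)],      # project 3: three alternatives
      ]
      print("max total benefit:", solve_mckp(projects, budget=15))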

  15. Role of the plurality rule in multiple choices

    NASA Astrophysics Data System (ADS)

    Calvão, A. M.; Ramos, M.; Anteneodo, C.

    2016-02-01

    People are often challenged to select one among several alternatives. This situation is present not only in decisions about complex issues, e.g. political or academic choices, but also about trivial ones, such as in daily purchases at a supermarket. We tackle this scenario by means of the tools of statistical mechanics. Following this approach, we introduce and analyse a model of opinion dynamics, using a Potts-like state variable to represent the multiple choices, including the ‘undecided state’, which represents the individuals who do not make a choice. We investigate the dynamics over Erdös-Rényi and Barabási-Albert networks, two paradigmatic classes with the small-world property, and we show the impact of the type of network on the opinion dynamics. Depending on the number of available options q and on the degree distribution of the network of contacts, different final steady states are accessible: from a wide distribution of choices to a state where a given option largely dominates. The abrupt transition between them is consistent with the sudden viral dominance of a given option over many similar ones. Moreover, the probability distributions produced by the model are validated by real data. Finally, we show that the model also contemplates the real situation of overchoice, where a large number of similar alternatives makes the choice process harder and indecision prevails.
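
    As an illustration only (the update rule below is a generic plurality rule, not necessarily the paper's exact dynamics), a small simulation of q options plus an undecided state on an Erdös-Rényi network:

      import random
      from collections import Counter
      import networkx as nx

      random.seed(1)
      q, n, steps = 5, 500, 20_000
      G = nx.erdos_renyi_graph(n, p=0.02, seed=1)

      state = {v: 0 for v in G}                        # 0 = undecided
      for v in random.sample(list(G), n // 2):         # seed half the agents with a random option
          state[v] = random.randint(1, q)

      for _ in range(steps):
          v = random.choice(list(G))
          counts = Counter(state[u] for u in G[v] if state[u] != 0)
          if counts:                                   # adopt the plurality choice among decided neighbours
              best = max(counts.values())
              state[v] = random.choice([s for s, c in counts.items() if c == best])

      print(Counter(state.values()))                   # final distribution over the q options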

  16. Evaluation of five guidelines for option development in multiple-choice item-writing.

    PubMed

    Martínez, Rafael J; Moreno, Rafael; Martín, Irene; Trigo, M Eva

    2009-05-01

    This paper evaluates certain guidelines for writing multiple-choice test items. The analysis of the responses of 5013 subjects to 630 items from 21 university classroom achievement tests suggests that an option should not differ from the others by having heterogeneous content, because such an error has a slight but harmful effect on item discrimination. This also occurs with the "None of the above" option when it is the correct one. In contrast, results do not show the supposedly negative effects of a different-length option, the use of specific determiners, or the use of the "All of the above" option, which not only decreases difficulty but also improves discrimination when it is the correct option.

  17. Validating Measurement of Knowledge Integration in Science Using Multiple-Choice and Explanation Items

    ERIC Educational Resources Information Center

    Lee, Hee-Sun; Liu, Ou Lydia; Linn, Marcia C.

    2011-01-01

    This study explores measurement of a construct called knowledge integration in science using multiple-choice and explanation items. We use construct and instructional validity evidence to examine the role multiple-choice and explanation items play in measuring students' knowledge integration ability. For construct validity, we analyze item…

  18. Teaching Critical Thinking without (Much) Writing: Multiple-Choice and Metacognition

    ERIC Educational Resources Information Center

    Bassett, Molly H.

    2016-01-01

    In this essay, I explore an exam format that pairs multiple-choice questions with required rationales. In a space adjacent to each multiple-choice question, students explain why or how they arrived at the answer they selected. This exercise builds the critical thinking skill known as metacognition, thinking about thinking, into an exam that also…

  19. The Answering Process for Multiple-Choice Questions in Collaborative Learning: A Mathematical Learning Model Analysis

    ERIC Educational Resources Information Center

    Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro

    2014-01-01

    In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…

  20. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    ERIC Educational Resources Information Center

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…

  1. The Use of a Comprehensive Multiple Choice Final Exam in the Macroeconomics Principles Course: An Assessment.

    ERIC Educational Resources Information Center

    Petrowsky, Michael C.

    This paper analyzes the results of a pilot study at Glendale Community College (Arizona) to assess the effectiveness of a comprehensive multiple choice final exam in the macroeconomic principles course. The "pilot project" involved the administration of a 50-question multiple choice exam to 71 students in three macroeconomics sections.…

  2. Multiple-Choice and Short-Answer Exam Performance in a College Classroom

    ERIC Educational Resources Information Center

    Funk, Steven C.; Dickson, K. Laurie

    2011-01-01

    The authors experimentally investigated the effects of multiple-choice and short-answer format exam items on exam performance in a college classroom. They randomly assigned 50 students to take a 10-item short-answer pretest or posttest on two 50-item multiple-choice exams in an introduction to personality course. Students performed significantly…

  3. Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment

    ERIC Educational Resources Information Center

    Prevost, Luanna B.; Lemons, Paula P.

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this…

  4. Multiple choice questions can be designed or revised to challenge learners' critical thinking.

    PubMed

    Tractenberg, Rochelle E; Gushta, Matthew M; Mulroney, Susan E; Weissinger, Peggy A

    2013-12-01

    Multiple choice (MC) questions from a graduate physiology course were evaluated by cognitive-psychology (but not physiology) experts, and analyzed statistically, in order to test the independence of content expertise and cognitive complexity ratings of MC items. Integration of higher order thinking into MC exams is important but widely known to be challenging, perhaps especially when content experts must think like novices. Expertise in the domain (content) may actually impede the creation of higher-complexity items. Three cognitive psychology experts independently rated cognitive complexity for 252 multiple-choice physiology items using a six-level cognitive complexity matrix that was synthesized from the literature. Rasch modeling estimated item difficulties. The complexity ratings and difficulty estimates were then analyzed together to determine the relative contributions (and independence) of complexity and difficulty to the likelihood of correct answers on each item. Cognitive complexity was found to be statistically independent of difficulty estimates for 88% of items. Using the complexity matrix, modifications were identified to increase some item complexities by one level, without affecting the item's difficulty. Cognitive complexity can effectively be rated by non-content experts. The six-level complexity matrix, if applied by faculty peer groups trained in cognitive complexity and without domain-specific expertise, could lead to improvements in the complexity targeted with item writing and revision. Targeting higher order thinking with MC questions can be achieved without changing item difficulties or other test characteristics, but this may be less likely if the content expert is left to assess items within their domain of expertise.

  5. Are faculty predictions or item taxonomies useful for estimating the outcome of multiple-choice examinations?

    PubMed

    Kibble, Jonathan D; Johnson, Teresa

    2011-12-01

    The purpose of this study was to evaluate whether multiple-choice item difficulty could be predicted either by a subjective judgment by the question author or by applying a learning taxonomy to the items. Eight physiology faculty members teaching an upper-level undergraduate human physiology course consented to participate in the study. The faculty members annotated questions before exams with the descriptors "easy," "moderate," or "hard" and classified them according to whether they tested knowledge, comprehension, or application. Overall analysis showed a statistically significant, but relatively low, correlation between the intended item difficulty and actual student scores (ρ = -0.19, P < 0.01), indicating that, as intended item difficulty increased, the resulting student scores on items tended to decrease. Although this expected inverse relationship was detected, faculty members were correct only 48% of the time when estimating difficulty. There was also significant individual variation among faculty members in the ability to predict item difficulty (χ² = 16.84, P = 0.02). With regard to the cognitive level of items, no significant correlation was found between the item cognitive level and either actual student scores (ρ = -0.09, P = 0.14) or item discrimination (ρ = 0.05, P = 0.42). Despite the inability of faculty members to accurately predict item difficulty, the examinations were of high quality, as evidenced by reliability coefficients (Cronbach's α) of 0.70-0.92, the rejection of only 4 of 300 items in the postexamination review, and a mean item discrimination (point biserial) of 0.37. In conclusion, the effort of assigning annotations describing intended difficulty and cognitive levels to multiple-choice items is of doubtful value in terms of controlling examination difficulty. However, we also report that the process of annotating questions may enhance examination validity and can reveal aspects of the hidden curriculum.
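
    The quality indices reported above (Cronbach's alpha and point-biserial discrimination) can be computed directly from a 0/1 scored response matrix. A minimal sketch with random stand-in data, so the printed values themselves are not meaningful:

      import numpy as np

      rng = np.random.default_rng(0)
      scores = rng.integers(0, 2, size=(100, 40))       # 100 examinees x 40 items (stand-in data)

      def cronbach_alpha(X):
          # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
          k = X.shape[1]
          item_var = X.var(axis=0, ddof=1).sum()
          total_var = X.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      def point_biserial(X):
          # correlation of each item with the corrected total score (item removed)
          total = X.sum(axis=1)
          return np.array([np.corrcoef(X[:, i], total - X[:, i])[0, 1] for i in range(X.shape[1])])

      print(round(cronbach_alpha(scores), 2))
      print(np.round(point_biserial(scores)[:5], 2))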

  6. Using a Theorem by Andersen and the Dichotomous Rasch Model to Assess the Presence of Random Guessing in Multiple Choice Items

    ERIC Educational Resources Information Center

    Andrich, David; Marais, Ida; Humphry, Stephen

    2012-01-01

    Andersen (1995, 2002) proves a theorem relating variances of parameter estimates from samples and subsamples and shows its use as an adjunct to standard statistical analyses. The authors show an application where the theorem is central to the hypothesis tested, namely, whether random guessing to multiple choice items affects their estimates in the…

  7. Sensitivity of Linkings between AP Multiple-Choice Scores and Composite Scores to Geographical Region: An Illustration of Checking for Population Invariance

    ERIC Educational Resources Information Center

    Yang, Wen-Ling

    2004-01-01

    This application study investigates whether the multiple-choice to composite linking functions that determine Advanced Placement Program exam grades remain invariant over subgroups defined by region. Three years of test data from an AP exam are used to study invariance across regions. The study focuses on two questions: (a) How invariant are grade…

  8. Comparison of Certification and Recertification Examinee Performance on Multiple-Choice Items in Forensic Psychiatry.

    PubMed

    Juul, Dorthea; Vollmer, Jennifer; Shen, Linjun; Faulkner, Larry R

    2016-03-01

    Research on the association between age and performance on tests of medical knowledge has generally shown an inverse relationship, which is of concern because of the positive association between measures of knowledge and measures of clinical performance. Because the certification and maintenance of certification (MOC) examinations in the subspecialty of forensic psychiatry draw on a common item bank, performance of the two groups of examinees on the same items could be compared. In addition, the relationship between age and test performance was analyzed. Performance on items administered to certification and MOC examinees did not differ significantly, and the mean amount of time spent on each item was similar for the two groups. Although the majority (five of eight) of the correlations between age and test score on the certification and MOC examinations were negative, only three were significant, and the amount of variance explained by age was small. In addition, examination performance for those younger than 50 was similar to those 60 and older, and diplomates recertifying for the second time outperformed those doing so for the first time. These results indicate that in this subspecialty, there is no clear evidence of an age-related decline in knowledge as assessed by multiple-choice items.

  9. Faculty development programs improve the quality of Multiple Choice Questions items' writing.

    PubMed

    Abdulghani, Hamza Mohammad; Ahmad, Farah; Irshad, Mohammad; Khalil, Mahmoud Salah; Al-Shaikh, Ghadeer Khalid; Syed, Sadiqa; Aldrees, Abdulmajeed Abdurrahman; Alrowais, Norah; Haque, Shafiul

    2015-04-01

    The aim of this study was to assess the utility of long-term faculty development programs (FDPs) for improving the quality of multiple choice question (MCQ) item writing. This was a quasi-experimental study conducted with newly joined faculty members. The MCQ items from test courses of the respiratory, cardiovascular and renal blocks were analyzed for difficulty index, discriminating index, reliability, Bloom's cognitive levels, item writing flaws (IWFs) and nonfunctioning distractors (NFDs). Significant improvement was found in the difficulty index values from pre- to post-training (p = 0.003). MCQs with moderate difficulty and higher discrimination were more frequent in the post-training tests in all three courses. The proportion of easy questions decreased from 36.7% to 22.5%. The discriminating indices also improved, from 92.1% to 95.4% after training, although this change was not statistically significant (p = 0.132). A greater number of items at higher cognitive levels of Bloom's taxonomy was found in the post-training tests (p < 0.0001), and NFDs and IWFs were less frequent in the post-training items (p < 0.02). MCQs written by faculty members who have not participated in FDPs are usually of low quality. This study suggests that newly joined faculty members need active participation in FDPs, as these programs support improvement in the quality of MCQ item writing.

  10. Multiple-choice exams: an obstacle for higher-level thinking in introductory science classes.

    PubMed

    Stanger-Hall, Kathrin F

    2012-01-01

    Learning science requires higher-level (critical) thinking skills that need to be practiced in science classes. This study tested the effect of exam format on critical-thinking skills. Multiple-choice (MC) testing is common in introductory science courses, and students in these classes tend to associate memorization with MC questions and may not see the need to modify their study strategies for critical thinking, because the MC exam format has not changed. To test the effect of exam format, I used two sections of an introductory biology class. One section was assessed with exams in the traditional MC format, the other section was assessed with both MC and constructed-response (CR) questions. The mixed exam format was correlated with significantly more cognitively active study behaviors and a significantly better performance on the cumulative final exam (after accounting for grade point average and gender). There was also less gender-bias in the CR answers. This suggests that the MC-only exam format indeed hinders critical thinking in introductory science classes. Introducing CR questions encouraged students to learn more and to be better critical thinkers and reduced gender bias. However, student resistance increased as students adjusted their perceptions of their own critical-thinking abilities.

  11. Faculty development programs improve the quality of Multiple Choice Questions items' writing

    PubMed Central

    Abdulghani, Hamza Mohammad; Ahmad, Farah; Irshad, Mohammad; Khalil, Mahmoud Salah; Al-Shaikh, Ghadeer Khalid; Syed, Sadiqa; Aldrees, Abdulmajeed Abdurrahman; Alrowais, Norah; Haque, Shafiul

    2015-01-01

    The aim of this study was to assess the utility of long-term faculty development programs (FDPs) for improving the quality of multiple choice question (MCQ) item writing. This was a quasi-experimental study conducted with newly joined faculty members. The MCQ items from test courses of the respiratory, cardiovascular and renal blocks were analyzed for difficulty index, discriminating index, reliability, Bloom's cognitive levels, item writing flaws (IWFs) and nonfunctioning distractors (NFDs). Significant improvement was found in the difficulty index values from pre- to post-training (p = 0.003). MCQs with moderate difficulty and higher discrimination were more frequent in the post-training tests in all three courses. The proportion of easy questions decreased from 36.7% to 22.5%. The discriminating indices also improved, from 92.1% to 95.4% after training, although this change was not statistically significant (p = 0.132). A greater number of items at higher cognitive levels of Bloom's taxonomy was found in the post-training tests (p < 0.0001), and NFDs and IWFs were less frequent in the post-training items (p < 0.02). MCQs written by faculty members who have not participated in FDPs are usually of low quality. This study suggests that newly joined faculty members need active participation in FDPs, as these programs support improvement in the quality of MCQ item writing. PMID:25828516

  12. Contemplation on marking scheme for Type X multiple choice questions, and an illustration of a practically applicable scheme

    PubMed Central

    Siddiqui, Nazeem Ishrat; Bhavsar, Vinayak H.; Bhavsar, Arnav V.; Bose, Sukhwant

    2016-01-01

    Ever since their inception a century ago, multiple choice items have been widely used as a method of assessment. They have certain inherent limitations, such as an inability to test higher cognitive skills, an element of guesswork while answering, and issues related to marking schemes. Various marking schemes have been proposed in the past, but they are unbalanced, skewed, or complex, being based on mathematical calculations that are typically not within the grasp of medical personnel. Type X questions have many advantages: they are easy to construct, can test multiple concepts, applications, or facets of a topic, and can assess cognitive skills at various levels of the hierarchy; unlike Type K items, they are free from complicated coding. In spite of these advantages, they are not in common use because of complicated marking schemes. For this reason we explored methods of evaluating multiple-correct-option multiple choice questions and arrived at a simple, practically applicable, non-stringent but logical scoring system for them. The rationale of the illustrated marking scheme is that it takes into consideration the examinee's ability to recognize distracters rather than relying only on the ability to select the correct response. Thus, the examinee's true knowledge is tested, and credit is awarded both for selecting a correct answer and for omitting a distracter. The scheme also penalizes failure to recognize a distracter, thus controlling guessing behavior. It is emphasized that if the illustrated scoring scheme were adopted, Type X questions would come into common use. PMID:27127312
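
    The scheme itself is given in the article; as a purely hypothetical illustration of per-option scoring in the same spirit (credit for selecting a keyed option or omitting a distracter, a penalty for selecting a distracter), with weights that are ours and not the published ones:

      def score_type_x(keyed, selected, options="ABCD"):
          """keyed: set of correct option labels; selected: set of options the examinee marked."""
          score = 0.0
          for opt in options:
              if opt in keyed:
                  score += 1.0 if opt in selected else 0.0       # credit for each correct pick
              else:
                  score += 0.5 if opt not in selected else -0.5  # credit for omitting, penalty for marking a distracter
          return score

      print(score_type_x(keyed={"A", "C"}, selected={"A", "C"}))   # full marks: 3.0
      print(score_type_x(keyed={"A", "C"}, selected={"A", "B"}))   # partial knowledge: 1.0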

  13. Benford’s Law: Textbook Exercises and Multiple-Choice Testbanks

    PubMed Central

    Slepkov, Aaron D.; Ironside, Kevin B.; DiBattista, David

    2015-01-01

    Benford’s Law describes the finding that the distribution of leading (or leftmost) digits of innumerable datasets follows a well-defined logarithmic trend, rather than an intuitive uniformity. In practice this means that the most common leading digit is 1, with an expected frequency of 30.1%, and the least common is 9, with an expected frequency of 4.6%. Currently, the most common application of Benford’s Law is in detecting number invention and tampering such as found in accounting-, tax-, and voter-fraud. We demonstrate that answers to end-of-chapter exercises in physics and chemistry textbooks conform to Benford’s Law. Subsequently, we investigate whether this fact can be used to gain advantage over random guessing in multiple-choice tests, and find that while testbank answers in introductory physics closely conform to Benford’s Law, the testbank is nonetheless secure against such a Benford’s attack for banal reasons. PMID:25689468
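
    The Benford comparison amounts to tabulating leading digits and comparing their frequencies with P(d) = log10(1 + 1/d). A minimal sketch with an invented list of numeric answers:

      import math
      from collections import Counter

      answers = [3.2e-4, 180, 9.81, 0.045, 1.6e-19, 2.9, 7200, 0.33, 515, 1.2]   # made-up answer values

      def leading_digit(x):
          return int(f"{abs(x):e}"[0])        # scientific notation puts the leading digit first

      counts = Counter(leading_digit(a) for a in answers)
      n = len(answers)
      for d in range(1, 10):
          print(d, f"observed {counts.get(d, 0) / n:.2f}", f"Benford {math.log10(1 + 1 / d):.3f}")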

  14. Improving Educational Assessment: A Computer-Adaptive Multiple Choice Assessment Using NRET as the Scoring Method

    ERIC Educational Resources Information Center

    Sie Hoe, Lau; Ngee Kiong, Lau; Kian Sam, Hong; Bin Usop, Hasbee

    2009-01-01

    Assessment is central to any educational process. Number Right (NR) scoring method is a conventional scoring method for multiple choice items, where students need to pick one option as the correct answer. One point is awarded for the correct response and zero for any other responses. However, it has been heavily criticized for guessing and failure…

  15. Are Faculty Predictions or Item Taxonomies Useful for Estimating the Outcome of Multiple-Choice Examinations?

    ERIC Educational Resources Information Center

    Kibble, Jonathan D.; Johnson, Teresa

    2011-01-01

    The purpose of this study was to evaluate whether multiple-choice item difficulty could be predicted either by a subjective judgment by the question author or by applying a learning taxonomy to the items. Eight physiology faculty members teaching an upper-level undergraduate human physiology course consented to participate in the study. The…

  16. Validity and Reliability of Scores Obtained on Multiple-Choice Questions: Why Functioning Distractors Matter

    ERIC Educational Resources Information Center

    Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G.

    2016-01-01

    Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on validity and reliability of scores obtained on MCQs. Freeresponse (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…

  17. Differential Daily Writing Contingencies and Performance on Major Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Hautau, Briana; Turner, Haley C.; Carroll, Erin; Jaspers, Kathryn; Parker, Megan; Krohn, Katy; Williams, Robert L.

    2006-01-01

    On 4 of 7 days in each unit of an undergraduate human development course, students responded in writing to specific questions related to instructor notes previously made available to them. The study compared the effects of three writing contingencies on the quality of student writing and performance on major multiple-choice exams in the course. …

  18. College Students' Behavior on Multiple-Choice Self-Tailored Exams

    ERIC Educational Resources Information Center

    Vuk, Jasna; Morse, David T.

    2009-01-01

    In this study we observed college students' behavior on two self-tailored, multiple-choice exams. Self-tailoring was defined as an option to omit up to five items from being scored on an exam. Participants, 80 undergraduate college students enrolled in two sections of an educational psychology course, statistically significantly improved their…

  19. Gender Differences in Mathematics Self-Efficacy and Back Substitution in Multiple-Choice Assessment

    ERIC Educational Resources Information Center

    Goodwin, K. Shane; Ostrom, Lee; Scott, Karen Wilson

    2009-01-01

    A quantitative observational study exploring the relationship of gender to mathematics self-efficacy and the frequency of back substitution in multiple-choice assessment sampled undergraduates at a western United States parochial university. Research questions addressed: to what extent are there gender differences in mathematics self-efficacy, as…

  20. Multiple-Choice Glosses and Incidental Vocabulary Learning: A Case of an EFL Context

    ERIC Educational Resources Information Center

    Ghahari, Shima; Heidarolad, Meissam

    2015-01-01

    Provision of multiple-choice (MC) glosses, which combines the advantages of glosses and inferring, has recently gained its share of supporters as a potential technique for enhancing L2 texts and increasing word gain for L2 learners. Upon taking an actual TOEFL, the participants underwent a vocabulary pretest to ensure that the target words were…

  1. Cheating on Multiple-Choice Exams: Monitoring, Assessment, and an Optional Assignment

    ERIC Educational Resources Information Center

    Nath, Leda; Lovaglia, Michael

    2009-01-01

    Academic dishonesty is unethical. Exam cheating is viewed as more serious than most other forms (Pincus and Schmelkin 2003). The authors review the general cheating problem, introduce a program to conservatively identify likely cheaters on multiple-choice exams, and offer a procedure for handling likely cheaters. Feedback from students who confess…

  2. Written Justifications to Multiple-Choice Concept Questions during Active Learning in Class

    ERIC Educational Resources Information Center

    Koretsky, Milo D.; Brooks, Bill J.; Higgins, Adam Z.

    2016-01-01

    Increasingly, instructors of large, introductory STEM courses are having students actively engage during class by answering multiple-choice concept questions individually and in groups. This study investigates the use of a technology-based tool that allows students to answer such questions during class. The tool also allows the instructor to…

  3. The Use of Management and Marketing Textbook Multiple-Choice Questions: A Case Study.

    ERIC Educational Resources Information Center

    Hampton, David R.; And Others

    1993-01-01

    Four management and four marketing professors classified multiple-choice questions in four widely adopted introductory textbooks according to the two levels of Bloom's taxonomy of educational objectives: knowledge and intellectual ability and skill. Inaccuracies may cause instructors to select questions that require less thinking than they intend.…

  4. Analysis of Changing Answers on Multiple-Choice Examination for Nationwide Sample of Canadian Psychiatry Residents.

    ERIC Educational Resources Information Center

    Welch, Joan; Leichner, Pierre

    1988-01-01

    Altering first choices on multiple-choice questions on a medical examination (Canadian Self-Assessment Examination in Psychiatry) was examined to see whether this led to an increase or decrease in the final score. Examinees were one-and-a-half times more likely to improve than lower their score. (MLW)

  5. A Participatory Learning Approach to Biochemistry Using Student Authored and Evaluated Multiple-Choice Questions

    ERIC Educational Resources Information Center

    Bottomley, Steven; Denny, Paul

    2011-01-01

    A participatory learning approach, combined with both a traditional and a competitive assessment, was used to motivate students and promote a deep approach to learning biochemistry. Students were challenged to research, author, and explain their own multiple-choice questions (MCQs). They were also required to answer, evaluate, and discuss MCQs…

  6. Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis

    ERIC Educational Resources Information Center

    Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying

    2012-01-01

    This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…

  7. Illusion of Linearity in Geometry: Effect in Multiple-Choice Problems

    ERIC Educational Resources Information Center

    Vlahovic-Stetic, Vesna; Pavlin-Bernardic, Nina; Rajter, Miroslav

    2010-01-01

    The aim of this study was to examine if there is a difference in the performance on non-linear problems regarding age, gender, and solving situation, and whether the multiple-choice answer format influences students' thinking. A total of 112 students, aged 15-16 and 18-19, were asked to solve problems for which solutions based on proportionality…

  8. Guide to Developing High-Quality, Reliable, and Valid Multiple-Choice Assessments

    ERIC Educational Resources Information Center

    Towns, Marcy H.

    2014-01-01

    Chemistry faculty members are highly skilled in obtaining, analyzing, and interpreting physical measurements, but often they are less skilled in measuring student learning. This work provides guidance for chemistry faculty from the research literature on multiple-choice item development in chemistry. Areas covered include content, stem, and…

  9. FormScanner: Open-Source Solution for Grading Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Young, Chadwick; Lo, Glenn; Young, Kaisa; Borsetta, Alberto

    2016-01-01

    The multiple-choice exam remains a staple for many introductory physics courses. In the past, people have graded these by hand or even with flaming needles. Today, one usually grades the exams with a form scanner that utilizes optical mark recognition (OMR). Several companies provide these scanners and particular forms, such as the eponymous…

  10. A Method for Imputing Response Options for Missing Data on Multiple-Choice Assessments

    ERIC Educational Resources Information Center

    Wolkowitz, Amanda A.; Skorupski, William P.

    2013-01-01

    When missing values are present in item response data, there are a number of ways one might impute a correct or incorrect response to a multiple-choice item. There are significantly fewer methods for imputing the actual response option an examinee may have provided if he or she had not omitted the item either purposely or accidentally. This…

  11. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

    ERIC Educational Resources Information Center

    Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández

    2013-01-01

    Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…

  12. A Validity Study of the Multiple-Choice Component of the Advanced Placement Chemistry Examination.

    ERIC Educational Resources Information Center

    Modu, Christopher C.; Taft, Hessy L.

    1982-01-01

    Compares performance of first-year general chemistry college students from 32 institutions with performance of Advanced Placement (AP) Chemistry Candidates in 1978 to provide a concurrent validity measure of the multiple-choice section of the AP chemistry examination. Average AP candidates scored significantly higher than average college students.…

  13. Using module analysis for multiple choice responses: A new method applied to Force Concept Inventory data

    NASA Astrophysics Data System (ADS)

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-12-01

    We describe Module Analysis for Multiple Choice Responses (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual modules that are present in student responses that are more specific than the broad categorization of questions that is possible with factor analysis and to incorporate non-normative responses. Thus, this method may prove to have greater utility in helping to modify instruction. In MAMCR the responses to a multiple choice assessment are first treated as a bipartite student × response network, which is then projected into a response × response network. We then use data reduction and community detection techniques to identify modules of non-normative responses. To illustrate the utility of the method we have analyzed one cohort of postinstruction Force Concept Inventory (FCI) responses. From this analysis, we find nine modules which we then interpret. The first three modules include the following: Impetus Force, More Force Yields More Results, and Force as Competition or Undistinguished Velocity and Acceleration. This method has a variety of potential uses, particularly to help classroom instructors in using multiple choice assessments as diagnostic instruments beyond the Force Concept Inventory.
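
    The graph construction described above can be prototyped with standard network tools. A minimal sketch (invented answer data; greedy modularity is used here as a stand-in for the data-reduction and community-detection steps of the published MAMCR procedure):

      import networkx as nx
      from networkx.algorithms import bipartite, community

      # student -> list of selected responses, labelled "question.option"
      answers = {
          "s1": ["Q1.B", "Q2.C", "Q3.B"],
          "s2": ["Q1.B", "Q2.C", "Q3.A"],
          "s3": ["Q1.A", "Q2.D", "Q3.D"],
          "s4": ["Q1.A", "Q2.D", "Q3.D"],
      }

      B = nx.Graph()
      B.add_nodes_from(answers, bipartite=0)                       # student nodes
      for student, responses in answers.items():
          B.add_edges_from((student, r) for r in responses)        # response nodes added implicitly

      response_nodes = {node for node in B if node not in answers}
      R = bipartite.weighted_projected_graph(B, response_nodes)    # response x response network
      modules = community.greedy_modularity_communities(R, weight="weight")
      print([sorted(m) for m in modules])                          # groups of co-occurring responses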

  14. The frequency of item writing flaws in multiple-choice questions used in high stakes nursing assessments.

    PubMed

    Tarrant, Marie; Knierim, Aimee; Hayes, Sasha K; Ware, James

    2006-12-01

    Multiple-choice questions are a common assessment method in nursing examinations. Few nurse educators, however, have formal preparation in constructing multiple-choice questions. Consequently, questions used in baccalaureate nursing assessments often contain item-writing flaws, or violations to accepted item-writing guidelines. In one nursing department, 2770 MCQs were collected from tests and examinations administered over a five-year period from 2001 to 2005. Questions were evaluated for 19 frequently occurring item-writing flaws, for cognitive level, for question source, and for the distribution of correct answers. Results show that almost half (46.2%) of the questions contained violations of item-writing guidelines and over 90% were written at low cognitive levels. Only a small proportion of questions were teacher generated (14.1%), while 36.2% were taken from testbanks and almost half (49.4%) had no source identified. MCQs written at a lower cognitive level were significantly more likely to contain item-writing flaws. While there was no relationship between the source of the question and item-writing flaws, teacher-generated questions were more likely to be written at higher cognitive levels (p<0.001). Correct answers were evenly distributed across all four options and no bias was noted in the placement of correct options. Further training in item-writing is recommended for all faculty members who are responsible for developing tests. Pre-test review and quality assessment is also recommended to reduce the occurrence of item-writing flaws and to improve the quality of test questions.

  15. The frequency of item writing flaws in multiple-choice questions used in high stakes nursing assessments.

    PubMed

    Tarrant, Marie; Knierim, Aimee; Hayes, Sasha K; Ware, James

    2006-12-01

    Multiple-choice questions are a common assessment method in nursing examinations. Few nurse educators, however, have formal preparation in constructing multiple-choice questions. Consequently, questions used in baccalaureate nursing assessments often contain item-writing flaws, or violations to accepted item-writing guidelines. In one nursing department, 2770 MCQs were collected from tests and examinations administered over a five-year period from 2001 to 2005. Questions were evaluated for 19 frequently occurring item-writing flaws, for cognitive level, for question source, and for the distribution of correct answers. Results show that almost half (46.2%) of the questions contained violations of item-writing guidelines and over 90% were written at low cognitive levels. Only a small proportion of questions were teacher generated (14.1%), while 36.2% were taken from testbanks and almost half (49.4%) had no source identified. MCQs written at a lower cognitive level were significantly more likely to contain item-writing flaws. While there was no relationship between the source of the question and item-writing flaws, teacher-generated questions were more likely to be written at higher cognitive levels (p<0.001). Correct answers were evenly distributed across all four options and no bias was noted in the placement of correct options. Further training in item-writing is recommended for all faculty members who are responsible for developing tests. Pre-test review and quality assessment is also recommended to reduce the occurrence of item-writing flaws and to improve the quality of test questions.

  16. Format Effects of Empirically Derived Multiple-Choice versus Free-Response Instruments When Assessing Graphing Abilities

    ERIC Educational Resources Information Center

    Berg, Craig; Boote, Stacy

    2017-01-01

    Prior graphing research has demonstrated that clinical interviews and free-response instruments produce very different results than multiple-choice instruments, indicating potential validity problems when using multiple-choice instruments to assess graphing skills (Berg & Smith in "Science Education," 78(6), 527-554, 1994). Extending…

  17. Comparison of Performance on Multiple-Choice Questions and Open-Ended Questions in an Introductory Astronomy Laboratory

    ERIC Educational Resources Information Center

    Wooten, Michelle M.; Cool, Adrienne M.; Prather, Edward E.; Tanner, Kimberly D.

    2014-01-01

    When considering the variety of questions that can be used to measure students' learning, instructors may choose to use multiple-choice questions, which are easier to score than responses to open-ended questions. However, by design, analyses of multiple-choice responses cannot describe all of students' understanding. One method that can…

  18. Sex Differences in the Relationship of Advanced Placement Essay and Multiple-Choice Scores to Grades in College Courses.

    ERIC Educational Resources Information Center

    Bridgeman, Brent; Lewis, Charles

    Essay and multiple-choice scores from Advanced Placement (AP) examinations in American History, European History, English Language and Composition, and Biology were matched with freshman grades in a sample of 32 colleges. Multiple-choice scores from the American History and Biology examinations were superior to essays for predicting overall grade…

  19. Meta-Evaluation in Clinical Anatomy: A Practical Application of Item Response Theory in Multiple Choice Examinations

    ERIC Educational Resources Information Center

    Severo, Milton; Tavares, Maria A. Ferreira

    2010-01-01

    The nature of anatomy education has changed substantially in recent decades, though the traditional multiple-choice written examination remains the cornerstone of assessing students' knowledge. This study sought to measure the quality of a clinical anatomy multiple-choice final examination using item response theory (IRT) models. One hundred…

  20. A Comparison of Information Functions of Multiple-Choice and Free-Response Vocabulary Items. Research Report 77-2.

    ERIC Educational Resources Information Center

    Vale, C. David; Weiss, David J.

    Twenty multiple-choice vocabulary items and 20 free-response vocabulary items were administered to 660 college students. The free-response items consisted of the stem words of the multiple-choice items. Testees were asked to respond to the free-response items with synonyms. A computer algorithm was developed to transform the numerous…

  1. Application of Item Analysis to Assess Multiple-Choice Examinations in the Mississippi Master Cattle Producer Program

    ERIC Educational Resources Information Center

    Parish, Jane A.; Karisch, Brandi B.

    2013-01-01

    Item analysis can serve as a useful tool in improving multiple-choice questions used in Extension programming. It can identify gaps between instruction and assessment. An item analysis of Mississippi Master Cattle Producer program multiple-choice examination responses was performed to determine the difficulty of individual examinations, assess the…
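
    The kind of item analysis referred to above can be reproduced with two classical statistics per item: the difficulty index (proportion correct) and an upper-minus-lower discrimination index. A sketch with a random stand-in response matrix:

      import numpy as np

      rng = np.random.default_rng(7)
      X = rng.integers(0, 2, size=(60, 25))                # 60 examinees x 25 items (stand-in data)

      difficulty = X.mean(axis=0)                          # proportion answering each item correctly
      totals = X.sum(axis=1)
      order = np.argsort(totals)
      k = max(1, len(totals) * 27 // 100)                  # conventional 27% upper and lower groups
      lower, upper = X[order[:k]], X[order[-k:]]
      discrimination = upper.mean(axis=0) - lower.mean(axis=0)

      print(np.round(difficulty[:5], 2), np.round(discrimination[:5], 2))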

  2. Sustainable Assessment for Large Science Classes: Non-Multiple Choice, Randomised Assignments through a Learning Management System

    ERIC Educational Resources Information Center

    Schultz, Madeleine

    2011-01-01

    This paper reports on the development of a tool that generates randomised, non-multiple choice assessment within the BlackBoard Learning Management System interface. An accepted weakness of multiple-choice assessment is that it cannot elicit learning outcomes from upper levels of Biggs' SOLO taxonomy. However, written assessment items require…

  3. Use of flawed multiple-choice items by the New England Journal of Medicine for continuing medical education.

    PubMed

    Stagnaro-Green, Alex S; Downing, Steven M

    2006-09-01

    Physicians in the United States are required to complete a minimum number of continuing medical education (CME) credits annually. The goal of CME is to ensure that physicians maintain their knowledge and skills throughout their medical career. The New England Journal of Medicine (NEJM) provides its readers with the opportunity to obtain weekly CME credits. Deviation from established item-writing principles may result in a decrease in validity evidence for tests. This study evaluated the quality of 40 NEJM MCQs using the standard evidence-based principles of effective item writing. Each multiple-choice item reviewed had at least three item flaws, with a mean of 5.1 and a range of 3 to 7. The results of this study demonstrate that the NEJM uses flawed MCQs in its weekly CME program.

  4. Odor naming methodology: correct identification with multiple-choice versus repeatable identification in a free task.

    PubMed

    Sulmont-Rossé, Claire; Issanchou, Sylvie; Köster, E P

    2005-01-01

    Since there is rarely a social labeling consensus in the identification of odors, it would be better to assess whether participants identify an odor by the same name upon repeated presentation rather than by the name designated as 'correct' by the experimenter (veridical label) in identification tasks. To examine the relevance of this proposition, participants were asked to identify familiar odors both in a free and a multiple-choice task. The free task was replicated in order to determine the percentage of repeatable identification. Results showed that the difference between the percentage of correct identification in the multiple-choice task and the percentage of repeatable identification in the free task was small, and that participants often used a repeatable name which differed from the veridical label. Thus, it was suggested that allowing participants to give their own name to an odor when it is not present on a pre-developed list, and measuring whether participants repeat the same name in independent measurements, might improve the relevance of multiple-choice tasks.

  5. Test-Taking Strategies on a Multiple-Choice Test of Reading Comprehension.

    ERIC Educational Resources Information Center

    Nevo, Nava

    1989-01-01

    Comparison of the reading comprehension processes used by Hebrew-speaking students of French in both their native and second languages found that there was a transfer of strategies from the native to the second language, although in the second language, students used more strategies that did not lead to the selection of a correct response than in…

  6. Comparison of performance on multiple-choice questions and open-ended questions in an introductory astronomy laboratory

    NASA Astrophysics Data System (ADS)

    Wooten, Michelle M.; Cool, Adrienne M.; Prather, Edward E.; Tanner, Kimberly D.

    2014-12-01

    When considering the variety of questions that can be used to measure students' learning, instructors may choose to use multiple-choice questions, which are easier to score than responses to open-ended questions. However, by design, analyses of multiple-choice responses cannot describe all of students' understanding. One method that can be used to learn more about students' learning is the analysis of the open-ended responses students provide when explaining their multiple-choice response. In this study, we examined the extent to which introductory astronomy students' performance on multiple-choice questions was comparable to their ability to provide evidence when asked to respond to an open-ended question. We quantified students' open-ended responses by developing rubrics that allowed us to score the amount of relevant evidence students provided. A minimum rubric score was determined for each question based on two astronomy educators' perception of the minimum amount of evidence needed to substantiate a scientifically accurate multiple-choice response. The percentage of students meeting both criteria of (1) attaining the minimum rubric score and (2) selecting the correct multiple-choice response was examined at three different phases of instruction: directly before lab instruction, directly after lab instruction, and at the end of the semester. Results suggested that a greater proportion of students were able to choose the correct multiple-choice response than were able to provide responses that attained the minimum rubric score at both the post-lab and post-instruction phases.

  7. Education techniques for lifelong learning: writing multiple-choice questions for continuing medical education activities and self-assessment modules.

    PubMed

    Collins, Jannette

    2006-01-01

    The multiple-choice question (MCQ) is the most commonly used type of test item in radiologic graduate medical and continuing medical education examinations. Now that radiologists are participating in the maintenance of certification process, there is an increased need for self-assessment modules that include MCQs and persons with test item-writing skills to develop such modules. Although principles of effective test item writing have been documented, violations of these principles are common in medical education. Guidelines for test construction are related to development of educational objectives, defining levels of learning for each objective, and writing effective MCQs that test that learning. Educational objectives should be written in observable, behavioral terms that allow for an accurate assessment of whether the learner has achieved the objectives. Learning occurs at many levels, from simple recall to problem solving. The educational objectives and the MCQs that accompany them should target all levels of learning appropriate for the given content. Characteristics of effective MCQs can be described in terms of the overall item, the stem, and the options. Flawed MCQs interfere with accurate and meaningful interpretation of test scores and negatively affect student pass rates. Therefore, to develop reliable and valid tests, items must be constructed that are free of such flaws. The article provides an overview of established guidelines for writing effective MCQs, a discussion of writing appropriate educational objectives and MCQs that match those objectives, and a brief review of item analysis.

  8. Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment.

    PubMed

    Prevost, Luanna B; Lemons, Paula P

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors.

  9. Written justifications to multiple-choice concept questions during active learning in class

    NASA Astrophysics Data System (ADS)

    Koretsky, Milo D.; Brooks, Bill J.; Higgins, Adam Z.

    2016-07-01

    Increasingly, instructors of large, introductory STEM courses are having students actively engage during class by answering multiple-choice concept questions individually and in groups. This study investigates the use of a technology-based tool that allows students to answer such questions during class. The tool also allows the instructor to prompt students to provide written responses to justify the selection of the multiple-choice answer that they have chosen. We hypothesize that prompting students to explain and elaborate on their answer choices leads to greater focus and use of normative scientific reasoning processes, and will allow them to answer questions correctly more often. The study contains two parts. First, a crossover quasi-experimental design is employed to determine the influence of asking students to individually provide written explanations (treatment condition) of their answer choices to 39 concept questions as compared to students who do not. Second, we analyze a subset of the questions to see whether students identify the salient concepts and use appropriate reasoning in their explanations. Results show that soliciting written explanations can have a significant influence on answer choice and, when it does, that influence is usually positive. However, students are not always able to articulate the correct reason for their answer.

  10. Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment

    PubMed Central

    Prevost, Luanna B.; Lemons, Paula P.

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021

  11. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

    NASA Astrophysics Data System (ADS)

    Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández

    2013-08-01

    Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, Eyegrade, a system for automatic grading of multiple choice exams, is presented. While most current solutions are based on expensive scanners, Eyegrade offers a truly low-cost solution requiring only a regular off-the-shelf webcam. Additionally, Eyegrade performs both mark recognition and optical character recognition of handwritten student identification numbers, which avoids the use of bubbles in the answer sheet. When compared with similar webcam-based systems, the user interface in Eyegrade has been designed to provide a more efficient and error-free data collection procedure. The tool has been validated with a set of experiments that show the ease of use (both setup and operation), the reduction in grading time, and an increase in the reliability of the results when compared with conventional, more expensive systems.
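
    The core optical-mark-recognition idea is simple even if a robust webcam pipeline is not. A heavily simplified sketch (not Eyegrade's actual implementation; the file name and cell coordinates are hypothetical): threshold the photographed sheet and, for each question, pick the answer cell containing the most dark pixels:

      import cv2

      img = cv2.imread("answer_sheet.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical sheet photo
      _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

      cells = {                                  # question -> {option: (x, y, w, h)} from the sheet layout
          1: {"A": (100, 200, 30, 30), "B": (150, 200, 30, 30),
              "C": (200, 200, 30, 30), "D": (250, 200, 30, 30)},
      }
      for q, options in cells.items():
          fill = {opt: cv2.countNonZero(binary[y:y + h, x:x + w])
                  for opt, (x, y, w, h) in options.items()}
          print(q, max(fill, key=fill.get))      # option whose cell has the most marked pixels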

  12. Examen en Vue du Diplome Douzieme Annee. Langue et Litterature 30. Partie B: Lecture (Choix Multiples). Livret de Questions (Examination for the Twelfth Grade Diploma, Language and Literature 30. Part B: Reading--Multiple Choice. Questions Booklet).

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    As part of an examination required by the Alberta (Canada) Department of Education in order for 12th grade students to receive a diploma in French, this booklet contains the 80 multiple choice questions portion of Part B, the language and literature component of the January 1987 tests. Representing the genres of poetry, short story, the novel, and…

  13. The Social Attribution Task-Multiple Choice (SAT-MC): A Psychometric and Equivalence Study of an Alternate Form.

    PubMed

    Johannesen, Jason K; Lurie, Jessica B; Fiszdon, Joanna M; Bell, Morris D

    2013-01-01

    The Social Attribution Task-Multiple Choice (SAT-MC) uses a 64-second video of geometric shapes set in motion to portray themes of social relatedness and intentions. Considered a test of "Theory of Mind," the SAT-MC assesses implicit social attribution formation while reducing verbal and basic cognitive demands required of other common measures. We present a comparability analysis of the SAT-MC and the new SAT-MC-II, an alternate form created for repeat testing, in a university sample (n = 92). Score distributions and patterns of association with external validation measures were nearly identical between the two forms, with convergent and discriminant validity supported by association with affect recognition ability and lack of association with basic visual reasoning. Internal consistency of the SAT-MC-II was superior (alpha = .81) to the SAT-MC (alpha = .56). Results support the use of SAT-MC and new SAT-MC-II as equivalent test forms. Demonstrating relatively higher association to social cognitive than basic cognitive abilities, the SAT-MC may provide enhanced sensitivity as an outcome measure of social cognitive intervention trials.

  14. The assessment of critical thinking skills in anatomy and physiology students who practice writing higher order multiple choice questions

    NASA Astrophysics Data System (ADS)

    Shaw, Jason

    Critical thinking is a complex abstraction that defies homogeneous interpretation. This means that no operational definition is universal and no critical thinking measurement tool is all encompassing. Instructors will likely find evidence-based strategies to facilitate thinking skills only as numerous research efforts from multiple disciplines accumulate. This study focuses on a question writing exercise designed to help anatomy and physiology students. Students were asked to design multiple choice questions that combined course concepts in new and novel ways. Instructions and examples were provided on how to construct these questions, and student attempts were sorted into levels one through three of Bloom's Cognitive Taxonomy (Bloom et al. 1956). Students submitted their question designs weekly and received individual feedback as to how they might improve. Eight course examinations were created to contain questions that modeled the Bloom's Cognitive Taxonomy levels that students were attempting. Students were assessed on their course examination performance as well as performance on a discipline-independent critical thinking test called the California Critical Thinking Skills Test (CCTST). The performance of students in this study was compared to students from two previous years who took the same course but did not have the question writing activity. Results suggest that students do not improve their ability to answer critical thinking multiple-choice questions when they practice the task of creating such problems. The effect of class level on critical thinking is examined, and it appears that the longer a student has attended college, the better the performance on both discipline-specific and discipline-independent critical thinking questions. The data were also used to analyze students who improved their course examination grades in the second semester of this course. There is a pattern to suggest that students who improve their performance on course examinations…

  15. A set partitioning reformulation for the multiple-choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Voß, Stefan; Lalla-Ruiz, Eduardo

    2016-05-01

    The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community, as it translates readily to several real-world problems in areas such as resource allocation, reliability engineering, cognitive radio networks, and cloud computing. In this regard, an exact model that is able to provide high-quality feasible solutions, either on its own or partially embedded in algorithmic schemes, is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best known results for some of the most common benchmark instances.
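
    For readers unfamiliar with the MMKP, the structure is simple to state: items are partitioned into groups, exactly one item must be chosen from each group, and the chosen items must jointly respect several resource capacities while maximizing total profit. The sketch below is not the article's set partitioning model; all numbers are invented toy values, and it simply enumerates one-item-per-group selections by brute force to make the constraints concrete for tiny instances.

```python
# Minimal illustrative sketch of the classical MMKP (not the set partitioning
# reformulation from the article): choose exactly one item per group so that
# total profit is maximized and every resource capacity is respected.
# All data below are made-up toy values.
from itertools import product

# groups[g] = list of (profit, (resource_1, ..., resource_m)) for each candidate item
groups = [
    [(10, (3, 2)), (7, (1, 4))],                # group 0
    [(6, (2, 2)), (12, (5, 3)), (4, (1, 1))],   # group 1
    [(8, (2, 5)), (9, (4, 1))],                 # group 2
]
capacity = (8, 8)  # capacity of each of the m = 2 resources


def solve_mmkp_bruteforce(groups, capacity):
    """Exhaustively enumerate one-item-per-group selections (fine for tiny instances)."""
    best_profit, best_choice = float("-inf"), None
    for choice in product(*groups):
        usage = [sum(item[1][r] for item in choice) for r in range(len(capacity))]
        if all(u <= c for u, c in zip(usage, capacity)):
            profit = sum(item[0] for item in choice)
            if profit > best_profit:
                best_profit, best_choice = profit, choice
    return best_profit, best_choice


if __name__ == "__main__":
    profit, choice = solve_mmkp_bruteforce(groups, capacity)
    print("best profit:", profit)
    print("chosen items:", choice)
```

    Exact formulations such as the set partitioning model become necessary as soon as the groups grow, since the number of candidate selections is the product of the group sizes.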

  16. Adverse selection with a multiple choice among health insurance plans: a simulation analysis.

    PubMed

    Marquis, M S

    1992-08-01

    This study uses simulation methods to quantify the effects of adverse selection. The data used to develop the model provide information about whether families can accurately forecast their risk and whether this forecast affects the purchase of insurance coverage--key conditions for adverse selection to matter. The results suggest that adverse selection is sufficient to eliminate high-option benefit plans in multiple choice markets if insurers charge a single, experience-rated premium. Adverse selection is substantially reduced if premiums are varied according to demographic factors. Adverse selection is also restricted in supplementary insurance markets. In this market, supplementary policies are underpriced because a part of the additional benefits that purchasers can expect is a cost to the base plan and is not reflected in the supplementary premium. As a result, full supplementary coverage is attractive to both low and high risks.

  17. Rasch scaling procedures for informing development of a valid Fetal Surveillance Education Program multiple-choice assessment

    PubMed Central

    Zoanetti, Nathan; Griffin, Patrick; Beaves, Mark; Wallace, Euan M

    2009-01-01

    Background It is widely recognised that deficiencies in fetal surveillance practice continue to contribute significantly to the burden of adverse outcomes. This has prompted the development of evidence-based clinical practice guidelines by the Royal Australian and New Zealand College of Obstetricians and Gynaecologists and an associated Fetal Surveillance Education Program to deliver the associated learning. This article describes initial steps in the validation of a corresponding multiple-choice assessment of the relevant educational outcomes through a combination of item response modelling and expert judgement. Methods The Rasch item response model was employed for item and test analysis and to empirically derive the substantive interpretation of the assessment variable. This interpretation was then compared to the hierarchy of competencies specified a priori by a team of eight subject-matter experts. Classical Test Theory analyses were also conducted. Results A high level of agreement between the hypothesised and derived variable provided evidence of construct validity. Item and test indices from Rasch analysis and Classical Test Theory analysis suggested that the current test form was of moderate quality. However, the analyses made clear the required steps for establishing a valid assessment of sufficient psychometric quality. These steps included: increasing the number of items from 40 to 50 in the first instance, reviewing ineffective items, targeting new items to specific content and difficulty gaps, and formalising the assessment blueprint in light of empirical information relating item structure to item difficulty. Conclusion The application of the Rasch model for criterion-referenced assessment validation with an expert stakeholder group is herein described. Recommendations for subsequent item and test construction are also outlined in this article. PMID:19402898
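
    For context (this is general background, not the authors' analysis or software), the dichotomous Rasch model used in validations of this kind expresses the probability of a correct response as a logistic function of the difference between person ability and item difficulty, both on a common logit scale. A minimal sketch with invented item difficulties:

```python
# Minimal sketch of the dichotomous Rasch model used in analyses like this one
# (illustrative only; the article's estimation was done with dedicated software,
# and the 40 item difficulties below are invented).
import math

def rasch_probability(theta, delta):
    """P(correct) = exp(theta - delta) / (1 + exp(theta - delta))."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

def expected_score(theta, item_difficulties):
    """Expected number-correct score on the test for a person of ability theta."""
    return sum(rasch_probability(theta, d) for d in item_difficulties)

# Toy 40-item test: difficulties spread evenly from easy (-2 logits) to hard (+2 logits).
difficulties = [-2 + 4 * i / 39 for i in range(40)]
for theta in (-1.0, 0.0, 1.0):
    print(f"ability {theta:+.1f}: expected score {expected_score(theta, difficulties):.1f} / 40")
```

    Once item difficulties are estimated from response data, their empirical ordering can be compared with the hierarchy of competencies specified by the subject-matter experts, which is the construct-validity check described above.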

  18. An analysis of complex multiple-choice science-technology-society items: Methodological development and preliminary results

    NASA Astrophysics Data System (ADS)

    Vázquez-Alonso, Ángel; Manassero-Mas, María-Antonia; Acevedo-Díaz, José-Antonio

    2006-07-01

    The scarce attention to the assessment and evaluation in science education research has been especially harmful for teaching science-technology-society (STS) issues, due to the dialectical, tentative, value-laden, and polemic nature of most STS topics. This paper tackles the methodological difficulties of the instruments that monitor views related to STS topics and rationalizes a quantitative methodology and an analysis technique to improve the utility of an empirically developed multiple-choice item pool, the Questionnaire of Opinions on STS. This methodology embraces an item-scaling psychometrics based on the judgments by a panel of experts, a multiple response model, a scoring system, and the data analysis. The methodology finally produces normalized attitudinal indices that represent the respondent's reasoned beliefs toward STS statements, the respondent's position on an item that comprises several statements, or the respondent's position on an entire STS topic that encompasses a set of items. Some preliminary results show the methodology's ability to evaluate the STS attitudes in a qualitative and quantitative way and for statistical hypothesis testing. Lastly, some applications for teacher training and STS curriculum development in science classrooms are discussed.

  19. Force Concept Inventory-Based Multiple-Choice Test for Investigating Students' Representational Consistency

    ERIC Educational Resources Information Center

    Nieminen, Pasi; Savinainen, Antti; Viiri, Jouni

    2010-01-01

    This study investigates students' ability to interpret multiple representations consistently (i.e., representational consistency) in the context of the force concept. For this purpose we developed the Representational Variant of the Force Concept Inventory (R-FCI), which makes use of nine items from the 1995 version of the Force Concept Inventory…

  20. Proceedings of the 4th NAII Conference. Multiple Choice: The True Test of the Future.

    ERIC Educational Resources Information Center

    National Association For the Individualization of Instruction, Wyandach, NY.

    The report of a conference on individualized instruction contains brief descriptions of the two conference sponsors--the Westinghouse Learning Corporation and the National Association for the Individualization of Instruction. The report also provides abstracts of some speeches delivered at the conference and biographies of the speakers. A…

  1. Use of a multiple-choice procedure with college student drinkers.

    PubMed

    Little, Carrie; Correia, Christopher J

    2006-12-01

    The Multiple-Choice Procedure (MCP) was developed to investigate the relationship between drug preferences and alternative reinforcers. The current studies were designed to validate survey and laboratory versions of the MCP with college student drinkers. In Study 1, 320 undergraduates with a recent history of alcohol consumption used a survey version of the MCP to make 120 discrete hypothetical choices between two amounts of alcohol and escalating amounts of money delivered immediately or after a 1-week delay. In Study 2, 21 undergraduates completed a laboratory version of the MCP to make 120 discrete choices involving real alcohol and monetary payments. Responses to both versions of the MCP were related to measures of alcohol use and varied as a function of delay associated with the money choice. Responses to the survey version of the MCP also varied as a function of the amount of alcohol hypothetically available. The results of the 2 studies are consistent with a behavioral choice perspective of alcohol use, which focuses on preferences in the context of competing alternative reinforcers.

  2. [Development of a Set of Rehabilitation Related Multiple-choice-questions in Medical Education].

    PubMed

    Gutt, S; Bergelt, C; Faller, H; Krischak, G; Spyra, K; Uhlmann, A; Mau, W

    2015-08-01

    In rehabilitation-related teaching, as in other subjects of medical training, multiple-choice (MC) examinations are the most frequent type of examination. Compared to other subjects, only a few MC questions are available for the interdisciplinary subject of rehabilitation. Therefore, an internet-based online platform, "Pool of rehabilitation related MC questions", was developed to assist teachers with the provision, design and organization of high-quality rehabilitation-related MC questions. A total of 502 existing MC questions were collected from 12 German medical faculties. After removal of 59 questions unsuitable for formal or content reasons, 443 questions were presented to 6 reviewers for triple review (a total of 1,329 expert reviews were received). Of the 502 questions, 335 (67%) were included in the final pool, including short cases with 46 case studies. The questions refer to the following learning objectives: principles of rehabilitation (40%), rehabilitative interventions (20%), diagnosis and assessment (18%), initiation and control of the rehabilitation process (12%), and methods/quality of rehabilitative interventions (10%). Use of the online platform modules and of the questions is free for lecturers. This includes the compilation and output of complete examinations, statistical evaluation, and other audit-related materials. This examination pool counteracts the current lack of quality-assured rehabilitation-related MC questions and contributes to common standards for rehabilitation-related examinations across the medical faculties.

  3. Student-Generated Content: Enhancing learning through sharing multiple-choice questions

    NASA Astrophysics Data System (ADS)

    Hardy, Judy; Bates, Simon P.; Casey, Morag M.; Galloway, Kyle W.; Galloway, Ross K.; Kay, Alison E.; Kirsop, Peter; McQueen, Heather A.

    2014-09-01

    The relationship between students' use of PeerWise, an online tool that facilitates peer learning through student-generated content in the form of multiple-choice questions (MCQs), and achievement, as measured by their performance in the end-of-module examinations, was investigated in 5 large early-years science modules (in physics, chemistry and biology) across 3 research-intensive UK universities. A complex pattern was observed in terms of which type of activity (writing, answering or commenting on questions) was most beneficial for students; however, there was some evidence that students of lower intermediate ability may have gained particular benefit. In all modules, a modest but statistically significant positive correlation was found between students' PeerWise activity and their examination performance, after taking prior ability into account. This suggests that engaging with the production and discussion of student-generated content in the form of MCQs can support student learning in a way that is not critically dependent on course, institution, instructor or student.

  4. A participatory learning approach to biochemistry using student authored and evaluated multiple-choice questions.

    PubMed

    Bottomley, Steven; Denny, Paul

    2011-01-01

    A participatory learning approach, combined with both a traditional and a competitive assessment, was used to motivate students and promote a deep approach to learning biochemistry. Students were challenged to research, author, and explain their own multiple-choice questions (MCQs). They were also required to answer, evaluate, and discuss MCQs written by their peers. The technology used to support this activity was PeerWise--a freely available, innovative web-based system that supports students in the creation of an annotated question repository. In this case study, we describe students' contributions to, and perceptions of, the PeerWise system for a cohort of 107 second-year biomedical science students from three degree streams studying a core biochemistry subject. Our study suggests that the students are eager participants and produce a large repository of relevant, good quality MCQs. In addition, they rate the PeerWise system highly and use higher order thinking skills while taking an active role in their learning. We also discuss potential issues and future work using PeerWise for biomedical students.

  5. Behavioral economic analysis of drug preference using multiple choice procedure data.

    PubMed

    Greenwald, Mark K

    2008-01-11

    The multiple choice procedure (MCP) has been used to evaluate preference for psychoactive drugs, relative to money amounts (price), in human subjects. The present re-analysis shows that MCP data are compatible with behavioral economic analysis of drug choices. Demand curves were constructed from studies with intravenous fentanyl, intramuscular hydromorphone and oral methadone in opioid-dependent individuals; oral d-amphetamine, oral MDMA alone and during fluoxetine treatment, and smoked marijuana alone or following naltrexone pretreatment in recreational drug users. For each participant and dose, the MCP crossover point was converted into unit price (UP) by dividing the money value ($) by the drug dose (mg/70 kg). At the crossover value, the dose ceases to function as a reinforcer, so "0" was entered for this and higher UPs to reflect lack of drug choice. At lower UPs, the dose functions as a reinforcer and "1" was entered to reflect drug choice. Data for UP vs. average percent choice were plotted in log-log space to generate demand functions. Rank order of opioid inelasticity (slope of the non-linear regression) was: fentanyl > hydromorphone (continuing heroin users) > methadone > hydromorphone (heroin abstainers). Rank order of psychostimulant inelasticity was d-amphetamine > MDMA > MDMA+fluoxetine. Smoked marijuana was more inelastic with high-dose naltrexone. These findings show that this method translates individuals' drug preferences into estimates of population demand, which has the potential to yield insights into pharmacotherapy efficacy, abuse liability assessment, and individual differences in susceptibility to drug abuse.
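
    As an illustration of the conversion described above (toy numbers only, not the study's data; the dose, money values and crossover points are invented, and a simple linear log-log fit stands in for the study's non-linear regression), the sketch below turns MCP crossover values into unit prices and binary choice indicators, then reads an elasticity-like slope off the demand curve:

```python
# Minimal sketch (toy data, not the study's) of how MCP crossover points can be
# turned into a demand curve: unit price = money value / dose, each participant
# "demands" the drug (1) below their crossover and not (0) at or above it, and
# elasticity is approximated by the slope of average choice vs. unit price in log-log space.
import numpy as np

dose_mg_per_70kg = 20.0
money_values = np.array([0.25, 0.5, 1, 2, 4, 8, 16, 32])   # ascending money options ($)
unit_prices = money_values / dose_mg_per_70kg               # $ per mg/70 kg

# Hypothetical crossover values ($) for five participants at this dose.
crossovers = np.array([4, 8, 8, 16, 32])

# 1 = drug chosen (money value below the participant's crossover), else 0.
choices = (money_values[None, :] < crossovers[:, None]).astype(float)
avg_choice = choices.mean(axis=0)                            # average proportion choosing drug

# Fit the demand slope in log-log space, keeping only prices with nonzero demand.
mask = avg_choice > 0
slope, intercept = np.polyfit(np.log(unit_prices[mask]), np.log(avg_choice[mask]), 1)
print(f"estimated demand slope (elasticity proxy): {slope:.2f}")
```

    A flatter (less negative) slope corresponds to more inelastic demand, which is the sense in which the drug conditions above are rank-ordered.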

  6. Diagnosing Secondary Students' Misconceptions of Photosynthesis and Respiration in Plants Using a Two-Tier Multiple Choice Instrument.

    ERIC Educational Resources Information Center

    Haslam, Filocha; Treagust, David F.

    1987-01-01

    Describes a multiple-choice instrument that reliably and validly diagnoses secondary students' understanding of photosynthesis and respiration in plants. Highlights the consistency of students' misconceptions across secondary levels and indicates a high percentage of students have misconceptions regarding plant physiology. (CW)

  7. Understanding Rasch Measurement: Distractors with Information in Multiple Choice Items: A Rationale Based on the Rasch Model

    ERIC Educational Resources Information Center

    Andrich, David; Styles, Irene

    2011-01-01

    There is a substantial literature on attempts to obtain information on the proficiency of respondents from distractors in multiple choice items. Information in a distractor implies that a person who chooses that distractor has greater proficiency than if the person chose another distractor with no information. A further implication is that the…

  8. The Development and Validation of a Two-Tiered Multiple-Choice Instrument to Identify Alternative Conceptions in Earth Science

    ERIC Educational Resources Information Center

    Mangione, Katherine Anna

    2010-01-01

    The purpose of this study was to determine reliability and validity for a two-tiered, multiple-choice instrument designed to identify alternative conceptions in earth science. Additionally, this study sought to identify alternative conceptions in earth science held by preservice teachers, to investigate relationships between self-reported confidence scores and…

  9. An Australian Study Comparing the Use of Multiple-Choice Questionnaires with Assignments as Interim, Summative Law School Assessment

    ERIC Educational Resources Information Center

    Huang, Vicki

    2017-01-01

    To the author's knowledge, this is the first Australian study to empirically compare the use of a multiple-choice questionnaire (MCQ) with the use of a written assignment for interim, summative law school assessment. This study also surveyed the same student sample as to what types of assessments are preferred and why. In total, 182 undergraduate…

  10. Estimating the Effect on Grades of Using Multiple-Choice versus Constructive-Response Questions: Data from the Classroom

    ERIC Educational Resources Information Center

    Hickson, Stephen; Reed, W. Robert; Sander, Nicholas

    2012-01-01

    This study investigates the degree to which grades based solely on constructed-response (CR) questions differ from grades based solely on multiple-choice (MC) questions. If CR questions are to justify their higher costs, they should produce different grade outcomes than MC questions. We use a data set composed of thousands of observations on…

  11. Reliability and Validity of Two-Option Multiple-Choice and Comparably Written True-False Items.

    ERIC Educational Resources Information Center

    Sax, Gilbert; Reiter, Pauline B.

    Despite the popularity of both multiple-choice (MC) and true-false (TF) items, most investigations comparing the two formats have done so to determine the optimum number of choices to be given to students within a given time period. The purpose of this investigation was to compare the reliabilities and the validities of both formats when the items…

  12. A One-Day Dental Faculty Workshop in Writing Multiple-Choice Questions: An Impact Evaluation.

    PubMed

    AlFaris, Eiad; Naeem, Naghma; Irfan, Farhana; Qureshi, Riaz; Saad, Hussain; Al Sadhan, Ra'ed; Abdulghani, Hamza Mohammad; Van der Vleuten, Cees

    2015-11-01

    Long training workshops on the writing of exam questions have been shown to be effective; however, the effectiveness of short workshops needs to be demonstrated. The aim of this study was to evaluate the impact of a one-day, seven-hour faculty development workshop at the College of Dentistry, King Saud University, Saudi Arabia, on the quality of multiple-choice questions (MCQs). Kirkpatrick's four-level evaluation model was used. Participants' satisfaction (Kirkpatrick's Level 1) was evaluated with a post-workshop questionnaire. A quasi-experimental, randomized separate sample, pretest-posttest design was used to assess the learning effect (Kirkpatrick's Level 2). To evaluate transfer of learning to practice (Kirkpatrick's Level 3), MCQs created by ten faculty members as a result of the training were assessed. To assess Kirkpatrick's Level 4 regarding institutional change, interviews with three key leaders of the school were conducted, coded, and analyzed. A total of 72 course directors were invited to and attended some part of the workshop; all 52 who attended the entire workshop completed the satisfaction form; and 22 of the 36 participants in the experimental group completed the posttest. The results showed that all 52 participants were highly satisfied with the workshop, and significant positive changes were found in the faculty members' knowledge and the quality of their MCQs with effect sizes of 0.7 and 0.28, respectively. At the institutional level, the interviews demonstrated positive structural changes in the school's assessment system. Overall, this one-day item-writing faculty workshop resulted in positive changes at all four of Kirkpatrick's levels; these effects suggest that even a short training session can improve a dental school's assessment of its students.

  13. Exploring problem solving strategies on multiple-choice science items: Comparing native Spanish-speaking English Language Learners and mainstream monolinguals

    NASA Astrophysics Data System (ADS)

    Kachchaf, Rachel Rae

    The purpose of this study was to compare how English language learners (ELLs) and monolingual English speakers solved multiple-choice items administered with and without a new form of testing accommodation, vignette illustration (VI). By incorporating theories from second language acquisition, bilingualism, and sociolinguistics, this study was able to gain more accurate and comprehensive insight into the ways students interacted with items. This mixed-methods study used verbal protocols to elicit the thinking processes of 36 native Spanish-speaking English language learners (ELLs) and 36 native English-speaking non-ELLs when solving multiple-choice science items. Results from both qualitative and quantitative analyses show that ELLs used a wider variety of actions oriented to making sense of the items than non-ELLs. In contrast, non-ELLs used more problem solving strategies than ELLs. There were no statistically significant differences in student performance based on the interaction of presence of illustration and linguistic status, or on the main effect of presence of illustration. However, there were significant differences based on the main effect of linguistic status. An interaction between the characteristics of the students, the items, and the illustrations indicates considerable heterogeneity in the ways in which students from both linguistic groups think about and respond to science test items. The results of this study speak to the need for more research involving ELLs in the process of test development, to create test items that do not require ELLs to carry out significantly more actions to make sense of the item than monolingual students.

  14. Medical students' vs. family physicians' assessment of practical and logical values of pathophysiology multiple-choice questions.

    PubMed

    Secic, Damir; Husremovic, Dzenana; Kapur, Eldan; Jatic, Zaim; Hadziahmetovic, Nina; Vojnikovic, Benjamin; Fajkic, Almir; Meholjic, Amir; Bradic, Lejla; Hadzic, Amila

    2017-03-01

    Testing strategies can have either a very positive or a negative effect on the learning process. The aim of this study was to examine the degree of consistency between students and family medicine doctors in evaluating the practicality and logic of questions from a medical school pathophysiology test. The study engaged 77 family medicine doctors and 51 students. Ten questions were taken from cardiac pathophysiology and 10 questions from pulmonary pathophysiology, and each question was assessed on the criteria of practicality and logic. A nonparametric Mann-Whitney test was used to test the difference between evaluators. On the criterion of logic, only four out of 20 items were evaluated differently by students in comparison to doctors, two items each from the fields of cardiology and pulmonology. On the criterion of practicality, for six of the 20 items there were statistically significant differences between the students and doctors, with three items each from cardiology and pulmonology. Based on these indicative results, students should be involved in the qualitative assessment of exam questions, which should be performed regularly under a strictly regulated process.

  15. Treatment of burns in the first 24 hours: simple and practical guide by answering 10 questions in a step-by-step form

    PubMed Central

    2012-01-01

    Residents in training, medical students and other staff in the surgical sector, emergency room (ER), intensive care unit (ICU) or burn unit face a multitude of questions regarding burn care. Treatment of burns is not always straightforward. Furthermore, national and international guidelines differ from one region to another. On one hand, it is important to understand the pathophysiology, classification of burns, surgical treatment, and the latest updates in burn science. On the other hand, the clinical situation in which these cases are treated needs clear guidelines covering every single aspect of the treatment procedure. Thus, 10 questions have been organised and discussed in a step-by-step form in order to achieve excellence in education and optimal treatment of burn injuries in the first 24 hours. These 10 questions clearly discuss referral criteria to the burn unit, the primary and secondary survey, estimation of the total burned surface area (%TBSA) and the degree of burns, the resuscitation process, routine interventions, laboratory tests, indications for bronchoscopy and special considerations for inhalation trauma, immediate consultations and referrals, emergency surgery, and admission orders. Understanding and answering the 10 questions will not only cover the management of burns during the first 24 hours but also serve as a clear, interactive guide for educational purposes. PMID:22583548

  16. Treatment of burns in the first 24 hours: simple and practical guide by answering 10 questions in a step-by-step form.

    PubMed

    Alharbi, Ziyad; Piatkowski, Andrzej; Dembinski, Rolf; Reckort, Sven; Grieb, Gerrit; Kauczok, Jens; Pallua, Norbert

    2012-05-14

    Residents in training, medical students and other staff in the surgical sector, emergency room (ER), intensive care unit (ICU) or burn unit face a multitude of questions regarding burn care. Treatment of burns is not always straightforward. Furthermore, national and international guidelines differ from one region to another. On one hand, it is important to understand the pathophysiology, classification of burns, surgical treatment, and the latest updates in burn science. On the other hand, the clinical situation in which these cases are treated needs clear guidelines covering every single aspect of the treatment procedure. Thus, 10 questions have been organised and discussed in a step-by-step form in order to achieve excellence in education and optimal treatment of burn injuries in the first 24 hours. These 10 questions clearly discuss referral criteria to the burn unit, the primary and secondary survey, estimation of the total burned surface area (%TBSA) and the degree of burns, the resuscitation process, routine interventions, laboratory tests, indications for bronchoscopy and special considerations for inhalation trauma, immediate consultations and referrals, emergency surgery, and admission orders. Understanding and answering the 10 questions will not only cover the management of burns during the first 24 hours but also serve as a clear, interactive guide for educational purposes.

  17. Predicting social and communicative ability in school-age children with autism spectrum disorder: A pilot study of the Social Attribution Task, Multiple Choice.

    PubMed

    Burger-Caplan, Rebecca; Saulnier, Celine; Jones, Warren; Klin, Ami

    2016-11-01

    The Social Attribution Task, Multiple Choice is introduced as a measure of implicit social cognitive ability in children, addressing a key challenge in the quantification of social cognitive function in autism spectrum disorder, whereby individuals can often be successful in explicit social scenarios despite marked social adaptive deficits. The 19-question Social Attribution Task, Multiple Choice, which presents ambiguous stimuli meant to elicit social attribution, was administered to children with autism spectrum disorder (N = 23) and to age-matched and verbal IQ-matched typically developing children (N = 57). Performance on the Social Attribution Task, Multiple Choice differed between the autism spectrum disorder and typically developing groups, with typically developing children performing significantly better than children with autism spectrum disorder. Social Attribution Task, Multiple Choice scores were positively correlated with age (r = 0.474) while being independent of verbal IQ (r = 0.236). The Social Attribution Task, Multiple Choice was strongly correlated with Vineland Adaptive Behavior Scales Communication (r = 0.464) and Socialization (r = 0.482) scores, but not with Daily Living Skills scores (r = 0.116), suggesting that the implicit social cognitive ability underlying performance on the Social Attribution Task, Multiple Choice is associated with real-life social adaptive function.

  18. The Effect of Using Different Weights for Multiple-Choice and Free-Response Item Sections

    ERIC Educational Resources Information Center

    Hendrickson, Amy; Patterson, Brian; Melican, Gerald

    2008-01-01

    Presented at the annual meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weightings can affect the effective weights, validity coefficients, and test reliability of composite scores among test takers.

  19. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet. June 1988 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the English 30 Grade 12 Diploma Examinations in Alberta, Canada, this test (to be administered along with a questions booklet) contains the reading selections portion of Part B, the reading component of the June 1988 tests. Representing the genres of fiction, nonfiction, poetry, and drama, the 10 selections consist of:…

  20. Modified Multiple-Choice Items for Alternate Assessments: Reliability, Difficulty, and Differential Boost

    ERIC Educational Resources Information Center

    Kettler, Ryan J.; Rodriguez, Michael C.; Bolt, Daniel M.; Elliott, Stephen N.; Beddow, Peter A.; Kurz, Alexander

    2011-01-01

    Federal policy on alternate assessment based on modified academic achievement standards (AA-MAS) inspired this research. Specifically, an experimental study was conducted to determine whether tests composed of modified items would have the same level of reliability as tests composed of original items, and whether these modified items helped reduce…

  1. Do Multiple-Choice Options Inflate Estimates of Vocabulary Size on the VST?

    ERIC Educational Resources Information Center

    Stewart, Jeffrey

    2014-01-01

    Validated under a Rasch framework (Beglar, 2010), the Vocabulary Size Test (VST) (Nation & Beglar, 2007) is an increasingly popular measure of decontextualized written receptive vocabulary size in the field of second language acquisition. However, although the validation indicates that the test has high internal reliability, still unaddressed…

  2. Grade 12 Diploma Examination, English 33. Part B: Reading (Multiple Choice). Readings Booklet. June 1988 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the English 33 Grade 12 Diploma Examinations in Alberta, Canada, this test (to be administered along with a questions booklet) contains the reading selections portion of Part B, the reading component of the June 1988 tests. The following short selections taken from fiction, nonfiction, poetry, drama, and day-to-day…

  3. Data-mining to build a knowledge representation store for clinical decision support. Studies on curation and validation based on machine performance in multiple choice medical licensing examinations.

    PubMed

    Robson, Barry; Boray, Srinidhi

    2016-06-01

    Extracting medical knowledge by structured data mining of many medical records and from unstructured data mining of natural language source text on the Internet will become increasingly important for clinical decision support. Output from these sources can be transformed into large numbers of elements of knowledge in a Knowledge Representation Store (KRS), here using the notation and to some extent the algebraic principles of the Q-UEL Web-based universal exchange and inference language described previously, rooted in Dirac notation from quantum mechanics and linguistic theory. In a KRS, semantic structures or statements about the world of interest to medicine are analogous to natural language sentences seen as formed from noun phrases separated by verbs, prepositions and other descriptions of relationships. A convenient method of testing and better curating these elements of knowledge is by having the computer use them to take the test of a multiple choice medical licensing examination. It is a venture which perhaps tells us almost as much about the reasoning of students and examiners as it does about the requirements for Artificial Intelligence as employed in clinical decision making. It emphasizes the role of context and of contextual probabilities as opposed to the more familiar intrinsic probabilities, and of a preliminary form of logic that we call presyllogistic reasoning.

  4. Multiple choice answers: what to do when you have too many questions.

    PubMed

    Jupiter, Daniel C

    2015-01-01

    Carrying out too many statistical tests in a single study throws results into doubt, for reasons statistical and ethical. I discuss why this is the case and briefly mention ways to handle the problem.

  5. A Novel Multiple Choice Question Generation Strategy: Alternative Uses for Controlled Vocabulary Thesauri in Biomedical-Sciences Education

    PubMed Central

    Lopetegui, Marcelo A.; Lara, Barbara A.; Yen, Po-Yin; Çatalyürek, Ümit V.; Payne, Philip R.O.

    2015-01-01

    Multiple choice questions play an important role in training and evaluating biomedical science students. However, the resource-intensive nature of question generation limits their open availability, restricting their contribution mainly to evaluation purposes. Although applied-knowledge questions require a complex formulation process, the creation of concrete-knowledge questions (i.e., definitions, associations) could be assisted by the use of informatics methods. We envisioned a novel and simple algorithm that exploits validated knowledge repositories and generates concrete-knowledge questions by leveraging concepts' relationships. In this manuscript we present the development and validation of a prototype which successfully produced meaningful concrete-knowledge questions, opening new applications for existing knowledge repositories and potentially benefiting students of all biomedical sciences disciplines. PMID:26958222

  6. We Don't Live in a Multiple-Choice World: Inquiry and the Common Core

    ERIC Educational Resources Information Center

    Jaeger, Paige

    2012-01-01

    The Common Core raises the bar for states struggling to decide what should be taught or tested. As low-performing schools strive to improve instruction, the blueprint has been defined. The Common Core defines the curriculum in enough detail and specifies ways to teach that content creatively and innovatively, to produce graduates who are problem…

  7. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet. 1986 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 30 in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains 10 short reading selections taken from fiction, nonfiction, poetry, and drama, including the following: "My Magical Metronome" (Lewis Thomas); "Queen Street…

  8. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet. June 1989 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 30 in Alberta, Canada, this reading test (to be administered along with a questions booklet) includes the following eight short selections taken from fiction, nonfiction, poetry, and drama: "Loyalties" (Roo Borson); "Clever Animals" (Lewis Thomas); "Death of…

  9. Grade 12 Diploma Examination, English 33. Part B. Reading (Multiple Choice). Readings Booklet. 1987 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking Grade 12 Diploma Examinations in English 33 in Alberta, Canada, this reading test is designed to be administered with a questions booklet. The following short selections taken from fiction, nonfiction, poetry, drama, and day-to-day functional materials are included: (1) "Interpreter" (Gary Hyland); (2) an…

  10. Grade 12 Diploma Examination, English 33. Part B: Reading (Multiple Choice). Readings Booklet.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 33 in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains 8 short reading selections taken from fiction, nonfiction, poetry, and drama, including the following: an excerpt from "Circus Nerves" (Eric Nicol); "Follower"…

  11. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet. 1987 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 30 in Alberta, Canada, this reading test (to be administered along with a questions booklet) includes the following 10 short selections taken from fiction, nonfiction, poetry, and drama: "Parents as People (with Children)" (Ellen Goodman); "Everybody Knows about the…

  12. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet. 1988 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examinations in English 30 in Alberta, Canada, this reading test (to be administered along with a questions booklet) includes the following nine short selections taken from fiction, nonfiction, poetry, and drama: "The Biggest Liar in the World" (Harry Mark Petrakis); "Victorian…

  13. Grade 12 Diploma Examination, English 33. Part B: Reading. (Multiple Choice). Readings Booklet. 1988 Edition.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking Grade 12 Diploma Examinations in English 33 in Alberta, Canada, this reading test is designed to be administered with a questions booklet. The following short selections taken from fiction, nonfiction, poetry, drama, and day-to-day functional materials are included: (1) "M is for Mother" (Marjorie Riddle);…

  14. A Study of Three-option and Four-option Multiple Choice Exams.

    ERIC Educational Resources Information Center

    Cooper, Terence H.

    1988-01-01

    Describes a study used to determine differences in exam reliability, difficulty, and student evaluations. Indicates that when a fourth option was added to the three-option items, the exams became more difficult. Includes methods, results discussion, and tables on student characteristics, whole test analyses, and selected items. (RT)

  15. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 30 in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains 10 short reading selections taken from fiction, nonfiction, poetry and drama, including the following: "At the Age at Which Mozart Was Dead Already" (Ellen Goodman);…

  16. Grade 12 Diploma Examination, English 30. Part B: Reading (Multiple Choice). Readings Booklet.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    Intended for students taking the Grade 12 Examination in English 30 in Alberta, Canada, this reading test (to be administered along with the questions booklet) contains 10 short reading selections taken from fiction, nonfiction, poetry, and drama, including the following: an excerpt from "Where Did You Go?" "Out." (Robert Paul…

  17. Multiple-Choice Answers: To Change or Not to Change? Perhaps Not Such a Simple Question

    NASA Astrophysics Data System (ADS)

    Wainscott, Heidi

    2016-11-01

    When grading students' quizzes and exams, I find that students are seemingly always changing their answers from the right answer to the wrong answer. In fact, I have cautioned students against changing their answer. Colleagues have made similar observations and some books on test-taking strategies advise against answer-changing. In an effort to find out how pervasive the answer-changing problem was, I collected some data and dug a little deeper into the research. My hypothesis was that students most frequently changed their answers from right to wrong. Have you made similar observations? If yes, then the results of this study may surprise you.

  18. Juvenile mice show greater flexibility in multiple choice reversal learning than adults

    PubMed Central

    Johnson, Carolyn; Wilbrecht, Linda

    2011-01-01

    We hypothesized that decision-making strategies in juvenile animals, rather than being immature, are optimized to navigate the uncertainty and instability likely to be encountered in the environment at the time of the animal’s transition to independence. We tested juvenile and young adult mice on discrimination and reversal of a 4-choice and 2-choice odor-based foraging task. Juvenile mice (P26–27) learned a 4-choice discrimination and reversal faster than adults (P60–70), making fewer perseverative and distraction errors. Juvenile mice had shorter choice latencies and more focused search strategies. In both ages, performance of the task was significantly impaired by a lesion of the dorsomedial frontal cortex. Our data show that the frontal cortex can support highly flexible behavior in juvenile mice at a time coincident with weaning and first independence. The unexpected developmental decline in flexibility of behavior one month later suggests that frontal cortex based executive function may not inevitably become more flexible with age, but rather may be developmentally tuned to optimize exploratory and exploitative behavior for each life stage. PMID:21949556

  19. Gender and Ethnicity Differences in Multiple-Choice Testing. Effects of Self-Assessment and Risk-Taking Propensity

    DTIC Science & Technology

    1993-05-01

    and Non-Hispanic parents may not differ in the value they place on education for their children, Hispanic parents tend to encourage their…male children to pursue advanced education more than their female children. Mestre (1988) contends that there is a clear difference between Non-Hispanic…Hispanic children are more likely to do their homework than Non-Hispanic children, and that Hispanic parents are very supportive of their…

  20. Large-Scale Assessment of Language Proficiency: Theoretical and Pedagogical Reflections on the Use of Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Argüelles Álvarez, Irina

    2013-01-01

    The new requirement placed on students in tertiary settings in Spain to demonstrate a B1 or a B2 proficiency level of English, in accordance with the Common European Framework of Reference for Languages (CEFRL), has led most Spanish universities to develop a program of certification or accreditation of the required level. The first part of this…

  1. Adolescent exposure to methylphenidate impairs serial pattern learning in the serial multiple choice (SMC) task in adult rats.

    PubMed

    Rowan, James D; McCarty, Madison K; Kundey, Shannon M A; Osburn, Crystal D; Renaud, Samantha M; Kelley, Brian M; Matoushek, Amanda Willey; Fountain, Stephen B

    2015-01-01

    The long-term effects of adolescent exposure to methylphenidate (MPD) on adult cognitive capacity are largely unknown. We utilized a serial multiple choice (SMC) task, a sequential learning paradigm for studying complex learning, to observe the effects of methylphenidate exposure during adolescence on serial pattern acquisition during adulthood. Following exposure to 20.0 mg/kg/day MPD or saline for 5 days/week for 5 weeks during adolescence, male rats were trained as adults to produce a highly structured serial response pattern in an octagonal operant chamber for water reinforcement. During a transfer phase, a violation of the previously learned pattern structure was introduced as the last element of the sequential pattern. Results indicated that, while rats in both groups were able to learn the training and transfer patterns, adolescent exposure to MPD impaired learning for those aspects of the training-phase pattern that are learned through discrimination learning or serial position learning. In contrast, adolescent exposure to MPD had no effect on aspects of pattern learning that have been shown to tap into rule learning mechanisms. Additionally, adolescent MPD exposure impaired learning for the violation element in the transfer phase, indicating a deficit in the multi-item learning previously shown to be responsible for violation element learning. Thus, these results clearly show that adolescent MPD exposure produced multiple cognitive impairments in male rats that persisted into adulthood long after the exposure ended.

  2. The validity of multiple choice practical examinations as an alternative to traditional free response examination formats in gross anatomy.

    PubMed

    Shaibah, Hassan Sami; van der Vleuten, Cees P M

    2013-01-01

    Traditionally, an anatomy practical examination is conducted using a free response format (FRF). However, this format is resource-intensive, as it requires a relatively large time investment from anatomy course faculty in preparation and grading. Thus, several interventions have been reported where the response format was changed to a selected response format (SRF). However, validity evidence from those interventions has not proved entirely adequate for the practical anatomy examination, and thus, further investigation was required. In this study, the validity evidence of SRF was examined using multiple choice questions (MCQs) constructed according to different levels of Bloom's taxonomy in comparison with the traditional free response format. A group of 100 medical students registered in a gross anatomy course volunteered to be enrolled in this study. The experimental MCQ examinations were part of graded midterm and final steeplechase practical examination. Volunteer students were instructed to complete the practical examinations twice, once in each of two separate examination rooms. The two separate examinations consisted of a traditional free response format and MCQ format. Scores from the two examinations (FRF and MCQ) displayed a strong correlation, even with higher level Bloom's taxonomy questions. In conclusion, the results of this study provide empirical evidence that the SRF (MCQ) response format is a valid method and can be used as an alternative to the traditional FRF steeplechase examination.

  3. How Are the Form and Magnitude of DIF Effects in Multiple-Choice Items Determined by Distractor-Level Invariance Effects?

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2011-01-01

    This article explores how the magnitude and form of differential item functioning (DIF) effects in multiple-choice items are determined by the underlying differential distractor functioning (DDF) effects, as modeled under the nominal response model. The results of a numerical investigation indicated that (a) the presence of one or more nonzero DDF…
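
    For reference, here is a generic sketch of Bock's nominal response model, the model under which DDF is examined here; the slope and intercept values are invented, and the variable names a_k and c_k simply follow the standard notation. Each response option k receives a slope a_k and intercept c_k, and the probability of choosing it is a softmax over the options; DDF for an option then corresponds to a between-group difference in that option's parameters.

```python
# Minimal sketch of Bock's nominal response model for a multiple-choice item
# (illustrative only; parameter values are made up, not taken from the article).
import math

def nominal_option_probs(theta, slopes, intercepts):
    """P(option k | theta) = exp(a_k*theta + c_k) / sum_h exp(a_h*theta + c_h)."""
    z = [a * theta + c for a, c in zip(slopes, intercepts)]
    m = max(z)                                   # subtract max for numerical stability
    expz = [math.exp(v - m) for v in z]
    total = sum(expz)
    return [e / total for e in expz]

# A 4-option item: option 0 is the key (highest slope), options 1-3 are distractors.
slopes = [1.2, 0.2, -0.4, -1.0]
intercepts = [0.0, 0.5, 0.3, -0.2]
for theta in (-2.0, 0.0, 2.0):
    probs = nominal_option_probs(theta, slopes, intercepts)
    print(theta, [round(p, 2) for p in probs])
```

    Under this view, item-level DIF is whatever the option-level DDF effects aggregate to, which is the relationship the article examines.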

  4. Assessing Understanding of the Concept of Function: A Study Comparing Prospective Secondary Mathematics Teachers' Responses to Multiple-Choice and Constructed-Response Items

    ERIC Educational Resources Information Center

    Feeley, Susan Jane

    2013-01-01

    The purpose of this study was to determine whether multiple-choice and constructed-response items assessed prospective secondary mathematics teachers' understanding of the concept of function. The conceptual framework for the study was the Dreyfus and Eisenberg (1982) Function Block. The theoretical framework was Sierpinska's (1992, 1994)…

  5. A Stratified Study of Students' Understanding of Basic Optics Concepts in Different Contexts Using Two-Tier Multiple-Choice Items

    ERIC Educational Resources Information Center

    Chu, Hye-Eun; Treagust, David F.; Chandrasegaran, A. L.

    2009-01-01

    A large scale study involving 1786 year 7-10 Korean students from three school districts in Seoul was undertaken to evaluate their understanding of basic optics concepts using a two-tier multiple-choice diagnostic instrument consisting of four pairs of items, each of which evaluated the same concept in two different contexts. The instrument, which…

  6. Using Distractor-Driven Standards-Based Multiple-Choice Assessments and Rasch Modeling to Investigate Hierarchies of Chemistry Misconceptions and Detect Structural Problems with Individual Items

    ERIC Educational Resources Information Center

    Herrmann-Abell, Cari F.; DeBoer, George E.

    2011-01-01

    Distractor-driven multiple-choice assessment items and Rasch modeling were used as diagnostic tools to investigate students' understanding of middle school chemistry ideas. Ninety-one items were developed according to a procedure that ensured content alignment to the targeted standards and construct validity. The items were administered to 13360…

  7. Incorporating Multiple-Choice Questions into an AACSB Assurance of Learning Process: A Course-Embedded Assessment Application to an Introductory Finance Course

    ERIC Educational Resources Information Center

    Santos, Michael R.; Hu, Aidong; Jordan, Douglas

    2014-01-01

    The authors offer a classification technique to make a quantitative skills rubric more operational, with the groupings of multiple-choice questions to match the student learning levels in knowledge, calculation, quantitative reasoning, and analysis. The authors applied this classification technique to the mid-term exams of an introductory finance…

  8. A Comparison of the Performance on Three Multiple Choice Question Papers in Obstetrics and Gynecology Over a Period of Three Years Administered at Five London Medical Schools

    ERIC Educational Resources Information Center

    Stevens, J. M.; And Others

    1977-01-01

    Five of the medical schools in the University of London collaborated in administering one multiple choice question paper in obstetrics and gynecology, and results showed differences in performance between the five schools on questions and alternatives within questions. The rank order of the schools may result from differences in teaching methods.…

  9. Predicting Social and Communicative Ability in School-Age Children with Autism Spectrum Disorder: A Pilot Study of the Social Attribution Task, Multiple Choice

    ERIC Educational Resources Information Center

    Burger-Caplan, Rebecca; Saulnier, Celine; Jones, Warren; Klin, Ami

    2016-01-01

    The Social Attribution Task, Multiple Choice is introduced as a measure of implicit social cognitive ability in children, addressing a key challenge in quantification of social cognitive function in autism spectrum disorder, whereby individuals can often be successful in explicit social scenarios, despite marked social adaptive deficits. The…

  10. Analysis of the Difficulty and Discrimination Indices of Multiple-Choice Questions According to Cognitive Levels in an Open and Distance Learning Context

    ERIC Educational Resources Information Center

    Koçdar, Serpil; Karadag, Nejdet; Sahin, Murat Dogan

    2016-01-01

    This is a descriptive study which intends to determine whether the difficulty and discrimination indices of the multiple-choice questions show differences according to cognitive levels of the Bloom's Taxonomy, which are used in the exams of the courses in a business administration bachelor's degree program offered through open and distance…

  11. Development and Application of a Two-Tier Multiple Choice Diagnostic Instrument To Assess High School Students' Understanding of Inorganic Chemistry Qualitative Analysis.

    ERIC Educational Resources Information Center

    Tan, Kim Chwee Daniel; Goh, Ngoh Khang; Chia, Lian Sai; Treagust, David F.

    2002-01-01

    Describes the development and application of a two-tier multiple choice diagnostic instrument to assess high school students' understanding of inorganic chemistry qualitative analysis. Shows that the Grade 10 students had difficulty understanding the reactions involved in the identification of cations and anions, for example, double decomposition…

  12. Analyzing the Curriculum of the Faculty of Medicine, University of Gezira using Harden’s 10 questions framework

    PubMed Central

    AHMED, YASAR ALBUSHRA; ALNEEL, SALMA

    2017-01-01

    Introduction: Despite the importance of curriculum analysis for the internal refinement of a programme, the approach for such a step is under-described in the literature. This article describes the analysis of the medical curriculum at the Faculty of Medicine, University of Gezira (FMUG). This analysis is crucial in the era of innovative medical education, since introducing new curricula and curricular changes has become a common occurrence in medical education worldwide. Methods: The curriculum analysis was approached qualitatively, using descriptive analysis and adopting Harden's 10 Questions of curriculum development as a framework. Answering Harden's questions reflects the fundamental curricular components and how the different aspects of a curriculum framework fit together. The key features highlighted in the curriculum-related material and literature are presented. Results: The analysis of the FMUG curriculum reveals a curriculum with interactive components. Clear, structured objectives and goals reflect the faculty's vision. The approach to needs assessment is based on scientific grounds, and the integrated curriculum content has been set to meet national and international requirements. Adopting SPICES strategies helps FMUG and its students achieve the objectives of the curriculum. Multiple instructional methods are adopted, supporting attainment of the programme objectives and outcomes. A wide range of assessment methods has been adopted to assess the learning outcomes of the curriculum correctly, reliably, and in alignment with the intended outcomes. The prevailing conducive educational environment of FMUG is favourable for its operation and profoundly influences the outcome of the programme, and there is a well-defined policy for curriculum management, monitoring and evaluation. Conclusion: Harden's 10 questions are satisfactorily addressed by the multi-disciplinary and well-developed FMUG curriculum. The current curriculum supports the

  13. On the Use of Ordering Theory with Intelligence Test Items.

    ERIC Educational Resources Information Center

    Scheuneman, Janice

    This study investigated the feasibility of using the ordering theoretic procedure with multiple choice items, and its usefulness as an interpretive aid for intelligence test data. Data from two components of a group-administered multiple choice intelligence test (Otis-Lennon Mental Ability Tests) were analyzed using ordering theory procedure for…

  14. Higher Retention after a New Take-Home Computerised Test

    ERIC Educational Resources Information Center

    Park, Jooyong; Choi, Byung-Chul

    2008-01-01

    A new computerised testing system was used at home to promote learning and also to save classroom instruction time. The testing system combined the features of short-answer and multiple-choice formats. The questions of the multiple-choice problems were presented without the options so that students had to generate answers for themselves; they…

  15. Estimating Guessing Effects on the Vocabulary Levels Test for Differing Degrees of Word Knowledge

    ERIC Educational Resources Information Center

    Stewart, Jeffrey; White, David A.

    2011-01-01

    Multiple-choice tests such as the Vocabulary Levels Test (VLT) are often viewed as a preferable estimator of vocabulary knowledge when compared to yes/no checklists, because self-reporting tests introduce the possibility of students overreporting or underreporting scores. However, multiple-choice tests have their own unique disadvantages. It has…

  16. Grading Scheme, Test Difficulty, and the Immediate Feedback Assessment Technique

    ERIC Educational Resources Information Center

    DiBattista, David; Gosse, Leanne; Sinnige-Egger, Jo-Anne; Candale, Bela; Sargeson, Kim

    2009-01-01

    The authors examined how the grading scheme affects learning and students' reactions to the Immediate Feedback Assessment Technique (IFAT), an answer form providing immediate feedback on multiple-choice questions. Undergraduate students (N = 141) took a general-knowledge multiple-choice test of low, medium, or high difficulty. They used the IFAT…

  17. Introducing New Material for a Standardized Exam: A Controlled, Prospective, Double-Blinded Test of Student Learning

    ERIC Educational Resources Information Center

    Halperin, Kopl; Dunbar, William S.

    2016-01-01

    Do multiple choice unit tests reflect what students have learned during the unit? The day before the administration of a county-mandated multiple choice test, two classes were shown a topic they had not previously seen, and told it would be on the test. One class was shown the same material and told it was not important, and two classes were not…

  18. Medical Students' vs. Family Physicians' Assessment of Practical and Logical Values of Pathophysiology Multiple-Choice Questions

    ERIC Educational Resources Information Center

    Secic, Damir; Husremovic, Dzenana; Kapur, Eldan; Jatic, Zaim; Hadziahmetovic, Nina; Vojnikovic, Benjamin; Fajkic, Almir; Meholjic, Amir; Bradic, Lejla; Hadzic, Amila

    2017-01-01

    Testing strategies can either have a very positive or negative effect on the learning process. The aim of this study was to examine the degree of consistency in evaluating the practicality and logic of questions from a medical school pathophysiology test, between students and family medicine doctors. The study engaged 77 family medicine doctors…

  19. A univariate analysis of variance design for multiple-choice feeding-preference experiments: A hypothetical example with fruit-eating birds

    NASA Astrophysics Data System (ADS)

    Larrinaga, Asier R.

    2010-01-01

    I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design, for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays each containing a known weight of artificial fruits (red, blue, black, or green) are introduced into the cage, while four equivalent trays are left outside the cage, to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.
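
    To make the proposed layout concrete, the following is a minimal sketch (my illustration, not the author's actual analysis) of how trial-level data from such a design could be analysed as a univariate two-way ANOVA in Python with statsmodels; the column names and the numbers are assumptions for demonstration only.

        # Hypothetical sketch of a univariate ANOVA for a multiple-choice
        # feeding-preference trial; column names and data are illustrative,
        # not taken from the paper.
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        # Each row = one tray in one trial: fruit consumed (g), already
        # corrected for desiccation using the matched tray kept outside the cage.
        data = pd.DataFrame({
            "species":  ["A"] * 8 + ["B"] * 8,
            "colour":   ["red", "blue", "black", "green"] * 4,
            "consumed": [2.1, 0.4, 1.8, 0.3, 1.9, 0.5, 2.0, 0.2,
                         0.6, 1.7, 0.5, 1.9, 0.4, 1.8, 0.7, 2.0],
        })

        # Two-way ANOVA: does consumption depend on fruit colour, bird species,
        # or their interaction (i.e., species-specific colour preferences)?
        model = ols("consumed ~ C(species) * C(colour)", data=data).fit()
        print(sm.stats.anova_lm(model, typ=2))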

  20. Development of the Astronomy Diagnostic Test

    NASA Astrophysics Data System (ADS)

    Hufnagel, B.

    2001-12-01

    The starting point for questions in the Astronomy Diagnostic Test (ADT) Version 2.0 was two precursor surveys, the STAR Evaluation by Philip M. Sadler and Michael Zeilik's Astronomy Diagnostic Test Version 1.0. Questions were selected or developed for the new ADT which (1) addressed concepts included in most introductory astronomy courses for non-science majors, (2) included only concepts recognizable to most high-school graduates, (3) focused on one concept only, and (4) stressed concepts and not jargon. This version was administered to about 1000 students at four colleges and universities. The statistical results, e.g., item discrimination, guided re-writing and elimination of questions. Sixty student interviews at Montana State and the University of Maryland, as well as thirty written responses to the questions in open-ended format, were the basis for determining if the questions were interpreted by the students as intended. This student input was also the basis for distractors (wrong answers) reflecting the ideas and the words of the students themselves. After revision, the ADT was administered the next semester to 1557 students enrolled in 22 introductory classes, twenty students were interviewed, and comments solicited from the instructors of those classes. The result was the final ADT Version 2.0, which consists of 21 content and 12 student background multiple-choice questions. This work has been partly supported by NSF grant # DGE-9714489.
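
    The abstract cites item discrimination as one of the statistics that guided revision of the ADT. As a rough illustration only (not the ADT team's actual procedure), a classical upper-lower discrimination index for dichotomously scored items can be computed as follows; the simulated score matrix and the 0.2 cutoff are assumptions.

        # Illustrative computation of a classical item discrimination index
        # (upper vs. lower 27% groups); the data are simulated, not ADT responses.
        import numpy as np

        rng = np.random.default_rng(0)
        scores = (rng.random((1000, 21)) < 0.6).astype(int)   # 1000 students x 21 items, scored 0/1

        totals = scores.sum(axis=1)
        cut = int(0.27 * len(totals))
        upper = scores[np.argsort(totals)[-cut:]]              # top 27% by total score
        lower = scores[np.argsort(totals)[:cut]]               # bottom 27%

        discrimination = upper.mean(axis=0) - lower.mean(axis=0)
        for i, d in enumerate(discrimination, start=1):
            flag = "review" if d < 0.2 else "ok"               # common rule-of-thumb cutoff
            print(f"item {i:2d}: D = {d:+.2f} ({flag})")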

  1. The undergraduate curriculum of Faculty of Medicine and Health Sciences, Universiti Malaysia Sarawak in terms of Harden's 10 questions.

    PubMed

    Malik, Alam Sher; Malik, Rukhsana Hussain

    2002-11-01

    The curriculum of the Faculty of Medicine and Health Sciences (FMHS) is designed particularly to cater for the health needs of the State of Sarawak, Malaysia. The framework of the curriculum is built on four strands: biological knowledge, clinical skills, behavioural and population aspects. The training is community based and a graduate of FMHS is expected to possess the ability to deal with many ethnic groups with different cultures and beliefs; expertise in tropical infectious diseases; skills to deal with emergencies such as snakebite and near drowning; qualities of an administrator, problem-solver and community leader; and proficiency in information and communication technology. The content of the curriculum strives for commitment to lifelong learning and professional values. The FMHS has adopted a 'mixed economy' of education strategies and a 'mixed menu approach' to test a wide range of curriculum outcomes. The FMHS fosters intellectual and academic pursuits, encourages friendliness and a sense of social responsibility and businesslike efficiency.

  2. The Role of Essay Tests Assessment in e-Learning: A Japanese Case Study

    ERIC Educational Resources Information Center

    Nakayama, Minoru; Yamamoto, Hiroh; Santiago, Rowena

    2010-01-01

    e-Learning has some restrictions on how learning performance is assessed. Online testing is usually in the form of multiple-choice questions, without any essay type of learning assessment. Major reasons for employing multiple-choice tasks in e-learning include ease of implementation and ease of managing learner's responses. To address this…

  3. Distractor Efficiency: A Study into the Nature of Distractor Efficiency in Foreign Language Testing.

    ERIC Educational Resources Information Center

    Goodrich, Hubbard C.

    The aim of the research in question was to investigate the efficiency of various classes of distractors used in multiple-choice vocabulary question testing. Much of the quality of multiple-choice questions relies on the extent to which incorrect choices tempt the less proficient student. The study selected 8 classes of distractors and attempted to…

  4. Algorithms for Developing Test Questions from Sentences in Instructional Materials. Interim Report, January-September 1977.

    ERIC Educational Resources Information Center

    Roid, Gale; Finn, Patrick

    The feasibility of generating multiple-choice test questions by transforming sentences from prose instructional materials was examined. A computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were then transformed into multiple-choice items by four writers who…

  5. Exploring Secondary Students' Knowledge and Misconceptions about Influenza: Development, validation, and implementation of a multiple-choice influenza knowledge scale

    NASA Astrophysics Data System (ADS)

    Romine, William L.; Barrow, Lloyd H.; Folk, William R.

    2013-07-01

    Understanding infectious diseases such as influenza is an important element of health literacy. We present a fully validated knowledge instrument called the Assessment of Knowledge of Influenza (AKI) and use it to evaluate knowledge of influenza, with a focus on misconceptions, in Midwestern United States high-school students. A two-phase validation process was used. In phase 1, an initial factor structure was calculated based on 205 students of grades 9-12 at a rural school. In phase 2, one- and two-dimensional factor structures were analyzed from the perspectives of classical test theory and the Rasch model using structural equation modeling and principal components analysis (PCA) on Rasch residuals, respectively. Rasch knowledge measures were calculated for 410 students from 6 school districts in the Midwest, and misconceptions were verified through the χ² test. Eight items measured knowledge of flu transmission, and seven measured knowledge of flu management. While alpha reliability measures for the subscales were acceptable, Rasch person reliability measures and PCA on residuals advocated for a single-factor scale. Four misconceptions were found, which have not been previously documented in high-school students. The AKI is the first validated influenza knowledge assessment, and can be used by schools and health agencies to provide a quantitative measure of impact of interventions aimed at increasing understanding of influenza. This study also adds significantly to the literature on misconceptions about influenza in high-school students, a necessary step toward strategic development of educational interventions for these students.
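
    The abstract does not say how the χ² test was configured; one plausible reading (an assumption on my part) is a goodness-of-fit check of whether a specific misconception-linked option is chosen more often than guessing alone would predict. A minimal sketch, with invented counts and a four-option assumption:

        # Hypothetical chi-square check that a wrong option is chosen above chance;
        # the counts and the 4-option assumption are illustrative, not AKI data.
        from scipy.stats import chisquare

        n_students = 410
        observed = [212, 198]                  # chose the misconception option vs. any other option
        expected = [n_students * 0.25,         # chance level for one option out of four
                    n_students * 0.75]

        stat, p = chisquare(observed, f_exp=expected)
        print(f"chi2 = {stat:.1f}, p = {p:.2g}")   # small p -> option endorsed above chance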

  6. Do Students Know What They Know and What They Don't Know? Using a Four-Tier Diagnostic Test to Assess the Nature of Students' Alternative Conceptions

    ERIC Educational Resources Information Center

    Caleon, Imelda S.; Subramaniam, R.

    2010-01-01

    This study reports on the development and application of a four-tier multiple-choice (4TMC) diagnostic instrument, which has not been reported in the literature. It is an enhanced version of the two-tier multiple-choice (2TMC) test. As in 2TMC tests, its answer and reason tiers measure students' content knowledge and explanatory knowledge,…

  7. Test Science, Not Reading.

    ERIC Educational Resources Information Center

    Rakow, Steven J.; Gee, Thomas C.

    1987-01-01

    Reviews some of the ways researchers estimate readability with a focus on multiple choice test items in science. Presents criteria to consider for minimizing readability problems in test items. Examines samples from the National Assessment of Educational Progress test bank for readability. (ML)

  8. 10 Questions about Independent Reading

    ERIC Educational Resources Information Center

    Truby, Dana

    2012-01-01

    Teachers know that establishing a robust independent reading program takes more than giving kids a little quiet time after lunch. But how do they set up a program that will maximize their students' gains? Teachers have to know their students' reading levels inside and out, help them find just-right books, and continue to guide them during…

  9. Linking neuroscientific research on decision making to the educational context of novice students assigned to a multiple-choice scientific task involving common misconceptions about electrical circuits

    PubMed Central

    Potvin, Patrice; Turmel, Élaine; Masson, Steve

    2014-01-01

    Functional magnetic resonance imaging was used to identify the brain-based mechanisms of uncertainty and certainty associated with answers to multiple-choice questions involving common misconceptions about electric circuits. Twenty-two scientifically novice participants (humanities and arts college students) were asked, in an fMRI study, whether or not they thought the light bulbs in images presenting electric circuits were lighted up correctly, and if they were certain or uncertain of their answers. When participants reported that they were unsure of their responses, analyses revealed significant activations in brain areas typically involved in uncertainty (anterior cingulate cortex, anterior insula cortex, and superior/dorsomedial frontal cortex) and in the left middle/superior temporal lobe. Certainty was associated with large bilateral activations in the occipital and parietal regions usually involved in visuospatial processing. Correct-and-certain answers were associated with activations that suggest a stronger mobilization of visual attention resources when compared to incorrect-and-certain answers. These findings provide insights into brain-based mechanisms of uncertainty that are activated when common misconceptions, identified as such by science education research literature, interfere in decision making in a school-like task. We also discuss the implications of these results from an educational perspective. PMID:24478680

  10. Linking neuroscientific research on decision making to the educational context of novice students assigned to a multiple-choice scientific task involving common misconceptions about electrical circuits.

    PubMed

    Potvin, Patrice; Turmel, Elaine; Masson, Steve

    2014-01-01

    Functional magnetic resonance imaging was used to identify the brain-based mechanisms of uncertainty and certainty associated with answers to multiple-choice questions involving common misconceptions about electric circuits. Twenty-two scientifically novice participants (humanities and arts college students) were asked, in an fMRI study, whether or not they thought the light bulbs in images presenting electric circuits were lighted up correctly, and if they were certain or uncertain of their answers. When participants reported that they were unsure of their responses, analyses revealed significant activations in brain areas typically involved in uncertainty (anterior cingulate cortex, anterior insula cortex, and superior/dorsomedial frontal cortex) and in the left middle/superior temporal lobe. Certainty was associated with large bilateral activations in the occipital and parietal regions usually involved in visuospatial processing. Correct-and-certain answers were associated with activations that suggest a stronger mobilization of visual attention resources when compared to incorrect-and-certain answers. These findings provide insights into brain-based mechanisms of uncertainty that are activated when common misconceptions, identified as such by science education research literature, interfere in decision making in a school-like task. We also discuss the implications of these results from an educational perspective.

  11. General Chemistry Students' Understanding of the Chemistry Underlying Climate Science and the Development of a Two-Tiered Multiple-Choice Diagnostic Instrument

    NASA Astrophysics Data System (ADS)

    Versprille, A.; Towns, M.; Mahaffy, P.; Martin, B.; McKenzie, L.; Kirchhoff, M.

    2013-12-01

    As part of the NSF-funded Visualizing the Chemistry of Climate Change (VC3) project, we have developed a chemistry of climate science diagnostic instrument for use in general chemistry courses based on twenty-four student interviews. We have based our interview protocol on misconceptions identified in the research literature and the essential principles of climate change outlined in the CCSP document that pertain to chemistry (CCSP, 2009). The undergraduate student interviews elicited their understanding of the greenhouse effect, global warming, climate change, greenhouse gases, climate, and weather, and the findings from these interviews informed and guided the development of the multiple-choice diagnostic instrument. Our analysis and findings from the interviews indicate that students seem to confuse the greenhouse effect, global warming, and the ozone layer; in terms of chemistry concepts, the students lack a particulate-level understanding of greenhouse gases, which keeps them from fully conceptualizing the greenhouse effect and climate change. Details of the findings from the interviews, development of the diagnostic instrument, and preliminary findings from its full implementation will be shared.

  12. Methods and Materials for Teaching Occupational Survival Skills. Module Tests.

    ERIC Educational Resources Information Center

    Illinois Univ., Urbana. Dept. of Vocational and Technical Education.

    This document contains twelve sixteen-item multiple choice tests and answer keys for the modules in the Occupational Survival Skills series (CE 018 557-568.) (CE 018 556 describes the series and its development.) (JH)

  13. Stimulus Seeks Enriched Tests

    ERIC Educational Resources Information Center

    Sawchuk, Stephen

    2009-01-01

    No matter where teachers, state officials, and testing experts stand on the debate about school accountability, they generally agree that the United States' current multiple-choice-dominated K-12 tests are, to use language borrowed from the No Child Left Behind (NCLB) Act, "in need of improvement." Now, federal officials are…

  14. Test Design Project: Studies in Test Bias. Annual Report.

    ERIC Educational Resources Information Center

    McArthur, David

    Item bias in a multiple-choice test can be detected by appropriate analyses of the persons x items scoring matrix. This permits comparison of groups of examinees tested with the same instrument. The test may be biased if it is not measuring the same thing in comparable groups, if groups are responding to different aspects of the test items, or if…

  15. Central muscarinic cholinergic involvement in serial pattern learning: Atropine impairs acquisition and retention in a serial multiple choice (SMC) task in rats.

    PubMed

    Chenoweth, Amber M; Fountain, Stephen B

    2015-09-01

    Atropine sulfate is a muscarinic cholinergic antagonist which impairs acquisition and retention performance on a variety of cognitive tasks. The present study examined the effects of atropine on acquisition and retention of a highly-structured serial pattern in a serial multiple choice (SMC) task. Rats were given daily intraperitoneal injections of either saline or atropine sulfate (50 mg/kg) and trained in an octagonal operant chamber equipped with a lever on each wall. They learned to press the levers in a particular order (the serial pattern) for brain-stimulation reward in a discrete-trial procedure with correction. The two groups learned a pattern composed of eight 3-element chunks ending with a violation element: 123-234-345-456-567-678-781-818, where the digits represent the clockwise positions of levers in the chamber, dashes indicate 3-s pauses, and other intertrial intervals were 1 s. Central muscarinic cholinergic blockade by atropine caused profound impairments during acquisition, specifically in the encoding of chunk-boundary elements (the first element of chunks) and the violation element of the pattern, but had a significant but negligible effect on the encoding of within-chunk elements relative to saline-injected rats. These effects persisted when atropine was removed, and similar impairments were also observed in retention performance. The results indicate that intact central muscarinic cholinergic systems are necessary for learning and producing appropriate responses at places in sequences where pattern structure changes. The results also provide further evidence that multiple cognitive systems are recruited to learn and perform within-chunk, chunk-boundary, and violation elements of a serial pattern.
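
    The structure of that serial pattern can be made explicit with a small sketch (my illustration of the pattern described in the abstract, not the authors' code): each chunk steps +1 around the eight lever positions, and the final element breaks that rule, which is what distinguishes within-chunk, chunk-boundary, and violation elements.

        # Sketch of the 24-element serial pattern described above
        # (lever positions 1-8 around an octagonal chamber).
        def wrap(pos: int) -> int:
            return (pos - 1) % 8 + 1

        rule_pattern = [wrap(start + i) for start in range(1, 9) for i in range(3)]
        actual_pattern = list(rule_pattern)
        actual_pattern[-1] = 8          # the final "violation element" breaks the +1 rule

        chunks = ["".join(map(str, actual_pattern[i:i + 3])) for i in range(0, 24, 3)]
        print("-".join(chunks))          # 123-234-345-456-567-678-781-818

        for idx, (rule, actual) in enumerate(zip(rule_pattern, actual_pattern)):
            kind = ("chunk-boundary" if idx % 3 == 0 else
                    "violation" if rule != actual else
                    "within-chunk")
            print(idx + 1, actual, kind)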

  16. Alternate item types: continuing the quest for authentic testing.

    PubMed

    Wendt, Anne; Kenny, Lorraine E

    2009-03-01

    Many test developers suggest that multiple-choice items can be used to evaluate critical thinking if the items are focused on measuring higher order thinking ability. The literature supports the use of alternate item types to assess additional competencies, such as higher level cognitive processing and critical thinking, as well as ways to allow examinees to demonstrate their competencies differently. This research study surveyed nurses after they took a test composed of alternate item types paired with multiple-choice items. The participants were asked to provide opinions regarding the items and the item formats. Demographic information was also gathered. In addition, information was collected as the participants responded to the items. The results of this study reveal that the participants thought that, in general, the items were more authentic and allowed them to demonstrate their competence better than multiple-choice items did. Further investigation into the optimal blend of alternate items and multiple-choice items is needed.

  17. Handbook for Driving Knowledge Testing.

    ERIC Educational Resources Information Center

    Pollock, William T.; McDole, Thomas L.

    Materials intended for driving knowledge test development for use by operational licensing and education agencies are presented. A pool of 1,313 multiple choice test items is included, consisting of sets of specially developed and tested items covering principles of safe driving, legal regulations, and traffic control device knowledge pertinent to…

  18. Test Pool Questions, Area III.

    ERIC Educational Resources Information Center

    Sloan, Jamee Reid

    This manual contains multiple choice questions to be used in testing students on nurse training objectives. Each test includes several questions covering each concept. The concepts in section A, medical surgical nursing, are diseases of the following systems: musculoskeletal; central nervous; cardiovascular; gastrointestinal; urinary and male…

  19. Paradoxical effects of injection stress and nicotine exposure experienced during adolescence on learning in a serial multiple choice (SMC) task in adult female rats.

    PubMed

    Renaud, Samantha M; Pickens, Laura R G; Fountain, Stephen B

    2015-01-01

    Nicotine exposure in adolescent rats has been shown to cause learning impairments that persist into adulthood long after nicotine exposure has ended. This study was designed to assess the extent to which the effects of adolescent nicotine exposure on learning in adulthood can be accounted for by adolescent injection stress experienced concurrently with adolescent nicotine exposure. Female rats received either 0.033 mg/h nicotine (expressed as the weight of the free base) or bacteriostatic water vehicle by osmotic pump infusion on postnatal days 25-53 (P25-53). Half of the nicotine-exposed rats and half of the vehicle rats also received twice-daily injection stress consisting of intraperitoneal saline injections on P26-53. Together these procedures produced 4 groups: No Nicotine/No Stress, Nicotine/No Stress, No Nicotine/Stress, and Nicotine/Stress. On P65-99, rats were trained to perform a structurally complex 24-element serial pattern of responses in the serial multiple choice (SMC) task. Four general results were obtained in the current study. First, learning for within-chunk elements was not affected by either adolescent nicotine exposure, consistent with past work (Pickens, Rowan, Bevins, and Fountain, 2013), or adolescent injection stress. Thus, there were no effects of adolescent nicotine exposure or injection stress on adult within-chunk learning typically attributed to rule learning in the SMC task. Second, adolescent injection stress alone (i.e., without concurrent nicotine exposure) caused transient but significant facilitation of adult learning restricted to a single element of the 24-element pattern, namely, the "violation element," that was the only element of the pattern that was inconsistent with pattern structure. Thus, adolescent injection stress alone facilitated violation element acquisition in adulthood. Third, also consistent with past work (Pickens et al., 2013), adolescent nicotine exposure, in this case both with and without adolescent

  20. Pursuing the Qualities of a "Good" Test

    ERIC Educational Resources Information Center

    Coniam, David

    2014-01-01

    This article examines the issue of the quality of teacher-produced tests, limiting itself in the current context to objective, multiple-choice tests. The article investigates a short, two-part 20-item English language test. After a brief overview of the key test qualities of reliability and validity, the article examines the two subtests in terms…

  1. Test Builder Program Instructions. IBM Version.

    ERIC Educational Resources Information Center

    Patton, Jan; Steffee, John

    This document provides printed instructions for teachers to use with an IBM-compatible microcomputer to construct tests and then have the computer give the tests, grade them, and print the test results. Computerized tests constructed in this way may contain true-false questions, multiple-choice questions, or a combination of both. The questions…

  2. Relationships of Cognitive Components of Test Anxiety to Test Performance: Implications for Assessment and Treatment.

    ERIC Educational Resources Information Center

    Bruch, Monroe A.; And Others

    1983-01-01

    Assessed the degree to which components of test-taking strategies, covert self-statements, and subjective anxiety during an exam provide increments in prediction of test performance of undergraduates (N=72). Results showed that only test-taking strategies provided a significant increment to multiple-choice and essay test performance but not math…

  3. [Nursing] Test Pool Questions. Area II.

    ERIC Educational Resources Information Center

    Watkins, Nettie; Patton, Bob

    This manual consists of area 2 test pool questions which are designed to assist instructors in selecting appropriate questions to help prepare practical nursing students for the Oklahoma state board exam. Multiple choice questions are utilized to facilitate testing of nursing 2 curriculum objectives. Each test contains questions covering each…

  4. Why Standardized Tests Threaten Multiculturalism.

    ERIC Educational Resources Information Center

    Bigelow, Bill

    1999-01-01

    Oregon's statewide social-studies assessment (a randomized, multiple-choice maze) is part of a "democratic" national standards movement that threatens good teaching and multicultural studies. If multiculturalism's key goal is accounting for historical influences on current social realities, then Oregon's standards and tests earn a…

  5. Grade 9 Achievement Test. Mathematics. June 1988 = 9e Annee Test de Rendement. Mathematiques. Juin 1988.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    This achievement test for ninth grade mathematics is written in both French and English. The test consists of 75 multiple-choice items. Students are given 90 minutes to complete the examination, with the use of calculators permitted. The test content covers a wide range of mathematical content including: positive and negative exponents; word…

  6. Grade 9 Pilot Test. Mathematics. June 1988 = 9e Annee Test Pilote. Mathematiques. Juin 1988.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    This pilot test for ninth grade mathematics is written in both French and English. The test consists of 75 multiple-choice items. Students are given 90 minutes to complete the examination and the use of a calculator is highly recommended. The test content covers a wide range of mathematical topics including: decimals; exponents; arithmetic word…

  7. Effects of Repeated Testing on Short- and Long-Term Memory Performance across Different Test Formats

    ERIC Educational Resources Information Center

    Stenlund, Tova; Sundström, Anna; Jonsson, Bert

    2016-01-01

    This study examined whether practice testing with short-answer (SA) items benefits learning over time compared to practice testing with multiple-choice (MC) items, and rereading the material. More specifically, the aim was to test the hypotheses of "retrieval effort" and "transfer appropriate processing" by comparing retention…

  8. Effects of Including Humor in Test Items.

    ERIC Educational Resources Information Center

    McMorris, Robert F.; And Others

    Two 50-item multiple-choice forms of a grammar test were developed differing only in humor being included in 20 items of one form. One hundred twenty-six (126) eighth graders received the test plus alternate forms of a questionnaire. Humor inclusion did not affect grammar scores on matched humorous/nonhumorous items nor on common post-treatment…

  9. Standardized Reading Tests and the Postsecondary Reading Curriculum.

    ERIC Educational Resources Information Center

    Wood, Nancy V.

    To help college reading teachers develop an awareness of what standardized reading tests do and do not reveal about students' reading abilities, a study examined the testing of reading and criticized four major standardized tests. Results indicated that reading is tested through (1) reading passages accompanied by multiple choice questions, (2)…

  10. Do the Guideline Violations Influence Test Difficulty of High-Stake Test?: An Investigation on University Entrance Examination in Turkey

    ERIC Educational Resources Information Center

    Atalmis, Erkan Hasan

    2016-01-01

    Multiple-choice (MC) items are commonly used in high-stakes tests. Thus, each item of such tests should be meticulously constructed to increase the accuracy of decisions based on test results. Haladyna and his colleagues (2002) set out item-writing guidelines for constructing high-quality MC items in order to increase test reliability and…

  11. Test Your Sodium Smarts

    MedlinePlus

    ... You may be surprised to learn how much sodium is in many foods. Sodium, including sodium chloride ... foods with little or no salt. Test your sodium smarts by answering these 10 questions about which ...

  12. Development of Achievement Test: Validity and Reliability Study for Achievement Test on Matter Changing

    ERIC Educational Resources Information Center

    Kara, Filiz; Celikler, Dilek

    2015-01-01

    For the “Matter Changing” unit included in the Secondary School 5th Grade Science Program, the aim was to develop a test that conforms to the gains described in the program and can determine students' achievement. For this purpose, a 48-question multiple-choice test was assembled, consisting of 8 questions for each gain included in the…

  13. A Method for Writing Open-Ended Curved Arrow Notation Questions for Multiple-Choice Exams and Electronic-Response Systems

    ERIC Educational Resources Information Center

    Ruder, Suzanne M.; Straumanis, Andrei R.

    2009-01-01

    A critical stage in the process of developing a conceptual understanding of organic chemistry is learning to use curved arrow notation. From this stems the ability to predict reaction products and mechanisms beyond the realm of memorization. Since evaluation (i.e., testing) is known to be a key driver of student learning, it follows that a new…

  14. Machine-Scored Testing, Part II: Creativity and Item Analysis.

    ERIC Educational Resources Information Center

    Leuba, Richard J.

    1986-01-01

    Explains how multiple choice test items can be devised to measure higher-order learning, including engineering problem solving. Discusses the value and information provided in item analysis procedures with machine-scored tests. Suggests elements to consider in test design. (ML)

  15. The Relation of Task to Performance in Testing Verbs.

    ERIC Educational Resources Information Center

    Gradman, Harry L.; Hanania, Edith

    A study investigated the variability of language performance on different types of testing task, global versus discrete-focus. Three tests (cloze, multiple-choice, and fill-in-the-blank) were developed to measure learners' knowledge of five verb forms. The tests, containing corresponding items designed to elicit equivalent structures, were…

  16. ACER Chemistry Test Item Collection. ACER Chemtic Year 12.

    ERIC Educational Resources Information Center

    Australian Council for Educational Research, Hawthorn.

    The chemistry test item banks contains 225 multiple-choice questions suitable for diagnostic and achievement testing; a three-page teacher's guide; answer key with item facilities; an answer sheet; and a 45-item sample achievement test. Although written for the new grade 12 chemistry course in Victoria, Australia, the items are widely applicable.…

  17. Construction of Valid and Reliable Test for Assessment of Students

    ERIC Educational Resources Information Center

    Osadebe, P. U.

    2015-01-01

    The study was carried out to construct a valid and reliable test in Economics for secondary school students. Two research questions were posed to guide the establishment of validity and reliability for the Economics Achievement Test (EAT), a 100-item multiple-choice objective test with five options per item. A sample of 1000 students was randomly…

  18. Development of a Test of Experimental Problem-Solving Skills.

    ERIC Educational Resources Information Center

    Ross, John A.; Maynes, Florence J.

    1983-01-01

    Multiple-choice tests were constructed for seven problem-solving skills using learning hierarchies based on expert-novice differences and refined in three phases of field testing. Includes test reliabilities (sufficient for making judgments of group performance but insufficient in single-administration for individual assessment), validity, and…

  19. Project Physics Tests 6, The Nucleus.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 6 are presented in this booklet. Included are 70 multiple-choice and 24 problem-and-essay questions. Nuclear physics fundamentals are examined with respect to the shell model, isotopes, neutrons, protons, nuclides, charge-to-mass ratios, alpha particles, Becquerel's discovery, gamma rays, cyclotrons,…

  20. Project Physics Tests 4, Light and Electromagnetism.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 4 are presented in this booklet. Included are 70 multiple-choice and 22 problem-and-essay questions. Concepts of light and electromagnetism are examined on charges, reflection, electrostatic forces, electric potential, speed of light, electromagnetic waves and radiations, Oersted's and Faraday's work,…

  1. Accountability Is More than a Test Score

    ERIC Educational Resources Information Center

    Turnipseed, Stephan; Darling-Hammond, Linda

    2015-01-01

    The number one quality business leaders look for in employees is creativity, and yet the U.S. education system undermines the development of the higher-order skills that promote creativity through its dogged focus on multiple-choice tests. Stephan Turnipseed and Linda Darling-Hammond discuss the kind of rich accountability system that will help students…

  2. Rules Urge New Style of Testing

    ERIC Educational Resources Information Center

    Gewertz, Catherine

    2010-01-01

    The author reports on a federal competition offering $350 million in federal money to design new ways of assessing what students learn. Rules for the contest make clear that the government wants to move beyond multiple-choice testing more often in favor of essays, multidisciplinary projects, and other more nuanced measures of achievement.…

  3. Test Reliability by Ability Level of Examinees.

    ERIC Educational Resources Information Center

    Green, Kathy; Sax, Gilbert

    Achievement test reliability as a function of ability was determined for multiple sections of a large university French class (n=193). A 5-option multiple-choice examination was constructed, least attractive distractors were eliminated based on the instructor's judgment, and the resulting three forms of the examination (i.e. 3-, 4-, or 5-choice…

  4. Assessing Differential Item Functioning in Performance Tests.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; And Others

    Although the belief has been expressed that performance assessments are intrinsically more fair than multiple-choice measures, some forms of performance assessment may in fact be more likely than conventional tests to tap construct-irrelevant factors. As performance assessment grows in popularity, it will be increasingly important to monitor the…

  5. Efficient Methods of Estimating the Operating Characteristics of Item Response Categories and Challenge to a New Model for the Multiple-Choice Item

    DTIC Science & Technology

    1981-11-01

    broader areas and even includes the multi-dimensional latent space, Latent Trait Theory sounds more appropriate. In the present report, therefore, Latent...that the standard error of measurement is defined more meaningfully, as a function of the latent trait θ. It is defined as the inverse of the square...latent trait are set more or less arbitrarily, say, a = 0.25 and b = 0.00. From the test score of the subset of equivalent binary items, the maximum
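
    The fragmentary snippet appears to refer to the standard item response theory result that the conditional standard error of measurement is the inverse square root of the test information function; under that reading (an assumption, since the snippet is incomplete):

        \[
        \mathrm{SEM}(\theta) \;=\; \frac{1}{\sqrt{I(\theta)}},
        \qquad
        I(\theta) \;=\; \sum_{g=1}^{n} I_g(\theta),
        \]

    where \(I_g(\theta)\) is the information contributed by item \(g\) at ability level \(\theta\).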

  6. Group versus modified individual standard-setting on multiple-choice questions with the Angoff method for fourth-year medical students in the internal medicine clerkship

    PubMed Central

    Senthong, Vichai; Chindaprasirt, Jarin; Sawanyawisuth, Kittisak; Aekphachaisawat, Noppadol; Chaowattanapanit, Suteeraporn; Limpawattana, Panita; Choonhakarn, Charoen; Sookprasert, Aumkhae

    2013-01-01

    Background The Angoff method is one of the preferred methods for setting a passing level in an exam. Normally, group meetings are required, which may be a problem for busy medical educators. Here, we compared a modified Angoff individual method to the conventional group method. Methods Six clinical instructors were divided into two groups matched by teaching experience: modified Angoff individual method (three persons) and conventional group method (three persons). The passing scores were set using the Angoff approach. In the conventional method, the instructors set scores individually and then met to determine the passing score. In the modified Angoff individual method, passing scores were judged by each instructor independently, and the final passing score was adjusted by the concordance method and reliability index. Results There were 94 fourth-year medical students who took the test. The mean (standard deviation) test score was 65.35 (8.38), with a median of 64 (range 46–82). The three individual instructors took 45, 60, and 60 minutes to finish the task, while the group spent 90 minutes in discussion. The final passing score in the modified Angoff individual method was 52.18 (56.75 minus 4.57) or 52, versus 51 from the standard group method. There was not much difference in the number of failed students between methods (four versus three). Conclusion The modified Angoff individual method may be a feasible way to set a standard passing score, consuming less time and requiring independent rather than group work by instructors. PMID:24101890
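
    The underlying Angoff arithmetic is simple: each judge estimates, item by item, the probability that a borderline examinee answers correctly, a judge's cut score is the sum of those probabilities, and the panel cut score is the mean across judges. The sketch below is illustrative only; the judge estimates and the adjustment value are invented, and the study's actual adjustment rule (concordance method and reliability index) is not detailed in the abstract.

        # Hedged sketch of Angoff-style passing-score arithmetic; all numbers are
        # illustrative, not data from this study.
        judges = {
            "judge_1": [0.70, 0.55, 0.60, 0.80, 0.45],   # P(borderline examinee correct), per item
            "judge_2": [0.65, 0.50, 0.70, 0.75, 0.50],
            "judge_3": [0.60, 0.60, 0.65, 0.85, 0.40],
        }

        # Each judge's cut score = expected items correct for a borderline examinee;
        # the panel cut score is the mean across judges.
        per_judge = {name: sum(p) for name, p in judges.items()}
        panel_cut = sum(per_judge.values()) / len(per_judge)

        # The "modified individual" variant then lowers the raw cut by an adjustment
        # (the study reports 56.75 - 4.57 = 52.18); the value here is a placeholder.
        adjustment = 0.3
        final_cut = panel_cut - adjustment
        print(per_judge, round(panel_cut, 2), round(final_cut, 2))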

  7. Student Assessment System. Domain Referenced Tests. Allied Health Occupations/Practical Nursing. Volume II: Theory.

    ERIC Educational Resources Information Center

    Campbell, Gene, Comp.; Simpson, Bruce, Comp.

    These written domain referenced tests (DRTs) for the area of allied health occupations/practical nursing test cognitive abilities or knowledge of theory. Introductory materials describe domain referenced testing and test development. Each multiple choice test includes a domain statement, describing the behavior and content of the domain, and a…

  8. Evaluation of Assumptions Related to the Testing of Phonics Skills. Final Report.

    ERIC Educational Resources Information Center

    Ramsey, Wallace Z.

    One hundred thirty-eight second graders, identified by their teachers as "poor readers with incomplete phonics skills" were given four specially constructed tests of phonics skills: a context test over meaningful but visually unfamiliar words, an isolated sounds test, a McKee type multiple choice test, and a word completion test. Eighty…

  9. 32 CFR 287.10 - Questions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... INFORMATION ACT PROGRAM DEFENSE INFORMATION SYSTEMS AGENCY FREEDOM OF INFORMATION ACT PROGRAM § 287.10..., faxes, and electronic mail. FOIA requests should be addressed as follows: Defense Information...

  10. Student Assessment System. Domain Referenced Tests. Transportation/Automotive Mechanics. Volume II: Theory. Georgia Vocational Education Program Articulation.

    ERIC Educational Resources Information Center

    Watkins, James F., Comp.

    These written domain referenced tests (DRTs) for the area of transportation/automotive mechanics test cognitive abilities or knowledge of theory. Introductory materials describe domain referenced testing and test development. Each multiple choice test includes a domain statement, describing the behavior and content of the domain, and a test item…

  11. Electronics. Criterion-Referenced Test (CRT) Item Bank.

    ERIC Educational Resources Information Center

    Davis, Diane, Ed.

    This document contains 519 criterion-referenced multiple choice and true or false test items for a course in electronics. The test item bank is designed to work with both the Vocational Instructional Management System (VIMS) and the Vocational Administrative Management System (VAMS) in Missouri. The items are grouped into 15 units covering the…

  12. Cooperative Industrial/Vocational Education. Test Items and Assessment Techniques.

    ERIC Educational Resources Information Center

    Smith, Clifton L.; Elias, Julie Whitaker

    This document contains multiple-choice test items and assessment techniques in the form of instructional management plans for Missouri's cooperative industrial-vocational education core curriculum. The test items and techniques are relevant to these 15 occupational duties: (1) career research and planning; (2) computer awareness; (3) employment…

  13. Food Service Worker. Dietetic Support Personnel Achievement Test.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater.

    This guide contains a series of multiple-choice items and guidelines to assist instructors in composing criterion-referenced tests for use in the food service worker component of Oklahoma's Dietetic Support Personnel training program. Test items addressing each of the following occupational duty areas are provided: human relations; personal…

  14. Mathematics Strategy Use in Solving Test Items in Varied Formats

    ERIC Educational Resources Information Center

    Bonner, Sarah M.

    2013-01-01

    Although test scores from similar tests in multiple choice and constructed response formats are highly correlated, equivalence in rankings may mask differences in substantive strategy use. The author used an experimental design and participant think-alouds to explore cognitive processes in mathematical problem solving among undergraduate examinees…

  15. My Child Doesn't Test Well. Carnegie Perspectives

    ERIC Educational Resources Information Center

    Bond, Lloyd

    2007-01-01

    The writer examines a variety of reasons why test performance may not always be a valid measure of a person's competence or potential. Citing that a sizable percentage of students perform well in their schoolwork but poorly on standardized, multiple-choice tests, Bond defines and discusses four candidates as source factors for the phenomenon: (1)…

  16. Investigation of Response Changes in the GRE Revised General Test

    ERIC Educational Resources Information Center

    Liu, Ou Lydia; Bridgeman, Brent; Gu, Lixiong; Xu, Jun; Kong, Nan

    2015-01-01

    Research on examinees' response changes on multiple-choice tests over the past 80 years has yielded some consistent findings, including that most examinees make score gains by changing answers. This study expands the research on response changes by focusing on a high-stakes admissions test--the Verbal Reasoning and Quantitative Reasoning measures…

  17. Estimating the Reliability of Multiple True-False Tests.

    ERIC Educational Resources Information Center

    Frisbie, David A.; Druva, Cynthia A.

    1986-01-01

    This study was designed to examine the level of dependence within multiple true-false test-item clusters by computing sets of item correlations with data from a test composed of both multiple true-false and multiple-choice items. (Author/LMO)

  18. Food Service Supervisor. Dietetic Support Personnel Achievement Test.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater.

    This guide contains a series of multiple-choice items and guidelines to assist instructors in composing criterion-referenced tests for use in the food service supervisor component of Oklahoma's Dietetic Support Personnel training program. Test items addressing each of the following occupational duty areas are provided: human relations; nutrient…

  19. Auto Mechanics. Criterion-Referenced Test (CRT) Item Bank.

    ERIC Educational Resources Information Center

    Tannehill, Dana, Ed.

    This document contains 546 criterion-referenced multiple choice and true or false test items for a course in auto mechanics. The test item bank is designed to work with both the Vocational Instructional Management System (VIMS) and Vocational Administrative Management System (VAMS) in Missouri. The items are grouped into 35 units covering the…

  20. Sampling Knowledge and Understanding: How Long Should a Test Be?

    ERIC Educational Resources Information Center

    Burton, Richard F.

    2006-01-01

    Many academic tests (e.g. short-answer and multiple-choice) sample required knowledge with questions scoring 0 or 1 (dichotomous scoring). Few textbooks give useful guidance on the length of test needed to do this reliably. Posey's binomial error model of 1932 provides the best starting point, but allows neither for heterogeneity of question…
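
    As a hedged illustration of the kind of sampling calculation involved (my gloss, not a reconstruction of the article's own derivation): under the simplest binomial model, an examinee whose true proportion-correct is p, answering n dichotomously scored questions, has a proportion score with standard error

        \[
        \mathrm{SEM}(\hat{p}) \;=\; \sqrt{\frac{p(1-p)}{n}},
        \]

    so, for example, with p = 0.6 the SEM is about 0.10 for a 25-question test and about 0.05 for a 100-question test; halving the sampling error requires quadrupling the test length.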

  1. Geography Students Assess Their Learning Using Computer-Marked Tests.

    ERIC Educational Resources Information Center

    Hogg, Jim

    1997-01-01

    Reports on a pilot study designed to assess the potential of computer-marked tests for allowing students to monitor their learning. Students' answers to multiple choice tests were fed into a computer that provided a full analysis of their strengths and weaknesses. Students responded favorably to the feedback. (MJP)

  2. Roofing Workbook and Tests: Entering the Roofing and Waterproofing Industry.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento. Vocational Education Services.

    This document is one of a series of nine individual units of instruction for use in roofing apprenticeship classes in California. The unit consists of a workbook and test, perforated for student use. Fourteen topics are covered in the workbook and corresponding multiple-choice tests. For each topic, objectives, information sheets, and study…

  3. Australian Chemistry Test Item Bank: Years 11 & 12. Volume 1.

    ERIC Educational Resources Information Center

    Commons, C., Ed.; Martin, P., Ed.

    Volume 1 of the Australian Chemistry Test Item Bank, consisting of two volumes, contains nearly 2000 multiple-choice items related to the chemistry taught in Year 11 and Year 12 courses in Australia. Items which were written during 1979 and 1980 were initially published in the "ACER Chemistry Test Item Collection" and in the "ACER…

  4. ACER Chemistry Test Item Collection (ACER CHEMTIC Year 12 Supplement).

    ERIC Educational Resources Information Center

    Australian Council for Educational Research, Hawthorn.

    This publication contains 317 multiple-choice chemistry test items related to topics covered in the Victorian (Australia) Year 12 chemistry course. It allows teachers access to a range of items suitable for diagnostic and achievement purposes, supplementing the ACER Chemistry Test Item Collection--Year 12 (CHEMTIC). The topics covered are: organic…

  5. Food Production Worker. Dietetic Support Personnel Achievement Test.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater.

    This guide contains a series of multiple-choice items and guidelines to assist instructors in composing criterion-referenced tests for use in the food production worker component of Oklahoma's Dietetic Support Personnel training program. Test items addressing each of the following occupational duty areas are provided: human relations; hygiene and…

  6. Demonstrating Local Item Dependence for Recognition and Supply Format Tests.

    ERIC Educational Resources Information Center

    Bastick, Tony

    This study tested the hypothesis that the common approach to test construction in which recognition questions (RQs), such as multiple-choice items, are followed by constructed response questions (CRQs) encourages students to use the informationally rich RQs to gain marks on the CRQs, thus introducing Local Item Dependence (LID) and inflating the…

  7. Computer Adaptive Language Tests (CALT) Offer a Great Potential for Functional Testing. Yet Why Don't They?

    ERIC Educational Resources Information Center

    Meunier, Lydie E.

    1994-01-01

    Computer adaptive language testing (CALT) offers a variety of advantages; however, since CALT cannot test the multidimensional nature of language, it does not assess communicative/functional language. This article proposes to replace multiple choice and cloze formats and to apply CALT to live-action simulations. (18 references) (LB)

  8. An Instrument to Predict Job Performance of Home Health Aides--Testing the Reliability and Validity.

    ERIC Educational Resources Information Center

    Sturges, Jack; Quina, Patricia

    The development of four paper-and-pencil tests, useful in assessing the effectiveness of inservice training provided to either nurses aides or home health aides, was described. These tests were designed for utilization in employment selection and case assignment. Two tests of 37 multiple-choice items and two tests of 10 matching items were…

  9. Using Multigroup Confirmatory Factor Analysis to Test Measurement Invariance in Raters: A Clinical Skills Examination Application

    ERIC Educational Resources Information Center

    Kahraman, Nilufer; Brown, Crystal B.

    2015-01-01

    Psychometric models based on structural equation modeling framework are commonly used in many multiple-choice test settings to assess measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…

  10. The Positive and Negative Effects of Science Concept Tests on Student Conceptual Understanding

    ERIC Educational Resources Information Center

    Chang, Chun-Yen; Yeh, Ting-Kuang; Barufaldi, James P.

    2010-01-01

    This study explored the phenomenon of testing effect during science concept assessments, including the mechanism behind it and its impact upon a learner's conceptual understanding. The participants consisted of 208 high school students, in either the 11th or 12th grade. Three types of tests (traditional multiple-choice test, correct concept test,…

  11. Multi-Digit (MDT) Testing in the Teaching of Criminal Justice Sciences.

    ERIC Educational Resources Information Center

    Anderson, Paul S.; Alexander, Diane

    The Multi-Digit (MDT) testing procedure is a computer-scored testing innovation conceptualized in 1982. It is fully compatible with multiple-choice and true/false tests and is well suited to testing discrete terms and concepts, as in fill-in-the-blank examinations. The student reads the question and selects the appropriate response from an…

  12. NCME 2008 Presidential Address: The Impact of Anchor Test Configuration on Student Proficiency Rates

    ERIC Educational Resources Information Center

    Fitzpatrick, Anne R.

    2008-01-01

    Examined in this study were the effects of reducing anchor test length on student proficiency rates for 12 multiple-choice tests administered in an annual, large-scale, high-stakes assessment. The anchor tests contained 15 items, 10 items, or five items. Five content representative samples of items were drawn at each anchor test length from a…

  13. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937
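
    One commonly described ontology-based generation strategy (not necessarily OntoQue's own algorithm) is to take an entity from the ontology as the key and draw distractors from its sibling classes, which are semantically close and therefore plausible. A minimal rdflib sketch follows; the ontology file, class IRIs, and the hand-written stem are illustrative assumptions.

        # Minimal, hypothetical sketch of sibling-class distractor generation with
        # rdflib; "domain.ttl" and the class IRIs are illustrative assumptions.
        import random
        from rdflib import Graph, RDFS, URIRef

        g = Graph()
        g.parse("domain.ttl")                      # hypothetical domain ontology

        def sibling_distractors(cls: URIRef, k: int = 3):
            """Return up to k classes sharing a parent with cls (candidate distractors)."""
            siblings = set()
            for parent in g.objects(cls, RDFS.subClassOf):
                for child in g.subjects(RDFS.subClassOf, parent):
                    if child != cls:
                        siblings.add(child)
            return random.sample(sorted(siblings), min(k, len(siblings)))

        key = URIRef("http://example.org/onto#Mitochondrion")   # hypothetical correct answer
        options = sibling_distractors(key) + [key]
        random.shuffle(options)
        print("Which organelle is the main site of ATP synthesis?")   # stem written by hand
        for opt in options:
            print(" -", g.value(opt, RDFS.label) or opt.split("#")[-1])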

  14. Super-resolution optical microscopy: multiple choices.

    PubMed

    Huang, Bo

    2010-02-01

    The recent invention of super-resolution optical microscopy enables the visualization of fine features in biological samples with unprecedented clarity. It creates numerous opportunities in biology because a vast number of previously obscured subcellular processes can now be directly observed. Rapid development in this field in the past two years offers many imaging modalities that address different needs, but it also complicates the choice of the 'perfect' method for answering a specific question. Here I will briefly describe the principles of super-resolution optical microscopy techniques and then focus on comparing their characteristics in various aspects of practical applications.

  15. Ambient taxes when polluters have multiple choices

    SciTech Connect

    Horan, R.D.; Shortle, J.S.; Abler, D.G.

    1998-09-01

    Economic research on environmental policy design has largely been concerned with the merits of emissions-based economic incentives (e.g., emissions charges, emissions reduction subsidies, transferable discharge permits). Ambient-based tax-subsidy schemes have drawn considerable interest in nonpoint pollution literature as alternatives to emissions-based instruments. Expanding especially on Segerson's seminal article, this article examines the optimal design and budget-balancing properties of ambient tax-subsidy schemes under more realistic assumptions about the dimensions of firms' choice sets than prior research.

  16. The Multiple Choices of Sex Education

    ERIC Educational Resources Information Center

    Hamilton, Rashea; Sanders, Megan; Anderman, Eric M.

    2013-01-01

    Sex education in middle and high school health classes is critically important because it frequently comprises the primary mechanism for conveying information about sexual health to adolescents. Deliver evidence-based information on HIV and pregnancy prevention practices and they will be less likely to engage in risky sexual behaviors, the theory…

  17. Developing Information Skills Test for Malaysian Youth Students Using Rasch Analysis

    ERIC Educational Resources Information Center

    Karim, Aidah Abdul; Shah, Parilah M.; Din, Rosseni; Ahmad, Mazalah; Lubis, Maimun Aqhsa

    2014-01-01

    This study explored the psychometric properties of a locally developed information skills test for youth students in Malaysia using Rasch analysis. The test was a combination of 24 structured and multiple choice items with a 4-point grading scale. The test was administered to 72 technical college students and 139 secondary school students. The…

  18. Two-Dimensional, Implicit Confidence Tests as a Tool for Recognizing Student Misconceptions

    ERIC Educational Resources Information Center

    Klymkowsky, Michael W.; Taylor, Linda B.; Spindler, Shana R.; Garvin-Doxas, R. Kathy

    2006-01-01

    The misconceptions that students bring with them, or that arise during instruction, are a critical barrier to learning. Implicit-confidence tests, a simple modification of the multiple-choice test, can be used as a strategy for recognizing student misconceptions. An important issue, however, is whether such tests are gender-neutral. We analyzed…

  19. A Study of the Relationship between Scores and Time on Tests.

    ERIC Educational Resources Information Center

    Kennedy, Rob

    The purpose of this study was to investigate the relationship between the scores students earned on multiple choice tests and the number of minutes students required to complete the tests. The 5 tests were made up of 20 randomly drawn questions from a large pool of questions about research methods. Students were allowed an unlimited amount of time…

  20. An Explanatory Item Response Theory Approach for a Computer-Based Case Simulation Test

    ERIC Educational Resources Information Center

    Kahraman, Nilüfer

    2014-01-01

    Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…

  1. CDA (Child Development Associate) Instructional Materials. Assessing Competency: Tests for CDA Competencies (Experimental Edition). Book 7.

    ERIC Educational Resources Information Center

    Hotvedt, Kathleen J.; Hotvedt, Martyn O.

    This book of tests is designed to assess the competencies of the Child Development Associate (CDA) trainee: both what the trainee knows and how well the trainee works with children. The tests are designed as posttests to be administered after the trainee's completion of the relevant learning module. Each test consists of multiple choice questions,…

  2. A Teacher's Dream Come True - A Simple Program for Writing Tests.

    ERIC Educational Resources Information Center

    Vittitoe, Ted W.; Bradley, James V.

    1984-01-01

    Describes a test writing program for a 48K memory Apple microcomputer with lower-case capability. The program permits the production of any number of different tests and also different forms of the same multiple-choice or essay test. (JN)

  3. Project Physics Tests 5, Models of the Atom.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 5 are presented in this booklet. Included are 70 multiple-choice and 23 problem-and-essay questions. Concepts of atomic model are examined on aspects of relativistic corrections, electron emission, photoelectric effects, Compton effect, quantum theories, electrolysis experiments, atomic number and mass,…

  4. Regulating Accuracy on University Tests with the Plurality Option

    ERIC Educational Resources Information Center

    Higham, Philip A.

    2013-01-01

    A single experiment is reported in which introductory psychology students were administered a multiple-choice test on psychology with either 4 (n = 78) or 5 alternatives (n = 92) prior to any lectures being delivered. Two answers were generated for each question: a small answer consisting of their favorite alternative, and a large answer…

  5. Minimum Library Use Skills: Standards, Test, and Bibliography.

    ERIC Educational Resources Information Center

    Carr, Jo Ann, Ed.

    Compiled by the Wisconsin Association of Academic Librarians' (WAAL) Education and Library Use Committee, the Test of Minimum Library Use Skills provides examples of questions which may be used to assess students' knowledge of the 13 minimum library use skills adopted by the WAAL membership in 1983. The 59 multiple choice questions that make up…

  6. Fundamentals of Marketing Core Curriculum. Test Items and Assessment Techniques.

    ERIC Educational Resources Information Center

    Smith, Clifton L.; And Others

    This document contains multiple choice test items and assessment techniques for Missouri's fundamentals of marketing core curriculum. The core curriculum is divided into these nine occupational duties: (1) communications in marketing; (2) economics and marketing; (3) employment and advancement; (4) human relations in marketing; (5) marketing…

  7. Web-Based Testing Tools for Electrical Engineering Courses

    DTIC Science & Technology

    2001-09-01

    scored well on the lower levels. The Adaptive format is also employed in the Test of English as a Foreign Language (TOEFL®), an examination designed ... to measure one's level of fluency in the English language. The following figure illustrates a page of the TOEFL examination. Note the multiple-choice

  8. Project Physics Tests 2, Motion in the Heavens.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 2 are presented in this booklet. Included are 70 multiple-choice and 22 problem-and-essay questions. Concepts of motion in the heavens are examined for planetary motions, heliocentric theory, forces exerted on the planets, Kepler's laws, gravitational force, Galileo's work, satellite orbits, Jupiter's…

  9. Project Physics Tests 3, The Triumph of Mechanics.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    Test items relating to Project Physics Unit 3 are presented in this booklet. Included are 70 multiple-choice and 20 problem-and-essay questions. Concepts of mechanics are examined on energy, momentum, kinetic theory of gases, pulse analyses, "heat death," water waves, power, conservation laws, normal distribution, thermodynamic laws, and…

  10. Advanced Marketing Core Curriculum. Test Items and Assessment Techniques.

    ERIC Educational Resources Information Center

    Smith, Clifton L.; And Others

    This document contains duties and tasks, multiple-choice test items, and other assessment techniques for Missouri's advanced marketing core curriculum. The core curriculum begins with a list of 13 suggested textbook resources. Next, nine duties with their associated tasks are given. Under each task appears one or more citations to appropriate…

  11. Sources of Interference when Testing for Students Learning from Sentences.

    ERIC Educational Resources Information Center

    Ghatala, Elizabeth S.; And Others

    1981-01-01

    Two experiments on multiple-choice assessment of students learning from sentences were conducted. Interference arising from intersentence similarity was a function of both the kind of learning strategies students were instructed to employ and the kind of strategies they reported having employed spontaneously. Implications for test construction and…

  12. The Effects of Violating Standard Item Writing Principles on Tests and Students: The Consequences of Using Flawed Test Items on Achievement Examinations in Medical Education

    ERIC Educational Resources Information Center

    Downing, Steven M.

    2005-01-01

    The purpose of this research was to study the effects of violations of standard multiple-choice item writing principles on test characteristics, student scores, and pass-fail outcomes. Four basic science examinations, administered to year-one and year-two medical students, were randomly selected for study. Test items were classified as either…

  13. The Importance of the Item Format with Respect to Gender Differences in Test Performance: A Study of Open-Format Items in the DTM Test.

    ERIC Educational Resources Information Center

    Wester, Anita

    1995-01-01

    The effect of different item formats (multiple choice and open) on gender differences in test performance was studied for the Swedish Diagrams, Tables, and Maps (DTM) test with 90 secondary school students. The change to open format resulted in no reduction in gender differences on the DTM. (SLD)

  14. Measuring Student Learning Using Initial and Final Concept Test in an STEM Course

    ERIC Educational Resources Information Center

    Kaw, Autar; Yalcin, Ali

    2012-01-01

    Effective assessment is a cornerstone in measuring student learning in higher education. For a course in Numerical Methods, a concept test was used as an assessment tool to measure student learning and its improvement during the course. The concept test comprised 16 multiple choice questions and was given at the beginning and end of the class for…

  15. The Disaggregation of Value-Added Test Scores to Assess Learning Outcomes in Economics Courses

    ERIC Educational Resources Information Center

    Walstad, William B.; Wagner, Jamie

    2016-01-01

    This study disaggregates posttest, pretest, and value-added or difference scores in economics into four types of economic learning: positive, retained, negative, and zero. The types are derived from patterns of student responses to individual items on a multiple-choice test. The micro and macro data from the "Test of Understanding in College…

  16. Applied Reading Test--Forms A and B, Interim Manual, and Answer Sheets.

    ERIC Educational Resources Information Center

    Australian Council for Educational Research, Hawthorn.

    Designed for use in the selection of apprentices, trainees, technical and trade personnel, and any other persons who need to read and understand text of a technical nature, this Applied Reading Test specimen set contains six passages and 32 items, has a 30-minute time limit, and is presented in a reusable multiple choice test booklet. The specimen…

  17. Inferring Cross Sections of 3D Objects: A New Spatial Thinking Test

    ERIC Educational Resources Information Center

    Cohen, Cheryl A.; Hegarty, Mary

    2012-01-01

    A new spatial ability test was administered online to 223 undergraduate students enrolled in introductory science courses. The 30-item multiple choice test measures individual differences in ability to identify the two-dimensional cross section of a three-dimensional geometric solid, a skill that has been identified as important in science,…

  18. Dividing the Force Concept Inventory into Two Equivalent Half-Length Tests

    ERIC Educational Resources Information Center

    Han, Jing; Bao, Lei; Chen, Li; Cai, Tianfang; Pi, Yuan; Zhou, Shaona; Tu, Yan; Koenig, Kathleen

    2015-01-01

    The Force Concept Inventory (FCI) is a 30-question multiple-choice assessment that has been a building block for much of the physics education research done today. In practice, there are often concerns regarding the length of the test and possible test-retest effects. Since many studies in the literature use the mean score of the FCI as the…

  19. Australian Chemistry Test Item Bank: Years 11 and 12. Volume 2.

    ERIC Educational Resources Information Center

    Commons, C., Ed.; Martin, P., Ed.

    The second volume of the Australian Chemistry Test Item Bank, consisting of two volumes, contains nearly 2000 multiple-choice items related to the chemistry taught in Year 11 and Year 12 courses in Australia. Items which were written during 1979 and 1980 were initially published in the "ACER Chemistry Test Item Collection" and in the…

  20. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  1. Teacher-made Test Items in American History: Emphasis Junior High School. Bulletin Number 40.

    ERIC Educational Resources Information Center

    Kurfman, Dana

    Designed originally for use in junior-high-school classes, this bulletin provides an extensive file of 420 multiple-choice test questions in American history. The test items are intended to measure substantive understandings as well as such abilities as interpretation, analysis, synthesis, evaluation, and application. The initial questions were…

  2. A Comparison of Domain-Referenced and Classic Psychometric Test Construction Methods.

    ERIC Educational Resources Information Center

    Willoughby, Lee; And Others

    This study compared a domain referenced approach with a traditional psychometric approach in the construction of a test. Results of the December, 1975 Quarterly Profile Exam (QPE) administered to 400 examinees at a university were the source of data. The 400 item QPE is a five alternative multiple choice test of information a "safe"…

  3. Integrated Testlets: A New Form of Expert-Student Collaborative Testing

    ERIC Educational Resources Information Center

    Shiell, Ralph C.; Slepkov, Aaron D.

    2015-01-01

    Integrated testlets are a new assessment tool that encompass the procedural benefits of multiple-choice testing, the pedagogical advantages of free-response-based tests, and the collaborative aspects of a viva voce or defence examination format. The result is a robust assessment tool that provides a significant formative aspect for students.…

  4. Development and Application of a Three-Tier Diagnostic Test to Assess Secondary Students' Understanding of Waves

    ERIC Educational Resources Information Center

    Caleon, Imelda; Subramaniam, R.

    2010-01-01

    This study focused on the development and application of a three-tier multiple-choice diagnostic test (or three-tier test) on the nature and propagation of waves. A question in a three-tier test comprises the "content tier", which measures content knowledge; the "reason tier", which measures explanatory knowledge; and the…

  5. Scratching Where They Itch: Evaluation of Feedback on a Diagnostic English Grammar Test for Taiwanese University Students

    ERIC Educational Resources Information Center

    Yin, Muchun; Sims, James; Cothran, Daniel

    2012-01-01

    Feedback to the test taker is a defining characteristic of diagnostic language testing (Alderson, 2005). This article reports on a study that investigated how much and in what ways students at a Taiwan university perceived the feedback to be useful on an online multiple-choice diagnostic English grammar test, both in general and by students of…

  6. Failing to Meet the Standards: The English Language Arts Test for Fourth Graders in New York State

    ERIC Educational Resources Information Center

    Hill, Clifford

    2004-01-01

    This article examines two kinds of problems associated with the English Language Arts test at the fourth-grade level in New York State: (1) problems that inhere in the test itself and (2) problems associated with its use. As for the test itself, three kinds of problems are analyzed: (1) the use of multiple-choice tasks to assess reading…

  7. National Conference on Critical Issues in Competency-Based Testing for Vocational-Technical Education (Nashville, Tennessee, April 11-13, 1988). Conference Notebook.

    ERIC Educational Resources Information Center

    Vocational Technical Education Consortium of States, Decatur, GA.

    This notebook contains the following conference presentations: "Identifying and Validating Task Lists by Business and Industry for Test/Test Item Development" (Charles Losh); "Conducting a Task Analysis for Competency-Based Test Item Development" (Brenda Hattaway); "Writing and Reviewing Test Items: Multiple Choice,…

  8. Mixed-Format Test Score Equating: Effect of Item-Type Multidimensionality, Length and Composition of Common-Item Set, and Group Ability Difference

    ERIC Educational Resources Information Center

    Wang, Wei

    2013-01-01

    Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests often are considered to be superior to tests containing only MC items although the use of multiple item formats leads to measurement challenges in the context of equating conducted under…

  9. Continued research on computer-based testing.

    PubMed Central

    Clyman, S. G.; Julian, E. R.; Orr, N. A.; Dillon, G. F.; Cotton, K. E.

    1991-01-01

    The National Board of Medical Examiners has developed computer-based examination formats for use in evaluating physicians in training. This paper describes continued research on these formats including attitudes about computers and effects of factors not related to the trait being measured; differences between paper-administered and computer-administered multiple-choice questions; and the characteristics of simulation formats. The implications for computer-based testing and further research are discussed. PMID:1807703

  10. State Test Programs Mushroom as NCLB Mandate Kicks in: Nearly Half of States Are Expanding Their Testing Programs to Additional Grades This School Year to Comply with the Federal No Child Left Behind Act

    ERIC Educational Resources Information Center

    Olson, Lynn

    2005-01-01

    Twenty-three states are expanding their testing programs to additional grades this school year to comply with the federal No Child Left Behind Act. In devising the new tests, most states have defied predictions and chosen to go beyond multiple-choice items, by including questions that ask students to construct their own responses. But many state…

  11. Issues and Consequences for State-Level Minimum Competency Testing Programs. State Assessment Series. Wyoming Report 1.

    ERIC Educational Resources Information Center

    Marion, Scott F.; Sheinker, Alan

    This report reviews the current status, empirical findings, theoretical issues, and practical considerations related to state-level minimum competency testing programs. It finds that, although two-thirds of current testing programs now use direct writing prompts to assess writing achievement, essentially all programs rely on multiple choice tests…

  12. A Theory of How External Incentives Affect, and Are Affected by, Computer-aided Admissible Probability Testing.

    ERIC Educational Resources Information Center

    Brown, T. A.

    Admissible probability testing is a way of administering multiple choice tests in which a student states his subjective probability that each alternative answer is correct. His response is then scored by an admissible scoring system designed so that the student will perceive that it is in his interest to report his true subjective probability.…

  13. A Study of the Effects of Contextualization and Familiarization on Responses to the TOEFL Vocabulary Test Items.

    ERIC Educational Resources Information Center

    Henning, Grant

    In order to evaluate the Test of English as a Foreign Language (TOEFL) vocabulary item format and to determine the effectiveness of alternative vocabulary test items, this study investigated the functioning of eight different multiple-choice formats that differed with regard to: (1) length and inference-generating quality of the stem; (2) the…

  14. The Effect of Topic Interest and Gender on Reading Test Types in a Second Language

    ERIC Educational Resources Information Center

    Ay, Sila; Bartan, Ozgur Sen

    2012-01-01

    This study explores how readers' interest, gender, and test types (multiple-choice questions, Yes/No questions, and short-answer formats) affect second language reading comprehension in three different levels and five different categories of topics. A questionnaire was administered to 168 Turkish EFL students to find out the gender-oriented topic…

  15. Building the BIKE: Development and Testing of the Biotechnology Instrument for Knowledge Elicitation (BIKE)

    ERIC Educational Resources Information Center

    Witzig, Stephen B.; Rebello, Carina M.; Siegel, Marcelle A.; Freyermuth, Sharyn K.; Izci, Kemal; McClure, Bruce

    2014-01-01

    Identifying students' conceptual scientific understanding is difficult if the appropriate tools are not available for educators. Concept inventories have become a popular tool to assess student understanding; however, traditionally, they are multiple choice tests. International science education standard documents advocate that assessments…

  16. Industrial Arts Test Development, Book III. Resource Items for Graphics Technology, Power Technology, Production Technology.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany.

    This booklet is designed to assist teachers in developing examinations for classroom use. It is a collection of 955 objective test questions, mostly multiple choice, for industrial arts students in the three areas of graphics technology, power technology, and production technology. Scoring keys are provided. There are no copyright restrictions,…

  17. Development and Analysis of a Cognitive Preference Test in the Social Sciences. Final Report.

    ERIC Educational Resources Information Center

    Payette, Roland Francis

    In order to allow for formulating affective objectives in communicable terms, the Cognitive Preference Test in the Social Sciences was developed. This exploratory device, which reflects cognitive preferences in terms of students' dispositions to respond consistently to either particular or general features of data, uses multiple choice wherein 4…

  18. Creative Math Assessment: How the "Fizz & Martina Approach" Helps Prepare Students for the Math Assessment Tests.

    ERIC Educational Resources Information Center

    Vaille, John; Kushins, Harold

    Many school districts around the nation are re-evaluating how they measure student performance in mathematics. Calls have been made for alternative, authentic assessment tools that go beyond simple, and widely ineffective, multiple-choice tests. This book examines how the Fizz & Martina math video series provides students with hands-on…

  19. Components of Spatial Thinking: Evidence from a Spatial Thinking Ability Test

    ERIC Educational Resources Information Center

    Lee, Jongwon; Bednarz, Robert

    2012-01-01

    This article introduces the development and validation of the spatial thinking ability test (STAT). The STAT consists of sixteen multiple-choice questions of eight types. The STAT was validated by administering it to a sample of 532 junior high, high school, and university students. Factor analysis using principal components extraction was applied…

  20. Can a Two-Question Test Be Reliable and Valid for Predicting Academic Outcomes?

    ERIC Educational Resources Information Center

    Bridgeman, Brent

    2016-01-01

    Scores on essay-based assessments that are part of standardized admissions tests are typically given relatively little weight in admissions decisions compared to the weight given to scores from multiple-choice assessments. Evidence is presented to suggest that more weight should be given to these assessments. The reliability of the writing scores…

  1. The Impact of Discourse Features of Science Test Items on ELL Performance

    ERIC Educational Resources Information Center

    Kachchaf, Rachel; Noble, Tracy; Rosebery, Ann; Wang, Yang; Warren, Beth; O'Connor, Mary Catherine

    2014-01-01

    Most research on linguistic features of test items negatively impacting English language learners' (ELLs') performance has focused on lexical and syntactic features, rather than discourse features that operate at the level of the whole item. This mixed-methods study identified two discourse features in 162 multiple-choice items on a standardized…

  2. Algorithms for Developing Test Questions from Sentences in Instructional Materials: An Extension of an Earlier Study.

    ERIC Educational Resources Information Center

    Roid, Gale H.; And Others

    An earlier study was extended and replicated to examine the feasibility of generating multiple-choice test questions by transforming sentences from prose instructional material. In the first study, a computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were…

  3. Performance on Factual and Inferential Post-Test Items in Prose Comprehension.

    ERIC Educational Resources Information Center

    Adejumo, Dayo

    To examine the use of factual and inferential questions as adjunct material in prose learning, a study was undertaken in which undergraduate introductory psychology students used different study strategies on a prose comprehension test consisting of an equal number of factual and inferential multiple-choice items. One-hundred twenty students were…

  4. An Algorithm to Improve Test Answer Copying Detection Using the Omega Statistic

    ERIC Educational Resources Information Center

    Maeda, Hotaka; Zhang, Bo

    2017-01-01

    The omega (ω) statistic is reputed to be one of the best indices for detecting answer copying on multiple choice tests, but its performance relies on the accurate estimation of copier ability, which is challenging because responses from the copiers may have been contaminated. We propose an algorithm that aims to identify and delete the suspected…
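
    For context, the ω index referenced here is conventionally defined (following Wollack's formulation; the notation below is standard usage, not taken from this record) as a standardized difference between the observed and expected number of matching responses between a suspected copier c and a source s:

        \omega = \frac{h_{cs} - E[h_{cs} \mid \hat{\theta}_c]}{SD(h_{cs} \mid \hat{\theta}_c)}

    where h_{cs} is the number of identical answers and the expectation and standard deviation are computed from the copier's estimated ability under an item response model, which is why contaminated responses distort the estimate of the copier's ability.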

  5. Computerized Classification Testing under the One-Parameter Logistic Response Model with Ability-Based Guessing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Huang, Sheng-Yun

    2011-01-01

    The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for the effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…

  6. V-TECS Criterion-Referenced Test Item Bank for Radiologic Technology Occupations.

    ERIC Educational Resources Information Center

    Reneau, Fred; And Others

    This Vocational-Technical Education Consortium of States (V-TECS) criterion-referenced test item bank provides 696 multiple-choice items and 33 matching items for radiologic technology occupations. These job titles are included: radiologic technologist, chief; radiologic technologist; nuclear medicine technologist; radiation therapy technologist;…

  7. Ability Level Estimation of Students on Probability Unit via Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Özyurt, Hacer; Özyurt, Özcan

    2015-01-01

    Problem Statement: Learning and teaching activities bring with them the need to determine whether they achieve their goals. Thus, multiple-choice tests that pose the same set of questions to all students are frequently used. However, this traditional form of assessment and evaluation contrasts with modern education, where individual learning characteristics are…

  8. Measuring Gains in Critical Thinking in Food Science and Human Nutrition Courses: The Cornell Critical Thinking Test, Problem-Based Learning Activities, and Student Journal Entries

    ERIC Educational Resources Information Center

    Iwaoka, Wayne T.; Li, Yong; Rhee, Walter Y.

    2010-01-01

    The Cornell Critical Thinking Test (CCTT) is one of the many multiple-choice tests with validated questions that have been reported to measure general critical thinking (CT) ability. One of the IFT Education Standards for undergraduate degrees in Food Science is the emphasis on the development of critical thinking. While this skill is easy to list…

  9. A Three-Tier Diagnostic Test to Assess Pre-Service Teachers' Misconceptions about Global Warming, Greenhouse Effect, Ozone Layer Depletion, and Acid Rain

    ERIC Educational Resources Information Center

    Arslan, Harika Ozge; Cigdemoglu, Ceyhan; Moseley, Christine

    2012-01-01

    This study describes the development and validation of a three-tier multiple-choice diagnostic test, the atmosphere-related environmental problems diagnostic test (AREPDiT), to reveal common misconceptions of global warming (GW), greenhouse effect (GE), ozone layer depletion (OLD), and acid rain (AR). The development of a two-tier diagnostic test…

  10. A Normalized Direct Approach for Estimating the Parameters of the Normal Ogive Three-Parameter Model for Ability Tests.

    ERIC Educational Resources Information Center

    Gugel, John F.

    A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
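
    For reference, the normal ogive three-parameter model whose parameters the NDIR procedure estimates is conventionally written as (standard notation, not taken from this record):

        P_i(\theta) = c_i + (1 - c_i)\,\Phi\bigl(a_i(\theta - b_i)\bigr)

    where a_i, b_i, and c_i are the item discrimination, difficulty, and pseudo-guessing parameters and \Phi is the standard normal cumulative distribution function.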

  11. The Nature of Field Independence: Percentiles and Factor Structure of the Finding Embedded Figures Test--Research Edition.

    ERIC Educational Resources Information Center

    Melancon, Janet G.; Thompson, Bruce

    This study investigated the nature of field independence by exploring the structure underlying responses to Forms A and B of a multiple-choice measure of field-independence, the Finding Embedded Figures Test (FEFT). Subjects included 302 students (52.7% male) enrolled in mathematics courses at a university in the southern United States. Students…

  12. Controlling Guessing Bias in the Dichotomous Rasch Model Applied to a Large-Scale, Vertically Scaled Testing Program

    ERIC Educational Resources Information Center

    Andrich, David; Marais, Ida; Humphry, Stephen Mark

    2016-01-01

    Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…
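
    For reference, the dichotomous Rasch model underlying this analysis is conventionally written as (standard notation, not taken from this record):

        P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

    Because the model has no guessing parameter, lucky guesses on multiple-choice items are absorbed into the difficulty estimates b_i, which is the source of the statistical bias the article addresses.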

  13. A Comparison of Three- and Four-Option English Tests for University Entrance Selection Purposes in Japan

    ERIC Educational Resources Information Center

    Shizuka, Tetsuhito; Takeuchi, Osamu; Yashima, Tomoko; Yoshizawa, Kiyomi

    2006-01-01

    The present study investigated the effects of reducing the number of options per item on psychometric characteristics of a Japanese EFL university entrance examination. A four-option multiple-choice reading test used for entrance screening at a university in Japan was later converted to a three-option version by eliminating the least frequently…

  14. Using Two-Tier Test to Identify Primary Students' Conceptual Understanding and Alternative Conceptions in Acid Base

    ERIC Educational Resources Information Center

    Bayrak, Beyza Karadeniz

    2013-01-01

    The purpose of this study was to identify primary students' conceptual understanding and alternative conceptions in acid-base. For this reason, a 15-item two-tier multiple-choice test was administered to 56 eighth-grade students in the spring semester of 2009-2010. Data for this study were collected using a conceptual understanding scale prepared to include…

  15. SAT Wars: The Case for Test-Optional College Admissions

    ERIC Educational Resources Information Center

    Soares, Joseph A., Ed.

    2011-01-01

    What can a college admissions officer safely predict about the future of a 17-year-old? Are the best and the brightest students the ones who can check off the most correct boxes on a multiple-choice exam? Or are there better ways of measuring ability and promise? In this penetrating and revealing look at high-stakes standardized admissions tests,…

  16. Performance Support Technology to Assess Training Effectiveness: Functional and Test-Bed Requirements

    DTIC Science & Technology

    1992-10-01

    tests. Journal of Educational Measurement, Fall, 21(3), 221-224. This article focuses upon the importance of a minimal level of internal reliability ... An ideal PST would automate construction of items as much as possible, and rely very little on teaching the SME to build them. The test plan feature ... MOS multiple-choice tests require about 60 items for adequate reliability. This, in fact, is the number suggested by TRADOC guidelines for SDTs. But CRT

  17. Stereotype threat? Effects of inquiring about test takers' gender on conceptual test performance in physics

    NASA Astrophysics Data System (ADS)

    Maries, Alexandru; Singh, Chandralekha

    2015-12-01

    It has been found that activation of a stereotype, for example by indicating one's gender before a test, typically alters performance in a way consistent with the stereotype, an effect called "stereotype threat." On a standardized conceptual physics assessment, we found that asking test takers to indicate their gender right before taking the test did not worsen performance compared with that of an equivalent group who did not provide gender information. Although a statistically significant gender gap was present on the standardized test whether or not students indicated their gender, no gender gap was observed on the multiple-choice final exam students took, which included both quantitative and conceptual questions on similar topics.

  18. How Do Chinese ESL Learners Recognize English Words during a Reading Test? A Comparison with Romance-Language-Speaking ESL Learners

    ERIC Educational Resources Information Center

    Li, Hongli; Suen, Hoi K.

    2015-01-01

    This study examines how Chinese ESL learners recognize English words while responding to a multiple-choice reading test as compared to Romance-language-speaking ESL learners. Four adult Chinese ESL learners and three adult Romance-language-speaking ESL learners participated in a think-aloud study with the Michigan English Language Assessment…

  19. Analysis Test of Understanding of Vectors with the Three-Parameter Logistic Model of Item Response Theory and Item Response Curves Technique

    ERIC Educational Resources Information Center

    Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan

    2016-01-01

    This study investigated the multiple-choice test of understanding of vectors (TUV) by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability is an ability parameter, here estimated assuming…
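
    For reference, the three-parameter logistic model fitted to the TUV items is conventionally written as (standard notation, not taken from this record):

        P_i(\theta) = c_i + \frac{1 - c_i}{1 + \exp\bigl(-a_i(\theta - b_i)\bigr)}

    with a_i the discrimination, b_i the difficulty, and c_i the guessing parameter of item i, and \theta the examinee ability.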

  20. Research and Teaching: Does the Color-Coding of Examination Versions Affect College Science Students' Test Performance? Countering Claims of Bias

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James; Elias, Janet Schexnayder

    2007-01-01

    To circumvent the problem of academic dishonesty through the mass administration of multiple-choice exams in college classrooms, a study was conducted from 2003 to 2005, in which multiple versions of the same examination were color coded during testing in a large-enrollment classroom. Instructors reported that this color-coded exam system appeared…

  1. Use of the NBME Comprehensive Basic Science Examination as a Progress Test in the Preclerkship Curriculum of a New Medical School

    ERIC Educational Resources Information Center

    Johnson, Teresa R.; Khalil, Mohammed K.; Peppler, Richard D.; Davey, Diane D.; Kibble, Jonathan D.

    2014-01-01

    In the present study, we describe the innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) as a progress test during the preclerkship medical curriculum. The main aim of this study was to provide external validation of internally developed multiple-choice assessments in a new medical…

  2. Examen en Vue du Diplome Douzieme Annee. Langue et Litterature 30. Partie B: Lecture (Choix Multiples). Livret de Textes (Examination for the Twelfth Grade Diploma, Language and Literature 30. Part B: Reading--Multiple Choice. Text Booklet).

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    As part of an examination required by the Alberta (Canada) Department of Education in order for 12th grade students to receive a diploma in French, this booklet contains the reading selections portion of Part B, the language and literature component of the January 1988 tests. Representing the genres of poetry, short story, novel, and drama, the…

  3. Examen en Vue du Diplome Douzieme Annee. Langue et Litterature 30. Partie B: Lecture (Choix Multiples). Livret de Textes (Examination for the Twelfth Grade Diploma, Language and Literature 30. Part B: Reading--Multiple Choice. Text Booklet).

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    As part of an examination required by the Alberta (Canada) Department of Education in order for 12th grade students to receive a diploma in French, this booklet contains the reading selections portion of Part B, the language and literature component of the January 1987 tests. Representing the genres of poetry, short story, novel, and drama, the…

  4. Virtual test: A student-centered software to measure student's critical thinking on human disease

    NASA Astrophysics Data System (ADS)

    Rusyati, Lilit; Firman, Harry

    2016-02-01

    The study "Virtual Test: A Student-Centered Software to Measure Student's Critical Thinking on Human Disease" is descriptive research. The background is importance of computer-based test that use element and sub element of critical thinking. Aim of this study is development of multiple choices to measure critical thinking that made by student-centered software. Instruments to collect data are (1) construct validity sheet by expert judge (lecturer and medical doctor) and professional judge (science teacher); and (2) test legibility sheet by science teacher and junior high school student. Participants consisted of science teacher, lecturer, and medical doctor as validator; and the students as respondent. Result of this study are describe about characteristic of virtual test that use to measure student's critical thinking on human disease, analyze result of legibility test by students and science teachers, analyze result of expert judgment by science teachers and medical doctor, and analyze result of trial test of virtual test at junior high school. Generally, result analysis shown characteristic of multiple choices to measure critical thinking was made by eight elements and 26 sub elements that developed by Inch et al.; complete by relevant information; and have validity and reliability more than "enough". Furthermore, specific characteristic of multiple choices to measure critical thinking are information in form science comic, table, figure, article, and video; correct structure of language; add source of citation; and question can guide student to critical thinking logically.

  5. PROJECT TOBI, THE DEVELOPMENT OF A PRE-SCHOOL ACHIEVEMENT TEST. FINAL REPORT.

    ERIC Educational Resources Information Center

    MOSS, MARGARET H.

    The Test of Basic Information (TOBI) is a 54-item, multiple-choice picture test developed to measure preacademic, school-relevant knowledge. It can be used to assess program effectiveness by giving it as a pre- and posttest. It can be administered individually, or to groups of up to 15 if there is 1 adult for 3 or 4 children, and takes from 15 to…

  6. Lucky Guess or Knowledge: A Cross-Sectional Study Using the Bland and Altman Analysis to Compare Confidence-Based Testing of Pharmacological Knowledge in 3rd and 5th Year Medical Students

    ERIC Educational Resources Information Center

    Kampmeyer, Daniela; Matthes, Jan; Herzig, Stefan

    2015-01-01

    Multiple-choice questions are common in medical examinations, but guessing biases assessment results. Confidence-based testing (CBT) integrates indicated confidence levels. It has been suggested that correctness of and confidence in an answer together indicate knowledge levels, thus determining the quality of a resulting decision. We used a CBT…

  7. Development of a test of experimental problem-solving skills

    NASA Astrophysics Data System (ADS)

    Ross, John A.; Maynes, Florence J.

    The emphasis given to experimental problem-solving skills in science curriculum innovation has not been matched by the development of comparable assessment tools. Multiple-choice tests were constructed for seven skills using learning hierarchies based on expert-novice differences. The instruments were refined in three phases of field testing. The reliabilities of the tests are sufficient for making judgments of group performance, but are insufficient in a single administration for individual assessment. Evidence of the validity of the tests is presented and their worth is discussed within the framework of a theory of instruction.

  8. Region 10 Questions and Answers #1: Title V Permit Development

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  9. Simple model for multiple-choice collective decision making

    NASA Astrophysics Data System (ADS)

    Lee, Ching Hua; Lucas, Andrew

    2014-11-01

    We describe a simple model of heterogeneous, interacting agents making decisions between n ≥ 2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E. We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.
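
    As a loose illustration of the kind of mean-field dynamics described above, the sketch below simulates heterogeneous agents choosing among n options under a social interaction of strength J. The Gaussian preference fields, the value of J, and the synchronous best-response update are illustrative assumptions, not the authors' exact formulation.

        import numpy as np

        def simulate(num_agents=1000, n_choices=3, J=1.5, sweeps=50, seed=0):
            """Toy mean-field, random-field Potts-like decision model (illustration only)."""
            rng = np.random.default_rng(seed)
            h = rng.normal(size=(num_agents, n_choices))        # heterogeneous private preferences
            choice = rng.integers(n_choices, size=num_agents)   # random initial decisions
            for _ in range(sweeps):
                x = np.bincount(choice, minlength=n_choices) / num_agents  # current choice shares
                new_choice = np.argmax(h + J * x, axis=1)        # best response to the mean field
                if np.array_equal(new_choice, choice):           # fixed point reached
                    break
                choice = new_choice
            return np.bincount(choice, minlength=n_choices) / num_agents

        print(simulate())  # with large J one choice tends to dominate (symmetry breaking)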

  10. Test Writer.

    ERIC Educational Resources Information Center

    Cicciarella, Charles

    1982-01-01

    This program produces multiple-choice examinations from a pool of questions created by the user. The output is a printed copy of the exam and an answer key. The program is written in Applesoft BASIC and requires an Apple II Plus computer with 32K, a disk drive, and a printer. (MP)
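
    The original program was written in Applesoft BASIC for the Apple II Plus; as a rough modern illustration of the same idea (drawing a random exam and its answer key from a pool of questions), a minimal Python sketch might look like the following. The pool contents and the function name are hypothetical.

        import random

        # Hypothetical question pool: each entry holds a stem, options, and the keyed answer.
        POOL = [
            {"stem": "2 + 2 = ?", "options": ["3", "4", "5", "6"], "answer": "B"},
            {"stem": "Capital of France?", "options": ["Rome", "Madrid", "Paris", "Bern"], "answer": "C"},
        ]

        def make_exam(pool, num_items, seed=None):
            """Draw a random exam from the pool and return (exam_text, answer_key)."""
            rng = random.Random(seed)
            items = rng.sample(pool, num_items)
            lines, key = [], []
            for i, item in enumerate(items, start=1):
                lines.append(f"{i}. {item['stem']}")
                for letter, option in zip("ABCD", item["options"]):
                    lines.append(f"   {letter}) {option}")
                key.append(f"{i}. {item['answer']}")
            return "\n".join(lines), "\n".join(key)

        exam, key = make_exam(POOL, num_items=2, seed=42)
        print(exam)
        print("Answer key:\n" + key)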

  11. Effect of Examinee Certainty on Probabilistic Test Scores and a Comparison of Scoring Methods for Probabilistic Responses.

    DTIC Science & Technology

    1983-07-01

    OF PSYCHOLOGY UNIVERSITY OF MINNESOTA - MINNEAPOLIS, MN 55455 This research was supported by funds from the Air Force Office of Scientific Research...PERFORMING ORGANIZATION NAME AND ADDRESS 10. PROGRAM ELEMENT. PROJECT, TASKAREA & WORK UNIT NUMBERS • Department of Psychology P.E.:61153N Proj.:RR042-04...of .06. Test Administration The 30 multiple-choice analogy items chosen were then administered to 299 psychology and biology undergraduate students

  12. Wolf Testing: Open Source Testing Software

    NASA Astrophysics Data System (ADS)

    Braasch, P.; Gay, P. L.

    2004-12-01

    Wolf Testing is software for easily creating and editing exams. Wolf Testing allows the user to create an exam from a database of questions, view it on screen, and easily print it along with the corresponding answer guide. The questions can be multiple choice, short answer, long answer, or true and false varieties. This software can be accessed securely from any location, allowing the user to easily create exams from home. New questions, which can include associated pictures, can be added through a web interface. Once added, questions can be edited, deleted, or duplicated into multiple versions. Long-term test creation is simplified, as you are able to quickly see what questions you have asked in the past and insert them, with or without editing, into future tests. All tests are archived in the database. Written in PHP and MySQL, this software can be installed on any UNIX/Linux platform, including Macintosh OS X. The secure interface keeps students out, and allows you to decide who can create tests and who can edit information already in the database. Tests can be output as either HTML with pictures or rich text without pictures, and there are plans to add PDF and MS Word formats as well. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing incentive to start this project, computers and resources to complete this project, and inspiration for the project's name. We would also like to thank Dr. Ronald Newburgh for his assistance in beta testing.
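
    As an illustration of the kind of question-bank schema such a tool might sit on, here is a minimal sketch in Python with SQLite; the table and column names are assumptions, not Wolf Testing's actual MySQL schema.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE questions (
                id     INTEGER PRIMARY KEY,
                qtype  TEXT CHECK (qtype IN ('multiple_choice', 'short', 'long', 'true_false')),
                stem   TEXT NOT NULL,
                answer TEXT NOT NULL
            );
            CREATE TABLE tests (
                id      INTEGER PRIMARY KEY,
                title   TEXT,
                created TEXT DEFAULT CURRENT_TIMESTAMP       -- every generated test is archived
            );
            CREATE TABLE test_items (                         -- which questions appear on which test
                test_id     INTEGER REFERENCES tests(id),
                question_id INTEGER REFERENCES questions(id),
                position    INTEGER
            );
        """)
        conn.execute("INSERT INTO questions (qtype, stem, answer) VALUES (?, ?, ?)",
                     ("true_false", "The Sun is a star.", "True"))
        print(conn.execute("SELECT qtype, stem FROM questions").fetchall())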

  13. Test item linguistic complexity and assessments for deaf students.

    PubMed

    Cawthon, Stephanie

    2011-01-01

    Linguistic complexity of test items is one test format element that has been studied in the context of struggling readers and their participation in paper-and-pencil tests. The present article presents findings from an exploratory study on the potential relationship between linguistic complexity and test performance for deaf readers. A total of 64 students completed 52 multiple-choice items, 32 in mathematics and 20 in reading. These items were coded for linguistic complexity components of vocabulary, syntax, and discourse. Mathematics items had higher linguistic complexity ratings than reading items, but there were no significant relationships between item linguistic complexity scores and student performance on the test items. The discussion addresses issues related to the subject area, student proficiency levels in the test content, factors to look for in determining a "linguistic complexity effect," and areas for further research in test item development and deaf students.

  14. Computer-aided selection of diagnostic tests in jaundiced patients.

    PubMed Central

    Saint-Marc Girardin, M F; Le Minor, M; Alperovitch, A; Roudot-Thoraval, F; Metreau, J M; Dhumeaux, D

    1985-01-01

    A model has been developed for ordering diagnostic tests in jaundiced patients. The system proceeds in two steps: (i) diagnostic hypotheses are calculated for each patient from the results of physical examination and routine biological investigations; (ii) given these hypotheses, the most efficient test (out of 22) for reaching the final diagnosis is selected using four criteria: diagnostic value, risk, financial cost, and time in obtaining the result. This model was tested in 62 patients. In 43 of them (69%), the selected test was sufficient for reaching a diagnostic accuracy of 100%. In this group of patients, a mean of 3.7 (range 1-6) tests per patient was ordered by physicians. In the 19 remaining patients, the selected test was not sufficient for the final diagnosis, thus requiring a multiple choice process. It is suggested that such a system could help physicians to improve the care of patients by more efficient ordering of diagnostic tests. PMID:3896962
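
    A minimal sketch of the kind of multi-criteria ranking the abstract describes appears below; the candidate tests, metric values, weights, and scoring rule are entirely hypothetical, since the record does not specify how the four criteria are combined.

        # Rank candidate diagnostic tests on (diagnostic value, risk, cost, delay);
        # higher value is better, lower risk/cost/delay is better. All numbers invented.
        candidate_tests = {
            "ultrasound":      (0.70, 0.05, 1.0, 0.5),
            "cholangiography": (0.90, 0.25, 2.5, 1.0),
            "liver_biopsy":    (0.95, 0.40, 3.0, 2.0),
        }
        weights = {"value": 1.0, "risk": 0.5, "cost": 0.2, "delay": 0.2}

        def score(metrics):
            value, risk, cost, delay = metrics
            return (weights["value"] * value
                    - weights["risk"] * risk
                    - weights["cost"] * cost
                    - weights["delay"] * delay)

        best = max(candidate_tests, key=lambda name: score(candidate_tests[name]))
        print(best)  # the test proposed to the physician for this (hypothetical) patient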

  15. Development of an integrated process skill test: TIPS II

    NASA Astrophysics Data System (ADS)

    Burns, Joseph C.; Okey, James R.; Wise, Kevin C.

    The purpose of this project was to develop a valid and reliable science process skill test for middle and high school students. Multiple-choice items were generated for each of five objectives. Following pilot testing and revision, the test was administered to middle and high school students in the northeastern United States. The 36-item test can be completed in a normal class period. Results yielded a mean score of 19.14 and a total test reliability of 0.86. Mean difficulty and discrimination indices were 0.53 and 0.35, respectively. Split-test correlation coefficients between TIPS II and the original TIPS items were 0.86 and 0.90. TIPS II provides another reliable instrument for measuring process skill achievement. Additionally, it increases the available item pool for measuring these skills.
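
    For reference, the classical item statistics reported here are conventionally defined as follows (standard definitions, not taken from this record):

        p_i = \frac{\text{number answering item } i \text{ correctly}}{\text{number tested}}, \qquad D_i = p_i^{\text{upper}} - p_i^{\text{lower}}

    where the difficulty index p_i is the proportion correct and the discrimination index D_i compares that proportion between upper- and lower-scoring groups of examinees.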

  16. Analyzing Test-Taking Behavior: Decision Theory Meets Psychometric Theory.

    PubMed

    Budescu, David V; Bo, Yuanchao

    2015-12-01

    We investigate the implications of penalizing incorrect answers to multiple-choice tests, from the perspective of both test-takers and test-makers. To do so, we use a model that combines a well-known item response theory model with prospect theory (Kahneman and Tversky, Prospect theory: An analysis of decision under risk, Econometrica 47:263-91, 1979). Our results reveal that when test-takers are fully informed of the scoring rule, the use of any penalty has detrimental effects for both test-takers (they are always penalized in excess, particularly those who are risk averse and loss averse) and test-makers (the bias of the estimated scores, as well as the variance and skewness of their distribution, increase as a function of the severity of the penalty).
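
    For context, the most common penalty rule for k-option multiple-choice items, the classical formula score, is given here as background rather than as the authors' model:

        S = R - \frac{W}{k - 1}

    where R and W are the numbers of right and wrong answers. Under purely random guessing the expected penalty exactly offsets the expected gain, but, as the abstract notes, risk-averse and loss-averse test-takers end up penalized in excess of that neutral point.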

  17. Passageless comprehension on the Nelson-Denny reading test: well above chance for university students.

    PubMed

    Coleman, Chris; Lindstrom, Jennifer; Nelson, Jason; Lindstrom, William; Gregg, K Noël

    2010-01-01

    The comprehension section of the Nelson-Denny Reading Test (NDRT) is widely used to assess the reading comprehension skills of adolescents and adults in the United States. In this study, the authors explored the content validity of the NDRT Comprehension Test (Forms G and H) by asking university students (with and without at-risk status for learning disorders) to answer the multiple-choice comprehension questions without reading the passages. Overall accuracy rates were well above chance for both NDRT forms and both groups of students. These results raise serious questions about the validity of the NDRT and its use in the identification of reading disabilities.

  18. The effects of violating standard item writing principles on tests and students: the consequences of using flawed test items on achievement examinations in medical education.

    PubMed

    Downing, Steven M

    2005-01-01

    The purpose of this research was to study the effects of violations of standard multiple-choice item writing principles on test characteristics, student scores, and pass-fail outcomes. Four basic science examinations, administered to year-one and year-two medical students, were randomly selected for study. Test items were classified as either standard or flawed by three independent raters, blinded to all item performance data. Flawed test questions violated one or more standard principles of effective item writing. Thirty-six to sixty-five percent of the items on the four tests were flawed. Flawed items were 0-15 percentage points more difficult than standard items measuring the same construct. Over all four examinations, 646 (53%) students passed the standard items while 575 (47%) passed the flawed items. The median passing rate difference between flawed and standard items was 3.5 percentage points, but ranged from -1 to 35 percentage points. Item flaws had little effect on test score reliability or other psychometric quality indices. Results showed that flawed multiple-choice test items, which violate well established and evidence-based principles of effective item writing, disadvantage some medical students. Item flaws introduce the systematic error of construct-irrelevant variance to assessments, thereby reducing the validity evidence for examinations and penalizing some examinees.

  19. Phonetic Intelligibility Testing in Adults with Down Syndrome

    PubMed Central

    Bunton, Kate; Leddy, Mark; Miller, Jon

    2009-01-01

    The purpose of the study was to document speech intelligibility deficits for a group of five adult males with Down syndrome, and to use listener-based error profiles to identify phonetic dimensions underlying reduced intelligibility. Phonetic error profiles were constructed for each speaker using the Kent, Weismer, Kent, and Rosenbek (1989) word intelligibility test. The test was designed to allow for identification of reasons for the intelligibility deficit, quantitative analyses at varied levels, and sensitivity to potential speech deficits across populations. Listener-generated profiles were calculated based on a multiple-choice task and a transcription task. The most disrupted phonetic features, across listening tasks, involved simplification of clusters in both the word-initial and word-final position, and contrasts involving tongue posture, control, and timing (e.g., high-low vowel, front-back vowel, and place of articulation for stops and fricatives). Differences between speakers in the ranking of these phonetic features were found; however, the mean error proportion for the six most severely affected features correlated highly with the overall intelligibility score (0.88 for the multiple-choice task, 0.94 for the transcription task). The phonetic feature analyses are an index that may help clarify the suspected motor speech basis for the speech intelligibility deficits seen in adults with Down syndrome and may lead to improved speech management in these individuals. PMID:17692179

  20. Weekly quizzes in extended-matching format as a means of monitoring students' progress in gross anatomy.

    PubMed

    Lukić, I K; Gluncić, V; Katavić, V; Petanjek, Z; Jalsovec, D; Marusić, A

    2001-11-01

    We compared weekly quizzes in extended-matching format with multiple-choice questions and oral examinations as means of monitoring students' progress in gross anatomy. Students' performance on 19 weekly oral examinations or 10-question quizzes based on extended-matching or multiple-choice formats was correlated with their success on 3 interim examinations and the final comprehensive examination. The Kuder-Richardson formula 20, an estimate of the precision of the test, was 0.64 for extended-matching quizzes. Students' performance on interim examinations did not differ significantly. There was a significant correlation between students' mean scores on weekly quizzes and mean scores on interim examinations in both the extended-matching (r = 0.516) and multiple-choice (r = 0.823) groups. The mean grades (ranging from 2 to 5) on the final exam, based on understanding of anatomical concepts and their application in clinical practice, were significantly higher in the extended-matching group (4.8) than in the multiple-choice (4.1) and orally examined (3.9) groups (p < 0.05). We conclude that extended-matching quizzes were at least as effective as multiple-choice quizzes and oral examinations and may be better for acquiring a synthetic understanding of anatomical concepts, especially in combination with other means of knowledge assessment. We recommend them as a reliable and objective means of monitoring students' performance during a gross anatomy course.
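
    For reference, the Kuder-Richardson formula 20 cited above is (standard definition, not taken from this record):

        KR_{20} = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)

    where k is the number of items, p_i and q_i are the proportions of examinees answering item i correctly and incorrectly, and \sigma_X^2 is the variance of total scores.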

  1. Classification Accuracy of Mixed Format Tests: A Bi-Factor Item Response Theory Approach

    PubMed Central

    Wang, Wei; Drasgow, Fritz; Liu, Liwen

    2016-01-01

    Mixed format tests (e.g., a test consisting of multiple-choice [MC] items and constructed response [CR] items) have become increasingly popular. However, the latent structure of item pools consisting of the two formats is still equivocal. Moreover, the implications of this latent structure are unclear: For example, do constructed response items tap reasoning skills that cannot be assessed with multiple choice items? This study explored the dimensionality of mixed format tests by applying bi-factor models to 10 tests of various subjects from the College Board's Advanced Placement (AP) Program and compared the accuracy of scores based on the bi-factor analysis with scores derived from a unidimensional analysis. More importantly, this study focused on a practical and important question—classification accuracy of the overall grade on a mixed format test. Our findings revealed that the degree of multidimensionality resulting from the mixed item format varied from subject to subject, depending on the disattenuated correlation between scores from MC and CR subtests. Moreover, remarkably small decrements in classification accuracy were found for the unidimensional analysis when the disattenuated correlations exceeded 0.90. PMID:26973568
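
    The "disattenuated correlation" that the findings hinge on is, in the usual treatment, the observed correlation between MC and CR subscores corrected for the unreliability of both subtests (Spearman's correction for attenuation). A brief sketch follows; the numbers are illustrative assumptions, not values from the study.

      # Spearman's correction for attenuation: r_true = r_obs / sqrt(rel_x * rel_y).
      # Illustrative values only; the study's subtest reliabilities are not given here.
      import math

      def disattenuated_r(r_observed: float, rel_x: float, rel_y: float) -> float:
          """Observed correlation corrected for unreliability of both measures."""
          return r_observed / math.sqrt(rel_x * rel_y)

      # e.g. an observed r of 0.78 between MC and CR subscores, with subtest reliabilities 0.85 and 0.80
      print(round(disattenuated_r(0.78, 0.85, 0.80), 2))   # ~0.95, above the 0.90 threshold noted above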

  2. Are aphasic patients who fail the GOAT in PTA? A modified Galveston Orientation and Amnesia Test for persons with aphasia.

    PubMed

    Jain, N S; Layton, B S; Murray, P K

    2000-02-01

    Because the Galveston Orientation and Amnesia Test (GOAT) requires oral or written response, it risks misclassifying as amnestic those aphasic patients who are not, in fact, amnestic. To correct for possible classification errors due to anomia, a modified multiple-choice format of the GOAT (AGOAT) was developed. The average GOAT score of 10 control nonaphasic head-injured patients suggested that an AGOAT score of 90 corresponds to the standard GOAT cutoff of 75 for resolution of posttraumatic amnesia (PTA). Using this criterion, 8 of 15 aphasic head-injured patients who technically were classified as amnestic on the GOAT were classified as nonamnestic on the AGOAT.

  3. Development of three-tier heat, temperature and internal energy diagnostic test

    NASA Astrophysics Data System (ADS)

    Gurcay, Deniz; Gulbas, Etna

    2015-05-01

    Background: Misconceptions are major obstacles to learning physics, and the concepts of heat and temperature are some of the common misconceptions that are encountered in daily life. Therefore, it is important to develop valid and reliable tools to determine students' misconceptions about basic thermodynamics concepts. Three-tier tests are effective assessment tools to determine misconceptions in physics. Although a limited number of three-tier tests about heat and temperature are discussed in the literature, no reports discuss three-tier tests that simultaneously consider heat, temperature and internal energy. Purpose: The aim of this study is to develop a valid and reliable three-tier test to determine students' misconceptions about heat, temperature and internal energy. Sample: The sample consists of 462 11th-grade Anatolian high school students. Of the participants, 46.8% were female and 53.2% were male. Design and methods: This research takes the form of a survey study. Initially, a multiple-choice test was developed. To each multiple-choice question was added one open-ended question asking the students to explain their answers. This test was then administered to 259 high school students and the data were analyzed both quantitatively and qualitatively. The students' answers for each open-ended question were analyzed and used to create the choices for the second-tier questions of the test. Depending on those results, a three-tier Heat, Temperature and Internal Energy Diagnostic Test (HTIEDT) was developed by adding a second-tier and certainty response index to each item. This three-tier test was administered to the sample of 462 high school students. Results: The Cronbach alpha reliability for the test was estimated for correct and misconception scores as .75 and .68, respectively. The results of the study suggested that HTIEDT could be used as a valid and reliable test in determining misconceptions about heat, temperature and internal energy concepts.

  4. Guide to good practices for the development of test items

    SciTech Connect

    1997-01-01

    While the methodology used in developing test items can vary significantly, to ensure quality examinations, test items should be developed systematically. Test design and development is discussed in the DOE Guide to Good Practices for Design, Development, and Implementation of Examinations. This guide is intended as a supplement, providing more detailed guidance on the development of specific test items. It primarily addresses the development of written examination test items, although many of the concepts also apply to oral examinations, both in the classroom and on the job. The guide is intended for use by the classroom or laboratory instructor or curriculum developer responsible for the construction of individual test items. It focuses on written test items but also includes information on open-reference (open book) examination test items. These test items have been categorized as short-answer, multiple-choice, or essay. Each test item format is described, examples are provided, and a procedure for development is included. The appendices provide examples for writing test items, a test item development form, and examples of various test item formats.

  5. [Experience with new teaching methods and testing in psychiatric training].

    PubMed

    Schäfer, M; Georg, W; Mühlinghaus, I; Fröhmel, A; Rolle, D; Pruskil, S; Heinz, A; Burger, W

    2007-03-01

    In 1999, the Charité Medical University in Berlin, Germany, implemented a reformed medical study course (RMSC) along with traditional undergraduate medical education. The RMSC is characterized by problem-based learning (PBL), training in communication skills with "simulated patients", and interdisciplinary seminars. The curriculum is organized into blocks according to organ system and age (period of life). In a new intensive 4-week psychiatric block, 4th-year students get practical experience in psychiatric wards. Furthermore, PBL groups and workshops are offered that focus on frequent psychiatric disorders. By providing interactive courses with simulated patients, students are intensively trained in taking psychiatric histories and in generating psychopathological findings. Defined learning objectives are tested using multiple-choice items and objective structured clinical examinations at the end of the semester. First positive results indicate that this course represents an appropriate and practicable curriculum for teaching psychiatry in Germany.

  6. Assessment test before the reporting phase of tutorial session in problem-based learning

    PubMed Central

    Bestetti, Reinaldo B; Couto, Lucélio B; Restini, Carolina BA; Faria, Milton; Romão, Gustavo S

    2017-01-01

    Purpose: In our context, problem-based learning is not used in the preuniversity environment. Consequently, students have a great deal of difficulty adapting to this method, particularly regarding self-study before the reporting phase of a tutorial session. Accordingly, the aim of this study was to assess if the application of an assessment test (multiple-choice questions) before the reporting phase of a tutorial session would improve the academic achievement of students at the preclinical stage of our medical course. Methods: A test consisting of five multiple-choice questions, prepared by tutors of the module at hand and related to the problem-solving process of each tutorial session, was applied following the self-study phase and immediately before the reporting phase of all tutorial sessions. The questions were based on the previously established student learning goals. The assessment was applied to all modules from the fifth to the eighth semesters. The final scores achieved by students in the end-of-module tests were compared. Results: Overall, the mean test score was 65.2±0.7% before and 68.0±0.7% after the introduction of an assessment test before the reporting phase (P<0.05). Students in the sixth semester scored 67.6±1.6% compared to 63.9±2.2% when they were in the fifth semester (P<0.05). Students in the seventh semester achieved a similar score to their sixth semester score (64.6±2.6% vs 63.3±2%, respectively, P>0.05). Students in the eighth semester scored 71.8±2.3% compared to 70±2% when they were in the seventh semester (P>0.05). Conclusion: In our medical course, the application of an assessment test (a multiple-choice test) before the reporting phase of the problem-based learning tutorial process increases the overall academic achievement of students, especially of those in the sixth semester in comparison with when they were in the fifth semester. PMID:28280404

  7. VLAT: Development of a Visualization Literacy Assessment Test.

    PubMed

    Lee, Sukwon; Kim, Sung-Hee; Kwon, Bum Chul

    2017-01-01

    The Information Visualization community has begun to pay attention to visualization literacy; however, researchers still lack instruments for measuring the visualization literacy of users. In order to address this gap, we systematically developed a visualization literacy assessment test (VLAT), especially for non-expert users in data visualization, by following the established procedure of test development in Psychological and Educational Measurement: (1) Test Blueprint Construction, (2) Test Item Generation, (3) Content Validity Evaluation, (4) Test Tryout and Item Analysis, (5) Test Item Selection, and (6) Reliability Evaluation. The VLAT consists of 12 data visualizations and 53 multiple-choice test items that cover eight data visualization tasks. The test items in the VLAT were evaluated with respect to their essentialness by five domain experts in Information Visualization and Visual Analytics (average content validity ratio = 0.66). The VLAT was also tried out on a sample of 191 test takers and showed high reliability (reliability coefficient omega = 0.76). In addition, we demonstrated the relationship between users' visualization literacy and aptitude for learning an unfamiliar visualization and showed that they had a fairly high positive relationship (correlation coefficient = 0.64). Finally, we discuss evidence for the validity of the VLAT and potential research areas that are related to the instrument.
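
    The content validity figure reported above (average content validity ratio of 0.66 across five experts) is conventionally computed with Lawshe's formula, CVR = (n_e - N/2) / (N/2), where n_e is the number of experts rating an item essential and N is the panel size. A small illustrative sketch assuming that standard definition is shown below; the VLAT paper may operationalise it with minor variations.

      # Lawshe's content validity ratio (assumed standard definition; illustrative only).
      def content_validity_ratio(n_essential: int, n_experts: int) -> float:
          """CVR = (n_e - N/2) / (N/2)."""
          return (n_essential - n_experts / 2) / (n_experts / 2)

      # Possible values with a five-expert panel, as in the VLAT study
      for n_e in range(6):
          print(f"{n_e} of 5 experts rate the item essential -> CVR = {content_validity_ratio(n_e, 5):+.1f}")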

  8. Development and application of a two-tier diagnostic test measuring college biology students' understanding of diffusion and osmosis after a course of instruction

    NASA Astrophysics Data System (ADS)

    Odom, Arthur Louis; Barrow, Lloyd H.

    This study involved the development and application of a two-tier diagnostic test measuring college biology students' understanding of diffusion and osmosis after a course of instruction. The development procedure had three general steps: defining the content boundaries of the test, collecting information on students' misconceptions, and instrument development. Misconception data were collected from interviews and multiple-choice questions with free response answers. The data were used to develop 12 two-tier multiple choice items in which the first tier examined content knowledge and the second examined understanding of that knowledge. The conceptual knowledge examined was the particulate and random nature of matter, concentration and tonicity, the influence of life forces on diffusion and osmosis, membranes, kinetic energy of matter, the process of diffusion, and the process of osmosis. The diagnostic instrument was administered to 240 students (123 non-biology majors and 117 biology majors) enrolled in a college freshman biology laboratory course. The students had completed a unit on diffusion and osmosis. The content taught was carefully defined by propositional knowledge statements, and was the same content that defined the content boundaries of the test. The split-half reliability was .74. Difficulty indices ranged from 0.23 to 0.95, and discrimination indices ranged from 0.21 to 0.65. Each item was analyzed to determine student understanding of, and identify misconceptions about, diffusion and osmosis.

  9. ESMO - Magnitude of Clinical Benefit Scale V.1.0 questions and answers

    PubMed Central

    Cherny, N I; Sullivan, R; Dafni, U; Kerst, J M; Sobrero, A; Zielinski, C; Piccart, M J; Bogaerts, J; Tabernero, J; Latino, N J; de Vries, E G E

    2016-01-01

    The ESMO Magnitude of Clinical Benefit Scale (ESMO-MCBS) is a standardised, generic, validated tool to stratify the magnitude of clinical benefit that can be anticipated from anticancer therapies. The ESMO-MCBS is intended both to assist oncologists in explaining the likely benefits of a particular treatment to their patients and to aid public health decision makers in prioritising therapies for reimbursement. From its inception, the ESMO-MCBS Working Group has invited questions and critiques to promote understanding, to address misunderstandings regarding the nuanced use of the scale, and to identify shortcomings in the scale to be addressed in future planned revisions and updates. The ESMO-MCBS V.1.0 has attracted many questions regarding its development, structure and potential applications. These questions, together with responses from the ESMO-MCBS Working Group, have been edited and collated, and are herein presented as a supplementary resource. PMID:27900206

  10. Multiple-Choice Item Difficulty: The Effects of Language and Distracter Set Similarity.

    ERIC Educational Resources Information Center

    Green, Kathy E.

    The purpose of this study was to determine whether item difficulty is significantly affected by language difficulty and response set convergence. Language difficulty was varied by increasing sentence (stem) length, increasing syntactic complexity, and substituting uncommon words for more familiar terms in the item stem. Item wording ranged from…

  11. Multiple Choice: How Public School Leaders in New Orleans' Saturated Market View Private School Competitors

    ERIC Educational Resources Information Center

    Jabbar, Huriya; Li, Dongmei M.

    2016-01-01

    School choice policies, such as charter schools and vouchers, are in part designed to induce competition between schools. While several studies have examined the impact of private school competition on public schools, few studies have explored school leaders' perceptions of private school competitors. This study examines the extent to which public…

  12. Self-organization and phase transition in financial markets with multiple choices

    NASA Astrophysics Data System (ADS)

    Zhong, Li-Xin; Xu, Wen-Juan; Huang, Ping; Qiu, Tian; He, Yun-Xin; Zhong, Chen-Yang

    2014-09-01

    Market confidence is essential for successful investing. By incorporating multiple markets into the evolutionary minority game, we investigate the effects of investor beliefs on the evolution of collective behaviors and asset prices. It is found that the role of market confidence is closely related to whether or not another market exists. When there is another investment opportunity, different levels of market confidence may lead to the same price fluctuations and the same investment attainment. There are two feedback effects. Being overly optimistic about a particular asset makes an investor insensitive to losses; the resulting delayed strategy adjustment leads to a decline in wealth and eventual withdrawal from the market. The withdrawal of these agents results in the optimization of the strategy distributions and an increase in wealth. Being overly pessimistic about a particular asset makes an investor over-sensitive to losses; overly frequent strategy adjustment likewise leads to a decline in wealth. The withdrawal of these agents results in the improvement of the market environment and an increase in wealth.

  13. Brief Daily Writing Activities and Performance on Major Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Turner, Haley C.; Bliss, Stacy L.; Hautau, Briana; Carroll, Erin; Jaspers, Kathryn E.; Williams, Robert L.

    2006-01-01

    Although past research indicates that giving brief quizzes, administered either regularly or randomly, may lead to improvement in students' performance on major exams, negligible research has targeted daily writing activities that require the processing of course information at a deeper level than might result from simply reading course materials…

  14. Multiple Choices after School: Findings from the Extended-Service Schools Initiative.

    ERIC Educational Resources Information Center

    Grossman, Jean Baldwin; Price, Marilyn L.; Fellerath, Veronica; Jucovy, Linda Z.; Kotloff, Lauren J.; Raley, Rebecca; Walker, Karen E.

    This study evaluated the effectiveness of the Extended-Service Schools (ESS) Initiative, which supported the creation of 60 after school programs in 20 low-income communities nationwide. Each community adapted one of four nationally recognized models that had been successfully developed and implemented in other cities. The models all promoted…

  15. The Multiple-Choice Concept Map (MCCM): An Interactive Computer-Based Assessment Method

    ERIC Educational Resources Information Center

    Sas, Ioan Ciprian

    2010-01-01

    This research attempted to bridge the gap between cognitive psychology and educational measurement (Mislevy, 2008; Leighton & Gierl, 2007; Nichols, 1994; Messick, 1989; Snow & Lohman, 1989) by using cognitive theories from working memory (Baddeley, 1986; Miyake & Shah, 1999; Grimley & Banner, 2008), multimedia learning (Mayer, 2001), and cognitive…

  16. The Effects of Images on Multiple-Choice Questions in Computer-Based Formative Assessment

    ERIC Educational Resources Information Center

    Martín-SanJosé, Juan Fernando; Juan, M.-Carmen; Vivó, Roberto; Abad, Francisco

    2015-01-01

    Current learning and assessment are evolving into digital systems that can be used, stored, and processed online. In this paper, three different types of questionnaires for assessment are presented. All the questionnaires were filled out online on a web-based format. A study was carried out to determine whether the use of images related to each…

  17. An Investigation of the Representativeness Heuristic: The Case of a Multiple Choice Exam

    ERIC Educational Resources Information Center

    Chernoff, Egan J.; Mamolo, Ami; Zazkis, Rina

    2016-01-01

    By focusing on a particular alteration of the comparative likelihood task, this study contributes to research on teachers' understanding of probability. Our novel task presented prospective teachers with multinomial, contextualized sequences and asked them to identify which was least likely. Results demonstrate that determinants of…

  18. Differential Daily Writing Conditions and Performance on Major Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Hautau, Briana; Turner, Haley C.; Carroll, Erin; Jaspers, Kathryn; Krohn, Katy; Parker, Megan; Williams, Robert L.

    2006-01-01

    Students (N=153) in three equivalent sections of an undergraduate human development course compared pairs of related concepts via either written or oral discussion at the beginning of most class sessions. A writing-for-random-credit section achieved significantly higher ratings on the writing activities than did a writing-for-no-credit section.…

  19. Scoring Issues in Selected Statewide Assessment Programs Using Non-Multiple-Choice Formats.

    ERIC Educational Resources Information Center

    Kahl, Stuart R.

    Although few question the positive impacts alternative forms of assessment can have on instruction, concerns about the psychometric quality of data obtained from such assessments are taking their toll. Scoring issues are at the heart of many of these concerns. This paper addresses the causes of these concerns: misinformation about psychometric…

  20. Lipids for intravenous nutrition in hospitalised adult patients: a multiple choice of options.

    PubMed

    Calder, Philip C

    2013-08-01

    Lipids used in parenteral nutrition provide energy, building blocks and essential fatty acids. Traditionally, these lipids have been based on n-6 PUFA-rich vegetable oils, particularly soyabean oil. This may not be optimal because soyabean oil may provide an excessive supply of linoleic acid. Alternatives to the use of soyabean oil include its partial replacement by medium-chain TAG, olive oil or fish oil, either alone or in combination. Lipid emulsions containing these alternatives are well tolerated without adverse effects in a wide range of hospitalised adult patients. Lipid emulsions that include fish oil have been used in parenteral nutrition in adult patients post-surgery (mainly gastrointestinal). This has been associated with alterations in patterns of inflammatory mediators and in immune function and, in some studies, a reduction in length of intensive care unit and hospital stay. These benefits are emphasised in recent meta-analyses. Perioperative administration of fish oil may be superior to post-operative administration. Parenteral fish oil has also been used in critically ill adults. Here, the influence on inflammatory processes, immune function and clinical endpoints is not clear, since there are too few studies and those that are available report contradictory findings. However, some studies found reduced inflammation, improved gas exchange and shorter length of hospital stay in critically ill patients receiving fish oil. More and better trials are needed in patient groups in which parenteral nutrition is used and where fish oil may offer benefits.

  1. Instructor Perspectives of Multiple-Choice Questions in Summative Assessment for Novice Programmers

    ERIC Educational Resources Information Center

    Shuhidan, Shuhaida; Hamilton, Margaret; D'Souza, Daryl

    2010-01-01

    Learning to program is known to be difficult for novices. High attrition and high failure rates in foundation-level programming courses undertaken at tertiary level in Computer Science programs, are commonly reported. A common approach to evaluating novice programming ability is through a combination of formative and summative assessments, with…

  2. On the Correction for Guessing on a Multiple-Choice Examination.

    ERIC Educational Resources Information Center

    Hamdan, M. A.

    1979-01-01

    The distribution theory underlying corrections for guessing is analyzed, and the probability distributions of the random variables are derived. The correction in grade, based on random guessing of unknown answers, is compared with corrections based on educated guessing. (Author/MH)

  3. The Role of Professional Identity in Patterns of Use of Multiple-Choice Assessment Tools

    ERIC Educational Resources Information Center

    Johannesen, Monica; Habib, Laurence

    2010-01-01

    This article uses the notion of professional identity within the framework of actor network theory to understand didactic practices within three faculties in an institution of higher education. The study is based on a series of interviews with lecturers in each faculty and diaries of their didactic practices. The article focuses on the use of a…

  4. Intuitive Judgments Govern Students' Answering Patterns in Multiple-Choice Exercises in Organic Chemistry

    ERIC Educational Resources Information Center

    Graulich, Nicole

    2015-01-01

    Research in chemistry education has revealed that students going through their undergraduate and graduate studies in organic chemistry have a fragmented conceptual knowledge of the subject. Rote memorization, rule-based reasoning, and heuristic strategies seem to strongly influence students' performances. There appears to be a gap between what we…

  5. The effects of a test-taking strategy intervention for high school students with test anxiety in advanced placement science courses

    NASA Astrophysics Data System (ADS)

    Markus, Doron J.

    Test anxiety is one of the most debilitating and disruptive factors associated with underachievement and failure in schools (Birenbaum & Nasser, 1994; Tobias, 1985). Researchers have suggested that interventions that combine multiple test-anxiety reduction techniques are most effective at reducing test anxiety levels (Ergene, 2003). For the current study, involving 62 public high school students enrolled in advanced placement science courses, the researcher implemented a multimodal intervention designed to reduce test anxiety. Analyses were conducted to assess the relationships among test anxiety levels, unit examination scores, and irregular multiple-choice error patterns (error clumping), as well as changes in these measures after the intervention. Results indicate significant, positive relationships between some measures of test anxiety and error clumping, as well as significant, negative relationships between test anxiety levels and student achievement. In addition, results show significant decreases in holistic measures of test anxiety among students with low anxiety levels, as well as decreases in Emotionality subscores of test anxiety among students with high levels of test anxiety. There were no significant changes over time in the Worry subscores of test anxiety. Suggestions for further research include further confirmation of the existence of error clumping and its causal relationship with test anxiety.

  6. The Influence of a Response Format Test Accommodation for College Students with and without Disabilities

    ERIC Educational Resources Information Center

    Potter, Kyle; Lewandowski, Lawrence; Spenceley, Laura

    2016-01-01

    Standardised and other multiple-choice examinations often require the use of an answer sheet with fill-in bubbles (i.e. "bubble" or Scantron sheet). Students with disabilities causing impairments in attention, learning and/or visual-motor skill may have difficulties with multiple-choice examinations that employ such a response style.…

  7. The Positive and Negative Effects of Science Concept Tests on Student Conceptual Understanding

    NASA Astrophysics Data System (ADS)

    Chang, Chun-Yen; Yeh, Ting-Kuang; Barufaldi, James P.

    2010-01-01

    This study explored the phenomenon of the testing effect during science concept assessments, including the mechanism behind it and its impact upon a learner's conceptual understanding. The participants consisted of 208 high school students, in either the 11th or 12th grade. Three types of tests (traditional multiple-choice test, correct concept test, and incorrect concept test) related to the greenhouse effect and global warming were developed to explore the mechanisms underlying the testing effect. Interview data analyzed by means of the flow-map method were used to examine the two-week post-test consequences of taking one of these three tests. The results indicated: (1) Traditional tests can affect participants' long-term memory, both positively and negatively; in addition, when students ponder repeatedly and think harder about highly distracting choices during a test, they may gradually develop new conceptions; (2) Students develop more correct conceptions when more true descriptions are provided on the tests; on the other hand, students develop more misconceptions while completing tests in which more false descriptions of choices are provided. Finally, the results of this study revealed a noteworthy phenomenon: tests, if employed appropriately, may also be an effective instrument for assisting students' conceptual understanding.

  8. Explicit versus implicit social cognition testing in autism spectrum disorder

    PubMed Central

    Callenmark, Björn; Kjellin, Lars; Rönnqvist, Louise

    2014-01-01

    Although autism spectrum disorder is defined by reciprocal social-communication impairments, several studies have found no evidence for altered social cognition test performance. This study examined explicit (i.e. prompted) and implicit (i.e. spontaneous) variants of social cognition testing in autism spectrum disorder. A sample of 19 adolescents with autism spectrum disorder and 19 carefully matched typically developing controls completed the Dewey Story Test. ‘Explicit’ (multiple-choice answering format) and ‘implicit’ (free interview) measures of social cognition were obtained. Autism spectrum disorder participants did not differ from controls regarding explicit social cognition performance. However, the autism spectrum disorder group performed more poorly than controls on implicit social cognition performance in terms of spontaneous perspective taking and social awareness. Findings suggest that social cognition alterations in autism spectrum disorder are primarily implicit in nature and that an apparent absence of social cognition difficulties on certain tests using rather explicit testing formats does not necessarily mean social cognition typicality in autism spectrum disorder. PMID:24104519

  9. Explicit versus implicit social cognition testing in autism spectrum disorder.

    PubMed

    Callenmark, Björn; Kjellin, Lars; Rönnqvist, Louise; Bölte, Sven

    2014-08-01

    Although autism spectrum disorder is defined by reciprocal social-communication impairments, several studies have found no evidence for altered social cognition test performance. This study examined explicit (i.e. prompted) and implicit (i.e. spontaneous) variants of social cognition testing in autism spectrum disorder. A sample of 19 adolescents with autism spectrum disorder and 19 carefully matched typically developing controls completed the Dewey Story Test. 'Explicit' (multiple-choice answering format) and 'implicit' (free interview) measures of social cognition were obtained. Autism spectrum disorder participants did not differ from controls regarding explicit social cognition performance. However, the autism spectrum disorder group performed more poorly than controls on implicit social cognition performance in terms of spontaneous perspective taking and social awareness. Findings suggest that social cognition alterations in autism spectrum disorder are primarily implicit in nature and that an apparent absence of social cognition difficulties on certain tests using rather explicit testing formats does not necessarily mean social cognition typicality in autism spectrum disorder.

  10. FAA Pilot Knowledge Tests: Learning or Rote Memorization?

    NASA Technical Reports Server (NTRS)

    Casner, Stephen M.; Jones, Karen M.; Puentes, Antonio; Irani, Homi

    2004-01-01

    The FAA pilot knowledge test is a multiple-choice assessment tool designed to measure the extent to which applicants for FAA pilot certificates and ratings have mastered a corpus of required aeronautical knowledge. All questions that appear on the test are drawn from a database of questions that is made available to the public. The FAA and others are concerned that releasing test questions may encourage students to focus their study on memorizing test questions. To investigate this concern, we created our own database of questions that differed from FAA questions in four different ways. Our first three question types were derived by modifying existing FAA questions: (1) rewording questions and answers; (2) shuffling answers; and (3) substituting different figures for problems that used figures. Our last question type posed a question about required knowledge for which no FAA question currently exists. Forty-eight student pilots completed one of two paper-and-pencil knowledge tests that contained a mix of these experimental questions. The results indicate significantly lower scores for some question types when compared to unaltered FAA questions to which participants had prior access.

  11. Effect of Examinee Certainty on Probabilistic Test Scores and a Comparison of Scoring Methods for Probabilistic Responses.

    ERIC Educational Resources Information Center

    Suhadolnik, Debra; Weiss, David J.

    The present study was an attempt to alleviate some of the difficulties inherent in multiple-choice items by having examinees respond to multiple-choice items in a probabilistic manner. Using this format, examinees are able to respond to each alternative and to provide indications of any partial knowledge they may possess concerning the item. The…

  12. Integrating personalized medical test contents with XML and XSL-FO

    PubMed Central

    2011-01-01

    Background: In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how organizational challenges of realizing faculty-wide personalized tests were addressed by implementation of a specialized software module to automatically generate test sheets from individual test registrations and MCQ contents. Methods: Key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. Results: The software module, by use of an open source print formatter, consistently produced high-quality test sheets, while the blending of vectorized textual contents and pixel graphics resulted in efficient output file sizes. Concomitantly the module permitted an individual randomization of item sequences to prevent illicit collusion. Conclusions: The automatic generation of personalized MCQ test sheets is feasible using freely available open source software libraries, and can be efficiently deployed on a faculty-wide scale. PMID:21362187
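
    Step (2) of the pipeline described above, producing an XML intermediate per registered examinee before the transformation to paginated output, can be pictured with the minimal sketch below. The element names (test_sheet, item, choice) and the mention of Apache FOP as the open source print formatter are assumptions for illustration; the faculty's actual schema and XSL-FO stylesheets are not described in the record.

      # Sketch of building an XML intermediate for one personalized MCQ test sheet.
      # Element names are invented; rendering to a paginated PDF would be done by an
      # XSL-FO stylesheet plus a print formatter such as Apache FOP (assumed, not stated).
      import xml.etree.ElementTree as ET

      def build_test_sheet(student_id: str, items: list) -> ET.ElementTree:
          root = ET.Element("test_sheet", attrib={"student": student_id})
          for pos, item in enumerate(items, start=1):
              node = ET.SubElement(root, "item", attrib={"pos": str(pos)})
              ET.SubElement(node, "stem").text = item["stem"]
              for label, text in item["choices"]:
                  ET.SubElement(node, "choice", attrib={"label": label}).text = text
          return ET.ElementTree(root)

      items = [{"stem": "Which vitamin is fat-soluble?",
                "choices": [("A", "Vitamin C"), ("B", "Vitamin D"), ("C", "Vitamin B1")]}]
      build_test_sheet("12345", items).write("sheet_12345.xml", encoding="utf-8", xml_declaration=True)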

  13. Test Architecture, Test Retrofit

    ERIC Educational Resources Information Center

    Fulcher, Glenn; Davidson, Fred

    2009-01-01

    Just like buildings, tests are designed and built for specific purposes, people, and uses. However, both buildings and tests grow and change over time as the needs of their users change. Sometimes, they are also both used for purposes other than those intended in the original designs. This paper explores architecture as a metaphor for language…

  14. Use of the NBME Comprehensive Basic Science Examination as a progress test in the preclerkship curriculum of a new medical school.

    PubMed

    Johnson, Teresa R; Khalil, Mohammed K; Peppler, Richard D; Davey, Diane D; Kibble, Jonathan D

    2014-12-01

    In the present study, we describe the innovative use of the National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) as a progress test during the preclerkship medical curriculum. The main aim of this study was to provide external validation of internally developed multiple-choice assessments in a new medical school. The CBSE is a practice exam for the United States Medical Licensing Examination (USMLE) Step 1 and is purchased directly from the NBME. We administered the CBSE five times during the first 2 yr of medical school. Student scores were compared with scores on newly created internal summative exams and to the USMLE Step 1. Significant correlations were observed between almost all our internal exams and CBSE scores over time as well as with USMLE Step 1 scores. The strength of correlations of internal exams to the CBSE and USMLE Step 1 broadly increased over time during the curriculum. Student scores on courses that have strong emphasis on physiology and pathophysiology correlated particularly well with USMLE Step 1 scores. Student progress, as measured by the CBSE, was found to be linear across time, and test performance fell behind the anticipated level by the end of the formal curriculum. These findings are discussed with respect to student learning behaviors. In conclusion, the CBSE was found to have good utility as a progress test and provided external validation of our new internally developed multiple-choice assessments. The data also provide performance benchmarks both for our future students to formatively assess their own progress and for other medical schools to compare learning progression patterns in different curricular models.

  15. Testing primary-school children's understanding of the nature of science.

    PubMed

    Koerber, Susanne; Osterhaus, Christopher; Sodian, Beate

    2015-03-01

    Understanding the nature of science (NOS) is a critical aspect of scientific reasoning, yet few studies have investigated its developmental beginnings and initial structure. One contributing reason is the lack of an adequate instrument. Two studies assessed NOS understanding among third graders using a multiple-select (MS) paper-and-pencil test. Study 1 investigated the validity of the MS test by presenting the items to 68 third graders (9-year-olds) and subsequently interviewing them on their underlying NOS conception of the items. All items were significantly related between formats, indicating that the test was valid. Study 2 applied the same instrument to a larger sample of 243 third graders, and their performance was compared to a multiple-choice (MC) version of the test. Although the MC format inflated the guessing probability, there was a significant relation between the two formats. In summary, the MS format was a valid method revealing third graders' NOS understanding, thereby representing an economical test instrument. A latent class analysis identified three groups of children with expertise in qualitatively different aspects of NOS, suggesting that there is not a single common starting point for the development of NOS understanding; instead, multiple developmental pathways may exist.

  16. An interactive computer program can effectively educate potential users of cystic fibrosis carrier tests.

    PubMed

    Castellani, Carlo; Perobelli, Sandra; Bianchi, Vera; Seia, Manuela; Melotti, Paola; Zanolla, Luisa; Assael, Baroukh Maurice; Lalatta, Faustina

    2011-04-01

    The demand for cystic fibrosis (CF) carrier testing is steadily growing, not only from individuals with raised a priori carrier risk, but also from the general population. This trend will likely exceed the availability of genetic counselors, making it impossible to provide standard face-to-face genetic counseling to all those asking for the test. In order to reduce the time needed to educate individuals on the basics of the disease, its genetic transmission, and carrier testing peculiarities, we developed an educational method based on an interactive computer program (IC). To assess the effectiveness of this program and to compare it to a classical genetic counseling session, we conducted a comparative trial. In a population setting of people undergoing assisted reproduction, 44 individuals were randomly assigned to receive either standard one-on-one genetic counseling or education by the IC program. We measured pre- and post-intervention knowledge about CF genetic transmission and carrier testing. Starting from an equivalent baseline of correct answers to a specially designed multiple-choice questionnaire (47% in the counselor group and 45% in the computer group), both groups showed a highly significant and similar increase (reaching 84% in the counselor group and 85% in the computer group). The computer program under evaluation can successfully educate individuals considering genetic testing for CF.

  17. American Sign Language Comprehension Test: A Tool for Sign Language Researchers.

    PubMed

    Hauser, Peter C; Paludneviciene, Raylene; Riddle, Wanda; Kurz, Kim B; Emmorey, Karen; Contreras, Jessica

    2016-01-01

    The American Sign Language Comprehension Test (ASL-CT) is a 30-item multiple-choice test that measures ASL receptive skills and is administered through a website. This article describes the development and psychometric properties of the test based on a sample of 80 college students including deaf native signers, hearing native signers, deaf non-native signers, and hearing ASL students. The results revealed that the ASL-CT has good internal reliability (α = 0.834). Discriminant validity was established by demonstrating that deaf native signers performed significantly better than deaf non-native signers and hearing native signers. Concurrent validity was established by demonstrating that test results positively correlated with another measure of ASL ability (r = .715) and that hearing ASL students' performance positively correlated with the level of ASL courses they were taking (r = .726). Researchers can use the ASL-CT to characterize an individual's ASL comprehension skills, to establish a minimal skill level as an inclusion criterion for a study, to group study participants by ASL skill (e.g., proficient vs. nonproficient), or to provide a measure of ASL skill as a dependent variable.

  18. Differences in gender performance on competitive physics selection tests

    NASA Astrophysics Data System (ADS)

    Wilson, Kate; Low, David; Verdon, Matthew; Verdon, Alix

    2016-12-01

    [This paper is part of the Focused Collection on Gender in Physics.] We have investigated gender differences in performance over the past eight years on the Australian Science Olympiad Exam (ASOE) for physics, which is taken by nearly 1000 high school students each year. The ASOE, run by Australian Science Innovations (ASI), is the initial stage of the process of selection of teams to represent Australia at the Asian and International Physics Olympiads. Students taking the exam are generally in their penultimate year of school and selected by teachers as being high performing in physics. Together with the overall differences in facility, we have investigated how the content and presentation of multiple-choice questions (MCQs) affects the particular answers selected by male and female students. Differences in the patterns of responses by male and female students indicate that males and females might be modeling situations in different ways. Some strong patterns were found in the gender gaps when the questions were categorized in five broad dimensions: content, process required, difficulty, presentation, and context. Almost all questions saw male students performing better, although gender differences were relatively small for questions with a more abstract context. Male students performed significantly better on most questions with a concrete context, although notable exceptions were found, including two such questions where female students performed better. Other categories that showed consistently large gaps favoring male students include questions with projectile motion and other two-dimensional motion or forces content, and processes involving interpreting diagrams. Our results have important implications, suggesting that we should be able to reduce the gender gaps in performance on MCQ tests by changing the way information is presented and setting questions in contexts that are less likely to favor males over females. This is important as MCQ tests are

  19. Test plan

    SciTech Connect

    Dwyer, Stephen F.

    2013-05-01

    This test plan provides a systematic approach to the planned testing of rooftop structures to determine their actual load-carrying capacity. It identifies the typical tests to be performed, the parties responsible for testing, the general features of the tests, the testing approach, test deliverables, the testing schedule, monitoring requirements, and environmental and safety compliance.

  20. Pinworm test

    MedlinePlus

    Oxyuriasis test; Enterobiasis test; Tape test ... diagnose this infection is to do a tape test. The best time to do this is in ... lay their eggs at night. Steps for the test are: Firmly press the sticky side of a ...

  1. Thyroid Tests

    MedlinePlus

    ... calories and how fast your heart beats. Thyroid tests check how well your thyroid is working. They ... thyroid diseases such as hyperthyroidism and hypothyroidism. Thyroid tests include blood tests and imaging tests. Blood tests ...

  2. Susceptibility Testing

    MedlinePlus

    ... Also known as: Sensitivity Testing; Drug Resistance Testing; Culture and Sensitivity; C & S; Antimicrobial Susceptibility Formal name: Bacterial and Fungal Susceptibility Testing Related tests: Urine Culture ; Blood Culture ; Bacterial Wound Culture ; AFB Testing ; MRSA ; ...

  3. Identification and testing of oviposition attractant chemical compounds for Musca domestica

    PubMed Central

    Tang, Rui; Zhang, Feng; Kone, N’Golopé; Chen, Jing-Hua; Zhu, Fen; Han, Ri-Chou; Lei, Chao-Liang; Kenis, Marc; Huang, Ling-Qiao; Wang, Chen-Zhu

    2016-01-01

    Oviposition attractants for the house fly Musca domestica have been investigated using electrophysiological tests, behavioural assays and field tests. Volatiles were collected via head space absorption method from fermented wheat bran, fresh wheat bran, rearing substrate residue and house fly maggots. A Y-tube olfactometer assay showed that the odor of fermented wheat bran was a significant attractant for female house flies. Bioactive compounds from fermented wheat bran for house fly females were identified by electrophysiology and mass spectrophotometry and confirmed with standard chemicals. Four electrophysiologically active compounds including ethyl palmitate, ethyl linoleate, methyl linoleate, and linoleic acid were found at a proportion of 10:24:6:0.2. Functional imaging in the female antennal lobes revealed an overlapped active pattern for all chemicals. Further multiple-choice behavioural bioassays showed that these chemicals, as well as a mixture that mimicked the naturally occurring combination, increased the attractiveness of non-preferred rearing substrates of cotton and maize powder. Finally, a field demonstration test revealed that, by adding this mimic blend into a rearing substrate used to attract and breed house flies in West Africa, egg numbers laid by females were increased. These chemicals could be utilized to improve house fly production systems or considered for lure traps. PMID:27667397

  4. Identification and testing of oviposition attractant chemical compounds for Musca domestica.

    PubMed

    Tang, Rui; Zhang, Feng; Kone, N'Golopé; Chen, Jing-Hua; Zhu, Fen; Han, Ri-Chou; Lei, Chao-Liang; Kenis, Marc; Huang, Ling-Qiao; Wang, Chen-Zhu

    2016-09-26

    Oviposition attractants for the house fly Musca domestica have been investigated using electrophysiological tests, behavioural assays and field tests. Volatiles were collected via head space absorption method from fermented wheat bran, fresh wheat bran, rearing substrate residue and house fly maggots. A Y-tube olfactometer assay showed that the odor of fermented wheat bran was a significant attractant for female house flies. Bioactive compounds from fermented wheat bran for house fly females were identified by electrophysiology and mass spectrophotometry and confirmed with standard chemicals. Four electrophysiologically active compounds including ethyl palmitate, ethyl linoleate, methyl linoleate, and linoleic acid were found at a proportion of 10:24:6:0.2. Functional imaging in the female antennal lobes revealed an overlapped active pattern for all chemicals. Further multiple-choice behavioural bioassays showed that these chemicals, as well as a mixture that mimicked the naturally occurring combination, increased the attractiveness of non-preferred rearing substrates of cotton and maize powder. Finally, a field demonstration test revealed that, by adding this mimic blend into a rearing substrate used to attract and breed house flies in West Africa, egg numbers laid by females were increased. These chemicals could be utilized to improve house fly production systems or considered for lure traps.

  5. Have the Answers to Common Legal Questions Concerning Nutrition Support Changed Over the Past Decade? 10 Questions for 10 Years.

    PubMed

    Barrocas, Albert; Cohen, Michael L

    2016-06-01

    Clinical nutrition specialists (CNSs) are often confronted with technological, ethical, and legal questions, that is, what can be done technologically, what should be done ethically, and what must be done legally, which conflict at times. The conflict represents a "troubling trichotomy" as discussed in the lead article of this issue of Nutrition in Clinical Practice (NCP). During Clinical Nutrition Week in 2006, a symposium covering these 3 topics was presented, and later that year, an article covering the same topic was published in NCP. In this article, we revisit several legal questions/issues that were raised 10 years ago and discuss current answers and approaches. Some of the answers remain unchanged. Other answers have been modified by additional legislation, court decisions, or regulations. In addition, new questions/issues have arisen. Some of the most common questions regarding nutrition support involve the following: liability, informed consent, medical decisional incapacity vs legal competence, advance directive specificity, surrogate decision making, physician orders for life-sustaining treatment and electronic medical orders for life-sustaining treatment, legal definition of death, patient vs family decision making, the noncompliant patient, and elder abuse obligations. In the current healthcare environment, these questions and issues are best addressed via a transdisciplinary team that focuses on function rather than form. The CNS can play a pivotal role in dealing with these challenges by applying the acronym ACT: being Accountable and Communicating with all stakeholders while actively participating as an integral part of the transdisciplinary Team.

  6. The role of the interventional cardiologist in selecting antiplatelet agents in acute coronary syndromes: a 10-question strategy

    PubMed Central

    Meneveau, Nicolas

    2012-01-01

    Antiplatelet agents play a major role in the management of patients with acute coronary syndromes (ACS). In recent years, the most important development has been the advent of new inhibitors of adenosine 5′-diphosphate (ADP) P2Y12 receptor inhibitors, namely prasugrel and ticagrelor. The arrival of these new drugs on the market, with their specific indications and combinations with aspirin, glycoprotein IIb/IIIa inhibitors, and anticoagulants, has rendered the therapeutic arena more complex. Achieving the best combination of all these drugs for each patient requires sound knowledge of the indications of each molecule according to the clinical situation, as well as evaluation of the ischaemic and haemorrhagic risks. In practical terms, the interventional cardiologist holds the key to therapeutic decisions, based on the anatomical information obtained in the cathlab. He/she should be able to recommend an appropriate antiplatelet treatment strategy even before the patient arrives in the cathlab, or alternatively, adapt or modify treatment according to the possibilities for revascularization, and advise on long-term therapy. In this report, we describe, in ten questions, the key elements that the interventional cardiologists should be ready to answer before choosing the appropriate antiplatelet regimen, based on recent guidelines, and covering the whole spectrum of management from pre-hospital, to the cathlab, and after invasive procedures. PMID:24062905

  7. The role of the interventional cardiologist in selecting antiplatelet agents in acute coronary syndromes: a 10-question strategy.

    PubMed

    Schiele, Francois; Meneveau, Nicolas

    2012-06-01

    Antiplatelet agents play a major role in the management of patients with acute coronary syndromes (ACS). In recent years, the most important development has been the advent of new inhibitors of adenosine 5'-diphosphate (ADP) P2Y12 receptor inhibitors, namely prasugrel and ticagrelor. The arrival of these new drugs on the market, with their specific indications and combinations with aspirin, glycoprotein IIb/IIIa inhibitors, and anticoagulants, has rendered the therapeutic arena more complex. Achieving the best combination of all these drugs for each patient requires sound knowledge of the indications of each molecule according to the clinical situation, as well as evaluation of the ischaemic and haemorrhagic risks. In practical terms, the interventional cardiologist holds the key to therapeutic decisions, based on the anatomical information obtained in the cathlab. He/she should be able to recommend an appropriate antiplatelet treatment strategy even before the patient arrives in the cathlab, or alternatively, adapt or modify treatment according to the possibilities for revascularization, and advise on long-term therapy. In this report, we describe, in ten questions, the key elements that the interventional cardiologists should be ready to answer before choosing the appropriate antiplatelet regimen, based on recent guidelines, and covering the whole spectrum of management from pre-hospital, to the cathlab, and after invasive procedures.

  8. Pharmacogenomic Testing

    MedlinePlus

  9. Predictive Testing

    MedlinePlus

  10. Effectiveness of web-based teaching modules: test-enhanced learning in dental education.

    PubMed

    Jackson, Tate H; Hannum, Wallace H; Koroluk, Lorne; Proffit, William R

    2011-06-01

    The purpose of our study was to evaluate the effectiveness of self-tests as a component of web-based self-instruction in predoctoral orthodontics and pediatric dentistry. To this end, the usage patterns of online teaching modules and self-tests by students enrolled in three courses at the University of North Carolina at Chapel Hill School of Dentistry were monitored and correlated to final exam grade and course average. We recorded the frequency of access to thirty relevant teaching modules and twenty-nine relevant self-tests for 157 second- and third-year D.D.S. students during the course of our data collection. There was a statistically significant positive correlation between frequency of accessing self-tests and course performance in one course that was totally based on self-instruction with seminars and multiple-choice examination (Level IV): Spearman correlation between frequency of self-test access and final exam grade, rho=0.23, p=0.044; correlation between frequency of self-test access and course average: rho=0.39, p=0.0004. In the other two courses we monitored, which included content beyond self-instruction with self-tests, the correlations were positive but not statistically significant. The students' use of online learning resources varied significantly from one course (Level I) to the next (Level II): Wilcoxon matched pairs signed-rank tests, S=-515.5, p=.0057 and S=1086, p<0.0001. The data from this study suggest that increased use of web-based self-tests may be correlated with more effective learning in predoctoral dental education by virtue of the testing effect and that dental students' usage of resources for learning changes significantly over the course of their education.

  11. Coronary heart disease knowledge test: developing a valid and reliable tool.

    PubMed

    Smith, M M; Hicks, V L; Heyward, V H

    1991-04-01

    This study tested the validity and reliability of a written test designed to assess knowledge of coronary heart disease (CHD) and its risk factors. The subjects were 93 males diagnosed with CHD. Subjects were classified into a treatment group (n = 48) or a control group (n = 45) based on whether or not they participated in a cardiac rehabilitation program (CRP). An additional 38 subjects were used to pilot test the original form of the knowledge test, which consisted of 80 multiple-choice questions. Content validity was established by a five-member jury of cardiac rehabilitation experts. Each question was rated using a Likert-type scale. Questions that did not receive an average rating of at least four were eliminated. The revised form was pilot tested for validity and internal consistency with the discrimination index (point biserial correlation coefficient) and the Kuder-Richardson formula 20 (KR-20). Questions with a discrimination index of less than 0.14 were eliminated; thus, the final form of the test consisted of 40 questions. Validation of this test yielded difficulty ratings (DRs) between 0 percent and 98 percent, with an average DR of 63 percent. Construct validation indicated that the average test score of subjects participating in a CRP was significantly higher than that of non-participants (t = 3.51, df = 91, p less than or equal to 0.01). The internal-consistency reliability of the test was 0.84. The results indicate that this test is a valid and reliable tool for assessing patients' knowledge of CHD and its risk factors.
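
    The discrimination index used for item screening above is the point-biserial correlation between each dichotomous item score and the examinees' total test score, with items below 0.14 dropped. A brief sketch of that screening step follows; the response data are simulated, not the study's.

      # Item screening by point-biserial discrimination index; simulated 0/1 data only.
      import numpy as np

      def point_biserial(item: np.ndarray, total: np.ndarray) -> float:
          """Pearson correlation between a 0/1 item vector and examinees' total scores."""
          return float(np.corrcoef(item, total)[0, 1])

      rng = np.random.default_rng(1)
      scores = (rng.random((80, 40)) > 0.37).astype(int)   # 80 examinees, 40 items
      totals = scores.sum(axis=1)
      flagged = [j for j in range(scores.shape[1])
                 if point_biserial(scores[:, j], totals) < 0.14]
      print("Items below the 0.14 cut-off:", flagged)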

  12. Pregnancy Tests

    MedlinePlus

    ... Pregnancy test fact ... If you think you may be pregnant , ...

  13. VDRL test

    MedlinePlus

    ... The VDRL test is a screening test for syphilis. It measures substances (proteins), called antibodies, that your ... come in contact with the bacteria that cause syphilis. How the Test is Performed The test is ...

  14. Coombs test

    MedlinePlus

    Direct antiglobulin test; Indirect antiglobulin test; Anemia - hemolytic ... No special preparation is necessary for this test. ... There are 2 types of the Coombs test: Direct Indirect The direct ... that are stuck to the surface of red blood cells. Many diseases ...

  15. Ham test

    MedlinePlus

    Acid hemolysin test; Paroxysmal nocturnal hemoglobinuria - Ham test; PNH - Ham test ... BJ. In: Chernecky CC, Berger BJ, eds. Laboratory Tests and Diagnostic Procedures. 6th ed. Philadelphia, PA: Elsevier ...

  16. Trichomonas Testing

    MedlinePlus

    ... Trichomonas vaginalis by Amplified Detection; Trichomonas vaginalis by Direct Fluorescent Antibody (DFA) Related tests: Pap Test , Chlamydia ... by one of the following methods: Molecular testing, direct DNA probes, or nucleic acid amplification tests (NAATs)— ...

  17. Formulation of Multiple Choice Questions as a Revision Exercise at the End of a Teaching Module in Biochemistry

    ERIC Educational Resources Information Center

    Bobby, Zachariah; Radhika, M. R.; Nandeesha, H.; Balasubramanian, A.; Prerna, Singh; Archana, Nimesh; Thippeswamy, D. N.

    2012-01-01

    Graduate medical students often have little opportunity to clarify their doubts and reinforce their concepts after lecture classes. This study assessed the effect of MCQ preparation by graduate medical students as a revision exercise on the topic "Mineral metabolism." At the end of the regular teaching module on the topic "Mineral metabolism,"…

  18. Using Ordered Multiple-Choice Items to Assess Students' Understanding of the Structure and Composition of Matter

    ERIC Educational Resources Information Center

    Hadenfeldt, Jan C.; Bernholt, Sascha; Liu, Xiufeng; Neumann, Knut; Parchmann, Ilka

    2013-01-01

    Helping students develop a sound understanding of scientific concepts can be a major challenge. Lately, learning progressions have received increasing attention as a means to support students in developing understanding of core scientific concepts. At the center of a learning progression is a sequence of developmental levels reflecting an…

  19. The Validity of Multiple Choice Practical Examinations as an Alternative to Traditional Free Response Examination Formats in Gross Anatomy

    ERIC Educational Resources Information Center

    Shaibah, Hassan Sami; van der Vleuten, Cees P. M.

    2013-01-01

    Traditionally, an anatomy practical examination is conducted using a free response format (FRF). However, this format is resource-intensive, as it requires a relatively large time investment from anatomy course faculty in preparation and grading. Thus, several interventions have been reported where the response format was changed to a selected…

  20. The Impact of 3-Option Responses to Multiple-Choice Questions on Guessing Strategies and Cut Score Determinations

    PubMed Central

    ROYAL, KENNETH D.; STOCKDALE, MYRAH R.

    2017-01-01

    Introduction: Research has asserted that MCQ items using three response options (one correct answer with two distractors) are comparable to, and possibly preferable over, traditional MCQ item formats consisting of four response options (e.g., one correct answer with three distractors) or five response options (e.g., one correct answer with four distractors). Some medical educators have also adopted 3-option responses on MCQ exams in response to the difficulty of generating additional plausible distractors. To date, however, little work has explored how 3-option responses might affect validity threats stemming from random guessing strategies, or what impact they might have on cut-score determinations, particularly in the context of medical education classroom assessments. The purpose of this work is to explore these critically important considerations, which have largely gone ignored in the medical education literature to this point. Methods: A cumulative binomial distribution formula was used to calculate the probability that an examinee answering at random will get a given number of items correct on an exam of any length. By way of demonstration, a variety of scenarios were presented to illustrate how examination length and the number of response options affect examinees' chances of passing a given examination, and how subsequent cut-score decisions may be affected by these factors. Results: As a general rule, classroom assessments containing fewer items should use traditional 4-option or 5-option responses, whereas longer assessments have greater flexibility to use 3-option responses. Conclusions: More research on items with 3-option responses is needed to better understand what value, if any, 3-option responses add to classroom assessments, and in what contexts potential benefits might be discernible. PMID:28367465
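    The cumulative binomial calculation described in the Methods is straightforward to reproduce. The sketch below (an illustration, not the paper's code; the 70% cut score and exam lengths are arbitrary examples) computes the probability of reaching a cut score purely by random guessing for 3-, 4-, and 5-option items.

    ```python
    from scipy.stats import binom

    def prob_pass_by_guessing(n_items: int, n_options: int, cut_score: int) -> float:
        """P(at least cut_score correct) when every item is answered at random."""
        p_correct = 1.0 / n_options
        # P(X >= cut_score) = 1 - P(X <= cut_score - 1)
        return 1.0 - binom.cdf(cut_score - 1, n_items, p_correct)

    # Example: a 70% cut score on exams of different lengths and option counts.
    for n_items in (10, 25, 50):
        cut = int(round(0.7 * n_items))
        for n_options in (3, 4, 5):
            p = prob_pass_by_guessing(n_items, n_options, cut)
            print(f"{n_items:3d} items, {n_options}-option MCQs, cut {cut}: P = {p:.6f}")
    ```

    Runs of this kind make the paper's general rule concrete: on short exams, 3-option items leave a noticeably higher chance of passing by guessing than 4- or 5-option items, while on longer exams the difference becomes negligible.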