Sample records for computational learning theory

  1. Topics in Computational Learning Theory and Graph Algorithms.

    ERIC Educational Resources Information Center

    Board, Raymond Acton

    This thesis addresses problems from two areas of theoretical computer science. The first area is that of computational learning theory, which is the study of the phenomenon of concept learning using formal mathematical models. The goal of computational learning theory is to investigate learning in a rigorous manner through the use of techniques…
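
    The abstract above is truncated in this record. As general background rather than material from the thesis, a representative example of the kind of formal guarantee computational learning theory produces is the standard PAC sample-complexity bound for a finite hypothesis class H: with probability at least 1 - δ, any hypothesis consistent with m labelled examples has true error at most ε provided

        m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right).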

  2. Computer-based teaching module design: principles derived from learning theories.

    PubMed

    Lau, K H Vincent

    2014-03-01

    The computer-based teaching module (CBTM), which has recently gained prominence in medical education, is a teaching format in which a multimedia program serves as a single source for knowledge acquisition rather than playing an adjunctive role as it does in computer-assisted learning (CAL). Despite empirical validation in the past decade, there is limited research into the optimisation of CBTM design. This review aims to summarise research in classic and modern multimedia-specific learning theories applied to computer learning, and to collapse the findings into a set of design principles to guide the development of CBTMs. Scopus was searched for: (i) studies of classic cognitivism, constructivism and behaviourism theories (search terms: 'cognitive theory' OR 'constructivism theory' OR 'behaviourism theory' AND 'e-learning' OR 'web-based learning') and their sub-theories applied to computer learning, and (ii) recent studies of modern learning theories applied to computer learning (search terms: 'learning theory' AND 'e-learning' OR 'web-based learning') for articles published between 1990 and 2012. The first search identified 29 studies, dominated in topic by the cognitive load, elaboration and scaffolding theories. The second search identified 139 studies, with diverse topics in connectivism, discovery and technical scaffolding. Based on their relative representation in the literature, the applications of these theories were collapsed into a list of CBTM design principles. Ten principles were identified and categorised into three levels of design: the global level (managing objectives, framing, minimising technical load); the rhetoric level (optimising modality, making modality explicit, scaffolding, elaboration, spaced repeating), and the detail level (managing text, managing devices). This review examined the literature in the application of learning theories to CAL to develop a set of principles that guide CBTM design. Further research will enable educators to take advantage of this unique teaching format as it gains increasing importance in medical education. © 2014 John Wiley & Sons Ltd.

  3. Optimizing Computer Assisted Instruction By Applying Principles of Learning Theory.

    ERIC Educational Resources Information Center

    Edwards, Thomas O.

    The development of learning theory and its application to computer-assisted instruction (CAI) are described. Among the early theoretical constructs thought to be important are E. L. Thorndike's concept of learning connections, Neal Miller's theory of motivation, and B. F. Skinner's theory of operant conditioning. Early devices incorporating those…

  4. The Effects of Embedded Generative Learning Strategies and Collaboration on Knowledge Acquisition in a Cognitive Flexibility-Based Computer Learning Environment

    DTIC Science & Technology

    1998-08-07

    cognitive flexibility theory and generative learning theory, which focus primarily on the individual student’s cognitive development, collaborative… develop "Handling Transfusion Hazards," a computer program based upon cognitive flexibility theory principles. The Program: Handling Transfusion Hazards… computer program was developed according to cognitive flexibility theory principles. A generative version was then developed by embedding…

  5. Situated Learning in Computer Science Education

    ERIC Educational Resources Information Center

    Ben-Ari, Mordechai

    2004-01-01

    Sociocultural theories of learning such as Wenger and Lave's situated learning have been suggested as alternatives to cognitive theories of learning like constructivism. This article examines situated learning within the context of computer science (CS) education. Situated learning accurately describes some CS communities like open-source software…

  6. Metacognitive Load--Useful, or Extraneous Concept? Metacognitive and Self-Regulatory Demands in Computer-Based Learning

    ERIC Educational Resources Information Center

    Schwonke, Rolf

    2015-01-01

    Instructional design theories such as the "cognitive load theory" (CLT) or the "cognitive theory of multimedia learning" (CTML) explain learning difficulties in (computer-based) learning usually as a result of design deficiencies that hinder effective schema construction. However, learners often struggle even in well-designed…

  7. Realizing the Promise of Visualization in the Theory of Computing

    ERIC Educational Resources Information Center

    Cogliati, Joshua J.; Goosey, Frances W.; Grinder, Michael T.; Pascoe, Bradley A.; Ross, Rockford J.; Williams, Cheston J.

    2005-01-01

    Progress on a hypertextbook on the theory of computing is presented. The hypertextbook is a novel teaching and learning resource built around web technologies that incorporates text, sound, pictures, illustrations, slide shows, video clips, and--most importantly--active learning models of the key concepts of the theory of computing into an…

  8. Aids to Computer-Based Multimedia Learning.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Moreno, Roxana

    2002-01-01

    Presents a cognitive theory of multimedia learning that draws on dual coding theory, cognitive load theory, and constructivist learning theory and derives some principles of instructional design for fostering multimedia learning. These include principles of multiple representation, contiguity, coherence, modality, and redundancy. (SLD)

  9. Reconstructing constructivism: causal models, Bayesian learning mechanisms, and the theory theory.

    PubMed

    Gopnik, Alison; Wellman, Henry M

    2012-11-01

    We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and nontechnical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and the psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists.
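
    As a rough illustration of the Bayesian learning mechanism described here (a minimal sketch, not taken from the paper; the two hypotheses, the prior, and the likelihoods are invented placeholders), the following Python snippet updates a posterior over two candidate causal structures from a handful of observations:

      # Minimal sketch of Bayesian updating over two causal hypotheses.
      # Hypothesis A: pressing the button causes the light (P(light | press) = 0.9).
      # Hypothesis B: press and light are unrelated (P(light | press) = 0.1).
      # All numbers are illustrative assumptions, not values from the article.
      priors = {"A: press -> light": 0.5, "B: independent": 0.5}
      likelihood = {"A: press -> light": 0.9, "B: independent": 0.1}

      observations = [True, True, False, True, True]  # did the light come on after each press?

      posterior = dict(priors)
      for light in observations:
          # Bayes rule: weight each hypothesis by the likelihood of this observation...
          for h in posterior:
              posterior[h] *= likelihood[h] if light else (1.0 - likelihood[h])
          # ...then renormalise so the posterior sums to 1.
          total = sum(posterior.values())
          posterior = {h: v / total for h, v in posterior.items()}

      print(posterior)  # most of the probability mass ends up on hypothesis A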

  10. Toward a Script Theory of Guidance in Computer-Supported Collaborative Learning

    ERIC Educational Resources Information Center

    Fischer, Frank; Kollar, Ingo; Stegmann, Karsten; Wecker, Christof

    2013-01-01

    This article presents an outline of a script theory of guidance for computer-supported collaborative learning (CSCL). With its 4 types of components of internal and external scripts (play, scene, role, and scriptlet) and 7 principles, this theory addresses the question of how CSCL practices are shaped by dynamically reconfigured internal…

  11. Learning Style Theory and Computer Mediated Communication.

    ERIC Educational Resources Information Center

    Atkins, Hilary; Moore, David; Sharpe, Simon; Hobbs, Dave

    This paper looks at the low participation rates in computer mediated conferences (CMC) and argues that one of the causes of this may be an incompatibility between students' learning styles and the style adopted by CMC. Curry's Onion Model provides a well-established framework within which to view the main learning style theories (Riding and…

  12. Computers and Cultural Diversity. Restructuring for School Success. SUNY Series, Computers in Education.

    ERIC Educational Resources Information Center

    DeVillar, Robert A.; Faltis, Christian J.

    This book offers an alternative conceptual framework for effectively incorporating computer use within the heterogeneous classroom. The framework integrates Vygotskian social-learning theory with Allport's contact theory and the principles of cooperative learning. In Part 1 an essential element is identified for each of these areas. These are, in…

  13. Computational models and motor learning paradigms: Could they provide insights for neuroplasticity after stroke? An overview.

    PubMed

    Kiper, Pawel; Szczudlik, Andrzej; Venneri, Annalena; Stozek, Joanna; Luque-Moreno, Carlos; Opara, Jozef; Baba, Alfonc; Agostini, Michela; Turolla, Andrea

    2016-10-15

    Computational approaches to modelling the central nervous system (CNS) aim to develop theories of the brain processes that allow the transformation of all the information needed for the execution of motor acts. Computational models have been proposed in several fields to interpret not only CNS functioning but also its efferent behaviour. Computational model theories can provide insights into neuromuscular and brain function, allowing us to reach a deeper understanding of neuroplasticity. Neuroplasticity is the process by which the CNS permanently changes both its structure and its function through interaction with the external environment. To understand such a complex process, several paradigms related to motor learning and computational modelling have been put forward. These paradigms have been explained through several internal model concepts and supported by neurophysiological and neuroimaging studies. It has therefore been possible to formulate theories about the basis of different learning paradigms according to known computational models. Here we review the computational models and motor learning paradigms used to describe CNS and neuromuscular function, as well as their role in the recovery process. These theories may provide a way to rigorously account for the full potential of CNS learning, providing a basis for future clinical studies. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Designing Multimedia Learning Application with Learning Theories: A Case Study on a Computer Science Subject with 2-D and 3-D Animated Versions

    ERIC Educational Resources Information Center

    Rias, Riaza Mohd; Zaman, Halimah Badioze

    2011-01-01

    Instruction in higher education may in most cases be concerned primarily with the content of academic lessons and not very much with instructional delivery. However, the effective application of learning theories and technology in higher education has an impact on student performance. With the rapid progress in the computer and…

  15. Reconstructing constructivism: Causal models, Bayesian learning mechanisms and the theory theory

    PubMed Central

    Gopnik, Alison; Wellman, Henry M.

    2012-01-01

    We propose a new version of the “theory theory” grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and non-technical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists. PMID:22582739

  16. The Effect of Computer Game-Based Learning on FL Vocabulary Transferability

    ERIC Educational Resources Information Center

    Franciosi, Stephan J.

    2017-01-01

    In theory, computer game-based learning can support several vocabulary learning affordances that have been identified in the foreign language learning research. In the observable evidence, learning with computer games has been shown to improve performance on vocabulary recall tests. However, while simple recall can be a sign of learning,…

  17. Causal Learning with Local Computations

    ERIC Educational Resources Information Center

    Fernbach, Philip M.; Sloman, Steven A.

    2009-01-01

    The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require…

  18. The Psychology of Mathematics Learning: Past and Present.

    ERIC Educational Resources Information Center

    Education and Urban Society, 1985

    1985-01-01

    Reviews trends in applying psychology to mathematics learning. Discusses the influence of behaviorism and other functionalist theories, Gestalt theory, Piagetian theory, and the "new functionalism" evident in computer-oriented theories of information processing. (GC)

  19. Cognitive Theory of Multimedia Learning, Instructional Design Principles, and Students with Learning Disabilities in Computer-Based and Online Learning Environments

    ERIC Educational Resources Information Center

    Greer, Diana L.; Crutchfield, Stephen A.; Woods, Kari L.

    2013-01-01

    Struggling learners and students with Learning Disabilities often exhibit unique cognitive processing and working memory characteristics that may not align with instructional design principles developed with typically developing learners. This paper explains the Cognitive Theory of Multimedia Learning and underlying Cognitive Load Theory, and…

  20. Compute-to-Learn: Authentic Learning via Development of Interactive Computer Demonstrations within a Peer-Led Studio Environment

    ERIC Educational Resources Information Center

    Jafari, Mina; Welden, Alicia Rae; Williams, Kyle L.; Winograd, Blair; Mulvihill, Ellen; Hendrickson, Heidi P.; Lenard, Michael; Gottfried, Amy; Geva, Eitan

    2017-01-01

    In this paper, we report on the implementation of a novel compute-to-learn pedagogy, which is based upon the theories of situated cognition and meaningful learning. The "compute-to-learn" pedagogy is designed to simulate an authentic research experience as part of the undergraduate curriculum, including project development, teamwork,…

  1. Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory Theory

    ERIC Educational Resources Information Center

    Gopnik, Alison; Wellman, Henry M.

    2012-01-01

    We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework…

  2. An Evaluation of Neurogames®: A Collection of Computer Games Designed to Improve Literacy and Numeracy

    ERIC Educational Resources Information Center

    Khan, Misbah Mahmood; Reed, Jonathan

    2011-01-01

    Games Based Learning needs to be linked to good learning theory to become an important educational intervention. This study examines the effectiveness of a collection of computer games called Neurogames®. Neurogames are a group of computer games aimed at improving reading and basic maths and are designed using neuropsychological theory. The…

  3. From Years of Work in Psychology and Computer Science, Scientists Build Theories of Thinking and Learning.

    ERIC Educational Resources Information Center

    Wheeler, David L.

    1988-01-01

    Scientists feel that progress in artificial intelligence and the availability of thousands of experimental results make this the right time to build and test theories on how people think and learn, using the computer to model minds. (MSE)

  4. Learning Theories Applied to Teaching Technology: Constructivism versus Behavioral Theory for Instructing Multimedia Software Programs

    ERIC Educational Resources Information Center

    Reed, Cajah S.

    2012-01-01

    This study sought to find evidence for a beneficial learning theory to teach computer software programs. Additionally, software was analyzed for each learning theory's applicability to resolve whether certain software requires a specific method of education. The results are meant to give educators more effective teaching tools, so students…

  5. Implicational Markedness and Frequency in Constraint-Based Computational Models of Phonological Learning

    ERIC Educational Resources Information Center

    Jarosz, Gaja

    2010-01-01

    This study examines the interacting roles of implicational markedness and frequency from the joint perspectives of formal linguistic theory, phonological acquisition and computational modeling. The hypothesis that child grammars are rankings of universal constraints, as in Optimality Theory (Prince & Smolensky, 1993/2004), that learning involves a…

  6. The Effect of Interactivity and Instructional Exposure on Learning Effectiveness and Knowledge Retention: A Comparative Study of Two U.S. Air Force Computer-Based Training (CBT) Courses for Network User Licensing

    DTIC Science & Technology

    2003-03-01

    sociocultural theory of learning was pioneered by Lev Vygotsky in the early twentieth-century Soviet Union. Although his works were not published… Overview… Learning Theories and Teaching Strategies… Learning Theories and CBT…

  7. Too Good to be True? Ideomotor Theory from a Computational Perspective

    PubMed Central

    Herbort, Oliver; Butz, Martin V.

    2012-01-01

    In recent years, Ideomotor Theory has regained widespread attention and sparked the development of a number of theories on goal-directed behavior and learning. However, there are two issues with previous studies’ use of Ideomotor Theory. Although Ideomotor Theory is seen as very general, it is often studied in settings that are considerably more simplistic than most natural situations. Moreover, Ideomotor Theory’s claim that effect anticipations directly trigger actions and that action-effect learning is based on the formation of direct action-effect associations is hard to address empirically. We address these points from a computational perspective. A simple computational model of Ideomotor Theory was tested in tasks with different degrees of complexity. The model evaluation showed that Ideomotor Theory is a computationally feasible approach for understanding efficient action-effect learning for goal-directed behavior if the following preconditions are met: (1) The range of potential actions and effects has to be restricted. (2) Effects have to follow actions within a short time window. (3) Actions have to be simple and may not require sequencing. The first two preconditions also limit human performance and thus support Ideomotor Theory. The last precondition can be circumvented by extending the model with more complex, indirect action generation processes. In conclusion, we suggest that Ideomotor Theory offers a comprehensive framework to understand action-effect learning. However, we also suggest that additional processes may mediate the conversion of effect anticipations into actions in many situations. PMID:23162524
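
    The core mechanism evaluated by the model can be sketched in a few lines (an illustrative toy, not the authors' implementation; the actions, effects, and learning rate are assumptions): action-effect associations are strengthened when an effect follows an action, and a desired effect later triggers the action most strongly associated with it.

      import random

      # Toy ideomotor sketch: a table of direct action-effect association strengths.
      actions = ["press_left", "press_right"]
      effects = ["tone_low", "tone_high"]
      assoc = {(a, e): 0.0 for a in actions for e in effects}

      def environment(action):
          # Assumed world: the left key produces the low tone, the right key the high tone.
          return "tone_low" if action == "press_left" else "tone_high"

      # Learning phase: random exploration; each effect that follows an action
      # strengthens the corresponding action-effect association.
      learning_rate = 0.2
      for _ in range(50):
          a = random.choice(actions)
          e = environment(a)
          assoc[(a, e)] += learning_rate * (1.0 - assoc[(a, e)])

      # Selection phase: anticipating a desired effect triggers the action most
      # strongly associated with it (the ideomotor principle).
      def choose_action(desired_effect):
          return max(actions, key=lambda a: assoc[(a, desired_effect)])

      print(choose_action("tone_high"))  # "press_right"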

  8. The computational neurobiology of learning and reward.

    PubMed

    Daw, Nathaniel D; Doya, Kenji

    2006-04-01

    Following the suggestion that midbrain dopaminergic neurons encode a signal, known as a 'reward prediction error', used by artificial intelligence algorithms for learning to choose advantageous actions, the study of the neural substrates for reward-based learning has been strongly influenced by computational theories. In recent work, such theories have been increasingly integrated into experimental design and analysis. Such hybrid approaches have offered detailed new insights into the function of a number of brain areas, especially the cortex and basal ganglia. In part this is because these approaches enable the study of neural correlates of subjective factors (such as a participant's beliefs about the reward to be received for performing some action) that the computational theories purport to quantify.
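
    The 'reward prediction error' referred to here is usually formalised as a temporal-difference error, delta = r + gamma * V(next state) - V(current state). A minimal sketch (illustrative states and parameter values, not taken from the article):

      # Minimal temporal-difference (TD) learning sketch of the reward prediction error.
      values = {"cue": 0.0, "outcome": 0.0, "end": 0.0}  # cached state-value estimates
      alpha, gamma = 0.1, 0.95                           # learning rate, discount factor

      def td_update(state, next_state, reward):
          delta = reward + gamma * values[next_state] - values[state]  # prediction error
          values[state] += alpha * delta
          return delta

      # Each trial a cue is followed by a rewarded outcome, then the trial ends.
      # With training, the error at reward delivery shrinks as the cue comes to
      # predict it, mirroring classic dopamine recordings.
      for trial in range(200):
          td_update("cue", "outcome", reward=0.0)
          td_update("outcome", "end", reward=1.0)

      print(values)  # V(outcome) approaches 1.0; V(cue) approaches gamma * 1.0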

  9. A Pilot Study of the Naming Transaction Shell

    DTIC Science & Technology

    1991-06-01

    effective computer-based instructional design. AIDA will take established theories of knowledge, learning, and instruction and incorporate the theories… felt that anyone could learn to use the system both in design and delivery modes. Traditional course development (non-computer instruction) for the… students were studying and learning the material in the text. This often resulted in wasted effort in the simulator. By ensuring that the students knew the…

  10. An introduction to quantum machine learning

    NASA Astrophysics Data System (ADS)

    Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco

    2015-04-01

    Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers investigated if quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.

  11. Towards a Theory-Based Design Framework for an Effective E-Learning Computer Programming Course

    ERIC Educational Resources Information Center

    McGowan, Ian S.

    2016-01-01

    Built on Dabbagh (2005), this paper presents a four component theory-based design framework for an e-learning session in introductory computer programming. The framework, driven by a body of exemplars component, emphasizes the transformative interaction between the knowledge building community (KBC) pedagogical model, a mixed instructional…

  12. Impact of Blended Learning Environments Based on Algo-Heuristic Theory on Some Variables

    ERIC Educational Resources Information Center

    Aygün, Mustafa; Korkmaz, Özgen

    2012-01-01

    In this study, the effects of Algo-Heuristic Theory based blended learning environments on students' computer skills in their preparation of presentations, levels of attitudes towards computers, and levels of motivation regarding the information technology course were investigated. The research sample was composed of 71 students. A semi-empirical…

  13. Online Collaborative Learning: Theory and Practice

    ERIC Educational Resources Information Center

    Roberts, Tim, Ed.

    2004-01-01

    "Online Collaborative Learning: Theory and Practice" provides a resource for researchers and practitioners in the area of online collaborative learning (also known as CSCL, computer-supported collaborative learning), particularly those working within a tertiary education environment. It includes articles of relevance to those interested in both…

  14. Fuzzy logic of Aristotelian forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perlovsky, L.I.

    1996-12-31

    Model-based approaches to pattern recognition and machine vision have been proposed to overcome the exorbitant training requirements of earlier computational paradigms. However, uncertainties in data were found to lead to a combinatorial explosion of the computational complexity. This issue is related here to the roles of a priori knowledge vs. adaptive learning. What is the a priori knowledge representation that supports learning? I introduce Modeling Field Theory (MFT), a model-based neural network whose adaptive learning is based on a priori models. These models combine deterministic, fuzzy, and statistical aspects to account for a priori knowledge, its fuzzy nature, and data uncertainties. In the process of learning, a priori fuzzy concepts converge to crisp or probabilistic concepts. The MFT is a convergent dynamical system of only linear computational complexity. Fuzzy logic turns out to be essential for reducing the combinatorial complexity to a linear one. I will discuss the relationship of the new computational paradigm to two theories due to Aristotle: theory of Forms and logic. While theory of Forms argued that the mind cannot be based on ready-made a priori concepts, Aristotelian logic operated with just such concepts. I discuss an interpretation of MFT suggesting that its fuzzy logic, combining a-priority and adaptivity, implements Aristotelian theory of Forms (theory of mind). Thus, 2300 years after Aristotle, a logic is developed suitable for his theory of mind.

  15. Computer-Supported Collaborative Learning in Higher Education

    ERIC Educational Resources Information Center

    Roberts, Tim, Ed.

    2005-01-01

    "Computer-Supported Collaborative Learning in Higher Education" provides a resource for researchers and practitioners in the area of computer-supported collaborative learning (also known as CSCL); particularly those working within a tertiary education environment. It includes articles of relevance to those interested in both theory and practice in…

  16. Relating Theory and Practice in Laboratory Work: A Variation Theoretical Study

    ERIC Educational Resources Information Center

    Eckerdal, Anna

    2015-01-01

    Computer programming education has practice-oriented as well as theory-oriented learning goals. Here, lab work plays an important role in students' learning. It is however widely reported that many students face great difficulties in learning theory as well as practice. This paper investigates the important but problematic relation between the…

  17. Game Engagement Theory and Adult Learning

    ERIC Educational Resources Information Center

    Whitton, Nicola

    2011-01-01

    One of the benefits of computer game-based learning is the ability of certain types of game to engage and motivate learners. However, theories of learning and engagement, particularly in the sphere of higher education, typically fail to consider gaming engagement theory. In this article, the author examines the principles of engagement from games…

  18. Myths and legends in learning classification rules

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A discussion is presented of machine learning theory on empirically learning classification rules. Six myths are proposed in the machine learning community that address issues of bias, learning as search, computational learning theory, Occam's razor, universal learning algorithms, and interactive learning. Some of the problems raised are also addressed from a Bayesian perspective. Questions are suggested that machine learning researchers should be addressing both theoretically and experimentally.

  19. Assessment of (Computer-Supported) Collaborative Learning

    ERIC Educational Resources Information Center

    Strijbos, J. -W.

    2011-01-01

    Within the (Computer-Supported) Collaborative Learning (CS)CL research community, there has been an extensive dialogue on theories and perspectives on learning from collaboration, approaches to scaffold (script) the collaborative process, and most recently research methodology. In contrast, the issue of assessment of collaborative learning has…

  20. Simulating Serious Games: A Discrete-Time Computational Model Based on Cognitive Flow Theory

    ERIC Educational Resources Information Center

    Westera, Wim

    2018-01-01

    This paper presents a computational model for simulating how people learn from serious games. While avoiding the combinatorial explosion of a game's micro-states, the model offers a meso-level pathfinding approach, which is guided by cognitive flow theory and various concepts from learning sciences. It extends a basic, existing model by exposing…

  1. Introducing Computers to Kindergarten Children Based on Vygotsky's Theory about Socio-Cultural Learning: The Greek Perspective.

    ERIC Educational Resources Information Center

    Pange, Jenny; Kontozisis, Dimitrios

    2001-01-01

    Greek preschoolers' level of knowledge about computers was examined as they participated in a classroom project to introduce them to new technologies. The project was based on Vygotsky's theory of socio-cultural learning. Findings suggest that this approach is a successful way to introduce new technologies to young children. (JPB)

  2. Linking Pedagogical Theory of Computer Games to their Usability

    ERIC Educational Resources Information Center

    Ang, Chee Siang; Avni, Einav; Zaphiris, Panayiotis

    2008-01-01

    This article reviews a range of literature of computer games and learning theories and attempts to establish a link between them by proposing a typology of games which we use as a new usability measure for the development of guidelines for game-based learning. First, we examine game literature in order to understand the key elements that…

  3. Cognitive Tools for Assessment and Learning in a High Information Flow Environment.

    ERIC Educational Resources Information Center

    Lajoie, Susanne P.; Azevedo, Roger; Fleiszer, David M.

    1998-01-01

    Describes the development of a simulation-based intelligent tutoring system for nurses working in a surgical intensive care unit. Highlights include situative learning theories and models of instruction, modeling expertise, complex decision making, linking theories of learning to the design of computer-based learning environments, cognitive task…

  4. A Theoretical Analysis of Learning with Graphics--Implications for Computer Graphics Design.

    ERIC Educational Resources Information Center

    ChanLin, Lih-Juan

    This paper reviews the literature pertinent to learning with graphics. The dual coding theory provides an explanation of how graphics are stored and processed in semantic memory. The level of processing theory suggests how graphics can be employed in learning to encourage deeper processing. In addition to dual coding theory and level of processing…

  5. Perceptions of teaching and learning automata theory in a college-level computer science course

    NASA Astrophysics Data System (ADS)

    Weidmann, Phoebe Kay

    This dissertation identifies and describes student and instructor perceptions that contribute to effective teaching and learning of Automata Theory in a competitive college-level Computer Science program. Effective teaching is the ability to create an appropriate learning environment in order to provide effective learning. We define effective learning as the ability of a student to meet instructor-set learning objectives, demonstrating this by passing the course while reporting a good learning experience. We conducted our investigation through a detailed qualitative case study of two sections (118 students) of Automata Theory (CS 341) at The University of Texas at Austin taught by Dr. Lily Quilt. Because Automata Theory has a largely fixed curriculum (many curricula and textbooks agree on what it contains, differing mainly in the depth and amount of material covered in a single course), a case study allows for generalizable findings. Automata Theory is especially problematic in a Computer Science curriculum since students are not experienced in abstract thinking before taking this course, fail to understand the relevance of the theory, and prefer classes with more concrete activities such as programming. This creates a special challenge for any instructor of Automata Theory, as motivation becomes critical for student learning. Through student surveys, instructor interviews, classroom observation, and analysis of course materials and grades, we sought to understand what students perceived, what instructors expected of students, and how those perceptions played out in the classroom in terms of structure and instruction. Our goal was to create suggestions that would lead to a better-designed course and thus a higher student success rate in Automata Theory. We created a unique theoretical basis, pedagogical positivism, on which to study college-level courses. Pedagogical positivism states that through examining instructor and student perceptions of teaching and learning, improvements to a course are possible. These improvements can eventually develop a "best practice" instructional environment. This view is not possible under a strictly constructivist learning theory, as there is no way to teach a group of individuals in a "best" way. Using this theoretical basis, we examined the gathered data from CS 341. (Abstract shortened by UMI.)

  6. A Drawing and Multi-Representational Computer Environment for Beginners' Learning of Programming Using C: Design and Pilot Formative Evaluation

    ERIC Educational Resources Information Center

    Kordaki, Maria

    2010-01-01

    This paper presents both the design and the pilot formative evaluation study of a computer-based problem-solving environment (named LECGO: Learning Environment for programming using C using Geometrical Objects) for the learning of computer programming using C by beginners. In its design, constructivist and social learning theories were taken into…

  7. Computational Psychiatry and the Challenge of Schizophrenia.

    PubMed

    Krystal, John H; Murray, John D; Chekroud, Adam M; Corlett, Philip R; Yang, Genevieve; Wang, Xiao-Jing; Anticevic, Alan

    2017-05-01

    Schizophrenia research is plagued by enormous challenges in integrating and analyzing large datasets and difficulties developing formal theories related to the etiology, pathophysiology, and treatment of this disorder. Computational psychiatry provides a path to enhance analyses of these large and complex datasets and to promote the development and refinement of formal models for features of this disorder. This presentation introduces the reader to the notion of computational psychiatry and describes discovery-oriented and theory-driven applications to schizophrenia involving machine learning, reinforcement learning theory, and biophysically-informed neural circuit models. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center 2017.

  8. Myths and legends in learning classification rules

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    This paper is a discussion of machine learning theory on empirically learning classification rules. The paper proposes six myths in the machine learning community that address issues of bias, learning as search, computational learning theory, Occam's razor, 'universal' learning algorithms, and interactive learnings. Some of the problems raised are also addressed from a Bayesian perspective. The paper concludes by suggesting questions that machine learning researchers should be addressing both theoretically and experimentally.

  9. Seven Affordances of Computer-Supported Collaborative Learning: How to Support Collaborative Learning? How Can Technologies Help?

    ERIC Educational Resources Information Center

    Jeong, Heisawn; Hmelo-Silver, Cindy E.

    2016-01-01

    This article proposes 7 core affordances of technology for collaborative learning based on theories of collaborative learning and CSCL (Computer-Supported Collaborative Learning) practices. Technology affords learner opportunities to (1) engage in a joint task, (2) communicate, (3) share resources, (4) engage in productive collaborative learning…

  10. Learning Vocabulary in a Foreign Language: A Computer Software Based Model Attempt

    ERIC Educational Resources Information Center

    Yelbay Yilmaz, Yasemin

    2015-01-01

    This study aimed at devising a vocabulary learning software that would help learners learn and retain vocabulary items effectively. Foundation linguistics and learning theories have been adapted to the foreign language vocabulary learning context using a computer software named Parole that was designed exclusively for this study. Experimental…

  11. Cognitive Load Theory vs. Constructivist Approaches: Which Best Leads to Efficient, Deep Learning?

    ERIC Educational Resources Information Center

    Vogel-Walcutt, J. J.; Gebrim, J. B.; Bowers, C.; Carper, T. M.; Nicholson, D.

    2011-01-01

    Computer-assisted learning, in the form of simulation-based training, is heavily focused upon by the military. Because computer-based learning offers highly portable, reusable, and cost-efficient training options, the military has dedicated significant resources to the investigation of instructional strategies that improve learning efficiency…

  12. Do All Roads Lead to Rome? ("or" Reductions for Dummy Travelers)

    ERIC Educational Resources Information Center

    Kilpelainen, Pekka

    2010-01-01

    Reduction is a central ingredient of computational thinking, and an important tool in algorithm design, in computability theory, and in complexity theory. Reduction has been recognized to be a difficult topic for students to learn. Previous studies on teaching reduction have concentrated on its use in special courses on the theory of computing. As…
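
    For reference, the notion of reduction discussed in this entry is standardly formalised as a many-one reduction (textbook material, not specific to the article): A is reducible to B when membership questions about A can be translated into membership questions about B by a computable function,

        A \le_m B \iff \exists\, \text{computable } f \;\; \forall x \; \big( x \in A \Leftrightarrow f(x) \in B \big).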

  13. Social Play at the Computer: Preschoolers Scaffold and Support Peers' Computer Competence.

    ERIC Educational Resources Information Center

    Freeman, Nancy K.; Somerindyke, Jennifer

    2001-01-01

    Describes preschoolers' collaboration during free play in a computer lab, focusing on the computer's contribution to active, peer-mediated learning. Discusses these observations in terms of Parten's insights on children's social play and Vygotsky's socio-cultural learning theory, noting that the children scaffolded each other's growing computer…

  14. Errors and Intelligence in Computer-Assisted Language Learning: Parsers and Pedagogues. Routledge Studies in Computer Assisted Language Learning

    ERIC Educational Resources Information Center

    Heift, Trude; Schulze, Mathias

    2012-01-01

    This book provides the first comprehensive overview of theoretical issues, historical developments and current trends in ICALL (Intelligent Computer-Assisted Language Learning). It assumes a basic familiarity with Second Language Acquisition (SLA) theory and teaching, CALL and linguistics. It is of interest to upper undergraduate and/or graduate…

  15. Integration of Ausubelian Learning Theory and Educational Computing.

    ERIC Educational Resources Information Center

    Heinze-Fry, Jane A.; And Others

    1984-01-01

    Examines possible benefits when Ausubelian learning approaches are integrated into computer-assisted instruction, presenting an example of this integration in a computer program dealing with introductory ecology concepts. The four program parts (tutorial, interactive concept mapping, simulations, and vee-mapping) are described. (JN)

  16. Learning control system design based on 2-D theory - An application to parallel link manipulator

    NASA Technical Reports Server (NTRS)

    Geng, Z.; Carroll, R. L.; Lee, J. D.; Haynes, L. H.

    1990-01-01

    An approach to iterative learning control system design based on two-dimensional system theory is presented. A two-dimensional model for the iterative learning control system which reveals the connections between learning control systems and two-dimensional system theory is established. A learning control algorithm is proposed, and the convergence of learning using this algorithm is guaranteed by two-dimensional stability. The learning algorithm is applied successfully to the trajectory tracking control problem for a parallel link robot manipulator. The excellent performance of this learning algorithm is demonstrated by the computer simulation results.
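
    The iterative learning control idea can be sketched with a generic P-type update rule (an illustration only, not the 2-D-theory design of the abstract; the toy plant, learning gain, and reference trajectory are assumptions):

      import numpy as np

      # Generic P-type iterative learning control sketch: the same finite-duration
      # task is repeated, and each iteration's tracking error corrects the next input.
      T = 50
      reference = np.sin(np.linspace(0.0, np.pi, T))  # desired trajectory (assumed)
      u = np.zeros(T)                                 # control input, refined across iterations
      gain = 0.5                                      # learning gain (assumed)

      def plant(u):
          # Toy first-order plant (assumed dynamics): y[t+1] = 0.8*y[t] + 0.5*u[t]
          y = np.zeros(T)
          for t in range(T - 1):
              y[t + 1] = 0.8 * y[t] + 0.5 * u[t]
          return y

      for iteration in range(30):
          error = reference - plant(u)
          # ILC update law: u_{k+1}(t) = u_k(t) + gain * e_k(t+1)
          u[:-1] += gain * error[1:]

      print(np.max(np.abs(reference - plant(u))))  # tracking error shrinks with iterations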

  17. A Computational Model of How Cholinergic Interneurons Protect Striatal-Dependent Learning

    ERIC Educational Resources Information Center

    Ashby, F. Gregory; Crossley, Matthew J.

    2011-01-01

    An essential component of skill acquisition is learning the environmental conditions in which that skill is relevant. This article proposes and tests a neurobiologically detailed theory of how such learning is mediated. The theory assumes that a key component of this learning is provided by the cholinergic interneurons in the striatum known as…

  18. Effects of Learning Style and Training Method on Computer Attitude and Performance in World Wide Web Page Design Training.

    ERIC Educational Resources Information Center

    Chou, Huey-Wen; Wang, Yu-Fang

    1999-01-01

    Compares the effects of two training methods on computer attitude and performance in a World Wide Web page design program in a field experiment with high school students in Taiwan. Discusses individual differences, Kolb's Experiential Learning Theory and Learning Style Inventory, Computer Attitude Scale, and results of statistical analyses.…

  19. Computers as Cognitive Tools.

    ERIC Educational Resources Information Center

    Lajoie, Susanne P., Ed.; Derry, Sharon J., Ed.

    This book provides exemplars of the types of computer-based learning environments represented by the theoretical camps within the field and the practical applications of the theories. The contributors discuss a variety of computer applications to learning, ranging from school-related topics such as geometry, algebra, biology, history, physics, and…

  20. The theoretical base of e-learning and its role in surgical education.

    PubMed

    Evgeniou, Evgenios; Loizou, Peter

    2012-01-01

    The advances in Internet and computer technology offer many solutions that can enhance surgical education and increase the effectiveness of surgical teaching. E-learning plays an important role in surgical education today, with many e-learning projects already available on the Internet. E-learning is based on a mixture of educational theories that derive from behaviorist, cognitivist, and constructivist educational theoretical frameworks. Can educational theory improve e-learning? Conventional educational theory can be applied to improve the quality and effectiveness of e-learning. The theory of "threshold concepts" and educational theories on reflection, motivation, and communities of practice can be applied when designing e-learning material. E-learning in surgical education: e-learning has many advantages but also has weaknesses. Studies have shown that e-learning is an effective teaching method that offers high levels of learner satisfaction. Instead of trying to compare e-learning with traditional methods of teaching, it is better to integrate into e-learning elements of traditional teaching that have been proven to be effective. E-learning can play an important role in surgical education as a blended approach, combined with more traditional methods of teaching, which offer better face-to-face interaction with patients and colleagues in different circumstances and hands-on practice of practical skills. National provision of e-learning can make evaluation easier. The correct utilization of Internet and computer resources, combined with the application of valid conventional educational theory to design e-learning relevant to the various levels of surgical training, can be effective in the training of future surgeons. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  1. Students' Perceptions of Computer-Based Learning Environments, Their Attitude towards Business Statistics, and Their Academic Achievement: Implications from a UK University

    ERIC Educational Resources Information Center

    Nguyen, ThuyUyen H.; Charity, Ian; Robson, Andrew

    2016-01-01

    This study investigates students' perceptions of computer-based learning environments, their attitude towards business statistics, and their academic achievement in higher education. Guided by learning environments concepts and attitudinal theory, a theoretical model was proposed with two instruments, one for measuring the learning environment and…

  2. Implications of Whole-Brained Theories of Learning and Thinking for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Torrance, E. Paul

    1981-01-01

    Discusses the implications of theories of hemispheric dominance for computer-assisted instruction, highlights some of the computer's instructional uses, lists specialized functions of the cerebral hemispheres, and lists recommended solutions to CBI program problems which were submitted by gifted children. Thirty-five sources are listed. (FM)

  3. Assessment of Situated Learning Using Computer Environments.

    ERIC Educational Resources Information Center

    Young, Michael

    1995-01-01

    Suggests that, based on a theory of situated learning, assessment must emphasize process as much as product. Several assessment examples are given, including a computer-based planning assistant for a mathematics and science video, suggestions for computer-based portfolio assessment, and speculations about embedded assessment of virtual situations.…

  4. Proceedings of the Workshop on Models of Complex Human Learning Held in Ithaca, New York on June 27-28, 1989

    DTIC Science & Technology

    1989-06-01

    to facilitate in-depth communication of research results in a multi-disciplinary gathering led to a decision to have long presentations and limit the… learning subfields such as computational learning theory and explanation-based learning? Second, as the machine learning field increases its emphasis… Architecture, Pat Langley, University of California, Irvine… A Theory of Human Plausible Reasoning…

  5. It Takes a Village: Supporting Inquiry- and Equity-Oriented Computer Science Pedagogy through a Professional Learning Community

    ERIC Educational Resources Information Center

    Ryoo, Jean; Goode, Joanna; Margolis, Jane

    2015-01-01

    This article describes the importance that high school computer science teachers place on a teachers' professional learning community designed around an inquiry- and equity-oriented approach for broadening participation in computing. Using grounded theory to analyze four years of teacher surveys and interviews from the Exploring Computer Science…

  6. The Evolution of Networked Computing in the Teaching of Japanese as a Foreign Language.

    ERIC Educational Resources Information Center

    Harrison, Richard

    1998-01-01

    Reviews the evolution of Internet-based projects in Japanese computer-assisted language learning and suggests future directions in which the field may develop, based on emerging network technology and learning theory. (Author/VWL)

  7. Theories for Deep Change in Affect-sensitive Cognitive Machines: A Constructivist Model.

    ERIC Educational Resources Information Center

    Kort, Barry; Reilly, Rob

    2002-01-01

    There is an interplay between emotions and learning, but this interaction is far more complex than previous learning theories have articulated. This article proffers a novel model by which to regard the interplay of emotions upon learning and discusses the larger practical aim of crafting computer-based models that will recognize a learner's…

  8. Activity Theory Approach to Developing Context-Aware Mobile Learning Systems for Understanding Scientific Phenomenon and Theories

    ERIC Educational Resources Information Center

    Uden, Lorna; Hwang, Gwo-Jen

    2013-01-01

    Mobile computing offers potential opportunities for students' learning especially when it combines a sensing device such as RFID (Radio Frequency Identification). Researchers have indicated that a key feature of in-field learning supported by mobile devices and technology is context awareness, with which context and functionality provided by…

  9. A Flow Theory Perspective on Learner Motivation and Behavior in Distance Education

    ERIC Educational Resources Information Center

    Liao, Li-Fen

    2006-01-01

    Motivating learners to continue to study and enjoy learning is one of the critical factors in distance education. Flow theory is a useful framework for studying the individual experience of learning through using computers. In this study, I examine students' emotional and cognitive responses to distance learning systems by constructing two models…

  10. Teaching and Learning Methodologies Supported by ICT Applied in Computer Science

    ERIC Educational Resources Information Center

    Capacho, Jose

    2016-01-01

    The main objective of this paper is to show a set of new methodologies applied in the teaching of Computer Science using ICT. The methodologies are framed within the conceptual basis of the following sciences: Psychology, Education and Computer Science. The theoretical framework of the research is supported by Behavioral Theory and Gestalt Theory…

  11. Impact of Media Richness and Flow on E-Learning Technology Acceptance

    ERIC Educational Resources Information Center

    Liu, Su-Houn; Liao, Hsiu-Li; Pratt, Jean A.

    2009-01-01

    Advances in e-learning technologies parallels a general increase in sophistication by computer users. The use of just one theory or model, such as the technology acceptance model, is no longer sufficient to study the intended use of e-learning systems. Rather, a combination of theories must be integrated in order to fully capture the complexity of…

  12. Mechanisms of Developmental Change in Infant Categorization

    ERIC Educational Resources Information Center

    Westermann, Gert; Mareschal, Denis

    2012-01-01

    Computational models are tools for testing mechanistic theories of learning and development. Formal models allow us to instantiate theories of cognitive development in computer simulations. Model behavior can then be compared to real performance. Connectionist models, loosely based on neural information processing, have been successful in…

  13. Effects of Combined Hands-on Laboratory and Computer Modeling on Student Learning of Gas Laws: A Quasi-Experimental Study

    ERIC Educational Resources Information Center

    Liu, Xiufeng

    2006-01-01

    Based on current theories of chemistry learning, this study intends to test a hypothesis that computer modeling enhanced hands-on chemistry laboratories are more effective than hands-on laboratories or computer modeling laboratories alone in facilitating high school students' understanding of chemistry concepts. Thirty-three high school chemistry…

  14. Model-based predictions for dopamine.

    PubMed

    Langdon, Angela J; Sharpe, Melissa J; Schoenbaum, Geoffrey; Niv, Yael

    2018-04-01

    Phasic dopamine responses are thought to encode a prediction-error signal consistent with model-free reinforcement learning theories. However, a number of recent findings highlight the influence of model-based computations on dopamine responses, and suggest that dopamine prediction errors reflect more dimensions of an expected outcome than scalar reward value. Here, we review a selection of these recent results and discuss the implications and complications of model-based predictions for computational theories of dopamine and learning. Copyright © 2017. Published by Elsevier Ltd.
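
    To make the model-free/model-based distinction concrete (a generic illustration, not an example from the review; the states, transition, and reward values are invented): a model-free learner caches values built up from past prediction errors, whereas a model-based learner recomputes values by lookahead over an internal model, so the two can disagree immediately after the task changes.

      # Contrast between a cached (model-free) and a computed (model-based) value.
      gamma = 0.9

      cached_value = {"state_A": 1.0}      # model-free value learned when A led to reward

      transition = {"state_A": "state_B"}  # the agent's internal model of the task...
      reward_model = {"state_B": 0.0}      # ...which already reflects that B no longer pays
      successor_value = {"state_B": 0.0}

      def model_based_value(state):
          nxt = transition[state]
          return reward_model[nxt] + gamma * successor_value[nxt]

      # After the change, the model-based estimate updates at once, while the
      # cached model-free value lags until new prediction errors correct it.
      print("model-free :", cached_value["state_A"])       # still 1.0
      print("model-based:", model_based_value("state_A"))  # 0.0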

  15. Why formal learning theory matters for cognitive science.

    PubMed

    Fulop, Sean; Chater, Nick

    2013-01-01

    This article reviews a number of different areas in the foundations of formal learning theory. After outlining the general framework for formal models of learning, the Bayesian approach to learning is summarized. This leads to a discussion of Solomonoff's Universal Prior Distribution for Bayesian learning. Gold's model of identification in the limit is also outlined. We next discuss a number of aspects of learning theory raised in contributed papers, related to both computational and representational complexity. The article concludes with a description of how semi-supervised learning can be applied to the study of cognitive learning models. Throughout this overview, the specific points raised by our contributing authors are connected to the models and methods under review. Copyright © 2013 Cognitive Science Society, Inc.
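
    For orientation, the Solomonoff prior mentioned here is often written as follows (one common formulation, standard material rather than content of the article): for a universal prefix machine U, the prior probability of a string x sums the weight 2^{-|p|} over every program p that makes U output x,

        m(x) = \sum_{p\,:\,U(p)=x} 2^{-|p|}.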

  16. Teacher-Education Students' Views about Knowledge Building Theory and Practice

    ERIC Educational Resources Information Center

    Hong, Huang-Yao; Chen, Fei-Ching; Chai, Ching Sing; Chan, Wen-Ching

    2011-01-01

    This study investigated the effects of engaging students to collectively learn and work with knowledge in a computer-supported collaborative learning environment called Knowledge Forum on their views about knowledge building theory and practice. Participants were 24 teacher-education students who took a required course titled "Integrating Theory…

  17. The Laboratory-Based Economics Curriculum.

    ERIC Educational Resources Information Center

    King, Paul G.; LaRoe, Ross M.

    1991-01-01

    Describes the liberal arts, computer laboratory-based economics program at Denison University (Ohio). Includes as goals helping students to (1) understand deductive arguments, (2) learn to apply theory in real-world situations, and (3) test and modify theory when necessary. Notes that the program combines computer laboratory experiments for…

  18. Health education and multimedia learning: educational psychology and health behavior theory (Part 1).

    PubMed

    Mas, Francisco G Soto; Plass, Jan; Kane, William M; Papenfuss, Richard L

    2003-07-01

    When health education researchers began to investigate how individuals make decisions related to health and the factors that influence health behaviors, they referred to frameworks shared by educational and learning research. Health education adopted the basic principles of the cognitive revolution, which were instrumental in advancing the field. There is currently a new challenge to confront: the widespread use of new technologies for health education. To better overcome this challenge, educational psychology and instructional technology theory should be considered. Unfortunately, the passion to incorporate new technologies too often overshadows how people learn or, in particular, how people learn through computer technologies. This two-part article explains how educational theory contributed to the early development of health behavior theory, describes the most relevant multimedia learning theories and constructs, and provides recommendations for developing multimedia health education programs and connecting theory and practice.

  19. Influence of Learning Style and Learning Flexibility on Clinical Judgment of Prelicensure Nursing Students within a Human Patient Computer Simulation Environment

    ERIC Educational Resources Information Center

    Robison, Elizabeth Sharon

    2012-01-01

    Nursing education is experiencing a transition in how students are exposed to clinical situations. Technology, specifically human patient computer simulation, is replacing human exposure in clinical education (Nehring, 2010b). Kaakinen and Arwood (2009) discuss the need to apply learning theories to instructional designs involving simulation for…

  20. Modeling Spanish Mood Choice in Belief Statements

    ERIC Educational Resources Information Center

    Robinson, Jason R.

    2013-01-01

    This work develops a computational methodology new to linguistics that empirically evaluates competing linguistic theories on Spanish verbal mood choice through the use of computational techniques to learn mood and other hidden linguistic features from Spanish belief statements found in corpora. The machine learned probabilistic linguistic models…

  1. Modelling ADHD: A review of ADHD theories through their predictions for computational models of decision-making and reinforcement learning.

    PubMed

    Ziegler, Sigurd; Pedersen, Mads L; Mowinckel, Athanasia M; Biele, Guido

    2016-12-01

    Attention deficit hyperactivity disorder (ADHD) is characterized by altered decision-making (DM) and reinforcement learning (RL), for which competing theories propose alternative explanations. Computational modelling contributes to understanding DM and RL by integrating behavioural and neurobiological findings, and could elucidate pathogenic mechanisms behind ADHD. This review of neurobiological theories of ADHD describes predictions for the effect of ADHD on DM and RL as described by the drift-diffusion model of DM (DDM) and a basic RL model. Empirical studies employing these models are also reviewed. While theories often agree on how ADHD should be reflected in model parameters, each theory implies a unique combination of predictions. Empirical studies agree with the theories' assumptions of a lowered DDM drift rate in ADHD, while findings are less conclusive for boundary separation. The few studies employing RL models support a lower choice sensitivity in ADHD, but not an altered learning rate. The discussion outlines research areas for further theoretical refinement in the ADHD field. Copyright © 2016 Elsevier Ltd. All rights reserved.
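
    Of the two model classes discussed, the drift-diffusion model is the more distinctive, so a minimal simulation of a single DDM trial is sketched below; the parameter values are illustrative, not estimates from the reviewed studies.

```python
import random

def simulate_ddm(drift=0.25, boundary=1.0, noise=1.0, dt=0.001,
                 non_decision=0.3, seed=3):
    """Simulate one trial of the drift-diffusion model of decision-making.

    drift    -- drift rate: average rate of evidence accumulation
    boundary -- boundary separation: evidence required before responding
    Returns (choice, reaction_time_in_seconds).
    """
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary / 2:
        evidence += drift * dt + rng.gauss(0.0, noise * dt ** 0.5)
        t += dt
    choice = "upper" if evidence > 0 else "lower"
    return choice, non_decision + t

print(simulate_ddm())               # typical drift rate
print(simulate_ddm(drift=0.10))     # lower drift rate, as reported in ADHD: typically slower decisions
```

    In such a parameterization, the reviewed consensus of a lowered drift rate in ADHD corresponds to a smaller `drift`, while the less conclusive findings concern `boundary`; the basic RL model discussed alongside it instead has a learning rate and a choice-sensitivity parameter.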

  2. The Modeling of Human Intelligence in the Computer as Demonstrated in the Game of DIPLOMAT.

    ERIC Educational Resources Information Center

    Collins, James Edward; Paulsen, Thomas Dean

    An attempt was made to develop human-like behavior in the computer. A theory of the human learning process was described. A computer game was presented which simulated the human capabilities of reasoning and learning. The program was required to make intelligent decisions based on past experiences and critical analysis of the present situation.…

  3. Probabilistic brains: knowns and unknowns

    PubMed Central

    Pouget, Alexandre; Beck, Jeffrey M; Ma, Wei Ji; Latham, Peter E

    2015-01-01

    There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference. PMID:23955561

  4. An E-learning System based on Affective Computing

    NASA Astrophysics Data System (ADS)

    Duo, Sun; Song, Lu Xue

    In recent years, e-learning has become a popular learning format, but current e-learning systems cannot instruct students effectively because they do not consider the learner's emotional state during instruction. The emerging theory of "affective computing" can address this problem by extending the computer's intelligence beyond the purely cognitive. In this paper, we construct an emotionally intelligent e-learning system based on affective computing. A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, with the avatar's teaching style adapted to the student's personality traits. A "man-to-man" learning environment is built to simulate traditional classroom pedagogy within the system.

  5. Connecting Expectations and Values: Students' Perceptions of Developmental Mathematics in a Computer-Based Learning Environment

    ERIC Educational Resources Information Center

    Jackson, Karen Latrice Terrell

    2014-01-01

    Students' perceptions influence their expectations and values. According to Expectations and Values Theory of Achievement Motivation (EVT-AM), students' expectations and values impact their behaviors (Eccles & Wigfield, 2002). This study seeks to find students' perceptions of developmental mathematics in a mastery learning computer-based…

  6. Help Options and Multimedia Listening: Students' Use of Subtitles and the Transcript

    ERIC Educational Resources Information Center

    Grgurovic, Maja; Hegelheimer, Volker

    2007-01-01

    As multimedia language learning materials become prevalent in foreign and second language classrooms, their design is an important avenue of research in Computer-Assisted Language Learning (CALL). Some argue that the design of the pedagogical materials should be informed by theory such as the interactionist SLA theory, which suggests that input…

  7. Activity Theory and Technology Mediated Interaction: Cognitive Scaffolding Using Question-Based Consultation on "Facebook"

    ERIC Educational Resources Information Center

    Rambe, Patient

    2012-01-01

    Studies that employed activity theory as a theoretical lens for exploring computer-mediated interaction have not adopted social media as their object of study. However, social media provides lecturers with personalised learning environments for diagnostic and prognostic assessments of student mastery of content and deep learning. The integration…

  8. Assessing Cognitive Load Theory to Improve Student Learning for Mechanical Engineers

    ERIC Educational Resources Information Center

    Impelluso, Thomas J.

    2009-01-01

    A computer programming class for students of mechanical engineering was redesigned and assessed: Cognitive Load Theory was used to redesign the content; online technologies were used to redesign the delivery. Student learning improved and the dropout rate was reduced. This article reports on both attitudinal and objective assessment: comparing…

  9. SSL: A Theory of How People Learn to Select Strategies

    ERIC Educational Resources Information Center

    Rieskamp, Jorg; Otto, Philipp E.

    2006-01-01

    The assumption that people possess a repertoire of strategies to solve the inference problems they face has been raised repeatedly. However, a computational model specifying how people select strategies from their repertoire is still lacking. The proposed strategy selection learning (SSL) theory predicts a strategy selection process on the basis…

  10. Situated learning theory: adding rate and complexity effects via Kauffman's NK model.

    PubMed

    Yuan, Yu; McKelvey, Bill

    2004-01-01

    For many firms, producing information, knowledge, and enhancing learning capability have become the primary basis of competitive advantage. A review of organizational learning theory identifies two approaches: (1) those that treat symbolic information processing as fundamental to learning, and (2) those that view the situated nature of cognition as fundamental. After noting that the former is inadequate because it focuses primarily on behavioral and cognitive aspects of individual learning, this paper argues the importance of studying learning as interactions among people in the context of their environment. It contributes to organizational learning in three ways. First, it argues that situated learning theory is to be preferred over traditional behavioral and cognitive learning theories, because it treats organizations as complex adaptive systems rather than mere information processors. Second, it adds rate and nonlinear learning effects. Third, following model-centered epistemology, it uses an agent-based computational model, in particular a "humanized" version of Kauffman's NK model, to study the situated nature of learning. Using simulation results, we test eight hypotheses extending situated learning theory in new directions. The paper ends with a discussion of possible extensions of the current study to better address key issues in situated learning.
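
    Kauffman's NK model, referred to above, assigns each agent a string of N binary traits whose individual fitness contributions each depend on K other traits; raising K makes the landscape more rugged and learning harder. The sketch below is a generic, minimal construction of such a landscape, not the "humanized" variant used in the paper, and all parameter values are illustrative.

```python
import random

def make_nk_landscape(n=8, k=2, seed=0):
    """Build a random NK fitness landscape.

    Each of the N loci contributes a fitness value that depends on its own
    state and on the states of K other (here: neighbouring) loci; the
    contribution tables are filled with random draws, as in Kauffman's model.
    """
    rng = random.Random(seed)
    tables = [{} for _ in range(n)]

    def fitness(genome):
        total = 0.0
        for i in range(n):
            # the locus itself plus its K circular neighbours
            key = tuple(genome[(i + j) % n] for j in range(k + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n

    return fitness

fitness = make_nk_landscape()
rng = random.Random(1)
genome = [rng.randint(0, 1) for _ in range(8)]
print("fitness of a random agent:", round(fitness(genome), 3))
```

    Agent-based simulations can then let agents search such a landscape and ask how interaction structure, learning rates, and nonlinearity shape collective learning, in the spirit of the extensions described above.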

  11. Deepening Learning through Learning-by-Inventing

    ERIC Educational Resources Information Center

    Apiola, Mikko; Tedre, Matti

    2013-01-01

    It has been shown that deep approaches to learning, intrinsic motivation, and self-regulated learning have strong positive effects on learning. However, empirically grounded analyses of how those pedagogical theories can be integrated into computing curricula are still lacking. At a more general level, it has been widely acknowledged that in…

  12. A BCM theory of meta-plasticity for online self-reorganizing fuzzy-associative learning.

    PubMed

    Tan, Javan; Quek, Chai

    2010-06-01

    Self-organizing neurofuzzy approaches have matured in their online learning of fuzzy-associative structures under time-invariant conditions. To maximize their operative value for online reasoning, these self-sustaining mechanisms must also be able to reorganize fuzzy-associative knowledge in real-time dynamic environments. Hence, it is critical to recognize that they would require self-reorganizational skills to rebuild fluid associative structures when their existing organizations fail to respond well to changing circumstances. In this light, while Hebbian theory (Hebb, 1949) is the basic computational framework for associative learning, it is less attractive for time-variant online learning because it suffers from stability limitations that impede unlearning. Instead, this paper adopts the Bienenstock-Cooper-Munro (BCM) theory of neurological learning via meta-plasticity principles (Bienenstock et al., 1982) that provides for both online associative and dissociative learning. For almost three decades, BCM theory has been shown to effectively brace physiological evidence of synaptic potentiation (association) and depression (dissociation) into a sound mathematical framework for computational learning. This paper proposes an interpretation of the BCM theory of meta-plasticity for an online self-reorganizing fuzzy-associative learning system to realize online-reasoning capabilities. Experimental findings are twofold: 1) the analysis using the S&P 500 stock index illustrated that the self-reorganizing approach could follow the trajectory shifts in the time-variant S&P 500 index for about 60 years, and 2) the benchmark profiles showed that the fuzzy-associative approach yielded comparable results with other fuzzy-precision models with similar online objectives.
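
    The BCM rule changes a synaptic weight in proportion to presynaptic activity times a nonlinear function of postsynaptic activity, with a sliding modification threshold (tracking the recent average of the squared postsynaptic response) separating depression from potentiation. The single-synapse sketch below uses illustrative constants and is only a minimal rendering of the rule, not the paper's fuzzy-associative system.

```python
def bcm_update(w, x, theta, eta=0.01, tau=0.1):
    """One BCM step for a single synapse.

    w     -- synaptic weight
    x     -- presynaptic activity
    theta -- sliding modification threshold
    Postsynaptic activity above theta potentiates (association);
    activity below theta depresses (dissociation).
    """
    y = w * x                           # postsynaptic response
    dw = eta * x * y * (y - theta)      # BCM plasticity rule
    theta += tau * (y * y - theta)      # slide threshold toward E[y^2]
    return w + dw, theta

w, theta = 0.5, 0.25
for step in range(1000):
    x = 1.0 if step % 2 == 0 else 0.2   # alternating strong/weak input
    w, theta = bcm_update(w, x, theta)
print(f"final weight {w:.3f}, threshold {theta:.3f}")
```

    Because the threshold rises as activity rises, the same input pattern can first potentiate and later depress a synapse, which is what gives the rule its capacity for both association and dissociation (unlearning).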

  13. Observations in the Computer Room: L2 Output and Learner Behaviour

    ERIC Educational Resources Information Center

    Leahy, Christine

    2004-01-01

    This article draws on second language theory, particularly output theory as defined by Swain (1995), in order to conceptualise observations made in a computer-assisted language learning setting. It investigates second language output and learner behaviour within an electronic role-play setting, based on a subject-specific problem solving task and…

  14. Toward a Script Theory of Guidance in Computer-Supported Collaborative Learning

    PubMed Central

    Fischer, Frank; Kollar, Ingo; Stegmann, Karsten; Wecker, Christof

    2013-01-01

    This article presents an outline of a script theory of guidance for computer-supported collaborative learning (CSCL). With its 4 types of components of internal and external scripts (play, scene, role, and scriptlet) and 7 principles, this theory addresses the question of how CSCL practices are shaped by dynamically reconfigured internal collaboration scripts of the participating learners. Furthermore, it explains how internal collaboration scripts develop through participation in CSCL practices. It emphasizes the importance of active application of subject matter knowledge in CSCL practices, and it prioritizes transactive over nontransactive forms of knowledge application in order to facilitate learning. Further, the theory explains how external collaboration scripts modify CSCL practices and how they influence the development of internal collaboration scripts. The principles specify an optimal scaffolding level for external collaboration scripts and allow for the formulation of hypotheses about the fading of external collaboration scripts. Finally, the article points toward conceptual challenges and future research questions. PMID:23378679

  15. Educational Game Design as Gateway for Operationalizing Computational Thinking Skills among Middle School Students

    ERIC Educational Resources Information Center

    Wu, Min Lun

    2018-01-01

    This qualitative case study reports descriptive findings of digital game-based learning involving 15 Taiwanese middle school students' use of computational thinking skills elicited through programmed activities in a game design workshop. Situated learning theory is utilized as framework to evaluate novice game designers' individual advancement in…

  16. Creating Effective Educational Computer Games for Undergraduate Classroom Learning: A Conceptual Model

    ERIC Educational Resources Information Center

    Rapeepisarn, Kowit; Wong, Kok Wai; Fung, Chun Che; Khine, Myint Swe

    2008-01-01

    When designing educational computer games, designers usually consider target age, interactivity, interface, and other related issues. They rarely explore which game genres should be employed in a given type of educational game. Recently, some digital game-based learning researchers have attempted to combine game genre with learning theory. Different researchers use…

  17. Special Education Technologies for Young Children: Present and Future Learning Scenarios with Related Research Literature.

    ERIC Educational Resources Information Center

    Watson, J. Allen; And Others

    1986-01-01

    The article surveys computer usage with young handicapped children by developing three instructional scenarios (present actual, present possible, and future). Research is reviewed on computer use with very young children, cognitive theory and microcomputer learning, and social aspects of the microcomputer experience. Trends in microcomputer,…

  18. Development of a Computer-Based Visualised Quantitative Learning System for Playing Violin Vibrato

    ERIC Educational Resources Information Center

    Ho, Tracy Kwei-Liang; Lin, Huann-shyang; Chen, Ching-Kong; Tsai, Jih-Long

    2015-01-01

    Traditional methods of teaching music are largely subjective, with the lack of objectivity being particularly challenging for violin students learning vibrato because of the existence of conflicting theories. By using a computer-based analysis method, this study found that maintaining temporal coincidence between the intensity peak and the target…

  19. Evidence and Interpretation in Language Learning Research: Opportunities for Collaboration with Computational Linguistics

    ERIC Educational Resources Information Center

    Meurers, Detmar; Dickinson, Markus

    2017-01-01

    This article discusses two types of opportunities for interdisciplinary collaboration between computational linguistics (CL) and language learning research. We target the connection between data and theory in second language (L2) research and highlight opportunities to (a) enrich the options for obtaining data and (b) support the identification…

  20. It takes a village: supporting inquiry- and equity-oriented computer science pedagogy through a professional learning community

    NASA Astrophysics Data System (ADS)

    Ryoo, Jean; Goode, Joanna; Margolis, Jane

    2015-10-01

    This article describes the importance that high school computer science teachers place on a teachers' professional learning community designed around an inquiry- and equity-oriented approach for broadening participation in computing. Using grounded theory to analyze four years of teacher surveys and interviews from the Exploring Computer Science (ECS) program in the Los Angeles Unified School District, this article describes how participating in professional development activities purposefully aimed at fostering a teachers' professional learning community helps ECS teachers make the transition to an inquiry-based classroom culture and break professional isolation. This professional learning community also provides experiences that challenge prevalent deficit notions and stereotypes about which students can or cannot excel in computer science.

  1. Effects of Modality and Redundancy Principles on the Learning and Attitude of a Computer-Based Music Theory Lesson among Jordanian Primary Pupils

    ERIC Educational Resources Information Center

    Aldalalah, Osamah Ahmad; Fong, Soon Fook

    2010-01-01

    The purpose of this study was to investigate the effects of modality and redundancy principles on the attitude and learning of music theory among primary pupils of different aptitudes in Jordan. The lesson of music theory was developed in three different modes, audio and image (AI), text with image (TI) and audio with image and text (AIT). The…

  2. Pattern perception and computational complexity: introduction to the special issue

    PubMed Central

    Fitch, W. Tecumseh; Friederici, Angela D.; Hagoort, Peter

    2012-01-01

    Research on pattern perception and rule learning, grounded in formal language theory (FLT) and using artificial grammar learning paradigms, has exploded in the last decade. This approach marries empirical research conducted by neuroscientists, psychologists and ethologists with the theory of computation and FLT, developed by mathematicians, linguists and computer scientists over the last century. Of particular current interest are comparative extensions of this work to non-human animals, and neuroscientific investigations using brain imaging techniques. We provide a short introduction to the history of these fields, and to some of the dominant hypotheses, to help contextualize these ongoing research programmes, and finally briefly introduce the papers in the current issue. PMID:22688630

  3. Models of Learning in ICAI.

    ERIC Educational Resources Information Center

    Duchastel, P.; And Others

    1989-01-01

    Discusses intelligent computer assisted instruction (ICAI) and presents various models of learning which have been proposed. Topics discussed include artificial intelligence; intelligent tutorial systems; tutorial strategies; learner control; system design; learning theory; and knowledge representation of proper and improper (i.e., incorrect)…

  4. Multimedia for occupational safety and health training: a pilot study examining a multimedia learning theory.

    PubMed

    Wallen, Erik S; Mulloy, Karen B

    2006-10-01

    Occupational diseases are a significant problem affecting public health. Safety training is an important method of preventing occupational illness. Training is increasingly being delivered by computer, although theories of learning from computer-based multimedia have been tested almost entirely on college students. This study was designed to determine whether these theories might also be applied to safety training applications for working adults. Participants viewed either computer-based multimedia respirator use training with concurrent narration, narration prior to the animation, or unrelated safety training. Participants then took a five-item transfer test that measured their ability to use their knowledge in new and creative ways. Participants in both computer-based multimedia training conditions performed significantly better than the control group on the transfer test. The results of this pilot study suggest that design guidelines developed for younger learners may be effective for training workers in occupational safety and health, although more investigation is needed.

  5. Exploring Students Intentions to Study Computer Science and Identifying the Differences among ICT and Programming Based Courses

    ERIC Educational Resources Information Center

    Giannakos, Michail N.

    2014-01-01

    Computer Science (CS) courses comprise both Programming and Information and Communication Technology (ICT) issues; however, these two areas differ substantially, inter alia in students' attitudes and beliefs regarding the intended learning content. In this research, factors from the Social Cognitive Theory and Unified Theory of…

  6. Content Analysis in Computer-Mediated Communication: Analyzing Models for Assessing Critical Thinking through the Lens of Social Constructivism

    ERIC Educational Resources Information Center

    Buraphadeja, Vasa; Dawson, Kara

    2008-01-01

    This article reviews content analysis studies aimed to assess critical thinking in computer-mediated communication. It also discusses theories and content analysis models that encourage critical thinking skills in asynchronous learning environments and reviews theories and factors that may foster critical thinking skills and new knowledge…

  7. An Interactive Learning Environment for Information and Communication Theory

    ERIC Educational Resources Information Center

    Hamada, Mohamed; Hassan, Mohammed

    2017-01-01

    Interactive learning tools are emerging as effective educational materials in the area of computer science and engineering. It is a research domain that is rapidly expanding because of its positive impacts on motivating and improving students' performance during the learning process. This paper introduces an interactive learning environment for…

  8. Neurocomputational mechanisms of prosocial learning and links to empathy.

    PubMed

    Lockwood, Patricia L; Apps, Matthew A J; Valton, Vincent; Viding, Essi; Roiser, Jonathan P

    2016-08-30

    Reinforcement learning theory powerfully characterizes how we learn to benefit ourselves. In this theory, prediction errors-the difference between a predicted and actual outcome of a choice-drive learning. However, we do not operate in a social vacuum. To behave prosocially we must learn the consequences of our actions for other people. Empathy, the ability to vicariously experience and understand the affect of others, is hypothesized to be a critical facilitator of prosocial behaviors, but the link between empathy and prosocial behavior is still unclear. During functional magnetic resonance imaging (fMRI) participants chose between different stimuli that were probabilistically associated with rewards for themselves (self), another person (prosocial), or no one (control). Using computational modeling, we show that people can learn to obtain rewards for others but do so more slowly than when learning to obtain rewards for themselves. fMRI revealed that activity in a posterior portion of the subgenual anterior cingulate cortex/basal forebrain (sgACC) drives learning only when we are acting in a prosocial context and signals a prosocial prediction error conforming to classical principles of reinforcement learning theory. However, there is also substantial variability in the neural and behavioral efficiency of prosocial learning, which is predicted by trait empathy. More empathic people learn more quickly when benefitting others, and their sgACC response is the most selective for prosocial learning. We thus reveal a computational mechanism driving prosocial learning in humans. This framework could provide insights into atypical prosocial behavior in those with disorders of social cognition.
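
    At its core, the computational model described here is delta-rule learning with recipient-specific learning rates, so that slower prosocial learning appears as a smaller learning rate when rewards go to another person. The sketch below is a hedged, minimal rendition with hypothetical parameter values; it is not the authors' fitted model.

```python
import math
import random

def prosocial_rl(alphas, beta=4.0, n_trials=60, p_good=0.75, seed=2):
    """Probabilistic two-option learning, run separately per reward recipient.

    alphas -- dict mapping recipient ('self', 'other', 'no_one') to a
              learning rate; slower prosocial learning corresponds to a
              smaller learning rate for 'other'.
    """
    rng = random.Random(seed)
    learned = {}
    for recipient, alpha in alphas.items():
        q = [0.0, 0.0]
        for _ in range(n_trials):
            p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))   # softmax choice
            a = 0 if rng.random() < p0 else 1
            prob = p_good if a == 0 else 1.0 - p_good
            r = 1.0 if rng.random() < prob else 0.0
            q[a] += alpha * (r - q[a])   # recipient-specific prediction-error update
        learned[recipient] = [round(v, 2) for v in q]
    return learned

print(prosocial_rl({"self": 0.40, "other": 0.15, "no_one": 0.15}))
```

    Fitting the learning rate separately per recipient to observed choices is what allows a comparison of self versus prosocial learning speeds across participants.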

  9. From Requirements to Code: Issues and Learning in IS Students' Systems Development Projects

    ERIC Educational Resources Information Center

    Scott, Elsje

    2008-01-01

    The Computing Curricula (2005) place Information Systems (IS) at the intersection of exact sciences (e.g. General Systems Theory), technology (e.g. Computer Science), and behavioral sciences (e.g. Sociology). This presents particular challenges for teaching and learning, as future IS professionals need to be equipped with a wide range of…

  10. Enhancing Competence and Autonomy in Computer-Based Instruction Using a Skill-Challenge Balancing Strategy

    ERIC Educational Resources Information Center

    Kim, Jieun; Ryu, Hokyoung; Katuk, Norliza; Wang, Ruili; Choi, Gyunghyun

    2014-01-01

    The present study aims to show if a skill-challenge balancing (SCB) instruction strategy can assist learners to motivationally engage in computer-based learning. Csikszentmihalyi's flow theory (self-control, curiosity, focus of attention, and intrinsic interest) was applied to an account of the optimal learning experience in SCB-based learning…

  11. The Impact of the Digital Divide on First-Year Community College Students

    ERIC Educational Resources Information Center

    Mansfield, Malinda

    2017-01-01

    Some students do not possess the learning management system (LMS) and basic computer skills needed for success in first-year experience (FYE) courses. The purpose of this quantitative study, based on the Integrative Learning Design Framework and theory of transactional distance, was to identify what basic computer skills and LMS skills are needed…

  12. Computer-Supported Collaborative Inquiry on Buoyancy: A Discourse Analysis Supporting the "Pieces" Position on Conceptual Change

    ERIC Educational Resources Information Center

    Turcotte, Sandrine

    2012-01-01

    This article describes in detail a conversation analysis of conceptual change in a computer-supported collaborative learning environment. Conceptual change is an essential learning process in science education that has yet to be fully understood. While many models and theories have been developed over the last three decades, empirical data to…

  13. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  14. Modelling Trial-by-Trial Changes in the Mismatch Negativity

    PubMed Central

    Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.

    2013-01-01

    The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989
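
    To make "approximate Bayesian learning of sensory regularities" concrete at the single-trial level, the sketch below implements a simple Beta-Bernoulli observer whose surprise about each tone plays the role of a trial-wise prediction-error signal. This is only an illustrative stand-in, far simpler than the predictive-coding, model-adjustment, and novelty-detection models actually compared in the paper.

```python
import math

def roving_oddball_observer(sequence, a=1.0, b=1.0):
    """Beta-Bernoulli learner for a binary standard/deviant tone stream.

    For each trial the observer's predicted deviant probability is a/(a+b);
    surprise (-log of the predicted probability of what occurred) is largest
    for unexpected deviants, mimicking a trial-wise MMN-like signal.
    """
    surprises = []
    for is_deviant in sequence:
        p_deviant = a / (a + b)
        p_observed = p_deviant if is_deviant else 1.0 - p_deviant
        surprises.append(-math.log(p_observed))
        # conjugate Beta update of the learned regularity
        if is_deviant:
            a += 1.0
        else:
            b += 1.0
    return surprises

stream = [0, 0, 0, 0, 0, 1, 0, 0, 0, 1]   # 1 = deviant tone
for t, s in enumerate(roving_oddball_observer(stream)):
    print(f"trial {t}: surprise {s:.2f}")
```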

  15. Causal learning with local computations.

    PubMed

    Fernbach, Philip M; Sloman, Steven A

    2009-05-01

    The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require relatively small amounts of data, and need not respect normative prescriptions as inferences that are principled locally may violate those principles when combined. Over a series of 3 experiments, the authors found (a) systematic inferences from small amounts of data; (b) systematic inference of extraneous causal links; (c) influence of data presentation order on inferences; and (d) error reduction through pretraining. Without pretraining, a model based on local computations fitted data better than a Bayesian structural inference model. The data suggest that local computations serve as a heuristic for learning causal structure. Copyright 2009 APA, all rights reserved.
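
    As a rough illustration of what a "local computation" might look like, the sketch below keeps a single hypothesized causal structure and updates it from cues available on each trial alone, never re-scoring the full hypothesis space. The specific linking heuristic and the toy trials are assumptions for illustration, not the authors' model.

```python
def local_structure_learning(trials, variables):
    """Update a single causal-structure hypothesis from trial-local cues.

    trials    -- list of dicts mapping variable name -> 0/1 for that trial
    variables -- (candidate_causes, effect) tuple
    Heuristic: whenever the effect occurs, link it to every candidate cause
    present on that trial that is not already explained by an existing link.
    """
    causes, effect = variables
    links = set()
    for trial in trials:
        if not trial[effect]:
            continue
        already_explained = any(trial[c] for c in links)
        for c in causes:
            if trial[c] and not already_explained:
                links.add(c)
    return links

trials = [
    {"A": 1, "B": 1, "E": 1},   # first trial cannot disambiguate A from B
    {"A": 1, "B": 0, "E": 1},
    {"A": 0, "B": 1, "E": 0},
]
print(local_structure_learning(trials, (["A", "B"], "E")))
```

    Because the first trial cannot disambiguate A from B, the extraneous link to B is kept, and reordering the trials changes the outcome, echoing the extraneous-link and presentation-order effects reported above.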

  16. Toward a Unified Sub-symbolic Computational Theory of Cognition

    PubMed Central

    Butz, Martin V.

    2016-01-01

    This paper proposes how various disciplinary theories of cognition may be combined into a unifying, sub-symbolic, computational theory of cognition. The following theories are considered for integration: psychological theories, including the theory of event coding, event segmentation theory, the theory of anticipatory behavioral control, and concept development; artificial intelligence and machine learning theories, including reinforcement learning and generative artificial neural networks; and theories from theoretical and computational neuroscience, including predictive coding and free energy-based inference. In the light of such a potential unification, it is discussed how abstract cognitive, conceptualized knowledge and understanding may be learned from actively gathered sensorimotor experiences. The unification rests on the free energy-based inference principle, which essentially implies that the brain builds a predictive, generative model of its environment. Neural activity-oriented inference causes the continuous adaptation of the currently active predictive encodings. Neural structure-oriented inference causes the longer term adaptation of the developing generative model as a whole. Finally, active inference strives for maintaining internal homeostasis, causing goal-directed motor behavior. To learn abstract, hierarchical encodings, however, it is proposed that free energy-based inference needs to be enhanced with structural priors, which bias cognitive development toward the formation of particular, behaviorally suitable encoding structures. As a result, it is hypothesized how abstract concepts can develop from, and thus how they are structured by and grounded in, sensorimotor experiences. Moreover, it is sketched-out how symbol-like thought can be generated by a temporarily active set of predictive encodings, which constitute a distributed neural attractor in the form of an interactive free-energy minimum. The activated, interactive network attractor essentially characterizes the semantics of a concept or a concept composition, such as an actual or imagined situation in our environment. Temporal successions of attractors then encode unfolding semantics, which may be generated by a behavioral or mental interaction with an actual or imagined situation in our environment. Implications, further predictions, possible verification, and falsifications, as well as potential enhancements into a fully spelled-out unified theory of cognition are discussed at the end of the paper. PMID:27445895

  17. Analytical learning and term-rewriting systems

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Gamble, Evan

    1990-01-01

    Analytical learning is a set of machine learning techniques for revising the representation of a theory based on a small set of examples of that theory. When the representation of the theory is correct and complete but perhaps inefficient, an important objective of such analysis is to improve the computational efficiency of the representation. Several algorithms with this purpose have been suggested, most of which are closely tied to a first-order logical language and are variants of goal regression, such as the familiar explanation-based generalization (EBG) procedure. But because predicate calculus is a poor representation for some domains, these learning algorithms are extended to apply to other computational models. It is shown that the goal regression technique applies to a large family of programming languages, all based on a kind of term-rewriting system. Included in this family are three language families of importance to artificial intelligence: logic programming, such as Prolog; lambda calculus, such as LISP; and combinatorial-based languages, such as FP. A new analytical learning algorithm, AL-2, is exhibited that learns from success but is otherwise quite different from EBG. These results suggest that term-rewriting systems are a good framework for analytical learning research in general, and that further research should be directed toward developing new techniques.
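
    Because the argument rests on viewing programs as term-rewriting systems, a tiny rewriting engine helps fix ideas: rules map a matched pattern to a rebuilt term, and evaluation rewrites repeatedly until no rule applies. The Peano-addition rule set below is a hypothetical toy, not AL-2 or any system from the paper.

```python
def rewrite(term, rules, max_steps=100):
    """Repeatedly apply the first matching rule until no rule applies (normal form).

    term  -- nested tuples, e.g. ("add", ("succ", "0"), "0")
    rules -- list of (match, build) pairs; match returns a bindings dict or None.
    """
    for _ in range(max_steps):
        for match, build in rules:
            bindings = match(term)
            if bindings is not None:
                term = build(bindings)
                break
        else:
            return term      # no rule fired: term is in normal form
    return term

def is_app(term, head):
    return isinstance(term, tuple) and term[0] == head

# Toy system for Peano addition:  add(0, y) -> y ;  add(succ(x), y) -> succ(add(x, y))
RULES = [
    (lambda t: {"y": t[2]} if is_app(t, "add") and t[1] == "0" else None,
     lambda b: b["y"]),
    (lambda t: {"x": t[1][1], "y": t[2]} if is_app(t, "add") and is_app(t[1], "succ") else None,
     lambda b: ("succ", rewrite(("add", b["x"], b["y"]), RULES))),
]

# 2 + 1 rewrites to succ(succ(succ(0)))
print(rewrite(("add", ("succ", ("succ", "0")), ("succ", "0")), RULES))
```

    Goal regression over such a system can be pictured as running the rules backwards from a desired result toward conditions on the inputs, which is the operation the paper generalizes beyond predicate calculus.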

  18. Learning Theories and Skills in Online Second Language Teaching and Learning: Dilemmas and Challenges

    ERIC Educational Resources Information Center

    Petersen, Karen Bjerg

    2014-01-01

    For decades foreign and second language teachers have taken advantage of the technology development and ensuing possibilities to use e-learning facilities for language training. Since the 1980s, the use of computer assisted language learning (CALL), Internet, web 2.0, and various kinds of e-learning technology has been developed and researched…

  19. Computer-Supported Team-Based Learning: The Impact of Motivation, Enjoyment and Team Contributions on Learning Outcomes

    ERIC Educational Resources Information Center

    Gomez, Elizabeth Avery; Wu, Dezhi; Passerini, Katia

    2010-01-01

    The benefits of teamwork and collaboration have long been advocated by many educational theories, such as constructivist and social learning models. Among the various applications of collaborative learning, the iterative team-based learning (TBL) process proposed by Michaelsen, Fink, and Knight (2002) has been successfully used in the classroom…

  20. Dual Coding Theory and Computer Education: Some Media Experiments To Examine the Effects of Different Media on Learning.

    ERIC Educational Resources Information Center

    Alty, James L.

    Dual Coding Theory has quite specific predictions about how information in different media is stored, manipulated and recalled. Different combinations of media are expected to have significant effects upon the recall and retention of information. This obviously may have important consequences in the design of computer-based programs. The paper…

  1. Leveraging Cognitive Load Theory, Scaffolding, and Distance Technologies to Enhance Computer Programming for Non-Majors

    ERIC Educational Resources Information Center

    Impelluso, Thomas J.

    2009-01-01

    Cognitive Load Theory (CLT) was used as a foundation to redesign a computer programming class for mechanical engineers, in which content was delivered with hybrid/distance technology. The effort confirmed the utility of CLT in course design, and it demonstrates that hybrid/distance learning is not merely a tool of convenience, but one which, when…

  2. Situativity theory: a perspective on how participants and the environment can interact: AMEE Guide no. 52.

    PubMed

    Durning, Steven J; Artino, Anthony R

    2011-01-01

    Situativity theory refers to theoretical frameworks which argue that knowledge, thinking, and learning are situated (or located) in experience. The importance of context to these theories is paramount, including the unique contribution of the environment to knowledge, thinking, and learning; indeed, they argue that knowledge, thinking, and learning cannot be separated from (they are dependent upon) context. Situativity theory includes situated cognition, situated learning, ecological psychology, and distributed cognition. In this Guide, we first outline key tenets of situativity theory and then compare situativity theory to information processing theory; we suspect that the reader may be quite familiar with the latter, which has prevailed in medical education research. Contrasting situativity theory with information processing theory also serves to highlight some unique potential contributions of situativity theory to work in medical education. Further, we discuss each of these situativity theories and then relate the theories to the clinical context. Examples and illustrations for each of the theories are used throughout. We will conclude with some potential considerations for future exploration. Some implications of situativity theory include: a new way of approaching knowledge and how experience and the environment impact knowledge, thinking, and learning; recognizing that the situativity framework can be a useful tool to "diagnose" the teaching or clinical event; the notion that increasing individual responsibility and participation in a community (i.e., increasing "belonging") is essential to learning; understanding that the teaching and clinical environment can be complex (i.e., non-linear and multi-level); recognizing that explicit attention to how participants in a group interact with each other (not only with the teacher) and how the associated learning artifacts, such as computers, can meaningfully impact learning.

  3. Learning a theory of causality.

    PubMed

    Goodman, Noah D; Ullman, Tomer D; Tenenbaum, Joshua B

    2011-01-01

    The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality and a range of alternatives in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned--an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge. Together, these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion.

  4. Achieving Complex Learning Outcomes through Adoption of a Pedagogical Perspective: A Model for Computer Technology Delivered Instruction

    ERIC Educational Resources Information Center

    Bellard, Breshanica

    2018-01-01

    Professionals responsible for the delivery of education and training using technology systems and platforms can facilitate complex learning through application of relevant strategies, principles and theories that support how learners learn and that support how curriculum should be designed in a technology based learning environment. Technological…

  5. Implications of Research and Theory for the Use of Computers with the Learning Disabled. CREATE Monograph Series.

    ERIC Educational Resources Information Center

    Weisgerber, Robert A.

    This monograph, first in a series of six, provides the theoretical background and premises underlying the efforts of the research team and two collaborating California school districts to explore ways in which the computer and related technologies can be more fully and effectively used in the instruction of learning disabled students. Contents…

  6. Mobile Learning in Secondary Education: Teachers' and Students' Perceptions and Acceptance of Tablet Computers

    ERIC Educational Resources Information Center

    Montrieux, Hannelore; Courtois, Cédric; De Grove, Frederik; Raes, Annelies; Schellens, Tammy; De Marez, Lieven

    2014-01-01

    This paper examines the school-wide introduction of the tablet computer as a mobile learning tool in a secondary school in Belgium. Drawing upon the Decomposed Theory of Planned Behavior, we question during three waves of data collection which factors influence teachers' and students' acceptance and use of these devices for educational purposes.…

  7. Group Learning Assessment: Developing a Theory-Informed Analytics

    ERIC Educational Resources Information Center

    Xing, Wanli; Wadholm, Robert; Petakovic, Eva; Goggins, Sean

    2015-01-01

    Assessment in Computer Supported Collaborative Learning (CSCL) is an implicit issue, and most assessments are summative in nature. Process-oriented methods of assessment can vary significantly in their indicators and typically only partially address the complexity of group learning. Moreover, the majority of these assessment methods require…

  8. Mastering cognitive development theory in computer science education

    NASA Astrophysics Data System (ADS)

    Gluga, Richard; Kay, Judy; Lister, Raymond; Simon; Kleitman, Sabina

    2013-03-01

    To design an effective computer science curriculum, educators require a systematic method of classifying the difficulty level of learning activities and assessment tasks. This is important for curriculum design and implementation and for communication between educators. Different educators must be able to use the method consistently, so that classified activities and assessments are comparable across the subjects of a degree and, ideally, comparable across institutions. One widespread approach to supporting this is to write learning objectives in terms of Bloom's Taxonomy. This, or other such classifications, is likely to be more effective if educators can use them consistently, in the way experts would use them. To this end, we present the design and evaluation of our online interactive web-based tutorial system, which can be configured and used to offer training in different classification schemes. We report on results from three evaluations. First, 17 computer science educators completed a tutorial on using Bloom's Taxonomy to classify programming examination questions. Second, 20 computer science educators completed a Neo-Piagetian tutorial. Third, we compared the inter-rater reliability scores of computer science educators classifying programming questions with Bloom's Taxonomy before and after taking our tutorial. Based on the results of these evaluations, we discuss the effectiveness of our tutorial system design for teaching computer science educators how to systematically and consistently classify programming examination questions. We also discuss the suitability of Bloom's Taxonomy and Neo-Piagetian theory for achieving this goal. The Bloom's and Neo-Piagetian tutorials are made available as a community resource. The contributions of this paper are the following: the tutorial system for learning classification schemes for the purpose of coding the difficulty of computing learning materials; its evaluation; new insights into the consistency that computing educators can achieve using Bloom; and first insights into the use of Neo-Piagetian theory by a group of classifiers.
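
    The inter-rater reliability comparison mentioned above is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa (the record does not specify the statistic, so this is an assumption). The sketch below computes kappa for two hypothetical educators coding the same ten exam questions into Bloom-style levels.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical Bloom-level codes assigned to ten exam questions by two educators
rater_1 = ["apply", "analyse", "apply", "recall", "apply", "create", "recall", "apply", "analyse", "apply"]
rater_2 = ["apply", "apply",   "apply", "recall", "apply", "create", "recall", "analyse", "analyse", "apply"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
```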

  9. Machine learning: Trends, perspectives, and prospects.

    PubMed

    Jordan, M I; Mitchell, T M

    2015-07-17

    Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today's most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing. Copyright © 2015, American Association for the Advancement of Science.

  10. Neurocomputational mechanisms of prosocial learning and links to empathy

    PubMed Central

    Apps, Matthew A. J.; Valton, Vincent; Viding, Essi; Roiser, Jonathan P.

    2016-01-01

    Reinforcement learning theory powerfully characterizes how we learn to benefit ourselves. In this theory, prediction errors—the difference between a predicted and actual outcome of a choice—drive learning. However, we do not operate in a social vacuum. To behave prosocially we must learn the consequences of our actions for other people. Empathy, the ability to vicariously experience and understand the affect of others, is hypothesized to be a critical facilitator of prosocial behaviors, but the link between empathy and prosocial behavior is still unclear. During functional magnetic resonance imaging (fMRI) participants chose between different stimuli that were probabilistically associated with rewards for themselves (self), another person (prosocial), or no one (control). Using computational modeling, we show that people can learn to obtain rewards for others but do so more slowly than when learning to obtain rewards for themselves. fMRI revealed that activity in a posterior portion of the subgenual anterior cingulate cortex/basal forebrain (sgACC) drives learning only when we are acting in a prosocial context and signals a prosocial prediction error conforming to classical principles of reinforcement learning theory. However, there is also substantial variability in the neural and behavioral efficiency of prosocial learning, which is predicted by trait empathy. More empathic people learn more quickly when benefitting others, and their sgACC response is the most selective for prosocial learning. We thus reveal a computational mechanism driving prosocial learning in humans. This framework could provide insights into atypical prosocial behavior in those with disorders of social cognition. PMID:27528669

  11. Concept Learning and Heuristic Classification in Weak-Theory Domains

    DTIC Science & Technology

    1990-03-01

  12. Feedback Processes in Multimedia Language Learning Software

    ERIC Educational Resources Information Center

    Kartal, Erdogan

    2010-01-01

    Feedback has been one of the important elements of learning and teaching theories and still pervades the literature and instructional models, especially computer- and web-based ones. However, the feedback mechanisms that dominate the fundamentals of all the instructional models designed for self-learning have changed considerably with the…

  13. Using Wikis for Learning and Knowledge Building: Results of an Experimental Study

    ERIC Educational Resources Information Center

    Kimmerle, Joachim; Moskaliuk, Johannes; Cress, Ulrike

    2011-01-01

    Computer-supported learning and knowledge building play an increasing role in online collaboration. This paper outlines some theories concerning the interplay between individual processes of learning and collaborative processes of knowledge building. In particular, it describes the co-evolution model that attempts to examine processes of learning…

  14. Complex Mobile Learning That Adapts to Learners' Cognitive Load

    ERIC Educational Resources Information Center

    Deegan, Robin

    2015-01-01

    Mobile learning is cognitively demanding and frequently the ubiquitous nature of mobile computing means that mobile devices are used in cognitively demanding environments. This paper examines the use of mobile devices from a Learning, Usability and Cognitive Load Theory perspective. It suggests scenarios where these fields interact and presents an…

  15. The theory of reasoned action as parallel constraint satisfaction: towards a dynamic computational model of health behavior.

    PubMed

    Orr, Mark G; Thrush, Roxanne; Plaut, David C

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual's pre-existing belief structure and the beliefs of others in the individual's social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics.
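
    The constraint-satisfaction mechanism can be pictured as a small network in which belief units and an intention unit excite or inhibit one another and activations are updated until they settle; learning would then adjust the weights, and social context enters as external input. The weights, unit names, and inputs below are hypothetical toy values, not the published model.

```python
import math

def settle(weights, external, n_steps=50, rate=0.2):
    """Iteratively settle unit activations under mutual (soft) constraints.

    weights  -- symmetric dict of dicts: weights[i][j] is the constraint
                between units i and j (positive = mutual support).
    external -- dict of external inputs (pre-existing beliefs, social context).
    """
    units = list(weights)
    act = {u: 0.0 for u in units}
    for _ in range(n_steps):
        for u in units:
            net = external.get(u, 0.0) + sum(weights[u][v] * act[v] for v in weights[u])
            act[u] += rate * (math.tanh(net) - act[u])   # move toward the constrained value
    return act

# Toy network: two personal beliefs, one social-norm unit, one intention unit
weights = {
    "belief_benefit": {"intention": 0.8, "belief_risk": -0.5, "social_norm": 0.0},
    "belief_risk":    {"intention": -0.7, "belief_benefit": -0.5, "social_norm": 0.0},
    "social_norm":    {"intention": 0.6, "belief_benefit": 0.0, "belief_risk": 0.0},
    "intention":      {"belief_benefit": 0.8, "belief_risk": -0.7, "social_norm": 0.6},
}
external = {"belief_benefit": 0.9, "belief_risk": 0.3, "social_norm": 0.7}
print({u: round(a, 2) for u, a in settle(weights, external).items()})
```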

  16. The Theory of Reasoned Action as Parallel Constraint Satisfaction: Towards a Dynamic Computational Model of Health Behavior

    PubMed Central

    Orr, Mark G.; Thrush, Roxanne; Plaut, David C.

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual’s pre-existing belief structure and the beliefs of others in the individual’s social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics. PMID:23671603

  17. Computational Modeling of Statistical Learning: Effects of Transitional Probability versus Frequency and Links to Word Learning

    ERIC Educational Resources Information Center

    Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.

    2010-01-01

    Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
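
    The contrast the simulations probe, transitional probability versus raw frequency, is easy to make concrete over a syllable stream, as below; the stream is a hypothetical toy example, and the recurrent-network models themselves are not reproduced here.

```python
from collections import Counter

def segment_statistics(stream):
    """Compute syllable frequencies and forward transitional probabilities.

    Transitional probability P(next | current) = count(current, next) / count(current);
    statistical-learning accounts predict word boundaries where it dips.
    """
    syllables = Counter(stream)
    bigrams = Counter(zip(stream, stream[1:]))
    tps = {pair: count / syllables[pair[0]] for pair, count in bigrams.items()}
    return syllables, tps

# Toy continuous stream made of the 'words' [ba du ka] and [ti go]
stream = "ba du ka ti go ba du ka ba du ka ti go".split()
freqs, tps = segment_statistics(stream)
print("frequencies:", dict(freqs))
print("transitional probabilities:", {f"{a}->{b}": round(p, 2) for (a, b), p in tps.items()})
```

    In this toy stream the within-word transitions (e.g. ba->du) have probability 1.0 while the across-word transitions are markedly lower, which is the dip a statistical learner can exploit to posit word boundaries.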

  18. The Design of Instructional Multimedia in E-Learning: A Media Richness Theory-Based Approach

    ERIC Educational Resources Information Center

    Sun, Pei-Chen; Cheng, Hsing Kenny

    2007-01-01

    The rapid development of computer and Internet technologies has made e-Learning become an important learning method. There has been a considerable increase in the needs for multimedia instructional material in e-Learning recently as such content has been shown to attract a learner's attention and interests. The multimedia content alone, however,…

  19. Computational Constraints in Cognitive Theories of Forgetting

    PubMed Central

    Ecker, Ullrich K. H.; Lewandowsky, Stephan

    2012-01-01

    This article highlights some of the benefits of computational modeling for theorizing in cognition. We demonstrate how computational models have been used recently to argue that (1) forgetting in short-term memory is based on interference not decay, (2) forgetting in list-learning paradigms is more parsimoniously explained by a temporal distinctiveness account than by various forms of consolidation, and (3) intrusion asymmetries that appear when information is learned in different contexts can be explained by temporal context reinstatement rather than labilization and reconsolidation processes. PMID:23091467

  20. Computational constraints in cognitive theories of forgetting.

    PubMed

    Ecker, Ullrich K H; Lewandowsky, Stephan

    2012-01-01

    This article highlights some of the benefits of computational modeling for theorizing in cognition. We demonstrate how computational models have been used recently to argue that (1) forgetting in short-term memory is based on interference not decay, (2) forgetting in list-learning paradigms is more parsimoniously explained by a temporal distinctiveness account than by various forms of consolidation, and (3) intrusion asymmetries that appear when information is learned in different contexts can be explained by temporal context reinstatement rather than labilization and reconsolidation processes.

  1. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-dimensional shape/texture optimal synthetic representation, description, and learning, was presented at previous conferences. Improved computational algorithms, based on the computational invariant theory of finite groups in Euclidean space, and a demo application are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of world objects. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" for reliably computing optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour-shape invariant characteristics. In fact, any highly sophisticated application that needs effective, robust capture and parameterization of geometric/colour invariant object attributes for reliable automated object learning and discrimination can benefit greatly from the GEOGINE progressive automated model generation computational kernel. The main operational advantages over previous, similar approaches are: 1) Progressive Automated Invariant Model Generation, 2) Invariant Minimal Complete Description Set for computational efficiency, 3) Arbitrary Model Precision for robust object description and identification.
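
    The record does not spell out GEOGINE's n-D tensor invariants, but the underlying idea, that certain combinations of shape moments remain constant under translation, scaling, and rotation and can therefore serve as "invariant parameter words", can be illustrated in 2-D with the first two Hu moment invariants. The code below is such a 2-D stand-in with a toy binary image; it is an assumption for illustration, not GEOGINE's actual kernel.

```python
import numpy as np

def hu_invariants_2d(image):
    """First two Hu moment invariants of a 2-D intensity array.

    These combinations of normalized central moments are unchanged by
    translation, scaling, and rotation of the shape -- a 2-D special case
    of the invariant descriptors discussed above.
    """
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    m00 = image.sum()
    xc, yc = (xs * image).sum() / m00, (ys * image).sum() / m00

    def eta(p, q):
        mu = ((xs - xc) ** p * (ys - yc) ** q * image).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

shape = np.zeros((32, 32))
shape[8:24, 12:20] = 1.0                                   # an axis-aligned rectangle
print([round(v, 4) for v in hu_invariants_2d(shape)])
print([round(v, 4) for v in hu_invariants_2d(shape.T)])    # transposed image: invariants unchanged
```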

  2. I Use the Computer to ADVANCE Advances in Comprehension-Strategy Research.

    ERIC Educational Resources Information Center

    Blohm, Paul J.

    Merging the instructional implications drawn from theory and research in the interactive reading model, schemata, and metacognition with computer based instruction seems a natural approach for actively involving students' participation in reading and learning from text. Computer based graphic organizers guide students' preview or review of lengthy…

  3. Application of Adaptive Decision Aiding Systems to Computer-Assisted Instruction. Final Report, January-December 1974.

    ERIC Educational Resources Information Center

    May, Donald M.; And Others

    The minicomputer-based Computerized Diagnostic and Decision Training (CDDT) system described combines the principles of artificial intelligence, decision theory, and adaptive computer assisted instruction for training in electronic troubleshooting. The system incorporates an adaptive computer program which learns the student's diagnostic and…

  4. Metacognitive Support Accelerates Computer Assisted Learning for Novice Programmers

    ERIC Educational Resources Information Center

    Rum, Siti Nurulain Mohd; Ismail, Maizatul Akmar

    2017-01-01

    Computer programming is a part of the curriculum in computer science education, and high dropout rates for this subject are a universal problem. Development of metacognitive skills, including the conceptual framework provided by socio-cognitive theories that afford reflective thinking, such as actively monitoring, evaluating, and modifying one's…

  5. Impact of Collaborative Project-Based Learning on Self-Efficacy of Urban Minority Students in Engineering

    ERIC Educational Resources Information Center

    Chen, Pearl; Hernandez, Anthony; Dong, Jane

    2015-01-01

    This paper presents an interdisciplinary research project that studies the impact of collaborative project-based learning (CPBL) on the development of self-efficacy of students from various ethnic groups in an undergraduate senior-level computer networking class. Grounded in social constructivist and situated theories of learning, the study…

  6. Mobile Learning Application Interfaces: First Steps to a Cognitive Load Aware System

    ERIC Educational Resources Information Center

    Deegan, Robin

    2013-01-01

    Mobile learning is a cognitively demanding application, and increasingly the ubiquitous nature of mobile computing means that mobile devices are used in cognitively demanding environments. This paper examines the nature of this use of mobile devices from a Learning, Usability and Cognitive Load Theory perspective. It suggests scenarios where…

  7. LSQuiz: A Collaborative Classroom Response System to Support Active Learning through Ubiquitous Computing

    ERIC Educational Resources Information Center

    Caceffo, Ricardo; Azevedo, Rodolfo

    2014-01-01

    Constructivist theory indicates that knowledge is not something finished and complete; rather, individuals must construct it through interaction with the physical and social environment. Active Learning is a methodology designed to support constructivism through the involvement of students in their learning process, allowing…

  8. Influence of Participation, Facilitator Styles, and Metacognitive Reflection on Knowledge Building in Online University Courses

    ERIC Educational Resources Information Center

    Cacciamani, Stefano; Cesareni, Donatella; Martini, Francesca; Ferrini, Tiziana; Fujita, Nobuko

    2012-01-01

    Understanding how to foster knowledge building in online and blended learning environments is key for computer-supported collaborative learning research. Knowledge building is a deeply constructivist pedagogy and a kind of inquiry learning focused on theory building. A strong indicator of engagement in knowledge building activity is the…

  9. Distinct Roles of Dopamine and Subthalamic Nucleus in Learning and Probabilistic Decision Making

    ERIC Educational Resources Information Center

    Coulthard, Elizabeth J.; Bogacz, Rafal; Javed, Shazia; Mooney, Lucy K.; Murphy, Gillian; Keeley, Sophie; Whone, Alan L.

    2012-01-01

    Even simple behaviour requires us to make decisions based on combining multiple pieces of learned and new information. Making such decisions requires both learning the optimal response to each given stimulus as well as combining probabilistic information from multiple stimuli before selecting a response. Computational theories of decision making…

  10. Situational Leadership Theory as a Foundation for a Blended Learning Framework

    ERIC Educational Resources Information Center

    Meier, David

    2016-01-01

    With the rise of computer technology, blended learning has found its way into teaching. The technology continues to evolve, challenging teachers and lecturers alike. Most studies on blended learning focus on the practical or applied side and use essentially pedagogical concepts. This study demonstrates that the leadership sciences can…

  11. Retrospective Evaluation of a Collaborative mLearning Science Module: The Users' Perspective

    ERIC Educational Resources Information Center

    DeWitt, Dorothy; Siraj, Saedah; Alias, Norlidah; Leng, Chin Hai

    2013-01-01

    This study focuses on the retrospective evaluation of collaborative mLearning (CmL) Science module for teaching secondary school science which was designed based on social constructivist learning theories and Merrill's First Principle of Instruction. This study is part of a developmental research in which computer-mediated communication (CMC)…

  12. A Theory of Causal Learning in Children: Causal Maps and Bayes Nets

    ERIC Educational Resources Information Center

    Gopnik, Alison; Glymour, Clark; Sobel, David M.; Schulz, Laura E.; Kushnir, Tamar; Danks, David

    2004-01-01

    The authors outline a cognitive and computational account of causal learning in children. They propose that children use specialized cognitive systems that allow them to recover an accurate "causal map" of the world: an abstract, coherent, learned representation of the causal relations among events. This kind of knowledge can be perspicuously…

  13. Cognitive biases, linguistic universals, and constraint-based grammar learning.

    PubMed

    Culbertson, Jennifer; Smolensky, Paul; Wilson, Colin

    2013-07-01

    According to classical arguments, language learning is both facilitated and constrained by cognitive biases. These biases are reflected in linguistic typology-the distribution of linguistic patterns across the world's languages-and can be probed with artificial grammar experiments on child and adult learners. Beginning with a widely successful approach to typology (Optimality Theory), and adapting techniques from computational approaches to statistical learning, we develop a Bayesian model of cognitive biases and show that it accounts for the detailed pattern of results of artificial grammar experiments on noun-phrase word order (Culbertson, Smolensky, & Legendre, 2012). Our proposal has several novel properties that distinguish it from prior work in the domains of linguistic theory, computational cognitive science, and machine learning. This study illustrates how ideas from these domains can be synthesized into a model of language learning in which biases range in strength from hard (absolute) to soft (statistical), and in which language-specific and domain-general biases combine to account for data from the macro-level scale of typological distribution to the micro-level scale of learning by individuals. Copyright © 2013 Cognitive Science Society, Inc.
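
    To make the idea of biases ranging from hard (absolute) to soft (statistical) concrete, a Bayesian learner can encode a bias as a prior over competing grammars and let the data shift it: a near-zero prior acts like a hard constraint, a moderate prior like a soft one. The Python sketch below is only a hypothetical two-grammar illustration, not the model developed in the paper.

      import numpy as np

      def posterior_g1(prior_g1, g1, g2, data):
          """Posterior probability of grammar 1 after observing a sequence of word orders."""
          log_p1 = np.log(prior_g1) + sum(np.log(g1[d]) for d in data)
          log_p2 = np.log(1.0 - prior_g1) + sum(np.log(g2[d]) for d in data)
          return 1.0 / (1.0 + np.exp(log_p2 - log_p1))

      # Two toy "grammars" assign different probabilities to two noun-phrase orders.
      g1 = {"noun-adj": 0.8, "adj-noun": 0.2}
      g2 = {"noun-adj": 0.2, "adj-noun": 0.8}
      data = ["noun-adj"] * 8 + ["adj-noun"] * 2          # the input favours grammar 1

      for prior in (0.5, 0.05, 1e-6):                     # soft bias ... effectively hard bias against g1
          print(f"prior P(g1) = {prior:g} -> posterior P(g1) = {posterior_g1(prior, g1, g2, data):.3f}")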

  14. Measuring E-Learning Readiness among EFL Teachers in Intermediate Public Schools in Saudi Arabia

    ERIC Educational Resources Information Center

    Al-Furaydi, Ahmed Ajab

    2013-01-01

    This study determines EFL teachers' readiness level for e-learning in several aspects, such as attitude toward e-learning and computer literacy. It also attempts to investigate the main barriers that EFL teachers have to overcome while incorporating e-learning into their teaching. The theory upon which the study was based is the technology acceptance…

  15. Students' Learning with the Connected Chemistry (CC1) Curriculum: Navigating the Complexities of the Particulate World

    ERIC Educational Resources Information Center

    Levy, Sharona T.; Wilensky, Uri

    2009-01-01

    The focus of this study is students' learning with a Connected Chemistry unit, CC1 (denotes Connected Chemistry, chapter 1), a computer-based environment for learning the topics of gas laws and kinetic molecular theory in chemistry (Levy and Wilensky 2009). An investigation was conducted into high-school students' learning with Connected…

  16. Cooperative inference: Features, objects, and collections.

    PubMed

    Searcy, Sophia Ray; Shafto, Patrick

    2016-10-01

    Cooperation plays a central role in theories of development, learning, cultural evolution, and education. We argue that existing models of learning from cooperative informants have fundamental limitations that prevent them from explaining how cooperation benefits learning. First, existing models are shown to be computationally intractable, suggesting that they cannot apply to realistic learning problems. Second, existing models assume a priori agreement about which concepts are favored in learning, which leads to a conundrum: Learning fails without precise agreement on bias, yet there is no single rational choice. We introduce cooperative inference, a novel framework for cooperation in concept learning, which resolves these limitations. Cooperative inference generalizes the notion of cooperation used in previous models from the omission of labeled objects to the omission of values of features, labels for objects, and labels for collections of objects. The result is an approach that is computationally tractable, does not require a priori agreement about biases, applies to both Boolean and first-order concepts, and begins to approximate the richness of real-world concept learning problems. We conclude by discussing relations to and implications for existing theories of cognition, cognitive development, and cultural evolution. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Towards a general theory of neural computation based on prediction by single neurons.

    PubMed

    Fiorillo, Christopher D

    2008-10-01

    Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.
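
    The proposal that a neuron's membrane voltage signals "current minus prior" information, and that an anti-Hebbian-style rule selects among prior sources so the cell responds only to what is new and surprising, can be caricatured in a few lines. The sketch below is not Fiorillo's model: it is a hypothetical toy with one excitatory input and two candidate prior sources, and it omits reward dependence and the Hebbian selection of excitatory inputs.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 20000
      latent = rng.normal(size=T)                     # the predictable part of the world

      # One excitatory "current" input and two candidate "prior" sources:
      # source 0 genuinely predicts the excitation, source 1 is unrelated noise.
      current = latent + 0.3 * rng.normal(size=T)
      prior = np.stack([latent + 0.3 * rng.normal(size=T),
                        rng.normal(size=T)], axis=1)

      w_pri = np.zeros(2)      # weights on the prior sources, learned online
      eta = 5e-4
      errors = []

      for t in range(T):
          # "Membrane voltage" signals current minus prior information (prediction error).
          v = current[t] - prior[t] @ w_pri
          errors.append(v)
          # Prior sources correlated with the residual excitation are strengthened until
          # they cancel it; uncorrelated sources stay near zero (anti-Hebbian-style selection).
          w_pri += eta * v * prior[t]

      print("prior-source weights (predictive, noise):", np.round(w_pri, 2))
      print("mean |prediction error|, early vs late:",
            round(float(np.mean(np.abs(errors[:2000]))), 2),
            round(float(np.mean(np.abs(errors[-2000:]))), 2))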

  18. The right time to learn: mechanisms and optimization of spaced learning

    PubMed Central

    Smolen, Paul; Zhang, Yili; Byrne, John H.

    2016-01-01

    For many types of learning, spaced training, which involves repeated long inter-trial intervals, leads to more robust memory formation than does massed training, which involves short or no intervals. Several cognitive theories have been proposed to explain this superiority, but only recently have data begun to delineate the underlying cellular and molecular mechanisms of spaced training, and we review these theories and data here. Computational models of the implicated signalling cascades have predicted that spaced training with irregular inter-trial intervals can enhance learning. This strategy of using models to predict optimal spaced training protocols, combined with pharmacotherapy, suggests novel ways to rescue impaired synaptic plasticity and learning. PMID:26806627

  19. Prolegomena to the field

    NASA Astrophysics Data System (ADS)

    Chen, Su Shing; Caulfield, H. John

    1994-03-01

    Adaptive Computing, in contrast to Classical Computing, is emerging as a field that is the culmination of more than 40 years of work in various scientific and technological areas, including cybernetics, neural networks, pattern recognition networks, learning machines, self-reproducing automata, genetic algorithms, fuzzy logics, probabilistic logics, chaos, electronics, optics, and quantum devices. This volume of "Critical Reviews on Adaptive Computing: Mathematics, Electronics, and Optics" is intended as a synergistic approach to this emerging field. There are many researchers in these areas working on important results; however, we have not seen a general effort to summarize and synthesize these results in theory as well as in implementation. In order to reach a higher level of synergism, we propose Adaptive Computing as the field that comprises the above-mentioned computational paradigms and their various realizations. The field should include both the Theory (or Mathematics) and the Implementation. Our emphasis is on the interplay of Theory and Implementation. This interplay, an adaptive process itself, is the only "holistic" way to advance our understanding and realization of brain-like computation. We feel that a theory without implementation tends to become unrealistic and "out of touch" with reality, while an implementation without theory runs the risk of becoming superficial and obsolete.

  20. Computer-based training for safety: comparing methods with older and younger workers.

    PubMed

    Wallen, Erik S; Mulloy, Karen B

    2006-01-01

    Computer-based safety training is becoming more common and is being delivered to an increasingly aging workforce. Aging results in a number of changes that make it more difficult to learn from certain types of computer-based training. Instructional designs derived from cognitive learning theories may overcome some of these difficulties. Three versions of computer-based respiratory safety training were shown to older and younger workers who then took a high and a low level learning test. Younger workers did better overall. Both older and younger workers did best with the version containing text with pictures and audio narration. Computer-based training with pictures and audio narration may be beneficial for workers over 45 years of age. Computer-based safety training has advantages but workers of different ages may benefit differently. Computer-based safety programs should be designed and selected based on their ability to effectively train older as well as younger learners.

  1. Differential theory of learning for efficient neural network pattern recognition

    NASA Astrophysics Data System (ADS)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-09-01

    We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  2. Differential theory of learning for efficient neural network pattern recognition

    NASA Astrophysics Data System (ADS)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-08-01

    We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  3. The effect of computer-assisted learning versus conventional teaching methods on the acquisition and retention of handwashing theory and skills in pre-qualification nursing students: a randomised controlled trial.

    PubMed

    Bloomfield, Jacqueline; Roberts, Julia; While, Alison

    2010-03-01

    High quality health care demands a nursing workforce with sound clinical skills. However, the clinical competency of newly qualified nurses continues to stimulate debate about the adequacy of current methods of clinical skills education and emphasises the need for innovative teaching strategies. Despite the increasing use of e-learning within nurse education, evidence to support its use for clinical skills teaching is limited and inconclusive. This study tested whether nursing students could learn and retain the theory and skill of handwashing more effectively when taught using computer-assisted learning compared with conventional face-to-face methods. The study employed a two-group randomised controlled design. The intervention group used an interactive, multimedia, self-directed computer-assisted learning module. The control group was taught by an experienced lecturer in a clinical skills room. Data were collected over a 5-month period between October 2004 and February 2005. Knowledge was tested at four time points and handwashing skills were assessed twice. Two hundred and forty-two first-year nursing students of mixed gender, age, educational background and first language, studying at one British university, were recruited to the study. Participant attrition increased during the study. Knowledge scores increased significantly from baseline in both groups and no significant differences were detected between the scores of the two groups. Skill performance scores were similar in both groups at the 2-week follow-up, with significant differences emerging at the 8-week follow-up in favour of the intervention group; however, this finding must be interpreted with caution in light of sample size and attrition rates. The computer-assisted learning module was an effective strategy for teaching both the theory and practice of handwashing to nursing students and in this study was found to be at least as effective as conventional face-to-face teaching methods. Copyright 2009 Elsevier Ltd. All rights reserved.

  4. Rethinking Extinction

    PubMed Central

    Dunsmoor, Joseph E.; Niv, Yael; Daw, Nathaniel; Phelps, Elizabeth A.

    2015-01-01

    Extinction serves as the leading theoretical framework and experimental model to describe how learned behaviors diminish through absence of anticipated reinforcement. In the past decade, extinction has moved beyond the realm of associative learning theory and behavioral experimentation in animals and has become a topic of considerable interest in the neuroscience of learning, memory, and emotion. Here, we review research and theories of extinction, both as a learning process and as a behavioral technique, and consider whether traditional understandings warrant a re-examination. We discuss the neurobiology, cognitive factors, and major computational theories, and revisit the predominant view that extinction results in new learning that interferes with expression of the original memory. Additionally, we reconsider the limitations of extinction as a technique to prevent the relapse of maladaptive behavior, and discuss novel approaches, informed by contemporary theoretical advances, that augment traditional extinction methods to target and potentially alter maladaptive memories. PMID:26447572

  5. Computer Aided Instruction: A Study of Student Evaluations and Academic Performance

    ERIC Educational Resources Information Center

    Collins, David; Deck, Alan; McCrickard, Myra

    2008-01-01

    Computer aided instruction (CAI) encompasses a broad range of computer technologies that supplement the classroom learning environment and can dramatically increase a student's access to information. Criticism of CAI generally focuses on two issues: it lacks an adequate foundation in educational theory and the software is difficult to implement…

  6. Computer Ethics: A Slow Fade from Black and White to Shades of Gray

    ERIC Educational Resources Information Center

    Kraft, Theresa A.; Carlisle, Judith

    2011-01-01

    The expanded use of teaching case based analysis based on current events and news stories relating to computer ethics improves student engagement, encourages creativity and fosters an active learning environment. Professional ethics standards, accreditation standards for computer curriculum, ethics theories, resources for ethics on the internet,…

  7. A Quantitative Exploration of Preservice Teachers' Intent to Use Computer-based Technology

    ERIC Educational Resources Information Center

    Kim, Kioh; Jain, Sachin; Westhoff, Guy; Rezabek, Landra

    2008-01-01

    Based on Bandura's (1977) social learning theory, the purpose of this study is to identify the relationship of preservice teachers' perceptions of faculty modeling of computer-based technology and preservice teachers' intent of using computer-based technology in educational settings. There were 92 participants in this study; they were enrolled in…

  8. A Novel Machine Learning Classifier Based on a Qualia Modeling Agent (QMA)

    DTIC Science & Technology

    …Integrated Information Theory (IIT) of Consciousness, which proposes that the fundamental structural elements of consciousness are qualia. By modeling the… This research develops a computational agent, which overcomes this problem. The Qualia Modeling Agent (QMA) is modeled after two cognitive theories…

  9. Gender Divide and Acceptance of Collaborative Web 2.0 Applications for Learning in Higher Education

    ERIC Educational Resources Information Center

    Huang, Wen-Hao David; Hood, Denice Ward; Yoo, Sun Joo

    2013-01-01

    Situated in the gender digital divide framework, this survey study investigated the role of computer anxiety in influencing female college students' perceptions toward Web 2.0 applications for learning. Based on 432 college students' "Web 2.0 for learning" perception ratings collected by relevant categories of "Unified Theory of Acceptance and Use…

  10. Optimising ICT Effectiveness in Instruction and Learning: Multilevel Transformation Theory and a Pilot Project in Secondary Education

    ERIC Educational Resources Information Center

    Mooij, Ton

    2004-01-01

    Specific combinations of educational and ICT conditions including computer use may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how to best achieve this optimization. A theoretical…

  11. Redesigning College Algebra: Combining Educational Theory and Web-Based Learning to Improve Student Attitudes and Performance

    ERIC Educational Resources Information Center

    Hagerty, Gary; Smith, Stanley; Goodwin, Danielle

    2010-01-01

    In 2001, Black Hills State University (BHSU) redesigned college algebra to use the computer-based mastery learning program, Assessment and Learning in Knowledge Spaces [1], historical development of concepts modules, whole class discussions, cooperative activities, relevant applications problems, and many fewer lectures. This resulted in a 21%…

  12. Multimedia as a Means to Enhance Teaching Technical Vocabulary to Physics Undergraduates in Rwanda

    ERIC Educational Resources Information Center

    Rusanganwa, Joseph

    2013-01-01

    This study investigates whether the integration of ICT in education can facilitate teaching and learning. An example of such integration is computer assisted language learning (CALL) of English technical vocabulary by undergraduate physics students in Rwanda. The study draws on theories of cognitive load and multimedia learning to explore learning…

  13. Computer-Mediated Counter-Arguments and Individual Learning

    ERIC Educational Resources Information Center

    Hsu, Jack Shih-Chieh; Huang, Hsieh-Hong; Linden, Lars P.

    2011-01-01

    This study explores a de-bias function for a decision support systems (DSS) that is designed to help a user avoid confirmation bias by increasing the user's learning opportunities. Grounded upon the theory of mental models, the use of DSS is viewed as involving a learning process, whereby a user is directed to build mental models so as to reduce…

  14. A Computer Environment for Beginners' Learning of Sorting Algorithms: Design and Pilot Evaluation

    ERIC Educational Resources Information Center

    Kordaki, M.; Miatidis, M.; Kapsampelis, G.

    2008-01-01

    This paper presents the design, features and pilot evaluation study of a web-based environment--the SORTING environment--for the learning of sorting algorithms by secondary level education students. The design of this environment is based on modeling methodology, taking into account modern constructivist and social theories of learning while at…

  15. Towards the Development of an Automated Learning Assistant for Vector Calculus: Integration over Planar Regions

    ERIC Educational Resources Information Center

    Yaacob, Yuzita; Wester, Michael; Steinberg, Stanly

    2010-01-01

    This paper presents a prototype of a computer learning assistant ILMEV (Interactive Learning-Mathematica Enhanced Vector calculus) package with the purpose of helping students to understand the theory and applications of integration in vector calculus. The main problem for students using Mathematica is to convert a textbook description of a…

  16. Information Theory, Inference and Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Mackay, David J. C.

    2003-10-01

    Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, are developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
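
    As a small, hypothetical taste of the book's central theme that data compression and inference are two sides of the same coin, the snippet below computes the Shannon entropy of a toy source and the expected length of a Huffman code built for it; the code length can never beat the entropy and, for this dyadic source, matches it exactly.

      import heapq
      from math import log2

      probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

      # Shannon entropy: the lower bound on expected bits per symbol.
      H = -sum(p * log2(p) for p in probs.values())

      # Build a Huffman code with a simple heap of (probability, tie-breaker, partial code) entries.
      heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
      heapq.heapify(heap)
      counter = len(heap)
      while len(heap) > 1:
          p1, _, c1 = heapq.heappop(heap)
          p2, _, c2 = heapq.heappop(heap)
          merged = {s: "0" + code for s, code in c1.items()}
          merged.update({s: "1" + code for s, code in c2.items()})
          heapq.heappush(heap, (p1 + p2, counter, merged))
          counter += 1
      code = heap[0][2]

      expected_len = sum(probs[s] * len(code[s]) for s in probs)
      print(f"entropy        = {H:.3f} bits/symbol")
      print(f"Huffman length = {expected_len:.3f} bits/symbol")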

  17. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    PubMed

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experience, whereas goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
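
    The dual-controller idea summarised above, a habitual model-free learner alongside a goal-directed model-based one, is commonly formalised as a weighted mixture of the two systems' action values. The sketch below is not the author's model; it is a minimal, hypothetical two-armed-bandit illustration assuming a fixed mixing weight and a perfectly known environment model for the model-based side.

      import numpy as np

      rng = np.random.default_rng(1)

      p_reward = np.array([0.8, 0.2])   # true reward probabilities of the two actions
      alpha, beta, w = 0.1, 4.0, 0.5    # learning rate, softmax inverse temperature, mixing weight

      q_mf = np.zeros(2)                # model-free (habitual) values, learned from prediction errors
      q_mb = p_reward.copy()            # model-based (goal-directed) values, here read off a known model

      for trial in range(500):
          q = w * q_mb + (1 - w) * q_mf                  # combine the two controllers
          probs = np.exp(beta * q) / np.exp(beta * q).sum()
          a = rng.choice(2, p=probs)
          r = float(rng.random() < p_reward[a])
          q_mf[a] += alpha * (r - q_mf[a])               # habitual values track experienced reward

      print("learned model-free values:", np.round(q_mf, 2))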

  18. A Symbolic Model of the Nonconscious Acquisition of Information.

    ERIC Educational Resources Information Center

    Ling, Charles X.; Marinov, Marin

    1994-01-01

    Challenges Smolensky's theory that human intuitive/nonconscious cognitive processes can only be accurately explained in terms of subsymbolic computations in artificial neural networks. Symbolic learning models of two cognitive tasks involving nonconscious acquisition of information are presented: learning production rules and artificial finite…

  19. Art and Dream.

    ERIC Educational Resources Information Center

    Guo, Shesen

    2003-01-01

    A computer-assisted learning/teaching model is conceived with implications of constructivist theory and an analogy between the traditional art form Shuanghuang and the teaching/learning environment. The virtual character of the model interacts with the learner, in the form of human behavior and speech supported by recognition biometrics,…

  20. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value alter timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Developmental Changes in Learning: Computational Mechanisms and Social Influences

    PubMed Central

    Bolenz, Florian; Reiter, Andrea M. F.; Eppinger, Ben

    2017-01-01

    Our ability to learn from the outcomes of our actions and to adapt our decisions accordingly changes over the course of the human lifespan. In recent years, there has been an increasing interest in using computational models to understand developmental changes in learning and decision-making. Moreover, extensions of these models are currently applied to study socio-emotional influences on learning in different age groups, a topic that is of great relevance for applications in education and health psychology. In this article, we aim to provide an introduction to basic ideas underlying computational models of reinforcement learning and focus on parameters and model variants that might be of interest to developmental scientists. We then highlight recent attempts to use reinforcement learning models to study the influence of social information on learning across development. The aim of this review is to illustrate how computational models can be applied in developmental science, what they can add to our understanding of developmental mechanisms and how they can be used to bridge the gap between psychological and neurobiological theories of development. PMID:29250006
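
    For readers new to the parameters mentioned above, a basic reinforcement-learning model of the kind fitted in developmental studies often has just two of them: a learning rate that scales prediction-error updates and an inverse temperature that controls how deterministically values are turned into choices. The hypothetical sketch below simulates such a learner on a two-armed bandit; lowering either parameter produces the slower, noisier learning profiles sometimes reported for particular age groups.

      import numpy as np

      def simulate_learner(alpha, beta, n_trials=300, p_reward=(0.7, 0.3), seed=0):
          """Simulate a softmax Q-learner and return its rate of choosing the better option."""
          rng = np.random.default_rng(seed)
          q = np.zeros(2)
          optimal = 0
          for _ in range(n_trials):
              probs = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax choice rule
              a = rng.choice(2, p=probs)
              r = float(rng.random() < p_reward[a])
              q[a] += alpha * (r - q[a])                          # prediction-error update
              optimal += int(a == 0)
          return optimal / n_trials

      for alpha, beta in [(0.3, 5.0), (0.05, 1.0)]:
          print(f"alpha={alpha:.2f}, beta={beta:.1f} -> optimal-choice rate {simulate_learner(alpha, beta):.2f}")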

  2. ATR applications of minimax entropy models of texture and shape

    NASA Astrophysics Data System (ADS)

    Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.

    2001-10-01

    Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory has permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.

  3. An Overview of Selected Theories about Student Learning

    ERIC Educational Resources Information Center

    Goel, Sanjay

    2011-01-01

    Engineering educators are often not familiar with the theories and research findings of educational psychology, adult development, curriculum design, and instruction design. Even the published research in engineering/computing education does not sufficiently leverage this body of knowledge. Often in the educational reports and recommendations by…

  4. THE CURRENT STATUS OF RESEARCH AND THEORY IN HUMAN PROBLEM SOLVING.

    ERIC Educational Resources Information Center

    DAVIS, GARY A.

    Problem-solving theories in three areas (traditional stimulus-response learning, cognitive-Gestalt approaches, and computer and mathematical models) were summarized. Recent empirical studies (1960-65) on problem solving were categorized according to the type of behavior elicited by particular problem-solving tasks. Anagram,…

  5. A Survey of Computer Science Capstone Course Literature

    ERIC Educational Resources Information Center

    Dugan, Robert F., Jr.

    2011-01-01

    In this article, we surveyed literature related to undergraduate computer science capstone courses. The survey was organized around course and project issues. Course issues included: course models, learning theories, course goals, course topics, student evaluation, and course evaluation. Project issues included: software process models, software…

  6. Optimal Sequential Rules for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
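
    The ingredients listed above, a Bayesian posterior over a student's mastery and a threshold-type utility structure for deciding when to stop testing, can be combined into a simple sequential rule. The sketch below is only a hypothetical illustration of that kind of rule, not the model of the article: a Beta-Binomial posterior is updated after each item, and testing stops once the posterior probability of mastery (or of non-mastery) crosses a fixed threshold.

      from math import comb

      def beta_cdf(x, a, b):
          """CDF of a Beta(a, b) with integer parameters, via the binomial-sum identity."""
          n = a + b - 1
          return sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

      def sequential_mastery(responses, mastery_level=0.6, threshold=0.9):
          """Stop testing once the posterior probability of (non-)mastery crosses a threshold."""
          a, b = 1, 1                                           # uniform Beta(1, 1) prior on success rate
          for i, correct in enumerate(responses, start=1):
              a += correct
              b += 1 - correct
              p_mastery = 1.0 - beta_cdf(mastery_level, a, b)   # P(success rate > mastery level)
              if p_mastery >= threshold:
                  return i, "advance", round(p_mastery, 3)
              if 1.0 - p_mastery >= threshold:
                  return i, "remediate", round(p_mastery, 3)
          return len(responses), "keep testing", round(p_mastery, 3)

      print(sequential_mastery([1] * 12))               # a consistently correct student
      print(sequential_mastery([0, 1, 0, 0, 1, 0]))     # a struggling student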

  7. Teaching Machines, Programming, Computers, and Instructional Technology: The Roots of Performance Technology.

    ERIC Educational Resources Information Center

    Deutsch, William

    1992-01-01

    Reviews the history of the development of the field of performance technology. Highlights include early teaching machines, instructional technology, learning theory, programed instruction, the systems approach, needs assessment, branching versus linear program formats, programing languages, and computer-assisted instruction. (LRW)

  8. Criteria for Evaluating a Game-Based CALL Platform

    ERIC Educational Resources Information Center

    Ní Chiaráin, Neasa; Ní Chasaide, Ailbhe

    2017-01-01

    Game-based Computer-Assisted Language Learning (CALL) is an area that currently warrants attention, as task-based, interactive, multimodal games increasingly show promise for language learning. This area is inherently multidisciplinary--theories from second language acquisition, games, and psychology must be explored and relevant concepts from…

  9. ZAPs: Using Interactive Programs for Learning Psychology

    ERIC Educational Resources Information Center

    Hulshof, Casper D.; Eysink, Tessa H. S.; Loyens, Sofie; de Jong, Ton

    2005-01-01

    ZAPs are short, self-contained computer programs that encourage students to experience psychological phenomena in a vivid, self-explanatory way, and that are meant to evoke enthusiasm about psychological topics. ZAPs were designed according to principles that originate from experiential and discovery learning theories. The interactive approach…

  10. Reducing Foreign Language Communication Apprehension with Computer-Mediated Communication: A Preliminary Study

    ERIC Educational Resources Information Center

    Arnold, Nike

    2007-01-01

    Many studies (e.g., [Beauvois, M.H., 1998. "E-talk: Computer-assisted classroom discussion--attitudes and motivation." In: Swaffar, J., Romano, S., Markley, P., Arens, K. (Eds.), "Language learning online: Theory and practice in the ESL and L2 computer classroom." Labyrinth Publications, Austin, TX, pp. 99-120; Bump, J., 1990. "Radical changes in…

  11. Learner-Environment Fit: University Students in a Computer Room.

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    The purpose of this study was to apply the theory of person-environment fit in assessing student well-being in a university computer room. Subjects were 12 students enrolled in a computer literacy course. Their learning behavior and well-being were evaluated on the basis of three symptoms of video display terminal stress usually found in the…

  12. How Computers Are Used in the Teaching of Music and Speculations about How Artificial Intelligence Could Be Applied to Radically Improve the Learning of Compositional Skills. CITE Report No. 6.

    ERIC Educational Resources Information Center

    Holland, Simon

    This paper forms part of a preliminary survey for work on the application of artificial intelligence theories and techniques to the learning of music composition skills. The paper deals with present day applications of computers to the teaching of music and speculations about how artificial intelligence might be used to foster music composition in…

  13. A Revision of Learning and Teaching = Revision del aprender y del ensenar.

    ERIC Educational Resources Information Center

    Reggini, Horace C.

    1983-01-01

    This review of the findings of recent cognitive science research pertaining to learning and teaching focuses on how science and mathematics are being taught, analyzes how the presence of the computer demonstrates a need for radical rethinking of both the theory and the practice of learning, and points out that if educators fail to consider the…

  14. Teaching about Complex Systems Is No Simple Matter: Building Effective Professional Development for Computer-Supported Complex Systems Instruction

    ERIC Educational Resources Information Center

    Yoon, Susan A.; Anderson, Emma; Koehler-Yom, Jessica; Evans, Chad; Park, Miyoung; Sheldon, Josh; Schoenfeld, Ilana; Wendel, Daniel; Scheintaub, Hal; Klopfer, Eric

    2017-01-01

    The recent next generation science standards in the United States have emphasized learning about complex systems as a core feature of science learning. Over the past 15 years, a number of educational tools and theories have been investigated to help students learn about complex systems; but surprisingly, little research has been devoted to…

  15. Bridging Levels of Analysis: Learning, Information Theory, and the Lexicon

    ERIC Educational Resources Information Center

    Dye, Melody

    2017-01-01

    While information theory is typically considered in the context of modern computing and engineering, its core mathematical principles provide a potentially useful lens through which to consider human language. Like the artificial communication systems such principles were invented to describe, natural languages involve a sender and receiver, a…

  16. Integrated Language Skills CALL Course Design

    ERIC Educational Resources Information Center

    Watson, Kevin; Agawa, Grant

    2013-01-01

    The importance of a structured learning framework or interrelated frameworks is the cornerstone of a solid English as a foreign language (EFL) computer-assisted language learning (CALL) curriculum. While the benefits of CALL are widely promoted in the literature, there is often an endemic discord separating theory and practice. Oftentimes the…

  17. Engaging Language Learners through Technology Integration: Theory, Applications, and Outcomes

    ERIC Educational Resources Information Center

    Li, Shuai, Ed.; Swanson, Peter, Ed.

    2014-01-01

    Web 2.0 technologies, open source software platforms, and mobile applications have transformed teaching and learning of second and foreign languages. Language teaching has transitioned from a teacher-centered approach to a student-centered approach through the use of Computer-Assisted Language Learning (CALL) and new teaching approaches.…

  18. Qualitative Research on "Mediated Dialogism" among Educators and Pupils

    ERIC Educational Resources Information Center

    Hansson, Thomas

    2004-01-01

    The relevance of qualitative research to virtual practices rests on subject knowledge and practical know-how on operations for exchange, growth, learning, and dialogue. Highlighting the discursive perspective, this paper covers theory on emerging didactics for online learning. In doing so, the contents show how computer-mediated learning…

  19. Student Satisfaction with Online Learning: Is It a Psychological Contract?

    ERIC Educational Resources Information Center

    Dziuban, Charles; Moskal, Patsy; Thompson, Jessica; Kramer, Lauren; DeCantis, Genevieve; Hermsdorfer, Andrea

    2015-01-01

    The authors explore the possible relationship between student satisfaction with online learning and the theory of psychological contracts. The study incorporates latent trait models using the image analysis procedure and computation of Anderson and Rubin factor scores, with contrasts for students who are satisfied, ambivalent, or dissatisfied with…

  20. The Novelty Exploration Bonus and Its Attentional Modulation

    ERIC Educational Resources Information Center

    Krebs, Ruth M.; Schott, Bjorn H.; Schutze, Hartmut; Duzel, Emrah

    2009-01-01

    We hypothesized that novel stimuli represent salient learning signals that can motivate "exploration" in search for potential rewards. In computational theories of reinforcement learning, this is referred to as the novelty "exploration bonus" for rewards. If true, stimulus novelty should enhance the reward anticipation signals in brain areas that…
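
    In reinforcement-learning terms, the exploration bonus described above can be modelled by letting rarely encountered stimuli behave as if they promised extra reward, with the bonus fading as the stimulus becomes familiar. The sketch below is a hypothetical minimal version of such a count-based novelty bonus, not the computational model referenced in the abstract.

      import numpy as np

      rng = np.random.default_rng(2)

      n_stimuli = 3
      true_reward = np.array([0.5, 0.5, 0.5])   # all stimuli equally rewarding on average
      q = np.zeros(n_stimuli)                   # learned values
      counts = np.zeros(n_stimuli)              # how often each stimulus has been encountered
      alpha, bonus_scale = 0.1, 0.5

      for t in range(300):
          # Novelty bonus: large for rarely encountered stimuli, fading with familiarity,
          # so a novel stimulus behaves as if it promised extra reward.
          bonus = bonus_scale / np.sqrt(counts + 1.0)
          a = int(np.argmax(q + bonus))
          r = float(rng.random() < true_reward[a])
          q[a] += alpha * (r - q[a])
          counts[a] += 1

      print("encounter counts per stimulus:", counts.astype(int))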

  1. Humour in Game-Based Learning

    ERIC Educational Resources Information Center

    Dormann, Claire; Biddle, Robert

    2006-01-01

    This paper focuses on the benefits and utilisation of humour in digital game-based learning. Through the activity theory framework, we emphasise the role of humour as a mediating tool which helps resolve contradictions within the activity system from conjoining educational objectives within the computer game. We then discuss the role of humour…

  2. Ethical Issues in Computer-Assisted Language Learning: Perceptions of Teachers and Learners

    ERIC Educational Resources Information Center

    Wang, Shudong; Heffernan, Neil

    2010-01-01

    Pedagogical theories and the applications of information technology for language learning have been widely researched in various dimensions. However, ethical issues, such as online privacy and security, and learners' personal data disclosure, are not receiving enough research attention. The perceptions and attitudes from those who participate in…

  3. Integrating Incremental Learning and Episodic Memory Models of the Hippocampal Region

    ERIC Educational Resources Information Center

    Meeter, M.; Myers, C. E.; Gluck, M. A.

    2005-01-01

    By integrating previous computational models of corticohippocampal function, the authors develop and test a unified theory of the neural substrates of familiarity, recollection, and classical conditioning. This approach integrates models from 2 traditions of hippocampal modeling, those of episodic memory and incremental learning, by drawing on an…

  4. Theoretical Foundations of Active Learning

    DTIC Science & Technology

    2009-05-01

    …on the rate of convergence of the loss of an estimator, as a function of the number of labeled examples observed [e.g., Benedek and Itai, 1988]. … G. Benedek and A. Itai. Learnability by fixed distributions. In Proc. of the First Workshop on Computational Learning Theory, pages 80…

  5. New supervised learning theory applied to cerebellar modeling for suppression of variability of saccade end points.

    PubMed

    Fujita, Masahiko

    2013-06-01

    A new supervised learning theory is proposed for a hierarchical neural network with a single hidden layer of threshold units, which can approximate any continuous transformation, and applied to a cerebellar function to suppress the end-point variability of saccades. In motor systems, feedback control can reduce noise effects if the noise is added in a pathway from a motor center to a peripheral effector; however, it cannot reduce noise effects if the noise is generated in the motor center itself: a new control scheme is necessary for such noise. The cerebellar cortex is well known as a supervised learning system, and a novel theory of cerebellar cortical function developed in this study can explain the capability of the cerebellum to feedforwardly reduce noise effects, such as end-point variability of saccades. This theory assumes that a Golgi-granule cell system can encode the strength of a mossy fiber input as the state of neuronal activity of parallel fibers. By combining these parallel fiber signals with appropriate connection weights to produce a Purkinje cell output, an arbitrary continuous input-output relationship can be obtained. By incorporating such flexible computation and learning ability in a process of saccadic gain adaptation, a new control scheme in which the cerebellar cortex feedforwardly suppresses the end-point variability when it detects a variation in saccadic commands can be devised. Computer simulation confirmed the efficiency of such learning and showed a reduction in the variability of saccadic end points, similar to results obtained from experimental data.
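
    The core computational claim above, that a single hidden layer of threshold units encoding input strength, combined through adjustable output weights, can approximate an arbitrary continuous input-output map, can be illustrated independently of the cerebellar interpretation. The sketch below is not the author's model; it is a hypothetical minimal network in which fixed, granule-like threshold units tile the input range and only the Purkinje-like readout weights are fitted.

      import numpy as np

      rng = np.random.default_rng(3)

      # Granule-like layer: fixed threshold units tiling the input range [0, 1].
      thresholds = np.linspace(0.0, 1.0, 50)

      def hidden(x):
          # Each unit fires (1) once the input strength exceeds its threshold.
          return (np.asarray(x)[:, None] >= thresholds[None, :]).astype(float)

      # An arbitrary continuous input-output relationship to be approximated.
      x_train = rng.uniform(0.0, 1.0, size=500)
      y_train = np.sin(2 * np.pi * x_train) + 0.5 * x_train

      # Purkinje-like readout: fit only the output weights (least squares here, standing
      # in for error-driven plasticity at the parallel-fibre synapses).
      H = hidden(x_train)
      w, *_ = np.linalg.lstsq(H, y_train, rcond=None)

      x_test = np.linspace(0.0, 1.0, 5)
      y_true = np.sin(2 * np.pi * x_test) + 0.5 * x_test
      print(np.round(np.c_[x_test, y_true, hidden(x_test) @ w], 2))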

  6. Evaluation of an Educational Computer Programme as a Change Agent in Science Classrooms

    NASA Astrophysics Data System (ADS)

    Muwanga-Zake, Johnnie Wycliffe Frank

    2007-12-01

    I report on benefits from 26 teacher-participant evaluators of a computer game designed to motivate learning and to ease conceptual understanding of biology in South Africa. Using a developmental, social constructivist and interpretative model, the recommendation is to include the value systems and needs of end-users (through social dialogue); curriculum issues (learning theories in the ECP and those the education authorities recommend, as well as ECP-curriculum integration); the nature of the subject the ECP presents (e.g., Nature of Science); and the compatibility of the ECP with school computers.

  7. Optical selectionist approach to optical connectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1994-03-01

    Two broad approaches to computing are known - connectionist (which includes Turing Machines but is demonstrably more powerful) and selectionist. Human computer engineers tend to prefer the connectionist approach, which includes neural networks. Nature uses both but may show an overall preference for selectionism. "Looking back into the history of biology, it appears that whenever a phenomenon resembles learning, an instructive theory was first proposed to account for the underlying mechanisms. In every case, this was later replaced by a selective theory." - N. K. Jerne, Nobelist in Immunology.

  8. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1)…
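
    The weakness of quadratic error functionals noted above, and the appeal of L1-type alternatives, is easy to demonstrate: a single contaminating point drags the mean (the minimiser of squared error) far more than the median (the minimiser of absolute error). The snippet below is only a hypothetical illustration of that sensitivity, not the piece-wise quadratic approximation scheme proposed in the paper.

      import numpy as np

      rng = np.random.default_rng(4)

      clean = rng.normal(loc=0.0, scale=1.0, size=200)
      contaminated = np.append(clean, 100.0)          # one gross outlier

      # The minimiser of the quadratic (L2) error is the mean; of the L1 error, the median.
      print("mean   clean / contaminated:", round(float(clean.mean()), 3), "/", round(float(contaminated.mean()), 3))
      print("median clean / contaminated:", round(float(np.median(clean)), 3), "/", round(float(np.median(contaminated)), 3))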

  9. Lay Theories Regarding Computer-Mediated Communication in Remote Collaboration

    ERIC Educational Resources Information Center

    Parke, Karl; Marsden, Nicola; Connolly, Cornelia

    2017-01-01

    Computer-mediated communication and remote collaboration has become an unexceptional norm as an educational modality for distance and open education, therefore the need to research and analyze students' online learning experience is necessary. This paper seeks to examine the assumptions and expectations held by students in regard to…

  10. Mastering Cognitive Development Theory in Computer Science Education

    ERIC Educational Resources Information Center

    Gluga, Richard; Kay, Judy; Lister, Raymond; Kleitman, Simon; Kleitman, Sabina

    2013-01-01

    To design an effective computer science curriculum, educators require a systematic method of classifying the difficulty level of learning activities and assessment tasks. This is important for curriculum design and implementation and for communication between educators. Different educators must be able to use the method consistently, so that…

  11. Computer-Aided Instruction.

    ERIC Educational Resources Information Center

    Hunt, Graham

    This report discusses the impact of and presents guidelines for developing a computer-aided instructional (CAI) system. The first section discusses CAI in terms of the need for the countries of Asia to increase their economic self-sufficiency. The second section examines various theories on the nature of learning with special attention to the role…

  12. Theoretical Definition of Instructor Role in Computer-Managed Instruction.

    ERIC Educational Resources Information Center

    McCombs, Barbara L.; Dobrovolny, Jacqueline L.

    This report describes the results of a theoretical analysis of the ideal role functions of the Computer Managed Instruction (CMI) instructor. Concepts relevant to instructor behavior are synthesized from both cognitive and operant learning theory perspectives, and the roles allocated to instructors by seven large-scale operational CMI systems are…

  13. Computer-Assisted Instruction to Avert Teen Pregnancy.

    ERIC Educational Resources Information Center

    Starn, Jane Ryburn; Paperny, David M.

    Teenage pregnancy has become a major public health problem in the United States. A study was conducted to assess an intervention based upon computer-assisted instruction (CAI) to avert teenage pregnancy. Social learning and decision theory were applied to mediate the adolescent environment through CAI so that adolescent development would be…

  14. SIAM Conference on Geometric Design and Computing. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2002-03-11

    The SIAM Conference on Geometric Design and Computing attracted 164 domestic and international researchers, from academia, industry, and government. It provided a stimulating forum in which to learn about the latest developments, to discuss exciting new research directions, and to forge stronger ties between theory and applications. Final Report

  15. ComPLuS Model: A New Insight in Pupils' Collaborative Talk, Actions and Balance during a Computer-Mediated Music Task

    ERIC Educational Resources Information Center

    Nikolaidou, Georgia N.

    2012-01-01

    This exploratory work describes and analyses the collaborative interactions that emerge during computer-based music composition in the primary school. The study draws on socio-cultural theories of learning, originated within Vygotsky's theoretical context, and proposes a new model, namely Computer-mediated Praxis and Logos under Synergy (ComPLuS).…

  16. Using Commercial-off-the-Shelf Computer Games to Train and Educate Complexity and Complex Decision-Making

    DTIC Science & Technology

    2008-09-01

    Jean Piaget is one of the pioneers of constructivist learning theory; Piaget states that knowledge is constructed and learning occurs through an… the mechanics of each game. For instance, if a training program is developed around the U.S. Army's America's Army computer games, then little funds…

  17. SELECTED ANNOTATED BIBLIOGRAPHY ON SYSTEMS OF THEORETICAL DEVICES,

    DTIC Science & Technology

    (*BIONICS, BIBLIOGRAPHIES), (*BIBLIOGRAPHIES, BIONICS), (*CYBERNETICS, BIBLIOGRAPHIES), MATHEMATICS, COMPUTER LOGIC, NETWORKS, NERVOUS SYSTEM, THEORY, SEQUENCE SWITCHES, SWITCHING CIRCUITS, REDUNDANT COMPONENTS, LEARNING, MATHEMATICAL MODELS, BEHAVIOR, NERVES, SIMULATION, NERVE CELLS

  18. Mirror representations innate versus determined by experience: a viewpoint from learning theory.

    PubMed

    Giese, Martin A

    2014-04-01

    From the viewpoint of pattern recognition and computational learning, mirror neurons form an interesting multimodal representation that links action perception and planning. While it seems unlikely that all details of such representations are specified by the genetic code, robust learning of such complex representations likely requires an appropriate interplay between plasticity, generalization, and anatomical constraints of the underlying neural architecture.

  19. Training Technology Handbook Development. Phase I. Annotated Literature Review.

    DTIC Science & Technology

    1981-08-01

    chief means for currently influencing the students' learning is through the sequencing of instruction. Use of the findings, models, and theories from...determine what aspects of the learning experience might influence student attitudes toward computer-assisted instruction (CAI). Sixty-four randomly...learners seem to learn most efficiently when left alone with the instructional objective and the necessary materials. The middle-aptitude trainees appear to

  20. For Whom Is a Picture Worth a Thousand Words? Extensions of a Dual-Coding Theory of Multimedia Learning.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Sims, Valerie K.

    1994-01-01

    In 2 experiments, 162 high- and low-spatial ability students viewed a computer-generated animation and heard a concurrent or successive explanation. The concurrent group generated more creative solutions to transfer problems and demonstrated a contiguity effect consistent with dual-coding theory. (SLD)

  1. Enhancing Student Explanations of Evolution: Comparing Elaborating and Competing Theory Prompts

    ERIC Educational Resources Information Center

    Donnelly, Dermot F.; Namdar, Bahadir; Vitale, Jonathan M.; Lai, Kevin; Linn, Marcia C.

    2016-01-01

    In this study, we explore how two different prompt types within an online computer-based inquiry learning environment enhance 392 7th grade students' explanations of evolution with three teachers. In the "elaborating" prompt condition, students are prompted to write explanations that support the accepted theory of evolution. In the…

  2. A Schema Theory Account of Some Cognitive Processes in Complex Learning. Technical Report No. 81.

    ERIC Educational Resources Information Center

    Munro, Allen; Rigney, Joseph W.

    Procedural semantics models have diminished the distinction between data structures and procedures in computer simulations of human intelligence. This development has theoretical consequences for models of cognition. One type of procedural semantics model, called schema theory, is presented, and a variety of cognitive processes are explained in…

  3. The Role and Design of Screen Images in Software Documentation.

    ERIC Educational Resources Information Center

    van der Meij, Hans

    2000-01-01

    Discussion of learning a new computer software program focuses on how to support the joint handling of a manual, input devices, and screen display. Describes a study that examined three design styles for manuals that included screen images to reduce split-attention problems and discusses theory versus practice and cognitive load theory.…

  4. Why Computational Models Are Better than Verbal Theories: The Case of Nonword Repetition

    ERIC Educational Resources Information Center

    Jones, Gary; Gobet, Fernand; Freudenthal, Daniel; Watson, Sarah E.; Pine, Julian M.

    2014-01-01

    Tests of nonword repetition (NWR) have often been used to examine children's phonological knowledge and word learning abilities. However, theories of NWR primarily explain performance either in terms of phonological working memory or long-term knowledge, with little consideration of how these processes interact. One theoretical account that…

  5. Artificial grammar learning meets formal language theory: an overview

    PubMed Central

    Fitch, W. Tecumseh; Friederici, Angela D.

    2012-01-01

    Formal language theory (FLT), part of the broader mathematical theory of computation, provides a systematic terminology and set of conventions for describing rules and the structures they generate, along with a rich body of discoveries and theorems concerning generative rule systems. Despite its name, FLT is not limited to human language, but is equally applicable to computer programs, music, visual patterns, animal vocalizations, RNA structure and even dance. In the last decade, this theory has been profitably used to frame hypotheses and to design brain imaging and animal-learning experiments, mostly using the ‘artificial grammar-learning’ paradigm. We offer a brief, non-technical introduction to FLT and then a more detailed analysis of empirical research based on this theory. We suggest that progress has been hampered by a pervasive conflation of distinct issues, including hierarchy, dependency, complexity and recursion. We offer clarifications of several relevant hypotheses and the experimental designs necessary to test them. We finally review the recent brain imaging literature, using formal languages, identifying areas of convergence and outstanding debates. We conclude that FLT has much to offer scientists who are interested in rigorous empirical investigations of human cognition from a neuroscientific and comparative perspective. PMID:22688631

  6. Pragmatically Framed Cross-Situational Noun Learning Using Computational Reinforcement Models

    PubMed Central

    Najnin, Shamima; Banerjee, Bonny

    2018-01-01

    Cross-situational learning and social pragmatic theories are prominent mechanisms for learning word meanings (i.e., word-object pairs). In this paper, the role of reinforcement is investigated for early word-learning by an artificial agent. When exposed to a group of speakers, the agent comes to understand an initial set of vocabulary items belonging to the language used by the group. Both cross-situational learning and social pragmatic theory are taken into account. As social cues, joint attention and prosodic cues in caregiver's speech are considered. During agent-caregiver interaction, the agent selects a word from the caregiver's utterance and learns the relations between that word and the objects in its visual environment. The “novel words to novel objects” language-specific constraint is assumed for computing rewards. The models are learned by maximizing the expected reward using reinforcement learning algorithms [i.e., table-based algorithms: Q-learning, SARSA, SARSA-λ, and neural network-based algorithms: Q-learning for neural network (Q-NN), neural-fitted Q-network (NFQ), and deep Q-network (DQN)]. Neural network-based reinforcement learning models are chosen over table-based models for better generalization and quicker convergence. Simulations are carried out using mother-infant interaction CHILDES dataset for learning word-object pairings. Reinforcement is modeled in two cross-situational learning cases: (1) with joint attention (Attentional models), and (2) with joint attention and prosodic cues (Attentional-prosodic models). Attentional-prosodic models manifest superior performance to Attentional ones for the task of word-learning. The Attentional-prosodic DQN outperforms existing word-learning models for the same task. PMID:29441027
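
    The record above names standard algorithms (Q-learning, SARSA, DQN) applied to word-object pairing. As a concrete illustration only, the sketch below shows a tabular Q-learning loop with a toy reward implementing a "novel words to novel objects"-style constraint; the environment, reward function, and all identifiers are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch (not the authors' code): tabular Q-learning for word-object pairing.
    # The reward function and environment are illustrative assumptions only.
    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

    Q = defaultdict(float)                   # Q[(word, obj)] -> estimated value

    def reward(word, obj, known_pairs):
        """Toy 'novel words to novel objects' constraint: reward reinforcing a known
        pair, or pairing a new word with an object that has no known label."""
        if (word, obj) in known_pairs:
            return 1.0
        word_known = any(w == word for w, _ in known_pairs)
        obj_known = any(o == obj for _, o in known_pairs)
        return 1.0 if (not word_known and not obj_known) else 0.0

    def choose_object(word, objects):
        """Epsilon-greedy selection of an object for the heard word."""
        if random.random() < EPSILON:
            return random.choice(objects)
        return max(objects, key=lambda o: Q[(word, o)])

    def learn(interactions, known_pairs=frozenset()):
        """interactions: iterable of (word, visible_objects) caregiver episodes."""
        for word, objects in interactions:
            obj = choose_object(word, objects)
            r = reward(word, obj, known_pairs)
            best_next = max(Q[(word, o)] for o in objects)       # greedy bootstrap
            Q[(word, obj)] += ALPHA * (r + GAMMA * best_next - Q[(word, obj)])
        return Q

    # Example: the agent repeatedly hears "ball" while a ball and a cup are visible.
    learn([("ball", ["ball_obj", "cup_obj"])] * 50, known_pairs={("cup", "cup_obj")})
    print(max(["ball_obj", "cup_obj"], key=lambda o: Q[("ball", o)]))
    ```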

  7. Aspects of a Theory of Simplification, Debugging, and Coaching.

    ERIC Educational Resources Information Center

    Fischer, Gerhard; And Others

    This paper analyses new methods of teaching skiing in terms of a computational paradigm for learning called increasingly complex microworlds (ICM). Examining the factors that underlie the dramatic enhancement of the learning of skiing led to the focus on the processes of simplification, debugging, and coaching. These three processes are studied in…

  8. Computer-Supported Inquiry Learning: Effects of Training and Practice

    ERIC Educational Resources Information Center

    Beishuizen, Jos; Wilhelm, Pascal; Schimmel, Marieke

    2004-01-01

    Inquiry learning requires the ability to understand that theory and evidence have to be distinguished and co-ordinated. Moreover, learners have to be able to control two or more independent variables when formulating hypotheses, designing experiments and interpreting outcomes. Can sixth-grade (9-10 years) children be trained to acquire these…

  9. Some Technical Implications of Distributed Cognition on the Design of Interactive Learning Environments.

    ERIC Educational Resources Information Center

    Dillenbourg, Pierre

    1996-01-01

    Maintains that diagnosis, explanation, and tutoring, the functions of an interactive learning environment, are collaborative processes. Examines how human-computer interaction can be improved using a distributed cognition framework. Discusses situational and distributed knowledge theories and provides a model on how they can be used to redesign…

  10. The Brain as an Efficient and Robust Adaptive Learner.

    PubMed

    Denève, Sophie; Alemi, Alireza; Bourdoukan, Ralph

    2017-06-07

    Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks, e.g., the contribution of each connection to the global output error cannot be determined based only on quantities locally accessible to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules as long as they associate two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet, this variability in single neurons may hide an extremely efficient and robust computation at the population level. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Researching Computer-Based Collaborative Learning in Inclusive Classrooms in Cyprus: The Role of the Computer in Pupils' Interaction

    ERIC Educational Resources Information Center

    Mavrou, Katerina; Lewis, Ann; Douglas, Graeme

    2010-01-01

    This paper discusses the results of a study of the role of the computer in scaffolding pupils' interaction and its effects on the disabled (D) pupils' participation and inclusion in the context of socio-cultural theories and the ideals of inclusive education. The study investigated the interactions of pairs of D and non-disabled (ND) pupils…

  12. Collaborative Dialogue in Synchronous Computer-Mediated Communication and Face-to-Face Communication

    ERIC Educational Resources Information Center

    Zeng, Gang

    2017-01-01

    Previous research has documented that collaborative dialogue promotes L2 learning in both face-to-face (F2F) and synchronous computer-mediated communication (SCMC) modalities. However, relatively little research has explored modality effects on collaborative dialogue. Thus, motivated by sociocultural theory, this study examines how F2F compares…

  13. Information Prosthetics for the Handicapped. Artificial Intelligence Memo No. 496.

    ERIC Educational Resources Information Center

    Papert, Seymour A.; Weir, Sylvia

    The proposal outlines a study to assess the role of computers in assessing and instructing students with severe cerebral palsy in spatial and communication skills. The computer's capacity to make learning interesting and challenging to the severely disabled student is noted, along with its use as a diagnostic tool. Implications for theories on…

  14. Analysis of Computer Algebra System Tutorials Using Cognitive Load Theory

    ERIC Educational Resources Information Center

    May, Patricia

    2004-01-01

    Most research in the area of Computer Algebra Systems (CAS) has been designed to compare the effectiveness of instructional technology to traditional lecture-based formats. While results are promising, research also indicates evidence of the steep learning curve imposed by the technology. Yet no studies have been conducted to investigate this…

  15. Collaborative Research Goes to School: Guided Inquiry with Computers in Classrooms. Technical Report.

    ERIC Educational Resources Information Center

    Wiske, Martha Stone; And Others

    Twin aims--to advance theory and to improve practice in science, mathematics, and computing education--guided the Educational Technology Center's (ETC) research from its inception in 1983. These aims led ETC to establish collaborative research groups in which people whose primary interest was classroom teaching and learning, and researchers…

  16. Investigating the Effectiveness of Computer Simulations for Chemistry Learning

    ERIC Educational Resources Information Center

    Plass, Jan L.; Milne, Catherine; Homer, Bruce D.; Schwartz, Ruth N.; Hayward, Elizabeth O.; Jordan, Trace; Verkuilen, Jay; Ng, Florrie; Wang, Yan; Barrientos, Juan

    2012-01-01

    Are well-designed computer simulations an effective tool to support student understanding of complex concepts in chemistry when integrated into high school science classrooms? We investigated scaling up the use of a sequence of simulations of kinetic molecular theory and associated topics of diffusion, gas laws, and phase change, which we designed…

  17. Formalizing Neurath's ship: Approximate algorithms for online causal learning.

    PubMed

    Bramley, Neil R; Dayan, Peter; Griffiths, Thomas L; Lagnado, David A

    2017-04-01

    Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath's ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of 3 experiments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
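
    The abstract describes learners who carry a single global causal hypothesis and revise it through local, stochastic edits as data arrive. The sketch below is a minimal illustration of that general idea, not the authors' algorithm: a graph is represented as a set of directed edges, one random local edit is proposed per observation, and the edit is kept stochastically according to how much it improves an assumed, toy likelihood score.

    ```python
    # Minimal sketch of a "single global hypothesis, local edits" learner in the
    # spirit of the Neurath's-ship account; the scoring function is an assumption.
    import math
    import random
    import itertools

    def local_proposals(graph, variables):
        """Yield graphs that differ from `graph` (a set of directed edges) by one edge."""
        for a, b in itertools.permutations(variables, 2):
            edge = (a, b)
            new = set(graph)
            if edge in new:
                new.discard(edge)          # remove an existing edge
            else:
                new.add(edge)              # add a new edge
            yield frozenset(new)

    def update_hypothesis(graph, variables, loglik, data, temperature=1.0):
        """One learning step: consider a single random local edit and keep it
        stochastically, weighing improvement in fit against inertia."""
        proposal = random.choice(list(local_proposals(graph, variables)))
        delta = loglik(proposal, data) - loglik(graph, data)
        accept_prob = min(1.0, math.exp(delta / temperature))
        return proposal if random.random() < accept_prob else graph

    # Toy scorer (assumption): prefer graphs whose edges link co-activated variables.
    def toy_loglik(graph, data):
        score = 0.0
        for trial in data:                      # trial: dict var -> 0/1
            for a, b in graph:
                score += 0.5 if trial[a] == trial[b] else -0.5
        return score

    variables = ["A", "B", "C"]
    data = [{"A": 1, "B": 1, "C": 0}, {"A": 0, "B": 0, "C": 1}] * 20
    hypothesis = frozenset()
    for trial in data:                          # sequential: one local edit per datum
        hypothesis = update_hypothesis(hypothesis, variables, toy_loglik, [trial])
    print(sorted(hypothesis))
    ```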

  18. The neuroscience of learning: beyond the Hebbian synapse.

    PubMed

    Gallistel, C R; Matzel, Louis D

    2013-01-01

    From the traditional perspective of associative learning theory, the hypothesis linking modifications of synaptic transmission to learning and memory is plausible. It is less so from an information-processing perspective, in which learning is mediated by computations that make implicit commitments to physical and mathematical principles governing the domains where domain-specific cognitive mechanisms operate. We compare the properties of associative learning and memory to the properties of long-term potentiation, concluding that the properties of the latter do not explain the fundamental properties of the former. We briefly review the neuroscience of reinforcement learning, emphasizing the representational implications of the neuroscientific findings. We then review more extensively findings that confirm the existence of complex computations in three information-processing domains: probabilistic inference, the representation of uncertainty, and the representation of space. We argue for a change in the conceptual framework within which neuroscientists approach the study of learning mechanisms in the brain.

  19. The research of computer multimedia assistant in college English listening

    NASA Astrophysics Data System (ADS)

    Zhang, Qian

    2012-04-01

    With the development of networked information technology, education faces increasingly serious challenges. The application of computer multimedia breaks with traditional foreign language teaching and brings new challenges and opportunities. Through multimedia, the teaching process combines animation, images, voice, and text, which can improve learners' initiative and substantially raise learning efficiency. Traditional foreign language teaching relies on text-based study; with that method, theoretical performance is good but practical application is weak. Even after long experience with computer multimedia in foreign language teaching, many teachers remain prejudiced against it, so the approach has not achieved its intended effect. For these reasons, this research is significant for improving the quality of foreign language teaching.

  20. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
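
    As a rough illustration of how the described properties might be combined, the sketch below multiplies a feature-driven map predicted by support vector regression with a central position prior, with elementwise multiplication standing in for the paper's "intersection" of Feature-Prior and Position-Prior properties. The features, training data, and map resolution are placeholder assumptions rather than the authors' pipeline.

    ```python
    # Minimal sketch (assumptions throughout): combine a feature-driven map learned
    # with support vector regression and a central position prior by elementwise
    # multiplication.
    import numpy as np
    from sklearn.svm import SVR

    H, W = 32, 48                                        # toy map resolution

    def position_prior(h=H, w=W, sigma=0.25):
        """Centered 2-D Gaussian over image coordinates (a common position prior)."""
        ys, xs = np.mgrid[0:h, 0:w]
        yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
        d2 = ((ys - yc) / (sigma * h)) ** 2 + ((xs - xc) / (sigma * w)) ** 2
        return np.exp(-0.5 * d2)

    # Toy training data: per-pixel low-level features -> fixation density (random here).
    rng = np.random.default_rng(0)
    train_features = rng.random((500, 6))                # 6 hypothetical low-level features
    train_saliency = rng.random(500)
    svr = SVR(kernel="rbf", C=1.0).fit(train_features, train_saliency)

    def saliency_map(pixel_features):
        """pixel_features: (H*W, 6) array of per-pixel feature vectors."""
        feature_map = svr.predict(pixel_features).reshape(H, W)
        feature_map = np.clip(feature_map, 0, None)
        combined = feature_map * position_prior()        # combine the two properties
        return combined / (combined.max() + 1e-12)       # normalize to [0, 1]

    print(saliency_map(rng.random((H * W, 6))).shape)    # -> (32, 48)
    ```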

  1. A Comparison of Parallelism in Interface Designs for Computer-Based Learning Environments

    ERIC Educational Resources Information Center

    Min, Rik; Yu, Tao; Spenkelink, Gerd; Vos, Hans

    2004-01-01

    In this paper we discuss an experiment that was carried out with a prototype, designed in conformity with the concept of parallelism and the Parallel Instruction theory (the PI theory). We designed this prototype with five different interfaces, and ran an empirical study in which 18 participants completed an abstract task. The five basic designs…

  2. Constraint-Based Modeling: From Cognitive Theory to Computer Tutoring--and Back Again

    ERIC Educational Resources Information Center

    Ohlsson, Stellan

    2016-01-01

    The ideas behind the constraint-based modeling (CBM) approach to the design of intelligent tutoring systems (ITSs) grew out of attempts in the 1980's to clarify how declarative and procedural knowledge interact during skill acquisition. The learning theory that underpins CBM was based on two conceptual innovations. The first innovation was to…

  3. Distance Learning Success--A Perspective from Socio-Technical Systems Theory

    ERIC Educational Resources Information Center

    Wang, Jianfeng; Solan, David; Ghods, Abe

    2010-01-01

    With widespread adoption of computer-based distance education as a mission-critical component of the institution's educational program, the need for evaluation has emerged. In this research, we aim to expand on the systems approach by offering a model for evaluation based on socio-technical systems theory addressing a stated need in the literature…

  4. Using "You've Got Mail" to Teach Social Information Processing Theory and Hyperpersonal Perspective in Online Interactions

    ERIC Educational Resources Information Center

    Heinemann, Daria S.

    2011-01-01

    With the expansion of online interactions and exponential growth of Computer Mediated Communication (CMC), attention is brought to those theories in communication that address the implications of relationships developed within these contexts. In communication courses students learn about both face-to-face (FtF) and CMC relationships and have the…

  5. Monitoring the Learner--Who, Why and What For?

    ERIC Educational Resources Information Center

    Bertin, Jean-Claude; Narcy-Combes, Jean-Paul

    2007-01-01

    This paper is the result of a need to develop a conceptual framework for monitoring the learner in a computer-mediated language learning environment. In agreement with Chapelle, who suggests that technological capacities must be questioned in the terms of SLA theory, the position held here is that theory is needed even in the case of practical…

  6. Students Perception towards the Implementation of Computer Graphics Technology in Class via Unified Theory of Acceptance and Use of Technology (UTAUT) Model

    NASA Astrophysics Data System (ADS)

    Binti Shamsuddin, Norsila

    Technological advancement and development in a higher learning institution give students a chance to be motivated to learn information technology areas in depth. Students should seize the opportunity to build their skills in these technologies in preparation for graduation. The curriculum itself can raise students' interest and encourage them to be directly involved in the evolution of the technology. The aim of this study is to see how deep students' involvement is, as well as their acceptance of the adoption of the technology used in Computer Graphics and Image Processing subjects. The study focuses on Bachelor's students in the Faculty of Industrial Information Technology (FIIT), Universiti Industri Selangor (UNISEL): Bac. in Multimedia Industry, BSc. Computer Science, and BSc. Computer Science (Software Engineering). This study utilizes the new Unified Theory of Acceptance and Use of Technology (UTAUT) to further validate the model and enhance our understanding of the adoption of Computer Graphics and Image Processing technologies. Four (4) out of eight (8) independent factors in UTAUT will be studied against the dependent factor.

  7. Modeling Reality - How Computers Mirror Life

    NASA Astrophysics Data System (ADS)

    Bialynicki-Birula, Iwo; Bialynicka-Birula, Iwona

    2005-01-01

    The book Modeling Reality covers a wide range of fascinating subjects, accessible to anyone who wants to learn about the use of computer modeling to solve a diverse range of problems, but who does not possess specialized training in mathematics or computer science. The material presented is pitched at the level of high-school graduates, even though it covers some advanced topics (cellular automata, Shannon's measure of information, deterministic chaos, fractals, game theory, neural networks, genetic algorithms, and Turing machines). These advanced topics are explained in terms of well-known simple concepts: Cellular automata - Game of Life, Shannon's formula - Game of twenty questions, Game theory - Television quiz, etc. The book is unique in explaining in a straightforward, yet complete, fashion many important ideas related to various models of reality and their applications. Twenty-five programs, written especially for this book, are provided on an accompanying CD. They greatly enhance its pedagogical value and make learning even the more complex topics an enjoyable experience.

  8. Geometry of the perceptual space

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Palmer, Stephen; Eghbalnia, Hamid; Carew, John

    1999-09-01

    The concept of space and geometry varies across subjects. Following Poincare, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual geometrical properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in the well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception, as reported by a human observer, has a subjective factor that could be analytically quantified only through statistical reasoning and in the course of repetitive experiments. Thus, the desire to experimentally verify the statements in perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space and the case of perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present the model of the perceptual geometry of surfaces through an amplification of the theory of Riemannian foliations in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of the knowledge of the observer and Hebbian learning. We use a Parallel Distributed Connectionist paradigm for computational modeling and experimental verification of our theory.

  9. Learning Science in a Virtual Reality Application: The Impacts of Animated-Virtual Actors' Visual Complexity

    ERIC Educational Resources Information Center

    Kartiko, Iwan; Kavakli, Manolya; Cheng, Ken

    2010-01-01

    As the technology in computer graphics advances, Animated-Virtual Actors (AVAs) in Virtual Reality (VR) applications become increasingly rich and complex. Cognitive Theory of Multimedia Learning (CTML) suggests that complex visual materials could hinder novice learners from attending to the lesson properly. On the other hand, previous studies have…

  10. The Role of Theory and Technology in Learning Video Production: The Challenge of Change

    ERIC Educational Resources Information Center

    Shewbridge, William; Berge, Zane L.

    2004-01-01

    The video production field has evolved beyond being exclusively relevant to broadcast television. The convergence of low-cost consumer cameras and desktop computer editing has led to new applications of video in a wide range of areas, including the classroom. This presents educators with an opportunity to rethink how students learn video…

  11. Measuring Cognitive Load in Test Items: Static Graphics versus Animated Graphics

    ERIC Educational Resources Information Center

    Dindar, M.; Kabakçi Yurdakul, I.; Inan Dönmez, F.

    2015-01-01

    The majority of multimedia learning studies focus on the use of graphics in learning process but very few of them examine the role of graphics in testing students' knowledge. This study investigates the use of static graphics versus animated graphics in a computer-based English achievement test from a cognitive load theory perspective. Three…

  12. Memorization Effects of Pronunciation and Stroke Order Animation in Digital Flashcards

    ERIC Educational Resources Information Center

    Zhu, Yu; Fung, Andy S. L.; Wang, Hongyan

    2012-01-01

    Digital flashcards are one of the most popular self-study computer-assisted vocabulary learning tools for beginners of Chinese as a foreign language. However, studies on the effects of this widely used learning tool are scarce. Introducing a new concept--referential stimulus--into the Dual Coding Theory (DCT) framework, this study acknowledges the…

  13. Case-Based Planning: An Integrated Theory of Planning, Learning and Memory

    DTIC Science & Technology

    1986-10-01

    Keywords: planning, case-based reasoning, learning, artificial intelligence. Referenced works include: a Computational Model of Analogical Problem Solving, Proceedings of the Seventh International Joint Conference on Artificial Intelligence; and Understanding and Generalizing Plans, Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI), Karlsruhe, Germany.

  14. Virtual Learning. A Revolutionary Approach to Building a Highly Skilled Workforce.

    ERIC Educational Resources Information Center

    Schank, Roger

    This book offers trainers and human resource managers an alternative approach to train people more effectively and capitalize on multimedia-based tools. The approach is based on computer-based training and virtual learning theory. Chapter 1 discusses how to remedy problems caused by bad training. Chapter 2 focuses on simulating work and creating…

  15. New Theoretical Frameworks for Machine Learning

    DTIC Science & Technology

    2008-09-15

    New York, 1974. [52] G.M. Benedek and A. Itai. Learnability by fixed distributions. In Proc. 1st Workshop Computat. Learning Theory, pages 80–90, 1988. [53] G.M. Benedek and A. Itai. Learnability with respect to a fixed distribution. Theoretical Computer Science, 86:377–389, 1991.

  16. Cross-Cultural Competence in the Department of Defense: An Annotated Bibliography

    DTIC Science & Technology

    2014-04-01

    computer toward the best possible strategy. The article outlines, in detail, how the game is played in theory as well as how it was played in this...validation of the CQS: The cultural intelligence scale. In S. Ang & L. Van Dyne (Eds.), Handbook of cultural intelligence: Theory, measurement, and...weaknesses of various approaches, general learning theory, and the utility of employing civilian-style education to prepare Soldiers to interact in

  17. Reinforcement Learning and Episodic Memory in Humans and Animals: An Integrative Framework.

    PubMed

    Gershman, Samuel J; Daw, Nathaniel D

    2017-01-03

    We review the psychology and neuroscience of reinforcement learning (RL), which has experienced significant progress in the past two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, one challenge in the study of RL is computational: The simplicity of these tasks ignores important aspects of reinforcement learning in the real world: (a) State spaces are high-dimensional, continuous, and partially observable; this implies that (b) data are relatively sparse and, indeed, precisely the same situation may never be encountered twice; furthermore, (c) rewards depend on the long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, theories of RL have largely involved procedural and semantic memory, the way in which knowledge about action values or world models extracted gradually from many experiences can drive choice. This focus on semantic memory leaves out many aspects of memory, such as episodic memory, related to the traces of individual events. We suggest that these two challenges are related. The computational challenge can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (a) efficiently approximate value functions over complex state spaces, (b) learn with very little data, and (c) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system.
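
    One way to make the proposal concrete is an episodic-control-style value estimate: store the discounted return of individual experiences and evaluate new states by similarity-weighted averaging over those traces. The sketch below illustrates only this general idea under an assumed state representation and kernel; it is not the authors' model.

    ```python
    # Minimal sketch of the general idea (not the authors' model): an episodic
    # memory of (state, return) traces used to approximate values by similarity-
    # weighted averaging over stored episodes, as in episodic-control approaches.
    import numpy as np

    class EpisodicValueMemory:
        def __init__(self, bandwidth=1.0):
            self.states, self.returns = [], []
            self.bandwidth = bandwidth

        def store(self, state, discounted_return):
            """Record the trace of a single experienced episode step."""
            self.states.append(np.asarray(state, dtype=float))
            self.returns.append(float(discounted_return))

        def value(self, state):
            """Estimate V(state) from similar past episodes (Gaussian kernel)."""
            if not self.states:
                return 0.0
            s = np.asarray(state, dtype=float)
            d2 = np.array([np.sum((s - m) ** 2) for m in self.states])
            w = np.exp(-d2 / (2 * self.bandwidth ** 2))
            return float(np.dot(w, self.returns) / (w.sum() + 1e-12))

    memory = EpisodicValueMemory(bandwidth=0.5)
    memory.store([0.0, 0.0], 1.0)       # a rewarding episode near the origin
    memory.store([3.0, 3.0], -1.0)      # a punishing episode far away
    print(memory.value([0.1, -0.1]))    # close to +1: generalizes from one episode
    ```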

  18. Developing a new experimental system for an undergraduate laboratory exercise to teach theories of visuomotor learning.

    PubMed

    Kasuga, Shoko; Ushiba, Junichi

    2014-01-01

    Humans have a flexible motor ability to adapt their movements to changes in the internal/external environment. For example, using arm-reaching tasks, a number of studies experimentally showed that participants adapt to a novel visuomotor environment. These results helped develop computational models of motor learning implemented in the central nervous system. Despite the importance of such experimental paradigms for exploring the mechanisms of motor learning, because of the cost and preparation time, most students are unable to participate in such experiments. Therefore, in the current study, to help students better understand motor learning theories, we developed a simple finger-reaching experimental system using commonly used laptop PC components with an open-source programming language (Processing Motor Learning Toolkit: PMLT). We found that compared to a commercially available robotic arm-reaching device, our PMLT accomplished similar learning goals (difference in the error reduction between the devices, P = 0.10). In addition, consistent with previous reports from visuomotor learning studies, the participants showed after-effects indicating an adaptation of the motor learning system. The results suggest that PMLT can serve as a new experimental system for an undergraduate laboratory exercise of motor learning theories with minimal time and cost for instructors.
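
    Experiments like these are typically used to teach simple error-driven adaptation models. The sketch below simulates the classic single-rate state-space model of trial-by-trial visuomotor adaptation, x[t+1] = A*x[t] + B*e[t] with e[t] = rotation - x[t]; it is an illustrative companion to the record, not part of PMLT, and the parameter values are assumptions.

    ```python
    # Illustrative single-rate state-space model of visuomotor adaptation (assumed
    # parameters; not code from the PMLT toolkit described above).
    import numpy as np

    def simulate_adaptation(rotation_deg, n_trials, A=0.99, B=0.2, x0=0.0):
        """The internal estimate x is retained (A) and nudged by a fraction (B)
        of each trial's visual error e = rotation - x."""
        x = x0
        estimates, errors = [], []
        for _ in range(n_trials):
            error = rotation_deg - x
            x = A * x + B * error
            estimates.append(x)
            errors.append(error)
        return np.array(estimates), np.array(errors)

    # Adapt to a 30-degree visuomotor rotation, then wash out from the adapted state.
    est, err = simulate_adaptation(30.0, 80)
    wash_est, wash_err = simulate_adaptation(0.0, 40, x0=est[-1])
    # Error shrinks during adaptation; washout begins with a negative after-effect.
    print(round(err[0], 1), round(err[-1], 1), round(wash_err[0], 1))
    ```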

  19. Machine Learning to Discover and Optimize Materials

    NASA Astrophysics Data System (ADS)

    Rosenbrock, Conrad Waldhar

    For centuries, scientists have dreamed of creating materials by design. Rather than discovery by accident, bespoke materials could be tailored to fulfill specific technological needs. Quantum theory and computational methods are essentially equal to the task, and computational power is the new bottleneck. Machine learning has the potential to solve that problem by approximating material behavior at multiple length scales. A full end-to-end solution must allow us to approximate the quantum mechanics, microstructure and engineering tasks well enough to be predictive in the real world. In this dissertation, I present algorithms and methodology to address some of these problems at various length scales. In the realm of enumeration, systems with many degrees of freedom such as high-entropy alloys may contain prohibitively many unique possibilities so that enumerating all of them would exhaust available compute memory. One possible way to address this problem is to know in advance how many possibilities there are so that the user can reduce their search space by restricting the occupation of certain lattice sites. Although tools to calculate this number were available, none performed well for very large systems and none could easily be integrated into low-level languages for use in existing scientific codes. I present an algorithm to solve these problems. Testing the robustness of machine-learned models is an essential component in any materials discovery or optimization application. While it is customary to perform a small number of system-specific tests to validate an approach, this may be insufficient in many cases. In particular, for Cluster Expansion models, the expansion may not converge quickly enough to be useful and reliable. Although the method has been used for decades, a rigorous investigation across many systems to determine when CE "breaks" was still lacking. This dissertation includes this investigation along with heuristics that use only a small training database to predict whether a model is worth pursuing in detail. To be useful, computational materials discovery must lead to experimental validation. However, experiments are difficult due to sample purity, environmental effects and a host of other considerations. In many cases, it is difficult to connect theory to experiment because computation is deterministic. By combining advanced group theory with machine learning, we created a new tool that bridges the gap between experiment and theory so that experimental and computed phase diagrams can be harmonized. Grain boundaries in real materials control many important material properties such as corrosion, thermal conductivity, and creep. Because of their high dimensionality, learning the underlying physics to optimize grain boundaries is extremely complex. By leveraging a mathematically rigorous representation for local atomic environments, machine learning becomes a powerful tool to approximate properties for grain boundaries. But it also goes beyond predicting properties by highlighting those atomic environments that are most important for influencing the boundary properties. This provides an immense dimensionality reduction that empowers grain boundary scientists to know where to look for deeper physical insights.

  20. Introducing Seismic Tomography with Computational Modeling

    NASA Astrophysics Data System (ADS)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which enable students to grasp the fundamentals of seismic tomography easily. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on learning seismological concepts and processes, taking advantage of basic scientific computation methods and tools.

  1. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition

    PubMed Central

    Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert

    2015-01-01

    During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370

  2. Deep learning for computational chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav

    The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  3. Integrating Mathematical Modeling for Undergraduate Pre-Service Science Education Learning and Instruction in Middle School Classrooms

    ERIC Educational Resources Information Center

    Carrejo, David; Robertson, William H.

    2011-01-01

    Computer-based mathematical modeling in physics is a process of constructing models of concepts and the relationships between them in the scientific characteristics of work. In this manner, computer-based modeling integrates the interactions of natural phenomenon through the use of models, which provide structure for theories and a base for…

  4. Relativity in a Rock Field: A Study of Physics Learning with a Computer Game

    ERIC Educational Resources Information Center

    Carr, David; Bossomaier, Terry

    2011-01-01

    The "Theory of Special Relativity" is widely regarded as a difficult topic for learners in physics to grasp, as it reformulates fundamental conceptions of space, time and motion, and predominantly deals with situations outside of everyday experience. In this paper, we describe embedding the physics of relativity into a computer game, and…

  5. Lost in Second Life: Virtual Embodiment and Language Learning via Multimodal Communication

    ERIC Educational Resources Information Center

    Pasfield-Neofitou, Sarah; Huang, Hui; Grant, Scott

    2015-01-01

    Increased recognition of the role of the body and environment in cognition has taken place in recent decades in the form of new theories of embodied and extended cognition. The growing use of ever more sophisticated computer-generated 3D virtual worlds and avatars has added a new dimension to these theories of cognition. Both developments provide…

  6. The computational nature of memory modification.

    PubMed

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-03-15

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature.
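
    A heavily simplified sketch of the latent-cause idea follows: observations are assigned to the nearest existing cluster (latent cause) unless they are too dissimilar, in which case a new cause is created, and associative strength is updated by a prediction-error rule only within the inferred cause. Thresholded clustering stands in for the paper's probabilistic structure-learning mechanism, and all parameters are assumptions.

    ```python
    # Highly simplified sketch of the latent-cause idea (not the authors' full
    # probabilistic model): assign each observation to an existing "latent cause"
    # if it is similar enough, otherwise create a new cause; associative strength
    # is then updated only within the inferred cause.
    import numpy as np

    ALPHA = 0.3            # associative learning rate (Rescorla-Wagner style)
    NEW_CAUSE_DIST = 2.0   # similarity threshold for positing a new latent cause

    causes = []            # each cause: {"center": feature vector, "V": assoc. strength}

    def infer_cause(features):
        """Return the index of the closest cause, creating a new one if needed."""
        features = np.asarray(features, dtype=float)
        if causes:
            dists = [np.linalg.norm(features - c["center"]) for c in causes]
            best = int(np.argmin(dists))
            if dists[best] < NEW_CAUSE_DIST:
                return best
        causes.append({"center": features, "V": 0.0})
        return len(causes) - 1

    def trial(features, outcome):
        """One conditioning trial: infer the latent cause, then update its memory."""
        k = infer_cause(features)
        causes[k]["V"] += ALPHA * (outcome - causes[k]["V"])   # prediction-error update
        return k, causes[k]["V"]

    # Acquisition in one context; a very different context then starts a new memory.
    for _ in range(10):
        trial([0.0, 0.0], outcome=1.0)      # tone + shock, context A
    print(trial([5.0, 5.0], outcome=0.0))   # novel context -> new cause, old memory intact
    ```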

  7. A multimedia adult literacy program: Combining NASA technology, instructional design theory, and authentic literacy concepts

    NASA Technical Reports Server (NTRS)

    Willis, Jerry W.

    1993-01-01

    For a number of years, the Software Technology Branch of the Information Systems Directorate has been involved in the application of cutting edge hardware and software technologies to instructional tasks related to NASA projects. The branch has developed intelligent computer aided training shells, instructional applications of virtual reality and multimedia, and computer-based instructional packages that use fuzzy logic for both instructional and diagnostic decision making. One outcome of the work on space-related technology-supported instruction has been the creation of a significant pool of human talent in the branch with current expertise on the cutting edges of instructional technologies. When the human talent is combined with advanced technologies for graphics, sound, video, CD-ROM, and high speed computing, the result is a powerful research and development group that both contributes to the applied foundations of instructional technology and creates effective instructional packages that take advantage of a range of advanced technologies. Several branch projects are currently underway that apply NASA-developed expertise to significant instructional problems in public education. The branch, for example, has developed intelligent computer aided software to help high school students learn physics, and staff are currently working on a project to produce educational software for young children with language deficits. This report deals with another project, the adult literacy tutor. Unfortunately, while there are a number of computer-based instructional packages available for adult literacy instruction, most of them are based on the same instructional models that failed these students when they were in school. The teacher-centered, discrete skill-and-drill-oriented instructional strategies that form the foundation for most of the computer-based literacy packages currently on the market may not be the most effective or most desirable way to use computer technology in literacy programs, even when they are supported by color computer graphics and animation. This project is developing a series of instructional packages that are based on a different instructional model - authentic instruction. The instructional development model used to create these packages is also different. Instead of using the traditional five stage linear, sequential model based on behavioral learning theory, the project uses the recursive, reflective design and development model (R2D2) that is based on cognitive learning theory, particularly the social constructivism of Vygotsky, and an epistemology based on critical theory. Using alternative instructional and instructional development theories, the result of the summer faculty fellowship is LiteraCity, a multimedia adult literacy instructional package that is a simulation of finding and applying for a job. The program, which is about 120 megabytes, is distributed on CD-ROM.

  8. Integration of Basic Skills into Vocational Education: Expert Systems in Electronics Technology. Vocational Education Research.

    ERIC Educational Resources Information Center

    University of Southwestern Louisiana, Lafayette.

    A student who plans to enter the field of technology education must be especially motivated to incorporate computer technology into the theories of learning. Evaluation prior to the learning process establishes a frame of reference for students. After preparing students with the basic concepts of resistors and the mental tools, the expert system…

  9. Courseware Development Model (CDM): The Effects of CDM on Primary School Pre-Service Teachers' Achievements and Attitudes

    ERIC Educational Resources Information Center

    Efendioglu, Akin

    2012-01-01

    The main purpose of this study is to design a "Courseware Development Model" (CDM) and investigate its effects on pre-service teachers' academic achievements in the field of geography and attitudes toward computer-based education (ATCBE). The CDM consisted of three components: content (C), learning theory, namely, meaningful learning (ML), and…

  10. Pedagogical Praxis: The Professions as Models for Learning in the Age of the Smart Machine. WCER Working Paper No. 2003-6

    ERIC Educational Resources Information Center

    Shaffer, David W.

    2003-01-01

    Successful curricula are not collections of isolated elements; rather, effective learning environments function as coherent systems (Brown & Campione, 1996; see also Papert, 1980; Shaffer, 1998). The theory of pedagogical praxis begins with the premise that computers and other information technologies make it easier for students to become active…

  11. Mobile Learning in Secondary Education: Perceptions and Acceptance of Tablets of Teachers and Pupils

    ERIC Educational Resources Information Center

    Montrieux, Hannelore; Courtois, Cédric; De Grove, Frederik; Raes, Annelies; Schellens, Tammy; De Marez, Lieven

    2013-01-01

    This paper reports on the introduction of the tablet computer as a personal, mobile learning tool in a secondary school in Flanders, Belgium. In this longitudinal research project, drawing upon the Theory of Planned Behavior, we question the relative extent to which attitude, subjective norm, and self-efficacy explain the prospective uptake of the…

  12. The Effects of the Use of Activity-Based Costing Software in the Learning Process: An Empirical Analysis

    ERIC Educational Resources Information Center

    Tan, Andrea; Ferreira, Aldónio

    2012-01-01

    This study investigates the influence of the use of accounting software in teaching activity-based costing (ABC) on the learning process. It draws upon the Theory of Planned Behaviour and uses the end-user computer satisfaction (EUCS) framework to examine students' satisfaction with the ABC software. The study examines students' satisfaction with…

  13. Control Theoretic Modeling for Uncertain Cultural Attitudes and Unknown Adversarial Intent

    DTIC Science & Technology

    2009-02-01

    Constructive computational tools. Subject terms: social learning, social networks, multiagent systems, game theory. ...over-reactionary behaviors; 3) analysis of rational social learning in networks: analysis of belief propagation in social networks in various...general methodology as a predictive device for social network formation and for communication network formation with constraints on the lengths of

  14. Information Technology Education for Older Adults as a Continuing Peer-Learning Process: A Chinese Case Study

    ERIC Educational Resources Information Center

    Xie, Bo

    2007-01-01

    This article examines older Chinese adults' learning and use of computers and the Internet, focusing on the major barriers encountered and the strategies employed to overcome those barriers. A total of 33 interviews were conducted in 2004 in Shanghai. Data analysis was guided by grounded theory. The major findings are as follows: (a) lack of technical…

  15. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    NASA Astrophysics Data System (ADS)

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo

    2015-10-01

    Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.

  16. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems.

    PubMed

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; del Giudice, Paolo

    2015-10-14

    Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
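
    The two records above describe attractor dynamics as the basis of associative memory on neuromorphic hardware. As a software-only illustration of the underlying idea (not the spiking, on-chip implementation), the sketch below stores binary patterns in a Hopfield-style network with a Hebbian rule and retrieves a stored pattern from a corrupted initial state.

    ```python
    # Software-only illustration of the attractor idea described above (it is not
    # the spiking neuromorphic implementation): a Hopfield-style network stores
    # binary patterns as fixed points and retrieves them from noisy initial states.
    import numpy as np

    def train_hopfield(patterns):
        """Hebbian outer-product rule; patterns are arrays of +1/-1."""
        P = np.asarray(patterns, dtype=float)
        n = P.shape[1]
        W = (P.T @ P) / n
        np.fill_diagonal(W, 0.0)           # no self-connections
        return W

    def relax(W, state, steps=20):
        """Synchronous relaxation toward a stored attractor (adequate for this toy)."""
        s = np.asarray(state, dtype=float).copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1.0
        return s

    stored = [np.array([1, -1, 1, -1, 1, -1, 1, -1]),
              np.array([1, 1, 1, 1, -1, -1, -1, -1])]
    W = train_hopfield(stored)
    noisy = np.array([1, -1, 1, 1, 1, -1, 1, -1])      # one bit flipped in pattern 0
    print(relax(W, noisy))                              # -> recovers stored pattern 0
    ```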

  17. Learning visual balance from large-scale datasets of aesthetically highly rated images

    NASA Astrophysics Data System (ADS)

    Jahanian, Ali; Vishwanathan, S. V. N.; Allebach, Jan P.

    2015-03-01

    The concept of visual balance is innate for humans and influences how we perceive visual aesthetics and cognize harmony. Although visual balance is a vital principle of design and is taught in schools of design, it is barely quantified. On the other hand, with the emergence of automatic/semi-automatic visual design for self-publishing, learning visual balance and computationally modeling it may enhance the aesthetics of such designs. In this paper, we present how the quest to understand visual balance inspired us to revisit one of the well-known theories in the visual arts, the so-called theory of "visual rightness" elucidated by Arnheim. We define Arnheim's hypothesis as a design-mining problem with the goal of learning visual balance from the work of professionals. We collected a dataset of 120K aesthetically highly rated images from a professional photography website. We then computed factors that contribute to visual balance based on the notion of visual saliency. We fitted a mixture of Gaussians to the saliency maps of the images and obtained the images' hotspots. Our inferred Gaussians align with Arnheim's hotspots and confirm his theory. Moreover, the results support the viability of the center of mass, symmetry, and the Rule of Thirds in our dataset.
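
    The hotspot-extraction step described here, fitting a mixture of Gaussians to saliency maps, can be sketched as follows. The synthetic saliency map and the two-component mixture below are placeholders for the saliency model and the 120K-image dataset used in the paper; they only illustrate the mechanics of locating hotspots.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)

        # Stand-in for a saliency map (H x W, non-negative); in the paper this would come
        # from a saliency model applied to each photograph.
        H, W = 64, 96
        yy, xx = np.mgrid[0:H, 0:W]
        saliency = (np.exp(-(((yy - 20) ** 2) / 50 + ((xx - 30) ** 2) / 80))
                    + np.exp(-(((yy - 45) ** 2) / 60 + ((xx - 70) ** 2) / 90)))

        # Sample pixel coordinates with probability proportional to saliency,
        # then fit a 2-component Gaussian mixture; the component means act as "hotspots".
        prob = (saliency / saliency.sum()).ravel()
        idx = rng.choice(H * W, size=5000, p=prob)
        points = np.column_stack([idx % W, idx // W])          # (x, y) pairs

        gmm = GaussianMixture(n_components=2, random_state=0).fit(points)
        print("hotspot centres (x, y):\n", gmm.means_)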

  18. Promoting elementary students' epistemology of science through computer-supported knowledge-building discourse and epistemic reflection

    NASA Astrophysics Data System (ADS)

    Lin, Feng; Chan, Carol K. K.

    2018-04-01

    This study examined the role of computer-supported knowledge-building discourse and epistemic reflection in promoting elementary-school students' scientific epistemology and science learning. The participants were 39 Grade 5 students who were collectively pursuing ideas and inquiry for knowledge advance using Knowledge Forum (KF) while studying a unit on electricity; they also reflected on the epistemic nature of their discourse. A comparison class of 22 students, taught by the same teacher, studied the same unit using the school's established scientific investigation method. We hypothesised that engaging students in idea-driven and theory-building discourse, as well as scaffolding them to reflect on the epistemic nature of their discourse, would help them understand their own scientific collaborative discourse as a theory-building process, and therefore understand scientific inquiry as an idea-driven and theory-building process. As hypothesised, we found that students engaged in knowledge-building discourse and reflection outperformed comparison students in scientific epistemology and science learning, and that students' understanding of collaborative discourse predicted their post-test scientific epistemology and science learning. To further understand the epistemic change process among knowledge-building students, we analysed their KF discourse to understand whether and how their epistemic practice had changed after epistemic reflection. The implications on ways of promoting epistemic change are discussed.

  19. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    PubMed

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015 Cognitive Science Society, Inc.
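
    A drastically reduced sketch of the kind of model selection the theory appeals to is given below: two candidate generative models, one using a noisy-OR integration rule and one using a linear-sum rule, are scored on a short sequence of cue-outcome trials, and the better-fitting rule is selected. The toy trial data, the grid over causal strengths, and the crude grid-averaged marginal likelihood are illustrative assumptions, not the paper's Bayesian machinery.

        import numpy as np
        from itertools import product

        # Toy sequential data: each trial lists which of two cues (A, B) are present
        # and whether the binary outcome occurred. The data are hypothetical.
        trials = [((1, 0), 1), ((0, 1), 1), ((1, 1), 1), ((1, 0), 1), ((0, 1), 0), ((1, 1), 1)]

        def lik_noisy_or(wA, wB):
            """Likelihood of the data under a noisy-OR integration rule."""
            L = 1.0
            for (a, b), o in trials:
                p = 1 - (1 - wA * a) * (1 - wB * b)
                L *= p if o else (1 - p)
            return L

        def lik_linear(wA, wB):
            """Likelihood under a linear-sum rule (capped at 1)."""
            L = 1.0
            for (a, b), o in trials:
                p = min(1.0, wA * a + wB * b)
                L *= p if o else (1 - p)
            return L

        # Crude marginal likelihood: average over a coarse grid of causal strengths.
        grid = np.linspace(0.05, 0.95, 10)
        marg_or = np.mean([lik_noisy_or(wA, wB) for wA, wB in product(grid, grid)])
        marg_lin = np.mean([lik_linear(wA, wB) for wA, wB in product(grid, grid)])
        print("preferred integration rule:", "noisy-OR" if marg_or > marg_lin else "linear-sum")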

  20. A Theory of Object Recognition: Computations and Circuits in the Feedforward Path of the Ventral Stream in Primate Visual Cortex

    DTIC Science & Technology

    2005-12-01

    Computational Learning in the Department of Brain & Cognitive Sciences and in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts... physiology and cognitive science... Detailed model implementation and... physiology to cognitive science. The original model [Riesenhuber and Poggio, 1999b] also made a few predictions ranging from biophysics to psychophysics

  1. The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    PubMed

    Casey, M

    1996-08-15

    Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of complexity in activation dynamics. This theory provides a theoretical framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that has not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
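
    The finite state machine extraction that this analysis grounds can be sketched roughly as follows: hidden-state vectors recorded while a network processes strings are quantized (here with k-means), and a transition table is read off between the resulting clusters. For brevity the sketch generates noisy stand-in "activations" from a known two-state parity automaton instead of training an actual RNN; the cluster centres, noise level, and clustering method are assumptions, not the paper's procedure.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)

        # True 2-state parity automaton over {0, 1}; its states stand in for the
        # attractor-like regions a trained RNN would carve out in activation space.
        def parity_state(bits):
            s = 0
            for b in bits:
                s ^= b
            return s

        # Record (activation, input, next activation) triples; activations are noisy
        # points around one centre per automaton state (a proxy for real RNN hidden states).
        centres = np.array([[0.2, 0.8], [0.8, 0.2]])
        records = []
        for _ in range(200):
            bits = rng.integers(0, 2, size=8).tolist()
            for t in range(1, len(bits)):
                s_prev, s_next = parity_state(bits[:t]), parity_state(bits[:t + 1])
                h_prev = centres[s_prev] + 0.05 * rng.normal(size=2)
                h_next = centres[s_next] + 0.05 * rng.normal(size=2)
                records.append((h_prev, bits[t], h_next))

        # FSM extraction: quantize activations with k-means, then tabulate transitions.
        H = np.array([h for h, _, _ in records] + [h2 for _, _, h2 in records])
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(H)
        transitions = {}
        for h_prev, x, h_next in records:
            q1, q2 = int(km.predict([h_prev])[0]), int(km.predict([h_next])[0])
            transitions[(q1, x)] = q2
        print("extracted transition table:", transitions)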

  2. Proceedings of the NATO IST-128 Workshop: Assessing Mission Impact of Cyberattacks Held in Istanbul, Turkey on 15-17 June 2015

    DTIC Science & Technology

    2015-12-01

    combine satisficing behaviour with learning and adaptation through environmental feedback. This a sequential decision making with one alternative...next action that an opponent will most likely take in a strategic interaction. Also, cognitive models derived from instance-based learning theory (IBL... through instance-based learning. In Y. Li (Ed.), Lecture Notes in Computer Science (Vol. 6818, pp. 281-293). Heidelberg: Springer Berlin. Gonzalez, C

  3. Interactions of spatial strategies producing generalization gradient and blocking: A computational approach

    PubMed Central

    Dollé, Laurent; Chavarriaga, Ricardo

    2018-01-01

    We present a computational model of spatial navigation comprising different learning mechanisms in mammals, i.e., associative, cognitive mapping and parallel systems. This model is able to reproduce a large number of experimental results in different variants of the Morris water maze task, including standard associative phenomena (spatial generalization gradient and blocking), as well as navigation based on cognitive mapping. Furthermore, we show that competitive and cooperative patterns between different navigation strategies in the model make it possible to explain previous apparently contradictory results supporting either associative or cognitive mechanisms for spatial learning. The key computational mechanism for reconciling experimental results showing different influences of distal and proximal cues on behavior, different learning times, and different abilities of individuals to alternately perform spatial and response strategies, relies on the dynamic coordination of navigation strategies, whose performance is evaluated online with a common currency through a modular approach. We provide a set of concrete experimental predictions to further test the computational model. Overall, this computational work sheds new light on inter-individual differences in navigation learning, and provides a formal and mechanistic approach to test various theories of spatial cognition in mammals. PMID:29630600

  4. Instructable autonomous agents. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Huffman, Scott Bradley

    1994-01-01

    In contrast to current intelligent systems, which must be laboriously programmed for each task they are meant to perform, instructable agents can be taught new tasks and associated knowledge. This thesis presents a general theory of learning from tutorial instruction and its use to produce an instructable agent. Tutorial instruction is a particularly powerful form of instruction, because it allows the instructor to communicate whatever kind of knowledge a student needs at whatever point it is needed. To exploit this broad flexibility, however, a tutorable agent must support a full range of interaction with its instructor to learn a full range of knowledge. Thus, unlike most machine learning tasks, which target deep learning of a single kind of knowledge from a single kind of input, tutorability requires a breadth of learning from a broad range of instructional interactions. The theory of learning from tutorial instruction presented here has two parts. First, a computational model of an intelligent agent, the problem space computational model, indicates the types of knowledge that determine an agent's performance, and thus, that should be acquirable via instruction. Second, a learning technique, called situated explanation specifies how the agent learns general knowledge from instruction. The theory is embodied by an implemented agent, Instructo-Soar, built within the Soar architecture. Instructo-Soar is able to learn hierarchies of completely new tasks, to extend task knowledge to apply in new situations, and in fact to acquire every type of knowledge it uses during task performance - control knowledge, knowledge of operators' effects, state inferences, etc. - from interactive natural language instructions. This variety of learning occurs by applying the situated explanation technique to a variety of instructional interactions involving a variety of types of instructions (commands, statements, conditionals, etc.). By taking seriously the requirements of flexible tutorial instruction, Instructo-Soar demonstrates a breadth of interaction and learning capabilities that goes beyond previous instructable systems, such as learning apprentice systems. Instructo-Soar's techniques could form the basis for future 'instructable technologies' that come equipped with basic capabilities, and can be taught by novice users to perform any number of desired tasks.

  5. Cocaine addiction as a homeostatic reinforcement learning disorder.

    PubMed

    Keramati, Mehdi; Durand, Audrey; Girardeau, Paul; Gutkin, Boris; Ahmed, Serge H

    2017-03-01

    Drug addiction implicates both reward learning and homeostatic regulation mechanisms of the brain. This has stimulated 2 partially successful theoretical perspectives on addiction. Many important aspects of addiction, however, remain to be explained within a single, unified framework that integrates the 2 mechanisms. Building upon a recently developed homeostatic reinforcement learning theory, the authors focus on a key transition stage of addiction that is well modeled in animals, escalation of drug use, and propose a computational theory of cocaine addiction where cocaine reinforces behavior due to its rapid homeostatic corrective effect, whereas its chronic use induces slow and long-lasting changes in homeostatic setpoint. Simulations show that our new theory accounts for key behavioral and neurobiological features of addiction, most notably, escalation of cocaine use, drug-primed craving and relapse, individual differences underlying dose-response curves, and dopamine D2-receptor downregulation in addicts. The theory also generates unique predictions about cocaine self-administration behavior in rats that are confirmed by new experimental results. Viewing addiction as a homeostatic reinforcement learning disorder coherently explains many behavioral and neurobiological aspects of the transition to cocaine addiction, and suggests a new perspective toward understanding addiction. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Integrating Computer Spreadsheet Modeling into a Microeconomics Curriculum: Principles to Managerial.

    ERIC Educational Resources Information Center

    Clark, Joy L.; Hegji, Charles E.

    1997-01-01

    Notes that using spreadsheets to teach microeconomics principles enables learning by doing in the exploration of basic concepts. Introduction of increasingly complex topics leads to exploration of theory and managerial decision making. (SK)

  7. Deep learning for computational chemistry.

    PubMed

    Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav

    2017-06-15

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure-activity relationships, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.

  8. Effects of Computer Support, Collaboration, and Time Lag on Performance Self-Efficacy and Transfer of Training: A Longitudinal Meta-Analysis

    ERIC Educational Resources Information Center

    Gegenfurtner, Andreas; Veermans, Koen; Vauras, Marja

    2013-01-01

    This meta-analysis (29 studies, k = 33, N = 4158) examined the longitudinal development of the relationship between performance self-efficacy and transfer before and after training. A specific focus was on training programs that afforded varying degrees of computer-supported collaborative learning (CSCL). Consistent with social cognitive theory,…

  9. The Use of Fuzzy Theory in Grading of Students in Math

    ERIC Educational Resources Information Center

    Bjelica, Momcilo; Rankovic, Dragica

    2010-01-01

    The development of computer science, statistics, and other technological fields gives us more opportunities to improve the evaluation of our students' degree of knowledge and achievement in the learning process. More and more, we are relying on computer software to guide us in the grading process. An improved way of grading can help…

  10. Evaluating Students' Programming Skill Behaviour and Personalizing Their Computer Learning Environment Using "The Hour of Code" Paradigm

    ERIC Educational Resources Information Center

    Mallios, Nikolaos; Vassilakopoulos, Michael Gr.

    2015-01-01

    One of the most intriguing objectives when teaching computer science to mid-adolescent high school students is attracting, and above all maintaining, their concentration within the limits of the class. A number of theories have been proposed and numerous methodologies have been applied, aiming to assist in the implementation of a personalized learning…

  11. Theoretical Investigation of oxides for batteries and fuel cell applications

    NASA Astrophysics Data System (ADS)

    Ganesh, Panchapakesan; Lubimtsev, Andrew A.; Balachandran, Janakiraman

    I will present theoretical studies of Li-ion and proton-conducting oxides using a combination of theory and computations that involve Density Functional Theory-based atomistic modeling, cluster-expansion-based studies, global optimization, high-throughput computations, and machine-learning-based investigation of ionic transport in oxide materials. In Li-ion intercalated oxides, we explain the experimentally observed (Nature Materials 12, 518-522 (2013)) 'intercalation pseudocapacitance' phenomenon, and explain why Nb2O5 is special in showing this behavior when Li-ions are intercalated (J. Mater. Chem. A, 2013,1, 14951-14956), but not when Na-ions are used. In addition, we explore Li-ion intercalation theoretically in the VO2 (B) phase, which is somewhat structurally similar to Nb2O5, and predict an interesting role of site-trapping on the voltage and capacity of the material, validated by ongoing experiments. Computations of proton-conducting oxides explain why Y-doped BaZrO3, one of the fastest proton-conducting oxides, shows a decrease in conductivity above 20% Y-doping. Further, using high-throughput computations and machine learning tools, we discover general principles to improve proton conductivity. Acknowledgements: LDRD at ORNL and CNMS at ORNL

  12. Plasticity in the Rat Prefrontal Cortex: Linking Gene Expression and an Operant Learning with a Computational Theory

    PubMed Central

    Rapanelli, Maximiliano; Lew, Sergio Eduardo; Frick, Luciana Romina; Zanutto, Bonifacio Silvano

    2010-01-01

    Plasticity in the medial prefrontal cortex (mPFC) of rodents, or the lateral prefrontal cortex (lPFC) in non-human primates, plays a key role in neural circuits involved in learning and memory. Several genes, such as brain-derived neurotrophic factor (BDNF), cAMP response element binding protein (CREB), Synapsin I, Calcium/calmodulin-dependent protein kinase II (CamKII), activity-regulated cytoskeleton-associated protein (Arc), c-jun, and c-fos, have been related to plasticity processes. We analysed differential expression of plasticity-related genes and immediate early genes in the mPFC of rats during learning of an operant conditioning task. Incompletely and completely trained animals were studied because of the distinct events predicted by our computational model at different learning stages. We measured changes in mRNA levels by real-time RT-PCR during learning; expression of these plasticity-associated markers increased while the task was being learned, and those increments began to decline once the task was learned. The plasticity changes in the lPFC during learning predicted by the model matched those of the representative gene BDNF. Herein, using an integrative approach combining a computational model and gene expression, we show for the first time that plasticity in the rat mPFC is higher while an operant conditioning task is being learned than after it has been learned. PMID:20111591

  13. A computational learning model for metrical phonology.

    PubMed

    Dresher, B E; Kaye, J D

    1990-02-01

    One of the major challenges to linguistic theory is the solution of what has been termed the "projection problem". Simply put, linguistics must account for the fact that starting from a data base that is both unsystematic and relatively small, a human child is capable of constructing a grammar that mirrors, for all intents and purposes, the adult system. In this article we shall address ourselves to the question of the learnability of a postulated subsystem of phonological structure: the stress system. We shall describe a computer program which is designed to acquire this subpart of linguistic structure. Our approach follows the "principles and parameters" model of Chomsky (1981a, b). This model is particularly interesting from both a computational point of view and with respect to the development of learning theories. We encode the relevant aspects of universal grammar (UG)--those aspects of linguistic structure that are presumed innate and thus present in every linguistic system. The learning process consists of fixing a number of parameters which have been shown to underlie stress systems and which should, in principle, lead the learner to the postulation of the system from which the primary linguistic data (i.e., the input to the learner) is drawn. We go on to explore certain formal and substantive properties of this learning system. Questions such as cross-parameter dependencies, determinism, subsets, and incremental versus all-at-once learning are raised and discussed in the article. The issues raised by this study provide another perspective on the formal structure of stress systems and the learnability of parameter systems in general.

  14. Neurodynamic system theory: scope and limits.

    PubMed

    Erdi, P

    1993-06-01

    This paper proposes that neurodynamic system theory may be used to connect structural and functional aspects of neural organization. The paper claims that generalized causal dynamic models are proper tools for describing the self-organizing mechanism of the nervous system. In particular, it is pointed out that ontogeny, development, normal performance, learning, and plasticity, can be treated by coherent concepts and formalism. Taking into account the self-referential character of the brain, autopoiesis, endophysics and hermeneutics are offered as elements of a poststructuralist brain (-mind-computer) theory.

  15. Neural Basis of Reinforcement Learning and Decision Making

    PubMed Central

    Lee, Daeyeol; Seo, Hyojung; Jung, Min Whan

    2012-01-01

    Reinforcement learning is an adaptive process in which an animal utilizes its previous experience to improve the outcomes of future choices. Computational theories of reinforcement learning play a central role in the newly emerging areas of neuroeconomics and decision neuroscience. In this framework, actions are chosen according to their value functions, which describe how much future reward is expected from each action. Value functions can be adjusted not only through reward and penalty, but also by the animal’s knowledge of its current environment. Studies have revealed that a large proportion of the brain is involved in representing and updating value functions and using them to choose an action. However, how the nature of a behavioral task affects the neural mechanisms of reinforcement learning remains incompletely understood. Future studies should uncover the principles by which different computational elements of reinforcement learning are dynamically coordinated across the entire brain. PMID:22462543
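
    The core idea that actions are chosen according to learned value functions, adjusted by reward and penalty, can be illustrated with a minimal two-armed bandit learner. The reward probabilities, learning rate, and softmax temperature below are arbitrary illustrative values, not parameters from the reviewed studies.

        import numpy as np

        rng = np.random.default_rng(3)

        # Two actions with different (unknown to the agent) reward probabilities.
        reward_prob = np.array([0.8, 0.2])
        Q = np.zeros(2)          # action-value function
        alpha, beta = 0.1, 3.0   # learning rate, softmax inverse temperature

        for trial in range(500):
            # Choose an action according to its current value (softmax rule).
            p = np.exp(beta * Q) / np.exp(beta * Q).sum()
            a = rng.choice(2, p=p)
            r = float(rng.random() < reward_prob[a])
            # Update the chosen action's value from the reward prediction error.
            Q[a] += alpha * (r - Q[a])

        print("learned action values:", np.round(Q, 2))   # roughly [0.8, 0.2]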

  16. Information Compression, Multiple Alignment, and the Representation and Processing of Knowledge in the Brain.

    PubMed

    Wolff, J Gerard

    2016-01-01

    The SP theory of intelligence, with its realization in the SP computer model, aims to simplify and integrate observations and concepts across artificial intelligence, mainstream computing, mathematics, and human perception and cognition, with information compression as a unifying theme. This paper describes how abstract structures and processes in the theory may be realized in terms of neurons, their interconnections, and the transmission of signals between neurons. This part of the SP theory, SP-neural, is a tentative and partial model for the representation and processing of knowledge in the brain. Empirical support for the SP theory, outlined in the paper, provides indirect support for SP-neural. In the abstract part of the SP theory (SP-abstract), all kinds of knowledge are represented with patterns, where a pattern is an array of atomic symbols in one or two dimensions. In SP-neural, the concept of a "pattern" is realized as an array of neurons called a pattern assembly, similar to Hebb's concept of a "cell assembly" but with important differences. Central to the processing of information in SP-abstract is information compression via the matching and unification of patterns (ICMUP) and, more specifically, information compression via the powerful concept of multiple alignment, borrowed and adapted from bioinformatics. Processes such as pattern recognition, reasoning and problem solving are achieved via the building of multiple alignments, while unsupervised learning is achieved by creating patterns from sensory information and also by creating patterns from multiple alignments in which there is a partial match between one pattern and another. It is envisaged that, in SP-neural, short-lived neural structures equivalent to multiple alignments will be created via an interplay of excitatory and inhibitory neural signals. It is also envisaged that unsupervised learning will be achieved by the creation of pattern assemblies from sensory information and from the neural equivalents of multiple alignments, much as in the non-neural SP theory, and significantly different from the "Hebbian" kinds of learning which are widely used in the kinds of artificial neural network that are popular in computer science. The paper discusses several associated issues, with relevant empirical evidence.

  17. A Computer-Assisted Framework Based on a Cognitivist Learning Theory for Teaching Mathematics in the Early Primary Years

    ERIC Educational Resources Information Center

    Moradmand, Nasrin; Datta, Amitava; Oakley, Grace

    2012-01-01

    With the world moving rapidly into digital media and information, the ways in which learning activities in mathematics can be created and delivered are changing. However, to get the best results from the integration of ICTs in education, any application's design and development needs to be based on pedagogically appropriate principles, in terms of…

  18. Distinct roles of dopamine and subthalamic nucleus in learning and probabilistic decision making.

    PubMed

    Coulthard, Elizabeth J; Bogacz, Rafal; Javed, Shazia; Mooney, Lucy K; Murphy, Gillian; Keeley, Sophie; Whone, Alan L

    2012-12-01

    Even simple behaviour requires us to make decisions based on combining multiple pieces of learned and new information. Making such decisions requires both learning the optimal response to each given stimulus as well as combining probabilistic information from multiple stimuli before selecting a response. Computational theories of decision making predict that learning individual stimulus-response associations and rapid combination of information from multiple stimuli are dependent on different components of basal ganglia circuitry. In particular, learning and retention of memory, required for optimal response choice, are significantly reliant on dopamine, whereas integrating information probabilistically is critically dependent upon functioning of the glutamatergic subthalamic nucleus (computing the 'normalization term' in Bayes' theorem). Here, we test these theories by investigating 22 patients with Parkinson's disease either treated with deep brain stimulation to the subthalamic nucleus and dopaminergic therapy or managed with dopaminergic therapy alone. We use computerized tasks that probe three cognitive functions-information acquisition (learning), memory over a delay and information integration when multiple pieces of sequentially presented information have to be combined. Patients performed the tasks ON or OFF deep brain stimulation and/or ON or OFF dopaminergic therapy. Consistent with the computational theories, we show that stopping dopaminergic therapy impairs memory for probabilistic information over a delay, whereas deep brain stimulation to the region of the subthalamic nucleus disrupts decision making when multiple pieces of acquired information must be combined. Furthermore, we found that when participants needed to update their decision on the basis of the last piece of information presented in the decision-making task, patients with deep brain stimulation of the subthalamic nucleus region did not slow down appropriately to revise their plan, a pattern of behaviour that mirrors the impulsivity described clinically in some patients with subthalamic nucleus deep brain stimulation. Thus, we demonstrate distinct mechanisms for two important facets of human decision making: first, a role for dopamine in memory consolidation, and second, the critical importance of the subthalamic nucleus in successful decision making when multiple pieces of information must be combined.

  19. Application of a model of instrumental conditioning to mobile robot control

    NASA Astrophysics Data System (ADS)

    Saksida, Lisa M.; Touretzky, D. S.

    1997-09-01

    Instrumental conditioning is a psychological process whereby an animal learns to associate its actions with their consequences. This type of learning is exploited in animal training techniques such as 'shaping by successive approximations,' which enables trainers to gradually adjust the animal's behavior by giving strategically timed reinforcements. While this is similar in principle to reinforcement learning, the real phenomenon includes many subtle effects not considered in the machine learning literature. In addition, a good deal of domain information is utilized by an animal learning a new task; it does not start from scratch every time it learns a new behavior. For these reasons, it is not surprising that mobile robot learning algorithms have yet to approach the sophistication and robustness of animal learning. A serious attempt to model instrumental learning could prove fruitful for improving machine learning techniques. In the present paper, we develop a computational theory of shaping at a level appropriate for controlling mobile robots. The theory is based on a series of mechanisms for 'behavior editing,' in which pre-existing behaviors, either innate or previously learned, can be dramatically changed in magnitude, shifted in direction, or otherwise manipulated so as to produce new behavioral routines. We have implemented our theory on Amelia, an RWI B21 mobile robot equipped with a gripper and color video camera. We provide results from training Amelia on several tasks, all of which were constructed as variations of one innate behavior, object-pursuit.

  20. The computational nature of memory modification

    PubMed Central

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-01-01

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature. DOI: http://dx.doi.org/10.7554/eLife.23763.001 PMID:28294944
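
    A heavily simplified sketch of the latent-cause mechanism described here: each new observation is either assigned to an existing cause (modifying that memory) or to a new cause (forming a new memory), with a Chinese-restaurant-process prior traded off against how well existing causes explain the observation. The one-dimensional observations, Gaussian likelihoods, and MAP assignment are assumptions made for brevity; the paper's model is considerably richer.

        import numpy as np
        from scipy.stats import norm

        # Minimal latent-cause assignment: each observation is attributed either to an
        # existing cause (cluster) or, if none explains it well, to a brand-new cause.
        alpha, obs_sd = 1.0, 1.0        # CRP concentration, observation noise
        causes = []                     # list of lists of observations per latent cause

        def assign(x):
            # Prior over causes from the Chinese restaurant process, likelihood from a
            # Gaussian around each cause's running mean; a new cause competes via alpha.
            n = sum(len(c) for c in causes)
            scores = [len(c) / (n + alpha) * norm.pdf(x, np.mean(c), obs_sd) for c in causes]
            scores.append(alpha / (n + alpha) * norm.pdf(x, 0.0, 3.0))   # broad prior for a new cause
            k = int(np.argmax(scores))
            if k == len(causes):
                causes.append([])        # memory formation: a new latent cause
            causes[k].append(x)          # memory modification: update an existing cause
            return k

        stream = [0.1, 0.3, -0.2, 5.2, 4.9, 0.2, 5.1]   # two regimes of sensory observations
        print([assign(x) for x in stream])               # e.g. [0, 0, 0, 1, 1, 0, 1]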

  1. Content-oriented Approach to Organization of Theories and Its Utilization

    NASA Astrophysics Data System (ADS)

    Hayashi, Yusuke; Bourdeau, Jacqueline; Mizoguch, Riichiro

    Although the relation between theory and practice is a foundation of scientific and technological development, the gap between theory and practice has continued to widen in recent years. This gap carries a risk of distrust of science and technology. Ontological engineering, as content-oriented research, is expected to contribute to resolving the gap. This paper demonstrates the feasibility of organizing theoretical knowledge with ontological engineering, and of building new-generation intelligent systems on that organization, through an application in the area of learning/instruction support. This area also suffers from the gap between theory and practice, and its resolution is strongly needed. We previously proposed the OMNIBUS ontology, a comprehensive ontology that covers different learning/instructional theories and paradigms, and SMARTIES, a theory-aware and standard-compliant authoring system for making learning/instructional scenarios based on the OMNIBUS ontology. We believe that theory-awareness and standard-compliance bridge the gap between theory and practice, because they link theories to the practical use of standard technologies and enable practitioners to easily enjoy theoretical support while using standard technologies in practice. The following goals are set in order to achieve this: computers (1) understand a variety of learning/instructional theories based on their organization, (2) use that understanding to help authors make learning/instructional scenarios, and (3) make such theoretically sound scenarios interoperable within the framework of standard technologies. This paper suggests an ontological engineering solution for achieving these three goals. Although the evaluation is far from complete in terms of practical use, we believe that the results of this study address high-level technical challenges in artificial intelligence, not only in education but also in general, and we therefore hope that they constitute a substantial contribution to the organization of theoretical knowledge in many other areas.

  2. An Integrative Account of Constraints on Cross-Situational Learning

    PubMed Central

    Yurovsky, Daniel; Frank, Michael C.

    2015-01-01

    Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052

  3. Bootstrapping in a language of thought: a formal model of numerical concept learning.

    PubMed

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2012-05-01

    In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words, to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful representational system. We provide an implemented model that is powerful enough to learn number word meanings and other related conceptual systems from naturalistic data. The model shows that bootstrapping can be made computationally and philosophically well-founded as a theory of number learning. Our approach demonstrates how learners may combine core cognitive operations to build sophisticated representations during the course of development, and how this process explains observed developmental patterns in number word learning. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Equilibria of perceptrons for simple contingency problems.

    PubMed

    Dawson, Michael R W; Dupuis, Brian

    2012-08-01

    The contingency between cues and outcomes is fundamentally important to theories of causal reasoning and to theories of associative learning. Researchers have computed the equilibria of Rescorla-Wagner models for a variety of contingency problems, and have used these equilibria to identify situations in which the Rescorla-Wagner model is consistent, or inconsistent, with normative models of contingency. Mathematical analyses that directly compare artificial neural networks to contingency theory have not been performed, because of the assumed equivalence between the Rescorla-Wagner learning rule and the delta rule training of artificial neural networks. However, recent results indicate that this equivalence is not as straightforward as typically assumed, suggesting a strong need for mathematical accounts of how networks deal with contingency problems. One such analysis is presented here, where it is proven that the structure of the equilibrium for a simple network trained on a basic contingency problem is quite different from the structure of the equilibrium for a Rescorla-Wagner model faced with the same problem. However, these structural differences lead to functionally equivalent behavior. The implications of this result for the relationships between associative learning, contingency theory, and connectionism are discussed.
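
    For readers unfamiliar with the setup, a standard Rescorla-Wagner / delta-rule simulation on a basic contingency problem looks like the sketch below: a cue and an always-present context compete for associative strength, and the cue's strength settles near the contingency ΔP. The specific probabilities and learning rate are illustrative, and the sketch does not reproduce the paper's analytical comparison of equilibrium structures.

        import numpy as np

        rng = np.random.default_rng(4)

        # Simple contingency problem: outcome probability 0.8 when the cue is present,
        # 0.2 when only the background context is present.
        p_outcome = {1: 0.8, 0: 0.2}
        V = np.zeros(2)          # associative strengths: [cue, context]
        lr = 0.05

        for trial in range(20000):
            cue = rng.integers(0, 2)                    # cue present on half the trials
            x = np.array([cue, 1])                      # context is always present
            outcome = float(rng.random() < p_outcome[cue])
            # Rescorla-Wagner / delta rule update driven by the summed prediction error.
            V += lr * (outcome - V @ x) * x

        print("V_cue (≈ ΔP = 0.6):", round(V[0], 2), " V_context (≈ 0.2):", round(V[1], 2))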

  5. Holography as deep learning

    NASA Astrophysics Data System (ADS)

    Gan, Wen-Cong; Shu, Fu-Wen

    The quantum many-body problem, with its exponentially large number of degrees of freedom, can be reduced to a tractable computational form by neural-network methods [G. Carleo and M. Troyer, Science 355 (2017) 602, arXiv:1606.02318]. The power of deep neural networks (DNNs) based on deep learning is clarified by mapping them to the renormalization group (RG), which may shed light on the holographic principle by identifying a sequence of RG transformations with the AdS geometry. In this paper, we show that any network which reflects the RG process has an intrinsic hyperbolic geometry, and we discuss the structure of entanglement encoded in the graph of a DNN. We find that the entanglement structure of a DNN is of the Ryu-Takayanagi form. Based on these facts, we argue that the emergence of a holographic gravitational theory is related to the deep learning process of the quantum field theory.

  6. Intrinsic motivation, curiosity, and learning: Theory and applications in educational technologies.

    PubMed

    Oudeyer, P-Y; Gottlieb, J; Lopes, M

    2016-01-01

    This chapter studies the bidirectional causal interactions between curiosity and learning and discusses how understanding these interactions can be leveraged in educational technology applications. First, we review recent results showing how state curiosity, and more generally the experience of novelty and surprise, can enhance learning and memory retention. Then, we discuss how psychology and neuroscience have conceptualized curiosity and intrinsic motivation, studying how the brain can be intrinsically rewarded by novelty, complexity, or other measures of information. We explain how the framework of computational reinforcement learning can be used to model such mechanisms of curiosity. Then, we discuss the learning progress (LP) hypothesis, which posits a positive feedback loop between curiosity and learning. We outline experiments with robots that show how LP-driven attention and exploration can self-organize a developmental learning curriculum scaffolding efficient acquisition of multiple skills/tasks. Finally, we discuss recent work exploiting these conceptual and computational models in educational technologies, showing in particular how intelligent tutoring systems can be designed to foster curiosity and learning. © 2016 Elsevier B.V. All rights reserved.
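
    The learning progress (LP) hypothesis mentioned here can be caricatured with a toy curriculum learner: the agent estimates, for each activity, how fast its prediction error has been dropping, and preferentially practices the activity with the highest learning progress, so it concentrates on the learnable task and largely ignores the unlearnable one. The error curves, progress window, and epsilon-greedy selection below are illustrative assumptions, not the chapter's robotic experiments.

        import numpy as np

        rng = np.random.default_rng(5)

        # Two "activities": errors on task 0 shrink with practice (learnable),
        # errors on task 1 stay high (unlearnable noise). Values are illustrative.
        practice = [0, 0]
        err_hist = [[1.0], [1.0]]

        def do_activity(k):
            practice[k] += 1
            err = (0.9 ** practice[k]) if k == 0 else 1.0
            err_hist[k].append(err + 0.02 * rng.random())

        choices = []
        for t in range(200):
            # Learning progress = recent drop in prediction error for each activity.
            lp = [max(0.0, h[-5] - h[-1]) if len(h) > 5 else 1.0 for h in err_hist]
            # Epsilon-greedy over learning progress: mostly pick the activity whose error drops fastest.
            k = int(np.argmax(lp)) if rng.random() > 0.1 else int(rng.integers(0, 2))
            do_activity(k)
            choices.append(k)

        print("fraction of time on the learnable task:", np.mean(np.array(choices) == 0))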

  7. Reinforcement learning with Marr.

    PubMed

    Niv, Yael; Langdon, Angela

    2016-10-01

    To many, the poster child for David Marr's famous three levels of scientific inquiry is reinforcement learning-a computational theory of reward optimization, which readily prescribes algorithmic solutions that evidence striking resemblance to signals found in the brain, suggesting a straightforward neural implementation. Here we review questions that remain open at each level of analysis, concluding that the path forward to their resolution calls for inspiration across levels, rather than a focus on mutual constraints.

  8. Understanding the Convolutional Neural Networks with Gradient Descent and Backpropagation

    NASA Astrophysics Data System (ADS)

    Zhou, XueFei

    2018-04-01

    With the development of computer technology, the applications of machine learning have become more and more extensive, and machine learning is providing endless opportunities to develop new applications. One of those applications is image recognition using Convolutional Neural Networks (CNNs). The CNN is one of the most common algorithms in image recognition, and understanding its theory and structure is important for every scholar interested in this field. CNNs are mainly used in computer recognition tasks, especially voice and text recognition, among other applications. They utilize a hierarchical structure with different layers to accelerate computation. In addition, the key features of CNNs are weight sharing and dimensionality reduction, which together underpin their high effectiveness and efficiency in terms of computing speed and error rate. With the help of other learning algorithms, CNNs can be used in many machine learning scenarios, especially in deep learning. Following a general introduction to the background and the core CNN approach, this paper focuses on summarizing how gradient descent and backpropagation work and how they contribute to the high performance of CNNs. Some practical applications are also discussed, and the last section presents the conclusion and some perspectives on future work.
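
    As a concrete, minimal example of the gradient-descent-plus-backpropagation loop the paper summarizes, the sketch below trains a single 1-D convolution filter with NumPy: the squared error at each output position is propagated back to the shared filter weights through the chain rule, and the weights are moved down the gradient. The data, filter size, and learning rate are arbitrary; a real CNN stacks many such layers with nonlinearities and pooling on top of this same mechanism.

        import numpy as np

        rng = np.random.default_rng(6)

        # Learn a length-3 convolution filter from input/output examples.
        true_w = np.array([0.5, -1.0, 2.0])
        X = rng.normal(size=(64, 10))                        # 64 signals of length 10
        Y = np.array([np.convolve(x, true_w, mode='valid') for x in X])

        w = np.zeros(3)
        lr = 0.01
        for epoch in range(300):
            grad = np.zeros(3)
            for x, y in zip(X, Y):
                pred = np.convolve(x, w, mode='valid')
                err = pred - y                               # dLoss/dPred for squared error
                # Backpropagation: the chain rule routes each output error back to the
                # shared filter weights (np.convolve flips the kernel, hence the [::-1]).
                for i, e in enumerate(err):
                    grad += e * x[i:i + 3][::-1]
            w -= lr * grad / len(X)                          # gradient descent step
        print("learned filter:", np.round(w, 2), "  target:", true_w)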

  9. Neural correlates of forward planning in a spatial decision task in humans

    PubMed Central

    Simon, Dylan Alexander; Daw, Nathaniel D.

    2011-01-01

    Although reinforcement learning (RL) theories have been influential in characterizing the brain’s mechanisms for reward-guided choice, the predominant temporal difference (TD) algorithm cannot explain many flexible or goal-directed actions that have been demonstrated behaviorally. We investigate such actions by contrasting an RL algorithm that is model-based, in that it relies on learning a map or model of the task and planning within it, to traditional model-free TD learning. To distinguish these approaches in humans, we used fMRI in a continuous spatial navigation task, in which frequent changes to the layout of the maze forced subjects continually to relearn their favored routes, thereby exposing the RL mechanisms employed. We sought evidence for the neural substrates of such mechanisms by comparing choice behavior and BOLD signals to decision variables extracted from simulations of either algorithm. Both choices and value-related BOLD signals in striatum, though most often associated with TD learning, were better explained by the model-based theory. Further, predecessor quantities for the model-based value computation were correlated with BOLD signals in the medial temporal lobe and frontal cortex. These results point to a significant extension of both the computational and anatomical substrates for RL in the brain. PMID:21471389

  10. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    PubMed Central

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; del Giudice, Paolo

    2015-01-01

    Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. PMID:26463272

  11. Reinforcement learning and episodic memory in humans and animals: an integrative framework

    PubMed Central

    Gershman, Samuel J.; Daw, Nathaniel D.

    2018-01-01

    We review the psychology and neuroscience of reinforcement learning (RL), which has witnessed significant progress in the last two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, the simplicity of these tasks misses important aspects of reinforcement learning in the real world: (i) State spaces are high-dimensional, continuous, and partially observable; this implies that (ii) data are relatively sparse: indeed precisely the same situation may never be encountered twice; and also that (iii) rewards depend on long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, these theories have largely connected with procedural and semantic memory: how knowledge about action values or world models extracted gradually from many experiences can drive choice. This misses many aspects of memory related to traces of individual events, such as episodic memory. We suggest that these two gaps are related. In particular, the computational challenges can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (i) efficiently approximate value functions over complex state spaces, (ii) learn with very little data, and (iii) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system. PMID:27618944

  12. Neurolinguistics and psycholinguistics as a basis for computer acquisition of natural language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powers, D.M.W.

    1983-04-01

    Research into natural language understanding systems for computers has concentrated on implementing particular grammars and grammatical models of the language concerned. This paper presents a rationale for research into natural language understanding systems based on neurological and psychological principles. Important features of the approach are that it seeks to place the onus of learning the language on the computer, and that it seeks to make use of the vast wealth of relevant psycholinguistic and neurolinguistic theory. 22 references.

  13. Knowledge Management through the Equilibrium Pattern Model for Learning

    NASA Astrophysics Data System (ADS)

    Sarirete, Akila; Noble, Elizabeth; Chikh, Azeddine

    Contemporary students are characterized by very applied learning styles and methods of acquiring knowledge. This behavior is consistent with constructivist models in which students are co-partners in the learning process. In the present work, the authors developed a new model of learning based on constructivist theory coupled with Piaget's theory of cognitive development. The model frames learning in terms of several stages, and the move from one stage to another requires challenging the learner. Each time a new concept is introduced, it creates a disequilibrium that must be worked through before the learner returns to equilibrium. This process of "disequilibrium/equilibrium" has been analyzed and validated using a course in computer networking offered as part of the Cisco Networking Academy Program at Effat College, a women's college in Saudi Arabia. The model provides a theoretical foundation for teaching, especially in a complex knowledge domain such as engineering, and can be used in a knowledge economy.

  14. Learning by statistical cooperation of self-interested neuron-like computing elements.

    PubMed

    Barto, A G

    1985-01-01

    Since the usual approaches to cooperative computation in networks of neuron-like computing elements do not assume that network components have any "preferences", they do not make substantive contact with game-theoretic concepts, despite their use of some of the same terminology. In the approach presented here, however, each network component, or adaptive element, is a self-interested agent that prefers some inputs over others and "works" toward obtaining the most highly preferred inputs. Here we describe an adaptive element that is robust enough to learn to cooperate with other elements like itself in order to further its self-interests. It is argued that some of the longstanding problems concerning adaptation and learning by networks might be solvable by this form of cooperativity, and computer simulation experiments are described that show how networks of self-interested components that are sufficiently robust can solve rather difficult learning problems. We then place the approach in its proper historical and theoretical perspective through comparison with a number of related algorithms. A secondary aim of this article is to suggest that beyond what is explicitly illustrated here, there is a wealth of ideas from game theory and allied disciplines such as mathematical economics that can be of use in thinking about cooperative computation in both nervous systems and man-made systems.

  15. Developing Learning Tool of Control System Engineering Using Matrix Laboratory Software Oriented on Industrial Needs

    NASA Astrophysics Data System (ADS)

    Isnur Haryudo, Subuh; Imam Agung, Achmad; Firmansyah, Rifqi

    2018-04-01

    The purpose of this research is to develop learning media for control systems engineering using Matrix Laboratory software with an industry-requirements approach. Learning media serve as a tool for creating a better and more effective teaching and learning situation, because they can accelerate the learning process and thereby enhance the quality of learning. Control engineering taught with Matrix Laboratory software can increase students' interest and attention, provide real experience, and foster an independent attitude. The research design follows research and development (R & D) methods modified by a multi-disciplinary team of researchers. The research used a computer-based learning method consisting of a computer and Matrix Laboratory software integrated with props. Matrix Laboratory can visualize the theory and analysis of control systems, integrating computing, visualization, and programming in a form that is easy to use. The resulting instructional media apply mathematical equations in Matrix Laboratory software to a control-system application with a DC motor plant and PID (Proportional-Integral-Derivative) control. This is relevant because PID control is widely used in industrial production processes built on Distributed Control Systems (DCSs), Programmable Controllers (PLCs), and Microcontrollers (MCUs).

  16. A computer simulation approach to measurement of human control strategy

    NASA Technical Reports Server (NTRS)

    Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III

    1982-01-01

    Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.

  17. MUD and Self Efficacy.

    ERIC Educational Resources Information Center

    Lee, Kwan Min

    2000-01-01

    Proposes a theoretical framework for analyzing the effect of MUD (Multi-User Dungeons) playing on users' self-efficacy by applying Bandura's social learning theory, and introduces three types of self-efficacy: computer self-efficacy; social self-efficacy; and generalized self-efficacy. Considers successful performance, vicarious experience,…

  18. Alterations in choice behavior by manipulations of world model.

    PubMed

    Green, C S; Benson, C; Kersten, D; Schrater, P

    2010-09-14

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
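
    For context, the behavioral contrast the authors build on is sketched below: on an i.i.d. binary prediction task, a Bayesian learner that applies a max decision rule to its belief earns higher accuracy than a learner that "probability matches" by sampling guesses from the same belief. This toy sketch only shows that contrast; it does not implement the paper's point that matching-like behavior can itself emerge from a max rule under an incorrect generative model. Task parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)

        # Binary prediction task: outcome 1 occurs with probability 0.7 on every trial.
        p_true, n_trials = 0.7, 2000
        alpha, beta = 1.0, 1.0            # Beta prior over the outcome probability
        correct_max, correct_match = 0, 0

        for t in range(n_trials):
            p_hat = alpha / (alpha + beta)            # posterior mean belief
            guess_max = int(p_hat >= 0.5)             # max decision rule: always pick the likelier outcome
            guess_match = int(rng.random() < p_hat)   # probability matching: sample guesses from the belief
            outcome = int(rng.random() < p_true)
            correct_max += guess_max == outcome
            correct_match += guess_match == outcome
            # Bayesian belief update (Beta-Bernoulli conjugacy).
            alpha += outcome
            beta += 1 - outcome

        print("max rule accuracy:", correct_max / n_trials)       # about 0.70
        print("matching accuracy:", correct_match / n_trials)     # about 0.7**2 + 0.3**2 = 0.58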

  19. Alterations in choice behavior by manipulations of world model

    PubMed Central

    Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.

    2010-01-01

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507

  20. Homeostatic reinforcement learning for integrating reward collection and physiological stability.

    PubMed

    Keramati, Mehdi; Gutkin, Boris

    2014-12-02

    Efficient regulation of internal homeostasis and defending it against perturbations requires adaptive behavioral strategies. However, the computational principles mediating the interaction between homeostatic and associative learning processes remain undefined. Here we use a definition of primary rewards, as outcomes fulfilling physiological needs, to build a normative theory showing how learning motivated behaviors may be modulated by internal states. Within this framework, we mathematically prove that seeking rewards is equivalent to the fundamental objective of physiological stability, defining the notion of physiological rationality of behavior. We further suggest a formal basis for temporal discounting of rewards by showing that discounting motivates animals to follow the shortest path in the space of physiological variables toward the desired setpoint. We also explain how animals learn to act predictively to preclude prospective homeostatic challenges, and several other behavioral patterns. Finally, we suggest a computational role for interaction between hypothalamus and the brain reward system.
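
    One common way to write the core idea down (the notation, setpoints, and exponents below are assumptions for illustration, not the paper's code): define a drive as a convex distance of the internal state from its setpoint, and let the reward of an outcome be the reduction in drive that the outcome produces.

        import numpy as np

        setpoint = np.array([50.0, 10.0])        # assumed physiological setpoints (e.g., glucose, temperature)

        def drive(h, n=4.0, m=2.0):
            # convex distance from the setpoint; exponents chosen here only for illustration
            return np.sum(np.abs(setpoint - h) ** n) ** (1.0 / m)

        def reward(h, k):
            # reward of outcome k received in internal state h = reduction in drive
            return drive(h) - drive(h + k)

        h = np.array([40.0, 10.0])               # currently glucose-deprived state
        print(reward(h, np.array([5.0, 0.0])))   # food moves h toward the setpoint: positive reward
        print(reward(h, np.array([-5.0, 0.0])))  # further deprivation: negative reward

    Under this definition, maximizing the sum of rewards and minimizing deviation from the setpoint become the same objective, which is the equivalence the abstract refers to.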

  1. Innovation Research in E-Learning

    NASA Astrophysics Data System (ADS)

    Wu, Bing; Xu, WenXia; Ge, Jun

    This study is a productivity review of the literature on innovation research in E-Learning gleaned from the SSCI and SCIE databases. The results indicate that the number of publications on innovation research in E-Learning has been growing since 2005. England is the leading country in this research area, and analysis of publication years shows output increasing to a peak of 25% of the total in 2010. The main source title is the British Journal of Educational Technology, and the dominant subject areas are Education & Educational Research; Computer Science, Interdisciplinary Applications; and Computer Science, Software Engineering. The studies are mainly conceptual and empirical works that examine E-Learning from the perspective of innovation diffusion theory; their limitations and directions for future research are also discussed.


  2. Do Thinking Styles Matter in the Use of and Attitudes toward Computing and Information Technology among Hong Kong University Students?

    ERIC Educational Resources Information Center

    Zhang, Li-Fang; He, Yunfeng

    2003-01-01

    In the present study, the thinking styles as defined in Sternberg's theory of mental self-government are tested against yet another domain relevant to student learning. This domain is students' knowledge and use of as well as their attitudes toward the use of computing and information technology (CIT) in education. One hundred and ninety-three (75…

  3. Intelligent Machines in the 21st Century: Automating the Processes of Inference and Inquiry

    NASA Technical Reports Server (NTRS)

    Knuth, Kevin H.

    2003-01-01

    The last century saw the application of Boolean algebra toward the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. However, modern intelligent machines work by inferring knowledge using only their pre-programmed prior knowledge and the data provided. They lack the ability to ask questions, or request data that would aid their inferences. Recent advances in understanding the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we identified the algebra of questions as the free distributive algebra, which now allows us to work with questions in a way analogous to that in which Boolean algebra enables us to work with logical statements. In this paper we describe this logic of inference and inquiry using the mathematics of partially ordered sets and the scaffolding of lattice theory, discuss the far-reaching implications of the methodology, and demonstrate its application with current examples in machine learning. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them to not only make inferences from data, but also decide which question to ask, experiment to perform, or measurement to take, given what they have learned and what they are designed to understand.

  4. Information Processing Capacity of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-07-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
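
    A minimal sketch of how such a capacity can be estimated in practice, under an assumed setup (a small random tanh network driven by i.i.d. input, with delayed copies of the input as targets): the capacity for a target z, C = 1 - min_w <(z - w·x)²>/<z²>, is estimated by ordinary linear regression of the states onto z, and summing capacities over an orthogonal family of targets is bounded by the number of state variables.

        import numpy as np

        rng = np.random.default_rng(1)
        N, T = 20, 5000
        W = rng.normal(scale=0.9 / np.sqrt(N), size=(N, N))   # random recurrent weights
        win = rng.normal(size=N)                               # input weights
        u = rng.uniform(-1, 1, T)                              # i.i.d. input signal

        x = np.zeros((T, N))
        for t in range(1, T):
            x[t] = np.tanh(W @ x[t - 1] + win * u[t])          # driven dynamical system

        def capacity(delay, washout=100):
            z = u[washout - delay:T - delay]                   # target: input delayed by `delay`
            X = x[washout:]
            w, *_ = np.linalg.lstsq(X, z, rcond=None)          # best linear readout
            resid = z - X @ w
            return 1.0 - np.mean(resid ** 2) / np.mean(z ** 2)

        caps = [capacity(d) for d in range(1, 15)]
        print([round(c, 2) for c in caps], "summed:", round(sum(caps), 2))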

  5. Information Processing Capacity of Dynamical Systems

    PubMed Central

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-01-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory. PMID:22816038

  6. Goal-directed decision making as probabilistic inference: A computational framework and potential neural correlates

    PubMed Central

    Solway, A.; Botvinick, M.

    2013-01-01

    Recent work has given rise to the view that reward-based decision making is governed by two key controllers: a habit system, which stores stimulus-response associations shaped by past reward, and a goal-oriented system that selects actions based on their anticipated outcomes. The current literature provides a rich body of computational theory addressing habit formation, centering on temporal-difference learning mechanisms. Less progress has been made toward formalizing the processes involved in goal-directed decision making. We draw on recent work in cognitive neuroscience, animal conditioning, cognitive and developmental psychology and machine learning, to outline a new theory of goal-directed decision making. Our basic proposal is that the brain, within an identifiable network of cortical and subcortical structures, implements a probabilistic generative model of reward, and that goal-directed decision making is effected through Bayesian inversion of this model. We present a set of simulations implementing the account, which address benchmark behavioral and neuroscientific findings, and which give rise to a set of testable predictions. We also discuss the relationship between the proposed framework and other models of decision making, including recent models of perceptual choice, to which our theory bears a direct connection. PMID:22229491

  7. Learning classification trees

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1991-01-01

    Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. How a tree learning algorithm can be derived from Bayesian decision theory is outlined. This introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule turns out to be similar to Quinlan's information gain splitting rule, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, Quinlan's C4, and Breiman et al.'s CART show that the full Bayesian algorithm is consistently as good as, or more accurate than, these other approaches, though at a computational price.
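
    For reference, the information-gain splitting rule that the Bayesian criterion is compared against can be sketched in a few lines of Python; the labels and the candidate split below are made-up illustrative data, not from the paper.

        import numpy as np
        from collections import Counter

        def entropy(labels):
            counts = np.array(list(Counter(labels).values()), dtype=float)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def information_gain(labels, groups):
            # reduction in entropy obtained by splitting `labels` into the index `groups`
            n = len(labels)
            cond = sum(len(g) / n * entropy([labels[i] for i in g]) for g in groups)
            return entropy(labels) - cond

        labels = ["yes", "yes", "no", "no", "yes", "no"]
        split = [[0, 1, 4], [2, 3, 5]]        # a candidate binary split on some attribute
        print(f"information gain: {information_gain(labels, split):.3f}")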

  8. Emerging Affordances in Telecollaborative Multimodal Interactions

    ERIC Educational Resources Information Center

    Dey-Plissonneau, Aparajita; Blin, Françoise

    2016-01-01

    Drawing on Gibson's (1977) theory of affordances, Computer-Assisted Language Learning (CALL) affordances are a combination of technological, social, educational, and linguistic affordances (Blin, 2016). This paper reports on a preliminary study that sought to identify the emergence of affordances during an online video conferencing session between…

  9. Law of Large Numbers: The Theory, Applications and Technology-Based Education

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Christou, Nicolas; Gould, Robert

    2009-01-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information…

  10. Characterizing Representational Learning: A Combined Simulation and Tutorial on Perturbation Theory

    ERIC Educational Resources Information Center

    Kohnle, Antje; Passante, Gina

    2017-01-01

    Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them ("representational competence") is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of…

  11. Simulation/Gaming and the Acquisition of Communicative Competence in Another Language.

    ERIC Educational Resources Information Center

    Garcia-Carbonell, Amparo; Rising, Beverly; Montero, Begona; Watts, Frances

    2001-01-01

    Discussion of communicative competence in second language acquisition focuses on a theoretical and practical meshing of simulation and gaming methodology with theories of foreign language acquisition, including task-based learning, interaction, and comprehensible input. Describes experiments conducted with computer-assisted simulations in…

  12. a Radical Collaborative Approach: Developing a Model for Learning Theory, Human-Based Computation and Participant Motivation in a Rock-Art Heritage Application

    NASA Astrophysics Data System (ADS)

    Haubt, R.

    2016-06-01

    This paper explores a Radical Collaborative Approach in the global and centralized Rock-Art Database project to find new ways to look at rock-art by making information more accessible and more visible through public contributions. It looks at rock-art through the Key Performance Indicator (KPI), identified with the latest Australian State of the Environment Reports to help develop a better understanding of rock-art within a broader Cultural and Indigenous Heritage context. Using a practice-led approach the project develops a conceptual collaborative model that is deployed within the RADB Management System. Exploring learning theory, human-based computation and participant motivation the paper develops a procedure for deploying collaborative functions within the interface design of the RADB Management System. The paper presents the results of the collaborative model implementation and discusses considerations for the next iteration of the RADB Universe within an Agile Development Approach.

  13. The Convergence of Intelligences

    NASA Astrophysics Data System (ADS)

    Diederich, Joachim

    Minsky (1985) argued an extraterrestrial intelligence may be similar to ours despite very different origins. ``Problem- solving'' offers evolutionary advantages and individuals who are part of a technical civilisation should have this capacity. On earth, the principles of problem-solving are the same for humans, some primates and machines based on Artificial Intelligence (AI) techniques. Intelligent systems use ``goals'' and ``sub-goals'' for problem-solving, with memories and representations of ``objects'' and ``sub-objects'' as well as knowledge of relations such as ``cause'' or ``difference.'' Some of these objects are generic and cannot easily be divided into parts. We must, therefore, assume that these objects and relations are universal, and a general property of intelligence. Minsky's arguments from 1985 are extended here. The last decade has seen the development of a general learning theory (``computational learning theory'' (CLT) or ``statistical learning theory'') which equally applies to humans, animals and machines. It is argued that basic learning laws will also apply to an evolved alien intelligence, and this includes limitations of what can be learned efficiently. An example from CLT is that the general learning problem for neural networks is intractable, i.e. it cannot be solved efficiently for all instances (it is ``NP-complete''). It is the objective of this paper to show that evolved intelligences will be constrained by general learning laws and will use task-decomposition for problem-solving. Since learning and problem-solving are core features of intelligence, it can be said that intelligences converge despite very different origins.

  14. Computational Approaches to Chemical Hazard Assessment

    PubMed Central

    Luechtefeld, Thomas; Hartung, Thomas

    2018-01-01

    Summary Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration Evaluation Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769

  15. Computational psychiatry

    PubMed Central

    Montague, P. Read; Dolan, Raymond J.; Friston, Karl J.; Dayan, Peter

    2013-01-01

    Computational ideas pervade many areas of science and have an integrative explanatory role in neuroscience and cognitive science. However, computational depictions of cognitive function have had surprisingly little impact on the way we assess mental illness because diseases of the mind have not been systematically conceptualized in computational terms. Here, we outline goals and nascent efforts in the new field of computational psychiatry, which seeks to characterize mental dysfunction in terms of aberrant computations over multiple scales. We highlight early efforts in this area that employ reinforcement learning and game theoretic frameworks to elucidate decision-making in health and disease. Looking forwards, we emphasize a need for theory development and large-scale computational phenotyping in human subjects. PMID:22177032

  16. Application of the SP theory of intelligence to the understanding of natural vision and the development of computer vision.

    PubMed

    Wolff, J Gerard

    2014-01-01

    The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.
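
    The run-length-encoding idea mentioned above, that compressing a uniform region leaves only the positions where the signal changes (i.e., candidate edges), can be illustrated with a small Python sketch; the scanline data is an assumption for illustration.

        def run_length_encode(row):
            runs, start = [], 0
            for i in range(1, len(row) + 1):
                if i == len(row) or row[i] != row[start]:
                    runs.append((row[start], i - start))   # (value, run length)
                    start = i
            return runs

        scanline = [0, 0, 0, 0, 1, 1, 1, 0, 0]
        runs = run_length_encode(scanline)

        edges, pos = [], 0
        for value, length in runs[:-1]:
            pos += length
            edges.append(pos)                              # boundary between runs = edge location

        print("runs:", runs)                 # [(0, 4), (1, 3), (0, 2)]
        print("edges at columns:", edges)    # [4, 7]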

  17. Comprehensive survey of deep learning in remote sensing: theories, tools, and challenges for the community

    NASA Astrophysics Data System (ADS)

    Ball, John E.; Anderson, Derek T.; Chan, Chee Seng

    2017-10-01

    In recent years, deep learning (DL), a rebranding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Although remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, it inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should not only be aware of advancements such as DL, but also be leading researchers in this area. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modeling physical phenomena, (iii) big data, (iv) nontraditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems.

  18. Games and Diabetes: A Review Investigating Theoretical Frameworks, Evaluation Methodologies, and Opportunities for Design Grounded in Learning Theories.

    PubMed

    Lazem, Shaimaa; Webster, Mary; Holmes, Wayne; Wolf, Motje

    2015-09-02

    Here we review 18 articles that describe the design and evaluation of 1 or more games for diabetes from technical, methodological, and theoretical perspectives. We undertook searches covering the period 2010 to May 2015 in the ACM, IEEE, Journal of Medical Internet Research, Studies in Health Technology and Informatics, and Google Scholar online databases using the keywords "children," "computer games," "diabetes," "games," "type 1," and "type 2" in various Boolean combinations. The review sets out to establish, for future research, an understanding of the current landscape of digital games designed for children with diabetes. We briefly explored the use and impact of well-established learning theories in such games. The most frequently mentioned theoretical frameworks were social cognitive theory and social constructivism. Due to the limitations of the reported evaluation methodologies, little evidence was found to support the strong promise of games for diabetes. Furthermore, we could not establish a relation between design features and the game outcomes. We argue that an in-depth discussion about the extent to which learning theories could and should be manifested in the design decisions is required. © 2015 Diabetes Technology Society.

  19. A quantitative theory of the functions of the hippocampal CA3 network in memory

    PubMed Central

    Rolls, Edmund T.

    2013-01-01

    A quantitative computational theory of the operation of the hippocampal CA3 system as an autoassociation or attractor network used in episodic memory system is described. In this theory, the CA3 system operates as a single attractor or autoassociation network to enable rapid, one-trial, associations between any spatial location (place in rodents, or spatial view in primates) and an object or reward, and to provide for completion of the whole memory during recall from any part. The theory is extended to associations between time and object or reward to implement temporal order memory, also important in episodic memory. The dentate gyrus (DG) performs pattern separation by competitive learning to produce sparse representations suitable for setting up new representations in CA3 during learning, producing for example neurons with place-like fields from entorhinal cortex grid cells. The dentate granule cells produce by the very small number of mossy fiber (MF) connections to CA3 a randomizing pattern separation effect important during learning but not recall that separates out the patterns represented by CA3 firing to be very different from each other, which is optimal for an unstructured episodic memory system in which each memory must be kept distinct from other memories. The direct perforant path (pp) input to CA3 is quantitatively appropriate to provide the cue for recall in CA3, but not for learning. Tests of the theory including hippocampal subregion analyses and hippocampal NMDA receptor knockouts are described, and support the theory. PMID:23805074
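
    The pattern-completion property of an autoassociative attractor network can be illustrated with a generic Hopfield-style sketch in Python (this is not Rolls' quantitative CA3 model; network size, loading, and the degraded cue are assumptions for illustration).

        import numpy as np

        rng = np.random.default_rng(2)
        N, P = 200, 5
        patterns = rng.choice([-1, 1], size=(P, N))   # stored memories

        W = (patterns.T @ patterns) / N               # Hebbian one-shot storage
        np.fill_diagonal(W, 0.0)

        cue = patterns[0].copy()
        cue[: N // 2] = rng.choice([-1, 1], N // 2)   # degrade half of the recall cue

        state = cue
        for _ in range(10):                           # recurrent retrieval dynamics
            state = np.sign(W @ state)
            state[state == 0] = 1

        overlap = np.mean(state == patterns[0])
        print(f"overlap with the stored memory after recall: {overlap:.2f}")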

  20. Designing contributing student pedagogies to promote students' intrinsic motivation to learn

    NASA Astrophysics Data System (ADS)

    Herman, Geoffrey L.

    2012-12-01

    In order to maximize the effectiveness of our pedagogies, we must understand how our pedagogies align with prevailing theories of cognition and motivation and design our pedagogies according to this understanding. When implementing Contributing Student Pedagogies (CSPs), students are expected to make meaningful contributions to the learning of their peers, and consequently, instructors inherently give students power and control over elements of the class. With this loss of power, instructors will become more aware that the quality of the learning environment will depend on the level of students' motivation and engagement rather than the instructor's mastery of content or techniques. Given this greater reliance on student motivation, we will discuss how motivation theories such as Self-Determination Theory (SDT) match and support the use of CSP and how CSP can be used to promote students' intrinsic motivation (IM) to learn. We conclude with examples of how we use principles of SDT to guide our design and use of CSP. We will particularly focus on how we changed the discussion sections of a large, required, sophomore-level class on digital logic and computer organization at a large, research university at relatively low-cost to the presiding class instructor.

  1. Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach.

    PubMed

    Ramakrishnan, Raghunathan; Dral, Pavlo O; Rupp, Matthias; von Lilienfeld, O Anatole

    2015-05-12

    Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k isomers of C7H10O2 we present numerical evidence that chemical accuracy can be reached. We also predict electron correlation energy in post Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semiempirical quantum chemistry and machine learning models trained on 1 and 10% of 134k organic molecules, to reproduce enthalpies of all remaining molecules at density functional theory level of accuracy.
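
    The composite Δ-learning strategy, predicting the expensive property as the cheap baseline plus a learned correction, can be sketched as follows; the synthetic data and plain ridge regression stand in for real molecular descriptors and the kernel models typically used, so everything below is illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        n, d = 500, 8
        X = rng.normal(size=(n, d))                      # stand-in molecular descriptors
        true_w = rng.normal(size=d)
        E_high = X @ true_w + 0.3 * np.sin(X[:, 0])      # "expensive" reference property
        E_low = 0.8 * (X @ true_w)                       # cheap approximate method (biased)

        # learn the correction Delta = E_high - E_low with ridge regression on a training split
        train, test = slice(0, 300), slice(300, None)
        lam = 1e-2
        A = X[train].T @ X[train] + lam * np.eye(d)
        w_delta = np.linalg.solve(A, X[train].T @ (E_high[train] - E_low[train]))

        E_pred = E_low[test] + X[test] @ w_delta          # cheap method + learned correction
        mae = np.mean(np.abs(E_pred - E_high[test]))
        base = np.mean(np.abs(E_low[test] - E_high[test]))
        print(f"MAE without correction: {base:.3f}, with Delta-correction: {mae:.3f}")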

  2. Literature Review of Cloud Based E-learning Adoption by Students: State of the Art and Direction for Future Work

    NASA Astrophysics Data System (ADS)

    Hassan Kayali, Mohammad; Safie, Nurhizam; Mukhtar, Muriati

    2016-11-01

    Cloud computing represents a paradigm shift in information technology. Most studies of the cloud are business related, while studies of cloud-based e-learning are few. The field is still in its infancy, and researchers have used several adoption theories to explore its dimensions. The purpose of this paper is to review and integrate the literature to understand the current state of cloud-based e-learning adoption. A total of 312 articles were extracted from ScienceDirect, Emerald, and IEEE. Screening was applied to retain only articles related to cloud-based e-learning: 231 were removed because they concerned business organizations, and a further 63 were removed because they were purely technical, leaving 18 articles included in this paper. A frequency analysis was conducted to identify the most common factors, theories, statistical software, respondents, and countries of study. The findings show that usefulness and ease of use are the most frequent factors, and TAM is the most prevalent adoption theory in the literature. The mean number of respondents in the reviewed studies is 377, and Malaysia is the most frequently studied country for cloud-based e-learning. Studies of cloud-based e-learning remain few, and more empirical work is needed.

  3. Denuded Data! Grounded Theory Using the NUDIST Computer Analysis Program: In Researching the Challenge to Teacher Self-Efficacy Posed by Students with Learning Disabilities in Australian Education.

    ERIC Educational Resources Information Center

    Burroughs-Lange, Sue G.; Lange, John

    This paper evaluates the effects of using the NUDIST (Non-numerical, Unstructured Data Indexing, Searching and Theorising) computer program to organize coded, qualitative data. The use of the software is discussed within the context of the study for which it was used: an Australian study that aimed to develop a theoretical understanding of the…

  4. Proceedings of Selected Research Paper Presentations at the 1988 Convention of the Association for Educational Communications and Technology and Sponsored by the Research and Theory Division (10th, New Orleans, Louisiana, January 14-19, 1988).

    ERIC Educational Resources Information Center

    Simonson, Michael R., Ed.; Frederick, Jacqueline K., Ed.

    1988-01-01

    The 54 papers in this volume represent some of the most current thinking in educational communications and technology. Individual papers address the following topics: feedback in computer-assisted instruction (CAI); cognitive style and cognitive strategies in CAI; persuasive film-making; learning strategies; computer technology and children's word…

  5. Culture and Cognition in Information Technology Education

    ERIC Educational Resources Information Center

    Holvikivi, Jaana

    2007-01-01

    This paper aims at explaining the outcomes of information technology education for international students using anthropological theories of cultural schemas. Even though computer science and engineering are usually assumed to be culture-independent, the practice in classrooms seems to indicate that learning patterns depend on culture. The…

  6. A "Language Lab" for Architectural Design.

    ERIC Educational Resources Information Center

    Mackenzie, Arch; And Others

    This paper discusses a "language lab" strategy in which traditional studio learning may be supplemented by language lessons using computer graphics techniques to teach architectural grammar, a body of elements and principles that govern the design of buildings belonging to a particular architectural theory or style. Two methods of…

  7. Computing on Encrypted Data: Theory and Application

    DTIC Science & Technology

    2016-01-01

    [Only fragments of this DTIC record survived extraction (contract FA8750-11-2-0225; author Shafi…).] The recoverable text mentions a distance decoding assumption and defines the abbreviations used in the report: GCD is greatest common divisors, LWE is learning with errors, and NTRU is the N-th order truncated ring encryption scheme. A further fragment notes that the minimum distance between two lattice points is equal to the length of the shortest nonzero lattice vector.

  8. Reframing clinical workplace learning using the theory of distributed cognition.

    PubMed

    Pimmer, Christoph; Pachler, Norbert; Genewein, Urs

    2013-09-01

    In medicine, knowledge is embodied and socially, temporally, spatially, and culturally distributed between actors and their environment. In addition, clinicians increasingly are using technology in their daily work to gain and share knowledge. Despite these characteristics, surprisingly few studies have incorporated the theory of distributed cognition (DCog), which emphasizes how cognition is distributed in a wider system in the form of multimodal representations (e.g., clinical images, speech, gazes, and gestures) between social actors (e.g., doctors and patients) in the physical environment (e.g., with technological instruments and computers). In this article, the authors provide an example of an interaction between medical actors. Using that example, they then introduce the important concepts of the DCog theory, identifying five characteristics of clinical representations-that they are interwoven, co-constructed, redundantly accessed, intersubjectively shared, and substantiated-and discuss their value for learning. By contrasting these DCog perspectives with studies from the field of medical education, the authors argue that researchers should focus future medical education scholarship on the ways in which medical actors use and connect speech, bodily movements (e.g., gestures), and the visual and haptic structures of their own bodies and of artifacts, such as technological instruments and computers, to construct complex, multimodal representations. They also argue that future scholarship should "zoom in" on detailed, moment-by-moment analysis and, at the same time, "zoom out" following the distribution of cognition through an overall system to develop a more integrated view of clinical workplace learning.

  9. Dataset on Investigating the role of onsite learning in the optimisation of craft gang's productivity in the construction industry.

    PubMed

    Ugulu, Rex Asibuodu; Allen, Stephen

    2017-12-01

    The data presented in this article are original data on "Investigating the role of onsite learning in the optimisation of craft gang's productivity in the construction industry". The article describes the constraints influencing craft gangs' productivity and the influence of onsite learning on blockwork craft gangs' productivity. It also presents the method of data collection, which used semi-structured interviews and observation to gather data from construction organisations. We provide statistics on the most important constraints affecting craft gangs' productivity using 3-D bar charts. In addition, we compute correlation coefficients and a regression model of the influence of onsite learning on craft gangs' productivity, using man-hours as the dependent variable. The relationship between blockwork inputs and cycle numbers was determined at the 5% significance level. Finally, we present information on the application of learning curve theory using the unit straight-line model equations and compute the learning rate of the observed craft gangs' repetitive blockwork.
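
    The unit (Wright) straight-line learning curve model referred to here takes the form y_x = a·x^b, where x is the cycle number, y_x the man-hours for that cycle, and the learning rate is 2^b; a Python sketch of fitting it is below, with illustrative man-hour figures rather than the study's data.

        import numpy as np

        cycles = np.arange(1, 9)
        man_hours = np.array([10.0, 8.9, 8.3, 7.9, 7.6, 7.4, 7.2, 7.1])   # assumed observations

        # straight line in log-log space: log y = log a + b log x
        b, log_a = np.polyfit(np.log(cycles), np.log(man_hours), 1)
        learning_rate = 2.0 ** b

        print(f"a = {np.exp(log_a):.2f} man-hours, b = {b:.3f}, "
              f"learning rate = {learning_rate:.1%} (e.g. ~90% means a 10% saving per doubling)")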

  10. The unrealized promise of infant statistical word-referent learning

    PubMed Central

    Smith, Linda B.; Suanda, Sumarga H.; Yu, Chen

    2014-01-01

    Recent theory and experiments offer a new solution as to how infant learners may break into word learning, by using cross-situational statistics to find the underlying word-referent mappings. Computational models demonstrate the in-principle plausibility of this statistical learning solution and experimental evidence shows that infants can aggregate and make statistically appropriate decisions from word-referent co-occurrence data. We review these contributions and then identify the gaps in current knowledge that prevent a confident conclusion about whether cross-situational learning is the mechanism through which infants break into word learning. We propose an agenda to address that gap that focuses on detailing the statistics in the learning environment and the cognitive processes that make use of those statistics. PMID:24637154

  11. Intelligent Learning System using cognitive science theory and artificial intelligence methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cristensen, D.L.

    1986-01-01

    This dissertation presents a theoretical model of an Intelligent Learning System (ILS). The approach views intelligent computer-based instruction at a curricular level with an educational-theory base, rather than at the conventional instruction-only level. The ILS is divided into two components: (1) a macro (curricular) level and (2) a micro (instructional) level (MAIS). The primary purpose of the ILS macro level is to establish the initial conditions of learning by considering individual-difference variables within the specification of the curriculum content domain. Second, the ILS macro level iteratively updates the conditions of learning as the individual student progresses through the given curriculum. The term dynamic is used to describe the expert tutor that establishes and monitors the conditions of instruction between the ILS macro level and the micro level. As the student progresses through the instruction, appropriate information is sent back continuously to the macro level to constantly improve decision making for succeeding conditions of instruction.

  12. Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition

    NASA Astrophysics Data System (ADS)

    Fitch, W. Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.
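
    The computational point about supra-regular capacities can be illustrated with a short Python sketch: the language a^n b^n is recognizable with a push-down stack (here reduced to a counter), but not by any finite-state machine for unbounded n; the function and test strings are illustrative.

        def accepts_anbn(s: str) -> bool:
            stack = 0
            seen_b = False
            for ch in s:
                if ch == "a":
                    if seen_b:
                        return False       # an 'a' after a 'b' breaks the a^n b^n shape
                    stack += 1             # push
                elif ch == "b":
                    seen_b = True
                    if stack == 0:
                        return False
                    stack -= 1             # pop
                else:
                    return False
            return seen_b and stack == 0

        print([accepts_anbn(s) for s in ["ab", "aaabbb", "aabbb", "abab", ""]])
        # [True, True, False, False, False]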

  13. Toward a computational framework for cognitive biology: unifying approaches from cognitive neuroscience and comparative cognition.

    PubMed

    Fitch, W Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology. Copyright © 2014. Published by Elsevier B.V.

  14. Electronic Structure Theory | Materials Science | NREL

    Science.gov Websites

    [Website fragment; only part of the page text survived extraction.] The page describes using electronic structure theory to design and discover materials for energy applications. Key research areas include Materials by Design: NREL leads the U.S. Department of Energy's Center for Next Generation of Materials by Design, which incorporates metastability and synthesizability.

  15. A Wittgenstein Approach to the Learning of OO-Modeling

    ERIC Educational Resources Information Center

    Holmboe, Christian

    2004-01-01

    The paper uses Ludwig Wittgenstein's theories about the relationship between thought, language, and objects of the world to explore the assumption that OO-thinking resembles natural thinking. The paper imports from research in linguistic philosophy to computer science education research. I show how UML class diagrams (i.e., an artificial…

  16. Management Education: An Experimental Course.

    ERIC Educational Resources Information Center

    Gutelius, Paul Payne

    The thesis describes the design, implementation, and evaluation of a course in the theory and practice of management. It gives an appraisal of programmed learning techniques and compares three methods of teaching management--by readings, by cases, and by computer gaming. Additionally, it relates student reactions to the opportunity to select one…

  17. Teaching Advanced Vehicle Dynamics Using a Project Based Learning (PBL) Approach

    ERIC Educational Resources Information Center

    Redkar, Sangram

    2012-01-01

    This paper presents an interesting teaching experiment carried out at XXX University. The author offered a new course in computational/analytical vehicle dynamics to senior undergraduate students, graduate students and practicing engineers. The objective of the course was to present vehicle dynamics theory with practical applications using…

  18. Using a Commercial Simulator to Teach Sorption Separations

    ERIC Educational Resources Information Center

    Wankat, Phillip C.

    2006-01-01

    The commercial simulator Aspen Chromatography was used in the computer laboratory of a dual-level course. The lab assignments used a cookbook approach to teach basic simulator operation and open-ended exploration to understand adsorption. The students learned theory better than in previous years despite having less lecture time. Students agreed…

  19. How Can Intelligent CAL Better Adapt to Learners?

    ERIC Educational Resources Information Center

    Boyd, Gary McI.; Mitchell, P. David

    1992-01-01

    Discusses intelligent computer-aided learning (ICAL) support systems and considers learner characteristics as elements of ICAL student models. Cybernetic theory and attribute-treatment results are discussed, six components of a student model for tutoring are described, and methods for determining the student's model of the tutor are examined. (22…

  20. Inquiry-Based Learning in Remote Sensing: A Space Balloon Educational Experiment

    ERIC Educational Resources Information Center

    Mountrakis, Giorgos; Triantakonstantis, Dimitrios

    2012-01-01

    Teaching remote sensing in higher education has been traditionally restricted in lecture and computer-aided laboratory activities. This paper presents and evaluates an engaging inquiry-based educational experiment. The experiment was incorporated in an introductory remote sensing undergraduate course to bridge the gap between theory and…

  1. The Multiple-Choice Concept Map (MCCM): An Interactive Computer-Based Assessment Method

    ERIC Educational Resources Information Center

    Sas, Ioan Ciprian

    2010-01-01

    This research attempted to bridge the gap between cognitive psychology and educational measurement (Mislevy, 2008; Leighton & Gierl, 2007; Nichols, 1994; Messick, 1989; Snow & Lohman, 1989) by using cognitive theories from working memory (Baddeley, 1986; Miyake & Shah, 1999; Grimley & Banner, 2008), multimedia learning (Mayer, 2001), and cognitive…

  2. Technology for Teachers. 6th Edition.

    ERIC Educational Resources Information Center

    Volker, Roger; Simonson, Michael

    This book helps teachers learn how to use and make educational media, covering traditional and new media such as computer laboratories, authentic assessment, theory bases, and hypermedia. Chapter topics progress from simple to complex. Each chapter includes clearly stated behavioral objectives that provide a study guide for students and can serve…

  3. A Coding Scheme to Analyse the Online Asynchronous Discussion Forums of University Students

    ERIC Educational Resources Information Center

    Biasutti, Michele

    2017-01-01

    The current study describes the development of a content analysis coding scheme to examine transcripts of online asynchronous discussion groups in higher education. The theoretical framework comprises the theories regarding knowledge construction in computer-supported collaborative learning (CSCL) based on a sociocultural perspective. The coding…

  4. Effects of Using Historical Microworlds on Conceptual Change: A P-Prim Analysis

    ERIC Educational Resources Information Center

    Masson, Steve; Legendre, Marie-Francoise

    2008-01-01

    This study examines the effects of using historical microworlds on conceptual change in mechanics. Historical microworlds combine history of science and microworld through a computer based interactive learning environment that respects and represents historic conceptions or theories. Six grade 5 elementary students participated individually to…

  5. Design and Performance Frameworks for Constructing Problem-Solving Simulations

    ERIC Educational Resources Information Center

    Stevens, Rons; Palacio-Cayetano, Joycelin

    2003-01-01

    Rapid advancements in hardware, software, and connectivity are helping to shorten the times needed to develop computer simulations for science education. These advancements, however, have not been accompanied by corresponding theories of how best to design and use these technologies for teaching, learning, and testing. Such design frameworks…

  6. Optimizing physicians' instruction of PACS through e-learning: cognitive load theory applied.

    PubMed

    Devolder, P; Pynoo, B; Voet, T; Adang, L; Vercruysse, J; Duyck, P

    2009-03-01

    This article outlines the strategy used by our hospital to maximize the knowledge transfer to referring physicians on using a picture archiving and communication system (PACS). We developed an e-learning platform underpinned by the cognitive load theory (CLT) so that in depth knowledge of PACS' abilities becomes attainable regardless of the user's prior experience with computers. The application of the techniques proposed by CLT optimizes the learning of the new actions necessary to obtain and manipulate radiological images. The application of cognitive load reducing techniques is explained with several examples. We discuss the need to safeguard the physicians' main mental processes to keep the patient's interests in focus. A holistic adoption of CLT techniques both in teaching and in configuration of information systems could be adopted to attain this goal. An overview of the advantages of this instruction method is given both on the individual and organizational level.

  7. Genetic Epidemiology and Public Health: The Evolution From Theory to Technology.

    PubMed

    Fallin, M Daniele; Duggal, Priya; Beaty, Terri H

    2016-03-01

    Genetic epidemiology represents a hybrid of epidemiologic designs and statistical models that explicitly consider both genetic and environmental risk factors for disease. It is a relatively new field in public health; the term was first coined only 35 years ago. In this short time, the field has been through a major evolution, changing from a field driven by theory, without the technology for genetic measurement or computational capacity to apply much of the designs and methods developed, to a field driven by rapidly expanding technology in genomic measurement and computational analyses while epidemiologic theory struggles to keep up. In this commentary, we describe 4 different eras of genetic epidemiology, spanning this evolution from theory to technology, what we have learned, what we have added to the broader field of public health, and what remains to be done. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Nyström type subsampling analyzed as a regularized projection

    NASA Astrophysics Data System (ADS)

    Kriukova, Galyna; Pereverzyev, Sergiy, Jr.; Tkachenko, Pavlo

    2017-07-01

    In statistical learning theory, Nyström-type subsampling methods are considered tools for dealing with big data. In this paper we consider Nyström subsampling as a special form of the projected Lavrentiev regularization, and study it using the approaches developed in regularization theory. As a result, we prove that the same capacity-independent learning rates that are guaranteed for standard algorithms running with quadratic computational complexity can be obtained with subquadratic complexity by the Nyström subsampling approach, provided that the subsampling size is chosen properly. We propose an a priori rule for choosing the subsampling size and an a posteriori strategy for dealing with uncertainty in that choice. The theoretical results are illustrated by numerical experiments.
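
    For orientation, a minimal sketch of Nyström-subsampled kernel regression is given below (synthetic data; the subsampling size m, kernel width, and regularization parameter are assumptions, and this simple subset-of-regressors solve is one common way to use the subsample, not the paper's exact construction). The cost is roughly O(n·m²) instead of the O(n³) of a full kernel solve.

        import numpy as np

        rng = np.random.default_rng(4)
        n, m, lam, gamma = 2000, 100, 1e-2, 2.0
        X = rng.uniform(-3, 3, size=(n, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

        def kernel(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)                    # Gaussian kernel

        landmarks = X[rng.choice(n, m, replace=False)]    # Nystrom subsample
        Knm = kernel(X, landmarks)                        # n x m
        Kmm = kernel(landmarks, landmarks)                # m x m

        # subset-of-regressors solution: alpha = (Kmn Knm + lam Kmm)^{-1} Kmn y
        alpha = np.linalg.solve(Knm.T @ Knm + lam * Kmm, Knm.T @ y)

        X_test = np.linspace(-3, 3, 200)[:, None]
        y_pred = kernel(X_test, landmarks) @ alpha
        rmse = np.sqrt(np.mean((y_pred - np.sin(X_test[:, 0])) ** 2))
        print(f"test RMSE vs. sin(x): {rmse:.3f}")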

  9. Using Interprofessional Learning for Continuing Education: Development and Evaluation of the Graduate Certificate Program in Health Professional Education for Clinicians.

    PubMed

    Henderson, Saras; Dalton, Megan; Cartmel, Jennifer

    2016-01-01

    Health professionals may be expert clinicians but do not automatically make effective teachers and need educational development. In response, a team of health academics at an Australian university developed and evaluated the continuing education Graduate Certificate in Health Professional Education Program using an interprofessional learning model. The model was informed by Collins' interactional expertise and Knowles' adult learning theories. The team collaboratively developed and taught four courses in the program. Blended learning methods such as web-based learning, face-to-face workshops, and online discussion forums were used. Twenty-seven multidisciplinary participants enrolled in the inaugural program. Focus group interviews, self-report questionnaires, and teacher observations were used to evaluate the program. Online learning motivated participants to learn in a collaborative virtual environment. The workshops conducted in an interprofessional environment promoted knowledge sharing and helped participants to better understand other discipline roles, so they could conduct clinical education within a broader health care team context. Work-integrated assessments supported learning relevance. The teachers, however, observed that some participants struggled because of a lack of computer skills. Although the interprofessional learning model promoted collaboration and flexibility, it is important that consideration be given to participants who are not computer literate. We therefore conducted a library and computer literacy workshop in orientation week, which helped. An interprofessional learning environment can assist health professionals to operate outside their "traditional silos", leading to a more collaborative approach to the provision of care. Our experience may assist other organizations in developing similar programs.

  10. Effects of an eHealth Literacy Intervention for Older Adults

    PubMed Central

    2011-01-01

    Background Older adults generally have low health and computer literacies, making it challenging for them to function well in the eHealth era where technology is increasingly being used in health care. Little is known about effective interventions and strategies for improving the eHealth literacy of the older population. Objective The objective of this study was to examine the effects of a theory-driven eHealth literacy intervention for older adults. Methods The experimental design was a 2 × 2 mixed factorial design with learning method (collaborative; individualistic) as the between-participants variable and time of measurement (pre; post) as the within-participants variable. A total of 146 older adults aged 56–91 (mean 69.99, SD 8.12) participated in this study during February to May 2011. The intervention involved 2 weeks of learning about using the National Institutes of Health’s SeniorHealth.gov website to access reliable health information. The intervention took place at public libraries. Participants were randomly assigned to either experimental condition (collaborative: n = 72; individualistic: n = 74). Results Overall, participants’ knowledge, skills, and eHealth literacy efficacy all improved significantly from pre to post intervention (P < .001 in all cases; effect sizes were >0.8 with statistical power of 1.00 even at the .01 level in all cases). When controlling for baseline differences, no significant main effect of the learning method was found on computer/Web knowledge, skills, or eHealth literacy efficacy. Thus, collaborative learning did not differ from individualistic learning in affecting the learning outcomes. No significant interaction effect of learning method and time of measurement was found. Group composition based on gender, familiarity with peers, or prior computer experience had no significant main or interaction effect on the learning outcomes. Regardless of the specific learning method used, participants had overwhelmingly positive attitudes toward the intervention and reported positive changes in participation in their own health care as a result of the intervention. Conclusions The findings provide strong evidence that the eHealth literacy intervention tested in this study, regardless of the specific learning method used, significantly improved knowledge, skills, and eHealth literacy efficacy from pre to post intervention, was positively perceived by participants, and led to positive changes in their own health care. Collaborative learning did not differ from individualistic learning in affecting the learning outcomes, suggesting the previously widely reported advantages of collaborative over individualistic learning may not be easily applied to the older population in informal settings, though several confounding factors might have contributed to this finding (ie, the largely inexperienced computer user composition of the study sample, potential instructor effect, and ceiling effect). Further research is necessary before a more firm conclusion can be drawn. These findings contribute to the literatures on adult learning, social interdependence theory, and health literacy. PMID:22052161

  11. Cellular Automata

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard

    1991-08-01

    Cellular automata, dynamic systems in which space and time are discrete, are yielding interesting applications in both the physical and natural sciences. The thirty-four contributions in this book cover many aspects of contemporary studies on cellular automata and include reviews, research reports, and guides to recent literature and available software. Chapters cover mathematical analysis; the structure of the space of cellular automata; learning rules with specified properties; cellular automata in biology, physics, chemistry, and computation theory; and generalizations of cellular automata in neural nets, Boolean nets, and coupled map lattices. Current work on cellular automata may be viewed as revolving around two central and closely related problems: the forward problem and the inverse problem. The forward problem concerns the description of properties of given cellular automata. Properties considered include reversibility, invariants, criticality, fractal dimension, and computational power. The role of cellular automata in computation theory is seen as a particularly exciting venue for exploring parallel computers as theoretical and practical tools in mathematical physics. The inverse problem, an area of study gaining prominence particularly in the natural sciences, involves designing rules that possess specified properties or perform specified tasks. A long-term goal is to develop a set of techniques that can find a rule or set of rules that can reproduce quantitative observations of a physical system. Studies of the inverse problem take up the organization and structure of the set of automata, in particular the parameterization of the space of cellular automata. Optimization and learning techniques, such as the genetic algorithm and adaptive stochastic cellular automata, are applied to find cellular automaton rules that model such physical phenomena as crystal growth or perform such adaptive-learning tasks as balancing an inverted pole. Howard Gutowitz is Collaborateur in the Service de Physique du Solide et Résonance Magnétique, Commissariat à l'Énergie Atomique, Saclay, France.
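
    To make the forward problem concrete, the following minimal sketch simulates an elementary one-dimensional, two-state cellular automaton and prints its space-time diagram. The rule number (110), lattice size, and boundary handling are illustrative choices, not taken from the book.

    ```python
    # Minimal sketch of an elementary (1-D, binary) cellular automaton; illustrative only.
    def step(cells, rule=110):
        """Apply one synchronous update with periodic boundaries, using Wolfram rule numbering."""
        n = len(cells)
        table = [(rule >> i) & 1 for i in range(8)]  # rule number -> neighborhood lookup table
        return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                for i in range(n)]

    cells = [0] * 40
    cells[20] = 1  # single seed cell
    for _ in range(20):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
    ```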

  12. Visual Perceptual Learning and Models.

    PubMed

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
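
    The reweighting account mentioned in the abstract can be illustrated with a toy simulation: a decision unit learns, via a delta rule, how strongly to weight each of several noisy sensory channels. The channel structure, learning rate, and nonlinearity below are illustrative assumptions, not the authors' model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, lr = 8, 0.05
    true_signal = np.array([1.0, 0.8, 0.0, 0.0, 0.0, 0.0, -0.3, 0.0])  # informative channels
    w = np.zeros(n_channels)                                            # decision weights

    for trial in range(2000):
        label = rng.choice([-1.0, 1.0])                            # stimulus category
        x = label * true_signal + rng.normal(0, 1.0, n_channels)   # noisy sensory responses
        decision = np.tanh(w @ x)                                  # graded decision variable
        w += lr * (label - decision) * x                           # delta-rule reweighting of channels

    print("learned weights:", np.round(w, 2))
    ```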

  13. The control of a manipulator by a computer model of the cerebellum.

    NASA Technical Reports Server (NTRS)

    Albus, J. S.

    1973-01-01

    Previous work by Albus (1971, 1972) on the theory of cerebellar function is extended to the application of a computer model of the cerebellum to manipulator control. Following a discussion of cerebellar function and of a perceptron analogy of the cerebellum, particularly in regard to learning, an electromechanical model of the cerebellum is considered in the form of an IBM 1800 computer connected to a Rancho Los Amigos arm with seven degrees of freedom. It is shown that the computer memory makes it possible to train the arm on a representative sample of the universe of possible states and to achieve satisfactory performance.
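
    As a rough, hypothetical illustration of the idea that a table-like memory can be trained on a sample of states, the sketch below learns a simple one-dimensional mapping by error-correcting updates of a lookup table. The task, discretisation, and parameters are invented for illustration and are not the IBM 1800 implementation described in the report.

    ```python
    # Toy sketch of training a lookup-table controller on a sample of states,
    # in the spirit of the memory-based cerebellar model described above.
    import math

    table = {}          # memory: discretised state -> stored command

    def key(x, res=0.05):
        return round(x / res)

    def train(samples, epochs=20, lr=0.5):
        for _ in range(epochs):
            for x, target in samples:
                k = key(x)
                old = table.get(k, 0.0)
                table[k] = old + lr * (target - old)   # error-correcting update

    # Illustrative task: learn a sine-like state-to-command mapping from sampled states.
    samples = [(x / 100.0, math.sin(x / 100.0 * math.pi)) for x in range(0, 101, 5)]
    train(samples)
    print(table[key(0.25)], "vs", math.sin(0.25 * math.pi))
    ```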

  14. Learning to Identify Local Flora with Human Feedback (Author’s Manuscript)

    DTIC Science & Technology

    2014-06-23


  15. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    PubMed

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
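
    The two valuation schemes contrasted in this article can be made concrete with a small numerical sketch; the gamble, the utility function, and the risk-aversion coefficient below are arbitrary illustrations.

    ```python
    import math

    # A simple gamble: (probability, payoff) pairs with illustrative numbers.
    gamble = [(0.5, 100.0), (0.3, 20.0), (0.2, -50.0)]

    # Expected utility: weight the utility of each payoff by its probability.
    u = lambda x: math.log(x + 100.0)          # an arbitrary concave utility function
    expected_utility = sum(p * u(x) for p, x in gamble)

    # Mean-variance: trade expected reward against reward variance.
    mean = sum(p * x for p, x in gamble)
    variance = sum(p * (x - mean) ** 2 for p, x in gamble)
    risk_aversion = 0.01                        # illustrative trade-off coefficient
    mean_variance_value = mean - risk_aversion * variance

    print(expected_utility, mean_variance_value)
    ```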

  16. A Direct-Learning Approach to Acquiring a Bimanual Tapping Skill.

    PubMed

    Michaels, Claire F; Gomes, Thábata V B; Benda, Rodolfo N

    2017-01-01

    The theory of direct learning (D. M. Jacobs & C. F. Michaels, 2007) has proven useful in understanding improvement in perception and exploratory action. Here the authors assess its usefulness for understanding the learning of a motor skill, bimanual tapping at a difficult phase relation. Twenty participants attempted to learn to tap with 2 index fingers at 2 Hz with a phase lag of 90° (i.e., with a right-right period of 500 ms and a right-left period of 125 ms). There were 30 trials, each with 50 tapping cycles. Computer-screen feedback informed participants of errors in both period and phase for each pair of taps. Participants differed dramatically in their success. Learning was assessed by identifying the succession of attractors capturing tapping over the experiment. A few participants' attractors migrated from antiphase to 90° with an appropriate period; others became attracted to a fixed right-left interval, rather than phase, with or without attraction to period. Changes in attractor loci were explained with mixed success by direct learning, inviting elaboration of the theory. The transition to interval attractors was understood as a change in intention, and was remarkable for its indifference to typical bimanual interactions.

  17. BELM: Bayesian extreme learning machine.

    PubMed

    Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J

    2011-03-01

    The theory of the extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is its lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; it obtains confidence intervals (CIs) without the need to apply computationally intensive methods such as the bootstrap; and it presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM on several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The achieved results show that the proposed approach produces competitive accuracy with some additional advantages, namely, automatic production of CIs, a reduced probability of model overfitting, and the use of a priori knowledge.
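
    For readers unfamiliar with ELM, the basic (non-Bayesian) construction can be sketched in a few lines: hidden-layer weights are drawn at random and fixed, and only the output weights are fit, here by ridge-regularized least squares. This is a generic illustration on synthetic data, not the authors' Bayesian ELM.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                      # toy inputs
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # toy targets

    n_hidden, ridge = 50, 1e-3
    W = rng.normal(size=(3, n_hidden))                 # random, fixed hidden-layer weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                             # hidden-layer activations

    # Only the output weights are learned, via ridge-regularized least squares.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    print("training RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
    ```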

  18. Homeostatic reinforcement learning for integrating reward collection and physiological stability

    PubMed Central

    Keramati, Mehdi; Gutkin, Boris

    2014-01-01

    Efficient regulation of internal homeostasis and defending it against perturbations requires adaptive behavioral strategies. However, the computational principles mediating the interaction between homeostatic and associative learning processes remain undefined. Here we use a definition of primary rewards, as outcomes fulfilling physiological needs, to build a normative theory showing how the learning of motivated behaviors may be modulated by internal states. Within this framework, we mathematically prove that seeking rewards is equivalent to the fundamental objective of physiological stability, defining the notion of physiological rationality of behavior. We further suggest a formal basis for temporal discounting of rewards by showing that discounting motivates animals to follow the shortest path in the space of physiological variables toward the desired setpoint. We also explain how animals learn to act predictively to preclude prospective homeostatic challenges, and several other behavioral patterns. Finally, we suggest a computational role for the interaction between the hypothalamus and the brain reward system. DOI: http://dx.doi.org/10.7554/eLife.04811.001 PMID:25457346
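
    A minimal sketch of the central idea, that reward can be defined as the reduction in distance between the internal state and its setpoint, is given below. The state variables, setpoint, and distance function are illustrative assumptions rather than the paper's formal model.

    ```python
    import math

    setpoint = {"glucose": 5.0, "temperature": 37.0}     # illustrative physiological setpoint

    def drive(state):
        """Distance of the internal state from the setpoint (a simple Euclidean drive)."""
        return math.sqrt(sum((state[k] - setpoint[k]) ** 2 for k in setpoint))

    def homeostatic_reward(state_before, state_after):
        """Reward = drive reduction: positive when an outcome moves the state toward the setpoint."""
        return drive(state_before) - drive(state_after)

    before = {"glucose": 3.0, "temperature": 37.0}
    after = {"glucose": 4.5, "temperature": 37.0}        # e.g., the internal state after eating
    print(homeostatic_reward(before, after))             # > 0: the outcome is rewarding
    ```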

  19. Modeling the behavioral substrates of associative learning and memory - Adaptive neural models

    NASA Technical Reports Server (NTRS)

    Lee, Chuen-Chien

    1991-01-01

    Three adaptive single-neuron models based on neural analogies of behavior modification episodes are proposed, which attempt to bridge the gap between psychology and neurophysiology. The proposed models capture the predictive nature of Pavlovian conditioning, which is essential to the theory of adaptive/learning systems. The models learn to anticipate the occurrence of a conditioned response before the presence of a reinforcing stimulus when training is complete. Furthermore, each model can find the most nonredundant and earliest predictor of reinforcement. The behavior of the models accounts for several aspects of basic animal learning phenomena in Pavlovian conditioning beyond previous related models. Computer simulations show how well the models fit empirical data from various animal learning paradigms.
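
    The predictive character of Pavlovian conditioning that such models capture is commonly formalized with error-correcting updates of associative strength. The classic Rescorla-Wagner rule below is offered only as a generic illustration (including a blocking demonstration), not as one of the three proposed single-neuron models.

    ```python
    # Rescorla-Wagner style update: associative strength grows in proportion to the
    # prediction error (reinforcement received minus reinforcement predicted).
    def rescorla_wagner(trials, lr=0.2):
        V = {}                                           # associative strength per stimulus
        for stimuli, reinforced in trials:
            prediction = sum(V.get(s, 0.0) for s in stimuli)
            error = (1.0 if reinforced else 0.0) - prediction
            for s in stimuli:
                V[s] = V.get(s, 0.0) + lr * error
        return V

    # Blocking demo: A is trained first, so the later compound AB leaves B weakly conditioned.
    trials = [(("A",), True)] * 20 + [(("A", "B"), True)] * 20
    print(rescorla_wagner(trials))
    ```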

  20. Minimalist Social-Affective Value for Use in Joint Action: A Neural-Computational Hypothesis

    PubMed Central

    Lowe, Robert; Almér, Alexander; Lindblad, Gustaf; Gander, Pierre; Michael, John; Vesper, Cordula

    2016-01-01

    Joint Action is typically described as social interaction that requires coordination among two or more co-actors in order to achieve a common goal. In this article, we put forward a hypothesis for the existence of a neural-computational mechanism of affective valuation that may be critically exploited in Joint Action. Such a mechanism would serve to facilitate coordination between co-actors permitting a reduction of required information. Our hypothesized affective mechanism provides a value function based implementation of Associative Two-Process (ATP) theory that entails the classification of external stimuli according to outcome expectancies. This approach has been used to describe animal and human action that concerns differential outcome expectancies. Until now it has not been applied to social interaction. We describe our Affective ATP model as applied to social learning consistent with an “extended common currency” perspective in the social neuroscience literature. We contrast this to an alternative mechanism that provides an example implementation of the so-called social-specific value perspective. In brief, our Social-Affective ATP mechanism builds upon established formalisms for reinforcement learning (temporal difference learning models) nuanced to accommodate expectations (consistent with ATP theory) and extended to integrate non-social and social cues for use in Joint Action. PMID:27601989

  1. Adaptive Importance Sampling for Control and Inference

    NASA Astrophysics Data System (ADS)

    Kappen, H. J.; Ruiz, H. C.

    2016-03-01

    Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
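
    The starting point for importance sampling in PI control can be illustrated with the naive Monte Carlo estimator of the Feynman-Kac formula, shown below for toy one-dimensional diffusion dynamics with a quadratic state cost. Efficient importance samplers (the subject of the paper) would instead sample from a controlled proposal and correct with a likelihood-ratio term, which is omitted here; all dynamics, costs, and parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, dt, n_samples, lam = 20, 0.05, 5000, 1.0   # horizon steps, step size, samples, temperature

    def path_cost(x0):
        """State cost accumulated along one rollout of uncontrolled, noise-driven 1-D dynamics."""
        x, cost = x0, 0.0
        for _ in range(T):
            x = x + rng.normal(0.0, np.sqrt(dt))   # uncontrolled diffusion step
            cost += (x ** 2) * dt                  # quadratic state cost
        return cost

    # Naive Feynman-Kac / path-integral estimate of the optimal cost-to-go from x0 = 1.
    costs = np.array([path_cost(1.0) for _ in range(n_samples)])
    print("PI estimate:", -lam * np.log(np.mean(np.exp(-costs / lam))))
    ```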

  2. Quantum Nash Equilibria and Quantum Computing

    NASA Astrophysics Data System (ADS)

    Fellman, Philip Vos; Post, Jonathan Vos

    In 2004, at the Fifth International Conference on Complex Systems, we drew attention to some remarkable findings by researchers at the Santa Fe Institute (Sato, Farmer and Akiyama, 2001) about hitherto unsuspected complexity in the Nash Equilibrium. As we progressed from these findings about heteroclinic Hamiltonians and chaotic transients hidden within the learning patterns of the simple rock-paper-scissors game to some related findings on the theory of quantum computing, one of the arguments we put forward was that, just as a number of new Nash equilibria were discovered in simple bi-matrix games in the late 1990s (Shubik and Quint, 1996; Von Stengel, 1997, 2000; McLennan and Park, 1999), we would begin to see new Nash equilibria discovered as the result of quantum computation. While actual quantum computers remain rather primitive (Toibman, 2004), and the theory of quantum computation seems to be advancing perhaps a bit more slowly than originally expected, there have nonetheless been a number of advances in computation, and some more radical advances in an allied field, quantum game theory (Huberman and Hogg, 2004), which are quite significant. In the course of this paper we will review a few of these discoveries and illustrate some of the characteristics of these new "Quantum Nash Equilibria". The full text of this research can be found at http://necsi.org/events/iccs6/viewpaper.php?id-234

  3. Computational and experimental single cell biology techniques for the definition of cell type heterogeneity, interplay and intracellular dynamics.

    PubMed

    de Vargas Roditi, Laura; Claassen, Manfred

    2015-08-01

    Novel technological developments enable the profiling of single-cell populations with respect to their spatial and molecular setup. These include single-cell sequencing, flow cytometry, and multiparametric imaging approaches, and they open unprecedented possibilities to learn about the heterogeneity, dynamics, and interplay of the different cell types which constitute tissues and multicellular organisms. Statistical and dynamic systems theory approaches have been applied to quantitatively describe a variety of cellular processes, such as transcription and cell signaling. Machine learning approaches have been developed to define cell types, their mutual relationships, and the differentiation hierarchies shaping heterogeneous cell populations, yielding insights into topics such as immune cell differentiation and tumor cell type composition. This combination of experimental and computational advances has opened perspectives towards learning predictive multi-scale models of heterogeneous cell populations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Grounded understanding of abstract concepts: The case of STEM learning.

    PubMed

    Hayes, Justin C; Kraemer, David J M

    2017-01-01

    Characterizing the neural implementation of abstract conceptual representations has long been a contentious topic in cognitive science. At the heart of the debate is whether the "sensorimotor" machinery of the brain plays a central role in representing concepts, or whether the involvement of these perceptual and motor regions is merely peripheral or epiphenomenal. The domain of science, technology, engineering, and mathematics (STEM) learning provides an important proving ground for sensorimotor (or grounded) theories of cognition, as concepts in science and engineering courses are often taught through laboratory-based and other hands-on methodologies. In this review of the literature, we examine evidence suggesting that sensorimotor processes strengthen learning associated with the abstract concepts central to STEM pedagogy. After considering how contemporary theories have defined abstraction in the context of semantic knowledge, we propose our own explanation for how body-centered information, as computed in sensorimotor brain regions and visuomotor association cortex, can form a useful foundation upon which to build an understanding of abstract scientific concepts, such as mechanical force. Drawing from theories in cognitive neuroscience, we then explore models elucidating the neural mechanisms involved in grounding intangible concepts, including Hebbian learning, predictive coding, and neuronal recycling. Empirical data on STEM learning through hands-on instruction are considered in light of these neural models. We conclude the review by proposing three distinct ways in which the field of cognitive neuroscience can contribute to STEM learning by bolstering our understanding of how the brain instantiates abstract concepts in an embodied fashion.

  5. Exploring the Human Element of Computer-Assisted Language Learning: An Iranian Context

    ERIC Educational Resources Information Center

    Fatemi Jahromi, Seyed Abolghassem; Salimi, Farimah

    2013-01-01

    Based on various theories of human agency (Ajzen, I. (2005). "Attitudes, personality and behavior" (2nd ed.). London: Open University Press; Davis, F.D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. "MIS Quarterly", 13, 319-340; Rogers, E.M. (1983). "Diffusion of…

  6. The Effect of Three-Dimensional Simulations on the Understanding of Chemical Structures and Their Properties

    ERIC Educational Resources Information Center

    Urhahne, Detlef; Nick, Sabine; Schanze, Sascha

    2009-01-01

    In a series of three experimental studies, the effectiveness of three-dimensional computer simulations to aid the understanding of chemical structures and their properties was investigated. Arguments for the usefulness of three-dimensional simulations were derived from Mayer's generative theory of multimedia learning. Simulations might lead to a…

  7. Sigmund Freud's Personality Theory: Learning Module Employing Computer-Assisted Instruction Technology.

    ERIC Educational Resources Information Center

    Saavedra, Jose M.

    This interactive module contains 33 windows of text and three graphics, in which Freud's topographical (unconscious, pre-conscious, and conscious) and structural (id, ego, and superego) models of the psyche are studied. Seventeen fill-in questions are interspersed within the text. The module stresses the importance of comprehending the concept of…

  8. What Artificial Grammar Learning Reveals about the Neurobiology of Syntax

    ERIC Educational Resources Information Center

    Petersson, Karl-Magnus; Folia, Vasiliki; Hagoort, Peter

    2012-01-01

    In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial…

  9. From Interaction to Intersubjectivity: Facilitating Online Group Discourse Processes

    ERIC Educational Resources Information Center

    Dennen, Vanessa Paz; Wieland, Kristina

    2007-01-01

    This article examines the online discourse that took place in representative threads from two classes, seeking to document indicators that students did or did not engage in co-construction of knowledge. Stahl's (2006) social theory of computer supported collaborative learning (CSCL) is used along with discourse analysis methods to examine these…

  10. Reality Is Our Laboratory: Communities of Practice in Applied Computer Science

    ERIC Educational Resources Information Center

    Rohde, M.; Klamma, R.; Jarke, M.; Wulf, V.

    2007-01-01

    The present paper presents a longitudinal study of the course "High-tech Entrepreneurship and New Media." The course design is based on socio-cultural theories of learning and considers the role of social capital in entrepreneurial networks. By integrating student teams into the communities of practice of local start-ups, we offer…

  11. Using Innate Visual Biases to Guide Face Learning in Natural Scenes: A Computational Investigation

    ERIC Educational Resources Information Center

    Balas, Benjamin

    2010-01-01

    Newborn infants appear to possess an innate bias that guides preferential orienting to and tracking of human faces. There is, however, no clear agreement as to the underlying mechanism supporting such a preference. In particular, two competing theories (known as the "structural" and "sensory" hypotheses) conjecture fundamentally different biasing…

  12. A Neurobiological Theory of Automaticity in Perceptual Categorization

    ERIC Educational Resources Information Center

    Ashby, F. Gregory; Ennis, John M.; Spiering, Brian J.

    2007-01-01

    A biologically detailed computational model is described of how categorization judgments become automatic in tasks that depend on procedural learning. The model assumes 2 neural pathways from sensory association cortex to the premotor area that mediates response selection. A longer and slower path projects to the premotor area via the striatum,…

  13. Engineering Education through the Latina Lens

    ERIC Educational Resources Information Center

    Villa, Elsa Q.; Wandermurem, Luciene; Hampton, Elaine M.; Esquinca, Alberto

    2016-01-01

    Less than 20% of undergraduates earning a degree in engineering are women, and even more alarming is minority women earn a mere 3.1% of those degrees. This paper reports on a qualitative study examining Latinas' identity development toward and in undergraduate engineering and computer science studies using a sociocultural theory of learning. Three…

  14. A Computational Model of Learners Achievement Emotions Using Control-Value Theory

    ERIC Educational Resources Information Center

    Muñoz, Karla; Noguez, Julieta; Neri, Luis; Mc Kevitt, Paul; Lunney, Tom

    2016-01-01

    Game-based Learning (GBL) environments make instruction flexible and interactive. Positive experiences depend on personalization. Student modelling has focused on affect. Three methods are used: (1) recognizing the physiological effects of emotion, (2) reasoning about emotion from its origin and (3) an approach combining 1 and 2. These have proven…

  15. Theory Meets Praxis: From Derrida to the Beginning German Classroom via the Internet

    ERIC Educational Resources Information Center

    Hasty, Will

    2006-01-01

    Based on practical experience in a new online beginning German course sequence, the author of this essay argues that contemporary cultural developments associated with the emergence of new technologies, particularly computer-assisted language learning, provide new opportunities to theorize German Studies curricula from the beginning level onward.…

  16. A Theory for the Neural Basis of Language Part 2: Simulation Studies of the Model

    ERIC Educational Resources Information Center

    Baron, R. J.

    1974-01-01

    Computer simulation studies of the proposed model are presented. Processes demonstrated are (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding in context; and (5) elementary concepts of sentence production. (Author)

  17. A baker's dozen of new particle flows for nonlinear filters, Bayesian decisions and transport

    NASA Astrophysics Data System (ADS)

    Daum, Fred; Huang, Jim

    2015-05-01

    We describe a baker's dozen of new particle flows to compute Bayes' rule for nonlinear filters, Bayesian decisions and learning as well as transport. Several of these new flows were inspired by transport theory, but others were inspired by physics or statistics or Markov chain Monte Carlo methods.

  18. Task-Induced Development of Hinting Behaviors in Online Task-Oriented L2 Interaction

    ERIC Educational Resources Information Center

    Balaman, Ufuk

    2018-01-01

    Technology-mediated task settings are rich interactional domains in which second language (L2) learners manage a multitude of interactional resources for task accomplishment. The affordances of these settings have been repeatedly addressed in computer-assisted language learning (CALL) literature mainly based on theory-informed task design…

  19. Capturing Problem-Solving Processes Using Critical Rationalism

    ERIC Educational Resources Information Center

    Chitpin, Stephanie; Simon, Marielle

    2012-01-01

    The examination of problem-solving processes continues to be a current research topic in education. Knowing how to solve problems is not only a key aspect of learning mathematics but is also at the heart of cognitive theories, linguistics, artificial intelligence, and computer science. Problem solving is a multistep, higher-order cognitive task…

  20. A Computational Lens on Design Research

    ERIC Educational Resources Information Center

    Hoyles, Celia; Noss, Richard

    2015-01-01

    In this commentary, we briefly review the collective effort of design researchers to weave theory with empirical results, in order to gain a better understanding of the processes of learning. We seek to respond to this challenging agenda by centring on the evolution of one sub-field: namely that which involves investigations within a…

  1. Social Cues in Multimedia Learning: Role of Speaker's Voice.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Sobko, Kristina; Mautone, Patricia D.

    2003-01-01

    In 2 experiments, learners who were seated at a computer workstation received a narrated animation about lightning formation. Then, they took a retention test, a transfer test, and rated the speaker. The results are consistent with social agency theory, which posits that social cues in multimedia messages can encourage learners to interpret…

  2. The role of moral utility in decision making: an interdisciplinary framework.

    PubMed

    Tobler, Philippe N; Kalis, Annemarie; Kalenscher, Tobias

    2008-12-01

    What decisions should we make? Moral values, rules, and virtues provide standards for morally acceptable decisions, without prescribing how we should reach them. However, moral theories do assume that we are, at least in principle, capable of making the right decisions. Consequently, an empirical investigation of the methods and resources we use for making moral decisions becomes relevant. We consider theoretical parallels of economic decision theory and moral utilitarianism and suggest that moral decision making may tap into mechanisms and processes that have originally evolved for nonmoral decision making. For example, the computation of reward value occurs through the combination of probability and magnitude; similar computation might also be used for determining utilitarian moral value. Both nonmoral and moral decisions may resort to intuitions and heuristics. Learning mechanisms implicated in the assignment of reward value to stimuli, actions, and outcomes may also enable us to determine moral value and assign it to stimuli, actions, and outcomes. In conclusion, we suggest that moral capabilities can employ and benefit from a variety of nonmoral decision-making and learning mechanisms.

  3. Accelerating Multiagent Reinforcement Learning by Equilibrium Transfer.

    PubMed

    Hu, Yujing; Gao, Yang; An, Bo

    2015-07-01

    An important approach in multiagent reinforcement learning (MARL) is equilibrium-based MARL, which adopts equilibrium solution concepts in game theory and requires agents to play equilibrium strategies at each state. However, most existing equilibrium-based MARL algorithms cannot scale due to a large number of computationally expensive equilibrium computations (e.g., computing Nash equilibria is PPAD-hard) during learning. For the first time, this paper finds that during the learning process of equilibrium-based MARL, the one-shot games corresponding to each state's successive visits often have the same or similar equilibria (for some states more than 90% of games corresponding to successive visits have similar equilibria). Inspired by this observation, this paper proposes to use equilibrium transfer to accelerate equilibrium-based MARL. The key idea of equilibrium transfer is to reuse previously computed equilibria when each agent has a small incentive to deviate. By introducing transfer loss and transfer condition, a novel framework called equilibrium transfer-based MARL is proposed. We prove that although equilibrium transfer brings transfer loss, equilibrium-based MARL algorithms can still converge to an equilibrium policy under certain assumptions. Experimental results in widely used benchmarks (e.g., grid world game, soccer game, and wall game) show that the proposed framework: 1) not only significantly accelerates equilibrium-based MARL (up to 96.7% reduction in learning time), but also achieves higher average rewards than algorithms without equilibrium transfer and 2) scales significantly better than algorithms without equilibrium transfer when the state/action space grows and the number of agents increases.
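
    The transfer condition described here, reusing a previously computed equilibrium when no agent has more than a small incentive to deviate, can be sketched for a two-player game as an epsilon-equilibrium test. The payoff matrices and threshold below are illustrative, and this is a generic reading of the idea rather than the authors' exact transfer-loss criterion.

    ```python
    import numpy as np

    def transferable(payoff_a, payoff_b, strat_a, strat_b, eps=0.01):
        """Reuse a previously computed equilibrium (strat_a, strat_b) for a new two-player
        one-shot game if neither agent can gain more than eps by deviating unilaterally."""
        value_a = strat_a @ payoff_a @ strat_b            # row player's expected payoff
        value_b = strat_a @ payoff_b @ strat_b            # column player's expected payoff
        best_dev_a = np.max(payoff_a @ strat_b)           # best unilateral deviation for the row player
        best_dev_b = np.max(strat_a @ payoff_b)           # best unilateral deviation for the column player
        return (best_dev_a - value_a) <= eps and (best_dev_b - value_b) <= eps

    # Example: the old equilibrium (both play action 1) still approximately holds in a perturbed game.
    old_eq_a, old_eq_b = np.array([0.0, 1.0]), np.array([0.0, 1.0])
    new_payoff_a = np.array([[3.0, 1.02], [5.0, 1.0]])
    new_payoff_b = np.array([[3.0, 5.0], [1.03, 1.0]])
    print(transferable(new_payoff_a, new_payoff_b, old_eq_a, old_eq_b, eps=0.05))  # True
    ```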

  4. The development of a digital logic concept inventory

    NASA Astrophysics Data System (ADS)

    Herman, Geoffrey Lindsay

    Instructors in electrical and computer engineering and in computer science have developed innovative methods to teach digital logic circuits. These methods attempt to increase student learning, satisfaction, and retention. Although there are readily accessible and accepted means for measuring satisfaction and retention, there are no widely accepted means for assessing student learning. Rigorous assessment of learning is elusive because differences in topic coverage, curriculum and course goals, and exam content prevent direct comparison of two teaching methods when using tools such as final exam scores or course grades. Because of these difficulties, computing educators have issued a general call for the adoption of assessment tools to critically evaluate and compare the various teaching methods. Science, Technology, Engineering, and Mathematics (STEM) education researchers commonly measure students' conceptual learning to compare how much different pedagogies improve learning. Conceptual knowledge is often preferred because all engineering courses should teach a fundamental set of concepts even if they emphasize design or analysis to different degrees. Increasing conceptual learning is also important, because students who can organize facts and ideas within a consistent conceptual framework are able to learn new information quickly and can apply what they know in new situations. If instructors can accurately assess their students' conceptual knowledge, they can target instructional interventions to remedy common problems. To properly assess conceptual learning, several researchers have developed concept inventories (CIs) for core subjects in engineering sciences. CIs are multiple-choice assessment tools that evaluate how well a student's conceptual framework matches the accepted conceptual framework of a discipline or common faulty conceptual frameworks. We present how we created and evaluated the digital logic concept inventory (DLCI). We used a Delphi process to identify the important and difficult concepts to include on the DLCI. To discover and describe common student misconceptions, we interviewed students who had completed a digital logic course. Students vocalized their thoughts as they solved digital logic problems. We analyzed the interview data using a qualitative grounded theory approach. We have administered the DLCI at several institutions and have checked the validity, reliability, and bias of the DLCI with classical testing theory procedures. These procedures consisted of follow-up interviews with students, analysis of administration results with statistical procedures, and expert feedback. We discuss these results and present the DLCI's potential for providing a meaningful tool for comparing student learning at different institutions.

  5. Teaching Computer Languages and Elementary Theory for Mixed Audiences at University Level

    NASA Astrophysics Data System (ADS)

    Christiansen, Henning

    2004-09-01

    Theoretical issues of computer science are traditionally taught in a way that presupposes a solid mathematical background and are usually considered more or less inaccessible for students without this. An effective methodology is described which has been developed for a target group of university students with different backgrounds, such as natural science or humanities. It has been developed for a course that integrates theoretical material on computer languages and abstract machines with practical programming techniques. Prolog, used as a meta-language for describing language issues, is the central instrument in the approach: formal descriptions become running prototypes that are easy and appealing to test and modify, and can be extended into analyzers, interpreters, and tools such as tracers and debuggers. Experience shows a high learning curve, especially when the principles are extended into a learning-by-doing approach that has the students develop such descriptions themselves from an informal introduction.

  6. Modeling choice and reaction time during arbitrary visuomotor learning through the coordination of adaptive working memory and reinforcement learning

    PubMed Central

    Viejo, Guillaume; Khamassi, Mehdi; Brovelli, Andrea; Girard, Benoît

    2015-01-01

    Current learning theory provides a comprehensive description of how humans and other animals learn, and places behavioral flexibility and automaticity at the heart of adaptive behaviors. However, the computations supporting the interactions between goal-directed and habitual decision-making systems are still poorly understood. Previous functional magnetic resonance imaging (fMRI) results suggest that the brain hosts complementary computations that may differentially support goal-directed and habitual processes in the form of a dynamical interplay rather than a serial recruitment of strategies. To better elucidate the computations underlying flexible behavior, we develop a dual-system computational model that can predict both performance (i.e., participants' choices) and modulations in reaction times during learning of a stimulus–response association task. The habitual system is modeled with a simple Q-Learning algorithm (QL). For the goal-directed system, we propose a new Bayesian Working Memory (BWM) model that searches for information in the history of previous trials in order to minimize Shannon entropy. We propose a model for QL and BWM coordination such that the expensive memory manipulation is under the control of, among others, the level of convergence of the habitual learning. We test the ability of QL or BWM alone to explain human behavior, and compare them with the performance of model combinations, to highlight the need for such combinations to explain behavior. Two of the tested combination models are derived from the literature, and the third is our new proposal. In conclusion, all subjects were better explained by model combinations, and the majority of them are explained by our new coordination proposal. PMID:26379518
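
    The habitual component referred to above is a standard Q-learning rule; a generic tabular version on a toy chain task is sketched below. The task, state space, and parameters are illustrative, and the Bayesian Working Memory component and its coordination rule are not reproduced.

    ```python
    import random

    def q_learning(n_states=5, n_actions=2, episodes=500, lr=0.1, gamma=0.9, eps=0.1):
        """Tabular Q-learning on a toy chain task: action 1 moves right, reaching the last state pays 1."""
        Q = [[0.0] * n_actions for _ in range(n_states)]
        for _ in range(episodes):
            s = 0
            while s < n_states - 1:
                # Epsilon-greedy action selection.
                if random.random() < eps:
                    a = random.randrange(n_actions)
                else:
                    a = max(range(n_actions), key=lambda act: Q[s][act])
                s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
                r = 1.0 if s_next == n_states - 1 else 0.0
                # Temporal-difference update toward reward plus discounted best next value.
                Q[s][a] += lr * (r + gamma * max(Q[s_next]) - Q[s][a])
                s = s_next
        return Q

    print(q_learning())
    ```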

  7. Complementary learning systems within the hippocampus: a neural network modelling approach to reconciling episodic memory with statistical learning

    PubMed Central

    Turk-Browne, Nicholas B.; Botvinick, Matthew M.; Norman, Kenneth A.

    2017-01-01

    A growing literature suggests that the hippocampus is critical for the rapid extraction of regularities from the environment. Although this fits with the known role of the hippocampus in rapid learning, it seems at odds with the idea that the hippocampus specializes in memorizing individual episodes. In particular, the Complementary Learning Systems theory argues that there is a computational trade-off between learning the specifics of individual experiences and regularities that hold across those experiences. We asked whether it is possible for the hippocampus to handle both statistical learning and memorization of individual episodes. We exposed a neural network model that instantiates known properties of hippocampal projections and subfields to sequences of items with temporal regularities. We found that the monosynaptic pathway—the pathway connecting entorhinal cortex directly to region CA1—was able to support statistical learning, while the trisynaptic pathway—connecting entorhinal cortex to CA1 through dentate gyrus and CA3—learned individual episodes, with apparent representations of regularities resulting from associative reactivation through recurrence. Thus, in paradigms involving rapid learning, the computational trade-off between learning episodes and regularities may be handled by separate anatomical pathways within the hippocampus itself. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872368

  8. Complementary learning systems within the hippocampus: a neural network modelling approach to reconciling episodic memory with statistical learning.

    PubMed

    Schapiro, Anna C; Turk-Browne, Nicholas B; Botvinick, Matthew M; Norman, Kenneth A

    2017-01-05

    A growing literature suggests that the hippocampus is critical for the rapid extraction of regularities from the environment. Although this fits with the known role of the hippocampus in rapid learning, it seems at odds with the idea that the hippocampus specializes in memorizing individual episodes. In particular, the Complementary Learning Systems theory argues that there is a computational trade-off between learning the specifics of individual experiences and regularities that hold across those experiences. We asked whether it is possible for the hippocampus to handle both statistical learning and memorization of individual episodes. We exposed a neural network model that instantiates known properties of hippocampal projections and subfields to sequences of items with temporal regularities. We found that the monosynaptic pathway (the pathway connecting entorhinal cortex directly to region CA1) was able to support statistical learning, while the trisynaptic pathway (connecting entorhinal cortex to CA1 through dentate gyrus and CA3) learned individual episodes, with apparent representations of regularities resulting from associative reactivation through recurrence. Thus, in paradigms involving rapid learning, the computational trade-off between learning episodes and regularities may be handled by separate anatomical pathways within the hippocampus itself. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).

  9. Learning experiences of science teachers in a computer-mediated communication context

    NASA Astrophysics Data System (ADS)

    Chung, Chia-Jung

    Computer-mediated communication (CMC) has been used increasingly in staff development efforts for teachers. Many teacher education programs are looking to CMC, particularly computer conferencing systems, as an effective and low-cost medium for the delivery of teacher education programs anytime, anywhere. Based on constructivist learning theories, this study focused on examining the use of an online discussion board in a graduate course as a place where forty-six inservice teachers shared experiences and ideas. Data collection focused on online discussion transcripts of all the messages from three separate weeks, and was supplemented by interviews and teacher self-evaluation reports. The nature and development of the discussions were studied over one semester by analyzing teacher online discussions in two domains: critical reflection and social-interpersonal rapport. In effect, this study provided insights into how to employ computer conferencing technology in facilitating inservice teachers' teaching practices and their professional development. Major findings include: (1) Participation: The level of participation varied during the semester but was higher at the beginning of the semester and lower at the end. (2) Critical Reflection: Teachers' critical reflection developed over time as a result of the online discussion board, according to mean critical thinking scores during the three selected weeks. Cognitive presence was found mostly in focused discussion forums and social presence mainly in the unfocused discussion forums. (3) Social-Interpersonal Rapport: The number of social cues in the messages increased initially but declined significantly over time. When teachers focused more on on-task discussions or critical reflection, there was less social conversation. (4) Teaching Practices and Professional Development: The researcher, the instructor, and the teachers identified some advantages of using computer conferencing for improving teaching practices and for professional development. The results of this study suggest that applying computer-mediated communication in teacher education would impact positively on teachers' growth in critical reflection and social-interpersonal rapport. Furthermore, this study may encourage other researchers to use cognitive and social learning theories as the theoretical backgrounds for developing teacher education models by applying computer conferencing.

  10. The FuturICT education accelerator

    NASA Astrophysics Data System (ADS)

    Johnson, J.; Buckingham Shum, S.; Willis, A.; Bishop, S.; Zamenopoulos, T.; Swithenby, S.; MacKay, R.; Merali, Y.; Lorincz, A.; Costea, C.; Bourgine, P.; Louçã, J.; Kapenieks, A.; Kelley, P.; Caird, S.; Bromley, J.; Deakin Crick, R.; Goldspink, C.; Collet, P.; Carbone, A.; Helbing, D.

    2012-11-01

    Education is a major force for economic and social wellbeing. Despite high aspirations, education at all levels can be expensive and ineffective. Three Grand Challenges are identified: (1) enable people to learn orders of magnitude more effectively, (2) enable people to learn at orders of magnitude less cost, and (3) demonstrate success by exemplary interdisciplinary education in complex systems science. A ten-year 'man-on-the-moon' project is proposed in which FuturICT's unique combination of Complexity, Social and Computing Sciences could provide an urgently needed transdisciplinary language for making sense of educational systems. In close dialogue with educational theory and practice, and grounded in the emerging data science and learning analytics paradigms, this will translate into practical tools (both analytical and computational) for researchers, practitioners and leaders; generative principles for resilient educational ecosystems; and innovation for radically scalable, yet personalised, learner engagement and assessment. The proposed Education Accelerator will serve as a 'wind tunnel' for testing these ideas in the context of real educational programmes, with an international virtual campus delivering complex systems education exploiting the new understanding of complex, social, computationally enhanced organisational structure developed within FuturICT.

  11. Motivation and emotion predict medical students' attention to computer-based feedback.

    PubMed

    Naismith, Laura M; Lajoie, Susanne P

    2017-12-14

    Students cannot learn from feedback unless they pay attention to it. This study investigated relationships between the personal factors of achievement goal orientations, achievement emotions, and attention to feedback in BioWorld, a computer environment for learning clinical reasoning. Novice medical students (N = 28) completed questionnaires to measure their achievement goal orientations and then thought aloud while solving three endocrinology patient cases and reviewing corresponding expert solutions. Questionnaires administered after each case measured participants' experiences of five feedback emotions: pride, relief, joy, shame, and anger. Attention to individual text segments of the expert solutions was modelled using logistic regression and the method of generalized estimating equations. Participants did not attend to all of the feedback that was available to them. Performance-avoidance goals and shame positively predicted attention to feedback, and performance-approach goals and relief negatively predicted attention to feedback. Aspects of how the feedback was displayed also influenced participants' attention. Findings are discussed in terms of their implications for educational theory as well as the design and use of computer learning environments in medical education.

  12. Intelligent machines in the twenty-first century: foundations of inference and inquiry.

    PubMed

    Knuth, Kevin H

    2003-12-15

    The last century saw the application of Boolean algebra to the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. Recent advances in our understanding of the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we recently identified the algebra of questions as the free distributive algebra, which will now allow us to work with questions in a way analogous to that which Boolean algebra enables us to work with logical statements. In this paper, we examine the foundations of inference and inquiry. We begin with a history of inferential reasoning, highlighting key concepts that have led to the automation of inference in modern machine-learning systems. We then discuss the foundations of inference in more detail using a modern viewpoint that relies on the mathematics of partially ordered sets and the scaffolding of lattice theory. This new viewpoint allows us to develop the logic of inquiry and introduce a measure describing the relevance of a proposed question to an unresolved issue. Last, we will demonstrate the automation of inference, and discuss how this new logic of inquiry will enable intelligent machines to ask questions. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them not only to make inferences from data, but also to decide which question to ask, which experiment to perform, or which measurement to take given what they have learned and what they are designed to understand.
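
    At its simplest, the generalization of Boolean algebra to Bayesian inference mentioned above amounts to updating degrees of belief by Bayes' rule; the small worked example below uses arbitrary illustrative numbers.

    ```python
    # Bayes' rule on a two-hypothesis problem (illustrative numbers only):
    # posterior(H | data) is proportional to likelihood(data | H) * prior(H).
    prior = {"H1": 0.5, "H2": 0.5}
    likelihood = {"H1": 0.8, "H2": 0.2}        # probability of the observed datum under each hypothesis

    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(unnormalized.values())
    posterior = {h: unnormalized[h] / evidence for h in prior}
    print(posterior)                            # {'H1': 0.8, 'H2': 0.2}
    ```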

  13. Intelligent machines in the twenty-first century: foundations of inference and inquiry

    NASA Technical Reports Server (NTRS)

    Knuth, Kevin H.

    2003-01-01

    The last century saw the application of Boolean algebra to the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. Recent advances in our understanding of the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we recently identified the algebra of questions as the free distributive algebra, which will now allow us to work with questions in a way analogous to that which Boolean algebra enables us to work with logical statements. In this paper, we examine the foundations of inference and inquiry. We begin with a history of inferential reasoning, highlighting key concepts that have led to the automation of inference in modern machine-learning systems. We then discuss the foundations of inference in more detail using a modern viewpoint that relies on the mathematics of partially ordered sets and the scaffolding of lattice theory. This new viewpoint allows us to develop the logic of inquiry and introduce a measure describing the relevance of a proposed question to an unresolved issue. Last, we will demonstrate the automation of inference, and discuss how this new logic of inquiry will enable intelligent machines to ask questions. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them not only to make inferences from data, but also to decide which question to ask, which experiment to perform, or which measurement to take given what they have learned and what they are designed to understand.

  14. The neurobiology of syntax: beyond string sets.

    PubMed

    Petersson, Karl Magnus; Hagoort, Peter

    2012-07-19

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.

  15. The neurobiology of syntax: beyond string sets

    PubMed Central

    Petersson, Karl Magnus; Hagoort, Peter

    2012-01-01

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty. PMID:22688633

  16. Causal reasoning with forces

    PubMed Central

    Wolff, Phillip; Barbey, Aron K.

    2015-01-01

    Causal composition allows people to generate new causal relations by combining existing causal knowledge. We introduce a new computational model of such reasoning, the force theory, which holds that people compose causal relations by simulating the processes that join forces in the world, and compare this theory with the mental model theory (Khemlani et al., 2014) and the causal model theory (Sloman et al., 2009), which explain causal composition on the basis of mental models and structural equations, respectively. In one experiment, the force theory was uniquely able to account for people's ability to compose causal relationships from complex animations of real-world events. In three additional experiments, the force theory did as well as or better than the other two theories in explaining the causal compositions people generated from linguistically presented causal relations. Implications for causal learning and the hierarchical structure of causal knowledge are discussed. PMID:25653611

  17. Dissociable Learning Processes Underlie Human Pain Conditioning

    PubMed Central

    Zhang, Suyi; Mano, Hiroaki; Ganesh, Gowrishankar; Robbins, Trevor; Seymour, Ben

    2016-01-01

    Pavlovian conditioning underlies many aspects of pain behavior, including fear and threat detection [1], escape and avoidance learning [2], and endogenous analgesia [3]. Although a central role for the amygdala is well established [4], both human and animal studies implicate other brain regions in learning, notably ventral striatum and cerebellum [5]. It remains unclear whether these regions make different contributions to a single aversive learning process or represent independent learning mechanisms that interact to generate the expression of pain-related behavior. We designed a human parallel aversive conditioning paradigm in which different Pavlovian visual cues probabilistically predicted thermal pain primarily to either the left or right arm and studied the acquisition of conditioned Pavlovian responses using combined physiological recordings and fMRI. Using computational modeling based on reinforcement learning theory, we found that conditioning involves two distinct types of learning process. First, a non-specific “preparatory” system learns aversive facial expressions and autonomic responses such as skin conductance. The associated learning signals—the learned associability and prediction error—were correlated with fMRI brain responses in amygdala-striatal regions, corresponding to the classic aversive (fear) learning circuit. Second, a specific lateralized system learns “consummatory” limb-withdrawal responses, detectable with electromyography of the arm to which pain is predicted. Its related learned associability was correlated with responses in ipsilateral cerebellar cortex, suggesting a novel computational role for the cerebellum in pain. In conclusion, our results show that the overall phenotype of conditioned pain behavior depends on two dissociable reinforcement learning circuits. PMID:26711494
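
    The hybrid reinforcement-learning account invoked in this record (value updates scaled by a learned associability that tracks recent unsigned prediction errors) can be illustrated with a toy Rescorla-Wagner/Pearce-Hall learner. The sketch below is a minimal, generic instance of that model class with assumed parameters and cue structure; it is not the authors' fitted model.

```python
import numpy as np

def hybrid_rw_ph(outcomes, kappa=1.0, eta=0.3, alpha0=1.0):
    """Toy hybrid Rescorla-Wagner / Pearce-Hall learner for a single cue.

    outcomes : array of 0/1 (aversive outcome delivered or not on each trial)
    kappa    : fixed learning-rate scaling
    eta      : how quickly associability tracks recent |prediction error|
    alpha0   : initial associability
    """
    V, alpha = 0.0, alpha0
    values, associabilities, errors = [], [], []
    for r in outcomes:
        delta = r - V                                  # prediction error
        values.append(V)
        associabilities.append(alpha)
        errors.append(delta)
        V += kappa * alpha * delta                     # value update scaled by associability
        alpha = eta * abs(delta) + (1 - eta) * alpha   # associability update
    return np.array(values), np.array(associabilities), np.array(errors)

# Example: a cue that predicts the aversive outcome on 75% of trials
rng = np.random.default_rng(0)
outcomes = (rng.random(60) < 0.75).astype(float)
V, alpha, delta = hybrid_rw_ph(outcomes)
print("late-trial value and associability:", V[-1].round(2), alpha[-1].round(2))
```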

  18. Learning, Realizability and Games in Classical Arithmetic

    NASA Astrophysics Data System (ADS)

    Aschieri, Federico

    2010-12-01

    In this dissertation we provide mathematical evidence that the concept of learning can be used to give a new and intuitive computational semantics of classical proofs in various fragments of Predicative Arithmetic. First, we extend Kreisel modified realizability to a classical fragment of first order Arithmetic, Heyting Arithmetic plus EM1 (Excluded middle axiom restricted to Sigma^0_1 formulas). We introduce a new realizability semantics we call "Interactive Learning-Based Realizability". Our realizers are self-correcting programs, which learn from their errors and evolve through time. Secondly, we extend the class of learning based realizers to a classical version PCFclass of PCF and, then, compare the resulting notion of realizability with Coquand game semantics and prove a full soundness and completeness result. In particular, we show there is a one-to-one correspondence between realizers and recursive winning strategies in the 1-Backtracking version of Tarski games. Third, we provide a complete and fully detailed constructive analysis of learning as it arises in learning based realizability for HA+EM1, Avigad's update procedures and epsilon substitution method for Peano Arithmetic PA. We present new constructive techniques to bound the length of learning processes and we apply them to reprove - by means of our theory - the classic result of Godel that provably total functions of PA can be represented in Godel's system T. Last, we give an axiomatization of the kind of learning that is needed to computationally interpret Predicative classical second order Arithmetic. Our work is an extension of Avigad's and generalizes the concept of update procedure to the transfinite case. Transfinite update procedures have to learn values of transfinite sequences of non computable functions in order to extract witnesses from classical proofs.

  19. Space Weather in the Machine Learning Era: A Multidisciplinary Approach

    NASA Astrophysics Data System (ADS)

    Camporeale, E.; Wing, S.; Johnson, J.; Jackman, C. M.; McGranaghan, R.

    2018-01-01

    The workshop entitled Space Weather: A Multidisciplinary Approach took place at the Lorentz Center, University of Leiden, Netherlands, on 25-29 September 2017. The aim of this workshop was to bring together members of the Space Weather, Mathematics, Statistics, and Computer Science communities to address the use of advanced techniques such as Machine Learning, Information Theory, and Deep Learning, to better understand the Sun-Earth system and to improve space weather forecasting. Although individual efforts have been made toward this goal, the community consensus is that establishing interdisciplinary collaborations is the most promising strategy for fully utilizing the potential of these advanced techniques in solving Space Weather-related problems.

  20. A Wittgenstein Approach to the Learning of OO-modeling

    NASA Astrophysics Data System (ADS)

    Holmboe, Christian

    2004-12-01

    The paper uses Ludwig Wittgenstein's theories about the relationship between thought, language, and objects of the world to explore the assumption that OO-thinking resembles natural thinking. The paper imports ideas from research in linguistic philosophy into computer science education research. I show how UML class diagrams (i.e., an artificial context-free language) correspond to the logically perfect languages described in Tractatus Logico-Philosophicus. In Philosophical Investigations Wittgenstein disputes his previous theories by showing that natural languages are not constructed by rules of mathematical logic, but are language games where the meaning of a word is constructed through its use in social contexts. Contradicting the claim that OO-thinking is easy to learn because of its similarity to natural thinking, I claim that OO-thinking is difficult to learn because of its differences from natural thinking. The nature of these differences is not currently well known or appreciated. I suggest how explicit attention to the nature and implications of different language games may improve the teaching and learning of OO-modeling as well as programming.

  1. Integrating hypermedia into the environmental education setting: Developing a program and evaluating its effect

    NASA Astrophysics Data System (ADS)

    Parker, Tehri Davenport

    1997-09-01

    This study designed, implemented, and evaluated an environmental education hypermedia program for use in a residential environmental education facility. The purpose of the study was to ascertain whether a hypermedia program could increase student knowledge and positive attitudes toward the environment and environmental education. A student/computer interface, based on the theory of social cognition, was developed to direct student interactions with the computer. A quasi-experimental research design was used. Students were randomly assigned to either the experimental or control group. The experimental group used the hypermedia program to learn about the topic of energy. The control group received the same conceptual information from a teacher/naturalist. An Environmental Awareness Quiz was administered to measure differences in the students' cognitive understanding of energy issues. Students participated in one on one interviews to discuss their attitudes toward the lesson and the overall environmental education experience. Additionally, members of the experimental group were tape recorded while they used the hypermedia program. These tapes were analyzed to identify aspects of the hypermedia program that promoted student learning. The findings of this study suggest that computers, and hypermedia programs, can be integrated into residential environmental education facilities, and can assist environmental educators in meeting their goals for students. The study found that the hypermedia program was as effective as the teacher/naturalist for teaching about environmental education material. Students who used the computer reported more positive attitudes toward the lesson on energy, and thought that they had learned more than the control group. Students in the control group stated that they did not learn as much as the computer group. The majority of students had positive attitudes toward the inclusion of computers in the camp setting, and stated that they were a good way to learn about environmental education material. This study also identified lack of social skills as a barrier to social cognition among mixed gender groups using the computer program.

  2. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    PubMed

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further support that the computations that underlie human learning and decision-making follow reinforcement learning principles.
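
    The migration of the prediction-error signal from feedback to choice presentation is the signature behavior of temporal-difference learning. The sketch below is a deliberately minimal illustration under assumed parameters (an unpredicted choice cue followed by a probabilistic reward), not the computational model implemented in the paper: as the cue's value is learned, the error at reward delivery shrinks while the surprise carried by the cue itself grows.

```python
import numpy as np

def td_trials(n_trials=100, p_reward=0.8, lr=0.2, seed=1):
    """Toy temporal-difference learner for a cue that predicts a probabilistic reward.

    PE at cue onset  ~  V(cue)       (the cue itself appears unexpectedly)
    PE at outcome    ~  r - V(cue)
    """
    rng = np.random.default_rng(seed)
    V = 0.0                              # learned value of the choice cue
    pe_cue, pe_outcome = [], []
    for _ in range(n_trials):
        pe_cue.append(V)                 # surprise carried forward to cue onset
        r = float(rng.random() < p_reward)
        delta = r - V                    # outcome prediction error
        pe_outcome.append(delta)
        V += lr * delta
    return np.array(pe_cue), np.array(pe_outcome)

pe_c, pe_o = td_trials()
print("outcome PE, first vs last 5 trials:", pe_o[:5].round(2), pe_o[-5:].round(2))
print("cue PE,     first vs last 5 trials:", pe_c[:5].round(2), pe_c[-5:].round(2))
```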

  3. The evolutionary basis of human social learning

    PubMed Central

    Morgan, T. J. H.; Rendell, L. E.; Ehn, M.; Hoppitt, W.; Laland, K. N.

    2012-01-01

    Humans are characterized by an extreme dependence on culturally transmitted information. Such dependence requires the complex integration of social and asocial information to generate effective learning and decision making. Recent formal theory predicts that natural selection should favour adaptive learning strategies, but relevant empirical work is scarce and rarely examines multiple strategies or tasks. We tested nine hypotheses derived from theoretical models, running a series of experiments investigating factors affecting when and how humans use social information, and whether such behaviour is adaptive, across several computer-based tasks. The number of demonstrators, consensus among demonstrators, confidence of subjects, task difficulty, number of sessions, cost of asocial learning, subject performance and demonstrator performance all influenced subjects' use of social information, and did so adaptively. Our analysis provides strong support for the hypothesis that human social learning is regulated by adaptive learning rules. PMID:21795267

  4. The evolutionary basis of human social learning.

    PubMed

    Morgan, T J H; Rendell, L E; Ehn, M; Hoppitt, W; Laland, K N

    2012-02-22

    Humans are characterized by an extreme dependence on culturally transmitted information. Such dependence requires the complex integration of social and asocial information to generate effective learning and decision making. Recent formal theory predicts that natural selection should favour adaptive learning strategies, but relevant empirical work is scarce and rarely examines multiple strategies or tasks. We tested nine hypotheses derived from theoretical models, running a series of experiments investigating factors affecting when and how humans use social information, and whether such behaviour is adaptive, across several computer-based tasks. The number of demonstrators, consensus among demonstrators, confidence of subjects, task difficulty, number of sessions, cost of asocial learning, subject performance and demonstrator performance all influenced subjects' use of social information, and did so adaptively. Our analysis provides strong support for the hypothesis that human social learning is regulated by adaptive learning rules.

  5. Investigating the Learning-Theory Foundations of Game-Based Learning: A Meta-Analysis

    ERIC Educational Resources Information Center

    Wu, W-H.; Hsiao, H-C.; Wu, P-L.; Lin, C-H.; Huang, S-H.

    2012-01-01

    Past studies on the issue of learning-theory foundations in game-based learning stressed the importance of establishing learning-theory foundation and provided an exploratory examination of established learning theories. However, we found research seldom addressed the development of the use or failure to use learning-theory foundations and…

  6. Parallel Distributed Processing Theory in the Age of Deep Networks.

    PubMed

    Bowers, Jeffrey S

    2017-12-01

    Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.

  7. Help Options for L2 Listening in CALL: A Research Agenda

    ERIC Educational Resources Information Center

    Cross, Jeremy

    2017-01-01

    In this article, I present an agenda for researching help options for second language (L2) listening in computer-assisted language learning (CALL) environments. I outline several theories which researchers in the area draw on, then present common points of concern identified from a review of related literature. This serves as a means to…

  8. Vygotskian Perspectives on Literacy Research: Constructing Meaning through Collaborative Inquiry. Learning in Doing: Social, Cognitive, and Computational Perspectives.

    ERIC Educational Resources Information Center

    Lee, Carol D., Ed.; Smagorinsky, Peter, Ed.

    In this collection of essays, the authors use Lev Vygotsky's cultural-historical theory of human development to frame their analyses of schooling, with particular emphasis on the ways in which literacy practices are mediated by social interaction and cultural artifacts. The collection extends Vygotsky's cultural-historical theoretical framework to…

  9. Exploring Relationship between Students' Questioning Behaviors and Inquiry Tasks in an Online Forum through Analysis of Ideational Function of Questions

    ERIC Educational Resources Information Center

    Tan, Seng-Chee; Seah, Lay-Hoon

    2011-01-01

    In this study we explored questioning behaviors among elementary students engaging in inquiry science using the "Knowledge Forum", a computer-supported collaborative learning tool. Adapting the theory of systemic functional linguistics, we developed the Ideational Function of Question (IFQ) analytical framework by means of inductive analysis of…

  10. Measuring Prevalence of Other-Oriented Transactive Contributions Using an Automated Measure of Speech Style Accommodation

    ERIC Educational Resources Information Center

    Gweon, Gahgene; Jain, Mahaveer; McDonough, John; Raj, Bhiksha; Rose, Carolyn P.

    2013-01-01

    This paper contributes to a theory-grounded methodological foundation for automatic collaborative learning process analysis. It does this by illustrating how insights from the social psychology and sociolinguistics of speech style provide a theoretical framework to inform the design of a computational model. The purpose of that model is to detect…

  11. CALL--Enhanced L2 Listening Skills--Aiming for Automatization in a Multimedia Environment

    ERIC Educational Resources Information Center

    Mayor, Maria Jesus Blasco

    2009-01-01

    Computer Assisted Language Learning (CALL) and L2 listening comprehension skill training are bound together for good. A neglected macroskill for decades, developing listening comprehension skill is now considered crucial for L2 acquisition. Thus this paper makes an attempt to offer latest information on processing theories and L2 listening…

  12. A BDI Approach to Infer Student's Emotions in an Intelligent Learning Environment

    ERIC Educational Resources Information Center

    Jaques, Patricia Augustin; Vicari, Rosa Maria

    2007-01-01

    In this article we describe the use of mental states approach, more specifically the belief-desire-intention (BDI) model, to implement the process of affective diagnosis in an educational environment. We use the psychological OCC model, which is based on the cognitive theory of emotions and is possible to be implemented computationally, in order…

  13. Virtual Worlds for Language Learning: From Theory to Practice. Telecollaboration in Education. Volume 2

    ERIC Educational Resources Information Center

    Sadler, Randall

    2012-01-01

    This book focuses on one area in the field of Computer-Mediated Communication that has recently exploded in popularity--Virtual Worlds. Virtual Worlds are online multiplayer three-dimensional environments where avatars represent their real world counterparts. In particular, this text explores the potential for these environments to be used for…

  14. Re-Aligning Research into Teacher Education for CALL and Bringing It into the Mainstream

    ERIC Educational Resources Information Center

    Motteram, Gary

    2014-01-01

    This paper explores three research projects conducted by the writer and others with a view to demonstrating the importance of effective theory and methodology in the analysis of teaching situations where Computer Assisted Language Learning (CALL), teacher practice and teacher education meet. It argues that there is a tendency in the field of…

  15. Comparative Analysis, Hypercard, and the Future of Social Studies Education.

    ERIC Educational Resources Information Center

    Jennings, James M.

    This research paper seeks to address new theories of learning and instructional practices that will be needed to meet the demands of 21st century education. A brief review of the literature on the topics of constructivism, reflective inquiry, and multicultural education, which form the major elements of a computer-based system called HyperCAP, are…

  16. EdMedia 2012: World Conference on Educational Multimedia, Hypermedia and Telecommunications. Proceedings (Denver, Colorado, June 26-29, 2012)

    ERIC Educational Resources Information Center

    Amiel, Tel, Ed.; Wilson, Brent, Ed.

    2012-01-01

    The Association for the Advancement of Computing in Education (AACE) is an international, non-profit educational organization. The Association's purpose is to advance the knowledge, theory, and quality of teaching and learning at all levels with information technology. "EdMedia 2012: World Conference on Educational Multimedia, Hypermedia and…

  17. When Learning Is Just a Click Away: Does Simple User Interaction Foster Deeper Understanding of Multimedia Messages?

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Chandler, Paul

    2001-01-01

    In two experiments, students received two presentations of a narrated animation explaining how lightning forms, followed by retention and transfer tests. The goal was to determine possible benefits of incorporating a modest amount of computer-user interactivity within a multimedia explanation. Results were consistent with cognitive load theory and…

  18. Item response theory analysis of the mechanics baseline test

    NASA Astrophysics Data System (ADS)

    Cardamone, Caroline N.; Abbott, Jonathan E.; Rayyan, Saif; Seaton, Daniel T.; Pawl, Andrew; Pritchard, David E.

    2012-02-01

    Item response theory is useful in both the development and evaluation of assessments and in computing standardized measures of student performance. In item response theory, individual parameters (difficulty, discrimination) for each item or question are fit by item response models. These parameters provide a means for evaluating a test and offer a better measure of student skill than a raw test score, because each skill calculation considers not only the number of questions answered correctly, but the individual properties of all questions answered. Here, we present the results from an analysis of the Mechanics Baseline Test given at MIT during 2005-2010. Using the item parameters, we identify questions on the Mechanics Baseline Test that are not effective in discriminating between MIT students of different abilities. We show that a limited subset of the highest quality questions on the Mechanics Baseline Test returns accurate measures of student skill. We compare student skills as determined by item response theory to the more traditional measurement of the raw score and show that a comparable measure of learning gain can be computed.
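
    For reference, the item response function behind such an analysis can be stated in a few lines. The snippet below is a generic two-parameter logistic (2PL) sketch with made-up item parameters and a crude grid-search ability estimate; it is not the fitted Mechanics Baseline Test item set.

```python
import numpy as np

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic item response function (discrimination a, difficulty b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def log_likelihood(theta, responses, a, b):
    """Log-likelihood of a response pattern (1 = correct, 0 = incorrect)."""
    p = p_correct_2pl(theta, a, b)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Illustrative items: the a and b values below are assumptions, not fitted MBT parameters
a = np.array([1.5, 0.8, 2.0])
b = np.array([-0.5, 0.0, 1.0])
responses = np.array([1, 1, 0])

# Crude maximum-likelihood ability estimate over a grid of theta values
grid = np.linspace(-4, 4, 801)
theta_hat = grid[np.argmax([log_likelihood(t, responses, a, b) for t in grid])]
print("estimated ability theta:", round(float(theta_hat), 2))
```

    In a full IRT analysis the item parameters themselves are estimated jointly from all response patterns (for example by marginal maximum likelihood); the grid search above only recovers a single student's ability given known items.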

  19. Toward a Meta-Theory of Learning and Performance

    ERIC Educational Resources Information Center

    Russ-Eft, Darlene

    2004-01-01

    This purpose of this paper is to identify implications of various learning theories for workplace learning and performance and HRD. It begins with a review of various theoretical positions on learning including behaviorism, Gestalt theory, cognitive theory, schema theory, connectionist theory, social learning or behavior modeling, social…

  20. The penumbra of learning: a statistical theory of synaptic tagging and capture.

    PubMed

    Gershman, Samuel J

    2014-01-01

    Learning in humans and animals is accompanied by a penumbra: Learning one task benefits from learning an unrelated task shortly before or after. At the cellular level, the penumbra of learning appears when weak potentiation of one synapse is amplified by strong potentiation of another synapse on the same neuron during a critical time window. Weak potentiation sets a molecular tag that enables the synapse to capture plasticity-related proteins synthesized in response to strong potentiation at another synapse. This paper describes a computational model which formalizes synaptic tagging and capture in terms of statistical learning mechanisms. According to this model, synaptic strength encodes a probabilistic inference about the dynamically changing association between pre- and post-synaptic firing rates. The rate of change is itself inferred, coupling together different synapses on the same neuron. When the inputs to one synapse change rapidly, the inferred rate of change increases, amplifying learning at other synapses.
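
    One way to caricature this statistical account is with a Kalman-filter-style learner in which each synapse tracks a drifting association and the inferred drift (process) variance is shared across synapses on the same neuron, so a large change at one synapse transiently raises the effective learning rate at another. The sketch below is a loose illustration of that coupling under assumed dynamics and parameters; it is not the paper's model.

```python
import numpy as np

def tag_and_capture_sketch(targets, r_noise=0.25, eta=0.2, seed=0):
    """Two synapses track drifting associations with a shared, inferred drift variance.

    A big change at synapse 0 inflates the shared volatility estimate, which
    transiently boosts the Kalman gain (effective learning rate) at synapse 1 too.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(2)                        # weight estimates
    P = np.ones(2)                         # posterior variances
    q = 0.01                               # shared inferred drift variance
    gains = []
    for target in targets:
        obs = target + rng.normal(0.0, np.sqrt(r_noise), size=2)
        P = P + q                          # prediction step with shared drift variance
        K = P / (P + r_noise)              # Kalman gain = effective learning rate
        err = obs - w
        w = w + K * err
        P = (1.0 - K) * P
        # crude shared volatility estimate from the recent average squared surprise
        q = (1 - eta) * q + eta * 0.1 * float(np.mean(err ** 2))
        gains.append(K.copy())
    return w, np.array(gains)

# Synapse 0's target association jumps at trial 50; synapse 1's target never changes
T = 100
targets = np.zeros((T, 2))
targets[50:, 0] = 1.0
w, gains = tag_and_capture_sketch(targets)
print("gain at synapse 1 before vs after the jump at synapse 0:",
      gains[45:50, 1].mean().round(3), gains[55:60, 1].mean().round(3))
```

    The printed gains show the "penumbra" in miniature: learning at the unchanged synapse is amplified for a while after strong learning at its neighbor.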

  1. Game-Based Learning Theory

    NASA Technical Reports Server (NTRS)

    Laughlin, Daniel

    2008-01-01

    Persistent Immersive Synthetic Environments (PISE) are not just connection points, they are meeting places. They are the new public squares, village centers, malt shops, malls and pubs all rolled into one. They come with a sense of "thereness" that engages the mind like a real place does. Learning starts as real code. The code defines "objects." The objects exist in computer space, known as the "grid." The objects and space combine to create a "place." A "world" is created. Before long, the grid and code become obscure, and the "world" maintains focus.

  2. Machine-learning techniques for fast and accurate feature localization in holograms of colloidal particles

    NASA Astrophysics Data System (ADS)

    Hannel, Mark D.; Abdulali, Aidan; O'Brien, Michael; Grier, David G.

    2018-06-01

    Holograms of colloidal particles can be analyzed with the Lorenz-Mie theory of light scattering to measure individual particles' three-dimensional positions with nanometer precision while simultaneously estimating their sizes and refractive indexes. Extracting this wealth of information begins by detecting and localizing features of interest within individual holograms. Conventionally approached with heuristic algorithms, this image analysis problem can be solved faster and more generally with machine-learning techniques. We demonstrate that two popular machine-learning algorithms, cascade classifiers and deep convolutional neural networks (CNN), can solve the feature-localization problem orders of magnitude faster than current state-of-the-art techniques. Our CNN implementation localizes holographic features precisely enough to bootstrap more detailed analyses based on the Lorenz-Mie theory of light scattering. The wavelet-based Haar cascade proves to be less precise, but is so computationally efficient that it creates new opportunities for applications that emphasize speed and low cost. We demonstrate its use as a real-time targeting system for holographic optical trapping.
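
    As a rough illustration of the CNN half of such a pipeline, the sketch below defines a small convolutional regressor that maps a cropped hologram patch to an in-plane feature position and runs one training step on stand-in data. The architecture, patch size and output parameterization are assumptions made for illustration; this is not the network published by the authors.

```python
import torch
import torch.nn as nn

class HoloLocalizer(nn.Module):
    """Toy CNN that regresses an (x, y) feature position from a single-channel patch."""
    def __init__(self, patch_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 64 * (patch_size // 8) ** 2
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(flat, 128), nn.ReLU(),
                                  nn.Linear(128, 2))    # predicted (x, y) within the patch

    def forward(self, x):
        return self.head(self.features(x))

# One training step on random stand-in data (real, labeled holograms would replace this)
model = HoloLocalizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.randn(8, 1, 64, 64)
targets = torch.rand(8, 2) * 64
loss = nn.functional.mse_loss(model(patches), targets)
opt.zero_grad(); loss.backward(); opt.step()
print("one-step MSE loss:", float(loss))
```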

  3. Application and Exploration of Big Data Mining in Clinical Medicine.

    PubMed

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-03-20

    To review theories and technologies of big data mining and their application in clinical medicine. Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine.

  4. A comparative analysis of support vector machines and extreme learning machines.

    PubMed

    Liu, Xueyi; Gao, Chuanhou; Li, Ping

    2012-09-01

    The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence the comparison and model selection between ELMs and other kinds of state-of-the-art machine learning approaches have become significant and have attracted many research efforts. This paper performs a comparative analysis of the basic ELMs and support vector machines (SVMs) from two viewpoints that are different from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are exhibited with changing training sample size. ELMs have weaker generalization ability than SVMs for small samples but can generalize as well as SVMs for large samples. Remarkably, ELMs show great superiority in computational speed, especially for large-scale problems. The results obtained can provide insight into the essential relationship between them, and can also serve as complementary knowledge for their past experimental and theoretical comparisons. Copyright © 2012 Elsevier Ltd. All rights reserved.
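
    The basic ELM recipe behind such a comparison is short enough to state in code: input weights and hidden biases are drawn at random and never trained, and only the output weights are solved by least squares. The sketch below is a generic single-hidden-layer ELM for regression with assumed sizes and data; it is not the experimental setup of the paper.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Train a basic extreme learning machine (ELM) for regression.

    X : (n_samples, n_features) inputs
    T : (n_samples, n_outputs) targets
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression example: learn a noisy sine curve
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
T = np.sin(X) + 0.1 * rng.normal(size=X.shape)
W, b, beta = elm_train(X, T)
print("train MSE:", float(np.mean((elm_predict(X, W, b, beta) - T) ** 2)))
```

    Because only the linear output layer is fit, training reduces to a single pseudo-inverse, which is the source of the speed advantage discussed in the abstract.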

  5. Simulating single word processing in the classic aphasia syndromes based on the Wernicke-Lichtheim-Geschwind theory.

    PubMed

    Weems, Scott A; Reggia, James A

    2006-09-01

    The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG theory can provide a deeper understanding of which of its features are plausible and where the theory fails. As a first step in this direction, we created a model of the interconnected left and right neocortical areas that are most relevant to the WLG theory, and used it to study visual-confrontation naming, auditory repetition, and auditory comprehension performance. No specific functionality is assigned a priori to model cortical regions, other than that implicitly present due to their locations in the cortical network and a higher learning rate in left hemisphere regions. Following learning, the model successfully simulates confrontation naming and word repetition, and acquires a unique internal representation in parietal regions for each named object. Simulated lesions to the language-dominant cortical regions produce patterns of single word processing impairment reminiscent of those postulated historically in the classic aphasia syndromes. These results indicate that WLG theory, instantiated as a simple interconnected network of model neocortical regions familiar to any neuropsychologist/neurologist, captures several fundamental "low-level" aspects of neurobiological word processing and their impairment in aphasia.

  6. Barriers and decisions when answering clinical questions at the point of care: a grounded theory study.

    PubMed

    Cook, David A; Sorensen, Kristi J; Wilkinson, John M; Berger, Richard A

    2013-11-25

    Answering clinical questions affects patient-care decisions and is important to continuous professional development. The process of point-of-care learning is incompletely understood. To understand what barriers and enabling factors influence physician point-of-care learning and what decisions physicians face during this process. Focus groups with grounded theory analysis. Focus group discussions were transcribed and then analyzed using a constant comparative approach to identify barriers, enabling factors, and key decisions related to physician information-seeking activities. Academic medical center and outlying community sites. Purposive sample of 50 primary care and subspecialist internal medicine and family medicine physicians, interviewed in 11 focus groups. Insufficient time was the main barrier to point-of-care learning. Other barriers included the patient comorbidities and contexts, the volume of available information, not knowing which resource to search, doubt that the search would yield an answer, difficulty remembering questions for later study, and inconvenient access to computers. Key decisions were whether to search (reasons to search included infrequently seen conditions, practice updates, complex questions, and patient education), when to search (before, during, or after the clinical encounter), where to search (with the patient present or in a separate room), what type of resource to use (colleague or computer), what specific resource to use (influenced first by efficiency and second by credibility), and when to stop. Participants noted that key features of efficiency (completeness, brevity, and searchability) are often in conflict. Physicians perceive that insufficient time is the greatest barrier to point-of-care learning, and efficiency is the most important determinant in selecting an information source. Designing knowledge resources and systems to target key decisions may improve learning and patient care.

  7. Five-Year-Olds' Systematic Errors in Second-Order False Belief Tasks Are Due to First-Order Theory of Mind Strategy Selection: A Computational Modeling Study.

    PubMed

    Arslan, Burcu; Taatgen, Niels A; Verbrugge, Rineke

    2017-01-01

    The focus of studies on second-order false belief reasoning generally was on investigating the roles of executive functions and language with correlational studies. Different from those studies, we focus on the question how 5-year-olds select and revise reasoning strategies in second-order false belief tasks by constructing two computational cognitive models of this process: an instance-based learning model and a reinforcement learning model. Unlike the reinforcement learning model, the instance-based learning model predicted that children who fail second-order false belief tasks would give answers based on first-order theory of mind (ToM) reasoning as opposed to zero-order reasoning. This prediction was confirmed with an empirical study that we conducted with 72 5- to 6-year-old children. The results showed that 17% of the answers were correct and 83% of the answers were wrong. In line with our prediction, 65% of the wrong answers were based on a first-order ToM strategy, while only 29% of them were based on a zero-order strategy (the remaining 6% of subjects did not provide any answer). Based on our instance-based learning model, we propose that when children get feedback "Wrong," they explicitly revise their strategy to a higher level instead of implicitly selecting one of the available ToM strategies. Moreover, we predict that children's failures are due to lack of experience and that with exposure to second-order false belief reasoning, children can revise their wrong first-order reasoning strategy to a correct second-order reasoning strategy.
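
    A toy version of the reinforcement-learning style of strategy selection described here can be written in a few lines: the learner keeps a value for each theory-of-mind strategy level, samples a strategy by softmax, and updates that strategy's value from the "Right"/"Wrong" feedback. The initial values, learning rate and temperature below are illustrative assumptions rather than the authors' fitted parameters, and the instance-based alternative is not modeled.

```python
import numpy as np

def strategy_selection(n_trials=50, lr=0.3, temperature=0.5, seed=2):
    """Toy RL over ToM strategy levels; only the second-order strategy yields 'Right'."""
    rng = np.random.default_rng(seed)
    strategies = ["zero-order", "first-order", "second-order"]
    Q = np.array([0.5, 0.6, 0.4])            # assumed priors favouring first-order ToM
    choices = []
    for _ in range(n_trials):
        p = np.exp(Q / temperature)
        p /= p.sum()                          # softmax over strategy values
        k = rng.choice(len(Q), p=p)
        reward = 1.0 if k == 2 else 0.0       # feedback: "Right" only for second-order
        Q[k] += lr * (reward - Q[k])          # update only the chosen strategy
        choices.append(strategies[k])
    return choices

choices = strategy_selection()
print("first 10 choices:", choices[:10])
print("last 10 choices: ", choices[-10:])
```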

  8. Five-Year-Olds’ Systematic Errors in Second-Order False Belief Tasks Are Due to First-Order Theory of Mind Strategy Selection: A Computational Modeling Study

    PubMed Central

    Arslan, Burcu; Taatgen, Niels A.; Verbrugge, Rineke

    2017-01-01

    The focus of studies on second-order false belief reasoning generally was on investigating the roles of executive functions and language with correlational studies. Different from those studies, we focus on the question how 5-year-olds select and revise reasoning strategies in second-order false belief tasks by constructing two computational cognitive models of this process: an instance-based learning model and a reinforcement learning model. Unlike the reinforcement learning model, the instance-based learning model predicted that children who fail second-order false belief tasks would give answers based on first-order theory of mind (ToM) reasoning as opposed to zero-order reasoning. This prediction was confirmed with an empirical study that we conducted with 72 5- to 6-year-old children. The results showed that 17% of the answers were correct and 83% of the answers were wrong. In line with our prediction, 65% of the wrong answers were based on a first-order ToM strategy, while only 29% of them were based on a zero-order strategy (the remaining 6% of subjects did not provide any answer). Based on our instance-based learning model, we propose that when children get feedback “Wrong,” they explicitly revise their strategy to a higher level instead of implicitly selecting one of the available ToM strategies. Moreover, we predict that children’s failures are due to lack of experience and that with exposure to second-order false belief reasoning, children can revise their wrong first-order reasoning strategy to a correct second-order reasoning strategy. PMID:28293206

  9. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation has spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers) and parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real-world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
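
    The "adaptive artificial neurons" of lecture 2 are easy to make concrete. The sketch below implements the classic ADALINE/LMS delta rule on a toy linearly separable problem, purely as a generic illustration of the kind of learning rule the tutorial covers.

```python
import numpy as np

def adaline_train(X, y, lr=0.05, epochs=50, seed=0):
    """ADALINE / LMS: minimize the squared error of a linear unit, then threshold."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            out = xi @ w + b              # linear activation (no threshold during learning)
            err = yi - out                # delta-rule error
            w += lr * err * xi
            b += lr * err
    return w, b

# Toy linearly separable data with targets in {-1, +1}
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([2, 2], 0.5, (50, 2)), rng.normal([-2, -2], 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w, b = adaline_train(X, y)
preds = np.where(X @ w + b >= 0, 1, -1)
print("training accuracy:", (preds == y).mean())
```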

  10. A blended learning approach for teaching computer programming: design for large classes in Sub-Saharan Africa

    NASA Astrophysics Data System (ADS)

    Bayu Bati, Tesfaye; Gelderblom, Helene; van Biljon, Judy

    2014-01-01

    The challenge of teaching programming in higher education is complicated by problems associated with large class teaching, a prevalent situation in many developing countries. This paper reports on an investigation into the use of a blended learning approach to teaching and learning of programming in a class of more than 200 students. A course and learning environment was designed by integrating constructivist learning models of Constructive Alignment, Conversational Framework and the Three-Stage Learning Model. Design science research is used for the course redesign and development of the learning environment, and action research is integrated to undertake participatory evaluation of the intervention. The action research involved the Students' Approach to Learning survey, a comparative analysis of students' performance, and qualitative data analysis of data gathered from various sources. The paper makes a theoretical contribution in presenting a design of a blended learning solution for large class teaching of programming grounded in constructivist learning theory and use of free and open source technologies.

  11. Salient regions detection using convolutional neural networks and color volume

    NASA Astrophysics Data System (ADS)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    The convolutional neural network is an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then we integrate depth cues and color volume into saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.

  12. A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback

    PubMed Central

    Maass, Wolfgang

    2008-01-01

    Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic for cortical networks of neurons but has no analogue in currently existing artificial computing systems. In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics. PMID:18846203
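
    The core update behind reward-modulated STDP is compact: pre/post spike pairings accumulate into a per-synapse eligibility trace, and a (possibly delayed) reward signal gates the conversion of that trace into an actual weight change. The sketch below is a deliberately simplified pair-based version driven by random spike trains, with assumed time constants; it is not the analytical model developed in the paper.

```python
import numpy as np

def rstdp_step(w, elig, pre_spikes, post_spikes, pre_trace, post_trace, reward,
               a_plus=0.01, a_minus=0.012, tau_trace=0.02, tau_elig=0.5,
               lr=0.1, dt=0.001):
    """One time step of a simplified reward-modulated, pair-based STDP rule.

    w, elig                 : (n_pre, n_post) weights and eligibility traces
    pre_spikes, post_spikes : boolean spike vectors for this time step
    pre_trace, post_trace   : low-pass filtered spike trains
    reward                  : scalar neuromodulatory signal for this step
    """
    pre_trace += dt * (-pre_trace / tau_trace) + pre_spikes
    post_trace += dt * (-post_trace / tau_trace) + post_spikes
    # STDP: potentiate when post fires after recent pre activity, depress the reverse
    stdp = (a_plus * np.outer(pre_trace, post_spikes)
            - a_minus * np.outer(pre_spikes, post_trace))
    elig += dt * (-elig / tau_elig) + stdp       # eligibility integrates STDP, decays slowly
    w += lr * reward * elig                      # reward gates the actual weight change
    return w, elig, pre_trace, post_trace

rng = np.random.default_rng(4)
n_pre, n_post = 5, 2
w = np.full((n_pre, n_post), 0.5)
elig = np.zeros((n_pre, n_post))
pre_trace, post_trace = np.zeros(n_pre), np.zeros(n_post)
for t in range(2000):
    pre = rng.random(n_pre) < 0.02               # Poisson-like presynaptic spikes
    post = rng.random(n_post) < 0.02             # stand-in postsynaptic spikes
    reward = 1.0 if t % 500 == 499 else 0.0      # sparse reward signal
    w, elig, pre_trace, post_trace = rstdp_step(
        w, elig, pre, post, pre_trace, post_trace, reward)
print("weights after simulation:\n", w.round(3))
```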

  13. Trends in extreme learning machines: a review.

    PubMed

    Huang, Gao; Huang, Guang-Bin; Song, Shiji; You, Keyou

    2015-01-01

    Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of the theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended for clustering, feature selection, representational learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From the implementation aspect, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives.

  14. Mindstorms robots and the application of cognitive load theory in introductory programming

    NASA Astrophysics Data System (ADS)

    Mason, Raina; Cooper, Graham

    2013-12-01

    This paper reports on a series of introductory programming workshops, initially targeting female high school students, which utilised Lego Mindstorms robots. Cognitive load theory (CLT) was applied to the instructional design of the workshops, and a controlled experiment was also conducted investigating aspects of the interface. Results indicated that a truncated interface led to better learning by novice programmers as measured by test performance by participants, as well as enhanced shifts in self-efficacy and lowered perception of difficulty. There was also a transfer effect to another programming environment (Alice). It is argued that the results indicate that for novice programmers, the mere presence on-screen of additional (redundant) entities acts as a form of tacit distraction, thus impeding learning. The utility of CLT to analyse, design and deliver aspects of computer programming environments and instructional materials is discussed.

  15. Computer-Based Training for the U.S. Coast Guard Standard Terminal Microcomputer: A Basis for Implementation Utilizing the Elaboration Theory of Instructional Design.

    DTIC Science & Technology

    1985-03-01

    skills previously acquired and presently unused. Nevertheless, in preparing for a "worst case" trainee, the CBT course will be more likely to contain ... of the course constitutes those elements of "computer literacy" most applicable to CGST users, including: Data storage ... nonsense approach toward acquiring some specific knowledge or skill. They also like to decide for themselves the best way to accomplish their learning goal

  16. On the necessity of U-shaped learning.

    PubMed

    Carlucci, Lorenzo; Case, John

    2013-01-01

    A U-shaped curve in a cognitive-developmental trajectory refers to a three-step process: good performance followed by bad performance followed by good performance once again. U-shaped curves have been observed in a wide variety of cognitive-developmental and learning contexts. U-shaped learning seems to contradict the idea that learning is a monotonic, cumulative process and thus constitutes a challenge for competing theories of cognitive development and learning. U-shaped behavior in language learning (in particular in learning English past tense) has become a central topic in the Cognitive Science debate about learning models. Antagonist models (e.g., connectionism versus nativism) are often judged on their ability of modeling or accounting for U-shaped behavior. The prior literature is mostly occupied with explaining how U-shaped behavior occurs. Instead, we are interested in the necessity of this kind of apparently inefficient strategy. We present and discuss a body of results in the abstract mathematical setting of (extensions of) Gold-style computational learning theory addressing a mathematically precise version of the following question: Are there learning tasks that require U-shaped behavior? All notions considered are learning in the limit from positive data. We present results about the necessity of U-shaped learning in classical models of learning as well as in models with bounds on the memory of the learner. The pattern emerges that, for parameterized, cognitively relevant learning criteria, beyond very few initial parameter values, U-shapes are necessary for full learning power! We discuss the possible relevance of the above results for the Cognitive Science debate about learning models as well as directions for future research. Copyright © 2013 Cognitive Science Society, Inc.

  17. Explaining compound generalization in associative and causal learning through rational principles of dimensional generalization.

    PubMed

    Soto, Fabian A; Gershman, Samuel J; Niv, Yael

    2014-07-01

    How do we apply learning from one situation to a similar, but not identical, situation? The principles governing the extent to which animals and humans generalize what they have learned about certain stimuli to novel compounds containing those stimuli vary depending on a number of factors. Perhaps the best studied among these factors is the type of stimuli used to generate compounds. One prominent hypothesis is that different generalization principles apply depending on whether the stimuli in a compound are similar or dissimilar to each other. However, the results of many experiments cannot be explained by this hypothesis. Here, we propose a rational Bayesian theory of compound generalization that uses the notion of consequential regions, first developed in the context of rational theories of multidimensional generalization, to explain the effects of stimulus factors on compound generalization. The model explains a large number of results from the compound generalization literature, including the influence of stimulus modality and spatial contiguity on the summation effect, the lack of influence of stimulus factors on summation with a recovered inhibitor, the effect of spatial position of stimuli on the blocking effect, the asymmetrical generalization decrement in overshadowing and external inhibition, and the conditions leading to a reliable external inhibition effect. By integrating rational theories of compound and dimensional generalization, our model provides the first comprehensive computational account of the effects of stimulus factors on compound generalization, including spatial and temporal contiguity between components, which have posed long-standing problems for rational theories of associative and causal learning. (c) 2014 APA, all rights reserved.

  18. Explaining Compound Generalization in Associative and Causal Learning Through Rational Principles of Dimensional Generalization

    PubMed Central

    Soto, Fabian A.; Gershman, Samuel J.; Niv, Yael

    2014-01-01

    How do we apply learning from one situation to a similar, but not identical, situation? The principles governing the extent to which animals and humans generalize what they have learned about certain stimuli to novel compounds containing those stimuli vary depending on a number of factors. Perhaps the best studied among these factors is the type of stimuli used to generate compounds. One prominent hypothesis is that different generalization principles apply depending on whether the stimuli in a compound are similar or dissimilar to each other. However, the results of many experiments cannot be explained by this hypothesis. Here we propose a rational Bayesian theory of compound generalization that uses the notion of consequential regions, first developed in the context of rational theories of multidimensional generalization, to explain the effects of stimulus factors on compound generalization. The model explains a large number of results from the compound generalization literature, including the influence of stimulus modality and spatial contiguity on the summation effect, the lack of influence of stimulus factors on summation with a recovered inhibitor, the effect of spatial position of stimuli on the blocking effect, the asymmetrical generalization decrement in overshadowing and external inhibition, and the conditions leading to a reliable external inhibition effect. By integrating rational theories of compound and dimensional generalization, our model provides the first comprehensive computational account of the effects of stimulus factors on compound generalization, including spatial and temporal contiguity between components, which have posed longstanding problems for rational theories of associative and causal learning. PMID:25090430

  19. Central Limit Theorem: New SOCR Applet and Demonstration Activity

    PubMed Central

    Dinov, Ivo D.; Christou, Nicolas; Sanchez, Juana

    2011-01-01

    Modern approaches for information technology based blended education utilize a variety of novel instructional, computational and network resources. Such attempts employ technology to deliver integrated, dynamically linked, interactive content and multifaceted learning environments, which may facilitate student comprehension and information retention. In this manuscript, we describe one such innovative effort of using technological tools for improving student motivation and learning of the theory, practice and usability of the Central Limit Theorem (CLT) in probability and statistics courses. Our approach is based on harnessing the computational libraries developed by the Statistics Online Computational Resource (SOCR) to design a new interactive Java applet and a corresponding demonstration activity that illustrate the meaning and the power of the CLT. The CLT applet and activity have clear common goals: to provide graphical representation of the CLT, to improve student intuition, and to empirically validate and establish the limits of the CLT. The SOCR CLT activity consists of four experiments that demonstrate the assumptions, meaning and implications of the CLT and tie these to specific hands-on simulations. We include a number of examples illustrating the theory and applications of the CLT. Both the SOCR CLT applet and activity are freely available online to the community to test, validate and extend (Applet: http://www.socr.ucla.edu/htmls/SOCR_Experiments.html and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem). PMID:21833159
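
    The applet's central demonstration is easy to reproduce numerically: draw many samples from a skewed population, compute their means, and check that the distribution of sample means approaches a normal with standard deviation sigma/sqrt(n). The snippet below is a plain numerical analogue of that experiment, not SOCR code.

```python
import numpy as np

def clt_demo(n_samples=10000, sample_size=30, seed=0):
    """Distribution of sample means drawn from an exponential (skewed) population."""
    rng = np.random.default_rng(seed)
    data = rng.exponential(scale=1.0, size=(n_samples, sample_size))
    return data.mean(axis=1)

means = clt_demo()
print("mean of sample means:", round(float(means.mean()), 3))       # ~ population mean 1.0
print("sd of sample means:  ", round(float(means.std(ddof=1)), 3))  # ~ 1/sqrt(30) ~ 0.183
skew = float(np.mean(((means - means.mean()) / means.std()) ** 3))
print("skewness of means:   ", round(skew, 3))                      # near 0, unlike the population
```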

  20. Central Limit Theorem: New SOCR Applet and Demonstration Activity.

    PubMed

    Dinov, Ivo D; Christou, Nicolas; Sanchez, Juana

    2008-07-01

    Modern approaches for information technology based blended education utilize a variety of novel instructional, computational and network resources. Such attempts employ technology to deliver integrated, dynamically linked, interactive content and multifaceted learning environments, which may facilitate student comprehension and information retention. In this manuscript, we describe one such innovative effort of using technological tools for improving student motivation and learning of the theory, practice and usability of the Central Limit Theorem (CLT) in probability and statistics courses. Our approach is based on harnessing the computational libraries developed by the Statistics Online Computational Resource (SOCR) to design a new interactive Java applet and a corresponding demonstration activity that illustrate the meaning and the power of the CLT. The CLT applet and activity have clear common goals: to provide graphical representation of the CLT, to improve student intuition, and to empirically validate and establish the limits of the CLT. The SOCR CLT activity consists of four experiments that demonstrate the assumptions, meaning and implications of the CLT and tie these to specific hands-on simulations. We include a number of examples illustrating the theory and applications of the CLT. Both the SOCR CLT applet and activity are freely available online to the community to test, validate and extend (Applet: http://www.socr.ucla.edu/htmls/SOCR_Experiments.html and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem).

  1. Extinction from a Rationalist Perspective

    PubMed Central

    Gallistel, C. R.

    2012-01-01

    The merging of the computational theory of mind and evolutionary thinking leads to a kind of rationalism, in which enduring truths about the world have become implicit in the computations that enable the brain to cope with the experienced world. The dead reckoning computation, for example, is implemented within the brains of animals as one of the mechanisms that enables them to learn where they are (Gallistel, 1990, 1995). It integrates a velocity signal with respect to a time signal. Thus, the manner in which position and velocity relate to one another in the world is reflected in the manner in which signals representing those variables are processed in the brain. I use principles of information theory and Bayesian inference to derive from other simple principles explanations for: 1) the failure of partial reinforcement to increase reinforcements to acquisition; 2) the partial reinforcement extinction effect; 3) spontaneous recovery; 4) renewal; 5) reinstatement; 6) resurgence (aka facilitated reacquisition). Like the principle underlying dead-reckoning, these principles are grounded in analytic considerations. They are the kind of enduring truths about the world that are likely to have shaped the brain's computations. PMID:22391153
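
    The dead-reckoning computation invoked here is simply the numerical integration of a velocity signal with respect to time. The sketch below shows the idea for a two-dimensional path with an assumed velocity log.

```python
import numpy as np

def dead_reckon(velocities, dt):
    """Integrate a sequence of 2-D velocity samples into estimated positions."""
    return np.cumsum(velocities * dt, axis=0)

# Assumed velocity log: move east at 1 m/s for 10 s, then north at 2 m/s for 5 s
dt = 0.1
velocities = np.vstack([np.tile([1.0, 0.0], (100, 1)),
                        np.tile([0.0, 2.0], (50, 1))])
path = dead_reckon(velocities, dt)
print("estimated final position:", path[-1])   # ~ [10, 10] metres from the start
```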

  2. ASCR Workshop on Quantum Computing for Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward

    This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

  3. Dissociable Learning Processes Underlie Human Pain Conditioning.

    PubMed

    Zhang, Suyi; Mano, Hiroaki; Ganesh, Gowrishankar; Robbins, Trevor; Seymour, Ben

    2016-01-11

    Pavlovian conditioning underlies many aspects of pain behavior, including fear and threat detection [1], escape and avoidance learning [2], and endogenous analgesia [3]. Although a central role for the amygdala is well established [4], both human and animal studies implicate other brain regions in learning, notably ventral striatum and cerebellum [5]. It remains unclear whether these regions make different contributions to a single aversive learning process or represent independent learning mechanisms that interact to generate the expression of pain-related behavior. We designed a human parallel aversive conditioning paradigm in which different Pavlovian visual cues probabilistically predicted thermal pain primarily to either the left or right arm and studied the acquisition of conditioned Pavlovian responses using combined physiological recordings and fMRI. Using computational modeling based on reinforcement learning theory, we found that conditioning involves two distinct types of learning process. First, a non-specific "preparatory" system learns aversive facial expressions and autonomic responses such as skin conductance. The associated learning signals-the learned associability and prediction error-were correlated with fMRI brain responses in amygdala-striatal regions, corresponding to the classic aversive (fear) learning circuit. Second, a specific lateralized system learns "consummatory" limb-withdrawal responses, detectable with electromyography of the arm to which pain is predicted. Its related learned associability was correlated with responses in ipsilateral cerebellar cortex, suggesting a novel computational role for the cerebellum in pain. In conclusion, our results show that the overall phenotype of conditioned pain behavior depends on two dissociable reinforcement learning circuits. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
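
    The learning signals named above (prediction error and learned associability) come from a standard family of models. The sketch below is a generic hybrid Rescorla-Wagner / Pearce-Hall update, shown only to make those quantities concrete; the function name and parameter values are illustrative assumptions, not the model fitted in the study.

        import numpy as np

        def hybrid_rw_ph(outcomes, alpha0=0.5, eta=0.3, kappa=0.6):
            """Hybrid Rescorla-Wagner / Pearce-Hall learner (illustrative sketch)."""
            v, alpha = 0.0, alpha0
            values, errors, assocs = [], [], []
            for o in outcomes:
                delta = o - v                                   # prediction error
                values.append(v); errors.append(delta); assocs.append(alpha)
                v += kappa * alpha * delta                      # value update scaled by associability
                alpha = eta * abs(delta) + (1 - eta) * alpha    # associability tracks recent surprise
            return np.array(values), np.array(errors), np.array(assocs)

        # Example: one cue followed by an aversive outcome on roughly 75% of trials.
        rng = np.random.default_rng(1)
        values, errors, assocs = hybrid_rw_ph(rng.random(40) < 0.75)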

  4. On adaptive learning rate that guarantees convergence in feedforward networks.

    PubMed

    Behera, Laxmidhar; Kumar, Swagat; Patnaik, Awhan

    2006-09-01

    This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov functions for the training of feedforward neural networks. It is observed that such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, where the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima. This modification also helps improve the convergence speed in some cases. Conditions for achieving a global minimum for this kind of algorithm have been studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. It is found that the proposed algorithms (LF I and II) converge much faster than the other two algorithms to attain the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
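
    The abstract does not spell out the LF I / LF II update rules, so the sketch below only illustrates the general idea of replacing a fixed backpropagation step size with one computed from the current error, using a generic normalized-gradient heuristic (step size proportional to the squared error divided by the squared gradient norm). It is an assumption-laden illustration, not the authors' Lyapunov-derived rate.

        import numpy as np

        # Two-layer network trained on XOR with an error-dependent step size (illustrative only).
        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        y = np.array([[0], [1], [1], [0]], float)

        W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
        W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for epoch in range(5000):
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            err = out - y
            E = 0.5 * np.sum(err ** 2)                     # squared-error "energy"

            d_out = err * out * (1 - out)                  # backpropagated deltas
            gW2, gb2 = h.T @ d_out, d_out.sum(0)
            d_h = (d_out @ W2.T) * h * (1 - h)
            gW1, gb1 = X.T @ d_h, d_h.sum(0)

            gnorm2 = sum((g ** 2).sum() for g in (gW1, gb1, gW2, gb2))
            eta = min(0.5, 0.5 * E / (gnorm2 + 1e-12))     # adaptive, error-dependent learning rate
            W1 -= eta * gW1; b1 -= eta * gb1
            W2 -= eta * gW2; b2 -= eta * gb2

        print(np.round(out.ravel(), 2))                    # typically approaches [0, 1, 1, 0]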

  5. Reinforcement learning in multidimensional environments relies on attention mechanisms.

    PubMed

    Niv, Yael; Daniel, Reka; Geana, Andra; Gershman, Samuel J; Leong, Yuan Chang; Radulescu, Angela; Wilson, Robert C

    2015-05-27

    In recent years, ideas from the computational field of reinforcement learning have revolutionized the study of learning in the brain, famously providing new, precise theories of how dopamine affects learning in the basal ganglia. However, reinforcement learning algorithms are notorious for not scaling well to multidimensional environments, as is required for real-world learning. We hypothesized that the brain naturally reduces the dimensionality of real-world problems to only those dimensions that are relevant to predicting reward, and conducted an experiment to assess by what algorithms and with what neural mechanisms this "representation learning" process is realized in humans. Our results suggest that a bilateral attentional control network comprising the intraparietal sulcus, precuneus, and dorsolateral prefrontal cortex is involved in selecting what dimensions are relevant to the task at hand, effectively updating the task representation through trial and error. In this way, cortical attention mechanisms interact with learning in the basal ganglia to solve the "curse of dimensionality" in reinforcement learning. Copyright © 2015 the authors 0270-6474/15/358145-13$15.00/0.
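
    One way to make the "selecting relevant dimensions" idea concrete is to let each stimulus dimension form its own reward prediction and to allocate attention to the dimensions whose predictions have proven reliable. The sketch below is a generic illustration under those assumptions (arbitrary task size, learning rate and temperature), not the model fitted in the study.

        import numpy as np

        rng = np.random.default_rng(2)
        n_dims, n_features, n_trials = 3, 3, 2000
        relevant_dim = 0                        # only dimension 0 actually predicts reward

        V = np.zeros((n_dims, n_features))      # per-dimension feature values
        err_trace = np.ones(n_dims)             # running |prediction error| per dimension
        lr, tau = 0.1, 0.1

        for t in range(n_trials):
            stim = rng.integers(0, n_features, size=n_dims)   # one feature per dimension
            reward = float(stim[relevant_dim] == 0)           # feature 0 on dimension 0 pays off
            for d in range(n_dims):
                delta = reward - V[d, stim[d]]                # each dimension predicts on its own
                V[d, stim[d]] += lr * delta
                err_trace[d] += lr * (abs(delta) - err_trace[d])

        # Attention: softmax over negative running error, so reliable dimensions win.
        attention = np.exp(-err_trace / tau)
        attention /= attention.sum()
        print("attention:", np.round(attention, 2))   # weight on the relevant dimension dominates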

  6. Why copy others? Insights from the social learning strategies tournament.

    PubMed

    Rendell, L; Boyd, R; Cownden, D; Enquist, M; Eriksson, K; Feldman, M W; Fogarty, L; Ghirlanda, S; Lillicrap, T; Laland, K N

    2010-04-09

    Social learning (learning through observation or interaction with other individuals) is widespread in nature and is central to the remarkable success of humanity, yet it remains unclear why copying is profitable and how to copy most effectively. To address these questions, we organized a computer tournament in which entrants submitted strategies specifying how to use social learning and its asocial alternative (for example, trial-and-error learning) to acquire adaptive behavior in a complex environment. Most current theory predicts the emergence of mixed strategies that rely on some combination of the two types of learning. In the tournament, however, strategies that relied heavily on social learning were found to be remarkably successful, even when asocial information was no more costly than social information. Social learning proved advantageous because individuals frequently demonstrated the highest-payoff behavior in their repertoire, inadvertently filtering information for copiers. The winning strategy (discountmachine) relied nearly exclusively on social learning and weighted information according to the time since acquisition.
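
    The advantage of copying "pre-filtered" information can be caricatured in a few lines. The toy simulation below compares populations that mostly innovate with populations that mostly copy a demonstrator's best-known behavior; the payoff landscape, population size and probabilities are arbitrary assumptions, and this is not a reconstruction of the tournament rules or of the winning discountmachine strategy.

        import numpy as np

        rng = np.random.default_rng(3)
        n_behaviors, n_agents, n_rounds = 1000, 50, 100
        payoffs = rng.exponential(1.0, n_behaviors)           # fixed payoff per behavior

        def best_known(rep):
            return max(rep, key=rep.get)                      # behavior with highest known payoff

        def run(p_social):
            """Mean payoff per agent-round when learning rounds use social learning with prob p_social."""
            reps = []
            for _ in range(n_agents):                         # everyone starts knowing one random behavior
                b = int(rng.integers(n_behaviors))
                reps.append({b: payoffs[b]})
            earned = 0.0
            for _ in range(n_rounds):
                for i in range(n_agents):
                    if rng.random() < 0.5:                    # exploit: perform the best behavior known
                        earned += payoffs[best_known(reps[i])]
                    elif rng.random() < p_social:             # social learning: copy what a demonstrator performs;
                        demo = int(rng.integers(n_agents))    # demonstrators show their best behavior, so the
                        b = best_known(reps[demo])            # copied information is already filtered
                        reps[i][b] = payoffs[b]
                    else:                                     # asocial learning: try a random behavior
                        b = int(rng.integers(n_behaviors))
                        reps[i][b] = payoffs[b]
            return earned / (n_agents * n_rounds)

        print("mostly asocial learners:", round(run(0.1), 2))
        print("mostly social learners: ", round(run(0.9), 2))   # copying tends to earn more here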

  7. Spatiotemporal Computations of an Excitable and Plastic Brain: Neuronal Plasticity Leads to Noise-Robust and Noise-Constructive Computations

    PubMed Central

    Toutounji, Hazem; Pipa, Gordon

    2014-01-01

    It is a long-established fact that neuronal plasticity occupies the central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli. However, these stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed up by computer simulations and numerical analysis, we show that two canonical and widely spread forms of neuronal plasticity, that is, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations, such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, having their grounding in nature, further consolidate the biological relevance of our findings. PMID:24651447
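
    Of the two plasticity rules named above, spike-timing-dependent plasticity has a widely used pair-based form; it is quoted here only as general background, and the article's own formulation may differ in detail:

        \[
        \Delta w \;=\;
        \begin{cases}
        A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \quad \text{(pre before post: potentiation)}\\
        -A_{-}\, e^{\,\Delta t/\tau_{-}}, & \Delta t < 0 \quad \text{(post before pre: depression)}
        \end{cases}
        \qquad \Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}}
        \]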

  8. Optimizing the Usability of Brain-Computer Interfaces.

    PubMed

    Zhang, Yin; Chase, Steve M

    2018-05-01

    Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. An open question in the field is how to map neural activity to device movement in order to achieve the most proficient control. This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear if this approach would ultimately be superior to performance that might be achieved with a nonbiomimetic device once the subject has engaged in extended practice and learned how to use it. Here we approach this problem using ideas from optimal control theory. Under the assumption that the brain acts as an optimal controller, we present a formal definition of the usability of a device and show that the optimal postlearning mapping can be written as the solution of a constrained optimization problem. We then derive the optimal mappings for particular cases common to most brain-computer interfaces. Our results suggest that the common approach of creating biomimetic interfaces may not be optimal when learning is taken into account. More broadly, our method provides a blueprint for optimal device design in general control-theoretic contexts.
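
    Stated abstractly, one generic control-theoretic way to score a decoder mapping M is by the best expected cost an optimal controller can achieve when acting through it. The expression below is only a hedged formalization consistent with that idea, not the paper's own definition of usability:

        \[
        U(M) \;=\; -\,\min_{\pi}\; \mathbb{E}\!\left[\sum_{t=0}^{T} c(x_t, u_t)\right],
        \qquad u_t = \pi(x_t), \quad x_{t+1} = f\big(x_t,\, M(u_t)\big),
        \]

    where u_t is the neural activity chosen by the (assumed optimal) controller pi, M decodes that activity into a device command, x_t is the device state, and c is the task cost.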

  9. Neural controller for adaptive movements with unforeseen payloads.

    PubMed

    Kuperstein, M; Wang, J

    1990-01-01

    A theory and computer simulation of a neural controller that learns to move and position a link carrying an unforeseen payload accurately are presented. The neural controller learns adaptive dynamic control from its own experience. It does not use information about link mass, link length, or direction of gravity, and it uses only indirect uncalibrated information about payload and actuator limits. Its average positioning accuracy across a large range of payloads after learning is 3% of the positioning range. This neural controller can be used as a basis for coordinating any number of sensory inputs with limbs of any number of joints. The feedforward nature of control allows parallel implementation in real time across multiple joints.

  10. Machine learning of molecular electronic properties in chemical compound space

    NASA Astrophysics Data System (ADS)

    Montavon, Grégoire; Rupp, Matthias; Gobre, Vivekanand; Vazquez-Mayagoitia, Alvaro; Hansen, Katja; Tkatchenko, Alexandre; Müller, Klaus-Robert; Anatole von Lilienfeld, O.

    2013-09-01

    The combination of modern scientific computing with electronic structure theory can lead to an unprecedented amount of data amenable to intelligent data analysis for the identification of meaningful, novel and predictive structure-property relationships. Such relationships enable high-throughput screening for relevant properties in an exponentially growing pool of virtual compounds that are synthetically accessible. Here, we present a machine learning model, trained on a database of ab initio calculation results for thousands of organic molecules, that simultaneously predicts multiple electronic ground- and excited-state properties. The properties include atomization energy, polarizability, frontier orbital eigenvalues, ionization potential, electron affinity and excitation energies. The machine learning model is based on a deep multi-task artificial neural network, exploiting the underlying correlations between various molecular properties. The input is identical to ab initio methods, i.e. nuclear charges and Cartesian coordinates of all atoms. For small organic molecules, the accuracy of such a ‘quantum machine’ is similar, and sometimes superior, to modern quantum-chemical methods—at negligible computational cost.

  11. Metrological traceability in education: A practical online system for measuring and managing middle school mathematics instruction

    NASA Astrophysics Data System (ADS)

    Torres Irribarra, D.; Freund, R.; Fisher, W.; Wilson, M.

    2015-02-01

    Computer-based, online assessments modelled, designed, and evaluated for adaptively administered invariant measurement are uniquely suited to defining and maintaining traceability to standardized units in education. An assessment of this kind is embedded in the Assessing Data Modeling and Statistical Reasoning (ADM) middle school mathematics curriculum. Diagnostic information about middle school students' learning of statistics and modeling is provided via computer-based formative assessments for seven constructs that comprise a learning progression for statistics and modeling from late elementary through the middle school grades. The seven constructs are: Data Display, Meta-Representational Competence, Conceptions of Statistics, Chance, Modeling Variability, Theory of Measurement, and Informal Inference. The end product is a web-delivered system built with Ruby on Rails for use by curriculum development teams working with classroom teachers in designing, developing, and delivering formative assessments. The online accessible system allows teachers to accurately diagnose students' unique comprehension and learning needs in a common language of real-time assessment, logging, analysis, feedback, and reporting.

  12. Broadening conceptions of learning in medical education: the message from teamworking.

    PubMed

    Bleakley, Alan

    2006-02-01

    There is a mismatch between the broad range of learning theories offered in the wider education literature and a relatively narrow range of theories privileged in the medical education literature. The latter are usually described under the heading of 'adult learning theory'. This paper critically addresses the limitations of the current dominant learning theories informing medical education. An argument is made that such theories, which address how an individual learns, fail to explain how learning occurs in dynamic, complex and unstable systems such as fluid clinical teams. Models of learning that take into account distributed knowing, learning through time as well as space, and the complexity of a learning environment including relationships between persons and artefacts, are more powerful in explaining and predicting how learning occurs in clinical teams. Learning theories may be privileged for ideological reasons, such as medicine's concern with autonomy. Where an increasing amount of medical education occurs in workplace contexts, sociocultural learning theories offer a best-fit exploration and explanation of such learning. We need to continue to develop testable models of learning that inform safe work practice. One type of learning theory will not inform all practice contexts and we need to think about a range of fit-for-purpose theories that are testable in practice. Exciting current developments include dynamicist models of learning drawing on complexity theory.

  13. Effectiveness of a Computer-Mediated Simulations Program in School Biology on Pupils' Learning Outcomes in Cell Theory

    ERIC Educational Resources Information Center

    Kiboss, Joel K.; Ndirangu, Mwangi; Wekesa, Eric W.

    2004-01-01

    Biology knowledge and understanding is important not only for the conversion of the loftiest dreams into reality for a better life of individuals but also for preparing secondary pupils for such fields as agriculture, medicine, biotechnology, and genetic engineering. But a recent study has revealed that many aspects of school science (biology…

  14. Nuclear Engineering Computer Modules, Thermal-Hydraulics, TH-2: Liquid Metal Fast Breeder Reactors.

    ERIC Educational Resources Information Center

    Reihman, Thomas C.

    This learning module is concerned with the temperature field, the heat transfer rates, and the coolant pressure drop in typical liquid metal fast breeder reactor (LMFBR) fuel assemblies. As in all of the modules of this series, emphasis is placed on developing the theory and demonstrating the use with a simplified model. The heart of the module is…

  15. Toward a Theory of Sequencing: Study 3-2: An Exploration of Transitivities Formulated From a Set of Piagetian-Derived Operations and Their Implications in Traversing Learning Hierarchies.

    ERIC Educational Resources Information Center

    Hopkins, Layne Victor

    Certain transitivity relationships formulated from reversible operations were examined. Thirty randomly selected fifth grade students received instructional episodes, developed for each identified behavioral objective and its inverse (on unspecified content), presented via the IBM 1500 Instructional Computer System. It was found that students who…

  16. Proceedings of Selected Research and Development Presentations at the 1996 National Convention of the Association for Educational Communications and Technology Sponsored by the Research and Theory Division (18th, Indianapolis, IN, 1996).

    ERIC Educational Resources Information Center

    Simonson, Michael R., Ed.; And Others

    1996-01-01

    This proceedings volume contains 77 papers. Subjects addressed include: image processing; new faculty research methods; preinstructional activities for preservice teacher education; computer "window" presentation styles; interface design; stress management instruction; cooperative learning; graphical user interfaces; student attitudes,…

  17. Studio Mathematics: The Epistemology and Practice of Design Pedagogy as a Model for Mathematics Learning. WCER Working Paper No. 2005-3

    ERIC Educational Resources Information Center

    Shaffer, David Williamson

    2005-01-01

    This paper examines how middle school students developed understanding of transformational geometry through design activities in Escher's World, a computationally rich design experiment explicitly modeled on an architectural design studio. Escher's World was based on the theory of pedagogical praxis (Shaffer, 2004a), which suggests that preserving…

  18. The Complementary Use of Audience Response Systems and Online Tests to Implement Repeat Testing: A Case Study

    ERIC Educational Resources Information Center

    Stratling, Rebecca

    2017-01-01

    Although learning theories suggest that repeat testing can be highly beneficial for students' retention and understanding of material, there is, so far, little guidance on how to implement repeat testing in higher education. This paper introduces one method for implementing a three-stage model of repeat testing via computer-aided formative…

  19. The Educational Value of Visual Cues and 3D-Representational Format in a Computer Animation under Restricted and Realistic Conditions

    ERIC Educational Resources Information Center

    Huk, Thomas; Steinke, Mattias; Floto, Christian

    2010-01-01

    Within the framework of cognitive learning theories, instructional design manipulations have primarily been investigated under tightly controlled laboratory conditions. We carried out two experiments, where the first experiment was conducted in a restricted system-paced setting and is therefore in line with the majority of empirical studies in the…

  20. Factors Affecting Students' Acceptance of Tablet PCs: A Study in Italian High Schools

    ERIC Educational Resources Information Center

    Cacciamani, Stefano; Villani, Daniela; Bonanomi, Andrea; Carissoli, Claudia; Olivari, Maria Giulia; Morganti, Laura; Riva, Giuseppe; Confalonieri, Emanuela

    2018-01-01

    To maximize the advantages of the tablet personal computer (TPC) at school, this technology needs to be accepted by students as new tool for learning. With reference to the Technology Acceptance Model and the Unified Theory of Acceptance and Use of Technology, the aims of this study were (a) to analyze factors influencing high school students'…

  1. Towards a Local Integration of Theories: Codes and Praxeologies in the Case of Computer-Based Instruction

    ERIC Educational Resources Information Center

    Gellert, Uwe; Barbe, Joaquim; Espinoza, Lorena

    2013-01-01

    We report on the development of a "language of description" that facilitates an integrated analysis of classroom video data in terms of the quality of the teaching-learning process and the students' access to valued forms of mathematical knowledge. Our research setting is the introduction of software for teachers for improving the mathematical…

  2. Access to ICT for Teaching and Learning: From Single Artefact to Interrelated Resources

    ERIC Educational Resources Information Center

    Czerniewicz, Laura; Brown, Cheryl

    2005-01-01

    In the past few years, concepts of the digital divide and theories of access to ICT have evolved beyond a focus on the separation of the "haves" and the "have nots" to include more than just physical access to computers. Researchers have started considering the conditions or criteria for access and broadened the concept by…

  3. Nuclear Engineering Computer Modules, Thermal-Hydraulics, TH-1: Pressurized Water Reactors.

    ERIC Educational Resources Information Center

    Reihman, Thomas C.

    This learning module is concerned with the temperature field, the heat transfer rates, and the coolant pressure drop in typical pressurized water reactor (PWR) fuel assemblies. As in all of the modules of this series, emphasis is placed on developing the theory and demonstrating its use with a simplified model. The heart of the module is the PWR…

  4. Centre-Embedded Structures Are a By-Product of Associative Learning and Working Memory Constraints: Evidence from Baboons ("Papio Papio")

    ERIC Educational Resources Information Center

    Rey, Arnaud; Perruchet, Pierre; Fagot, Joel

    2012-01-01

    Influential theories have claimed that the ability for recursion forms the computational core of human language faculty distinguishing our communication system from that of other animals (Hauser, Chomsky, & Fitch, 2002). In the present study, we consider an alternative view on recursion by studying the contribution of associative and working…

  5. Language and Discourse Analysis with Coh-Metrix: Applications from Educational Material to Learning Environments at Scale

    ERIC Educational Resources Information Center

    Dowell, Nia M. M.; Graesser, Arthur C.; Cai, Zhiqiang

    2016-01-01

    The goal of this article is to preserve and distribute the information presented at the LASI (2014) workshop on Coh-Metrix, a theoretically grounded, computational linguistics facility that analyzes texts on multiple levels of language and discourse. The workshop focused on the utility of Coh-Metrix in discourse theory and educational practice. We…

  6. Florida Teletraining Project.

    DTIC Science & Technology

    1994-01-01

    with any relatively small research effort, caution must be exercised in making inferences beyond the population of specific courses taught and...Management). The adapted model is based on learning and instructional theory. The five courses that were reconfigured in the FTP were assigned by the...distance education strategies, including audio teleconferencing, computer-based teleconferencing, and VTT. While the research is in its infancy and many

  7. Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support.

    PubMed

    Grossberg, Stephen

    2017-03-01

    The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed. Copyright © 2016 The Author. Published by Elsevier Ltd.. All rights reserved.

  8. Technology, attributions, and emotions in post-secondary education: An application of Weiner’s attribution theory to academic computing problems

    PubMed Central

    Hall, Nathan C.; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students’ computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner’s (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study. PMID:29529039

  9. Technology, attributions, and emotions in post-secondary education: An application of Weiner's attribution theory to academic computing problems.

    PubMed

    Maymon, Rebecca; Hall, Nathan C; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students' computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner's (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study.

  10. Quantum reinforcement learning.

    PubMed

    Dong, Daoyi; Chen, Chunlin; Li, Hanxiong; Tarn, Tzyh-Jong

    2008-10-01

    The key approaches for machine learning, particularly learning in unknown probabilistic environments, are new representations and computation mechanisms. In this paper, a novel quantum reinforcement learning (QRL) method is proposed by combining quantum theory and reinforcement learning (RL). Inspired by the state superposition principle and quantum parallelism, a framework of a value-updating algorithm is introduced. The state (action) in traditional RL is identified as the eigen state (eigen action) in QRL. The state (action) set can be represented with a quantum superposition state, and the eigen state (eigen action) can be obtained by randomly observing the simulated quantum state according to the collapse postulate of quantum measurement. The probability of the eigen action is determined by the probability amplitude, which is updated in parallel according to rewards. Some related characteristics of QRL such as convergence, optimality, and balancing between exploration and exploitation are also analyzed, which shows that this approach makes a good tradeoff between exploration and exploitation using the probability amplitude and can speed up learning through quantum parallelism. To evaluate the performance and practicability of QRL, several simulated experiments are given, and the results demonstrate the effectiveness and superiority of the QRL algorithm for some complex problems. This paper is also an effective exploration of the application of quantum computation to artificial intelligence.
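
    A purely classical toy simulation can convey the amplitude-update idea: keep one probability amplitude per action, select actions with probability equal to the squared amplitude (mimicking measurement collapse), and renormalize after reward-dependent updates. The reward probabilities, rate constant and number of steps below are arbitrary assumptions, and this is not an implementation of the authors' QRL algorithm.

        import numpy as np

        rng = np.random.default_rng(4)
        n_actions, n_steps = 4, 2000
        true_reward_prob = np.array([0.2, 0.5, 0.8, 0.3])     # action 2 is best

        amplitude = np.ones(n_actions) / np.sqrt(n_actions)   # uniform-superposition analogue
        k = 0.01                                               # amplitude reinforcement rate

        for _ in range(n_steps):
            probs = amplitude ** 2
            a = rng.choice(n_actions, p=probs)                 # "measurement collapse": sample an action
            reward = float(rng.random() < true_reward_prob[a])
            amplitude[a] += k * reward                         # reinforce the rewarded action's amplitude
            amplitude /= np.linalg.norm(amplitude)             # keep squared amplitudes a distribution

        print(np.round(amplitude ** 2, 2))   # probability mass typically concentrates on action 2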

  11. The Attribution Theory of Learning and Advising Students on Academic Probation

    ERIC Educational Resources Information Center

    Demetriou, Cynthia

    2011-01-01

    Academic advisors need to be knowledgeable of the ways students learn. To aid advisors in their exploration of learning theories, I provide an overview of the attribution theory of learning, including recent applications of the theory to research in college student learning. An understanding of this theory may help advisors understand student…

  12. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
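
    For background, instance-based models of this kind typically rest on ACT-R's base-level learning equation, in which a chunk's activation grows with the frequency and recency of its use; the standard form is reproduced here (the exact equations analyzed in the article may differ in detail):

        \[ B_i \;=\; \ln\!\left( \sum_{j=1}^{n} t_j^{-d} \right), \]

    where t_j is the time since the j-th use of chunk i and d is the decay parameter (commonly set near 0.5).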

  13. Flight School in the Virtual Environment: Capabilities and Risks of Executing a Simulations-Based Flight Training Program

    DTIC Science & Technology

    2012-05-17

    theories work together to explain learning in aviation: behavioral learning theory, cognitive learning theory, constructivism, experiential ... solve problems, and make decisions. Experiential learning theory incorporates both behavioral and cognitive theories. This theory harnesses the ... "Evaluation of the Effectiveness of Flight School XXI," 7. David A. Kolb, Experiential Learning: Experience as the Source of

  14. Prosocial norms as a positive youth development construct: a conceptual review.

    PubMed

    Siu, Andrew M H; Shek, Daniel T L; Law, Ben

    2012-01-01

    Prosocial norms like reciprocity, social responsibility, altruism, and volunteerism are ethical standards and beliefs that youth development programs often want to promote. This paper reviews evolutionary, social-cognitive, and developmental theories of prosocial development and analyzes how young people learn and adopt prosocial norms. The paper shows that very few current theories explicitly address how prosocial norms, in the form of feelings of moral obligation, may be challenged by a norm of self-interest and by social circumstances when prosocial acts are needed. It is necessary to develop theories that treat prosocial norms as a central construct, and a new social cognitive theory of norm activation has the potential to help us understand how prosocial norms may be applied. This paper also highlights how little we know about how young people perceive and receive prosocial norms, and how influential school policies and peer influence are on prosocial development. Lastly, while training in interpersonal competence (e.g., empathy, moral reasoning) has commonly been used in youth development, its effectiveness has not been systematically evaluated. It will also be interesting to examine how computer and information technology or video games may be used in e-learning of prosocial norms.

  15. Prosocial Norms as a Positive Youth Development Construct: A Conceptual Review

    PubMed Central

    Siu, Andrew M. H.; Shek, Daniel T. L.; Law, Ben

    2012-01-01

    Prosocial norms like reciprocity, social responsibility, altruism, and volunteerism are ethical standards and beliefs that youth development programs often want to promote. This paper reviews evolutionary, social-cognitive, and developmental theories of prosocial development and analyzes how young people learn and adopt prosocial norms. The paper shows that very few current theories explicitly address how prosocial norms, in the form of feelings of moral obligation, may be challenged by a norm of self-interest and by social circumstances when prosocial acts are needed. It is necessary to develop theories that treat prosocial norms as a central construct, and a new social cognitive theory of norm activation has the potential to help us understand how prosocial norms may be applied. This paper also highlights how little we know about how young people perceive and receive prosocial norms, and how influential school policies and peer influence are on prosocial development. Lastly, while training in interpersonal competence (e.g., empathy, moral reasoning) has commonly been used in youth development, its effectiveness has not been systematically evaluated. It will also be interesting to examine how computer and information technology or video games may be used in e-learning of prosocial norms. PMID:22666157

  16. Fuzzy logic, neural networks, and soft computing

    NASA Technical Reports Server (NTRS)

    Zadeh, Lotfi A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial intelligence. In the years ahead, this may well become a widely held position.

  17. Reinforcement Learning in Multidimensional Environments Relies on Attention Mechanisms

    PubMed Central

    Daniel, Reka; Geana, Andra; Gershman, Samuel J.; Leong, Yuan Chang; Radulescu, Angela; Wilson, Robert C.

    2015-01-01

    In recent years, ideas from the computational field of reinforcement learning have revolutionized the study of learning in the brain, famously providing new, precise theories of how dopamine affects learning in the basal ganglia. However, reinforcement learning algorithms are notorious for not scaling well to multidimensional environments, as is required for real-world learning. We hypothesized that the brain naturally reduces the dimensionality of real-world problems to only those dimensions that are relevant to predicting reward, and conducted an experiment to assess by what algorithms and with what neural mechanisms this “representation learning” process is realized in humans. Our results suggest that a bilateral attentional control network comprising the intraparietal sulcus, precuneus, and dorsolateral prefrontal cortex is involved in selecting what dimensions are relevant to the task at hand, effectively updating the task representation through trial and error. In this way, cortical attention mechanisms interact with learning in the basal ganglia to solve the “curse of dimensionality” in reinforcement learning. PMID:26019331

  18. Mesolimbic confidence signals guide perceptual learning in the absence of external feedback

    PubMed Central

    Guggenmos, Matthias; Wilbertz, Gregor; Hebart, Martin N; Sterzer, Philipp

    2016-01-01

    It is well established that learning can occur without external feedback, yet normative reinforcement learning theories have difficulties explaining such instances of learning. Here, we propose that human observers are capable of generating their own feedback signals by monitoring internal decision variables. We investigated this hypothesis in a visual perceptual learning task using fMRI and confidence reports as a measure for this monitoring process. Employing a novel computational model in which learning is guided by confidence-based reinforcement signals, we found that mesolimbic brain areas encoded both anticipation and prediction error of confidence—in remarkable similarity to previous findings for external reward-based feedback. We demonstrate that the model accounts for choice and confidence reports and show that the mesolimbic confidence prediction error modulation derived through the model predicts individual learning success. These results provide a mechanistic neurobiological explanation for learning without external feedback by augmenting reinforcement models with confidence-based feedback. DOI: http://dx.doi.org/10.7554/eLife.13388.001 PMID:27021283
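
    The core computational move (treating confidence like an internally generated reward and learning from a confidence prediction error) can be sketched generically as follows. Variable names, the confidence-generation rule and all parameters are illustrative assumptions, not the model fitted in the study.

        import numpy as np

        rng = np.random.default_rng(5)
        n_trials, lr = 500, 0.1
        expected_confidence = 0.5     # anticipated confidence (internal "reward" prediction)
        skill = 0.55                  # probability of a correct perceptual decision

        for t in range(n_trials):
            correct = rng.random() < skill
            # Confidence reads out a noisy internal decision variable: higher when correct.
            confidence = float(np.clip(rng.normal(0.7 if correct else 0.4, 0.1), 0.0, 1.0))

            cpe = confidence - expected_confidence            # confidence prediction error
            expected_confidence += lr * cpe                   # update the anticipation of confidence
            skill = min(0.95, skill + 0.001 * max(cpe, 0.0))  # positive surprises reinforce the percept

        print("final skill:", round(skill, 3),
              "expected confidence:", round(expected_confidence, 2))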

  19. Why Copy Others? Insights from the Social Learning Strategies Tournament

    PubMed Central

    Rendell, L.; Boyd, R.; Cownden, D.; Enquist, M.; Eriksson, K.; Feldman, M. W.; Fogarty, L.; Ghirlanda, S.; Lillicrap, T.; Laland, K. N.

    2010-01-01

    Social learning (learning through observation or interaction with other individuals) is widespread in nature and is central to the remarkable success of humanity, yet it remains unclear why it pays to copy, and how best to do so. To address these questions we organised a computer tournament in which entrants submitted strategies specifying how to use social learning and its asocial alternative (e.g. trial-and-error) to acquire adaptive behavior in a complex environment. Most current theory predicts the emergence of mixed strategies that rely on some combination of the two types of learning. In the tournament, however, strategies that relied heavily on social learning were found to be remarkably successful, even when asocial information was no more costly than social information. Social learning proved advantageous because individuals frequently demonstrated the highest-payoff behavior in their repertoire, inadvertently filtering information for copiers. The winning strategy (discountmachine) relied exclusively on social learning, and weighted information according to the time since acquisition. PMID:20378813

  20. Priming for performance: valence of emotional primes interact with dissociable prototype learning systems.

    PubMed

    Gorlick, Marissa A; Maddox, W Todd

    2013-01-01

    Arousal Biased Competition theory suggests that arousal enhances competitive attentional processes, but makes no strong claims about valence effects. Research suggests that the scope of enhanced attention depends on valence with negative arousal narrowing and positive arousal broadening attention. Attentional scope likely affects declarative-memory-mediated and perceptual-representation-mediated learning systems differently, with declarative-memory-mediated learning depending on narrow attention to develop targeted verbalizable rules, and perceptual-representation-mediated learning depending on broad attention to develop a perceptual representation. We hypothesize that negative arousal accentuates declarative-memory-mediated learning and attenuates perceptual-representation-mediated learning, while positive arousal reverses this pattern. Prototype learning provides an ideal test bed as dissociable declarative-memory and perceptual-representation systems mediate two-prototype (AB) and one-prototype (AN) prototype learning, respectively, and computational models are available that provide powerful insights on cognitive processing. As predicted, we found that negative arousal narrows attentional focus facilitating AB learning and impairing AN learning, while positive arousal broadens attentional focus facilitating AN learning and impairing AB learning.
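
    The two task structures can be made concrete with a minimal prototype-distance classifier: in the AB task a stimulus is assigned to whichever of two prototypes it is closer to, while in the AN task it counts as a category member only if it lies close enough to the single prototype. The dimensionality, noise level and threshold below are arbitrary assumptions, and this is not the computational model used in the study.

        import numpy as np

        rng = np.random.default_rng(6)
        dim, noise = 8, 0.3
        proto_a, proto_b = np.ones(dim), -np.ones(dim)

        def classify_ab(x):
            """AB task: choose the nearer of two prototypes."""
            return "A" if np.linalg.norm(x - proto_a) < np.linalg.norm(x - proto_b) else "B"

        def classify_an(x, threshold=2.5):
            """AN task: call it a category member only if close enough to the single prototype."""
            return "A" if np.linalg.norm(x - proto_a) < threshold else "not-A"

        # A mild distortion of prototype A is classified as A under both rules.
        distortion = proto_a + rng.normal(0.0, noise, dim)
        print(classify_ab(distortion), classify_an(distortion))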
