An Overview of Computer-Based Natural Language Processing.
ERIC Educational Resources Information Center
Gevarter, William B.
Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…
Emerging Approach of Natural Language Processing in Opinion Mining: A Review
NASA Astrophysics Data System (ADS)
Kim, Tai-Hoon
Natural language processing (NLP) is a subfield of artificial intelligence and computational linguistics. It studies the problems of automated generation and understanding of natural human languages. This paper outlines a framework for using computer and natural language techniques to help learners at various levels learn foreign languages in a Computer-based Learning environment. We propose some ideas for using the computer as a practical tool for learning foreign languages, where most of the courseware is generated automatically. We then describe how to build Computer-Based Learning tools, discuss their effectiveness, and conclude with some possibilities for using on-line resources.
An overview of computer-based natural language processing
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1983-01-01
Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines in natural language (such as English, Japanese, or German, in contrast to formal computer languages). The doors that such an achievement can open have made this a major research area in Artificial Intelligence and Computational Linguistics. Commercial natural language interfaces to computers have recently entered the market, and the future looks bright for other applications as well. This report reviews the basic approaches to such systems, the techniques utilized, applications, the state of the art of the technology, issues and research requirements, the major participants, and finally, future trends and expectations. It is anticipated that this report will prove useful to engineering and research managers, potential users, and others who will be affected by this field as it unfolds.
Neurolinguistics and psycholinguistics as a basis for computer acquisition of natural language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powers, D.M.W.
1983-04-01
Research into natural language understanding systems for computers has concentrated on implementing particular grammars and grammatical models of the language concerned. This paper presents a rationale for research into natural language understanding systems based on neurological and psychological principles. Important features of the approach are that it seeks to place the onus of learning the language on the computer, and that it seeks to make use of the vast wealth of relevant psycholinguistic and neurolinguistic theory. 22 references.
Intelligent CAI: An Author Aid for a Natural Language Interface.
ERIC Educational Resources Information Center
Burton, Richard R.; Brown, John Seely
This report addresses the problems of using natural language (English) as the communication language for advanced computer-based instructional systems. The instructional environment places requirements on a natural language understanding system that exceed the capabilities of all existing systems, including: (1) efficiency, (2) habitability, (3)…
ERIC Educational Resources Information Center
Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine
2009-01-01
We built a natural language processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…
A grammar-based semantic similarity algorithm for natural language sentences.
Lee, Ming Che; Chang, Jia Wei; Hsieh, Tung Cheng
2014-01-01
This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, in contrast to "artificial language" such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even ontology-based approaches that extend to concept similarity comparison instead of co-occurring terms/words, may fail to identify the correct match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome these problems. Experiments on two well-known benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure. PMID:24982952
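To make the general idea of word-to-word similarity aggregation concrete, here is a minimal Python sketch: each word in one sentence is matched to its most similar word in the other, and the scores are averaged. It is only an illustration of the family of approaches the abstract discusses; the tiny synonym table is invented, and the authors' actual algorithm additionally exploits a corpus-based ontology and grammatical rules.

```python
# Minimal sketch of sentence similarity via best-match word similarity.
# This is NOT the authors' grammar- and corpus-based algorithm; the tiny
# synonym table below is invented purely for illustration.

SYNONYMS = {
    "car": {"automobile", "vehicle"},
    "automobile": {"car", "vehicle"},
    "fast": {"quick", "rapid"},
    "quick": {"fast", "rapid"},
}

def word_sim(w1, w2):
    """Crude word similarity: 1.0 for identity, 0.8 for listed synonyms, else 0."""
    if w1 == w2:
        return 1.0
    if w2 in SYNONYMS.get(w1, set()):
        return 0.8
    return 0.0

def sentence_sim(s1, s2):
    """Average, over both sentences, of each word's best match in the other."""
    t1, t2 = s1.lower().split(), s2.lower().split()
    def directed(a, b):
        return sum(max(word_sim(w, v) for v in b) for w in a) / len(a)
    return 0.5 * (directed(t1, t2) + directed(t2, t1))

print(sentence_sim("the car is fast", "the automobile is quick"))
```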
Language evolution and human-computer interaction
NASA Technical Reports Server (NTRS)
Grudin, Jonathan; Norman, Donald A.
1991-01-01
Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.
ERIC Educational Resources Information Center
Burk, Robin K.
2010-01-01
Computational natural language understanding and generation have been a goal of artificial intelligence since McCarthy, Minsky, Rochester and Shannon first proposed to spend the summer of 1956 studying this and related problems. Although statistical approaches dominate current natural language applications, two current research trends bring…
Language Analysis Package (L.A.P.) Version I System Design.
ERIC Educational Resources Information Center
Porch, Ann
To permit researchers to use the speed and versatility of the computer to process natural language text as well as numerical data without undergoing special training in programming or computer operations, a language analysis package has been developed, based in part on several existing programs. An overview of the design is provided and system…
Linguistic Analysis of Natural Language Communication with Computers.
ERIC Educational Resources Information Center
Thompson, Bozena Henisz
Interaction with computers in natural language requires a language that is flexible and suited to the task. This study of natural dialogue was undertaken to reveal those characteristics which can make computer English more natural. Experiments were made in three modes of communication: face-to-face, terminal-to-terminal, and human-to-computer,…
Knowledge-Based Extensible Natural Language Interface Technology Program
1989-11-30
natural language as its own meta-language to explain the meaning and attributes of the words and idioms of the language. Educational courses in language...understood and used by Lydia for human-computer dialogue. The KL enables a systems developer or "teacher-user" to build the system to a point where new...language can be "formal" as in a structured educational language program, or it can be "informal" as in the case of a person consulting a dictionary for the
The Effect of Bilingual Term List Size on Dictionary-Based Cross-Language Information Retrieval
2006-01-01
The Effect of Bilingual Term List Size on Dictionary-Based Cross-Language Information Retrieval Dina Demner-Fushman Department of Computer Science... dictionary-based Cross-Language Information Retrieval (CLIR), in which the goal is to find documents written in one natural language based on queries that...in which the documents are written. In dictionary-based CLIR techniques, the principal source of translation knowledge is a translation lexicon
Computational Natural Language Inference: Robust and Interpretable Question Answering
ERIC Educational Resources Information Center
Sharp, Rebecca Reynolds
2017-01-01
We address the challenging task of "computational natural language inference," by which we mean bridging two or more natural language texts while also providing an explanation of how they are connected. In the context of question answering (i.e., finding short answers to natural language questions), this inference connects the question…
ERIC Educational Resources Information Center
Snyder, Robin M.
2015-01-01
In 2014, in conjunction with doing research in natural language processing and attending a global conference on computational linguistics, the author decided to learn a new foreign language, Greek, that uses a non-English character set. This paper/session will present/discuss an overview of the current state of natural language processing and…
Reconciliation of ontology and terminology to cope with linguistics.
Baud, Robert H; Ceusters, Werner; Ruch, Patrick; Rassinoux, Anne-Marie; Lovis, Christian; Geissbühler, Antoine
2007-01-01
To discuss the relationships between ontologies, terminologies and language in the context of Natural Language Processing (NLP) applications in order to show the negative consequences of confusing them. The viewpoints of the terminologist and (computational) linguist are developed separately, and then compared, leading to the presentation of reconciliation among these points of view, with consideration of the role of the ontologist. In order to encourage appropriate usage of terminologies, guidelines are presented advocating the simultaneous publication of pragmatic vocabularies supported by terminological material based on adequate ontological analysis. Ontologies, terminologies and natural languages each have their own purpose. Ontologies support machine understanding, natural languages support human communication, and terminologies should form the bridge between them. Therefore, future terminology standards should be based on sound ontology and do justice to the diversities in natural languages. Moreover, they should support local vocabularies, in order to be easily adaptable to local needs and practices.
Sentence Paraphrasing from a Conceptual Base
ERIC Educational Resources Information Center
Goldman, Neil M.
1975-01-01
A model of natural language generation based on an underlying language-free representation of meaning is described. A computer implementation of this model, called BABEL, has been developed at Stanford University. It is able to produce sentence paraphrases which demonstrate understanding with respect to a given context. Available from Association…
LLOGO: An Implementation of LOGO in LISP. Artificial Intelligence Memo Number 307.
ERIC Educational Resources Information Center
Goldstein, Ira; And Others
LISP LOGO is a computer language invented for the beginning student of man-machine interaction. The language has the advantages of simplicity and naturalness as well as that of emphasizing the difference between programs and data. The language is based on the LOGO language and uses mnemonic syllables as commands. It can be used in conjunction with…
The semantic web and computer vision: old AI meets new AI
NASA Astrophysics Data System (ADS)
Mundy, J. L.; Dong, Y.; Gilliam, A.; Wagner, R.
2018-04-01
There has been vast progress in linking semantic information across billions of web pages through the use of ontologies encoded in the Web Ontology Language (OWL) based on the Resource Description Framework (RDF). A prime example is Wikipedia, where the knowledge contained in its more than four million pages is encoded in an ontological database called DBPedia (http://wiki.dbpedia.org/). Web-based query tools can retrieve semantic information from DBPedia, encoded in interlinked ontologies, that can be accessed using natural language. This paper will show how this vast context can be used to automate the process of querying images and other geospatial data in support of reporting changes in structures and activities. Computer vision algorithms are selected and provided with context based on natural language requests for monitoring and analysis. The resulting reports provide semantically linked observations from images and 3D surface models.
ERIC Educational Resources Information Center
Liou, Hsien-Chin; Chang, Jason S; Chen, Hao-Jan; Lin, Chih-Cheng; Liaw, Meei-Ling; Gao, Zhao-Ming; Jang, Jyh-Shing Roger; Yeh, Yuli; Chuang, Thomas C.; You, Geeng-Neng
2006-01-01
This paper describes the development of an innovative web-based environment for English language learning with advanced data-driven and statistical approaches. The project uses various corpora, including a Chinese-English parallel corpus ("Sinorama") and various natural language processing (NLP) tools to construct effective English…
Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.
Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose
2018-02-22
Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking with the traditional approach based on N-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to more strongly influence the translation quality. Computational issues were solved by using a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and N-gram-based systems, showing that the integrated approach seems more promising for N-gram-based systems, even with non-full-quality NNLMs.
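The memorization of softmax constants mentioned above can be illustrated with a small sketch: the softmax denominator for a given context is computed once and cached, so later queries against the same context skip the full summation over the vocabulary. This is a simplified illustration under invented toy scores, not the paper's exact memorization-and-smoothing scheme.

```python
# Sketch of caching softmax normalization constants per context during
# decoding, so repeated contexts skip the full summation. The toy "scores"
# function stands in for a trained NNLM output layer.

import math
from functools import lru_cache

VOCAB = ["the", "cat", "sat", "on", "mat", "</s>"]

def scores(context):
    # Toy stand-in for NNLM output-layer activations for a given context.
    return [float((hash((context, w)) % 7) - 3) for w in VOCAB]

@lru_cache(maxsize=None)
def log_norm_constant(context):
    # Memoized log of the softmax denominator for this context.
    return math.log(sum(math.exp(s) for s in scores(context)))

def log_prob(word, context):
    s = scores(context)[VOCAB.index(word)]
    return s - log_norm_constant(context)

ctx = ("the", "cat")
print(log_prob("sat", ctx))
print(log_prob("on", ctx))   # reuses the cached normalization constant
```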
Natural Language Processing in Game Studies Research: An Overview
ERIC Educational Resources Information Center
Zagal, Jose P.; Tomuro, Noriko; Shepitsen, Andriy
2012-01-01
Natural language processing (NLP) is a field of computer science and linguistics devoted to creating computer systems that use human (natural) language as input and/or output. The authors propose that NLP can also be used for game studies research. In this article, the authors provide an overview of NLP and describe some research possibilities…
The Contribution of CALL to Advanced-Level Foreign/Second Language Instruction
ERIC Educational Resources Information Center
Burston, Jack; Arispe, Kelly
2016-01-01
This paper evaluates the contribution of instructional technology to advanced-level foreign/second language learning (AL2) over the past thirty years. It is shown that the most salient feature of AL2 practice and associated Computer-Assisted Language Learning (CALL) research are their rarity and restricted nature. Based on an analysis of four…
Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy; Westwater, Dave
2011-01-01
The BT-Nurse system uses data-to-text technology to automatically generate a natural language nursing shift summary in a neonatal intensive care unit (NICU). The summary is based solely on data held in an electronic patient record system; no additional data entry is required. BT-Nurse was tested for two months in the Royal Infirmary of Edinburgh NICU. Nurses were asked to rate the understandability, accuracy, and helpfulness of the computer-generated summaries; they were also asked for free-text comments about the summaries. The nurses found the majority of the summaries to be understandable, accurate, and helpful (p<0.001 for all measures). However, nurses also pointed out many deficiencies, especially with regard to extra content they wanted to see in the computer-generated summaries. In conclusion, natural language NICU shift summaries can be automatically generated from an electronic patient record, but our proof-of-concept software needs considerable additional development work before it can be deployed. PMID:21724739
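As a rough illustration of the data-to-text idea, the sketch below turns a few structured patient-record fields into a short natural language summary using simple templates. The field names and values are hypothetical; BT-Nurse itself performs far richer content selection and text generation.

```python
# A minimal, hypothetical sketch of data-to-text generation: turn structured
# record values into a short natural language summary. Field names are
# invented for illustration and are not BT-Nurse's actual schema.

record = {"name": "Baby A", "mean_heart_rate": 162, "desaturation_events": 3}

def summarise(rec):
    parts = [f"{rec['name']} had a mean heart rate of {rec['mean_heart_rate']} bpm."]
    n = rec["desaturation_events"]
    if n == 0:
        parts.append("No desaturation events were recorded.")
    else:
        parts.append(f"There were {n} desaturation event{'s' if n != 1 else ''}.")
    return " ".join(parts)

print(summarise(record))
```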
Neural Network Computing and Natural Language Processing.
ERIC Educational Resources Information Center
Borchardt, Frank
1988-01-01
Considers the application of neural network concepts to traditional natural language processing and demonstrates that neural network computing architecture can: (1) learn from actual spoken language; (2) observe rules of pronunciation; and (3) reproduce sounds from the patterns derived by its own processes. (Author/CB)
Vectorial Representations of Meaning for a Computational Model of Language Comprehension
ERIC Educational Resources Information Center
Wu, Stephen Tze-Inn
2010-01-01
This thesis aims to define and extend a line of computational models for text comprehension that are humanly plausible. Since natural language is human by nature, computational models of human language will always be just that--models. To the degree that they miss out on information that humans would tap into, they may be improved by considering…
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHale, M.L.
The field of artificial intelligence strives to produce computer programs that exhibit intelligent behavior. One of the areas of interest is the processing of natural language. This report discusses the role of the computer language PROLOG in Natural Language Processing (NLP), both from theoretic and pragmatic viewpoints. The reasons for using PROLOG for NLP are numerous. First, linguists can write natural-language grammars almost directly as PROLOG programs; this allows fast prototyping of NLP systems and facilitates analysis of NLP theories. Second, semantic representations of natural-language texts that use logic formalisms are readily produced in PROLOG because of PROLOG's logical foundations. Third, PROLOG's built-in inferencing mechanisms are often sufficient for inferences on the logical forms produced by NLP systems. Fourth, the logical, declarative nature of PROLOG may make it the language of choice for parallel computing systems. Finally, the fact that PROLOG has a de facto standard (Edinburgh) makes the porting of code from one computer system to another virtually trouble free. Perhaps the strongest tie one could make between NLP and PROLOG was stated by John Stuart Mill in his inaugural address at St. Andrews: "The structure of every sentence is a lesson in logic."
RGSS-ID: an approach to new radiologic reporting system.
Ikeda, M; Sakuma, S; Maruyama, K
1990-01-01
RGSS-ID is a developmental computer system that applies artificial intelligence (AI) methods to a reporting system. A representation scheme called Generalized Finding Representation (GFR) is proposed to bridge the gap between natural language expressions in the radiology report and AI methods. Entry in RGSS-ID is performed mainly by selecting items; our system allows a radiologist to compose a sentence which can be completely parsed by the computer. RGSS-ID then encodes findings into the expression corresponding to GFR and stores this expression in the knowledge base. The final printed report is produced in natural language.
A Diagrammatic Language for Biochemical Networks
NASA Astrophysics Data System (ADS)
Maimon, Ron
2002-03-01
I present a diagrammatic language for representing the structure of biochemical networks. The language is designed to represent modular structure in a computational fashion, with composition of reactions replacing functional composition. This notation is used to represent arbitrarily large networks efficiently. The notation finds its most natural use in representing biological interaction networks, but it is a general computing language appropriate to any naturally occurring computation. Unlike lambda-calculus, or text-derived languages, it does not impose a tree structure on the diagrams, and so is more effective at representing biological function than competing notations.
Natural Language Processing: Toward Large-Scale, Robust Systems.
ERIC Educational Resources Information Center
Haas, Stephanie W.
1996-01-01
Natural language processing (NLP) is concerned with getting computers to do useful things with natural language. Major applications include machine translation, text generation, information retrieval, and natural language interfaces. Reviews important developments since 1987 that have led to advances in NLP; current NLP applications; and problems…
Modeling Memory for Language Understanding.
1982-02-01
Research on natural language understanding by computer has shown that the nature and organization of memory plays a central role in the...understanding mechanism. Further, we claim that such reminding is at the root of how we learn. Issues such as these have played an important part in shaping the
ERIC Educational Resources Information Center
Rouhshad, Amir; Wigglesworth, Gillian; Storch, Neomy
2016-01-01
The Interaction Approach argues that negotiation for meaning and form is conducive to second language development. To date, most of the research on negotiations has been either in face-to-face (FTF) or text-based synchronous computer-mediated communication (SCMC) modes. Very few studies have compared the nature of negotiations across the modes.…
HGML: a hypertext guideline markup language.
Hagerty, C. G.; Pickens, D.; Kulikowski, C.; Sonnenberg, F.
2000-01-01
Existing text-based clinical practice guidelines can be difficult to put into practice. While a growing number of such documents have gained acceptance in the medical community and contain a wealth of valuable information, the time required to digest them is substantial. Yet the expressive power, subtlety and flexibility of natural language pose challenges when designing computer tools that will help in their application. At the same time, formal computer languages typically lack such expressiveness and the effort required to translate existing documents into these languages may be costly. We propose a method based on the mark-up concept for converting text-based clinical guidelines into a machine-operable form. This allows existing guidelines to be manipulated by machine, and viewed in different formats at various levels of detail according to the needs of the practitioner, while preserving their originally published form. PMID:11079898
New Frontiers in Language Evolution and Development.
Oller, D Kimbrough; Dale, Rick; Griebel, Ulrike
2016-04-01
This article introduces the Special Issue and its focus on research in language evolution with emphasis on theory as well as computational and robotic modeling. A key theme is based on the growth of evolutionary developmental biology or evo-devo. The Special Issue consists of 13 articles organized in two sections: A) Theoretical foundations and B) Modeling and simulation studies. All the papers are interdisciplinary in nature, encompassing work in biological and linguistic foundations for the study of language evolution as well as a variety of computational and robotic modeling efforts shedding light on how language may be developed and may have evolved. Copyright © 2016 Cognitive Science Society, Inc.
Flexible processing and the design of grammar.
Sag, Ivan A; Wasow, Thomas
2015-02-01
We explore the consequences of letting the incremental and integrative nature of language processing inform the design of competence grammar. What emerges is a view of grammar as a system of local monotonic constraints that provide a direct characterization of the signs (the form-meaning correspondences) of a given language. This "sign-based" conception of grammar has provided precise solutions to the key problems long thought to motivate movement-based analyses, has supported three decades of computational research developing large-scale grammar implementations, and is now beginning to play a role in computational psycholinguistics research that explores the use of underspecification in the incremental computation of partial meanings.
A Large-Scale Analysis of Variance in Written Language.
Johns, Brendan T; Jamieson, Randall K
2018-01-22
The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words based on the statistical redundancies contained within natural language (e.g., Griffiths, Steyvers, & Tenenbaum; Jones & Mewhort; Landauer & Dumais; Mikolov, Sutskever, Chen, Corrado, & Dean). The models treat knowledge as an interaction of processing mechanisms and the structure of language experience. But language experience is often treated agnostically. We report a distributional semantic analysis that shows written language in fiction books varies appreciably between books from different genres, books from the same genre, and even books written by the same author. Given that current theories assume that word knowledge reflects an interaction between processing mechanisms and the language environment, the analysis shows the need for the field to engage in a more deliberate consideration and curation of the corpora used in computational studies of natural language processing. Copyright © 2018 Cognitive Science Society, Inc.
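The kind of distributional representation the abstract refers to can be sketched as follows: a word's vector is its co-occurrence counts within a small window, and the same word can receive noticeably different vectors in different corpora. The two miniature "corpora" here are invented; real analyses use large book-length collections.

```python
# Toy illustration of distributional semantics: a word's meaning is
# approximated by its co-occurrence counts, which differ across corpora.
# This is only a sketch of the general approach, not the paper's analysis.

from collections import Counter
import math

def cooc_vector(corpus, target, window=2):
    tokens = corpus.lower().split()
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus_a = "the detective examined the bank records near the river bank"
corpus_b = "the bank raised interest rates and the bank issued new loans"
print(cosine(cooc_vector(corpus_a, "bank"), cooc_vector(corpus_b, "bank")))
```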
Artificial intelligence, expert systems, computer vision, and natural language processing
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1984-01-01
An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.
State of the Art of Natural Language Processing
1987-11-15
work of Chomsky, Hewlett-Packard, Generalized Phrase Structure Grammar, LUNAR, DARPA speech understanding, Schank's Conceptual Dependency Theory...of computers that a machine which understood natural languages was highly desirable. It also was evident from the work of Chomsky* and others that...computers. *Noam Chomsky, Aspects of the Theory of Syntax (Cambridge, Mass.: MIT Press, 1965). One of the earliest attempts at Natural Language
ERIC Educational Resources Information Center
Nash-Webber, Bonnie; Reiter, Raymond
This paper describes a computational approach to certain problems of anaphora in natural language and argues in favor of formal meaning representation languages (MRLs) for natural language. After presenting arguments in favor of formal meaning representation languages, appropriate MRLs are discussed. Minimal requirements include provisions for…
Dethlefs, Nina; Milders, Maarten; Cuayáhuitl, Heriberto; Al-Salkini, Turkey; Douglas, Lorraine
2017-12-01
Currently, an estimated 36 million people worldwide are affected by Alzheimer's disease or related dementias. In the absence of a cure, non-pharmacological interventions such as cognitive stimulation, which slow the rate of deterioration, can benefit people with dementia and their caregivers. Such interventions have been shown to improve well-being and slow the rate of cognitive decline. It has further been shown that cognitive stimulation in interaction with a computer is as effective as with a human. However, the need to operate a computer often represents a difficulty for the elderly and stands in the way of widespread adoption. A possible solution to this obstacle is to provide a spoken natural language interface that allows people with dementia to interact with the cognitive stimulation software in the same way as they would interact with a human caregiver. This makes the assistive technology accessible to users regardless of their technical skills and provides a fully intuitive user experience. This article describes a pilot study that evaluated the feasibility of computer-based cognitive stimulation through a spoken natural language interface. Prototype software was evaluated with 23 users, including healthy elderly people and people with dementia. Feedback was overwhelmingly positive.
Studies of Human Memory and Language Processing.
ERIC Educational Resources Information Center
Collins, Allan M.
The purposes of this study were to determine the nature of human semantic memory and to obtain knowledge usable in the future development of computer systems that can converse with people. The work was based on a computer model which is designed to comprehend English text, relating the text to information stored in a semantic data base that is…
Laboratory process control using natural language commands from a personal computer
NASA Technical Reports Server (NTRS)
Will, Herbert A.; Mackin, Michael A.
1989-01-01
PC software is described which provides flexible natural language process control capability with an IBM PC or compatible machine. Hardware requirements include the PC, and suitable hardware interfaces to all controlled devices. Software required includes the Microsoft Disk Operating System (MS-DOS) operating system, a PC-based FORTRAN-77 compiler, and user-written device drivers. Instructions for use of the software are given as well as a description of an application of the system.
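A hedged sketch of the general pattern of mapping simple natural language commands to device-driver calls is shown below. The device names and command forms are hypothetical; the actual package described above was written for MS-DOS with FORTRAN-77 and user-written device drivers.

```python
# Hedged sketch of mapping simple natural language commands to device calls.
# Device names ("valve", "heater") and the command grammar are invented for
# illustration and do not reflect the NASA package's actual vocabulary.

def open_valve(): print("valve opened")
def close_valve(): print("valve closed")
def set_heater(temp): print(f"heater set to {temp} C")

def execute(command):
    words = command.lower().split()
    if "valve" in words:
        (open_valve if "open" in words else close_valve)()
    elif "heater" in words and "to" in words:
        set_heater(float(words[words.index("to") + 1]))
    else:
        print("command not understood:", command)

execute("open the valve")
execute("set the heater to 45.5")
```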
Zadeh, L A
2001-04-01
Interest in issues relating to consciousness has grown markedly during the last several years. And yet, nobody can claim that consciousness is a well-understood concept that lends itself to precise analysis. It may be argued that, as a concept, consciousness is much too complex to fit into the conceptual structure of existing theories based on Aristotelian logic and probability theory. An approach suggested in this paper links consciousness to perceptions and perceptions to their descriptors in a natural language. In this way, those aspects of consciousness which relate to reasoning and concept formation are linked to what is referred to as the methodology of computing with words (CW). Computing, in its usual sense, is centered on manipulation of numbers and symbols. In contrast, computing with words, or CW for short, is a methodology in which the objects of computation are words and propositions drawn from a natural language (e.g., small, large, far, heavy, not very likely, the price of gas is low and declining, Berkeley is near San Francisco, it is very unlikely that there will be a significant increase in the price of oil in the near future, etc.). Computing with words is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and any computations. Familiar examples of such tasks are parking a car, driving in heavy traffic, playing golf, riding a bicycle, understanding speech, and summarizing a story. Underlying this remarkable capability is the brain's crucial ability to manipulate perceptions--perceptions of distance, size, weight, color, speed, time, direction, force, number, truth, likelihood, and other characteristics of physical and mental objects. Manipulation of perceptions plays a key role in human recognition, decision and execution processes. As a methodology, computing with words provides a foundation for a computational theory of perceptions: a theory which may have an important bearing on how humans make--and machines might make--perception-based rational decisions in an environment of imprecision, uncertainty, and partial truth. A basic difference between perceptions and measurements is that, in general, measurements are crisp, whereas perceptions are fuzzy. One of the fundamental aims of science has been and continues to be that of progressing from perceptions to measurements. Pursuit of this aim has led to brilliant successes. We have sent men to the moon; we can build computers that are capable of performing billions of computations per second; we have constructed telescopes that can explore the far reaches of the universe; and we can date the age of rocks that are millions of years old. But alongside the brilliant successes stand conspicuous underachievements and outright failures. We cannot build robots that can move with the agility of animals or humans; we cannot automate driving in heavy traffic; we cannot translate from one language to another at the level of a human interpreter; we cannot create programs that can summarize non-trivial stories; our ability to model the behavior of economic systems leaves much to be desired; and we cannot build machines that can compete with children in the performance of a wide variety of physical and cognitive tasks. It may be argued that underlying the underachievements and failures is the unavailability of a methodology for reasoning and computing with perceptions rather than measurements. 
An outline of such a methodology--referred to as a computational theory of perceptions--is presented in this paper. The computational theory of perceptions (CTP) is based on the methodology of CW. In CTP, words play the role of labels of perceptions, and, more generally, perceptions are expressed as propositions in a natural language. CW-based techniques are employed to translate propositions expressed in a natural language into what is called the Generalized Constraint Language (GCL). In this language, the meaning of a proposition is expressed as a generalized constraint, X isr R, where X is the constrained variable, R is the constraining relation, and isr is a variable copula in which r is an indexing variable whose value defines the way in which R constrains X. Among the basic types of constraints are possibilistic, veristic, probabilistic, random set, Pawlak set, fuzzy graph, and usuality. The wide variety of constraints in GCL makes GCL a much more expressive language than the language of predicate logic. In CW, the initial and terminal data sets, IDS and TDS, are assumed to consist of propositions expressed in a natural language. These propositions are translated, respectively, into antecedent and consequent constraints. Consequent constraints are derived from antecedent constraints through the use of rules of constraint propagation. The principal constraint propagation rule is the generalized extension principle. (ABSTRACT TRUNCATED)
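One small piece of the computing-with-words machinery described above, the possibilistic constraint and the extension principle, can be sketched numerically: "X is small" is represented by a membership function, and the possibility distribution of Y = 2*X is obtained by taking, for each y, the best membership value among the x that map to it. The membership function and grid are invented for illustration.

```python
# Hedged numeric sketch of one generalized constraint type from CW: the
# possibilistic constraint "X is small", propagated to Y = 2*X via the
# extension principle. Membership values are invented for illustration.

def mu_small(x):
    # Triangular membership: fully "small" at 0, not small at all beyond 5.
    return max(0.0, 1.0 - x / 5.0)

xs = [i * 0.1 for i in range(0, 101)]          # sample X in [0, 10]

def mu_y(y):
    # Extension principle: mu_Y(y) = sup over {x : 2x = y} of mu_X(x).
    return max((mu_small(x) for x in xs if abs(2 * x - y) < 1e-9), default=0.0)

for y in (2.0, 6.0, 12.0):
    print(f"possibility that Y = {y}: {mu_y(y):.2f}")
```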
Natural language processing, pragmatics, and verbal behavior
Cherpas, Chris
1992-01-01
Natural Language Processing (NLP) is that part of Artificial Intelligence (AI) concerned with endowing computers with verbal and listener repertoires, so that people can interact with them more easily. Most attention has been given to accurately parsing and generating syntactic structures, although NLP researchers are finding ways of handling the semantic content of language as well. It is increasingly apparent that understanding the pragmatic (contextual and consequential) dimension of natural language is critical for producing effective NLP systems. While there are some techniques for applying pragmatics in computer systems, they are piecemeal, crude, and lack an integrated theoretical foundation. Unfortunately, there is little awareness that Skinner's (1957) Verbal Behavior provides an extensive, principled pragmatic analysis of language. The implications of Skinner's functional analysis for NLP and for verbal aspects of epistemology lead to a proposal for a "user expert", a computer system whose area of expertise is the long-term computer user. The evolutionary nature of behavior suggests an AI technology known as genetic algorithms/programming for implementing such a system. PMID:22477052
Parton, Becky Sue
2006-01-01
In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.
Discourse Understanding. Technical Report No. 391.
ERIC Educational Resources Information Center
Scha, R. J. H.; And Others
Artificial intelligence research on natural language understanding is discussed in this report using the notions that (1) natural language understanding systems must "see" sentences as elements whose significance resides in the contribution they make to the larger whole, and (2) a natural language understanding computer system must…
Modeling Coevolution between Language and Memory Capacity during Language Origin
Gong, Tao; Shuai, Lan
2015-01-01
Memory is essential to many cognitive tasks including language. Apart from empirical studies of memory effects on language acquisition and use, there have been few evolutionary explorations of whether a high level of memory capacity is a prerequisite for language and whether language origin could influence memory capacity. In line with evolutionary theories that natural selection refined language-related cognitive abilities, we advocated a coevolution scenario between language and memory capacity, which incorporated the genetic transmission of individual memory capacity, cultural transmission of idiolects, and natural and cultural selection on individual reproduction and language teaching. To illustrate the coevolution dynamics, we adopted a multi-agent computational model simulating the emergence of lexical items and simple syntax through iterated communications. Simulations showed that, along with the origin of a communal language, an initially low memory capacity for acquired linguistic knowledge was boosted; that this coherent increase in linguistic understandability and memory capacity reflected a language-memory coevolution; and that the coevolution stopped once memory capacities became sufficient for language communication. Statistical analyses revealed that the coevolution was realized mainly by natural selection based on individual communicative success in cultural transmissions. This work elaborated the biology-culture parallelism of language evolution, demonstrated the driving force of culturally constituted factors for natural selection of individual cognitive abilities, and suggested that the degree difference in language-related cognitive abilities between humans and nonhuman animals could result from a coevolution with language. PMID:26544876
ERIC Educational Resources Information Center
Roid, Gale H.
A computer-assisted instruction (CAI) author language and operating system is available for use by McGill instructors on the university's IBM 360/65 RAX Time-Sharing System. Instructors can use this system to prepare lessons which allow the computer and a student to "converse" in natural language. The instructor prepares a lesson by…
Formalization of Generalized Constraint Language: A Crucial Prelude to Computing With Words.
Khorasani, Elham S; Rahimi, Shahram; Calvert, Wesley
2013-02-01
The generalized constraint language (GCL), introduced by Zadeh, serves as a basis for computing with words (CW). It provides an agenda to express the imprecise and fuzzy information embedded in natural language and allows reasoning with perceptions. Despite its fundamental role, the definition of GCL has remained informal since its introduction by Zadeh, and to our knowledge, no attempt has been made to formulate a rigorous theoretical framework for GCL. Such formalization is necessary for further theoretical and practical advancement of CW for two important reasons. First, it provides the underlying infrastructure for the development of useful inference patterns based on sound theories. Second, it determines the scope of GCL and hence facilitates the translation of natural language expressions into GCL. This paper is an attempt to step in this direction by providing a formal syntax together with a compositional semantics for GCL. A soundness theorem is defined, and Zadeh's deduction rules are proved to be valid in the defined semantics. Furthermore, a discussion is provided on how the proposed language may be used in practice.
Language Model Applications to Spelling with Brain-Computer Interfaces
Mora-Cortes, Anderson; Manyakov, Nikolay V.; Chumerin, Nikolay; Van Hulle, Marc M.
2014-01-01
Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems, particularly when used in conjunction with other AAL technologies. PMID:24675760
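The word-completion component mentioned in the review can be illustrated with a minimal sketch: given a typed prefix, candidate words are ranked by corpus frequency. The tiny frequency table is invented; practical BCI spellers use much larger language models and combine completion with prediction and error correction.

```python
# Minimal sketch of prefix-based word completion for a speller interface.
# The small frequency table is invented for illustration; real systems use
# large language models trained on representative corpora.

FREQ = {"hello": 120, "help": 300, "helmet": 15, "held": 80, "water": 200}

def complete(prefix, k=3):
    """Return up to k vocabulary words starting with prefix, most frequent first."""
    candidates = [(w, f) for w, f in FREQ.items() if w.startswith(prefix)]
    return [w for w, _ in sorted(candidates, key=lambda wf: -wf[1])][:k]

print(complete("hel"))   # -> ['help', 'hello', 'held']
```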
ERIC Educational Resources Information Center
Wood, Peter
2011-01-01
"QuickAssist," the program presented in this paper, uses natural language processing (NLP) technologies. It places a range of NLP tools at the disposal of learners, intended to enable them to independently read and comprehend a German text of their choice while they extend their vocabulary, learn about different uses of particular words,…
The Further Development of CSIEC Project Driven by Application and Evaluation in English Education
ERIC Educational Resources Information Center
Jia, Jiyou; Chen, Weichao
2009-01-01
In this paper, we present the comprehensive version of CSIEC (Computer Simulation in Educational Communication), an interactive web-based human-computer dialogue system with natural language for English instruction, and its tentative application and evaluation in English education. First, we briefly introduce the motivation for this project,…
Running R Statistical Computing Environment Software on the Peregrine
R is a collaborative project for the development of new statistical methodologies and enjoys a large user base. The CRAN task view for High Performance Computing covers programming paradigms to better leverage modern HPC systems.
Programming Languages, Natural Languages, and Mathematics
ERIC Educational Resources Information Center
Naur, Peter
1975-01-01
Analogies are drawn between the social aspects of programming and similar aspects of mathematics and natural languages. By analogy with the history of auxiliary languages it is suggested that Fortran and Cobol will remain dominant. (Available from the Association of Computing Machinery, 1133 Avenue of the Americas, New York, NY 10036.) (Author/TL)
Dataflow computing approach in high-speed digital simulation
NASA Technical Reports Server (NTRS)
Ercegovac, M. D.; Karplus, W. J.
1984-01-01
New computational tools and methodologies for the digital simulation of continuous systems were explored. Programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and data flow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing data flow languages and develop an experimental data flow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of data flow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.
ERIC Educational Resources Information Center
Kiraz, George Anton
This book presents a tractable computational model that can cope with complex morphological operations, especially in Semitic languages, and less complex morphological systems present in Western languages. It outlines a new generalized regular rewrite rule system that uses multiple finite-state automata to cater to root-and-pattern morphology,…
An Intelligent Computer Assisted Language Learning System for Arabic Learners
ERIC Educational Resources Information Center
Shaalan, Khaled F.
2005-01-01
This paper describes the development of an intelligent computer-assisted language learning (ICALL) system for learning Arabic. This system could be used for learning Arabic by students at primary schools or by learners of Arabic as a second or foreign language. It explores the use of Natural Language Processing (NLP) techniques for learning…
Learning from a Computer Tutor with Natural Language Capabilities
ERIC Educational Resources Information Center
Michael, Joel; Rovick, Allen; Glass, Michael; Zhou, Yujian; Evens, Martha
2003-01-01
CIRCSIM-Tutor is a computer tutor designed to carry out a natural language dialogue with a medical student. Its domain is the baroreceptor reflex, the part of the cardiovascular system that is responsible for maintaining a constant blood pressure. CIRCSIM-Tutor's interaction with students is modeled after the tutoring behavior of two experienced…
Linguistics and Information Science
ERIC Educational Resources Information Center
Montgomery, Christine A.
1972-01-01
This paper defines the relationship between linguistics and information science in terms of a common interest in natural language. The concept of a natural language information system is introduced as a framework for reviewing automated language processing efforts by computational linguists and information scientists. (96 references) (Author)
Zheng, Kai; Mei, Qiaozhu; Yang, Lei; Manion, Frank J.; Balis, Ulysses J.; Hanauer, David A.
2011-01-01
In this study, we comparatively examined the linguistic properties of narrative clinician notes created through voice dictation versus those directly entered by clinicians via a computer keyboard. Intuitively, the nature of voice-dictated notes would resemble that of natural language, while typed-in notes may demonstrate distinctive language features for reasons such as intensive usage of acronyms. The study analyses were based on an empirical dataset retrieved from our institutional electronic health records system. The dataset contains 30,000 voice-dictated notes and 30,000 notes that were entered manually; both were encounter notes generated in ambulatory care settings. The results suggest that between the narrative clinician notes created via these two different methods, there exists a considerable amount of lexical and distributional differences. Such differences could have a significant impact on the performance of natural language processing tools, necessitating these two different types of documents being differentially treated. PMID:22195229
A Framework for Representing and Jointly Reasoning over Linguistic and Non-Linguistic Knowledge
ERIC Educational Resources Information Center
Murugesan, Arthi
2009-01-01
Natural language poses several challenges to developing computational systems for modeling it. Natural language is not a precise problem but is rather ridden with a number of uncertainties in the form of either alternate words or interpretations. Furthermore, natural language is a generative system where the problem size is potentially infinite.…
Advances in natural language processing.
Hirschberg, Julia; Manning, Christopher D
2015-07-17
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.
The emergence of Zipf's law - Spontaneous encoding optimization by users of a command language
NASA Technical Reports Server (NTRS)
Ellis, S. R.; Hitchcock, R. J.
1986-01-01
The distribution of commands issued by experienced users of a computer operating system allowing command customization tends to conform to Zipf's law. This result documents the emergence of a statistical property of natural language as users master an artificial language. Analysis of Zipf's law by Mandelbrot and Cherry shows that its emergence in the computer interaction of experienced users may be interpreted as evidence that these users optimize their encoding of commands. Accordingly, the extent to which users of a command language exhibit Zipf's law can provide a metric of the naturalness and efficiency with which that language is used.
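Checking for Zipf-like behavior in a command log reduces to fitting the slope of log frequency against log rank, as in the sketch below. The command log is invented; a fitted slope near -1 would be consistent with Zipf's law.

```python
# Sketch of checking Zipf's law on a command log: sort commands by frequency
# and fit the slope of log(frequency) against log(rank). The log is invented.
import math
from collections import Counter

log = ["ls", "cd", "ls", "grep", "ls", "cd", "vim", "ls", "grep", "cd", "ls", "vim"]

freqs = sorted(Counter(log).values(), reverse=True)
x = [math.log(r) for r in range(1, len(freqs) + 1)]
y = [math.log(f) for f in freqs]

# Ordinary least-squares slope of y on x.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
print(f"fitted exponent: {slope:.2f}")
```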
A System for Natural Language Sentence Generation.
ERIC Educational Resources Information Center
Levison, Michael; Lessard, Gregory
1992-01-01
Describes the natural language computer program, "Vinci." Explains that using an attribute grammar formalism, Vinci can simulate components of several current linguistic theories. Considers the design of the system and its applications in linguistic modelling and second language acquisition research. Notes Vinci's uses in linguistics…
Computer Applications in Professional Writing: Systems that Analyze and Describe Natural Language.
ERIC Educational Resources Information Center
O'Brien, Frank
Two varieties of user-friendly computer systems that deal with natural language are now available, providing either at-the-monitor stylistic and grammatic correction of keyed-in writing or a sorting, selecting, and generating of statistical data for any written or spoken document. The editor programs, such as "The Writer's Workbench"…
BIT BY BIT: A Game Simulating Natural Language Processing in Computers
ERIC Educational Resources Information Center
Kato, Taichi; Arakawa, Chuichi
2008-01-01
BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…
ERIC Educational Resources Information Center
Graesser, Arthur; McNamara, Danielle
2010-01-01
This article discusses the occurrence and measurement of self-regulated learning (SRL) both in human tutoring and in computer tutors with agents that hold conversations with students in natural language and help them learn at deeper levels. One challenge in building these computer tutors is to accommodate, encourage, and scaffold SRL because these…
Integration of Speech and Natural Language
1988-04-01
major activities: • Development of the syntax and semantics components for natural language processing. • Integration of the developed syntax and... evaluating the performance of speech recognition algorithms developed under the Strategic Computing Program. Our work on natural language processing... included the development of a grammar (syntax) that uses the Unification grammar formalism (an augmented context-free formalism). The Unification
Modeling the Emergence of Lexicons in Homesign Systems
Richie, Russell; Yang, Charles; Coppola, Marie
2014-01-01
It is largely acknowledged that natural languages emerge from not just human brains, but also from rich communities of interacting human brains (Senghas, 2005). Yet the precise role of such communities and such interaction in the emergence of core properties of language has largely gone uninvestigated in naturally emerging systems, leaving the few existing computational investigations of this issue in an artificial setting. Here we take a step towards investigating the precise role of community structure in the emergence of linguistic conventions with both naturalistic empirical data and computational modeling. We first show conventionalization of lexicons in two different classes of naturally emerging signed systems: (1) protolinguistic “homesigns” invented by linguistically isolated Deaf individuals, and (2) a natural sign language emerging in a recently formed rich Deaf community. We find that the latter conventionalized faster than the former. Second, we model conventionalization as a population of interacting individuals who adjust their probability of sign use in response to other individuals' actual sign use, following an independently motivated model of language learning (Yang 2002, 2004). Simulations suggest that a richer social network, like that of natural (signed) languages, conventionalizes faster than a sparser social network, like that of homesign systems. We discuss our behavioral and computational results in light of other work on language emergence and other work on behavior in complex networks. PMID:24482343
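The sketch below is only a loose illustration of the kind of conventionalization simulation described above, not the authors' model: agents hold a probability of producing one sign variant and nudge it toward what they actually hear, and a denser interaction network would be expected to push the population toward agreement faster. The population size, update rule, and parameters are all assumptions made for the example.

import random

def simulate(n_agents=20, n_rounds=5000, learning_rate=0.05, density=1.0, seed=0):
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n_agents)]          # P(agent uses variant A)
    # each agent interacts with only a fraction `density` of the others
    neighbors = [
        [j for j in range(n_agents) if j != i and rng.random() < density]
        for i in range(n_agents)
    ]
    for _ in range(n_rounds):
        speaker = rng.randrange(n_agents)
        if not neighbors[speaker]:
            continue
        listener = rng.choice(neighbors[speaker])
        used_a = rng.random() < p[speaker]               # speaker produces a variant
        target = 1.0 if used_a else 0.0
        p[listener] += learning_rate * (target - p[listener])  # linear-reward update
    mean_p = sum(p) / n_agents
    return max(mean_p, 1.0 - mean_p)                     # 1.0 = full conventionalization

print("dense network agreement:", round(simulate(density=1.0), 2))
print("sparse network agreement:", round(simulate(density=0.15), 2))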
Bengali-English Relevant Cross Lingual Information Access Using Finite Automata
NASA Astrophysics Data System (ADS)
Banerjee, Avishek; Bhattacharyya, Swapan; Hazra, Simanta; Mondal, Shatabdi
2010-10-01
CLIR techniques search unrestricted texts, typically extracting terms and relationships from bilingual electronic dictionaries or bilingual text collections and using them to translate query and/or document representations into a compatible set of representations with a common feature set. In this paper, we focus on a dictionary-based approach, using a bilingual data dictionary in combination with statistics-based methods to avoid the problem of ambiguity; developing the human-computer interface aspects of NLP (Natural Language Processing) is also an aim of this paper. Intelligent web search in a regional language such as Bengali depends upon two major aspects: CLIA (Cross Language Information Access) and NLP. In our previous work with IIT Kharagpur we developed content-based CLIA, in which content-based searching is trained on Bengali corpora with the help of a Bengali data dictionary. Here we introduce intelligent search, which recognizes the sense of meaning of a sentence and offers a more realistic approach to human-computer interaction.
NASA Astrophysics Data System (ADS)
Dragan, Laurentiu; Watt, Stephen M.
Computer algebra in scientific computation squarely faces the dilemma of natural mathematical expression versus efficiency. While higher-order programming constructs and parametric polymorphism provide a natural and expressive language for mathematical abstractions, they can come at a considerable cost. We investigate how deeply nested type constructions may be optimized to achieve performance similar to that of hand-tuned code written in lower-level languages.
ERIC Educational Resources Information Center
Knapp, Sara D., Comp.
This book is designed primarily to help users find meaningful words for natural language, or free-text, computer searching of bibliographic and textual databases in the social and behavioral sciences. Additionally, it covers many socially relevant and technical topics not covered by the usual literary thesaurus, therefore it may also be useful for…
A Python Geospatial Language Toolkit
NASA Astrophysics Data System (ADS)
Fillmore, D.; Pletzer, A.; Galloy, M.
2012-12-01
The volume and scope of geospatial data archives, such as collections of satellite remote sensing or climate model products, has been rapidly increasing and will continue to do so in the near future. The recently launched (October 2011) Suomi National Polar-orbiting Partnership satellite (NPP) for instance, is the first of a new generation of Earth observation platforms that will monitor the atmosphere, oceans, and ecosystems, and its suite of instruments will generate several terabytes each day in the form of multi-spectral images and derived datasets. Full exploitation of such data for scientific analysis and decision support applications has become a major computational challenge. Geophysical data exploration and knowledge discovery could benefit, in particular, from intelligent mechanisms for extracting and manipulating subsets of data relevant to the problem of interest. Potential developments include enhanced support for natural language queries and directives to geospatial datasets. The translation of natural language (that is, human spoken or written phrases) into complex but unambiguous objects and actions can be based on a context, or knowledge domain, that represents the underlying geospatial concepts. This poster describes a prototype Python module that maps English phrases onto basic geospatial objects and operations. This module, along with the associated computational geometry methods, enables the resolution of natural language directives that include geographic regions of arbitrary shape and complexity.
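A minimal sketch of the general phrase-to-region mapping idea described above; the region names, bounding boxes, and function names are invented for illustration and are not part of the prototype module.

from dataclasses import dataclass

@dataclass
class BoundingBox:
    west: float
    south: float
    east: float
    north: float

# Hypothetical named regions; a real module would hold far richer geometry.
REGIONS = {
    "tropics": BoundingBox(-180.0, -23.44, 180.0, 23.44),
    "arctic": BoundingBox(-180.0, 66.56, 180.0, 90.0),
}

def resolve_phrase(phrase: str) -> BoundingBox:
    """Map a free-form phrase such as 'mean temperature over the tropics'
    onto a geospatial region by simple keyword lookup."""
    for name, box in REGIONS.items():
        if name in phrase.lower():
            return box
    raise ValueError(f"no known region in phrase: {phrase!r}")

print(resolve_phrase("average rainfall over the Tropics"))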
Semantic Grammar: An Engineering Technique for Constructing Natural Language Understanding Systems.
ERIC Educational Resources Information Center
Burton, Richard R.
In an attempt to overcome the lack of natural means of communication between student and computer, this thesis addresses the problem of developing a system which can understand natural language within an educational problem-solving environment. The nature of the environment imposes efficiency, habitability, self-teachability, and awareness of…
Brain-computer interface with language model-electroencephalography fusion for locked-in syndrome.
Oken, Barry S; Orhan, Umut; Roark, Brian; Erdogmus, Deniz; Fowler, Andrew; Mooney, Aimee; Peters, Betts; Miller, Meghan; Fried-Oken, Melanie B
2014-05-01
Some noninvasive brain-computer interface (BCI) systems are currently available for locked-in syndrome (LIS) but none have incorporated a statistical language model during text generation. To begin to address the communication needs of individuals with LIS using a noninvasive BCI that involves rapid serial visual presentation (RSVP) of symbols and a unique classifier with electroencephalography (EEG) and language model fusion. The RSVP Keyboard was developed with several unique features. Individual letters are presented at 2.5 per second. Computer classification of letters as targets or nontargets based on EEG is performed using machine learning that incorporates a language model for letter prediction via Bayesian fusion enabling targets to be presented only 1 to 4 times. Nine participants with LIS and 9 healthy controls were enrolled. After screening, subjects first calibrated the system, and then completed a series of balanced word generation mastery tasks that were designed with 5 incremental levels of difficulty, which increased by selecting phrases for which the utility of the language model decreased naturally. Six participants with LIS and 9 controls completed the experiment. All LIS participants successfully mastered spelling at level 1 and one subject achieved level 5. Six of 9 control participants achieved level 5. Individuals who have incomplete LIS may benefit from an EEG-based BCI system, which relies on EEG classification and a statistical language model. Steps to further improve the system are discussed.
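The following sketch illustrates only the general Bayesian-fusion idea described above: a language-model prior over the next letter is multiplied by EEG-classifier likelihoods from each presentation and renormalized. All probabilities and names are invented; this is not the RSVP Keyboard's actual classifier.

def fuse(prior, eeg_likelihoods):
    """prior: {letter: P(letter | text so far)} from the language model.
    eeg_likelihoods: list of {letter: P(EEG evidence | letter)}, one per presentation.
    Returns the posterior over letters after Bayesian updating."""
    posterior = dict(prior)
    for likelihood in eeg_likelihoods:
        for letter in posterior:
            posterior[letter] *= likelihood.get(letter, 1e-6)
        total = sum(posterior.values())
        posterior = {k: v / total for k, v in posterior.items()}
    return posterior

prior = {"A": 0.6, "B": 0.3, "C": 0.1}          # invented language-model prior
evidence = [{"A": 0.2, "B": 0.7, "C": 0.4}]     # invented likelihoods from one presentation
print(fuse(prior, evidence))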
Paradigms of Evaluation in Natural Language Processing: Field Linguistics for Glass Box Testing
ERIC Educational Resources Information Center
Cohen, Kevin Bretonnel
2010-01-01
Although software testing has been well-studied in computer science, it has received little attention in natural language processing. Nonetheless, a fully developed methodology for glass box evaluation and testing of language processing applications already exists in the field methods of descriptive linguistics. This work lays out a number of…
ERIC Educational Resources Information Center
Erdocia, Kepa; Laka, Itziar; Mestres-Misse, Anna; Rodriguez-Fornells, Antoni
2009-01-01
In natural languages some syntactic structures are simpler than others. Syntactically complex structures require further computation that is not required by syntactically simple structures. In particular, canonical, basic word order represents the simplest sentence-structure. Natural languages have different canonical word orders, and they vary in…
Voice-enabled Knowledge Engine using Flood Ontology and Natural Language Processing
NASA Astrophysics Data System (ADS)
Sermet, M. Y.; Demir, I.; Krajewski, W. F.
2015-12-01
The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts, flood-related data, information, and interactive visualizations for communities in Iowa. The IFIS is designed for use by the general public, often people with no domain knowledge and a limited general science background. To improve communication with such an audience, we have introduced a voice-enabled knowledge engine on flood-related issues in IFIS. Instead of requiring users to navigate the many features and interfaces of the information system and web-based sources, the system provides dynamic computations based on a collection of built-in data, analyses, and methods. The IFIS Knowledge Engine connects to real-time stream gauges, in-house data sources, and analysis and visualization tools to answer natural language questions. Our goal is the systematization of data and modeling results on flood-related issues in Iowa, and an interface for definitive answers to factual queries. The aim of the knowledge engine is to make all flood-related knowledge in Iowa easily accessible to everyone and to support voice-enabled natural language input. We aim to integrate and curate all flood-related data, implement analytical and visualization tools, and make it possible to compute answers from questions. The IFIS explicitly implements analytical methods and models as algorithms and curates all flood-related data and resources so that these resources are computable. The IFIS Knowledge Engine computes an answer by deriving it from its computational knowledge base: it processes the statement, accesses the data warehouse, runs complex database queries on the server side, and returns outputs in various formats. This presentation provides an overview of the IFIS Knowledge Engine and its unique information interface and functionality as an educational tool, and discusses future plans for providing knowledge on flood-related issues and resources. The IFIS Knowledge Engine provides an alternative access method to the comprehensive set of tools and data resources available in IFIS. The current implementation accepts free-form input and supports voice recognition within browser and mobile applications.
Towards Automatic Treatment of Natural Language.
ERIC Educational Resources Information Center
Lonsdale, Deryle
1984-01-01
Because automated natural language processing relies heavily on the still developing fields of linguistics, knowledge representation, and computational linguistics, no system is capable of mimicking human linguistic capabilities. For the present, interactive systems may be used to augment today's technology. (MSE)
NASA Astrophysics Data System (ADS)
Hudson, Richard
2017-07-01
This paper [4] - referred to below as 'LXL' - is an excellent example of cross-disciplinary work which brings together three very different disciplines, each with its different methods: quantitative computational linguistics (exploring big data), psycholinguistics (using experiments with human subjects) and theoretical linguistics (building models based on language descriptions). The measured unit is the dependency between two words, as defined by theoretical linguistics, and the question is how the length of this dependency affects the choices made by writers, as revealed in big data from a wide range of languages.
Computer-Mediated Communication as an Autonomy-Enhancement Tool for Advanced Learners of English
ERIC Educational Resources Information Center
Wach, Aleksandra
2012-01-01
This article examines the relevance of modern technology for the development of learner autonomy in the process of learning English as a foreign language. Computer-assisted language learning and computer-mediated communication (CMC) appear to be particularly conducive to fostering autonomous learning, as they naturally incorporate many elements of…
A Guide to IRUS-II Application Development
1989-09-01
Stallard (editors). Research and Development in Natural Language Understanding as Part of the Strategic Computing Program, chapter 3, pages 27-34... Development in Natural Language Processing in the Strategic Computing Program. Computational Linguistics 12(2):132-136, April-June 1986. [24] Sidner, C.L... assist developers interested in adapting IRUS-II to new application domains. Chapter 2 provides a general introduction and overview. Chapter 3 describes
Directly Comparing Computer and Human Performance in Language Understanding and Visual Reasoning.
ERIC Educational Resources Information Center
Baker, Eva L.; And Others
Evaluation models are being developed for assessing artificial intelligence (AI) systems in terms of similar performance by groups of people. Natural language understanding and vision systems are the areas of concentration. In simplest terms, the goal is to norm a given natural language system's performance on a sample of people. The specific…
Natural Resource Information System, design analysis
NASA Technical Reports Server (NTRS)
1972-01-01
The computer-based system stores, processes, and displays map data relating to natural resources. The system was designed on the basis of requirements established in a user survey and an analysis of decision flow. The design analysis effort is described, and the rationale behind major design decisions, including map processing, cell vs. polygon, choice of classification systems, mapping accuracy, system hardware, and software language is summarized.
ERIC Educational Resources Information Center
Ouellon, Conrad, Comp.
Presentations from a colloquium on applications of research on natural languages to computer science address the following topics: (1) analysis of complex adverbs; (2) parser use in computerized text analysis; (3) French language utilities; (4) lexicographic mapping of official language notices; (5) phonographic codification of Spanish; (6)…
Top-down methodology for human factors research
NASA Technical Reports Server (NTRS)
Sibert, J.
1983-01-01
User-computer interaction as a conversation is discussed. A design approach for user interfaces that depends on viewing communication between a user and the computer as a conversation is presented. This conversation includes inputs to the computer (outputs from the user), outputs from the computer (inputs to the user), and the sequencing in both time and space of those outputs and inputs. The conversation is viewed from the user's side. Two languages are modeled: the one with which the user communicates with the computer, and the one through which communication flows from the computer to the user. Both languages exist on three levels: the semantic, syntactic, and lexical. It is suggested that natural languages can also be considered in these terms.
Computer Aided Management for Information Processing Projects.
ERIC Educational Resources Information Center
Akman, Ibrahim; Kocamustafaogullari, Kemal
1995-01-01
Outlines the nature of information processing projects and discusses some project management programming packages. Describes an in-house interface program developed to utilize a selected project management package (TIMELINE) by using Oracle Data Base Management System tools and Pascal programming language for the management of information system…
Natural Language Processing Technologies in Radiology Research and Clinical Applications.
Cai, Tianrun; Giannopoulos, Andreas A; Yu, Sheng; Kelil, Tatiana; Ripley, Beth; Kumamaru, Kanako K; Rybicki, Frank J; Mitsouras, Dimitrios
2016-01-01
The migration of imaging reports to electronic medical record systems holds great potential in terms of advancing radiology research and practice by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the heterogeneity of how these data are formatted. Indeed, although there is movement toward structured reporting in radiology (ie, hierarchically itemized reporting with use of standardized terminology), the majority of radiology reports remain unstructured and use free-form language. To effectively "mine" these large datasets for hypothesis testing, a robust strategy for extracting the necessary information is needed. Manual extraction of information is a time-consuming and often unmanageable task. "Intelligent" search engines that instead rely on natural language processing (NLP), a computer-based approach to analyzing free-form text or speech, can be used to automate this data mining task. The overall goal of NLP is to translate natural human language into a structured format (ie, a fixed collection of elements), each with a standardized set of choices for its value, that is easily manipulated by computer programs to (among other things) order into subcategories or query for the presence or absence of a finding. The authors review the fundamentals of NLP and describe various techniques that constitute NLP in radiology, along with some key applications. ©RSNA, 2016.
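As a toy illustration of the "free text to structured format" idea described above, the sketch below flags findings in report sentences as present or absent using a few negation cues. The cue list and finding names are assumptions for the example; real systems use far richer NLP than this.

import re

NEGATION_CUES = ("no ", "without ", "negative for ")   # hypothetical cue list
FINDINGS = ("pneumothorax", "effusion", "fracture")    # hypothetical target findings

def extract(report: str):
    """Return a structured {finding: 'present'|'absent'} mapping for one report."""
    structured = {}
    for sentence in re.split(r"[.;]\s*", report.lower()):
        for finding in FINDINGS:
            if finding in sentence:
                negated = any(cue in sentence for cue in NEGATION_CUES)
                structured[finding] = "absent" if negated else "present"
    return structured

print(extract("Small right pleural effusion. No pneumothorax."))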
Natural Language Processing Technologies in Radiology Research and Clinical Applications
Cai, Tianrun; Giannopoulos, Andreas A.; Yu, Sheng; Kelil, Tatiana; Ripley, Beth; Kumamaru, Kanako K.; Rybicki, Frank J.
2016-01-01
The migration of imaging reports to electronic medical record systems holds great potential in terms of advancing radiology research and practice by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the heterogeneity of how these data are formatted. Indeed, although there is movement toward structured reporting in radiology (ie, hierarchically itemized reporting with use of standardized terminology), the majority of radiology reports remain unstructured and use free-form language. To effectively “mine” these large datasets for hypothesis testing, a robust strategy for extracting the necessary information is needed. Manual extraction of information is a time-consuming and often unmanageable task. “Intelligent” search engines that instead rely on natural language processing (NLP), a computer-based approach to analyzing free-form text or speech, can be used to automate this data mining task. The overall goal of NLP is to translate natural human language into a structured format (ie, a fixed collection of elements), each with a standardized set of choices for its value, that is easily manipulated by computer programs to (among other things) order into subcategories or query for the presence or absence of a finding. The authors review the fundamentals of NLP and describe various techniques that constitute NLP in radiology, along with some key applications. ©RSNA, 2016 PMID:26761536
NASA Astrophysics Data System (ADS)
Gómez-Rodríguez, Carlos
2017-07-01
Liu et al. [1] provide a comprehensive account of research on dependency distance in human languages. While the article is a very rich and useful report on this complex subject, here I will expand on a few specific issues where research in computational linguistics (specifically natural language processing) can inform DDM research, and vice versa. These aspects have not been explored much in [1] or elsewhere, probably due to the little overlap between both research communities, but they may provide interesting insights for improving our understanding of the evolution of human languages, the mechanisms by which the brain processes and understands language, and the construction of effective computer systems to achieve this goal.
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803
Comparability of a Paper-Based Language Test and a Computer-Based Language Test.
ERIC Educational Resources Information Center
Choi, Inn-Chull; Kim, Kyoung Sung; Boo, Jaeyool
2003-01-01
Utilizing the Test of English Proficiency developed by Seoul National University (TEPS), this study examined the comparability of the paper-based language test and the computer-based language test through content and construct validation, employing content analyses based on corpus linguistic techniques in addition to such statistical analyses as…
Interactive Simulated Patient: Experiences with Collaborative E-Learning in Medicine
ERIC Educational Resources Information Center
Bergin, Rolf; Youngblood, Patricia; Ayers, Mary K.; Boberg, Jonas; Bolander, Klara; Courteille, Olivier; Dev, Parvati; Hindbeck, Hans; Edward, Leonard E., II; Stringer, Jennifer R.; Thalme, Anders; Fors, Uno G. H.
2003-01-01
Interactive Simulated Patient (ISP) is a computer-based simulation tool designed to provide medical students with the opportunity to practice their clinical problem solving skills. The ISP system allows students to perform most clinical decision-making procedures in a simulated environment, including history taking in natural language, many…
In Vitro Evaluation of a Program for Machine-Aided Indexing.
ERIC Educational Resources Information Center
Jacquemin, Christian; Daille, Beatrice; Royaute, Jean; Polanco, Xavier
2002-01-01
Presents the human evaluation of ILIAD, a program for machine-aided indexing that was designed to assist expert librarians in computer-aided indexing and document analysis. Topics include controlled indexing and free indexing; natural language and concept-based information retrieval; evaluation methodology; syntactic variations; and a comparison…
Supporting Second Language Writing Using Multimodal Feedback
ERIC Educational Resources Information Center
Elola, Idoia; Oskoz, Ana
2016-01-01
The educational use of computer-based feedback in the classroom is becoming widespread. However, less is known about (1) the extent to which tools influence how instructors provide written and oral comments, and (2) whether receiving oral or written feedback influences the nature of learners' revisions. This case study, which expands existing…
HAL/SM language specification. [programming languages and computer programming for space shuttles
NASA Technical Reports Server (NTRS)
Williams, G. P. W., Jr.; Ross, C.
1975-01-01
A programming language is presented for the flight software of the NASA Space Shuttle program. It is intended to satisfy virtually all of the flight software requirements of the space shuttle. To achieve this, it incorporates a wide range of features, including applications-oriented data types and organizations, real time control mechanisms, and constructs for systems programming tasks. It is a higher order language designed to allow programmers, analysts, and engineers to communicate with the computer in a form approximating natural mathematical expression. Parts of the English language are combined with standard notation to provide a tool that readily encourages programming without demanding computer hardware expertise. Block diagrams and flow charts are included. The semantics of the language is discussed.
Natural language information retrieval in digital libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strzalkowski, T.; Perez-Carballo, J.; Marinescu, M.
In this paper we report on some recent developments in the joint NYU and GE natural language information retrieval system. The main characteristic of this system is the use of advanced natural language processing to enhance the effectiveness of term-based document retrieval. The system is designed around a traditional statistical backbone consisting of an indexer module, which builds inverted index files from pre-processed documents, and a retrieval engine, which searches and ranks the documents in response to user queries. Natural language processing is used to (1) preprocess the documents in order to extract content-carrying terms, (2) discover inter-term dependencies and build a conceptual hierarchy specific to the database domain, and (3) process the user's natural language requests into effective search queries. This system has been used in NIST-sponsored Text Retrieval Conferences (TREC), where we worked with approximately 3.3 GBytes of text articles including material from the Wall Street Journal, the Associated Press newswire, the Federal Register, Ziff Communications's Computer Library, Department of Energy abstracts, U.S. Patents, and the San Jose Mercury News, totaling more than 500 million words of English. The system has been designed to facilitate scalability to deal with ever increasing amounts of data. In particular, a randomized index-splitting mechanism has been installed which allows the system to create a number of smaller indexes that can be independently and efficiently searched.
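A minimal stand-in for the statistical backbone described above (an inverted index plus term-based ranking); it is not the NYU/GE system, and the scoring here is plain summed term frequency rather than that system's actual weighting.

from collections import defaultdict

def build_index(docs):
    """Build an inverted index: term -> {doc_id: term count}."""
    index = defaultdict(lambda: defaultdict(int))
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term][doc_id] += 1
    return index

def search(index, query):
    """Rank documents by summed frequency of the query terms they contain."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id, count in index.get(term, {}).items():
            scores[doc_id] += count
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

docs = {1: "natural language information retrieval",
        2: "statistical ranking of retrieved documents"}
print(search(build_index(docs), "natural language retrieval"))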
Natural language processing and the Now-or-Never bottleneck.
Gómez-Rodríguez, Carlos
2016-01-01
Researchers, motivated by the need to improve the efficiency of natural language processing tools to handle web-scale data, have recently arrived at models that remarkably match the expected features of human language processing under the Now-or-Never bottleneck framework. This provides additional support for said framework and highlights the research potential in the interaction between applied computational linguistics and cognitive science.
1987-04-01
facilities. BBN is developing a series of increasingly sophisticated natural language understanding systems which will serve as an integrated interface... Haas, A.R. A Syntactic Theory of Belief and Action. Artificial Intelligence, 1986. Forthcoming. [6] Hinrichs, E. Temporale Anaphora im Englischen
Rassinoux, Anne-Marie; Baud, Robert H; Rodrigues, Jean-Marie; Lovis, Christian; Geissbühler, Antoine
2007-01-01
The importance of clinical communication between providers, consumers, and others, as well as the requirement for computer interoperability, strengthens the need for sharing commonly accepted terminologies. Under the directives of the World Health Organization (WHO), an approach is currently being conducted in Australia to adopt a standardized terminology for medical procedures that is intended to become an international reference. In order to achieve such a standard, a collaborative approach is adopted, in line with the successful experiment conducted for the development of the new French coding system CCAM. Different coding centres are involved in setting up a semantic representation of each term using a formal ontological structure expressed through a logic-based representation language. From this language-independent representation, multilingual natural language generation (NLG) is performed to produce noun phrases in various languages that are then compared for consistency with the original terms. Outcomes are presented for the assessment of the International Classification of Health Interventions (ICHI) and its translation into Portuguese. The initial results clearly emphasize the feasibility and cost-effectiveness of the proposed method for handling both a different classification and an additional language. NLG tools based on ontology-driven semantic representation facilitate the discovery of ambiguous and inconsistent terms and, as such, should be promoted for establishing coherent international terminologies.
Hupa Natural Resources Dictionary.
ERIC Educational Resources Information Center
Bennett, Ruth, Ed.; And Others
Created by children in grades 5-8 who were enrolled in a year-long Hupa language class, this computer-generated, bilingual book contains descriptions and illustrations of local animals, birds, and fish. The introduction explains that students worked on a Macintosh computer able to print the Unifon alphabet used in writing the Hupa language.…
A Computational Model of Linguistic Humor in Puns
ERIC Educational Resources Information Center
Kao, Justine T.; Levy, Roger; Goodman, Noah D.
2016-01-01
Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we…
The ALICE System: A Workbench for Learning and Using Language.
ERIC Educational Resources Information Center
Levin, Lori; And Others
1991-01-01
ALICE, a multimedia framework for intelligent computer-assisted language instruction (ICALI) at Carnegie Mellon University (PA), consists of a set of tools for building a number of different types of ICALI programs in any language. Its Natural Language Processing tools for syntactic error detection, morphological analysis, and generation of…
Learning by Communicating in Natural Language with Conversational Agents
ERIC Educational Resources Information Center
Graesser, Arthur; Li, Haiying; Forsyth, Carol
2014-01-01
Learning is facilitated by conversational interactions both with human tutors and with computer agents that simulate human tutoring and ideal pedagogical strategies. In this article, we describe some intelligent tutoring systems (e.g., AutoTutor) in which agents interact with students in natural language while being sensitive to their cognitive…
Design of Lexicons in Some Natural Language Systems.
ERIC Educational Resources Information Center
Cercone, Nick; Mercer, Robert
1980-01-01
Discusses an investigation of certain problems concerning the structural design of lexicons used in computational approaches to natural language understanding. Emphasizes three aspects of design: retrieval of relevant portions of lexicals items, storage requirements, and representation of meaning in the lexicon. (Available from ALLC, Dr. Rex Last,…
Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H
2000-09-01
Generalised architecture for languages, encyclopedia and nomenclatures in medicine (GALEN) has developed a new generation of terminology tools based on a language independent model describing the semantics and allowing computer processing and multiple reuses as well as natural language understanding systems applications to facilitate the sharing and maintaining of consistent medical knowledge. During the European Union 4 Th. framework program project GALEN-IN-USE and later on within two contracts with the national health authorities we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures named CCAM in a minority language country, France. On one hand, we contributed to a language independent knowledge repository and multilingual semantic dictionaries for multicultural Europe. On the other hand, we support the traditional process for creating a new coding system in medicine which is very much labour consuming by artificial intelligence tools using a medically oriented recursive ontology and natural language processing. We used an integrated software named CLAW (for classification workbench) to process French professional medical language rubrics produced by the national colleges of surgeons domain experts into intermediate dissections and to the Grail reference ontology model representation. From this language independent concept model representation, on one hand, we generate with the LNAT natural language generator controlled French natural language to support the finalization of the linguistic labels (first generation) in relation with the meanings of the conceptual system structure. On the other hand, the Claw classification manager proves to be very powerful to retrieve the initial domain experts rubrics list with different categories of concepts (second generation) within a semantic structured representation (third generation) bridge to the electronic patient record detailed terminology.
Integration of language and sensor information
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.; Weijers, Bertus
2003-04-01
The talk describes the development of basic technologies of intelligent systems fusing data from multiple domains and leading to automated computational techniques for understanding data contents. Understanding involves inferring appropriate decisions and recommending proper actions, which in turn requires fusion of data and knowledge about objects, situations, and actions. Data might include sensory data, verbal reports, intelligence intercepts, or public records, whereas knowledge ought to encompass the whole range of objects, situations, people and their behavior, and knowledge of languages. In the past, a fundamental difficulty in combining knowledge with data was the combinatorial complexity of computations: too many combinations of data and knowledge pieces had to be evaluated. Recent progress in understanding natural intelligent systems, including the human mind, has led to the development of neurophysiologically motivated architectures for solving these challenging problems, in particular the role of emotional neural signals in overcoming the combinatorial complexity of older logic-based approaches. Whereas past approaches based on logic tended to identify logic with language and thinking, recent studies in cognitive linguistics have led to an appreciation of the more complicated nature of linguistic models. Little is known about the details of the brain mechanisms integrating language and thinking. Understanding and fusing linguistic information with sensory data represents a novel, challenging aspect of the development of integrated fusion systems. The presentation describes a non-combinatorial approach to this problem and outlines techniques that can be used for fusing diverse and uncertain knowledge with sensory and linguistic data.
Technology assessment of advanced automation for space missions
NASA Technical Reports Server (NTRS)
1982-01-01
Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency, including autonomous world model based information systems, learning and hypothesis formation, natural language and other man-machine communication, space manufacturing, teleoperators and robot systems, and computer science and technology.
NASA Astrophysics Data System (ADS)
Kardava, Irakli; Tadyszak, Krzysztof; Gulua, Nana; Jurga, Stefan
2017-02-01
To give artificial intelligence more flexible environmental perception, supporting software modules are needed that can automate the creation of language-specific syntax and perform further analysis for relevant decisions based on semantic functions. In our proposed approach, pairs of formal rules can be created for given sentences (in the case of natural languages) or statements (in the case of special languages) with the help of computer vision, speech recognition, or an editable text conversion system, for further automatic improvement. In other words, we have developed an approach that can significantly improve the automation of the training process of an artificial intelligence, which as a result will give it a higher level of self-development skills, independent of us (the users). Based on our approach, we have developed a software demo version, which includes the algorithm and software code for implementing all of the above-mentioned components (computer vision, speech recognition, and an editable text conversion system). The program can work in a multi-stream mode and simultaneously create a syntax based on information received from several sources.
The neurobiology of syntax: beyond string sets.
Petersson, Karl Magnus; Hagoort, Peter
2012-07-19
The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
The neurobiology of syntax: beyond string sets
Petersson, Karl Magnus; Hagoort, Peter
2012-01-01
The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty. PMID:22688633
Constraints on Statistical Computations at 10 Months of Age: The Use of Phonological Features
ERIC Educational Resources Information Center
Gonzalez-Gomez, Nayeli; Nazzi, Thierry
2015-01-01
Recently, several studies have argued that infants capitalize on the statistical properties of natural languages to acquire the linguistic structure of their native language, but the kinds of constraints which apply to statistical computations remain largely unknown. Here we explored French-learning infants' perceptual preference for…
Philosophy of Language. Course Notes for a Tutorial on Computational Semantics.
ERIC Educational Resources Information Center
Wilks, Yorick
This course was part of a tutorial focusing on the state of computational semantics, i.e., the state of work on natural language within the artificial intelligence (AI) paradigm. The discussion in the course centered on the philosophers Richard Montague and Ludwig Wittgenstein. The course was divided into three sections: (1)…
Moore, G. W.; Hutchins, G. M.; Miller, R. E.
1984-01-01
Computerized indexing and retrieval of medical records is increasingly important; but the use of natural language versus coded languages (SNOP, SNOMED) for this purpose remains controversial. In an effort to develop search strategies for natural language text, the authors examined the anatomic diagnosis reports by computer for 7000 consecutive autopsy subjects spanning a 13-year period at The Johns Hopkins Hospital. There were 923,657 words, 11,642 of them distinct. The authors observed an average of 1052 keystrokes, 28 lines, and 131 words per autopsy report, with an average 4.6 words per line and 7.0 letters per word. The entire text file represented 921 hours of secretarial effort. Words ranged in frequency from 33,959 occurrences of "and" to one occurrence for each of 3398 different words. Searches for rare diseases with unique names or for representative examples of common diseases were most readily performed with the use of computer-printed key word in context (KWIC) books. For uncommon diseases designated by commonly used terms (such as "cystic fibrosis"), needs were best served by a computerized search for logical combinations of key words. In an unbalanced word distribution, each conjunction (logical and) search should be performed in ascending order of word frequency; but each alternation (logical inclusive or) search should be performed in descending order of word frequency. Natural language text searches will assume a larger role in medical records analysis as the labor-intensive procedure of translation into a coded language becomes more costly, compared with the computer-intensive procedure of text searching. PMID:6546837
ERIC Educational Resources Information Center
Collentine, Karina
2009-01-01
Second language acquisition (SLA) researchers strive to understand the language and exchanges that learners generate in synchronous computer-mediated communication (SCMC). Doughty and Long (2003) advocate replacing open-ended SCMC with task-based language teaching (TBLT) design principles. Since most task-based SCMC (TB-SCMC) research addresses an…
The PLATO System and Language Study.
ERIC Educational Resources Information Center
Hart, Robert S., Ed.
1981-01-01
This issue presents an overview of research in computer-based language instruction using the PLATO IV computer system. The following articles are presented: (1) "Language Study and the PLATO system," by R. Hart; (2) "Reflections on the Use of Computers in Second-Language Acquisition," by F. Marty; (3) "Computer-Based…
A Natural Language Interface Concordant with a Knowledge Base.
Han, Yong-Jin; Park, Seong-Bae; Park, Se-Young
2016-01-01
The discordance between expressions interpretable by a natural language interface (NLI) system and those answerable by a knowledge base is a critical problem in the field of NLIs. In order to solve this discordance problem, this paper proposes a method to translate natural language questions into formal queries that can be generated from a graph-based knowledge base. The proposed method considers a subgraph of a knowledge base as a formal query. Thus, all formal queries corresponding to a concept or a predicate in the knowledge base can be generated prior to query time, and all possible natural language expressions corresponding to each formal query can also be collected in advance. A natural language expression has a one-to-one mapping with a formal query. Hence, a natural language question is translated into a formal query by matching the question with the most appropriate natural language expression. If the confidence of this matching is not sufficiently high, the proposed method rejects the question and does not answer it. Multipredicate queries are processed by regarding them as a set of collected expressions. The experimental results show that the proposed method thoroughly handles answerable questions from the knowledge base and rejects unanswerable ones effectively.
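The match-or-reject behavior described above might be sketched as below: each pre-collected expression maps one-to-one to a formal query, a question is answered with the query of its most similar expression, and it is rejected when the best similarity falls under a threshold. The Jaccard similarity, the example expressions, and the threshold are simplifications for illustration, not the paper's method.

def jaccard(a, b):
    """Word-overlap similarity between two strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical pre-collected expressions, each mapped to one formal query.
EXPRESSIONS = {
    "which drugs treat DISEASE": "SELECT ?drug WHERE { ?drug :treats :DISEASE }",
    "what are the symptoms of DISEASE": "SELECT ?s WHERE { :DISEASE :hasSymptom ?s }",
}

def translate(question, threshold=0.5):
    best_expr, best_score = max(
        ((expr, jaccard(question, expr)) for expr in EXPRESSIONS),
        key=lambda pair: pair[1],
    )
    if best_score < threshold:
        return None  # reject: matching confidence too low to answer
    return EXPRESSIONS[best_expr]

print(translate("which drugs treat DISEASE"))
print(translate("who directed this film"))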
Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES
NASA Technical Reports Server (NTRS)
Hoerger, J.
1984-01-01
Users of ADABAS, a relational-like data base management system, and its data base programming language NATURAL are acquiring microcomputers in hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" microcomputer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. To avoid this potential problem, these microcomputers must be integrated with the centralized DBMS. An easy-to-use and flexible means of transferring logical data base files between the central data base machine and the microcomputers must be provided. Some of the problems encountered in an effort to accomplish this integration, and possible solutions, are discussed.
TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY
Somogyi, Endre; Hagar, Amit; Glazier, James A.
2017-01-01
Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models. PMID:29282379
TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY.
Somogyi, Endre; Hagar, Amit; Glazier, James A
2016-12-01
Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models.
Generating and Executing Complex Natural Language Queries across Linked Data.
Hamon, Thierry; Mougin, Fleur; Grabar, Natalia
2015-01-01
With the recent and intensive research in the biomedical area, the knowledge accumulated is disseminated through various knowledge bases. Links between these knowledge bases are needed in order to use them jointly. Linked Data, the SPARQL language, and natural language question-answering interfaces provide interesting solutions for querying such knowledge bases. We propose a method for translating natural language questions into SPARQL queries. We use Natural Language Processing tools, semantic resources, and the RDF triples description. The method is designed on 50 questions over 3 biomedical knowledge bases and evaluated on 27 questions. It achieves 0.78 F-measure on the test set. The method for translating natural language questions into SPARQL queries is implemented as a Perl module available at http://search.cpan.org/~thhamon/RDF-NLP-SPARQLQuery.
Artificial Intelligence and CALL.
ERIC Educational Resources Information Center
Underwood, John H.
The potential application of artificial intelligence (AI) to computer-assisted language learning (CALL) is explored. Two areas of AI that hold particular interest to those who deal with language meaning--knowledge representation and expert systems, and natural-language processing--are described and examples of each are presented. AI contribution…
A Chinese Interactive Feedback System for a Virtual Campus
ERIC Educational Resources Information Center
Chen, Jui-Fa; Lin, Wei-Chuan; Jian, Chih-Yu; Hung, Ching-Chung
2008-01-01
Considering the popularity of the Internet, an automatic interactive feedback system for Elearning websites is becoming increasingly desirable. However, computers still have problems understanding natural languages, especially the Chinese language, firstly because the Chinese language has no space to segment lexical entries (its segmentation…
What's So Hard about Understanding Language?
ERIC Educational Resources Information Center
Read, Walter; And Others
A discussion of the application of artificial intelligence to natural language processing looks at several problems in language comprehension, involving semantic ambiguity, anaphoric reference, and metonymy. Examples of these problems are cited, and the importance of the computational approach in analyzing them is explained. The approach applies…
A comparative study of programming languages for next-generation astrodynamics systems
NASA Astrophysics Data System (ADS)
Eichhorn, Helge; Cano, Juan Luis; McLean, Frazer; Anderl, Reiner
2018-03-01
Due to the computationally intensive nature of astrodynamics tasks, astrodynamicists have relied on compiled programming languages such as Fortran for the development of astrodynamics software. Interpreted languages such as Python, on the other hand, offer higher flexibility and development speed thereby increasing the productivity of the programmer. While interpreted languages are generally slower than compiled languages, recent developments such as just-in-time (JIT) compilers or transpilers have been able to close this speed gap significantly. Another important factor for the usefulness of a programming language is its wider ecosystem which consists of the available open-source packages and development tools such as integrated development environments or debuggers. This study compares three compiled languages and three interpreted languages, which were selected based on their popularity within the scientific programming community and technical merit. The three compiled candidate languages are Fortran, C++, and Java. Python, Matlab, and Julia were selected as the interpreted candidate languages. All six languages are assessed and compared to each other based on their features, performance, and ease-of-use through the implementation of idiomatic solutions to classical astrodynamics problems. We show that compiled languages still provide the best performance for astrodynamics applications, but JIT-compiled dynamic languages have reached a competitive level of speed and offer an attractive compromise between numerical performance and programmer productivity.
Efficacy of Computer Games on Language Learning
ERIC Educational Resources Information Center
Klimova, Blanka; Kacet, Jaroslav
2017-01-01
Information and communication technologies (ICT) have become an inseparable part of people's lives. For children the use of ICT is as natural as breathing and therefore they find the use of ICT in school education as normal as the use of textbooks. The purpose of this review study is to explore the efficacy of computer games on language learning…
ERIC Educational Resources Information Center
Zajenkowski, Marcin; Styla, Rafal; Szymanik, Jakub
2011-01-01
We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only…
ERIC Educational Resources Information Center
Chen, Yu-Hua; Bruncak, Radovan
2015-01-01
With the advances in technology, wordlists retrieved from computer corpora have become increasingly popular in recent years. The lexical items in those wordlists are usually selected, according to a set of robust frequency and dispersion criteria, from large corpora of authentic and naturally occurring language. Corpus wordlists are of great value…
Somogyi, Endre; Glazier, James A.
2017-01-01
Biological cells are the prototypical example of active matter. Cells sense and respond to mechanical, chemical and electrical environmental stimuli with a range of behaviors, including dynamic changes in morphology and mechanical properties, chemical uptake and secretion, cell differentiation, proliferation, death, and migration. Modeling and simulation of such dynamic phenomena poses a number of computational challenges. A modeling language describing cellular dynamics must naturally represent complex intra and extra-cellular spatial structures and coupled mechanical, chemical and electrical processes. Domain experts will find a modeling language most useful when it is based on concepts, terms and principles native to the problem domain. A compiler must then be able to generate an executable model from this physically motivated description. Finally, an executable model must efficiently calculate the time evolution of such dynamic and inhomogeneous phenomena. We present a spatial hybrid systems modeling language, compiler and mesh-free Lagrangian based simulation engine which will enable domain experts to define models using natural, biologically motivated constructs and to simulate time evolution of coupled cellular, mechanical and chemical processes acting on a time varying number of cells and their environment. PMID:29303160
A Model Based Framework for Semantic Interpretation of Architectural Construction Drawings
ERIC Educational Resources Information Center
Babalola, Olubi Oluyomi
2011-01-01
The study addresses the automated translation of architectural drawings from 2D Computer Aided Drafting (CAD) data into a Building Information Model (BIM), with emphasis on the nature, possible role, and limitations of a drafting language Knowledge Representation (KR) on the problem and process. The central idea is that CAD to BIM translation is a…
Relaxation of selection, niche construction, and the Baldwin effect in language evolution.
Yamauchi, Hajime; Hashimoto, Takashi
2010-01-01
Deacon has suggested that one of the key factors of language evolution is not characterized by an increase in genetic contribution, often known as the Baldwin effect, but rather by a decrease. This process effectively increases linguistic learning capability by organizing a novel synergy of multiple lower-order functions previously irrelevant to the process of language acquisition. Deacon posits that this transition is not caused by natural selection. Rather, it is due to the relaxation of natural selection. While there are some cases in which relaxation caused by some external factors indeed induces the transition, we do not know what kind of relaxation has worked in language evolution. In this article, a genetic-algorithm-based computer simulation is used to investigate how the niche-constructing aspect of linguistic behavior may trigger the degradation of genetic predisposition related to language learning. The results show that agents initially increase their genetic predisposition for language learning—the Baldwin effect. They create a highly uniform sociolinguistic environment—a linguistic niche construction. This means that later generations constantly receive very similar inputs from adult agents, and subsequently the selective pressure to retain the genetic predisposition is relaxed.
ERIC Educational Resources Information Center
Ziegler, Nicole; Meurers, Detmar; Rebuschat, Patrick; Ruiz, Simón; Moreno-Vega, José L.; Chinkina, Maria; Li, Wenjing; Grey, Sarah
2017-01-01
Despite the promise of research conducted at the intersection of computer-assisted language learning (CALL), natural language processing, and second language acquisition, few studies have explored the potential benefits of using intelligent CALL systems to deepen our understanding of the process and products of second language (L2) learning. The…
Policy Process Editor for P3BM Software
NASA Technical Reports Server (NTRS)
James, Mark; Chang, Hsin-Ping; Chow, Edward T.; Crichton, Gerald A.
2010-01-01
A computer program enables generation, in the form of graphical representations of process flows with embedded natural-language policy statements, input to a suite of policy-, process-, and performance-based management (P3BM) software. This program (1) serves as an interface between users and the Hunter software, which translates the input into machine-readable form; and (2) enables users to initialize and monitor the policy-implementation process. This program provides an intuitive graphical interface for incorporating natural-language policy statements into business-process flow diagrams. Thus, the program enables users who dictate policies to intuitively embed their intended process flows as they state the policies, reducing the likelihood of errors and reducing the time between declaration and execution of policy.
Can, Doğan; Marín, Rebeca A.; Georgiou, Panayiotis G.; Imel, Zac E.; Atkins, David C.; Narayanan, Shrikanth S.
2016-01-01
The dissemination and evaluation of evidence based behavioral treatments for substance abuse problems rely on the evaluation of counselor interventions. In Motivational Interviewing (MI), a treatment that directs the therapist to utilize a particular linguistic style, proficiency is assessed via behavioral coding - a time consuming, non-technological approach. Natural language processing techniques have the potential to scale up the evaluation of behavioral treatments like MI. We present a novel computational approach to assessing components of MI, focusing on one specific counselor behavior – reflections – that are believed to be a critical MI ingredient. Using 57 sessions from 3 MI clinical trials, we automatically detected counselor reflections in a Maximum Entropy Markov Modeling framework using the raw linguistic data derived from session transcripts. We achieved 93% recall, 90% specificity, and 73% precision. Results provide insight into the linguistic information used by coders to make ratings and demonstrate the feasibility of new computational approaches to scaling up the evaluation of behavioral treatments. PMID:26784286
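As a hedged illustration of the general approach rather than the authors' pipeline, the sketch below trains a plain maximum-entropy (logistic regression) classifier over bag-of-words features to flag counselor utterances as reflections and reports recall and precision. The toy utterances, labels, and the use of scikit-learn are assumptions; the published system uses a Maximum Entropy Markov Model that also conditions on sequence context.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, precision_score

# Invented counselor utterances; 1 = reflection, 0 = other behavior.
utterances = [
    "It sounds like you are worried about your drinking.",
    "So you feel torn about cutting back.",
    "How many drinks do you have on a typical night?",
    "Tell me about your week.",
]
labels = [1, 1, 0, 0]

# Unigram and bigram counts as maximum-entropy features.
features = CountVectorizer(ngram_range=(1, 2)).fit_transform(utterances)
clf = LogisticRegression(max_iter=1000).fit(features, labels)

predicted = clf.predict(features)
print("recall:", recall_score(labels, predicted),
      "precision:", precision_score(labels, predicted))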
Research at Yale in Natural Language Processing. Research Report #84.
ERIC Educational Resources Information Center
Schank, Roger C.
This report summarizes the capabilities of five computer programs at Yale that do automatic natural language processing as of the end of 1976. For each program an introduction to its overall intent is given, followed by the input/output, a short discussion of the research underlying the program, and a prognosis for future development. The programs…
AutoTutor and Family: A Review of 17 Years of Natural Language Tutoring
ERIC Educational Resources Information Center
Nye, Benjamin D.; Graesser, Arthur C.; Hu, Xiangen
2014-01-01
AutoTutor is a natural language tutoring system that has produced learning gains across multiple domains (e.g., computer literacy, physics, critical thinking). In this paper, we review the development, key research findings, and systems that have evolved from AutoTutor. First, the rationale for developing AutoTutor is outlined and the advantages…
Weng, Chunhua; Payne, Philip R O; Velez, Mark; Johnson, Stephen B; Bakken, Suzanne
2014-01-01
The successful adoption by clinicians of evidence-based clinical practice guidelines (CPGs) contained in clinical information systems requires efficient translation of free-text guidelines into computable formats. Natural language processing (NLP) has the potential to improve the efficiency of such translation. However, it is laborious to develop NLP to structure free-text CPGs using existing formal knowledge representations (KR). In response to this challenge, this vision paper discusses the value and feasibility of supporting symbiosis in text-based knowledge acquisition (KA) and KR. We compare two ontologies: (1) an ontology manually created by domain experts for CPG eligibility criteria and (2) an upper-level ontology derived from a semantic pattern-based approach for automatic KA from CPG eligibility criteria text. Then we discuss the strengths and limitations of interweaving KA and NLP for KR purposes and important considerations for achieving the symbiosis of KR and NLP for structuring CPGs to achieve evidence-based clinical practice.
A Wittgenstein Approach to the Learning of OO-modeling
NASA Astrophysics Data System (ADS)
Holmboe, Christian
2004-12-01
The paper uses Ludwig Wittgenstein's theories about the relationship between thought, language, and objects of the world to explore the assumption that OO-thinking resembles natural thinking. The paper imports insights from research in linguistic philosophy into computer science education research. I show how UML class diagrams (i.e., an artificial context-free language) correspond to the logically perfect languages described in Tractatus Logico-Philosophicus. In Philosophical Investigations Wittgenstein disputes his previous theories by showing that natural languages are not constructed by rules of mathematical logic, but are language games where the meaning of a word is constructed through its use in social contexts. Contradicting the claim that OO-thinking is easy to learn because of its similarity to natural thinking, I claim that OO-thinking is difficult to learn because of its differences from natural thinking. The nature of these differences is not currently well known or appreciated. I suggest how explicit attention to the nature and implications of different language games may improve the teaching and learning of OO-modeling as well as programming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azevedo, S.G.; Fitch, J.P.
1987-10-21
Conventional software interfaces that use imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal-processing software (SIG). As an alternative, "functional language" interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal LISP Environment (ISLE) is an example of an interpreted functional language interface based on Common LISP. Advantages of ISLE include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence (AI) software. 10 refs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azevedo, S.G.; Fitch, J.P.
1987-05-01
Conventional software interfaces which utilize imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal processing software (SIG). Existing "functional language" interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal Lisp Environment (ISLE) will be discussed as an example of an interpreted functional language interface based on Common LISP. Additional benefits include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence software.
NASA Astrophysics Data System (ADS)
Lu, Qian
2017-07-01
Exploring language universals is one of the major goals of linguistic research, which is largely devoted to answering the "Platonic questions" in linguistics: what language knowledge is, and how it is acquired and used. However, if guided solely by linguistic intuition, it is very difficult for syntactic studies to answer these questions or to achieve abstractions in the scientific sense. This suggests that linguistic analyses based on probability theory may provide effective ways to investigate language universals in terms of biological motivations or cognitive psychological mechanisms. With the view that "language is a human-driven system", Liu, Xu & Liang's review [1] pointed out that dependency distance minimization (DDM), which has been corroborated by big-data corpus analysis, may be a language universal shaped in language evolution, one with a profound effect on syntactic patterns.
A Proposal on the Validation Model of Equivalence between PBLT and CBLT
ERIC Educational Resources Information Center
Chen, Huilin
2014-01-01
The validity of the computer-based language test is possibly affected by three factors: computer familiarity, audio-visual cognitive competence, and other discrepancies in construct. Therefore, validating the equivalence between the paper-and-pencil language test and the computer-based language test is a key step in the procedure of designing a…
Exploiting salient semantic analysis for information retrieval
NASA Astrophysics Data System (ADS)
Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui
2016-11-01
Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representation for words or documents. However, its feasibility and effectiveness in information retrieval is mostly unknown. In this paper, we study how to efficiently use SSA to improve the information retrieval performance, and propose a SSA-based retrieval method under the language model framework. First, SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations can be used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard text retrieval conference (TREC) collections. Experiment results on standard TREC collections show the proposed models consistently outperform the existing Wikipedia-based retrieval methods.
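One simple way to picture combining bag-of-words and concept-level (SSA-style) representations is a linear interpolation of the two similarity scores. The sketch below is an assumption-laden illustration, not the paper's estimator: the toy vectors and the interpolation weight are invented, and a real system would estimate query and document language models rather than plain cosines.

import numpy as np

def cosine(a, b):
    # Cosine similarity with a small constant to avoid division by zero.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def retrieval_score(query_bow, doc_bow, query_concepts, doc_concepts, lam=0.6):
    # Interpolate word-level and concept-level similarity.
    return lam * cosine(query_bow, doc_bow) + (1 - lam) * cosine(query_concepts, doc_concepts)

# Toy example: 5-dim bag-of-words vectors and 3-dim concept vectors.
q_bow, d_bow = np.array([1, 0, 2, 0, 1.0]), np.array([0, 1, 2, 0, 1.0])
q_con, d_con = np.array([0.2, 0.7, 0.1]), np.array([0.3, 0.6, 0.1])
print(retrieval_score(q_bow, d_bow, q_con, d_con))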
NASA Astrophysics Data System (ADS)
Knoeferle, Pia
2016-03-01
In his review article [19], Arbib outlines an ambitious research agenda: to accommodate within a unified framework the evolution, the development, and the processing of language in natural settings (implicating other systems such as vision). He does so with neuro-computationally explicit modeling in mind [1,2] and inspired by research on the mirror neuron system in primates. Similar research questions have received substantial attention also among other scientists [3,4,12].
Lee, Shu-Ping; Su, Hui-Kai; Lee, Shin-Da
2012-06-01
This study investigated the effects of immediate feedback on computer-based foreign language listening comprehension tests and on intrapersonal test-associated anxiety in 72 English major college students at a Taiwanese University. Foreign language listening comprehension of computer-based tests designed by MOODLE, a dynamic e-learning environment, with or without immediate feedback together with the state-trait anxiety inventory (STAI) were tested and repeated after one week. The analysis indicated that immediate feedback during testing caused significantly higher anxiety and resulted in significantly higher listening scores than in the control group, which had no feedback. However, repeated feedback did not affect the test anxiety and listening scores. Computer-based immediate feedback did not lower debilitating effects of anxiety but enhanced students' intrapersonal eustress-like anxiety and probably improved their attention during listening tests. Computer-based tests with immediate feedback might help foreign language learners to increase attention in foreign language listening comprehension.
Automated Detection of Events of Scientific Interest
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
A report presents a slightly different perspective of the subject matter of Fusing Symbolic and Numerical Diagnostic Computations (NPO-42512), which appears elsewhere in this issue of NASA Tech Briefs. Briefly, the subject matter is the X-2000 Anomaly Detection Language, which is a developmental computing language for fusing two diagnostic computer programs (one implementing a numerical analysis method, the other a symbolic analysis method) into a unified event-based decision analysis software system for real-time detection of events. In the case of the cited companion NASA Tech Briefs article, the contemplated events that one seeks to detect would be primarily failures or other changes that could adversely affect the safety or success of a spacecraft mission. In the case of the instant report, the events to be detected could also include natural phenomena that could be of scientific interest. Hence, the use of the X-2000 Anomaly Detection Language could contribute to a capability for automated, coordinated use of multiple sensors and sensor-output-data-processing hardware and software to effect opportunistic collection and analysis of scientific data.
Evolution, brain, and the nature of language.
Berwick, Robert C; Friederici, Angela D; Chomsky, Noam; Bolhuis, Johan J
2013-02-01
Language serves as a cornerstone for human cognition, yet much about its evolution remains puzzling. Recent research on this question parallels Darwin's attempt to explain both the unity of all species and their diversity. What has emerged from this research is that the unified nature of human language arises from a shared, species-specific computational ability. This ability has identifiable correlates in the brain and has remained fixed since the origin of language approximately 100 thousand years ago. Although songbirds share with humans a vocal imitation learning ability, with a similar underlying neural organization, language is uniquely human. Copyright © 2012 Elsevier Ltd. All rights reserved.
Thai Language Sentence Similarity Computation Based on Syntactic Structure and Semantic Vector
NASA Astrophysics Data System (ADS)
Wang, Hongbin; Feng, Yinhan; Cheng, Liang
2018-03-01
Sentence similarity computation plays an increasingly important role in text mining, Web page retrieval, machine translation, speech recognition and question answering systems. Thai is a resource-scarce language; unlike Chinese, it lacks resources such as HowNet and CiLin, so research on Thai sentence similarity faces particular challenges. To address this problem, this paper proposes a novel method for computing the similarity of Thai sentences based on syntactic structure and semantic vectors. The method first uses Part-of-Speech (POS) dependencies to calculate the syntactic structure similarity of two sentences, then uses word vectors to calculate their semantic similarity, and finally combines the two scores into an overall similarity. The proposed method considers not only semantics but also sentence syntactic structure. Experimental results show that the method is feasible for Thai sentence similarity computation.
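The following sketch illustrates the general two-part recipe (syntactic structure plus semantic vectors) rather than the paper's exact formulas: a POS-sequence similarity stands in for the dependency-based structural score, averaged word vectors stand in for the semantic score, and the two are combined with a weight. The tags, vectors, and weight are assumptions; a real system would need a Thai tokenizer, POS tagger, and trained embeddings.

import numpy as np
from difflib import SequenceMatcher

def pos_similarity(pos_a, pos_b):
    # Structural similarity as the ratio of matching POS subsequences.
    return SequenceMatcher(None, pos_a, pos_b).ratio()

def semantic_similarity(vecs_a, vecs_b):
    # Cosine similarity of averaged word vectors.
    a, b = np.mean(vecs_a, axis=0), np.mean(vecs_b, axis=0)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def sentence_similarity(pos_a, pos_b, vecs_a, vecs_b, alpha=0.5):
    # Weighted combination of structural and semantic scores.
    return alpha * pos_similarity(pos_a, pos_b) + (1 - alpha) * semantic_similarity(vecs_a, vecs_b)

# Toy example with made-up POS tags and 4-dimensional embeddings.
pos1, pos2 = ["NOUN", "VERB", "NOUN"], ["NOUN", "VERB", "ADJ", "NOUN"]
v1, v2 = np.random.rand(3, 4), np.random.rand(4, 4)
print(sentence_similarity(pos1, pos2, v1, v2))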
ERIC Educational Resources Information Center
García Laborda, Jesús; López Santiago, Mercedes; Otero de Juan, Nuria; Álvarez Álvarez, Alfredo
2014-01-01
Current evolutions of language testing have led to integrating computers in FSP assessments both in oral and written communicative tasks. This paper deals with two main issues: learners' expectations about the types of questions in FSP computer based assessments and the relation with their own experience. This paper describes the experience of 23…
Storyboard method of end-user programming with natural language configuration
Bouchard, Ann M; Osbourn, Gordon C
2013-11-19
A technique for end-user programming includes populating a template with graphically illustrated actions and then invoking a command to generate a screen element based on the template. The screen element is rendered within a computing environment and provides a mechanism for triggering execution of a sequence of user actions. The sequence of user actions is based at least in part on the graphically illustrated actions populated into the template.
DiSalvo, Betsy
2014-01-01
To determine appropriate computer science curricula, educators sought to better understand the different affordances of teaching with a visual programming language (Alice) or a text-based language (Jython). Although students often preferred one language, that language wasn't necessarily the one from which they learned the most.
A Randomized Field Trial of the Fast ForWord Language Computer-Based Training Program
ERIC Educational Resources Information Center
Borman, Geoffrey D.; Benson, James G.; Overman, Laura
2009-01-01
This article describes an independent assessment of the Fast ForWord Language computer-based training program developed by Scientific Learning Corporation. Previous laboratory research involving children with language-based learning impairments showed strong effects on their abilities to recognize brief and fast sequences of nonspeech and speech…
Medical Data Management in Time-Sharing: Findings of the DIRAC Project.
ERIC Educational Resources Information Center
Ludwig, Herbert; Vallee, Jacques
In terms of examples drawn from clinical and research data files, one of the objectives of this study is to illustrate several factors that have combined to delay the implementation of medical data bases. A primary factor has been inherent in the design of computer software. The languages currently on the market are procedural in nature: they…
Parsing English. Course Notes for a Tutorial on Computational Semantics, March 17-22, 1975.
ERIC Educational Resources Information Center
Wilks, Yorick
The course in parsing English is essentially a survey and comparison of several of the principal systems used for understanding natural language. The basic procedure of parsing is described. The discussion of the principal systems is based on the idea that "meaning is procedures," that is, that the procedures of application give a parsed…
Automated Computerized Analysis of Speech in Psychiatric Disorders
Cohen, Alex S.; Elvevåg, Brita
2014-01-01
Purpose of Review: Disturbances in communication are a hallmark of severe mental illnesses. Recent technological advances have paved the way for objectifying communication using automated computerized linguistic and acoustic analysis. We review recent studies applying various computer-based assessments to the natural language produced by adult patients with severe mental illness. Recent Findings: Automated computerized methods afford tools with which it is possible to objectively evaluate patients in a reliable, valid and efficient manner that complements human ratings. Crucially, these measures correlate with important clinical measures. The clinical relevance of these novel metrics has been demonstrated by showing their relationship to functional outcome measures, their in vivo link to classic ‘language’ regions in the brain, and, in the case of linguistic analysis, their relationship to candidate genes for severe mental illness. Summary: Computer-based assessments of natural language afford a framework with which to measure communication disturbances in adults with severe mental illness (SMI). Emerging evidence suggests that they can be reliable and valid, and overcome many practical limitations of more traditional assessment methods. The advancement of these technologies offers unprecedented potential for measuring and understanding some of the most crippling symptoms of some of the most debilitating illnesses known to humankind. PMID:24613984
Cadeddu, Andrea; Wylie, Elizabeth K; Jurczak, Janusz; Wampler-Doty, Matthew; Grzybowski, Bartosz A
2014-07-28
Methods of computational linguistics are used to demonstrate that a natural language such as English and organic chemistry have the same structure in terms of the frequency of, respectively, text fragments and molecular fragments. This quantitative correspondence suggests that it is possible to extend the methods of computational corpus linguistics to the analysis of organic molecules. It is shown that within organic molecules bonds that have highest information content are the ones that 1) define repeat/symmetry subunits and 2) in asymmetric molecules, define the loci of potential retrosynthetic disconnections. Linguistics-based analysis appears well-suited to the analysis of complex structural and reactivity patterns within organic molecules. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
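A rough way to see the shared statistics is to treat the information content of a fragment, whether a text n-gram or a molecular substructure, as -log2 of its relative frequency, so rare fragments carry more bits. The counts in the sketch below are invented purely for illustration and are not taken from the paper.

import math
from collections import Counter

# Invented fragment counts standing in for a corpus of molecules or text.
fragment_counts = Counter({"C-C": 9000, "C=O": 1500, "C-N": 2500, "N=N": 40})
total = sum(fragment_counts.values())

for fragment, count in fragment_counts.items():
    # Information content in bits: rarer fragments are more informative.
    info = -math.log2(count / total)
    print(f"{fragment}: {info:.2f} bits")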
A computational language approach to modeling prose recall in schizophrenia
Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W.; Elvevåg, Brita
2014-01-01
Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall. PMID:24709122
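To illustrate the two feature families described, the sketch below computes a bigram-overlap score between a passage and a recall, and a semantic score in a truncated-SVD (LSA-style) space built over TF-IDF vectors. The texts, the tiny background corpus, the dimensionality, and the use of scikit-learn are assumptions, not the study's trained models.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

passage = "Anna went to the market and bought fresh bread and apples"
recall = "Anna bought bread and some apples at the market"
background = [passage, recall,
              "The market sells bread fruit and vegetables",
              "He went home and read a book"]

def ngram_overlap(a, b, n=2):
    # Fraction of the passage's n-grams that also appear in the recall.
    grams = lambda s: {tuple(s.split()[i:i + n]) for i in range(len(s.split()) - n + 1)}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / max(len(ga), 1)

tfidf = TfidfVectorizer().fit(background)
lsa = TruncatedSVD(n_components=2, random_state=0).fit(tfidf.transform(background))
vecs = lsa.transform(tfidf.transform([passage, recall]))

print("bigram overlap:", ngram_overlap(passage, recall))
print("LSA similarity:", float(cosine_similarity(vecs[:1], vecs[1:2])[0, 0]))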
Studying Language Learning Opportunities Afforded by a Collaborative CALL Task
ERIC Educational Resources Information Center
Leahy, Christine
2016-01-01
This research study explores the learning potential of a computer-assisted language learning (CALL) activity. Research suggests that the dual emphasis on content development and language accuracy, as well as the complexity of L2 production in natural settings, can potentially create cognitive overload. This study poses the question whether, and…
Complexity in language acquisition.
Clark, Alexander; Lappin, Shalom
2013-01-01
Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information-theoretic problems, in particular the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form. Copyright © 2013 Cognitive Science Society, Inc.
Semantic computing and language knowledge bases
NASA Astrophysics Data System (ADS)
Wang, Lei; Wang, Houfeng; Yu, Shiwen
2017-09-01
With the proposal of the next-generation Web, the Semantic Web, semantic computing has been drawing increasing attention in both academia and industry. A great deal of research has been conducted on the theory and methodology of the subject, and potential applications have been investigated and proposed in many fields. The progress of semantic computing made so far cannot be detached from its supporting pivot, language resources such as language knowledge bases. This paper proposes three perspectives on semantic computing from a macro view and describes the current state of language knowledge base construction and of the related research and applications carried out on the basis of these resources, via a case study in the Institute of Computational Linguistics at Peking University.
Our health language and data collections.
Hovenga, Evelyn J S; Grain, Heather
2013-01-01
All communication within the health industry is dependent upon the use of our health language consisting of a very extensive and complex vocabulary. Converting this language into computable formats is necessary in a digital environment with a strong reliance on data, information and knowledge sharing. This chapter describes our health language, what terminologies and ontologies are, their use and relationships with natural language, indexing, data standards, data collections and the need for data governance.
ERIC Educational Resources Information Center
Prihatin, Pius N.
2012-01-01
Computer technology has been popular for teaching English as a foreign language in non-English speaking countries. This case study explored the way language instructors designed and implemented computer-based instruction so that students are engaged in English language learning. This study explored the beliefs, practices and perceptions of…
Toward using alpha and theta brain waves to quantify programmer expertise.
Crk, Igor; Kluthe, Timothy
2014-01-01
Empirical studies of programming language learnability and usability have thus far depended on indirect measures of human cognitive performance, attempting to capture what is at its essence a purely cognitive exercise through various indicators of comprehension, such as the correctness of coding tasks or the time spent working out the meaning of code and producing acceptable solutions. Understanding program comprehension is essential to understanding the inherent complexity of programming languages, and ultimately, having a measure of mental effort based on direct observation of the brain at work will illuminate the nature of the work of programming. We provide evidence of direct observation of the cognitive effort associated with programming tasks, through a carefully constructed empirical study using a cross-section of undergraduate computer science students and an inexpensive, off-the-shelf brain-computer interface device. This study presents a link between expertise and programming language comprehension, draws conclusions about the observed indicators of cognitive effort using recent cognitive theories, and proposes directions for future work that is now possible.
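A common feature in this line of work is spectral band power; the sketch below, built on an entirely synthetic single-channel signal, estimates theta (4-8 Hz) and alpha (8-12 Hz) power from a periodogram. The sampling rate, band edges, and signal are illustrative assumptions rather than details from the study.

import numpy as np

fs = 256  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
# Synthetic signal: a 10 Hz (alpha) and a 6 Hz (theta) component plus noise.
signal = (np.sin(2 * np.pi * 10 * t)
          + 0.5 * np.sin(2 * np.pi * 6 * t)
          + 0.2 * np.random.randn(t.size))

freqs = np.fft.rfftfreq(signal.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size

def band_power(lo, hi):
    # Sum periodogram power over the requested frequency band.
    mask = (freqs >= lo) & (freqs < hi)
    return float(power[mask].sum())

print("theta (4-8 Hz):", band_power(4, 8))
print("alpha (8-12 Hz):", band_power(8, 12))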
Can mathematics explain the evolution of human language?
Witzany, Guenther
2011-09-01
Investigation into the sequence structure of the genetic code by means of an informatic approach is a real success story. The features of human language are also the object of investigation within the realm of formal language theories. They focus on the common rules of a universal grammar that lies behind all languages and determine generation of syntactic structures. This universal grammar is a depiction of material reality, i.e., the hidden logical order of things and its relations determined by natural laws. Therefore mathematics is viewed not only as an appropriate tool to investigate human language and genetic code structures through computer science-based formal language theory but is itself a depiction of material reality. This confusion between language as a scientific tool to describe observations/experiences within cognitive constructed models and formal language as a direct depiction of material reality occurs not only in current approaches but was the central focus of the philosophy of science debate in the twentieth century, with rather unexpected results. This article recalls these results and their implications for more recent mathematical approaches that also attempt to explain the evolution of human language.
ERIC Educational Resources Information Center
Rojano, Teresa; García-Campos, Montserrat
2017-01-01
This article reports the outcomes of a study that seeks to investigate the role of feedback, by way of an intelligent support system in natural language, in parametrized modelling activities carried out by a group of tertiary education students. With such a system, it is possible to simultaneously display on a computer screen a dialogue window and…
Students' Motivation toward Computer-Based Language Learning
ERIC Educational Resources Information Center
Genc, Gulten; Aydin, Selami
2011-01-01
The present article examined some factors affecting the motivation level of the preparatory school students in using a web-based computer-assisted language-learning course. The sample group of the study consisted of 126 English-as-a-foreign-language learners at a preparatory school of a state university. After performing statistical analyses…
Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H
1999-01-01
GALEN has developed a new generation of terminology tools based on a language-independent concept reference model that uses a compositional formalism, allowing computer processing and multiple reuse. During the 4th Framework Programme project Galen-In-Use, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures (CCAM) in France. On the one hand, we contributed to a language-independent knowledge repository for multicultural Europe. On the other hand, we supported the traditional, highly labour-intensive process of creating a new medical coding system with artificial intelligence tools that use a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW to process French professional medical language rubrics, produced by the national colleges of surgeons, into intermediate dissections and then into the Grail reference ontology model representation. From this language-independent concept model representation we generate controlled French natural language to support the finalization of the linguistic labels in relation to the meanings of the conceptual system structure; in addition, the third-generation classification manager proves very powerful for retrieving the initial professional rubrics with different categories of concepts within a semantic network.
Incorporating advanced language models into the P300 speller using particle filtering
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.
2015-08-01
Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event related potentials in a subject’s electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
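The sketch below shows the shape of sequential importance resampling in this setting, not the published speller's implementation: each particle carries a hypothesised character sequence, weights combine a toy bigram language prior with a simulated classifier likelihood, and resampling concentrates particles on probable hypotheses. The alphabet, probabilities, and evidence scores are all invented for illustration.

import numpy as np

alphabet = ["a", "b", "c"]
# Toy bigram language model P(next | previous); each row sums to 1.
bigram = {("a", "a"): 0.1, ("a", "b"): 0.6, ("a", "c"): 0.3,
          ("b", "a"): 0.5, ("b", "b"): 0.1, ("b", "c"): 0.4,
          ("c", "a"): 0.7, ("c", "b"): 0.2, ("c", "c"): 0.1}

def likelihood(char, evidence):
    # Stand-in for P(EEG evidence | intended character).
    return evidence.get(char, 1e-3)

rng = np.random.default_rng(0)
n_particles = 200
particles = [["a"] for _ in range(n_particles)]

# One observation step: simulated classifier scores for the next character.
evidence = {"a": 0.2, "b": 0.7, "c": 0.1}

weights = np.empty(n_particles)
for i, p in enumerate(particles):
    # Propose the next character from the language prior, weight by the evidence.
    nxt = rng.choice(alphabet, p=[bigram[(p[-1], c)] for c in alphabet])
    p.append(nxt)
    weights[i] = likelihood(nxt, evidence)

weights /= weights.sum()
# Resample particles in proportion to their weights (importance resampling).
idx = rng.choice(n_particles, size=n_particles, p=weights)
particles = [list(particles[i]) for i in idx]

# Most common hypothesis for the latest character.
last_chars = [p[-1] for p in particles]
print(max(set(last_chars), key=last_chars.count))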
The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters.
Rempel, David; Camilleri, Matt J; Lee, David L
2015-10-01
The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input.
Flexible language constructs for large parallel programs
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Schnabel, Robert
1993-01-01
The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and discussion of some of the critical implementation details is given.
Testing of a Natural Language Retrieval System for a Full Text Knowledge Base.
ERIC Educational Resources Information Center
Bernstein, Lionel M.; Williamson, Robert E.
1984-01-01
The Hepatitis Knowledge Base (text of prototype information system) was used for modifying and testing "A Navigator of Natural Language Organized (Textual) Data" (ANNOD), a retrieval system which combines probabilistic, linguistic, and empirical means to rank individual paragraphs of full text for similarity to natural language queries…
Computer-Assisted Search Of Large Textual Data Bases
NASA Technical Reports Server (NTRS)
Driscoll, James R.
1995-01-01
"QA" denotes high-speed computer system for searching diverse collections of documents including (but not limited to) technical reference manuals, legal documents, medical documents, news releases, and patents. Incorporates previously available and emerging information-retrieval technology to help user intelligently and rapidly locate information found in large textual data bases. Technology includes provision for inquiries in natural language; statistical ranking of retrieved information; artificial-intelligence implementation of semantics, in which "surface level" knowledge found in text used to improve ranking of retrieved information; and relevance feedback, in which user's judgements of relevance of some retrieved documents used automatically to modify search for further information.
ERIC Educational Resources Information Center
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
2015-01-01
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…
ERIC Educational Resources Information Center
Pulker, Hélène; Vialleton, Elodie
2015-01-01
Much research has been done on blended learning and the design of tasks most appropriate for online environments and computer-mediated communication. Increasingly, language teachers and Second Language Acquisition (SLA) practitioners recognise the different nature of communications in online settings and in face-to-face settings; teachers do not…
ERIC Educational Resources Information Center
Pareja-Lora, Antonio; Arús-Hita, Jorge; Read, Timothy; Rodríguez-Arancón, Pilar; Calle-Martínez, Cristina; Pomposo, Lourdes; Martín-Monje, Elena; Bárcena, Elena
2013-01-01
In this short paper, we present some initial work on Mobile Assisted Language Learning (MALL) undertaken by the ATLAS research group. ATLAS embraced this multidisciplinary field cutting across Mobile Learning and Computer Assisted Language Learning (CALL) as a natural step in their quest to find learning formulas for professional English that…
Computer-Based English Language Testing in China: Present and Future
ERIC Educational Resources Information Center
Yu, Guoxing; Zhang, Jing
2017-01-01
In this special issue on high-stakes English language testing in China, the two articles on computer-based testing (Jin & Yan; He & Min) highlight a number of consistent, ongoing challenges and concerns in the development and implementation of the nationwide IB-CET (Internet Based College English Test) and institutional computer-adaptive…
Your Career in Computer Programming.
ERIC Educational Resources Information Center
Seligsohn, I. J.
This book offers the career-minded young reader insight into computers and computer programming, by describing the nature of the work, the actual workings of the machines, the language of computers, their history, and their far-reaching and increasing applications in business, industry, science, education, defense, and government. At the same time,…
Turned on to Language Arts: Computer Literacy in the Primary Grades.
ERIC Educational Resources Information Center
Guthrie, Larry F.; Richardson, Susan
1995-01-01
Describes Apple Computer's Early Language Connections (ELC) program. Designed for K-2 grades, ELC integrates Macintosh computers, children's literature, instructional software, and other curriculum materials, including sample lessons constructed around thematic units. The literature-based product uses a whole-language approach (with phonics…
ERIC Educational Resources Information Center
Kessler, Greg; Bikowski, Dawn
2010-01-01
This study reports on attention to meaning among 40 NNS pre-service EFL teachers as they collaboratively constructed a wiki in a 16-week online course. Focus is placed upon the nature of individual and group behavior when attending to meaning in a long-term wiki-based collaborative activity as well as the students' collaborative autonomous…
A Survey of Object Oriented Languages in Programming Environments.
1987-06-01
subset of natural languages might be more effective, and make the human-computer interface more friendly. ...and complexity of Ada. He meant that the language contained too many features that made it complicated to use effectively. Much of the complexity comes...by sending messages to a class instance. A small subset of the methods in Smalltalk-80 are not expressed in the Smalltalk-80 programming language
Automating software design system DESTA
NASA Technical Reports Server (NTRS)
Lovitsky, Vladimir A.; Pearce, Patricia D.
1992-01-01
'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms by means of a natural language (Russian or English) functional specification of algorithms or software being developed. DESTA represents the computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tools support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software in contrast to conventional Computer Aided Design (CAD) systems which provide low level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main features of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.
Cognition, Corpora, and Computing: Triangulating Research in Usage-Based Language Learning
ERIC Educational Resources Information Center
Ellis, Nick C.
2017-01-01
Usage-based approaches explore how we learn language from our experience of language. Related research thus involves the analysis of the usage from which learners learn and of learner usage as it develops. This program involves considerable data recording, transcription, and analysis, using a variety of corpus and computational techniques, many of…
Current Trends in Computer-Based Language Instruction.
ERIC Educational Resources Information Center
Hart, Robert S.
1987-01-01
A discussion of computer-based language instruction examines the quality of materials currently in use and looks at developments in the field. It is found that language courseware is generally weak in the areas of error analysis and feedback, communicative realism, and convenience of lesson authoring. A review of research under way to improve…
Computer versus Paper-Based Reading: A Case Study in English Language Teaching Context
ERIC Educational Resources Information Center
Solak, Ekrem
2014-01-01
This research aims to determine the preference of prospective English teachers in performing computer and paper-based reading tasks and to what extent computer and paper-based reading influence their reading speed, accuracy and comprehension. The research was conducted at a State run University, English Language Teaching Department in Turkey. The…
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1997-01-01
A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.
ERIC Educational Resources Information Center
Crossley, Scott A.
2013-01-01
This paper provides an agenda for replication studies focusing on second language (L2) writing and the use of natural language processing (NLP) tools and machine learning algorithms. Specifically, it introduces a range of the available NLP tools and machine learning algorithms and demonstrates how these could be used to replicate seminal studies…
ERIC Educational Resources Information Center
Hutchins, Sandra E.
By analyzing the lexicology of natural language (English or other languages as they are commonly spoken or written), as compared to computer languages, this study explored the extent to which syntactic and semantic levels of linguistic analysis can be implemented and effectively used on microcomputers. In Phase I of the study, the Apple IIe with…
Assessing Creative Problem-Solving with Automated Text Grading
ERIC Educational Resources Information Center
Wang, Hao-Chuan; Chang, Chun-Yen; Li, Tsai-Yen
2008-01-01
The work aims to improve the assessment of creative problem-solving in science education by employing language technologies and computational-statistical machine learning methods to grade students' natural language responses automatically. To evaluate constructs like creative problem-solving with validity, open-ended questions that elicit…
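One simple machine-learning-flavoured route to automated grading, offered here as a hedged sketch rather than the paper's model, is to score an open-ended answer by its TF-IDF cosine similarity to reference answers. The reference answers, the student response, and the absence of any trained grader are all illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

references = [
    "Reduce air pollution by promoting public transport and electric vehicles",
    "Plant more trees and filter factory emissions to improve air quality",
]
student_answer = "We could cut pollution by planting trees and using buses instead of cars"

# Fit TF-IDF on references plus the answer so vocabularies align.
vectorizer = TfidfVectorizer().fit(references + [student_answer])
ref_vecs = vectorizer.transform(references)
ans_vec = vectorizer.transform([student_answer])

# Grade as the best similarity to any reference answer.
score = cosine_similarity(ans_vec, ref_vecs).max()
print(f"similarity-based score: {score:.2f}")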
Opus: A Coordination Language for Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Haines, Matthew; Mehrotra, Piyush; Zima, Hans; vanRosendale, John
1997-01-01
Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
Karakülah, Gökhan; Dicle, Oğuz; Koşaner, Ozgün; Suner, Aslı; Birant, Çağdaş Can; Berber, Tolga; Canbek, Sezin
2014-01-01
The lack of laboratory tests for the diagnosis of most congenital anomalies makes physical examination of the case crucial for diagnosis, and cases in the diagnostic phase are mostly evaluated in the light of the literature. In this respect, for accurate diagnosis, it is of great importance to provide the decision maker with decision support by presenting the literature knowledge about a particular case. Here, we demonstrate a methodology for automatically scanning and determining phenotypic features from case reports related to congenital anomalies in the literature using text and natural language processing methods, and we create the framework of an information source for a potential diagnostic decision support system for congenital anomalies.
Exploring autonomy through computational biomodelling.
Palfreyman, Niall
2009-07-01
The question of whether living organisms possess autonomy of action is tied up with the nature of causal efficacy. Yet the nature of organisms is such that they frequently defy conventional causal language. Did the fig wasp select the fig, or vice versa? Is this an epithelial cell because of its genetic structure, or because it develops within the epithelium? The intimate coupling of biological levels of organisation leads developmental systems theory to deconstruct the biological organism into a life-cycle process which constitutes itself from the resources available within a complete developmental system. This radical proposal necessarily raises questions regarding the ontological status of organisms: Does an organism possess existence distinguishable from its molecular composition and social environment? The ambiguity of biological causality makes such questions difficult to answer or even formulate, and computational biology has an important role to play in operationalising the language in which they are framed. In this article we review the role played by computational biomodels in shedding light on the ontological status of organisms. These models are drawn from backgrounds ranging from molecular kinetics to niche construction, and all attempt to trace biological processes to a causal, and therefore existent, source. We conclude that computational biomodelling plays a fertile role in furnishing a proof of concept for conjectures in the philosophy of biology, and suggests the need for a process-based ontology of biological systems.
Students' Motivation towards Computer Use in EFL Learning
ERIC Educational Resources Information Center
Genc, Gulten; Aydin, Selami
2010-01-01
It has been widely recognized that language instruction that integrates technology has become popular, and has had a tremendous impact on language learning process whereas learners are expected to be more motivated in a web-based Computer assisted language learning program, and improve their comprehensive language ability. Thus, the present paper…
Communicating River Level Data and Information to Stakeholders with Different Interests
NASA Astrophysics Data System (ADS)
Macleod, K.; Sripada, S.; Ioris, A.; Arts, K.; van der Wal, R.
2012-12-01
There is a need to increase the effectiveness of how river level data are communicated to a range of stakeholders with an interest in river level information, so as to increase the use of data collected by regulatory agencies. Currently, river level data are provided to members of the public through a web site without any formal engagement with river users having taken place. In our research project, called wikiRivers, we are working with the suppliers of river level data as well as the users of these data to explore and improve, from the user perspective, how river level data and information are made available online. We are focusing on the application of natural language generation technology to create textual summaries of river level data tailored for specific interest groups. These tailored textual summaries will be presented alongside other modes of information presentation (e.g. maps and visualizations) with the aim of increasing communication effectiveness. Natural language generation involves developing computational models that use non-linguistic input data to produce natural language as their output. Acquiring accurate system knowledge for natural language generation is a key step in developing such an effective computer software system. In this paper we set out the needs for this project based on discussions with the stakeholder who supplies the river level data and the current cyberinfrastructure, and report on what we have learned from the individuals and groups who use river level data. The stages in the wikiRivers stakeholder identification, engagement and cyberinfrastructure development are: S1, interviews with collectors and suppliers of river level data; S2, river level data stakeholder analysis, including analysis of their interests in individual river networks in Scotland and what they require from the cyberinfrastructure; and S3-S5, iterative development and testing of the cyberinfrastructure and modelling of river level data with domain and stakeholder knowledge.
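At its simplest, tailoring a textual summary to river level data can be pictured as template-based natural language generation. The sketch below is far more rudimentary than a full NLG system, and the station name, readings, and thresholds are invented for illustration.

readings = {"station": "River Dee at Mar Lodge",
            "levels_m": [0.42, 0.45, 0.61, 0.78]}  # oldest to newest, metres

def summarise(data, rise_threshold=0.1):
    # Map the observed change onto a short verbal description.
    levels = data["levels_m"]
    change = levels[-1] - levels[0]
    if change > rise_threshold:
        trend = "has risen noticeably"
    elif change < -rise_threshold:
        trend = "has fallen noticeably"
    else:
        trend = "has stayed broadly steady"
    return (f"The level at {data['station']} {trend} over the period, "
            f"from {levels[0]:.2f} m to {levels[-1]:.2f} m.")

print(summarise(readings))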
Listening Strategy Use and Influential Factors in Web-Based Computer Assisted Language Learning
ERIC Educational Resources Information Center
Chen, L.; Zhang, R.; Liu, C.
2014-01-01
This study investigates second and foreign language (L2) learners' listening strategy use and factors that influence their strategy use in a Web-based computer assisted language learning (CALL) system. A strategy inventory, a factor questionnaire and a standardized listening test were used to collect data from a group of 82 Chinese students…
NASA Astrophysics Data System (ADS)
Stout, Dietrich
2016-03-01
Twenty-five years ago, Pinker and Bloom [1] helped reinvigorate research on language evolution by arguing that language "shows signs of complex design for the communication of propositional structures, and the only explanation for the origin of organs with complex design is the process of natural selection." Since then, empirical research has tested the assertions of (cross-cultural) universality, (cross-species) uniqueness, and (cross-domain) specificity underpinning this argument from design. Appearances aside, points of consensus have emerged. The existence of a core computational and neural substrate unique to language and/or humans is still debated, but it is widely agreed that: 1) human language performance overlaps with behaviors in other domains and species, and 2) such general, pre-existing capacities provided the context for language-specific evolution (e.g. [2]).
Dialogue-Based CALL: An Overview of Existing Research
ERIC Educational Resources Information Center
Bibauw, Serge; François, Thomas; Desmet, Piet
2015-01-01
Dialogue-based Computer-Assisted Language Learning (CALL) covers applications and systems allowing a learner to practice the target language in a meaning-focused conversational activity with an automated agent. We first present a common definition for dialogue-based CALL, based on three features: dialogue as the activity unit, computer as the…
English Language Learners' Strategies for Reading Computer-Based Texts at Home and in School
ERIC Educational Resources Information Center
Park, Ho-Ryong; Kim, Deoksoon
2016-01-01
This study investigated four elementary-level English language learners' (ELLs') use of strategies for reading computer-based texts at home and in school. The ELLs in this study were in the fourth and fifth grades in a public elementary school. We identify the ELLs' strategies for reading computer-based texts in home and school environments. We…
Computer Programming Languages for Health Care
O'Neill, Joseph T.
1979-01-01
This paper advocates the use of standard high level programming languages for medical computing. It recommends that U.S. Government agencies having health care missions implement coordinated policies that encourage the use of existing standard languages and the development of new ones, thereby enabling them and the medical computing community at large to share state-of-the-art application programs. Examples are based on a model that characterizes language and language translator influence upon the specification, development, test, evaluation, and transfer of application programs.
Memory Reconsolidation and Computational Learning
2010-03-01
Cooper and H.T. Siegelmann, "Memory Reconsolidation for Natural Language Processing," Cognitive Neurodynamics, 3, 2009: 365-372. M.M. Olsen, N. ... computerized memories and other state-of-the-art cognitive architectures, our memory system has the ability to process on-line and in real-time as ... on both continuous and binary inputs, unlike state-of-the-art methods in case-based reasoning and in cognitive architectures, which are bound to
Computer Generation of Natural Language from a Deep Conceptual Base
1974-01-01
It would be useful to have machines which could read scientific documents, newspaper articles, novels, etc., and translate them into other... preparing abstracts of articles and in headline writing (at least in those cases in which headlines are used as an indication of article content... above), a definite or indefinite article is attached to the noun phrase. The selection of color and size adjectives is made in ... fashion
Condition Recognition for a Program Synthesizer.
1981-06-01
suited for this type of work since it neither complains of boredom nor wanders from its assigned task. The machine meticulously sequences through a series... natural language understanding is a difficult problem that can be solved only in limited domains. The use of natural language in programming has been... and output behavior. For example, if someone wanted to describe a program to compute the Fibonacci numbers then he could supply the input-output pairs
Learning for Semantic Parsing with Kernels under Various Forms of Supervision
2007-08-01
natural language sentences to their formal executable meaning representations. This is a challenging problem and is critical for developing computing... sentences are semantically tractable. This indicates that Geoquery is a more challenging domain for semantic parsing than ATIS. In the past, there have been a... Combining parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-99), pp. 187–194
A Tutorial on Techniques and Applications for Natural Language Processing
1983-10-17
mentioned above as specific to context-free grammars were tackled by linguists, in particular Chomsky [21, 22], through Transformational Grammar. As shown... A Tutorial on Techniques and Applications for Natural Language Processing, Philip J. Hayes and Jaime G. Carbonell, Carnegie-Mellon University, 17 October 1983.
Natural-Language Parser for PBEM
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
A computer program called "Hunter" accepts, as input, a colloquial-English description of a set of policy-based-management rules, and parses that description into a form useable by policy-based enterprise management (PBEM) software. PBEM is a rules-based approach suitable for automating some management tasks. PBEM simplifies the management of a given enterprise through establishment of policies addressing situations that are likely to occur. Hunter was developed to have a unique capability to extract the intended meaning instead of focusing on parsing the exact ways in which individual words are used.
Representing sentence information
NASA Astrophysics Data System (ADS)
Perkins, Walton A., III
1991-03-01
This paper describes a computer-oriented representation for sentence information. Whereas many Artificial Intelligence (AI) natural language systems start with a syntactic parse of a sentence into the linguist's components: noun, verb, adjective, preposition, etc., we argue that it is better to parse the input sentence into 'meaning' components: attribute, attribute value, object class, object instance, and relation. AI systems need a representation that will allow rapid storage and retrieval of information and convenient reasoning with that information. The attribute-of-object representation has proven useful for handling information in relational databases (which are well known for their efficiency in storage and retrieval) and for reasoning in knowledge-based systems. On the other hand, the linguist's syntactic representation of the words in sentences has not been shown to be useful for information handling and reasoning. We think it is an unnecessary and misleading intermediate form. Our sentence representation is semantically based, in terms of attribute, attribute value, object class, object instance, and relation. Every sentence is segmented into one or more components with the form: 'attribute' of 'object' 'relation' 'attribute value'. Using only one format for all information gives the system simplicity and good performance, as a RISC architecture does for hardware. The attribute-of-object representation is not new; it is used extensively in relational databases and knowledge-based systems. However, we will show that it can be used as a meaning representation for natural language sentences with minor extensions. In this paper we describe how a computer system can parse English sentences into this representation and generate English sentences from this representation. Much of this has been tested with a computer implementation.
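As a rough illustration of the attribute-of-object format just described, the sketch below stores a sentence as a set of ('attribute', 'object', 'relation', 'value') tuples and retrieves values by simple filtering. The field names, example sentence, and query helper are hypothetical illustrations, not taken from the paper's system.

```python
# Hypothetical sketch of an attribute-of-object representation:
# every sentence fragment becomes ('attribute', 'object', 'relation', 'value').
from collections import namedtuple

Fact = namedtuple("Fact", ["attribute", "obj", "relation", "value"])

# "The apple John bought is red and weighs 150 grams."
facts = [
    Fact("color", "apple_1", "equals", "red"),
    Fact("weight", "apple_1", "equals", "150 g"),
    Fact("owner", "apple_1", "equals", "John"),
]

def query(facts, attribute, obj):
    """Retrieve all stored values of an attribute for a given object instance."""
    return [f.value for f in facts if f.attribute == attribute and f.obj == obj]

print(query(facts, "color", "apple_1"))  # ['red']
```

Because every fact shares the same four-field shape, storage, retrieval, and reasoning reduce to uniform filtering over one format, which is the efficiency argument the abstract makes by analogy to relational databases and RISC hardware.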
Applying and evaluating computer-animated tutors
NASA Astrophysics Data System (ADS)
Massaro, Dominic W.; Bosseler, Alexis; Stone, Patrick S.; Connors, Pamela
2002-05-01
We have developed computer-assisted speech and language tutors for deaf, hard of hearing, and autistic children. Our language-training program utilizes our computer-animated talking head, Baldi, as the conversational agent, who guides students through a variety of exercises designed to teach vocabulary and grammar, to improve speech articulation, and to develop linguistic and phonological awareness. Baldi is an accurate three-dimensional animated talking head appropriately aligned with either synthesized or natural speech. Baldi has a tongue and palate, which can be displayed by making his skin transparent. Two specific language-training programs have been evaluated to determine if they improve word learning and speech articulation. The results indicate that the programs are effective in teaching receptive and productive language. Advantages of utilizing a computer-animated agent as a language tutor are the popularity of computers and embodied conversational agents with autistic kids, the perpetual availability of the program, and individualized instruction. Students enjoy working with Baldi because he offers extreme patience, he doesn't become angry, tired, or bored, and he is in effect a perpetual teaching machine. The results indicate that the psychology and technology of Baldi holds great promise in language learning and speech therapy. [Work supported by NSF Grant Nos. CDA-9726363 and BCS-9905176 and Public Health Service Grant No. PHS R01 DC00236.]
Natural Language Description of Emotion
ERIC Educational Resources Information Center
Kazemzadeh, Abe
2013-01-01
This dissertation studies how people describe emotions with language and how computers can simulate this descriptive behavior. Although many non-human animals can express their current emotions as social signals, only humans can communicate about emotions symbolically. This symbolic communication of emotion allows us to talk about emotions that we…
ERIC Educational Resources Information Center
Cautin, Harvey; Regan, Edward
Requirements are discussed for an information retrieval language that enables users to employ natural language sentences in interaction with computer-stored files. Anticipated modes of operation of the system are outlined. These are: the search mode, the dictionary mode, the tables mode, and the statistical mode. Analysis of sample sentences…
Very Large Scale Integration (VLSI).
ERIC Educational Resources Information Center
Yeaman, Andrew R. J.
Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…
A data analysis expert system for large established distributed databases
NASA Technical Reports Server (NTRS)
Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick
1987-01-01
A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural easy-to-use error-free database query language; user ability to alter query language vocabulary and data analysis heuristic; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.
NASA Technical Reports Server (NTRS)
Martinko, E. A. (Principal Investigator); Caron, L. M.; Stewart, D. S.
1984-01-01
Data bases and information systems developed and maintained by state agencies to support planning and management of environmental and natural resources were inventoried for all 50 states, Puerto Rico, and the U.S. Virgin Islands. The information obtained is assembled into a computerized data base catalog which is thoroughly cross-referenced. Retrieval is possible by code, state, data base name, data base acronym, agency, computer, GIS capability, language, specialized software, data category name, geographic reference, data sources, and level of reliability. The 324 automated data bases identified are described.
Boguslav, Mayla; Cohen, Kevin Bretonnel
2017-01-01
Human-annotated data is a fundamental part of natural language processing system development and evaluation. The quality of that data is typically assessed by calculating the agreement between the annotators. It is widely assumed that this agreement between annotators is the upper limit on system performance in natural language processing: if humans can't agree with each other about the classification more than some percentage of the time, we don't expect a computer to do any better. We trace the logical positivist roots of the motivation for measuring inter-annotator agreement, demonstrate the prevalence of the widely-held assumption about the relationship between inter-annotator agreement and system performance, and present data that suggest that inter-annotator agreement is not, in fact, an upper bound on language processing system performance.
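For readers unfamiliar with how such agreement figures are typically obtained, the sketch below computes Cohen's kappa for two annotators. Kappa is only one common chance-corrected coefficient, chosen here for illustration; the toy labels are invented and the paper does not commit to this particular measure.

```python
# Cohen's kappa for two annotators over the same items (illustrative only).
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement estimated from each annotator's label distribution.
    pa, pb = Counter(labels_a), Counter(labels_b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

annotator_1 = ["PER", "ORG", "PER", "LOC", "PER", "ORG"]
annotator_2 = ["PER", "ORG", "LOC", "LOC", "PER", "PER"]
print(round(cohens_kappa(annotator_1, annotator_2), 3))  # chance-corrected agreement
```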
The Evolution of Networked Computing in the Teaching of Japanese as a Foreign Language.
ERIC Educational Resources Information Center
Harrison, Richard
1998-01-01
Reviews the evolution of Internet-based projects in Japanese computer-assisted language learning and suggests future directions in which the field may develop, based on emerging network technology and learning theory. (Author/VWL)
ERIC Educational Resources Information Center
Pierson, Susan Jacques
2015-01-01
One way to provide high quality instruction for underserved English Language Learners around the world is to combine Task-Based English Language Learning with Computer-Assisted Instruction. As part of an ongoing project, "Bridges to Swaziland," these approaches have been implemented in a determined effort to improve the ESL program for…
The Use of Computer-Based Simulation to Aid Comprehension and Incidental Vocabulary Learning
ERIC Educational Resources Information Center
Mohsen, Mohammed Ali
2016-01-01
One of the main issues in language learning is to find ways to enable learners to interact with the language input in an involved task. Given that computer-based simulation allows learners to interact with visual modes, this article examines how the interaction of students with an online video simulation affects their second language video…
ERIC Educational Resources Information Center
Bejar, Isaac I.; VanWinkle, Waverely; Madnani, Nitin; Lewis, William; Steier, Michael
2013-01-01
The paper applies a natural language computational tool to study a potential construct-irrelevant response strategy, namely the use of "shell language." Although the study is motivated by the impending increase in the volume of scoring of students' responses from assessments to be developed in response to the Race to the Top initiative,…
2016-01-05
discretizations. We maintain that what is clear at the mathematical level should be equally clear in computation. In this small STIR project, we separate the... concerns of describing and discretizing such models by defining an input language representing PDE, including steady-state and transient, linear and... solvers, such as [8, 9], focused on the solvers themselves and particular families of discretizations (e.g. finite elements), and now it is natural to
The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters
Rempel, David; Camilleri, Matt J.; Lee, David L.
2015-01-01
The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input. PMID:26028955
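The sketch below shows the general shape of such an analysis: a logistic regression of a binary high-discomfort rating on binary posture features. The feature names echo those reported above, but the data and model are invented for illustration and do not reproduce the study's dataset or statistical procedure.

```python
# Illustrative logistic regression: posture features vs. high-discomfort ratings.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: flexed_wrist, discordant_adjacent_fingers, extended_fingers (toy data)
X = np.array([[1, 0, 1], [1, 1, 0], [0, 0, 0], [0, 1, 1],
              [0, 0, 1], [1, 1, 1], [0, 0, 0], [1, 0, 0]])
y = np.array([1, 1, 0, 1, 0, 1, 0, 0])  # 1 = gesture rated high discomfort

model = LogisticRegression().fit(X, y)
for name, coef in zip(["flexed_wrist", "discordant_fingers", "extended_fingers"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # positive coefficient = higher odds of discomfort
```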
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
Inferring Speaker Affect in Spoken Natural Language Communication
ERIC Educational Resources Information Center
Pon-Barry, Heather Roberta
2013-01-01
The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…
Bridging Levels of Analysis: Learning, Information Theory, and the Lexicon
ERIC Educational Resources Information Center
Dye, Melody
2017-01-01
While information theory is typically considered in the context of modern computing and engineering, its core mathematical principles provide a potentially useful lens through which to consider human language. Like the artificial communication systems such principles were invented to describe, natural languages involve a sender and receiver, a…
A Graphical Database Interface for Casual, Naive Users.
ERIC Educational Resources Information Center
Burgess, Clifford; Swigger, Kathleen
1986-01-01
Describes the design of a database interface for infrequent users of computers which consists of a graphical display of a model of a database and a natural language query language. This interface was designed for and tested with physicians at the University of Texas Health Science Center in Dallas. (LRW)
English Complex Verb Constructions: Identification and Inference
ERIC Educational Resources Information Center
Tu, Yuancheng
2012-01-01
The fundamental problem faced by automatic text understanding in Natural Language Processing (NLP) is to identify semantically related pieces of text and integrate them together to compute the meaning of the whole text. However, the principle of compositionality runs into trouble very quickly when real language is examined with its frequent…
The Relevance of AI Research to CAI.
ERIC Educational Resources Information Center
Kearsley, Greg P.
This article provides a tutorial introduction to Artificial Intelligence (AI) research for those involved in Computer Assisted Instruction (CAI). The general theme is that much of the current work in AI, particularly in the areas of natural language understanding systems, rule induction, programming languages, and Socratic systems, has important…
Semantic biomedical resource discovery: a Natural Language Processing framework.
Sfakianaki, Pepi; Koumakis, Lefteris; Sfakianakis, Stelios; Iatraki, Galatia; Zacharioudakis, Giorgos; Graf, Norbert; Marias, Kostas; Tsiknakis, Manolis
2015-09-30
A plethora of publicly available biomedical resources currently exist and their number is constantly increasing at a fast rate. In parallel, specialized repositories are being developed, indexing numerous clinical and biomedical tools. The main drawback of such repositories is the difficulty in locating appropriate resources for a clinical or biomedical decision task, especially for users who are not Information Technology experts. In parallel, although NLP research in the clinical domain has been active since the 1960s, progress in the development of NLP applications has been slow and lags behind progress in the general NLP domain. The aim of the present study is to investigate the use of semantics for biomedical resource annotation with domain-specific ontologies, and to exploit Natural Language Processing methods to empower users who are not Information Technology experts to efficiently search for biomedical resources using natural language. A Natural Language Processing engine which can "translate" free text into targeted queries, automatically transforming a clinical research question into a request description that contains only terms of ontologies, has been implemented. The implementation is based on information extraction techniques for text in natural language, guided by integrated ontologies. Furthermore, knowledge from robust text mining methods has been incorporated to map descriptions onto suitable domain ontologies, in order to ensure that the biomedical resource descriptions are domain oriented and to enhance the accuracy of service discovery. The framework is freely available as a web application at http://calchas.ics.forth.gr/ . For our experiments, a range of clinical questions was established based on descriptions of clinical trials from the ClinicalTrials.gov registry as well as recommendations from clinicians. Domain experts manually identified the tools in a tools repository which are suitable for addressing the clinical questions at hand, either individually or as a set of tools forming a computational pipeline. The results were compared with those obtained from an automated discovery of candidate biomedical tools. For the evaluation of the results, precision and recall measurements were used. Our results indicate that the proposed framework has high precision and low recall, implying that the system returns essentially more relevant results than irrelevant ones. There are adequate biomedical ontologies already available, sufficient existing NLP tools, and biomedical annotation systems of adequate quality for the implementation of a biomedical resources discovery framework based on the semantic annotation of resources and the use of NLP techniques. The results of the present study demonstrate the clinical utility of the proposed framework, which aims to bridge the gap between clinical questions in natural language and efficient, dynamic biomedical resources discovery.
Computational Investigations of Multiword Chunks in Language Learning.
McCauley, Stewart M; Christiansen, Morten H
2017-07-01
Second-language learners rarely arrive at native proficiency in a number of linguistic domains, including morphological and syntactic processing. Previous approaches to understanding the different outcomes of first- versus second-language learning have focused on cognitive and neural factors. In contrast, we explore the possibility that children and adults may rely on different linguistic units throughout the course of language learning, with specific focus on the granularity of those units. Following recent psycholinguistic evidence for the role of multiword chunks in online language processing, we explore the hypothesis that children rely more heavily on multiword units in language learning than do adults learning a second language. To this end, we take an initial step toward using large-scale, corpus-based computational modeling as a tool for exploring the granularity of speakers' linguistic units. Employing a computational model of language learning, the Chunk-Based Learner, we compare the usefulness of chunk-based knowledge in accounting for the speech of second-language learners versus children and adults speaking their first language. Our findings suggest that while multiword units are likely to play a role in second-language learning, adults may learn less useful chunks, rely on them to a lesser extent, and arrive at them through different means than children learning a first language. Copyright © 2017 Cognitive Science Society, Inc.
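A highly simplified sketch of the general chunking idea is given below: adjacent words are merged into a multiword chunk when their forward transitional probability in a toy corpus exceeds a threshold. This illustrates the notion of frequency-derived multiword units only; it is not the Chunk-Based Learner's actual algorithm, and the corpus and threshold are invented.

```python
# Toy illustration of transitional-probability chunking (not the CBL algorithm).
from collections import Counter

def chunk_utterance(utterance, bigram_counts, unigram_counts, threshold=0.6):
    words = utterance.split()
    chunks, current = [], [words[0]]
    for w1, w2 in zip(words, words[1:]):
        tp = bigram_counts[(w1, w2)] / max(unigram_counts[w1], 1)
        if tp >= threshold:
            current.append(w2)                # strong transition: extend the chunk
        else:
            chunks.append(" ".join(current))  # weak transition: chunk boundary
            current = [w2]
    chunks.append(" ".join(current))
    return chunks

corpus = ["you want to go", "do you want the ball", "you want to play"]
unigrams = Counter(w for u in corpus for w in u.split())
bigrams = Counter(b for u in corpus for b in zip(u.split(), u.split()[1:]))
print(chunk_utterance("do you want to go", bigrams, unigrams))  # e.g. ['do you want to', 'go']
```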
Alt, Mary; Arizmendi, Genesis D; Beal, Carole R
2014-07-01
The present study examined the relationship between mathematics and language to better understand the nature of the deficit and the academic implications associated with specific language impairment (SLI), as well as the academic implications for English language learners (ELLs). School-age children (N = 61; 20 SLI, 20 ELL, 21 native monolingual English [NE]) were assessed using a norm-referenced mathematics instrument and 3 experimental computer-based mathematics games that varied in language demands. Group means were compared with analyses of variance. The ELL group was less accurate than the NE group only when tasks were language heavy. In contrast, the group with SLI was less accurate than the groups with NE and ELLs on language-heavy tasks and some language-light tasks. Specifically, the group with SLI was less accurate on tasks that involved comparing numerical symbols and using visual working memory for patterns. However, there were no group differences between children with SLI and peers without SLI on language-light mathematics tasks that involved visual working memory for numerical symbols. Mathematical difficulties of children who are ELLs appear to be related to the language demands of mathematics tasks. In contrast, children with SLI appear to have difficulty with mathematics tasks because of linguistic as well as nonlinguistic processing constraints.
Teaching Computer Languages and Elementary Theory for Mixed Audiences at University Level
NASA Astrophysics Data System (ADS)
Christiansen, Henning
2004-09-01
Theoretical issues of computer science are traditionally taught in a way that presupposes a solid mathematical background and are usually considered more or less inaccessible for students without this. An effective methodology is described which has been developed for a target group of university students with different backgrounds such as natural science or humanities. It has been developed for a course that integrates theoretical material on computer languages and abstract machines with practical programming techniques. Prolog used as meta-language for describing language issues is the central instrument in the approach: Formal descriptions become running prototypes that are easy and appealing to test and modify, and can be extended into analyzers, interpreters, and tools such as tracers and debuggers. Experience shows a high learning curve, especially when the principles are extended into a learning-by-doing approach having the students to develop such descriptions themselves from an informal introduction.
Net-centric ACT-R-Based Cognitive Architecture with DEVS Unified Process
2011-04-01
effort has been spent in analyzing various forms of requirement specifications, viz., state-based, Natural Language based, UML-based, Rule-based, BPMN... requirement specifications in one of the chosen formats such as BPMN, DoDAF, Natural Language Processing (NLP) based, UML-based, DSL or simply
Application Portable Parallel Library
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also include heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.
ERIC Educational Resources Information Center
Cárdenas-Claros, Mónica Stella
2015-01-01
This paper reports on the findings of two qualitative exploratory studies that sought to investigate design features of help options in computer-based L2 listening materials. Informed by principles of participatory design, language learners, software designers, language teachers, and a computer programmer worked collaboratively in a series of…
1974-07-01
This document was generated by the Stanford Artificial Intelligence Laboratory's document compiler, "PUB", and reproduced on a... for more sophisticated artificial (programming) languages. The new issues became those of how to represent a grammar as precise syntactic structures... challenge lies in discovering - either by synthesis of an artificial system, or by analysis of a natural one - the underlying logical (as opposed to
NASA Astrophysics Data System (ADS)
Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L.
2014-09-01
From the perspective of language, Fitch's [1] claim that theories of cognitive computation should not be separated from those of implementation surely deserves applauding. Recent developments in the Cognitive Neuroscience of Language, leading to the new field of the Neurobiology of Language [2-4], emphasise precisely this point: rather than attempting to simply map cognitive theories of language onto the brain, we should aspire to understand how the brain implements language. This perspective resonates with many of the points raised by Fitch in his review, such as the discussion of unhelpful dichotomies (e.g., Nature versus Nurture). Cognitive dichotomies and debates have repeatedly turned out to be of limited usefulness when it comes to understanding language in the brain. The famous modularity-versus-interactivity and dual route-versus-connectionist debates are cases in point: in spite of hundreds of experiments using neuroimaging (or other techniques), or the construction of myriad computer models, little progress has been made in their resolution. This suggests that dichotomies proposed at a purely cognitive (or computational) level without consideration of biological grounding appear to be "asking the wrong questions" about the neurobiology of language. In accordance with these developments, several recent proposals explicitly consider neurobiological constraints while seeking to explain language processing at a cognitive level (e.g. [5-7]).
Development of the Tensoral Computer Language
NASA Technical Reports Server (NTRS)
Ferziger, Joel; Dresselhaus, Eliot
1996-01-01
The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.
A progress report on a NASA research program for embedded computer systems software
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Senn, E. H.; Will, R. W.; Straeter, T. A.
1979-01-01
The paper presents the results of the second stage of the Multipurpose User-oriented Software Technology (MUST) program. Four primary areas of activities are discussed: programming environment, HAL/S higher-order programming language support, the Integrated Verification and Testing System (IVTS), and distributed system language research. The software development environment is provided by the interactive software invocation system. The higher-order programming language (HOL) support chosen for consideration is HAL/S mainly because at the time it was one of the few HOLs with flight computer experience and it is the language used on the Shuttle program. The overall purpose of IVTS is to provide a 'user-friendly' software testing system which is highly modular, user controlled, and cooperative in nature.
CPP-TRS(C): On using visual cognitive symbols to enhance communication effectiveness
NASA Technical Reports Server (NTRS)
Tonfoni, Graziella
1994-01-01
Communicative Positioning Program/Text Representation Systems (CPP-TRS) is a visual language based on a system of 12 canvasses, 10 signals and 14 symbols. CPP-TRS is based on the fact that every communication action is the result of a set of cognitive processes and the whole system is based on the concept that you can enhance communication by visually perceiving text. With a simple syntax, CPP-TRS is capable of representing meaning and intention as well as communication functions visually. Those are precisely invisible aspects of natural language that are most relevant to getting the global meaning of a text. CPP-TRS reinforces natural language in human machine interaction systems. It complements natural language by adding certain important elements that are not represented by natural language by itself. These include communication intention and function of the text expressed by the sender, as well as the role the reader is supposed to play. The communication intention and function of a text and the reader's role are invisible in natural language because neither specific words nor punctuation conveys them sufficiently and unambiguously; they are therefore non-transparent.
Flexible Language Constructs for Large Parallel Programs
Rosing, Matt; Schnabel, Robert
1994-01-01
The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.
The Printout: Computers and Reading in the United Kingdom.
ERIC Educational Resources Information Center
Ewing, James M.
1988-01-01
Offers an overview of some reading and language arts computer projects in the United Kingdom, including language teaching and intelligent knowledge-based systems, assessment of written style by computer, and desktop publishing in the primary school. (ARH)
The language of gene ontology: a Zipf's law analysis.
Kalankesh, Leila Ranandeh; Stevens, Robert; Brass, Andy
2012-06-07
Most major genome projects and sequence databases provide a GO annotation of their data, either automatically or through human annotators, creating a large corpus of data written in the language of GO. Texts written in natural language show a statistical power law behaviour, Zipf's law, the exponent of which can provide useful information on the nature of the language being used. We have therefore explored the hypothesis that collections of GO annotations will show similar statistical behaviours to natural language. Annotations from the Gene Ontology Annotation project were found to follow Zipf's law. Surprisingly, the measured power law exponents were consistently different between annotations captured using the three GO sub-ontologies in the corpora (function, process and component). On filtering the corpora using GO evidence codes, we found that the value of the measured power law exponent responded in a predictable way as a function of the evidence codes used to support the annotation. Techniques from computational linguistics can provide new insights into the annotation process. GO annotations show similar statistical behaviours to those seen in natural language with measured exponents that provide a signal which correlates with the nature of the evidence codes used to support the annotations, suggesting that the measured exponent might provide a signal regarding the information content of the annotation.
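To make the analysis concrete, the sketch below estimates a Zipf exponent from a rank-frequency curve using an ordinary least-squares fit on log-log values. The annotation counts are invented, and the paper does not specify this exact fitting procedure; the sketch simply illustrates what measuring the power law exponent of an annotation corpus involves.

```python
# Estimate a Zipf exponent s from rank-frequency data: frequency ~ rank^(-s).
import math
from collections import Counter

# Invented annotation corpus: each item is one GO term assignment.
annotations = (["GO:0005515"] * 120 + ["GO:0005634"] * 60 + ["GO:0046872"] * 40 +
               ["GO:0016020"] * 30 + ["GO:0005737"] * 24 + ["GO:0008270"] * 20)

freqs = sorted(Counter(annotations).values(), reverse=True)
xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
ys = [math.log(f) for f in freqs]

# Ordinary least-squares slope; Zipf's law predicts log f ≈ c - s * log rank.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) /
         sum((x - mean_x) ** 2 for x in xs))
print(f"estimated Zipf exponent: {-slope:.2f}")
```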
ERIC Educational Resources Information Center
Van Campen, Joseph A.
Computer software for programed language instruction, developed in the second quarter of 1970 at Stanford's Institute for Mathematical Studies in the Social Sciences is described in this report. The software includes: (1) a PDP-10 computer assembly language for generating drill sentences; (2) a coding system allowing a large number of sentences to…
ERIC Educational Resources Information Center
Haider, Md. Zulfeqar; Chowdhury, Takad Ahmed
2012-01-01
This study is based on a survey of the Communicative English Language Certificate (CELC) course run by the Foreign Language Training Center (FLTC), a Project under the Ministry of Education, Bangladesh. FLTC is working to promote the teaching and learning of English through its eleven computer-based, state-of-the-art language laboratories. As…
Interactive natural language acquisition in a multi-modal recurrent neural architecture
NASA Astrophysics Data System (ADS)
Heinrich, Stefan; Wermter, Stefan
2018-01-01
For the complex human brain that enables us to communicate in natural language, we have gathered a good understanding of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we are not yet able to explain the behavioural and mechanistic characteristics of natural language processing, or how mechanisms in the brain allow us to acquire and process language. In bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of appropriate characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain, and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network in which parts have different leakage characteristics and thus operate on multiple timescales for every modality, together with the association of the higher-level nodes of all modalities into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.
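The multiple-timescale idea at the core of this architecture can be sketched as a leaky-integrator (continuous-time) recurrent layer in which different groups of units use different time constants, so some units integrate slowly while others react quickly. The sizes, time constants, and random weights below are arbitrary placeholders, not the published model or its training procedure.

```python
# Toy continuous-time recurrent layer with two groups of time constants.
import numpy as np

rng = np.random.default_rng(0)
n_fast, n_slow = 6, 4
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 16.0)])  # per-unit timescale
W = rng.normal(scale=0.3, size=(n, n))       # recurrent weights
W_in = rng.normal(scale=0.3, size=(n, 3))    # input weights

def step(u, x, dt=1.0):
    """One Euler step of the leaky-integrator dynamics."""
    return u + (dt / tau) * (-u + W @ np.tanh(u) + W_in @ x)

u = np.zeros(n)
for t in range(20):
    u = step(u, x=np.array([1.0, 0.0, 0.5]))
print("fast units:", np.round(u[:n_fast], 2))   # settle quickly
print("slow units:", np.round(u[n_fast:], 2))   # drift slowly, carrying context
```

Units with large time constants change slowly and can carry context across many steps, while small-constant units track rapid input changes; the published architecture combines such layers for every modality and binds their higher-level nodes into cell assemblies.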
Generation of Natural-Language Textual Summaries from Longitudinal Clinical Records.
Goldstein, Ayelet; Shahar, Yuval
2015-01-01
Physicians are required to interpret, abstract and present in free-text large amounts of clinical data in their daily tasks. This is especially true for chronic-disease domains, but holds also in other clinical domains. We have recently developed a prototype system, CliniText, which, given a time-oriented clinical database, and appropriate formal abstraction and summarization knowledge, combines the computational mechanisms of knowledge-based temporal data abstraction, textual summarization, abduction, and natural-language generation techniques, to generate an intelligent textual summary of longitudinal clinical data. We demonstrate our methodology, and the feasibility of providing a free-text summary of longitudinal electronic patient records, by generating summaries in two very different domains - Diabetes Management and Cardiothoracic surgery. In particular, we explain the process of generating a discharge summary of a patient who had undergone a Coronary Artery Bypass Graft operation, and a brief summary of the treatment of a diabetes patient for five years.
1986-12-31
synthesize synchronization skeletons," Science of Computer Programming 2, 1982, pp. 241-266. [Gel85] Gelernter, David, "Generative communication in... effective computation based on given primitives. An architecture is an abstract object-type, whose instances are computing systems. By a parallel computing... explaining the language primitives on this basis. We explain how such a basis can be "simpler" than a general-purpose manual-programming language such as
A natural language interface plug-in for cooperative query answering in biological databases.
Jamil, Hasan M
2012-06-11
One of the many unique features of biological databases is that the mere existence of a ground data item is not always a precondition for a query response. It may be argued that from a biologist's standpoint, queries are not always best posed using a structured language. By this we mean that approximate and flexible responses to natural-language-like queries are well suited to this domain. This is partly due to biologists' tendency to seek simpler interfaces and partly due to the fact that questions in biology involve high level concepts that are open to interpretations computed using sophisticated tools. In such highly interpretive environments, rigidly structured databases do not always perform well. In this paper, our goal is to propose a semantic correspondence plug-in to aid natural language query processing over arbitrary biological database schemas, with the aim of providing cooperative responses to queries tailored to users' interpretations. Natural language interfaces for databases are generally effective when they are tuned to the underlying database schema and its semantics. Therefore, changes in the database schema become impossible to support, or a substantial reorganization cost must be absorbed to reflect any change. We leverage developments in natural language parsing, rule languages and ontologies, and data integration technologies to assemble a prototype query processor that is able to transform a natural language query into a semantically equivalent structured query over the database. We allow knowledge rules and their frequent modifications as part of the underlying database schema. The approach we adopt in our plug-in overcomes some of the serious limitations of many contemporary natural language interfaces, including support for schema modifications and independence from the underlying database schema. The plug-in introduced in this paper is generic and facilitates connecting user-selected natural language interfaces to arbitrary databases using a semantic description of the intended application. We demonstrate the feasibility of our approach with a practical example.
Fifth Generation Computers: Their Implications for Further Education. An Occasional Paper.
ERIC Educational Resources Information Center
Ennals, Richard; Cotterell, Arthur
Research to develop a fifth generation of computers is underway in several countries. These computers, which will be distinguished by the ability to provide knowledge information processing and respond to natural language commands, will have a profound impact on the labor market and hence on further education. Rather than being a separate…
ERIC Educational Resources Information Center
Lee, Young-Jin
2010-01-01
Teaching computer programming to young children has been considered difficult because of its abstract and complex nature. The objectives of this study are (1) to investigate whether an innovative educational technology tool called Scratch could enable young children to learn abstract knowledge of computer programming while creating multimedia…
Using hybridization networks to retrace the evolution of Indo-European languages.
Willems, Matthieu; Lord, Etienne; Laforest, Louise; Labelle, Gilbert; Lapointe, François-Joseph; Di Sciullo, Anna Maria; Makarenkov, Vladimir
2016-09-06
Curious parallels between the processes of species and language evolution have been observed by many researchers. Retracing the evolution of Indo-European (IE) languages remains one of the most intriguing intellectual challenges in historical linguistics. Most of the IE language studies use the traditional phylogenetic tree model to represent the evolution of natural languages, thus not taking into account reticulate evolutionary events, such as language hybridization and word borrowing which can be associated with species hybridization and horizontal gene transfer, respectively. More recently, implicit evolutionary networks, such as split graphs and minimal lateral networks, have been used to account for reticulate evolution in linguistics. Striking parallels existing between the evolution of species and natural languages allowed us to apply three computational biology methods for reconstruction of phylogenetic networks to model the evolution of IE languages. We show how the transfer of methods between the two disciplines can be achieved, making necessary methodological adaptations. Considering basic vocabulary data from the well-known Dyen's lexical database, which contains word forms in 84 IE languages for the meanings of a 200-meaning Swadesh list, we adapt a recently developed computational biology algorithm for building explicit hybridization networks to study the evolution of IE languages and compare our findings to the results provided by the split graph and galled network methods. We conclude that explicit phylogenetic networks can be successfully used to identify donors and recipients of lexical material as well as the degree of influence of each donor language on the corresponding recipient languages. We show that our algorithm is well suited to detect reticulate relationships among languages, and present some historical and linguistic justification for the results obtained. Our findings could be further refined if relevant syntactic, phonological and morphological data could be analyzed along with the available lexical data.
Culture and biology in the origins of linguistic structure.
Kirby, Simon
2017-02-01
Language is systematically structured at all levels of description, arguably setting it apart from all other instances of communication in nature. In this article, I survey work over the last 20 years that emphasises the contributions of individual learning, cultural transmission, and biological evolution to explaining the structural design features of language. These 3 complex adaptive systems exist in a network of interactions: individual learning biases shape the dynamics of cultural evolution; universal features of linguistic structure arise from this cultural process and form the ultimate linguistic phenotype; the nature of this phenotype affects the fitness landscape for the biological evolution of the language faculty; and in turn this determines individuals' learning bias. Using a combination of computational simulation, laboratory experiments, and comparison with real-world cases of language emergence, I show that linguistic structure emerges as a natural outcome of cultural evolution once certain minimal biological requirements are in place.
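A minimal iterated-learning sketch in the spirit of this line of work is shown below: each generation observes only a subset of the previous generation's meaning-signal pairs (a transmission bottleneck) and fills the gaps by recombining feature-associated syllables. Every representational choice here (meanings, syllables, learning rule) is an invented illustration rather than any published model.

```python
# Invented illustration of iterated learning under a transmission bottleneck.
import random

random.seed(1)
MEANINGS = [(shape, color) for shape in "ABC" for color in "xyz"]  # 9 structured meanings
SYLLABLES = ["ta", "ki", "mo", "lu", "re", "po"]

def most_common(options):
    return max(set(options), key=options.count)

def learn(observed):
    """Memorise observed pairs; for unseen meanings, recombine the syllables most
    often paired with each feature value in the observed data."""
    shape_syll, color_syll = {}, {}
    for (shape, color), word in observed.items():
        shape_syll.setdefault(shape, []).append(word[:2])
        color_syll.setdefault(color, []).append(word[2:])
    language = dict(observed)
    for shape, color in MEANINGS:
        if (shape, color) not in language:
            language[(shape, color)] = (most_common(shape_syll.get(shape, SYLLABLES)) +
                                        most_common(color_syll.get(color, SYLLABLES)))
    return language

# Generation 0: a holistic language, one arbitrary two-syllable word per meaning.
language = {m: "".join(random.choices(SYLLABLES, k=2)) for m in MEANINGS}
for generation in range(10):
    sample = dict(random.sample(sorted(language.items()), 6))  # bottleneck: 6 of 9 items
    language = learn(sample)
for meaning, word in sorted(language.items()):
    print(meaning, word)  # shared prefixes/suffixes (a compositional pattern) tend to emerge
```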
What Artificial Grammar Learning Reveals about the Neurobiology of Syntax
ERIC Educational Resources Information Center
Petersson, Karl-Magnus; Folia, Vasiliki; Hagoort, Peter
2012-01-01
In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial…
Automated Error Detection for Developing Grammar Proficiency of ESL Learners
ERIC Educational Resources Information Center
Feng, Hui-Hsien; Saricaoglu, Aysel; Chukharev-Hudilainen, Evgeny
2016-01-01
Thanks to natural language processing technologies, computer programs are actively being used not only for holistic scoring, but also for formative evaluation of writing. CyWrite is one such program that is under development. The program is built upon Second Language Acquisition theories and aims to assist ESL learners in higher education by…
Assessing Online Collaboration among Language Teachers: A Cross-Institutional Case Study
ERIC Educational Resources Information Center
Arnold, Nike; Ducate, Lara; Lomicka, Lara; Lord, Gillian
2009-01-01
This paper focuses on computer-supported collaborative learning (CSCL) among foreign language (FL) graduate students from three universities, who worked together to create a wiki. In order to investigate the nature of CSCL among participants, this qualitative case study used the Curtis and Lawson framework (2001) to conduct a content analysis of…
Introduction to the special issue: parsimony and redundancy in models of language.
Wiechmann, Daniel; Kerz, Elma; Snider, Neal; Jaeger, T Florian
2013-09-01
One of the most fundamental goals in linguistic theory is to understand the nature of linguistic knowledge, that is, the representations and mechanisms that figure in a cognitively plausible model of human language-processing. The past 50 years have witnessed the development and refinement of various theories about what kind of 'stuff' human knowledge of language consists of, and technological advances now permit the development of increasingly sophisticated computational models implementing key assumptions of different theories from both rationalist and empiricist perspectives. The present special issue does not aim to present or discuss the arguments for and against the two epistemological stances or discuss evidence that supports either of them (cf. Bod, Hay, & Jannedy, 2003; Christiansen & Chater, 2008; Hauser, Chomsky, & Fitch, 2002; Oaksford & Chater, 2007; O'Donnell, Hauser, & Fitch, 2005). Rather, the research presented in this issue, which we label usage-based here, conceives of linguistic knowledge as being induced from experience. According to the strongest of such accounts, the acquisition and processing of language can be explained with reference to general cognitive mechanisms alone (rather than with reference to innate language-specific mechanisms). Defined in these terms, usage-based approaches encompass approaches referred to as experience-based, performance-based and/or emergentist approaches (Arnon & Snider, 2010; Bannard, Lieven, & Tomasello, 2009; Bannard & Matthews, 2008; Chater & Manning, 2006; Clark & Lappin, 2010; Gerken, Wilson, & Lewis, 2005; Gomez, 2002;
ERIC Educational Resources Information Center
Tanaka, Makiko
2015-01-01
The use of computers as an educational tool has become very popular in the context of language teaching and learning. Research into computer mediated communication (CMC) in a Japanese as a foreign language (JFL) learning and teaching context can take advantage of various pedagogical possibilities, just as in the English classroom. This study…
ERIC Educational Resources Information Center
Wang, Li
2005-01-01
With the advent of networked computers and Internet technology, computer-based instruction has been widely used in language classrooms throughout the United States. Computer technologies have dramatically changed the way people gather information, conduct research and communicate with others worldwide. Considering the tremendous startup expenses,…
An intelligent multi-media human-computer dialogue system
NASA Technical Reports Server (NTRS)
Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.
1988-01-01
Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.
Creating Body Shapes From Verbal Descriptions by Linking Similarity Spaces.
Hill, Matthew Q; Streuber, Stephan; Hahn, Carina A; Black, Michael J; O'Toole, Alice J
2016-11-01
Brief verbal descriptions of people's bodies (e.g., "curvy," "long-legged") can elicit vivid mental images. The ease with which these mental images are created belies the complexity of three-dimensional body shapes. We explored the relationship between body shapes and body descriptions and showed that a small number of words can be used to generate categorically accurate representations of three-dimensional bodies. The dimensions of body-shape variation that emerged in a language-based similarity space were related to major dimensions of variation computed directly from three-dimensional laser scans of 2,094 bodies. This relationship allowed us to generate three-dimensional models of people in the shape space using only their coordinates on analogous dimensions in the language-based description space. Human descriptions of photographed bodies and their corresponding models matched closely. The natural mapping between the spaces illustrates the role of language as a concise code for body shape that captures perceptually salient global and local body features. © The Author(s) 2016.
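A schematic sketch of the space-linking idea follows: fit a linear map from coordinates in a word-based description space to a few body-shape dimensions (for example, principal components of laser scans), then synthesise shape coordinates for a new verbal description. All data, dimensions, and the noise level below are invented, and the original work's actual spaces and fitting procedure are not reproduced here.

```python
# Toy linear mapping from a description space to a shape space.
import numpy as np

rng = np.random.default_rng(42)
n_bodies, n_words, n_shape_dims = 50, 5, 3   # words might be "curvy", "long-legged", ...

word_ratings = rng.uniform(1, 7, size=(n_bodies, n_words))            # per-body word ratings
true_map = rng.normal(size=(n_words, n_shape_dims))                   # hidden generative map
shape_coords = word_ratings @ true_map + rng.normal(scale=0.1, size=(n_bodies, n_shape_dims))

# Least-squares fit of the description-to-shape mapping.
mapping, *_ = np.linalg.lstsq(word_ratings, shape_coords, rcond=None)

new_description = np.array([[6.5, 2.0, 3.0, 5.5, 1.0]])               # ratings for a new body
print("predicted shape coordinates:", (new_description @ mapping).round(2))
```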
Action and language integration: from humans to cognitive robots.
Borghi, Anna M; Cangelosi, Angelo
2014-07-01
The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistic approaches, can be a powerful means that can help researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also by natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration between action and language can be addressed by developmental robotics and Human-Robot Interaction. Copyright © 2014 Cognitive Science Society, Inc.
Training Methods to Build Human Terrain Mapping Skills
2010-10-01
confidence in making friends, and talking to strangers. • Language – a few key phrases. • Language training with Arabic teacher (not computer-based... session to evaluate the lesson content and delivery method. Based on your feedback we will make changes and corrections to the content and the computer... requirement, exemplar training materials were developed. The training materials took the form of a modular computer/web-based and web-deliverable course of
A Large-Scale Analysis of Variance in Written Language
ERIC Educational Resources Information Center
Johns, Brendan T.; Jamieson, Randall K.
2018-01-01
The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words based on the statistical redundancies contained within natural language (e.g., Griffiths, Steyvers,…
High level language for measurement complex control based on the computer E-100I
NASA Technical Reports Server (NTRS)
Zubkov, B. V.
1980-01-01
A high-level language was designed to control the process of conducting an experiment using the computer "Elektronika-100I". Program examples are given to control the measuring and actuating devices. The procedure of including these programs in the suggested high-level language is described.
A Study of Multimedia Application-Based Vocabulary Acquisition
ERIC Educational Resources Information Center
Shao, Jing
2012-01-01
The development of computer-assisted language learning (CALL) has created the opportunity for exploring the effects of the multimedia application on foreign language vocabulary acquisition in recent years. This study provides an overview of computer-assisted language learning (CALL) and details multimedia applications as a recent outgrowth of CALL. With the…
Code of Federal Regulations, 2010 CFR
2010-04-01
... education emphasizing literacy in language arts, mathematics, natural and physical sciences, history, and... needed to function effectively in a society increasingly dependent on computer and information technology...
ERIC Educational Resources Information Center
Srivastava, Pradyumn; Gray, Shelley
2012-01-01
Purpose: With the global expansion of technology, our reading platform has shifted from traditional text to hypertext, yet little consideration has been given to how this shift might help or hinder students' reading comprehension. The purpose of this study was to compare reading comprehension of computer-based and paper-based texts in adolescents…
A high level language for a high performance computer
NASA Technical Reports Server (NTRS)
Perrott, R. H.
1978-01-01
The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.
Toward a unified account of comprehension and production in language development.
McCauley, Stewart M; Christiansen, Morten H
2013-08-01
Although Pickering & Garrod (P&G) argue convincingly for a unified system for language comprehension and production, they fail to explain how such a system might develop. Using a recent computational model of language acquisition as an example, we sketch a developmental perspective on the integration of comprehension and production. We conclude that only through development can we fully understand the intertwined nature of comprehension and production in adult processing.
1983-10-28
Computing. By seizing an opportunity to leverage recent advances in artificial intelligence, computer science, and microelectronics, the Agency plans... occurred in many separate areas of artificial intelligence, computer science, and microelectronics. Advances in "expert system" technology now... and expert knowledge. • Advances in Artificial Intelligence: mechanization of speech recognition, vision, and natural language understanding.
Conclusiveness of natural languages and recognition of images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojcik, Z.M.
1983-01-01
The conclusiveness is investigated using recognition processes and one-one correspondence between expressions of a natural language and graphs representing events. The graphs, as conceived in psycholinguistics, are obtained as a result of perception processes. It is possible to generate and process the graphs automatically, using computers, and then to convert the resulting graphs into expressions of a natural language. Correctness and conclusiveness of the graphs and sentences are investigated using the fundamental condition for event representation processes. Some consequences of the conclusiveness are discussed, e.g. undecidability of arithmetic, human brain asymmetry, correctness of statistical calculations and operations research. It is suggested that group theory should be imposed on mathematical models of any real system. Proof of the fundamental condition is also presented. 14 references.
Legal Issues and Computer Use by School-Based Audiologists and Speech-Language Pathologists.
ERIC Educational Resources Information Center
Wynne, Michael K.; Hurst, David S.
1995-01-01
This article reviews ethical and legal issues regarding school-based integration and application of technologies, particularly when used by speech-language pathologists and audiologists. Four issues are addressed: (1) software copyright and licensed use; (2) information access and the right to privacy; (3) computer-assisted or…
ERIC Educational Resources Information Center
Ambrose, Regina Maria; Palpanathan, Shanthini
2017-01-01
Computer-assisted language learning (CALL) has evolved through various stages in both technology and the pedagogical use of technology (Warschauer & Healey, 1998). Studies show that the CALL trend has facilitated students in their English language writing with useful tools such as computer-based activities and word processing. Students…
Formal ontology for natural language processing and the integration of biomedical databases.
Simon, Jonathan; Dos Santos, Mariana; Fielding, James; Smith, Barry
2006-01-01
The central hypothesis underlying this communication is that the methodology and conceptual rigor of a philosophically inspired formal ontology can bring significant benefits in the development and maintenance of application ontologies [A. Flett, M. Dos Santos, W. Ceusters, Some Ontology Engineering Procedures and their Supporting Technologies, EKAW2002, 2003]. This hypothesis has been tested in the collaboration between Language and Computing (L&C), a company specializing in software for supporting natural language processing especially in the medical field, and the Institute for Formal Ontology and Medical Information Science (IFOMIS), an academic research institution concerned with the theoretical foundations of ontology. In the course of this collaboration L&C's ontology, LinKBase, which is designed to integrate and support reasoning across a plurality of external databases, has been subjected to a thorough auditing on the basis of the principles underlying IFOMIS's Basic Formal Ontology (BFO) [B. Smith, Basic Formal Ontology, 2002. http://ontology.buffalo.edu/bfo]. The goal is to transform a large terminology-based ontology into one with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase are standardized in the framework of first-order logic. In this paper we describe how this principles-based standardization has led to a greater degree of internal coherence of the LinKBase structure, and how it has facilitated the construction of mappings between external databases using LinKBase as translation hub. We argue that the collaboration here described represents a new phase in the quest to solve the so-called "Tower of Babel" problem of ontology integration [F. Montayne, J. Flanagan, Formal Ontology: The Foundation for Natural Language Processing, 2003. http://www.landcglobal.com/].
Assessing Group Interaction with Social Language Network Analysis
NASA Astrophysics Data System (ADS)
Scholand, Andrew J.; Tausczik, Yla R.; Pennebaker, James W.
In this paper we discuss a new methodology, social language network analysis (SLNA), that combines tools from social language processing and network analysis to assess socially situated working relationships within a group. Specifically, SLNA aims to identify and characterize the nature of working relationships by processing artifacts generated with computer-mediated communication systems, such as instant message texts or emails. Because social language processing is able to identify psychological, social, and emotional processes that individuals are not able to fully mask, social language network analysis can clarify and highlight complex interdependencies between group members, even when these relationships are latent or unrecognized.
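The gist of SLNA can be loosely illustrated with a toy sketch; this is not the published implementation, and the message log, the "social word" list, and the per-edge tallies are invented placeholders for the much richer social-language measures the method actually uses.

    # Toy sketch: build weighted directed edges between group members from chat
    # messages, tallying message counts and a crude social-word count per edge.
    from collections import defaultdict

    SOCIAL_WORDS = {"we", "us", "our", "thanks", "agree"}   # illustrative list, not a real lexicon

    messages = [  # (sender, recipient, text) -- synthetic examples
        ("ana", "ben", "thanks, I agree we should ship it"),
        ("ben", "ana", "our tests pass, merging now"),
        ("ana", "cam", "can you review this?"),
    ]

    edges = defaultdict(lambda: {"messages": 0, "social_words": 0})
    for sender, recipient, text in messages:
        key = (sender, recipient)
        edges[key]["messages"] += 1
        edges[key]["social_words"] += sum(w.strip(",.!?") in SOCIAL_WORDS for w in text.lower().split())

    for (a, b), stats in edges.items():
        print(f"{a} -> {b}: {stats}")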
Observations in the Computer Room: L2 Output and Learner Behaviour
ERIC Educational Resources Information Center
Leahy, Christine
2004-01-01
This article draws on second language theory, particularly output theory as defined by Swain (1995), in order to conceptualise observations made in a computer-assisted language learning setting. It investigates second language output and learner behaviour within an electronic role-play setting, based on a subject-specific problem solving task and…
Microcomputer Based Computer-Assisted Learning System: CASTLE.
ERIC Educational Resources Information Center
Garraway, R. W. T.
The purpose of this study was to investigate the extent to which a sophisticated computer assisted instruction (CAI) system could be implemented on the type of microcomputer system currently found in the schools. A method was devised for comparing CAI languages and was used to rank five common CAI languages. The highest ranked language, NATAL,…
NASA Technical Reports Server (NTRS)
2004-01-01
I/NET, Inc., is making the dream of natural human-computer conversation a practical reality. Through a combination of advanced artificial intelligence research and practical software design, I/NET has taken the complexity out of developing advanced, natural language interfaces. Conversational capabilities like pronoun resolution, anaphora and ellipsis processing, and dialog management that were once available only in the laboratory can now be brought to any application with any speech recognition system using I/NET's conversational engine middleware.
Bilingual parallel programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; Overbeek, R.
1990-01-01
Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
The Sentence Fairy: A Natural-Language Generation System to Support Children's Essay Writing
ERIC Educational Resources Information Center
Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine
2008-01-01
We built an NLP system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary texts produced by pupils…
A language comparison for scientific computing on MIMD architectures
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.
1989-01-01
Choleski's method for solving banded symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN-based parallel programming languages, the Force, PISCES and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer, Flex/32.
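For readers unfamiliar with the numerical kernel being parallelized, the sketch below shows a plain serial Cholesky factorization restricted to a band; it is illustrative only and does not reproduce the Force, PISCES, or Concurrent FORTRAN implementations compared in the paper.

    # Serial banded Cholesky sketch: A = L @ L.T, touching only entries inside the band.
    import numpy as np

    def banded_cholesky(A, bandwidth):
        n = A.shape[0]
        L = np.zeros_like(A, dtype=float)
        for j in range(n):
            lo = max(0, j - bandwidth)
            L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, lo:j], L[j, lo:j]))
            for i in range(j + 1, min(n, j + bandwidth + 1)):
                lo_i = max(0, i - bandwidth)
                L[i, j] = (A[i, j] - np.dot(L[i, lo_i:j], L[j, lo_i:j])) / L[j, j]
        return L

    # Small SPD test matrix with bandwidth 1 (tridiagonal)
    A = np.diag([4.0, 4.0, 4.0]) + np.diag([1.0, 1.0], 1) + np.diag([1.0, 1.0], -1)
    L = banded_cholesky(A, bandwidth=1)
    print(np.allclose(L @ L.T, A))   # True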
An Evaluation Framework and Comparative Analysis of the Widely Used First Programming Languages
Farooq, Muhammad Shoaib; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed; Abid, Adnan
2014-01-01
Computer programming is the core of the computer science curriculum. Several programming languages have been used to teach the first course in computer programming, and such languages are referred to as the first programming language (FPL). The pool of programming languages has been evolving with the development of new languages, and from this pool different languages have been used as FPL at different times. Though the selection of an appropriate FPL is very important, it has been a controversial issue in the presence of many choices. Many efforts have been made toward designing a good FPL; however, there is no adequate way to evaluate and compare the existing languages so as to find the most suitable FPL. In this article, we have proposed a framework to evaluate the existing imperative and object-oriented languages for their suitability as an appropriate FPL. Furthermore, based on the proposed framework we have devised a customizable scoring function to compute a quantitative suitability score for a language, which reflects its conformance to the proposed framework. Lastly, we have also evaluated the conformance of the widely used FPLs to the proposed framework, and have also computed their suitability scores. PMID:24586449
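The idea of a customizable scoring function can be sketched as a simple weighted sum; the criteria, weights, and ratings below are invented for illustration and are not the paper's actual rubric or results.

    # Sketch of a weighted suitability score over evaluation criteria.
    def suitability_score(ratings, weights):
        """ratings and weights are dicts keyed by criterion name."""
        return sum(weights[c] * ratings[c] for c in weights)

    weights = {"readability": 0.3, "simplicity": 0.3, "error_messages": 0.2, "tool_support": 0.2}
    python_ratings = {"readability": 9, "simplicity": 8, "error_messages": 7, "tool_support": 9}
    cpp_ratings = {"readability": 5, "simplicity": 4, "error_messages": 4, "tool_support": 8}

    print("Python:", suitability_score(python_ratings, weights))
    print("C++:", suitability_score(cpp_ratings, weights))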
Machine Learning and Radiology
Wang, Shijun; Summers, Ronald M.
2012-01-01
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077
Kindermans, Pieter-Jan; Verschore, Hannes; Schrauwen, Benjamin
2013-10-01
In recent years, in an attempt to maximize performance, machine learning approaches for event-related potential (ERP) spelling have become more and more complex. In this paper, we have taken a step back as we wanted to improve the performance without building an overly complex model, that cannot be used by the community. Our research resulted in a unified probabilistic model for ERP spelling, which is based on only three assumptions and incorporates language information. On top of that, the probabilistic nature of our classifier yields a natural dynamic stopping strategy. Furthermore, our method uses the same parameters across 25 subjects from three different datasets. We show that our classifier, when enhanced with language models and dynamic stopping, improves the spelling speed and accuracy drastically. Additionally, we would like to point out that as our model is entirely probabilistic, it can easily be used as the foundation for complex systems in future work. All our experiments are executed on publicly available datasets to allow for future comparison with similar techniques.
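The flavor of such a unified probabilistic speller can be conveyed with a toy sketch: a language-model prior over candidate letters is updated with per-round classifier evidence, and spelling stops once the posterior is confident. All numbers are invented, and the published model is considerably more detailed.

    # Toy Bayesian letter selection with a language-model prior and dynamic stopping.
    import numpy as np

    letters = list("abc")
    posterior = np.array([0.5, 0.3, 0.2])            # language-model prior P(letter | history)

    evidence_rounds = [np.array([0.4, 0.35, 0.25]),  # per-round likelihoods P(EEG evidence | letter)
                       np.array([0.6, 0.25, 0.15]),
                       np.array([0.7, 0.2, 0.1])]

    threshold = 0.9                                  # dynamic-stopping confidence
    for t, likelihood in enumerate(evidence_rounds, start=1):
        posterior = posterior * likelihood
        posterior /= posterior.sum()                 # Bayesian update
        print(f"round {t}: posterior = {dict(zip(letters, posterior.round(3)))}")
        if posterior.max() >= threshold:
            break

    print("selected letter:", letters[int(posterior.argmax())])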
A Computational Model of Linguistic Humor in Puns.
Kao, Justine T; Levy, Roger; Goodman, Noah D
2016-07-01
Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we propose two information-theoretic measures, ambiguity and distinctiveness, derived from a simple model of sentence processing. We test these measures on a set of puns and regular sentences and show that they correlate significantly with human judgments of funniness. Moreover, within a set of puns, the distinctiveness measure distinguishes exceptionally funny puns from mediocre ones. Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor at a fine-grained level. We present it as an example of a framework for applying models of language processing to understand higher level linguistic and cognitive phenomena. © 2015 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
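A toy version of an entropy-style ambiguity measure is sketched below; it is a simplification for illustration rather than the paper's exact formulation, and the interpretation probabilities are invented.

    # Ambiguity as the entropy of the distribution over a sentence's interpretations.
    import math

    def ambiguity(interpretation_probs):
        return -sum(p * math.log2(p) for p in interpretation_probs if p > 0)

    # A pun keeps both readings live; a regular sentence strongly favors one.
    print("pun-like sentence:", round(ambiguity([0.55, 0.45]), 3), "bits")
    print("regular sentence: ", round(ambiguity([0.97, 0.03]), 3), "bits")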
A Cultural Diffusion Model for the Rise and Fall of Programming Languages.
Valverde, Sergi; Solé, Ricard V
2015-07-01
Our interaction with complex computing machines is mediated by programming languages (PLs), which constitute one of the major innovations in the evolution of technology. PLs allow flexible, scalable, and fast use of hardware and are largely responsible for shaping the history of information technology since the rise of computers in the 1950s. The rapid growth and impact of computers were followed closely by the development of PLs. As occurs with natural, human languages, PLs have emerged and gone extinct. There has always been a diversity of coexisting PLs that compete somewhat while occupying special niches. Here we show that the statistical patterns of language adoption, rise, and fall can be accounted for by a simple model in which a set of programmers can use several PLs, decide to use existing PLs used by other programmers, or decide not to use them. Our results highlight the influence of strong communities of practice in the diffusion of PL innovations.
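The adoption dynamics can be caricatured with a short simulation in which programmers preferentially adopt or abandon languages in proportion to current usage; the parameters and language names are illustrative, not the authors' model specification.

    # Loose sketch of rich-get-richer language adoption and abandonment.
    import random

    random.seed(1)
    users = {"L1": 50, "L2": 30, "L3": 20}          # initial user counts
    p_abandon = 0.3

    for step in range(1000):
        lang = random.choices(list(users), weights=users.values())[0]  # popularity-weighted pick
        if random.random() < p_abandon:
            users[lang] = max(0, users[lang] - 1)   # one user leaves
        else:
            users[lang] += 1                        # preferential adoption

    print(users)  # adoption tends to concentrate on already-popular languages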
patterns of doctor–patient interaction in online environment.
Zummo, Marianna Lya
2015-01-01
This paper questions the nature of the communicative event that takes place in online contexts between doctors and web-users, showing computer-mediated linguistic norms and discussing the nature of the participants’ roles. Based on an analysis of 1005 posts occurring between doctors and the users of health service websites, I analyse how doctor–patient communication is affected by the medium and how health professionals overcome issues concerning the virtual medical visit. Results suggest that (a) online medical answers offer a different service from that expected by users, as doctors cannot always fulfill patient requests, and (b) net consultations use aspects of traditional doctor–patient exchange and yet present a language and a style that are affected by the computer-mediated environment. Additionally, it seems that this new form leads to a different model of doctor–patient relationship. The findings are intended to provide new insights into web-based discourse in doctor–patient communication and to demonstrate the emergence of a new style in medical communication.
ERIC Educational Resources Information Center
Cardenas-Claros, Monica Stella; Gruba, Paul A.
2013-01-01
This paper proposes a theoretical framework for the conceptualization and design of help options in computer-based second language (L2) listening. Based on four empirical studies, it aims at clarifying both conceptualization and design (CoDe) components. The elements of conceptualization consist of a novel four-part classification of help options:…
Criteria for Evaluating a Game-Based CALL Platform
ERIC Educational Resources Information Center
Ní Chiaráin, Neasa; Ní Chasaide, Ailbhe
2017-01-01
Game-based Computer-Assisted Language Learning (CALL) is an area that currently warrants attention, as task-based, interactive, multimodal games increasingly show promise for language learning. This area is inherently multidisciplinary--theories from second language acquisition, games, and psychology must be explored and relevant concepts from…
Hu, Xiangen; Graesser, Arthur C
2004-05-01
The Human Use Regulatory Affairs Advisor (HURAA) is a Web-based facility that provides help and training on the ethical use of human subjects in research, based on documents and regulations in United States federal agencies. HURAA has a number of standard features of conventional Web facilities and computer-based training, such as hypertext, multimedia, help modules, glossaries, archives, links to other sites, and page-turning didactic instruction. HURAA also has these intelligent features: (1) an animated conversational agent that serves as a navigational guide for the Web facility, (2) lessons with case-based and explanation-based reasoning, (3) document retrieval through natural language queries, and (4) a context-sensitive Frequently Asked Questions segment, called Point & Query. This article describes the functional learning components of HURAA, specifies its computational architecture, and summarizes empirical tests of the facility on learners.
With 26 million citations, PubMed is one of the largest sources of information about the activity of chemicals in biological systems. Because this information is expressed in natural language and not stored as data, using the biomedical literature directly in computational resear...
Cooperative Learning with a Computer in a Native Language Class.
ERIC Educational Resources Information Center
Bennett, Ruth
In a cooperative task, American Indian elementary students produced bilingual natural history dictionaries using a Macintosh computer. Students in grades 3 through 8 attended weekly, multi-graded bilingual classes in Hupa/English or Yurok/English, held at two public school field sites for training elementary teaching-credential candidates. Teams…
Patterns of Computer-Mediated Interaction in Small Writing Groups Using Wikis
ERIC Educational Resources Information Center
Li, Mimi; Zhu, Wei
2013-01-01
Informed by sociocultural theory and guided especially by "collective scaffolding", this study investigated the nature of computer-mediated interaction of three groups of English as a Foreign Language students when they performed collaborative writing tasks using wikis. Nine college students from a Chinese university participated in the…
Modeling Education on the Real World.
ERIC Educational Resources Information Center
Hunter, Beverly
1983-01-01
Discusses educational applications of computer simulation and model building for grades K to 8, with emphasis on the usefulness of the computer simulation language, micro-DYNAMO, for programing and understanding the models which help to explain social and natural phenomena. A new textbook for junior-senior high school students is noted. (EAO)
ERIC Educational Resources Information Center
Kitade, Keiko
2006-01-01
Based on recent studies, computer-mediated communication (CMC) has been considered a tool to aid in language learning on account of its distinctive interactional features. However, most studies have referred to "synchronous" CMC and neglected to investigate how "asynchronous" CMC contributes to language learning. Asynchronous CMC possesses…
A natural command language for C/3/I applications
NASA Astrophysics Data System (ADS)
Mergler, J. P.
1980-03-01
The article discusses the development of a natural command language and a control and analysis console designed to simplify the task of the operator in field of Command, Control, Communications, and Intelligence. The console is based on a DEC LSI-11 microcomputer, supported by 16-K words of memory and a serial interface component. Discussion covers the language, which utilizes English and a natural syntax, and how it is integrated with the hardware. It is concluded that results have demonstrated the effectiveness of this natural command language.
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
2015-03-01
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this manner takes the form of a directed weighted graph, whose nodes are recursively (hierarchically) defined patterns over the elements of the input stream. We evaluated the model in seventeen experiments, grouped into five studies, which examined, respectively, (a) the generative ability of grammar learned from a corpus of natural language, (b) the characteristics of the learned representation, (c) sequence segmentation and chunking, (d) artificial grammar learning, and (e) certain types of structure dependence. The model's performance largely vindicates our design choices, suggesting that progress in modeling language acquisition can be made on a broad front, ranging from issues of generativity to the replication of human experimental findings, by bringing biological and computational considerations, as well as lessons from prior efforts, to bear on the modeling approach. Copyright © 2014 Cognitive Science Society, Inc.
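A drastically simplified sketch of a corpus-derived weighted directed graph is given below: nodes are words, edge weights are bigram counts, and generation is a weighted walk. The actual model learns hierarchical, recursively defined patterns, which this toy version does not attempt.

    # Toy word graph: weighted edges from bigram counts, generation by weighted walk.
    import random
    from collections import defaultdict

    corpus = "the dog chased the cat the cat chased the mouse".split()

    graph = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        graph[a][b] += 1                             # directed edge a -> b with weight = count

    random.seed(0)
    word, generated = "the", ["the"]
    for _ in range(5):
        if not graph[word]:                          # dead end: no outgoing edges
            break
        word = random.choices(list(graph[word]), weights=graph[word].values())[0]
        generated.append(word)
    print(" ".join(generated))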
Natural Language Processing in aid of FlyBase curators
Karamanis, Nikiforos; Seal, Ruth; Lewin, Ian; McQuilton, Peter; Vlachos, Andreas; Gasperin, Caroline; Drysdale, Rachel; Briscoe, Ted
2008-01-01
Background: Despite increasing interest in applying Natural Language Processing (NLP) to biomedical text, whether this technology can facilitate tasks such as database curation remains unclear. Results: PaperBrowser is the first NLP-powered interface that was developed under a user-centered approach to improve the way in which FlyBase curators navigate an article. In this paper, we first discuss how observing curators at work informed the design and evaluation of PaperBrowser. Then, we present how we appraise PaperBrowser's navigational functionalities in a user-based study using a text highlighting task and evaluation criteria of Human-Computer Interaction. Our results show that PaperBrowser reduces the amount of interactions between two highlighting events and therefore improves navigational efficiency by about 58% compared to the navigational mechanism that was previously available to the curators. Moreover, PaperBrowser is shown to provide curators with enhanced navigational utility by over 74% irrespective of the different ways in which they highlight text in the article. Conclusion: We show that state-of-the-art performance in certain NLP tasks such as Named Entity Recognition and Anaphora Resolution can be combined with the navigational functionalities of PaperBrowser to support curation quite successfully. PMID:18410678
ERIC Educational Resources Information Center
Morin, Yves Ch.
Described in this paper is the implementation of Querido's French grammar ("Grammaire I, Description transformationelle d'un sous-ensemble du Francais," 1969) on the computer system for transformational grammar at the University of Michigan (Friedman 1969). The purpose was to demonstrate the ease of transcribing a relatively formal grammar into the…
Becoming Little Scientists: Technologically-Enhanced Project-Based Language Learning
ERIC Educational Resources Information Center
Dooly, Melinda; Sadler, Randall
2016-01-01
This article outlines research into innovative language teaching practices that make optimal use of technology and Computer-Mediated Communication (CMC) for an integrated approach to Project-Based Learning. It is based on data compiled during a 10-week language project that employed videoconferencing and "machinima" (short video clips…
Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy
2012-11-01
Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. In an on-ward evaluation, a substantial majority of the summaries was found by outgoing and incoming nurses to be understandable (90%), and a majority was found to be accurate (70%) and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. It is technically possible to automatically generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that was intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. Copyright © 2012 Elsevier B.V. All rights reserved.
Computational Understanding: Analysis of Sentences and Context
1974-05-01
Computer Science Department, Stanford, California. ...these is the need for programs that can respond in useful ways to information expressed in a natural language. However a computational understanding... buying structure because "Mary" appears where it does. But the time for analysis was rarely over five seconds of computer time, when the Lisp program
Semi-Automated Methods for Refining a Domain-Specific Terminology Base
2011-02-01
only as a resource for written and oral translation, but also for Natural Language Processing (NLP) applications, text retrieval, document indexing, and other knowledge management tasks. The objective of this... The National
Research in Knowledge Representation for Natural Language Understanding
1980-11-01
artificial intelligence, natural language understanding, parsing, syntax, semantics, speaker meaning, knowledge representation, semantic networks... Report No. 4513: Research in Knowledge Representation for Natural Language Understanding, Annual Report, 1 September 1979 to 31... understanding, knowledge representation, and knowledge-based inference. The work that we have been doing falls into three classes, successively motivated by
Gender, "Discourse," and Technology. Center for Equity and Diversity Working Paper 5.
ERIC Educational Resources Information Center
Hanson, Katherine
This paper identifies and discusses the connections between the way individuals frame their world based on the language they use and the impact of language and stereotyping on the perception that computer technology is primarily for certain individuals. The study explores how some of the dimensions of the language of computers and technology,…
The Impact of Computer-Based Instruction on the Development of EFL Learners' Writing Skills
ERIC Educational Resources Information Center
Zaini, A.; Mazdayasna, G.
2015-01-01
The current study investigated the application and effectiveness of computer assisted language learning (CALL) in teaching academic writing to Iranian EFL (English as a Foreign Language) learners by means of Microsoft Word Office. To this end, 44 sophomore intermediate university students majoring in English Language and Literature at an Iranian…
Do neural nets learn statistical laws behind natural language?
Takahashi, Shuntaro; Tanaka-Ishii, Kumiko
2017-01-01
The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.
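The statistical checks described can be approximated in a few lines: estimate the Zipf exponent from a log-log rank-frequency fit and trace vocabulary growth for Heaps' law. The corpus below is a tiny stand-in; the reported analyses used large natural-language and model-generated texts.

    # Rough Zipf-exponent estimate and Heaps-law vocabulary growth for a token list.
    from collections import Counter
    import math

    text = ("the cat sat on the mat the dog sat on the log " * 50).split()  # stand-in corpus

    # Zipf: frequency ~ rank^(-alpha); estimate alpha by a log-log least-squares fit
    freqs = sorted(Counter(text).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    print("estimated Zipf exponent:", round(alpha, 2))

    # Heaps: vocabulary size V(n) as a function of the number of tokens seen n
    seen, growth = set(), []
    for i, tok in enumerate(text, 1):
        seen.add(tok)
        growth.append((i, len(seen)))
    print("vocabulary growth (first/last):", growth[0], growth[-1])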
Foreign Language Tutoring in Oral Conversations Using Spoken Dialog Systems
NASA Astrophysics Data System (ADS)
Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae
Although there have been enormous investments in English education all around the world, not many differences have been made to change the English instruction style. Considering the shortcomings of the current teaching-learning methodology, we have been investigating advanced computer-assisted language learning (CALL) systems. This paper aims at summarizing a set of POSTECH approaches including theories, technologies, systems, and field studies and providing relevant pointers. On top of the state-of-the-art technologies of spoken dialog systems, a variety of adaptations have been applied to overcome some problems caused by numerous errors and variations naturally produced by non-native speakers. Furthermore, a number of methods have been developed for generating educational feedback that helps learners become proficient. Integrating these efforts resulted in intelligent educational robots — Mero and Engkey — and virtual 3D language learning games, Pomy. To verify the effects of our approaches on students' communicative abilities, we have conducted a field study at an elementary school in Korea. The results showed that our CALL approaches can be enjoyable and fruitful activities for students. Although the results of this study bring us a step closer to understanding computer-based education, more studies are needed to consolidate the findings.
Compositional and enumerative designs for medical language representation.
Rassinoux, A. M.; Miller, R. A.; Baud, R. H.; Scherrer, J. R.
1997-01-01
Medical language is in essence highly compositional, allowing complex information to be expressed from more elementary pieces. Embedding the expressive power of medical language into formal systems of representation is recognized in the medical informatics community as a key step towards sharing such information among medical record, decision support, and information retrieval systems. Accordingly, such representation requires managing both the expressiveness of the formalism and its computational tractability, while coping with the level of detail expected by clinical applications. These desiderata can be supported by enumerative as well as compositional approaches, as argued in this paper. These principles have been applied in recasting a frame-based system for general medical findings developed during the 1980s. The new system captures the precise meaning of a subset of over 1500 medical terms for general internal medicine identified from the Quick Medical Reference (QMR) lexicon. In order to evaluate the adequacy of this formal structure in reflecting the deep meaning of the QMR findings, a validation process was implemented. It consists of automatically rebuilding the semantic representation of the QMR findings by analyzing them through the RECIT natural language analyzer, whose semantic components have been adjusted to this frame-based model for the understanding task. PMID:9357700
[Artificial intelligence in psychiatry-an overview].
Meyer-Lindenberg, A
2018-06-18
Artificial intelligence and the underlying methods of machine learning and neural networks (NN) have made dramatic progress in recent years and have allowed computers to reach superhuman performance in domains that used to be thought of as uniquely human. In this overview, the underlying methodological developments that made this possible are briefly delineated, and then the applications to psychiatry in three domains are discussed: precision medicine and biomarkers, natural language processing, and artificial intelligence-based psychotherapeutic interventions. In conclusion, some of the risks of this new technology are mentioned.
Machine-aided indexing for NASA STI
NASA Technical Reports Server (NTRS)
Wilson, John
1987-01-01
One of the major components of the NASA/STI processing system is machine-aided indexing (MAI). MAI is a computer process that generates candidate indexing terms, selected from NASA's thesaurus, from the text of technical reports; the suggested terms are then reviewed by human indexers. This paper summarizes the MAI objectives and discusses the NASA Lexical Dictionary, subject switching, and phrase matching of natural language. The benefits of using MAI are mentioned, and MAI production improvement and the future of MAI are briefly addressed.
NASA Astrophysics Data System (ADS)
Ragan-Kelley, M.; Perez, F.; Granger, B.; Kluyver, T.; Ivanov, P.; Frederic, J.; Bussonnier, M.
2014-12-01
IPython has provided terminal-based tools for interactive computing in Python since 2001. The notebook document format and multi-process architecture introduced in 2011 have expanded the applicable scope of IPython into teaching, presenting, and sharing computational work, in addition to interactive exploration. The new architecture also allows users to work in any language, with implementations in Python, R, Julia, Haskell, and several other languages. The language-agnostic parts of IPython have been renamed to Jupyter, to better capture the notion that a cross-language design can encapsulate commonalities present in computational research regardless of the programming language being used. This architecture offers components like the web-based Notebook interface, which supports rich documents that combine code and computational results with text narratives, mathematics, images, video and any media that a modern browser can display. This interface can be used not only in research, but also for publication and education, as notebooks can be converted to a variety of output formats, including HTML and PDF. Recent developments in the Jupyter project include a multi-user environment for hosting notebooks for a class or research group, a live collaboration notebook via Google Docs, and better support for languages other than Python.
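As a small illustration of the conversion capability mentioned above, a notebook can be exported to HTML by invoking nbconvert from Python; the filename is a placeholder and the snippet assumes a Jupyter installation is available.

    # Convert a notebook to HTML via the nbconvert command-line tool.
    import subprocess

    subprocess.run(
        ["jupyter", "nbconvert", "--to", "html", "analysis.ipynb"],  # placeholder filename
        check=True,
    )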
1988-01-01
"A Generator for Natural Language Interfaces," Computational Linguistics, Vol. 11, Number 4, October-December 1985, pp. 219-242. de Joia, A., and... employ in order to communicate to their intended audience. Production, therefore, encompasses issues of deciding what is pertinent as well as de... rhetorical predicates; design of a system motivated by the desire for domain and language independence, semantic connection of the generation system
A Functional Specification for a Programming Language for Computer Aided Learning Applications.
ERIC Educational Resources Information Center
National Research Council of Canada, Ottawa (Ontario).
In 1972 there were at least six different course authoring languages in use in Canada with little exchange of course materials between Computer Assisted Learning (CAL) centers. In order to improve facilities for producing "transportable" computer based course materials, a working panel undertook the definition of functional requirements of a user…
Rapid Profile: A Second Language Screening Procedure.
ERIC Educational Resources Information Center
Mackey, Alison; And Others
1991-01-01
Rapid Profile, developed by Manfred Pienemann of National Languages Institute of Australia/Language Acquisition Research Centre, is a computer-based procedure for screening speech samples collected from language learners to assess their level of language development as compared to standard patterns in the acquisition of the target language. Rapid…
Restrictions on biological adaptation in language evolution.
Chater, Nick; Reali, Florencia; Christiansen, Morten H
2009-01-27
Language acquisition and processing are governed by genetic constraints. A crucial unresolved question is how far these genetic constraints have coevolved with language, perhaps resulting in a highly specialized and species-specific language "module," and how much language acquisition and processing redeploy preexisting cognitive machinery. In the present work, we explored the circumstances under which genes encoding language-specific properties could have coevolved with language itself. We present a theoretical model, implemented in computer simulations, of key aspects of the interaction of genes and language. Our results show that genes for language could have coevolved only with highly stable aspects of the linguistic environment; a rapidly changing linguistic environment does not provide a stable target for natural selection. Thus, a biological endowment could not coevolve with properties of language that began as learned cultural conventions, because cultural conventions change much more rapidly than genes. We argue that this rules out the possibility that arbitrary properties of language, including abstract syntactic principles governing phrase structure, case marking, and agreement, have been built into a "language module" by natural selection. The genetic basis of human language acquisition and processing did not coevolve with language, but primarily predates the emergence of language. As suggested by Darwin, the fit between language and its underlying mechanisms arose because language has evolved to fit the human brain, rather than the reverse.
NASA Astrophysics Data System (ADS)
Zadeh, Lotfi A.
2001-06-01
Computing, in its usual sense, is centered on manipulation of numbers and symbols. In contrast, computing with words, or CW for short, is a methodology in which the objects of computation are words and propositions drawn from a natural language, e.g., small, large, far, heavy, not very likely, the price of gas is low and declining, Berkeley is near San Francisco, it is very unlikely that there will be a significant increase in the price of oil in the near future, etc. Computing with words is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and any computations. Familiar examples of such tasks are parking a car, driving in heavy traffic, playing golf, riding a bicycle, understanding speech and summarizing a story. Underlying this remarkable capability is the brain's crucial ability to manipulate perceptions: perceptions of distance, size, weight, color, speed, time, direction, force, number, truth, likelihood and other characteristics of physical and mental objects. Manipulation of perceptions plays a key role in human recognition, decision and execution processes. As a methodology, computing with words provides a foundation for a computational theory of perceptions, a theory which may have an important bearing on how humans make, and machines might make, perception-based rational decisions in an environment of imprecision, uncertainty and partial truth. A basic difference between perceptions and measurements is that, in general, measurements are crisp whereas perceptions are fuzzy. One of the fundamental aims of science has been and continues to be that of progressing from perceptions to measurements. Pursuit of this aim has led to brilliant successes. We have sent men to the moon; we can build computers that are capable of performing billions of computations per second; we have constructed telescopes that can explore the far reaches of the universe; and we can date the age of rocks that are millions of years old. But alongside the brilliant successes stand conspicuous underachievements and outright failures. We cannot build robots which can move with the agility of animals or humans; we cannot automate driving in heavy traffic; we cannot translate from one language to another at the level of a human interpreter; we cannot create programs which can summarize non-trivial stories; our ability to model the behavior of economic systems leaves much to be desired; and we cannot build machines that can compete with children in the performance of a wide variety of physical and cognitive tasks. It may be argued that underlying the underachievements and failures is the unavailability of a methodology for reasoning and computing with perceptions rather than measurements. An outline of such a methodology, referred to as a computational theory of perceptions, is presented in this paper. The computational theory of perceptions, or CTP for short, is based on the methodology of computing with words (CW). In CTP, words play the role of labels of perceptions and, more generally, perceptions are expressed as propositions in a natural language. CW-based techniques are employed to translate propositions expressed in a natural language into what is called the Generalized Constraint Language (GCL).
In this language, the meaning of a proposition is expressed as a generalized constraint, X isr R, where X is the constrained variable, R is the constraining relation and isr is a variable copula in which r is a variable whose value defines the way in which R constrains X. Among the basic types of constraints are: possibilistic, veristic, probabilistic, random set, and Pawlak set.
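A toy possibilistic constraint can make the notation concrete: the word "small" is modeled as a fuzzy set whose membership function constrains the possible values of X. The membership shape below is invented purely for illustration.

    # Possibilistic constraint "X is small" via an illustrative membership function.
    def small(x):
        """Membership of x in the fuzzy set 'small' (illustrative piecewise-linear shape)."""
        if x <= 2:
            return 1.0
        if x >= 6:
            return 0.0
        return (6 - x) / 4            # linear fall-off between 2 and 6

    for x in [1, 3, 5, 7]:
        print(f"possibility that X = {x} given 'X is small': {small(x):.2f}")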
ERIC Educational Resources Information Center
Kwon, Oh-Woog; Lee, Kiyoung; Kim, Young-Kil; Lee, Yunkeun
2015-01-01
This paper introduces a Dialog-Based Computer-Assisted second-Language Learning (DB-CALL) system using semantic and grammar correctness evaluations and the results of its experiment. While the system dialogues with English learners about a given topic, it automatically evaluates the grammar and content properness of their English utterances, then…
An Intelligent Computer-Based System for Sign Language Tutoring
ERIC Educational Resources Information Center
Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy
2012-01-01
A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…
Natural language from artificial life.
Kirby, Simon
2002-01-01
This article aims to show that linguistics, in particular the study of the lexico-syntactic aspects of language, provides fertile ground for artificial life modeling. A survey of the models that have been developed over the last decade and a half is presented to demonstrate that ALife techniques have a lot to offer an explanatory theory of language. It is argued that this is because much of the structure of language is determined by the interaction of three complex adaptive systems: learning, culture, and biological evolution. Computational simulation, informed by theoretical linguistics, is an appropriate response to the challenge of explaining real linguistic data in terms of the processes that underpin human language.
ERIC Educational Resources Information Center
Aryadoust, Vahid; Mehran, Parisa; Alizadeh, Mehrasa
2016-01-01
A few computer-assisted language learning (CALL) instruments have been developed in Iran to measure EFL (English as a foreign language) learners' attitude toward CALL. However, these instruments have no solid validity argument and accordingly would be unable to provide a reliable measurement of attitude. The present study aimed to develop a CALL…
Language, Learning, and Identity in Social Networking Sites for Language Learning: The Case of Busuu
ERIC Educational Resources Information Center
Alvarez Valencia, Jose Aldemar
2014-01-01
Recent progress in the discipline of computer applications such as the advent of web-based communication, afforded by the Web 2.0, has paved the way for novel applications in language learning, namely, social networking. Social networking has challenged the area of Computer Mediated Communication (CMC) to expand its research palette in order to…
ERIC Educational Resources Information Center
Strong, Gemma K.; Torgerson, Carole J.; Torgerson, David; Hulme, Charles
2011-01-01
Background: Fast ForWord is a suite of computer-based language intervention programs designed to improve children's reading and oral language skills. The programs are based on the hypothesis that oral language difficulties often arise from a rapid auditory temporal processing deficit that compromises the development of phonological…
Computer Simulation of Reading.
ERIC Educational Resources Information Center
Leton, Donald A.
In recent years, coding and decoding have been claimed to be the processes for converting one language form to another. But there has been little effort to locate these processes in the human learner or to identify the nature of the internal codes. Computer simulation of reading is useful because the similarities in the human reception and…
Roch, Alexandra M; Mehrabi, Saeed; Krishnan, Anand; Schmidt, Heidi E; Kesterson, Joseph; Beesley, Chris; Dexter, Paul R; Palakal, Mathew; Schmidt, C Max
2015-01-01
Introduction: As many as 3% of computed tomography (CT) scans detect pancreatic cysts. Because pancreatic cysts are incidental, ubiquitous and poorly understood, follow-up is often not performed. Pancreatic cysts may have a significant malignant potential and their identification represents a ‘window of opportunity’ for the early detection of pancreatic cancer. The purpose of this study was to implement an automated Natural Language Processing (NLP)-based pancreatic cyst identification system. Method: A multidisciplinary team was assembled. NLP-based identification algorithms were developed based on key words commonly used by physicians to describe pancreatic cysts and programmed for automated search of electronic medical records. A pilot study was conducted prospectively in a single institution. Results: From March to September 2013, 566 233 reports belonging to 50 669 patients were analysed. The mean number of patients reported with a pancreatic cyst was 88/month (range 78–98). The mean sensitivity and specificity were 99.9% and 98.8%, respectively. Conclusion: NLP is an effective tool to automatically identify patients with pancreatic cysts based on electronic medical records (EMR). This highly accurate system can help capture patients ‘at-risk’ of pancreatic cancer in a registry. PMID:25537257
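To make the kind of system described above concrete, here is a minimal sketch (not the study's actual implementation): a few hypothetical keyword patterns applied to report text, followed by the sensitivity/specificity calculation used to evaluate such a system. The patterns and toy reports are invented for illustration.

    import re

    # Hypothetical keyword patterns of the kind a rule-based identifier might use;
    # the actual terms and logic of the study are not given in the abstract.
    CYST_PATTERNS = [r"pancreatic\s+cyst", r"cystic\s+lesion.{0,40}pancreas", r"\bIPMN\b"]

    def flags_pancreatic_cyst(report_text: str) -> bool:
        return any(re.search(p, report_text, flags=re.IGNORECASE) for p in CYST_PATTERNS)

    def sensitivity_specificity(predictions, labels):
        tp = sum(p and l for p, l in zip(predictions, labels))
        tn = sum((not p) and (not l) for p, l in zip(predictions, labels))
        fp = sum(p and (not l) for p, l in zip(predictions, labels))
        fn = sum((not p) and l for p, l in zip(predictions, labels))
        return tp / (tp + fn), tn / (tn + fp)

    reports = ["2.1 cm cystic lesion in the head of the pancreas, likely IPMN.",
               "No acute cardiopulmonary abnormality."]
    labels = [True, False]
    preds = [flags_pancreatic_cyst(r) for r in reports]
    print(sensitivity_specificity(preds, labels))  # (1.0, 1.0) on this toy pair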
NASA Astrophysics Data System (ADS)
Kosmidis, Kosmas; Kalampokis, Alkiviadis; Argyrakis, Panos
2006-10-01
We use the detrended fluctuation analysis (DFA) and the Grassberger-Procaccia analysis (GP) methods in order to study language characteristics. Although we construct our signals using only word lengths or word frequencies, thereby excluding a huge amount of information from the language, the application of GP analysis indicates that linguistic signals may be considered as the manifestation of a complex system of high dimensionality, different from random signals or systems of low dimensionality such as the Earth climate. The DFA method is additionally able to distinguish a natural language signal from a computer code signal. This last result may be useful in the field of cryptography.
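As a rough illustration of the word-length signal and DFA scaling exponent the abstract refers to, here is a minimal order-1 DFA sketch; the window sizes, toy text and repetition factor are arbitrary choices made for this example, not the authors' settings.

    import numpy as np

    def dfa_exponent(signal, window_sizes=(4, 8, 16, 32, 64)):
        # Order-1 (linear detrending) detrended fluctuation analysis.
        x = np.cumsum(np.asarray(signal, dtype=float) - np.mean(signal))  # integrated profile
        fluctuations = []
        for w in window_sizes:
            f2 = []
            for i in range(len(x) // w):
                seg = x[i * w:(i + 1) * w]
                t = np.arange(w)
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                f2.append(np.mean((seg - trend) ** 2))
            fluctuations.append(np.sqrt(np.mean(f2)))
        # The slope of log F(w) versus log w is the DFA scaling exponent.
        return np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)[0]

    text = ("we use word lengths as a coarse one dimensional signal "
            "that discards almost everything else about the language")
    word_lengths = [len(w) for w in text.split()] * 50  # repeat a toy text to get a long series
    print(round(dfa_exponent(word_lengths), 2))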
Grammatical Analysis as a Distributed Neurobiological Function
Bozic, Mirjana; Fonteneau, Elisabeth; Su, Li; Marslen-Wilson, William D
2015-01-01
Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences—inflectionally complex words and minimal phrases—and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in L middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage. PMID:25421880
Automatic reconstruction of a bacterial regulatory network using Natural Language Processing
Rodríguez-Penagos, Carlos; Salgado, Heladia; Martínez-Flores, Irma; Collado-Vides, Julio
2007-01-01
Background: Manual curation of biological databases, an expensive and labor-intensive process, is essential for high quality integrated data. In this paper we report the implementation of a state-of-the-art Natural Language Processing system that creates computer-readable networks of regulatory interactions directly from different collections of abstracts and full-text papers. Our major aim is to understand how automatic annotation using Text-Mining techniques can complement manual curation of biological databases. We implemented a rule-based system to generate networks from different sets of documents dealing with regulation in Escherichia coli K-12. Results: Performance evaluation is based on the most comprehensive transcriptional regulation database for any organism, the manually-curated RegulonDB, 45% of which we were able to recreate automatically. From our automated analysis we were also able to find some new interactions from papers not already curated, or that were missed in the manual filtering and review of the literature. We also put forward a novel Regulatory Interaction Markup Language better suited than SBML for simultaneously representing data of interest for biologists and text miners. Conclusion: Manual curation of the output of automatic processing of text is a good way to complement a more detailed review of the literature, either for validating the results of what has been already annotated, or for discovering facts and information that might have been overlooked at the triage or curation stages. PMID:17683642
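The entry does not spell out its extraction rules, so the fragment below is only a toy illustration of rule-based extraction of regulatory interactions from sentences; the pattern, the verbs and the example sentences are invented for this sketch and are far simpler than the system described.

    import re

    # One toy rule; a real grammar would handle passives, anaphora and name normalization.
    PATTERN = re.compile(r"(\w+)\s+(activates|represses)\s+(?:the\s+)?(\w+)", re.IGNORECASE)

    def extract_interactions(sentences):
        network = []
        for s in sentences:
            for regulator, verb, target in PATTERN.findall(s):
                sign = "+" if verb.lower() == "activates" else "-"
                network.append((regulator, sign, target))
        return network

    abstract_sentences = [
        "CRP activates the araBAD operon in the absence of glucose.",
        "LexA represses recA following DNA damage.",
    ]
    print(extract_interactions(abstract_sentences))
    # [('CRP', '+', 'araBAD'), ('LexA', '-', 'recA')]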
Programming the social computer.
Robertson, David; Giunchiglia, Fausto
2013-03-28
The aim of 'programming the global computer' was identified by Milner and others as one of the grand challenges of computing research. At the time this phrase was coined, it was natural to assume that this objective might be achieved primarily through extending programming and specification languages. The Internet, however, has brought with it a different style of computation that (although harnessing variants of traditional programming languages) operates in a style different to those with which we are familiar. The 'computer' on which we are running these computations is a social computer in the sense that many of the elementary functions of the computations it runs are performed by humans, and successful execution of a program often depends on properties of the human society over which the program operates. These sorts of programs are not programmed in a traditional way and may have to be understood in a way that is different from the traditional view of programming. This shift in perspective raises new challenges for the science of the Web and for computing in general.
A Natural Language Interface to Databases
NASA Technical Reports Server (NTRS)
Ford, D. R.
1990-01-01
The development of a Natural Language Interface (NLI) is presented which is semantic-based and uses Conceptual Dependency representation. The system was developed using Lisp and currently runs on a Symbolics Lisp machine.
A Text Knowledge Base from the AI Handbook.
ERIC Educational Resources Information Center
Simmons, Robert F.
1987-01-01
Describes a prototype natural language text knowledge system (TKS) that was used to organize 50 pages of a handbook on artificial intelligence as an inferential knowledge base with natural language query and command capabilities. Representation of text, database navigation, query systems, discourse structuring, and future research needs are…
NLPIR: A Theoretical Framework for Applying Natural Language Processing to Information Retrieval.
ERIC Educational Resources Information Center
Zhou, Lina; Zhang, Dongsong
2003-01-01
Proposes a theoretical framework called NLPIR that integrates natural language processing (NLP) into information retrieval (IR) based on the assumption that there exists representation distance between queries and documents. Discusses problems in traditional keyword-based IR, including relevance, and describes some existing NLP techniques.…
Student and Teacher Success: The Impact of Computers in Primary Grades.
ERIC Educational Resources Information Center
Drexler, Nancy Gadzuk; And Others
This paper discusses the impact of computers on student learning as reported by teachers participating in a study of a computer-based language arts instructional program for the early elementary grades--the Apple Learning Series: Early Language (ALS-EL). Although they found the program difficult to evaluate, some teachers stated that the ALS-EL…
Teaching Arabic with Technology at BYU: Learning from the Past to Bridge to the Future
ERIC Educational Resources Information Center
Bush, Michael D.; Browne, Jeremy M.
2004-01-01
Reporting in 1971 on research related to computer-based methods for teaching the Arabic writing system, Bunderson and Abboud cited the potential that computers have for language learning, a largely unfulfilled potential even in 2004. After a review of the relevant historical background for the justification of computer-aided language learning…
ERIC Educational Resources Information Center
Velez-Rubio, Miguel
2013-01-01
Teaching computer programming to freshmen students in Computer Sciences and other Information Technology areas has been identified as a complex activity. Different approaches have been studied looking for the best one that could help to improve this teaching process. A proposed approach was implemented which is based in the language immersion…
ERIC Educational Resources Information Center
Quann, Steve; Satin, Diana
This textbook leads high-beginning and intermediate English-as-a-Second-Language (ESL) students through cooperative computer-based activities that combine language learning with training in basic computer skills and word processing. Each unit concentrates on a basic concept of word processing while also focusing on a grammar topic. Skills are…
ERIC Educational Resources Information Center
Simpson, Andrea; El-Refaie, Amr; Stephenson, Caitlin; Chen, Yi-Ping Phoebe; Deng, Dennis; Erickson, Shane; Tay, David; Morris, Meg E.; Doube, Wendy; Caelli, Terry
2015-01-01
The purpose of this systematic review was to examine whether online or computer-based technologies were effective in assisting the development of speech and language skills in children with hearing loss. Relevant studies of children with hearing loss were analysed with reference to (1) therapy outcomes, (2) factors affecting outcomes, and (3)…
Evaluation of Computer Based Foreign Language Learning Software by Teachers and Students
ERIC Educational Resources Information Center
Baz, Fatih Çagatay; Tekdal, Mehmet
2014-01-01
The aim of this study is to evaluate Computer Based Foreign Language Learning software called Dynamic Education (DYNED) by teachers and students. The study is conducted with randomly chosen ten primary schools with the participants of 522 7th grade students and 7 English teachers. Three points Likert scale for teachers and five points Likert scale…
ERIC Educational Resources Information Center
Marty, Fernand
Three computer-based systems for phonetic/graphemic transcription of language are described, compared, and contrasted. The text is entirely in French, with examples given from the French language. The three approaches to transcription are: (1) text entered in standard typography and exiting in phonetic transcription with markers for rhythmic…
NASA Astrophysics Data System (ADS)
Guimaraes, Cayley; Antunes, Diego R.; de F. Guilhermino Trindade, Daniela; da Silva, Rafaella A. Lopes; Garcia, Laura Sanchez
This work presents a computational model (XML) of the Brazilian Sign Language (Libras), based on its phonology. The model was used to create a sample of representative signs to aid the recording of a base of videos whose aim is to support the development of tools to support genuine social inclusion of the deaf.
Data Discovery with IBM Watson
NASA Astrophysics Data System (ADS)
Fessler, J.
2016-12-01
IBM Watson is a cognitive computing system that uses machine learning, statistical analysis, and natural language processing to find and understand the clues in questions posed to it. Watson was made famous when it bested two champions on TV's Jeopardy! show. Since then, Watson has evolved into a platform of cognitive services that can be trained on very granular fields of study. Watson is being used to support a number of subject domains, such as cancer research, public safety, engineering, and the intelligence community. IBM will be providing a presentation and demonstration on the Watson technology and will discuss its capabilities including Natural Language Processing, text analytics and enterprise search, as well as cognitive computing with deep Q&A. The team will also be giving examples of how IBM Watson technology is being used to support real-world problems across a number of public sector agencies.
ERIC Educational Resources Information Center
Renaud, Claire
2010-01-01
Current second language (L2) research focuses on the level of features--that is, the core elements of languages in the Minimalist Program framework. These features, involved in computations, are further divided into two types: those that indicate to which category a word belongs (i.e., interpretable features) versus those that constrain the type…
Future perspectives - proposal for Oxford Physiome Project.
Oku, Yoshitaka
2010-01-01
The Physiome Project is an effort to understand living creatures using an "analysis by synthesis" strategy, i.e., by reproducing their behaviors. In order to achieve its goal, sharing developed models between different computer languages and application programs to incorporate into integrated models is critical. To date, several XML-based markup languages have been developed for this purpose. However, source code written in XML-based languages is very difficult to read and edit using text editors. An alternative way is to use an object-oriented meta-language, which can be translated to different computer languages and transplanted to different application programs. Object-oriented languages are suitable for describing structural organization by hierarchical classes and taking advantage of statistical properties to reduce the number of parameters while keeping the complexity of behaviors. Using object-oriented languages to describe each element and posting it to a public domain should be the next step to build up integrated models of the respiratory control system.
Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain.
Arbib, Michael A
2016-03-01
We make the case for developing a Computational Comparative Neuroprimatology to inform the analysis of the function and evolution of the human brain. First, we update the mirror system hypothesis on the evolution of the language-ready brain by (i) modeling action and action recognition and opportunistic scheduling of macaque brains to hypothesize the nature of the last common ancestor of macaque and human (LCA-m); and then we (ii) introduce dynamic brain modeling to show how apes could acquire gesture through ontogenetic ritualization, hypothesizing the nature of evolution from LCA-m to the last common ancestor of chimpanzee and human (LCA-c). We then (iii) hypothesize the role of imitation, pantomime, protosign and protospeech in biological and cultural evolution from LCA-c to Homo sapiens with a language-ready brain. Second, we suggest how cultural evolution in Homo sapiens led from protolanguages to full languages with grammar and compositional semantics. Third, we assess the similarities and differences between the dorsal and ventral streams in audition and vision as the basis for presenting and comparing two models of language processing in the human brain: A model of (i) the auditory dorsal and ventral streams in sentence comprehension; and (ii) the visual dorsal and ventral streams in defining "what language is about" in both production and perception of utterances related to visual scenes provide the basis for (iii) a first step towards a synthesis and a look at challenges for further research. Copyright © 2015 Elsevier B.V. All rights reserved.
Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain
NASA Astrophysics Data System (ADS)
Arbib, Michael A.
2016-03-01
We make the case for developing a Computational Comparative Neuroprimatology to inform the analysis of the function and evolution of the human brain. First, we update the mirror system hypothesis on the evolution of the language-ready brain by (i) modeling action and action recognition and opportunistic scheduling of macaque brains to hypothesize the nature of the last common ancestor of macaque and human (LCA-m); and then we (ii) introduce dynamic brain modeling to show how apes could acquire gesture through ontogenetic ritualization, hypothesizing the nature of evolution from LCA-m to the last common ancestor of chimpanzee and human (LCA-c). We then (iii) hypothesize the role of imitation, pantomime, protosign and protospeech in biological and cultural evolution from LCA-c to Homo sapiens with a language-ready brain. Second, we suggest how cultural evolution in Homo sapiens led from protolanguages to full languages with grammar and compositional semantics. Third, we assess the similarities and differences between the dorsal and ventral streams in audition and vision as the basis for presenting and comparing two models of language processing in the human brain: A model of (i) the auditory dorsal and ventral streams in sentence comprehension; and (ii) the visual dorsal and ventral streams in defining "what language is about" in both production and perception of utterances related to visual scenes provide the basis for (iii) a first step towards a synthesis and a look at challenges for further research.
Positionalism of Relations and Its Consequences for Fact-Oriented Modelling
NASA Astrophysics Data System (ADS)
Keet, C. Maria
Natural language-based conceptual modelling as well as the use of diagrams have been essential components of fact-oriented modelling from its inception. However, transforming natural language to its corresponding object-role modelling diagram, and vice versa, is not trivial. This is due to the more fundamental problem of the different underlying ontological commitments concerning positionalism of the fact types. The natural language-based approach adheres to the standard view whereas the diagram-based approach has a positionalist commitment, which is, from an ontological perspective, incompatible with the former. This hinders seamless transition between the two approaches and affects interoperability with other conceptual modelling languages. One can adopt either the limited standard view or the positionalist commitment with fact types that may not be easily verbalisable but which facilitates data integration and reusability of conceptual models with ontological foundations.
ERIC Educational Resources Information Center
Eslami, Zohreh R.; Kung, Wan-Tsai
2016-01-01
The purpose of this study was to explore the occurrence of incidental focus-on-form and its effect on subsequent second language (L2) production of learners of different dyads in an online task-based language learning context. The participants included Taiwanese learners of English as a foreign language at different proficiency levels, and native…
Robson, Barry
2016-12-01
The Q-UEL language of XML-like tags and the associated software applications are providing a valuable toolkit for Evidence Based Medicine (EBM). In this paper the already existing applications, data bases, and tags are brought together with new ones. The particular Q-UEL embodiment used here is the BioIngine. The main challenge is one of bringing together the methods of symbolic reasoning and calculative probabilistic inference that underlie EBM and medical decision making. Some space is taken to review this background. The unification is greatly facilitated by Q-UEL's roots in the notation and algebra of Dirac, and by extending Q-UEL into the Wolfram programming environment. Further, the overall problem of integration is also a relatively simple one because of the nature of Q-UEL as a language for interoperability in healthcare and biomedicine, while the notion of workflow is facilitated because of the EBM best practice known as PICO. What remains difficult is achieving a high degree of overall automation because of a well-known difficulty in capturing human expertise in computers: the Feigenbaum bottleneck. Copyright © 2016 Elsevier Ltd. All rights reserved.
SWAN: An expert system with natural language interface for tactical air capability assessment
NASA Technical Reports Server (NTRS)
Simmons, Robert M.
1987-01-01
SWAN is an expert system and natural language interface for assessing the war fighting capability of Air Force units in Europe. The expert system is an object oriented knowledge based simulation with an alternate worlds facility for performing what-if excursions. Responses from the system take the form of generated text, tables, or graphs. The natural language interface is an expert system in its own right, with a knowledge base and rules which understand how to access external databases, models, or expert systems. The distinguishing feature of the Air Force expert system is its use of meta-knowledge to generate explanations in the frame and procedure based environment.
NASA Technical Reports Server (NTRS)
Owre, Sam; Shankar, Natarajan
1999-01-01
A specification language is a medium for expressing what is computed rather than how it is computed. Specification languages share some features with programming languages but are also different in several important ways. For our purpose, a specification language is a logic within which the behavior of computational systems can be formalized. Although a specification can be used to simulate the behavior of such systems, we mainly use specifications to state and prove system properties with mechanical assistance. We present the formal semantics of the specification language of SRI's Prototype Verification System (PVS). This specification language is based on the simply typed lambda calculus. The novelty in PVS is that it contains very expressive language features whose static analysis (e.g., typechecking) requires the assistance of a theorem prover. The formal semantics illuminates several of the design considerations underlying PVS, the interaction between theorem proving and typechecking.
NLP-based Identification of Pneumonia Cases from Free-Text Radiological Reports
Elkin, Peter L.; Froehling, David; Wahner-Roedler, Dietlind; Trusko, Brett; Welsh, Gail; Ma, Haobo; Asatryan, Armen X.; Tokars, Jerome I.; Rosenbloom, S. Trent; Brown, Steven H.
2008-01-01
Radiological reports are a rich source of clinical data which can be mined to assist with biosurveillance of emerging infectious diseases. In addition to biosurveillance, radiological reports are an important source of clinical data for health service research. Pneumonias and other radiological findings on chest X-ray or chest computed tomography (CT) are one type of finding relevant to both biosurveillance and health services research. In this study we examined the ability of a Natural Language Processing system to accurately identify pneumonias and other lesions from within free-text radiological reports. The system encoded the reports in the SNOMED CT Ontology and then a set of SNOMED CT based rules were created in our Health Archetype Language aimed at the identification of these radiological findings and diagnoses. The encoded rule was executed against the SNOMED CT encodings of the radiological reports. The accuracy of the reports was compared with a Clinician review of the Radiological Reports. The accuracy of the system in the identification of pneumonias was high with a Sensitivity (recall) of 100%, a specificity of 98%, and a positive predictive value (precision) of 97%. We conclude that SNOMED CT based computable rules are accurate enough for the automated biosurveillance of pneumonias from radiological reports. PMID:18998791
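Since the entry describes executing SNOMED CT based rules against concept-encoded reports, the fragment below is a minimal sketch of that matching step only; the concept codes, the rule and the toy reports are shown for illustration and are not the rules used in the study.

    # Each report is reduced to the set of concept codes extracted from its text.
    PNEUMONIA = "233604007"   # SNOMED CT concept for pneumonia (shown for illustration)
    INFILTRATE = "409609008"  # illustrative placeholder for a radiologic infiltrate finding

    encoded_reports = {
        "rpt-001": {"233604007", "39607008"},   # pneumonia, lung structure
        "rpt-002": {"301230006"},               # clear lung field (illustrative)
    }

    def rule_pneumonia(concepts: set) -> bool:
        # Fire if the report asserts pneumonia directly or a finding taken to imply it.
        return PNEUMONIA in concepts or INFILTRATE in concepts

    flagged = [rid for rid, concepts in encoded_reports.items() if rule_pneumonia(concepts)]
    print(flagged)  # ['rpt-001']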
A State Cyber Hub Operations Framework
2016-06-01
to communicate and sense or interact with their internal states or the external environment. Machine Learning: A type of artificial intelligence that... artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages. Patching: A piece... formalizing a proof of concept for cyber initiatives and developed frameworks for operationalizing the data and intelligence produced across state
ERIC Educational Resources Information Center
Wiggins, Joseph B.; Grafsgaard, Joseph F.; Boyer, Kristy Elizabeth; Wiebe, Eric N.; Lester, James C.
2017-01-01
In recent years, significant advances have been made in intelligent tutoring systems, and these advances hold great promise for adaptively supporting computer science (CS) learning. In particular, tutorial dialogue systems that engage students in natural language dialogue can create rich, adaptive interactions. A promising approach to increasing…
Teaching where We Are: Place-Based Language Arts
ERIC Educational Resources Information Center
Lundahl, Merrilyne
2011-01-01
This article discusses building ecoliteracy through place-based education (PBE) within English language arts: some ideas of what PBE is, why it's important, and examples of how it might be applied. The author contends that observing nature and creating personal metaphors from the natural world can help students develop keener writing skills and…
From emblems to diagrams: Kepler's new pictorial language of scientific representation.
Chen-Morris, Raz
2009-01-01
Kepler's treatise on optics of 1604 furnished, along with technical solutions to problems in medieval perspective, a mathematically-based visual language for the observation of nature. This language, based on Kepler's theory of retinal pictures, ascribed a new role to geometrical diagrams. This paper examines Kepler's pictorial language against the backdrop of alchemical emblems that flourished in and around the court of Rudolf II in Prague. It highlights the cultural context in which Kepler's optics was immersed, and the way in which Kepler attempted to demarcate his new science from other modes of the investigation of nature.
ERIC Educational Resources Information Center
Huffstetter, Mary; King, James R.; Onwuegbuzie, Anthony J.; Schneider, Jenifer J.; Powell-Smith, Kelly A.
2010-01-01
This study examined the effects of a computer-based early reading program (Headsprout Early Reading) on the oral language and early reading skills of at-risk preschool children. In a pretest-posttest control group design, 62 children were randomly assigned to receive supplemental instruction with Headsprout Early Reading (experimental group) or…
ERIC Educational Resources Information Center
Van Laere, Evelien; Rosiers, Kirsten; Van Avermaet, Piet; Slembrouck, Stef; van Braak, Johan
2017-01-01
Computer-based learning environments (CBLEs) have the potential to integrate the linguistic diversity present in classrooms as a resourceful tool in pupils' learning process. Particularly for pupils who speak a language at home other than the language which is used at school, more understanding is needed on how CBLEs offering multilingual content…
ERIC Educational Resources Information Center
Lu, Zhihong; Wang, Yanfei
2014-01-01
The effective design of test items within a computer-based language test (CBLT) for developing English as a foreign language (EFL) learners' listening and speaking skills has become an increasingly challenging task for both test users and test designers compared with that of pencil-and-paper tests in the past. It needs to fit integrated oral…
Machine learning and radiology.
Wang, Shijun; Summers, Ronald M
2012-07-01
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. Copyright © 2012. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Pederson, Kathleen Marshall
The status of research on computer-assisted language learning (CALL) is explored beginning with a historical perspective of research on the language laboratory, followed by analyses of applied research on CALL. A theoretical base is provided to illustrate the need for more basic research on CALL that considers computer capabilities, learner…
ERIC Educational Resources Information Center
Zhang, Yujie; Terai, Asuka; Nakagawa, Masanori
2013-01-01
Inductive reasoning under risk conditions is an important thinking process not only for sciences but also in our daily life. From this viewpoint, it is very useful for language learning to construct computational models of inductive reasoning which realize the CAE for foreign languages. This study proposes the comparison of inductive reasoning…
Clips as a knowledge based language
NASA Technical Reports Server (NTRS)
Harrington, James B.
1987-01-01
CLIPS is a language for writing expert systems applications on a personal or small computer. Here, the CLIPS programming language is described and compared to three other artificial intelligence (AI) languages (LISP, Prolog, and OPS5) with regard to the processing they provide for the implementation of a knowledge based system (KBS). A discussion is given on how CLIPS would be used in a control system.
ERIC Educational Resources Information Center
Jarman, Jay
2011-01-01
This dissertation focuses on developing and evaluating hybrid approaches for analyzing free-form text in the medical domain. This research draws on natural language processing (NLP) techniques that are used to parse and extract concepts based on a controlled vocabulary. Once important concepts are extracted, additional machine learning algorithms,…
ERIC Educational Resources Information Center
LeBlanc, Linda A.; Geiger, Kaneen B.; Sautter, Rachael A.; Sidener, Tina M.
2007-01-01
The Natural Language Paradigm (NLP) has proven effective in increasing spontaneous verbalizations for children with autism. This study investigated the use of NLP with older adults with cognitive impairments served at a leisure-based adult day program for seniors. Three individuals with limited spontaneous use of functional language participated…
ERIC Educational Resources Information Center
Garrett-Rucks, Paula
2013-01-01
Fostering and assessing language learners' cultural understanding is a daunting task, particularly at the early stages of language learning with target language instruction. The purpose of this study was to explore the development of beginning French language learners' intercultural understanding in a computer-mediated environment where students…
High level language-based robotic control system
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Inventor); Kruetz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)
1994-01-01
This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.
High level language-based robotic control system
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)
1996-01-01
This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.
ERIC Educational Resources Information Center
Depradine, Colin; Gay, Glenda
2004-01-01
With the strong link between programming and the underlying technology, the incorporation of computer technology into the teaching of a programming language course should be a natural progression. However, the abstract nature of programming can make such integration a difficult prospect to achieve. As a result, the main development tool, the…
Never-Ending Learning for Deep Understanding of Natural Language
2017-10-01
This research has explored the thesis that very... thesis we have built on our earlier research on the Never Ending Language Learning (NELL) computer system, which has been running non-stop since... thesis that very significant amounts of background knowledge can lead to very substantial improvements in the accuracy of deep text analysis and
Designing a VOIP Based Language Test
ERIC Educational Resources Information Center
Garcia Laborda, Jesus; Magal Royo, Teresa; Otero de Juan, Nuria; Gimenez Lopez, Jose L.
2015-01-01
Assessing speaking is one of the most difficult tasks in computer based language testing. Many countries all over the world face the need to implement standardized language tests where speaking tasks are commonly included. However, a number of problems make them rather impractical such as the costs, the personnel involved, the length of time for…
NASA Astrophysics Data System (ADS)
Meurant, Robert C.
Second Language (L2) Digital Literacy is of emerging importance within English as a Foreign Language (EFL) in Korea, and will evolve to become regarded as the most critical component of overall L2 English Literacy. Computer-based Internet-hosted Learning Management Systems (LMS), such as the popular open-source Moodle, are rapidly being adopted worldwide for distance education, and are also being applied to blended (hybrid) education. In EFL Education, they have a special potential: by setting the LMS to force English to be used exclusively throughout a course website, the meta-language can be made the target L2 language. Of necessity, students develop the ability to use English to navigate the Internet, access and contribute to online resources, and engage in computer-mediated communication. Through such pragmatic engagement with English, students significantly develop their L2 Digital Literacy.
The State-of-the-Art in Natural Language Understanding.
1981-01-28
driven text analysis. If we know a story is about a restaurant, we expect that we may encounter a waitress, menu, table, a bill, food, and other... Front ends for Data Bases: During the 70's a number of natural language data base front ends appeared: LUNAR [Woods et al 1972] has already been briefly... handling of novel language, especially metaphor; ... understanding systems: the handling of
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
2015-05-01
In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, with the fMRI method, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Cognitive Computing Approach for Classification of Complaints in the Insurance Industry
NASA Astrophysics Data System (ADS)
Forster, J.; Entrup, B.
2017-10-01
In this paper we present and evaluate a cognitive computing approach for the classification of dissatisfaction and of four complaint-specific complaint classes in correspondence documents between insurance clients and an insurance company. A cognitive computing approach includes the combination of classical natural language processing methods, machine learning algorithms and the evaluation of hypotheses. The approach combines a MaxEnt machine learning algorithm with language modelling, tf-idf and sentiment analytics to create a multi-label text classification model. The resulting model is trained and tested with a set of 2500 original insurance communication documents written in German, which have been manually annotated by the partnering insurance company. With an F1-Score of 0.9, a reliable text classification component has been implemented and evaluated. A final outlook towards a cognitive computing insurance assistant is given in the end.
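As a rough sketch of the modelling pipeline the abstract outlines (MaxEnt plus tf-idf for multi-label classification), here is a minimal scikit-learn example; the toy documents, labels and the one-vs-rest formulation are assumptions made for illustration, and the language-modelling and sentiment features of the actual system are omitted.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    # Toy complaint snippets (the study uses 2500 manually annotated German documents).
    docs = [
        "I am very unhappy about the delayed claim payment",
        "Thank you for the quick and friendly service",
        "Nobody answers the phone and my premium increased without notice",
        "The contract terms were explained clearly",
    ]
    labels = [["dissatisfaction", "claims"], [], ["dissatisfaction", "service"], []]

    mlb = MultiLabelBinarizer()
    y = mlb.fit_transform(labels)

    # MaxEnt corresponds to (multinomial) logistic regression; one-vs-rest yields multi-label output.
    model = make_pipeline(TfidfVectorizer(),
                          OneVsRestClassifier(LogisticRegression(max_iter=1000)))
    model.fit(docs, y)

    pred = model.predict(["my claim is still not paid and I am very unhappy"])
    print(mlb.inverse_transform(pred))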
F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming
NASA Technical Reports Server (NTRS)
DiNucci, David C.; Saini, Subhash (Technical Monitor)
1998-01-01
Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).
Reading Strategies: Issues in the Computerization of Machiavelli's "Il demonio che prese moglie".
ERIC Educational Resources Information Center
Morgan, Leslie Zarker
1994-01-01
The ideal computer-based foreign language reading program must include cognitive background, a learning taxonomy, sound computer design, and knowledge of what is needed for the specific language. Machiavelli's "Il demonio che prese moglie" is chosen for study due to its historical interest. (63 references) (CK)
Liu, Yuanchao; Liu, Ming; Wang, Xin
2015-01-01
The objective of text clustering is to divide document collections into clusters based on the similarity between documents. In this paper, an extension-based feature modeling approach towards semantically sensitive text clustering is proposed along with the corresponding feature space construction and similarity computation method. By combining the similarity in traditional feature space and that in extension space, the adverse effects of the complexity and diversity of natural language can be addressed and clustering semantic sensitivity can be improved correspondingly. The generated clusters can be organized using different granularities. The experimental evaluations on well-known clustering algorithms and datasets have verified the effectiveness of our approach.
Liu, Yuanchao; Liu, Ming; Wang, Xin
2015-01-01
The objective of text clustering is to divide document collections into clusters based on the similarity between documents. In this paper, an extension-based feature modeling approach towards semantically sensitive text clustering is proposed along with the corresponding feature space construction and similarity computation method. By combining the similarity in traditional feature space and that in extension space, the adverse effects of the complexity and diversity of natural language can be addressed and clustering semantic sensitivity can be improved correspondingly. The generated clusters can be organized using different granularities. The experimental evaluations on well-known clustering algorithms and datasets have verified the effectiveness of our approach. PMID:25794172
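As the two entries above describe combining similarity in the traditional term space with similarity in an extension space, the following fragment sketches only that combination step; the extension lexicon, the weighting parameter alpha and the toy documents are assumptions made for illustration, not the paper's actual construction of the extension space.

    import math
    from collections import Counter

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Hypothetical extension lexicon mapping a term to related terms.
    EXTENSIONS = {"car": {"vehicle", "automobile"}, "physician": {"doctor"}}

    def extend(doc_terms):
        extended = Counter(doc_terms)
        for t in list(doc_terms):
            for e in EXTENSIONS.get(t, ()):
                extended[e] += 1
        return extended

    def combined_similarity(d1, d2, alpha=0.6):
        base = cosine(Counter(d1), Counter(d2))    # similarity in the traditional feature space
        ext = cosine(extend(d1), extend(d2))       # similarity in the extension space
        return alpha * base + (1 - alpha) * ext    # weighted blend of the two

    print(combined_similarity(["car", "repair"], ["automobile", "repair"]))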
GATECloud.net: a platform for large-scale, open-source text processing on the cloud.
Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina
2013-01-28
Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.
Machine-aided indexing at NASA
NASA Technical Reports Server (NTRS)
Silvester, June P.; Genuardi, Michael T.; Klingbiel, Paul H.
1994-01-01
This report describes the NASA Lexical Dictionary (NLD), a machine-aided indexing system used online at the National Aeronautics and Space Administration's Center for AeroSpace Information (CASI). This system automatically suggests a set of candidate terms from NASA's controlled vocabulary for any designated natural language text input. The system is comprised of a text processor that is based on the computational, nonsyntactic analysis of input text and an extensive knowledge base that serves to recognize and translate text-extracted concepts. The functions of the various NLD system components are described in detail, and production and quality benefits resulting from the implementation of machine-aided indexing at CASI are discussed.
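Because the abstract describes the NLD's core function, suggesting candidate controlled-vocabulary terms for free text, the following fragment is a deliberately tiny sketch of that step; the phrase-to-term table and the example abstract are invented for illustration and bear no relation to the actual NLD knowledge base.

    # Toy knowledge base mapping natural language phrases to controlled-vocabulary terms.
    KNOWLEDGE_BASE = {
        "wind tunnel": "WIND TUNNELS",
        "orbital debris": "SPACE DEBRIS",
        "neural network": "NEURAL NETS",
        "fuel cell": "FUEL CELLS",
    }

    def suggest_index_terms(text: str):
        text = text.lower()
        return sorted({term for phrase, term in KNOWLEDGE_BASE.items() if phrase in text})

    abstract = ("A neural network controller was validated in the wind tunnel "
                "before flight testing.")
    print(suggest_index_terms(abstract))  # ['NEURAL NETS', 'WIND TUNNELS']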
Yun, Jian; Shang, Song-Chao; Wei, Xiao-Dan; Liu, Shuang; Li, Zhi-Jie
2016-01-01
Language is characterized by both ecological properties and social properties, and competition is the basic form of language evolution. The rise and decline of one language is a result of competition between languages. Moreover, this rise and decline directly influences the diversity of human culture. Mathematics and computer modeling for language competition has been a popular topic in the fields of linguistics, mathematics, computer science, ecology, and other disciplines. Currently, there are several problems in the research on language competition modeling. First, comprehensive mathematical analysis is absent in most studies of language competition models. Next, most language competition models are based on the assumption that one language in the model is stronger than the other. These studies tend to ignore cases where there is a balance of power in the competition. The competition between two well-matched languages is more practical, because it can facilitate the co-development of two languages. A third issue with current studies is that many studies have an evolution result where the weaker language inevitably goes extinct. From the integrated point of view of ecology and sociology, this paper improves the Lotka-Volterra model and basic reaction-diffusion model to propose an "ecology-society" computational model for describing language competition. Furthermore, a strict and comprehensive mathematical analysis was made for the stability of the equilibria. Two languages in competition may be either well-matched or greatly different in strength, which was reflected in the experimental design. The results revealed that language coexistence, and even co-development, are likely to occur during language competition.
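The abstract names the Lotka-Volterra competition model as its starting point; the sketch below numerically integrates only that basic two-language core (without the diffusion and social terms of the improved "ecology-society" model), with arbitrary illustrative parameters, to show how weak versus strong cross-competition separates coexistence from exclusion.

    def simulate_competition(r1, r2, K1, K2, a12, a21, x0, y0, dt=0.01, steps=50000):
        # Forward-Euler integration of a two-language Lotka-Volterra competition model.
        # x, y are speaker populations; a12, a21 are cross-competition coefficients.
        x, y = x0, y0
        for _ in range(steps):
            dx = r1 * x * (1 - (x + a12 * y) / K1)
            dy = r2 * y * (1 - (y + a21 * x) / K2)
            x, y = x + dt * dx, y + dt * dy
        return x, y

    # Two well-matched languages: weak cross-competition (a < 1) allows coexistence.
    print(simulate_competition(r1=0.5, r2=0.5, K1=1000, K2=1000,
                               a12=0.4, a21=0.4, x0=10, y0=20))
    # Strong cross-competition (a > 1) drives one language toward extinction.
    print(simulate_competition(r1=0.5, r2=0.5, K1=1000, K2=1000,
                               a12=1.5, a21=1.5, x0=10, y0=20))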
Implementing Artificial Intelligence Behaviors in a Virtual World
NASA Technical Reports Server (NTRS)
Krisler, Brian; Thome, Michael
2012-01-01
In this paper, we will present a look at the current state of the art in human-computer interface technologies, including intelligent interactive agents, natural speech interaction and gestural based interfaces. We describe our use of these technologies to implement a cost effective, immersive experience on a public region in Second Life. We provision our Artificial Agents as a German Shepherd Dog avatar with an external rules engine controlling the behavior and movement. To interact with the avatar, we implemented a natural language and gesture system allowing the human avatars to use speech and physical gestures rather than interacting via a keyboard and mouse. The result is a system that allows multiple humans to interact naturally with AI avatars by playing games such as fetch with a flying disk and even practicing obedience exercises using voice and gesture, a natural seeming day in the park.
Object oriented development of engineering software using CLIPS
NASA Technical Reports Server (NTRS)
Yoon, C. John
1991-01-01
Engineering applications involve numeric complexity and manipulations of a large amount of data. Traditionally, numeric computation has been the concern in developing engineering software. As engineering application software became larger and more complex, management of resources such as data, rather than the numeric complexity, has become the major software design problem. Object oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The CLIPS Object Oriented Language's (COOL) object oriented features are more versatile than C++'s. A software design methodology, based on object oriented and procedural approaches appropriate for engineering software and to be implemented in CLIPS, was outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.
NASA Technical Reports Server (NTRS)
Hyde, Patricia R.; Loftin, R. Bowen
1993-01-01
The volume 2 proceedings from the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology are presented. Topics discussed include intelligent computer assisted training (ICAT) systems architectures, ICAT educational and medical applications, virtual environment (VE) training and assessment, human factors engineering and VE, ICAT theory and natural language processing, ICAT military applications, VE engineering applications, ICAT knowledge acquisition processes and applications, and ICAT aerospace applications.
ERIC Educational Resources Information Center
Li, Jinrong
2012-01-01
The dissertation examines how synchronous text-based computer-mediated communication (SCMC) tasks may affect English as a Second Language (ESL) learners' development of second language (L2) and academic literacy. The study is motivated by two issues concerning the use of SCMC tasks in L2 writing classes. First, although some of the alleged…
Computer-Based Educational Software System. Final Report.
ERIC Educational Resources Information Center
Brandt, Richard C.; Davis, Bradley N.
CBESS (Computer-Based Educational Software System) is a set of 22 programs addressing authoring, instructional delivery, and instructional management. The programs are divided into five groups: (1) Computer-Based Memorization System (CBMS), which helps students acquire and maintain declarative (factual) knowledge (11 programs); (2) Language Skills…
NASA Astrophysics Data System (ADS)
Doerr, Martin; Freitas, Fred; Guizzardi, Giancarlo; Han, Hyoil
Ontology is a cross-disciplinary field concerned with the study of concepts and theories that can be used for representing shared conceptualizations of specific domains. Ontological Engineering is a discipline in computer and information science concerned with the development of techniques, methods, languages and tools for the systematic construction of concrete artifacts capturing these representations, i.e., models (e.g., domain ontologies) and metamodels (e.g., upper-level ontologies). In recent years, there has been a growing interest in the application of formal ontology and ontological engineering to solve modeling problems in diverse areas in computer science such as software and data engineering, knowledge representation, natural language processing, information science, among many others.
NASA Astrophysics Data System (ADS)
Fuentes-Cabrera, Miguel; Anderson, John D.; Wilmoth, Jared; Ginovart, Marta; Prats, Clara; Portell-Canal, Xavier; Retterer, Scott
Microbial interactions are critical for governing community behavior and structure in natural environments. Examination of microbial interactions in the lab involves growth under ideal conditions in batch culture; conditions that occur in nature are, however, characterized by disequilibrium. Of particular interest is the role that system variables play in shaping cell-to-cell interactions and organization at ultrafine spatial scales. We seek to use experiments and agent-based modeling to help discover mechanisms relevant to microbial dynamics and interactions in the environment. Currently, we are using an agent-based model to simulate microbial growth, dynamics and interactions that occur on a microwell-array device developed in our lab. Bacterial cells growing in the microwells of this platform can be studied with high-throughput and high-content image analyses using brightfield and fluorescence microscopy. The agent-based model is written in the language NetLogo, which in turn is "plugged into" a computational framework that allows submitting many calculations in parallel for different initial parameters; visualizing the outcomes in an interactive phase-like diagram; and searching, with a genetic algorithm, for the parameters that lead to the optimal simulation outcome.
ERIC Educational Resources Information Center
Dudley, Albert P.; And Others
1997-01-01
Presents various tips that are useful in the classroom for teaching second languages. These tips focus on teaching basic computer operations; using annotations to foster error corrections in language; using video clips as a part of a U.S. history or culture-based English-as-a-Second-Language lesson; using karaoke to speak with less inhibition; and…
Grammatical analysis as a distributed neurobiological function.
Bozic, Mirjana; Fonteneau, Elisabeth; Su, Li; Marslen-Wilson, William D
2015-03-01
Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences--inflectionally complex words and minimal phrases--and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in L middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage. Copyright © 2014 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Using Electronic Portfolios for Second Language Assessment
ERIC Educational Resources Information Center
Cummins, Patricia W.; Davesne, Celine
2009-01-01
Portfolio assessment as developed in Europe presents a learner-empowering alternative to computer-based testing. The authors present the European Language Portfolio (ELP) and its American adaptations, LinguaFolio and the Global Language Portfolio, as tools to be used with the Common European Framework of Reference for languages and the American…
Student Modeling and Ab Initio Language Learning.
ERIC Educational Resources Information Center
Heift, Trude; Schulze, Mathias
2003-01-01
Provides examples of student modeling techniques that have been employed in computer-assisted language learning over the past decade. Describes two systems for learning German: "German Tutor" and "Geroline." Shows how a student model can support computerized adaptive language testing for diagnostic purposes in a Web-based language learning…
Fusing Symbolic and Numerical Diagnostic Computations
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs, one implementing a numerical analysis method and the other implementing a symbolic analysis method, into a unified event-based decision analysis software system for realtime detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAM), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference-engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.
Language Learning and the Raising of Cultural Awareness through Internet Telephony: A Case Study
ERIC Educational Resources Information Center
Polisca, Elena
2011-01-01
This article seeks to assess the impact of V-Pal (Virtual Partnerships for All Languages) on the student language learning experience within a conventional UK higher education (HE) curriculum. V-Pal is an innovative computer-mediated language scheme, based on a reciprocal, distance-learning language project, run by the University of Manchester in…
ERIC Educational Resources Information Center
Tan, Lan Liana; Wigglesworth, Gillian; Storch, Neomy
2010-01-01
In today's second language classrooms, students are often asked to work in pairs or small groups. Such collaboration can take place face-to-face, but now more often via computer mediated communication. This paper reports on a study which investigated the effect of the medium of communication on the nature of pair interaction. The study involved…
Learning Computer Programming: Implementing a Fractal in a Turing Machine
ERIC Educational Resources Information Center
Pereira, Hernane B. de B.; Zebende, Gilney F.; Moret, Marcelo A.
2010-01-01
It is common to start a course on computer programming logic by teaching the algorithm concept from the point of view of natural languages, but in a schematic way. In this sense we note that students have difficulties in understanding and implementing the problems proposed by the teacher. The main idea of this paper is to show that the…
LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER
NASA Technical Reports Server (NTRS)
Will, H.
1994-01-01
The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Process control schedules sometimes require frequent changes, even several times per day. These changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up the operator writes device driver routines for all of the controlled devices. Once set up, this system requires only an input file containing natural language command lines which tell the system what to do and when to do it. Custom commands for operating, and taking data from, external research equipment can be defined and then executed at any time of the day or night without an operator in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
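The core idea of mapping user-defined natural language command lines onto user-written device drivers can be illustrated, very loosely, in a few lines. The sketch below is in Python rather than the original FORTRAN77/Pascal, and the commands and drivers (set_heater, read_pressure) are hypothetical.

```python
# Illustrative sketch of dispatching natural-language command lines to
# user-written device drivers. All command words and drivers are hypothetical.
def set_heater(temperature):
    print(f"heater set to {temperature} C")

def read_pressure():
    print("pressure logged")

COMMANDS = {
    # keyword set -> action taking the tokenized command line
    ("set", "heater"): lambda words: set_heater(float(words[-1])),
    ("read", "pressure"): lambda words: read_pressure(),
}

def execute(line):
    words = line.lower().split()
    for keywords, action in COMMANDS.items():
        if all(k in words for k in keywords):
            return action(words)
    raise ValueError(f"unrecognized command: {line}")

execute("set the heater to 80")
execute("read pressure now")   # scheduling of timed commands is not modeled here
```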
A bootstrapping method for development of Treebank
NASA Astrophysics Data System (ADS)
Zarei, F.; Basirat, A.; Faili, H.; Mirain, M.
2017-01-01
Using statistical approaches beside the traditional methods of natural language processing could significantly improve both the quality and performance of several natural language processing (NLP) tasks. The effective usage of these approaches is subject to the availability of informative, accurate and detailed corpora on which the learners are trained. This article introduces a bootstrapping method for developing annotated corpora based on a complex and rich linguistically motivated elementary structure called the supertag. To this end, a hybrid method for supertagging is proposed that combines the generative and discriminative methods of supertagging. The method was applied on a subset of the Wall Street Journal (WSJ) in order to annotate its sentences with a set of linguistically motivated elementary structures of the English XTAG grammar, which uses a lexicalised tree-adjoining grammar formalism. The empirical results confirm that the bootstrapping method provides a satisfactory way of annotating English sentences with the mentioned structures. The experiments show that the method could automatically annotate about 20% of the WSJ with an F-measure of about 80%, which is 12% higher than the F-measure of the XTAG Treebank automatically generated by the approach proposed by Basirat and Faili [(2013). Bridge the gap between statistical and hand-crafted grammars. Computer Speech and Language, 27, 1085-1104].
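The bootstrapping idea, stripped of the supertagging specifics, amounts to a self-training loop in which a seed-trained model labels new data and only confident labels are added back to the training set. The sketch below uses toy numeric features and a logistic-regression classifier purely as an illustration of that loop, not of the paper's hybrid generative/discriminative supertaggers.

```python
# Generic self-training (bootstrapping) sketch with toy data: a seed-trained
# classifier labels an unlabeled pool, and only high-confidence auto-labels are
# added back to the training set. Confidently labeled items are not removed
# from the pool here, purely for brevity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_seed = rng.normal(size=(40, 5))
y_seed = (X_seed[:, 0] > 0).astype(int)        # toy gold labels
X_pool = rng.normal(size=(200, 5))             # "unlabeled" examples

X_train, y_train = X_seed, y_seed
for _ in range(3):                             # a few bootstrapping rounds
    clf = LogisticRegression().fit(X_train, y_train)
    proba = clf.predict_proba(X_pool)
    confident = proba.max(axis=1) > 0.9        # keep only confident auto-labels
    X_train = np.vstack([X_train, X_pool[confident]])
    y_train = np.concatenate([y_train, proba[confident].argmax(axis=1)])

print(len(y_train), "training examples after bootstrapping")
```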
Language Learning Going Global: Linking Teachers and Learners via Commercial Skype-Based CMC
ERIC Educational Resources Information Center
Terhune, N. M.
2016-01-01
This paper reports on students' use of face-to-face synchronous computer-mediated communication (CMC) for oral language learning. It describes a university English language class designed to prepare students for overseas study in which a Skype-based English conversation school was piloted. The study offers analysis of how students used the CMC…
Tool Mediation in Focus on Form Activities: Case Studies in a Grammar-Exploring Environment
ERIC Educational Resources Information Center
Karlstrom, Petter; Cerratto-Pargman, Teresa; Lindstrom, Henrik; Knutsson, Ola
2007-01-01
We present two case studies of two different pedagogical tasks in a Computer Assisted Language Learning environment called Grim. The main design principle in Grim is to support "Focus on Form" in second language pedagogy. Grim contains several language technology-based features for exploring linguistic forms (static, rule-based and statistical),…
ERIC Educational Resources Information Center
Rahimi, Zahra; Litman, Diane; Correnti, Richard; Wang, Elaine; Matsumura, Lindsay Clare
2017-01-01
This paper presents an investigation of score prediction based on natural language processing for two targeted constructs within analytic text-based writing: 1) students' effective use of evidence and, 2) their organization of ideas and evidence in support of their claim. With the long-term goal of producing feedback for students and teachers, we…
Huang, Yang; Lowe, Henry J; Klein, Dan; Cucina, Russell J
2005-01-01
The aim of this study was to develop and evaluate a method of extracting noun phrases with full phrase structures from a set of clinical radiology reports using natural language processing (NLP) and to investigate the effects of using the UMLS(R) Specialist Lexicon to improve noun phrase identification within clinical radiology documents. The noun phrase identification (NPI) module is composed of a sentence boundary detector, a statistical natural language parser trained on a nonmedical domain, and a noun phrase (NP) tagger. The NPI module processed a set of 100 XML-represented clinical radiology reports in Health Level 7 (HL7)(R) Clinical Document Architecture (CDA)-compatible format. Computed output was compared with manual markups made by four physicians and one author for maximal (longest) NP and those made by one author for base (simple) NP, respectively. An extended lexicon of biomedical terms was created from the UMLS Specialist Lexicon and used to improve NPI performance. The test set was 50 randomly selected reports. The sentence boundary detector achieved 99.0% precision and 98.6% recall. The overall maximal NPI precision and recall were 78.9% and 81.5% before using the UMLS Specialist Lexicon and 82.1% and 84.6% after. The overall base NPI precision and recall were 88.2% and 86.8% before using the UMLS Specialist Lexicon and 93.1% and 92.6% after, reducing false-positives by 31.1% and false-negatives by 34.3%. The sentence boundary detector performs excellently. After the adaptation using the UMLS Specialist Lexicon, the statistical parser's NPI performance on radiology reports increased to levels comparable to the parser's native performance in its newswire training domain and to that reported by other researchers in the general nonmedical domain.
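A minimal version of such a pipeline (sentence splitting, part-of-speech tagging, noun-phrase chunking, and exact-match scoring against a gold markup) can be sketched with NLTK; note that this uses a hand-written chunk grammar rather than the trained statistical parser and UMLS Specialist Lexicon adaptation described in the study, and it assumes the standard NLTK tokenizer and tagger data packages are installed.

```python
# Base-NP identification sketch using NLTK's regular-expression chunker.
# The chunk grammar and the example sentence are illustrative only.
import nltk

grammar = "NP: {<DT>?<JJ>*<NN.*>+}"      # optional determiner, adjectives, nouns
chunker = nltk.RegexpParser(grammar)

report = "The chest radiograph shows a small left pleural effusion."
for sentence in nltk.sent_tokenize(report):
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    tree = chunker.parse(tagged)
    nps = [" ".join(word for word, tag in subtree.leaves())
           for subtree in tree.subtrees(filter=lambda t: t.label() == "NP")]
    print(nps)

def precision_recall(predicted, gold):
    # Exact-match scoring of predicted NPs against a manual (gold) markup,
    # as one would do when computing the precision/recall figures above.
    tp = len(set(predicted) & set(gold))
    return tp / len(predicted), tp / len(gold)
```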
Baneyx, Audrey; Charlet, Jean; Jaulent, Marie-Christine
2007-01-01
Pathologies and acts are classified in thesauri to help physicians code their activity. In practice, the use of thesauri is not sufficient to reduce variability in coding, and thesauri are not suitable for computer processing. We think the automation of the coding task requires a conceptual modeling of medical items: an ontology. Our task is to help lung specialists code acts and diagnoses with software that represents the medical knowledge of this specialty as an ontology. The objective of the reported work was to build an ontology of pulmonary diseases dedicated to the coding process. To carry out this objective, we developed a precise methodological process for the knowledge engineer in order to build various types of medical ontologies. This process is based on the need to express precisely in natural language the meaning of each concept using differential semantics principles. A differential ontology is a hierarchy of concepts and relationships organized according to their similarities and differences. Our main research hypothesis is to apply natural language processing tools to corpora to develop the resources needed to build the ontology. We consider two corpora, one composed of patient discharge summaries and the other being a teaching book. We propose to combine two approaches to enrich the ontology building: (i) a method which consists of building terminological resources through distributional analysis and (ii) a method based on the observation of corpus sequences in order to reveal semantic relationships. Our ontology currently includes 1550 concepts and the software implementing the coding process is still under development. Results show that the proposed approach is operational and indicate that the combination of these methods and the comparison of the resulting terminological structures give interesting clues to a knowledge engineer for the building of an ontology.
Developing Formal Correctness Properties from Natural Language Requirements
NASA Technical Reports Server (NTRS)
Nikora, Allen P.
2006-01-01
This viewgraph presentation reviews the rationale of the program to transform natural language specifications into formal notation, specifically, to automate generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, specification languages and associated tools have a high learning curve, and increased schedule and budget pressure on projects reduces training opportunities for engineers; and (4) formulation of correctness properties for system models can be a difficult problem. This has relevance to NASA in that it would simplify development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments and/or technological transfer potential, and the next steps.
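A toy version of the idea is a set of fixed templates that map restricted natural-language temporal phrasings onto LTL formulas; the patterns and the example requirement below are invented for illustration and do not reflect the actual tool presented.

```python
# Toy mapping of restricted natural-language temporal requirements to LTL via
# fixed templates. All patterns and the sample requirement are illustrative.
import re

TEMPLATES = [
    (re.compile(r"^(?P<p>.+) shall always hold$"),           "G({p})"),
    (re.compile(r"^(?P<p>.+) shall eventually hold$"),       "F({p})"),
    (re.compile(r"^after (?P<p>.+), (?P<q>.+) shall hold$"), "G(({p}) -> F({q}))"),
]

def to_ltl(requirement):
    text = requirement.strip().lower()
    for pattern, ltl in TEMPLATES:
        match = pattern.match(text)
        if match:
            return ltl.format(**match.groupdict())
    raise ValueError("no template matches: " + requirement)

print(to_ltl("after engine_start, telemetry_on shall hold"))
# -> G((engine_start) -> F(telemetry_on))
```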
A parallelized binary search tree
USDA-ARS?s Scientific Manuscript database
PTTRNFNDR is an unsupervised statistical learning algorithm that detects patterns in DNA sequences, protein sequences, or any natural language texts that can be decomposed into letters of a finite alphabet. PTTRNFNDR performs complex mathematical computations and its processing time increases when i...
Jorge-Botana, Guillermo; Olmos, Ricardo; Luzón, José M
2018-01-01
The aim of this paper is to describe and explain one useful computational methodology to model the semantic development of word representation: word maturity. In particular, the methodology is based on the longitudinal word monitoring created by Kireyev and Landauer using latent semantic analysis for the representation of lexical units. The paper is divided into two parts. First, the steps required to model the development of the meaning of words are explained in detail. We describe the technical and theoretical aspects of each step. Second, we provide a simple example of application of this methodology with some simple tools that can be used by applied researchers. This paper can serve as a user-friendly guide for researchers interested in modeling changes in the semantic representations of words. Some current aspects of the technique and future directions are also discussed. WIREs Cogn Sci 2018, 9:e1457. doi: 10.1002/wcs.1457 This article is categorized under: Computer Science > Natural Language Processing; Linguistics > Language Acquisition; Psychology > Development and Aging. © 2017 Wiley Periodicals, Inc.
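A much-simplified proxy for the procedure can be sketched with off-the-shelf tools: build an LSA space from each cumulative corpus, describe the target word by its similarities to a fixed set of reference words, and compare that profile against the one obtained from the full "adult" corpus. The corpora, reference words, and the profile-comparison step below are illustrative assumptions, not the published method.

```python
# Simplified word-maturity sketch: LSA spaces are built from a partial and a
# full corpus, and the target word's similarity profile to reference words is
# compared across the two spaces. All corpora here are toy placeholders.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def lsa_word_vectors(docs, dims=2):
    vec = CountVectorizer()
    X = vec.fit_transform(docs)                     # documents x terms
    svd = TruncatedSVD(n_components=dims, random_state=0).fit(X)
    word_vectors = svd.components_.T                # terms x dims
    return dict(zip(vec.get_feature_names_out(), word_vectors))

def profile(word, refs, vectors):
    # Similarity of the target word to each reference word in this space.
    return np.array([cosine_similarity([vectors[word]], [vectors[r]])[0, 0]
                     for r in refs])

grade3 = ["the dog runs fast", "the cat and the dog play", "water is wet"]
adult = grade3 + ["the dog is a domestic canine", "canine teeth bite",
                  "water consists of hydrogen and oxygen"]
refs = ["cat", "water", "runs"]
v_child, v_adult = lsa_word_vectors(grade3), lsa_word_vectors(adult)
maturity = cosine_similarity([profile("dog", refs, v_child)],
                             [profile("dog", refs, v_adult)])[0, 0]
print(round(float(maturity), 3))
```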
NASA Astrophysics Data System (ADS)
Sklenář, Ivan; Kříž, Václav
1990-11-01
Programs with a natural-language user interface and text-processing programs require a vocabulary providing the mapping of the individual word form onto a lexeme, e.g. "says", "said", "saying" → "say". Examples of such programs are indexing programs for information retrieval, and spelling correctors for text-processing systems. The lexicographical task of building such a computer vocabulary is especially difficult for Slavic languages, because their morphological structure is complex. An average Czech verb, for example, has 25 forms, and we have identified more than 100 paradigms for verbs. In order to support the creation of a Czech vocabulary, we have designed a system of programs for paradigm identification and derivation of words. The result of our effort is a vocabulary comprising 110,000 words and 1,250,000 word forms. This vocabulary was used for the PASSAT system in the Czechoslovak Press Agency. This vocabulary may also be used in a spelling corrector. However, for such an application the vocabulary must be compressed into a compact form in order to shorten the access times. Compression is based on the paradigmatic structure of morphology, which defines suffix sets for each word.
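The paradigm idea behind the compression can be illustrated with a toy lexicon in which each lexeme is stored once as a stem plus a paradigm (a suffix set), and full forms are expanded or looked up from that compact representation; the English-like suffixes below are purely illustrative, since the actual system encodes the far richer Czech paradigms.

```python
# Toy paradigm-based lexicon: each lexeme is stored once as (stem, paradigm),
# and word forms are generated from suffix sets. Suffixes are illustrative only.
PARADIGMS = {
    "verb_regular": ["", "s", "ed", "ing"],
    "noun_regular": ["", "s"],
}
LEXICON = {            # lexeme -> (stem, paradigm)
    "talk": ("talk", "verb_regular"),
    "report": ("report", "noun_regular"),
}

def all_forms(lexeme):
    stem, paradigm = LEXICON[lexeme]
    return [stem + suffix for suffix in PARADIGMS[paradigm]]

# Inverted index mapping every word form back to its lexeme.
FORM_TO_LEXEME = {form: lex for lex in LEXICON for form in all_forms(lex)}

print(all_forms("talk"))              # ['talk', 'talks', 'talked', 'talking']
print(FORM_TO_LEXEME["talking"])      # 'talk'
```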
Robson, Barry; Boray, Srinidhi
2016-06-01
Extracting medical knowledge by structured data mining of many medical records and from unstructured data mining of natural language source text on the Internet will become increasingly important for clinical decision support. Output from these sources can be transformed into large numbers of elements of knowledge in a Knowledge Representation Store (KRS), here using the notation and to some extent the algebraic principles of the Q-UEL Web-based universal exchange and inference language described previously, rooted in Dirac notation from quantum mechanics and linguistic theory. In a KRS, semantic structures or statements about the world of interest to medicine are analogous to natural language sentences seen as formed from noun phrases separated by verbs, prepositions and other descriptions of relationships. A convenient method of testing and better curating these elements of knowledge is by having the computer use them to take the test of a multiple choice medical licensing examination. It is a venture which perhaps tells us almost as much about the reasoning of students and examiners as it does about the requirements for Artificial Intelligence as employed in clinical decision making. It emphasizes the role of context and of contextual probabilities as opposed to the more familiar intrinsic probabilities, and of a preliminary form of logic that we call presyllogistic reasoning. Copyright © 2016 Elsevier Ltd. All rights reserved.
Automatic Mexican sign language and digits recognition using normalized central moments
NASA Astrophysics Data System (ADS)
Solís, Francisco; Martínez, David; Espinosa, Oscar; Toxqui, Carina
2016-09-01
This work presents a framework for automatic Mexican sign language and digits recognition based on a computer vision system using normalized central moments and artificial neural networks. Images are captured by a digital IP camera, with four LED reflectors and a green background used in order to reduce computational costs and prevent the use of special gloves. 42 normalized central moments are computed per frame and used in a Multi-Layer Perceptron to recognize each database. Four versions per sign and digit were used in the training phase. Recognition rates of 93% and 95% were achieved for Mexican sign language and digits, respectively.
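A sketch of the feature pipeline, computing normalized central moments from a binary silhouette and feeding them to a multi-layer perceptron, is shown below. The images and labels are random placeholders, and fewer moments are computed than the 42 used per frame in the study.

```python
# Normalized central moments of binary images fed to an MLP classifier.
# Images and labels are random stand-ins for real sign silhouettes.
import numpy as np
from sklearn.neural_network import MLPClassifier

def normalized_central_moments(img, max_order=3):
    ys, xs = np.nonzero(img)
    m00 = len(xs)                                   # zeroth-order moment (area)
    xbar, ybar = xs.mean(), ys.mean()
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            if 2 <= p + q <= max_order:             # skip trivial low orders
                mu = np.sum(((xs - xbar) ** p) * ((ys - ybar) ** q))
                feats.append(mu / m00 ** (1 + (p + q) / 2.0))
    return np.array(feats)

rng = np.random.default_rng(0)
images = rng.integers(0, 2, size=(60, 32, 32))      # stand-in silhouettes
labels = rng.integers(0, 5, size=60)                # 5 pretend sign classes
X = np.array([normalized_central_moments(im) for im in images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, labels)
print(clf.score(X, labels))
```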
Multilingual natural language generation as part of a medical terminology server.
Wagner, J C; Solomon, W D; Michel, P A; Juge, C; Baud, R H; Rector, A L; Scherrer, J R
1995-01-01
Re-usable and sharable, and therefore language-independent concept models are of increasing importance in the medical domain. The GALEN project (Generalized Architecture for Languages Encyclopedias and Nomenclatures in Medicine) aims at developing language-independent concept representation systems as the foundations for the next generation of multilingual coding systems. For use within clinical applications, the content of the model has to be mapped to natural language. A so-called Multilingual Information Module (MM) establishes the link between the language-independent concept model and different natural languages. This text generation software must be versatile enough to cope at the same time with different languages and with different parts of a compositional model. It has to meet, on the one hand, the properties of the language as used in the medical domain and, on the other hand, the specific characteristics of the underlying model and its representation formalism. We propose a semantic-oriented approach to natural language generation that is based on linguistic annotations to a concept model. This approach is realized as an integral part of a Terminology Server, built around the concept model and offering different terminological services for clinical applications.
ONRASIA Scientific Information Bulletin, Volume 16, Number 1
1991-03-01
The legible fragments of this abstract describe a system that allows differential equations to be expressed in a natural mathematical syntax, rather than in an algebraic language such as Fortran, and compiled into efficient vectorizable code; they also mention years of development of vectorizing compilers for Hitachi.
Effect of Network-Assisted Language Teaching Model on Undergraduate English Skills
ERIC Educational Resources Information Center
He, Chunyan
2013-01-01
With the coming of the information age, computer-based teaching model has had an important impact on English teaching. Since 2004, the trial instruction on Network-assisted Language Teaching (NALT) Model integrating the English instruction and computer technology has been launched at some universities in China, including China university of…
COMETT-CALLIOPE: The Implementation of Call Materials for Business and Industrial Purposes.
ERIC Educational Resources Information Center
Van Elsen, Edwig; And Others
The development of a Computer Assisted Language Learning for Information Organization and Production in Europe (CALLIOPE) program is discussed. CALLIOPE is a program launched by the European Community that is intended to provide computer-based foreign language instruction for the business and industrial environment. Program goals are two-fold: (1)…
Learning Vocabulary in a Foreign Language: A Computer Software Based Model Attempt
ERIC Educational Resources Information Center
Yelbay Yilmaz, Yasemin
2015-01-01
This study aimed at devising a vocabulary learning software that would help learners learn and retain vocabulary items effectively. Foundation linguistics and learning theories have been adapted to the foreign language vocabulary learning context using a computer software named Parole that was designed exclusively for this study. Experimental…
Whole Language, Computers and CD-ROM Technology: A Kindergarten Unit on "Benjamin Bunny."
ERIC Educational Resources Information Center
Balajthy, Ernest
A kindergarten teacher, two preservice teachers, and a college consultant on educational computer technology designed and developed a 10-day whole-language integrated unit on the theme of Beatrix Potter's "Benjamin Bunny." The project was designed as a demonstration of the potential of integrating the CD-ROM-based version of…
ERIC Educational Resources Information Center
Marek, Michael W.; Wu, Wen-Chi Vivian
2014-01-01
This conceptual, interdisciplinary inquiry explores Complex Dynamic Systems as the concept relates to the internal and external environmental factors affecting computer assisted language learning (CALL). Based on the results obtained by de Rosnay ["World Futures: The Journal of General Evolution", 67(4/5), 304-315 (2011)], who observed…
Creation and Development of an Integrated Model of New Technologies and ESP
ERIC Educational Resources Information Center
Garcia Laborda, Jesus
2004-01-01
It seems irrefutable that the world is progressing in concert with computer science. Educational applications and projects for first and second language acquisition have not been left behind. However, currently it seems that the reputation of completely computer-based language learning courses has taken a nosedive, and, consequently there has been…
Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.
1983-01-01
A number of methodologies for verifying systems and computer based tools that assist users in verifying their systems were developed. These tools were applied to verify in part the SIFT ultrareliable aircraft computer. Topics covered included: STP theorem prover; design verification of SIFT; high level language code verification; assembly language level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.
Metalevel programming in robotics: Some issues
NASA Technical Reports Server (NTRS)
Kumarn, A.; Parameswaran, N.
1987-01-01
Computing in robotics has two important requirements: efficiency and flexibility. Algorithms for robot actions are usually implemented in procedural languages such as VAL and AL. But, since their excessive bindings create inflexible structures of computation, it is proposed that Logic Programming is a more suitable language for robot programming due to its non-determinism, declarative nature, and provision for metalevel programming. Logic Programming, however, results in inefficient computations. As a solution to this problem, the authors discuss a framework in which controls can be described to improve efficiency. They divide controls into (1) in-code and (2) metalevel controls and discuss them with reference to selection of rules and dataflow. The merit of Logic Programming is illustrated by modelling the motion of a robot from one point to another while avoiding obstacles.
Integrating language models into classifiers for BCI communication: a review
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C.; Pouratian, N.
2016-06-01
Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could use the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.
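The basic fusion step can be illustrated with a toy speller model in which the posterior over the next letter combines (hypothetical) EEG classifier likelihoods with a bigram letter language model conditioned on the text typed so far; all probabilities below are randomly generated stand-ins.

```python
# Toy fusion of classifier evidence and a letter language model in a BCI speller.
# Both the "EEG" likelihoods and the bigram model are random placeholders.
import numpy as np

letters = np.array(list("abcdefghijklmnopqrstuvwxyz"))
rng = np.random.default_rng(1)

eeg_likelihood = rng.dirichlet(np.ones(26))       # stand-in classifier output
bigram = rng.dirichlet(np.ones(26), size=26)      # stand-in P(next | previous)
prev = letters.tolist().index("t")                # user has typed "...t" so far

posterior = eeg_likelihood * bigram[prev]         # naive Bayes-style combination
posterior /= posterior.sum()
print(letters[np.argsort(posterior)[::-1][:3]])   # top-3 letter candidates
```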
Integrating language models into classifiers for BCI communication: a review.
Speier, W; Arnold, C; Pouratian, N
2016-06-01
The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could use the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.
Base Numeration Systems and Introduction to Computer Programming.
ERIC Educational Resources Information Center
Kim, K. Ed.; And Others
This teaching guide is for the instructor of an introductory course in computer programming using FORTRAN language. Five FORTRAN programs are incorporated in this guide, which has been used as a FORTRAN IV SELF TEACHER. The base eight, base four, and base two concepts are integrated with FORTRAN computer programs, geoblock activities, and related…
ERIC Educational Resources Information Center
Vlas, Radu Eduard
2012-01-01
Open source projects do have requirements; they are, however, mostly informal, text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects,…
Language-driven anticipatory eye movements in virtual reality.
Eichert, Nicole; Peeters, David; Hagoort, Peter
2018-06-01
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.
IPython: components for interactive and parallel computing across disciplines. (Invited)
NASA Astrophysics Data System (ADS)
Perez, F.; Bussonnier, M.; Frederic, J. D.; Froehle, B. M.; Granger, B. E.; Ivanov, P.; Kluyver, T.; Patterson, E.; Ragan-Kelley, B.; Sailer, Z.
2013-12-01
Scientific computing is an inherently exploratory activity that requires constantly cycling between code, data and results, each time adjusting the computations as new insights and questions arise. To support such a workflow, good interactive environments are critical. The IPython project (http://ipython.org) provides a rich architecture for interactive computing with: 1. Terminal-based and graphical interactive consoles. 2. A web-based Notebook system with support for code, text, mathematical expressions, inline plots and other rich media. 3. Easy to use, high performance tools for parallel computing. Despite its roots in Python, the IPython architecture is designed in a language-agnostic way to facilitate interactive computing in any language. This allows users to mix Python with Julia, R, Octave, Ruby, Perl, Bash and more, as well as to develop native clients in other languages that reuse the IPython clients. In this talk, I will show how IPython supports all stages in the lifecycle of a scientific idea: 1. Individual exploration. 2. Collaborative development. 3. Production runs with parallel resources. 4. Publication. 5. Education. In particular, the IPython Notebook provides an environment for "literate computing" with a tight integration of narrative and computation (including parallel computing). These Notebooks are stored in a JSON-based document format that provides an "executable paper": notebooks can be version controlled, exported to HTML or PDF for publication, and used for teaching.
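Because notebooks are plain JSON documents, they can be created, inspected, and version-controlled with nothing beyond the standard library; the minimal structure below follows a simplified nbformat-4 layout and is only a sketch.

```python
# Minimal, simplified nbformat-4 style notebook written and read as plain JSON.
import json

nb = {
    "nbformat": 4, "nbformat_minor": 5, "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Analysis"]},
        {"cell_type": "code", "metadata": {}, "execution_count": None,
         "outputs": [], "source": ["print(2 + 2)"]},
    ],
}
with open("example.ipynb", "w") as f:
    json.dump(nb, f)

# Read the notebook back and list its code cells.
with open("example.ipynb") as f:
    for cell in json.load(f)["cells"]:
        if cell["cell_type"] == "code":
            print("".join(cell["source"]))
```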
ERIC Educational Resources Information Center
Feldman, David
1975-01-01
Presents a computerized program for foreign language learning giving drills for all the major language skills. The drills are followed by an extensive bibliography of documents in some way dealing with computer based instruction, particularly foreign language instruction. (Text is in Spanish.) (TL)
Current Trends in English Language Testing. Conference Proceedings for CTELT 1997 and 1998, Vol. 1.
ERIC Educational Resources Information Center
Coombe, Christine A., Ed.
Papers from the 1997 and 1998 Current Trends in English Language Testing (CTELT) conferences include: "Computer-Based Language Testing: The Call of the Internet" (G. Fulcher); "Uses of the PET (Preliminary English Test) at Sultan Qaboos University" (R. Taylor); "Issues in Foreign and Second Language Academic Listening…
The Effect of Formative Assessments on Language Performance
ERIC Educational Resources Information Center
Radford, Brian W.
2014-01-01
This study sought to improve the language learning outcomes at the Missionary Training Center in Provo, Utah. Young men and women between the ages of 19-24 are taught a foreign language in an accelerated environment. In an effort to improve learning outcomes, computer-based practice and teaching of language performance criteria were provided to…
Effect of the Affordances of a Virtual Environment on Second Language Oral Proficiency
ERIC Educational Resources Information Center
Carruthers, Heidy P. Cuervo
2013-01-01
The traditional language laboratory consists of computer-based exercises in which students practice the language individually, working on language form drills and listening comprehension activities. In addition to the traditional approach to the laboratory requirement, students in the study participated in a weekly conversation hour focusing on…
Plant Phenotyping using Probabilistic Topic Models: Uncovering the Hyperspectral Language of Plants
Wahabzada, Mirwaes; Mahlein, Anne-Katrin; Bauckhage, Christian; Steiner, Ulrike; Oerke, Erich-Christian; Kersting, Kristian
2016-01-01
Modern phenotyping and plant disease detection methods, based on optical sensors and information technology, provide promising approaches to plant research and precision farming. In particular, hyperspectral imaging has been found to reveal physiological and structural characteristics in plants and to allow for tracking physiological dynamics due to environmental effects. In this work, we present an approach to plant phenotyping that integrates non-invasive sensors, computer vision, as well as data mining techniques and allows for monitoring how plants respond to stress. To uncover latent hyperspectral characteristics of diseased plants reliably and in an easy-to-understand way, we “wordify” the hyperspectral images, i.e., we turn the images into a corpus of text documents. Then, we apply probabilistic topic models, a well-established natural language processing technique that identifies content and topics of documents. Based on recent regularized topic models, we demonstrate that one can automatically track the development of three foliar diseases of barley. We also present a visualization of the topics that provides plant scientists an intuitive tool for hyperspectral imaging. In short, our analysis and visualization of characteristic topics found during symptom development and disease progress reveal the hyperspectral language of plant diseases. PMID:26957018
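The "wordification" step can be sketched as follows: each pixel's spectrum is quantized into a discrete word, every image becomes a bag of such words, and a topic model is fit over the resulting corpus. The data are random placeholders and plain LDA stands in for the regularized topic models used in the study.

```python
# "Wordify" hyperspectral pixels into discrete tokens, then fit a topic model.
# The hyperspectral cube below is a random placeholder.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
cube = rng.random(size=(12, 100, 8))           # 12 images, 100 pixels, 8 bands

def wordify(image, bins=4):
    words = []
    for pixel in image:
        band = int(np.argmax(pixel))            # dominant spectral band
        level = int(pixel[band] * bins)         # coarse intensity bin
        words.append(f"b{band}_l{level}")
    return " ".join(words)

docs = [wordify(img) for img in cube]           # one "document" per image
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
print(lda.transform(X).round(2))                # per-image topic mixtures
```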
Generating Contextual Descriptions of Virtual Reality (VR) Spaces
NASA Astrophysics Data System (ADS)
Olson, D. M.; Zaman, C. H.; Sutherland, A.
2017-12-01
Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.
An expert system for natural language processing
NASA Technical Reports Server (NTRS)
Hennessy, John F.
1988-01-01
A solution to the natural language processing problem is proposed that uses a rule-based system, written in OPS5, to replace the traditional parsing method. The advantages of using a rule-based system are explored. Specifically, the extensibility of a rule-based solution is discussed, as well as the value of maintaining rules that function independently. Finally, the power of using semantics to supplement the syntactic analysis of a sentence is considered.
Appendix Y. The Integrated Communications Experiment (ICE) Summary.
ERIC Educational Resources Information Center
Coffin, Robert
This appendix describes the Integrated Communications Experiment (ICE), a comprehensive computer software capability developed for the ComField Project. Each major characteristic of the data processing system is treated separately: natural language processing, flexibility, noninterference with the educational process, multipurposeness,…
Automatic generation of the index of productive syntax for child language transcripts.
Hassanali, Khairun-nisa; Liu, Yang; Iglesias, Aquiles; Solorio, Thamar; Dollaghan, Christine
2014-03-01
The index of productive syntax (IPSyn; Scarborough, Applied Psycholinguistics, 11:1-22, 1990) is a measure of syntactic development in child language that has been used in research and clinical settings to investigate the grammatical development of various groups of children. However, IPSyn is mostly calculated manually, which is an extremely laborious process. In this article, we describe the AC-IPSyn system, which automatically calculates the IPSyn score for child language transcripts using natural language processing techniques. Our results show that the AC-IPSyn system performs at levels comparable to scores computed manually. The AC-IPSyn system can be downloaded from www.hlt.utdallas.edu/~nisa/ipsyn.html.
A primer in macromolecular linguistics.
Searls, David B
2013-03-01
Polymeric macromolecules, when viewed abstractly as strings of symbols, can be treated in terms of formal language theory, providing a mathematical foundation for characterizing such strings both as collections and in terms of their individual structures. In addition this approach offers a framework for analysis of macromolecules by tools and conventions widely used in computational linguistics. This article introduces the ways that linguistics can be and has been applied to molecular biology, covering the relevant formal language theory at a relatively nontechnical level. Analogies between macromolecules and human natural language are used to provide intuitive insights into the relevance of grammars, parsing, and analysis of language complexity to biology. Copyright © 2012 Wiley Periodicals, Inc.
Beliefs about Learning English as a Second Language among Native Groups in Rural Sabah, Malaysia
ERIC Educational Resources Information Center
Krishnasamy, Hariharan N.; Veloo, Arsaythamby; Lu, Ho Fui
2013-01-01
This paper identifies differences between the three ethnic groups, namely, Kadazans/Dusuns, Bajaus, and other minority ethnic groups on the beliefs about learning English as a second language based on the five variables, that is, language aptitude, language learning difficulty, language learning and communicating strategies, nature of language…
Automatic Selection of Suitable Sentences for Language Learning Exercises
ERIC Educational Resources Information Center
Pilán, Ildikó; Volodina, Elena; Johansson, Richard
2013-01-01
In our study we investigated second and foreign language (L2) sentence readability, an area little explored so far in the case of several languages, including Swedish. The outcome of our research consists of two methods for sentence selection from native language corpora based on Natural Language Processing (NLP) and machine learning (ML)…
ERIC Educational Resources Information Center
Ryder, Nuala; Leinonen, Eeva; Schulz, Joerg
2008-01-01
Background: Pragmatic language impairment in children with specific language impairment has proved difficult to assess, and the nature of their abilities to comprehend pragmatic meaning has not been fully investigated. Aims: To develop both a cognitive approach to pragmatic language assessment based on Relevance Theory and an assessment tool for…
Toward a theory of distributed word expert natural language parsing
NASA Technical Reports Server (NTRS)
Rieger, C.; Small, S.
1981-01-01
An approach to natural language meaning-based parsing in which the unit of linguistic knowledge is the word rather than the rewrite rule is described. In the word expert parser, knowledge about language is distributed across a population of procedural experts, each representing a word of the language, and each an expert at diagnosing that word's intended usage in context. The parser is structured around a coroutine control environment in which the generator-like word experts ask questions and exchange information in coming to collective agreement on sentence meaning. The word expert theory is advanced as a better cognitive model of human language expertise than the traditional rule-based approach. The technical discussion is organized around examples taken from the prototype LISP system which implements parts of the theory.
ERIC Educational Resources Information Center
Atai, Mahmood Reza; Shoja, Leila
2011-01-01
Even though English for Specific Academic Purposes (ESAP) courses constitute a significant part of the Iranian university curriculum, curriculum developers have generally developed the programs based on intuition. This study assessed the present and target situation academic language needs of undergraduate students of computer engineering. To this…
An Overview of the Needs of Technology in Language Testing in Spain
ERIC Educational Resources Information Center
Garcia Laborda, Jesus; Magal Royo, Teresa; Barcena Madera, Elena
2015-01-01
Over the past few years, computer-based language testing has become prevalent worldwide. The number of institutions that use computers as the main means of delivery has increased dramatically. Many students face tests every day for well-known high-stakes decisions, which imply the knowledge and ability to use technology to provide evidence of language…
CMC Technologies for Teaching Foreign Languages: What's on the Horizon?
ERIC Educational Resources Information Center
Lafford, Peter A.; Lafford, Barbara A.
2005-01-01
Computer-mediated communication (CMC) technologies have begun to play an increasingly important role in the teaching of foreign/second (L2) languages. Their use in this context is supported by a growing body of CMC research that highlights the importance of the negotiation of meaning and computer-based interaction in the process of second language…
Using a Dialogue System Based on Dialogue Maps for Computer Assisted Second Language Learning
ERIC Educational Resources Information Center
Choi, Sung-Kwon; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
2016-01-01
In order to use dialogue systems for computer assisted second-language learning systems, one of the difficult issues in such systems is how to construct large-scale dialogue knowledge that matches the dialogue modelling of a dialogue system. This paper describes how we have accomplished the short-term construction of large-scale and…
Expected Utility Based Decision Making under Z-Information and Its Application.
Aliev, Rashad R; Mraiziq, Derar Atallah Talal; Huseynov, Oleg H
2015-01-01
Real-world decision relevant information is often partially reliable. The reasons are partial reliability of the source of information, misperceptions, psychological biases, incompetence, and so forth. Z-numbers based formalization of information (Z-information) represents a natural language (NL) based value of a variable of interest in line with the related NL based reliability. What is important is that Z-information not only is the most general representation of real-world imperfect information but also has the highest descriptive power from human perception point of view as compared to fuzzy number. In this study, we present an approach to decision making under Z-information based on direct computation over Z-numbers. This approach utilizes expected utility paradigm and is applied to a benchmark decision problem in the field of economics.
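A greatly simplified numerical caricature of the idea is to compute each alternative's expected utility from a probability estimate and then discount it by the reliability attached to that estimate; the alternatives and numbers below are invented, and this does not reproduce the paper's full Z-number calculus.

```python
# Reliability-discounted expected utility over invented alternatives; a
# caricature of Z-information based decision making, not the paper's method.
alternatives = {
    # name: (probability of success, utility if success, utility if failure, reliability)
    "invest_in_A": (0.7, 100.0, -20.0, 0.9),
    "invest_in_B": (0.9, 60.0, -10.0, 0.6),
}

def discounted_eu(p, u_win, u_lose, reliability):
    expected_utility = p * u_win + (1 - p) * u_lose
    return reliability * expected_utility      # discount by information reliability

ranked = sorted(alternatives.items(),
                key=lambda kv: discounted_eu(*kv[1]), reverse=True)
for name, params in ranked:
    print(name, round(discounted_eu(*params), 1))
```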
NASA Astrophysics Data System (ADS)
Spinney, Laura
2017-09-01
Computer scientist Luc Steels uses artificial intelligence to explore the origins and evolution of language. He is best known for his 1999-2001 Talking Heads Experiment, in which robots had to construct a language from scratch to communicate with each other. Now Steels, who works at the Free University of Brussels (VUB), has composed an opera based on the legend of Faust, with a twenty-first-century twist. He talks about Mozart as a nascent computer programmer, how music maps onto language, and the blurred boundaries of a digitized world.
First stage identification of syntactic elements in an extra-terrestrial signal
NASA Astrophysics Data System (ADS)
Elliott, John
2011-02-01
By investigating the generic attributes of a representative set of terrestrial languages at varying levels of abstraction, it is our endeavour to try and isolate elements of the signal universe, which are computationally tractable for its detection and structural decipherment. Ultimately, our aim is to contribute in some way to the understanding of what 'languageness' actually is. This paper describes algorithms and software developed to characterise and detect generic intelligent language-like features in an input signal, using natural language learning techniques: looking for characteristic statistical "language-signatures" in test corpora. As a first step towards such species-independent language-detection, we present a suite of programs to analyse digital representations of a range of data, and use the results to extrapolate whether or not there are language-like structures which distinguish this data from other sources, such as music, images, and white noise.
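Two of the simplest statistical "language signatures" of this kind are the unigram entropy of the symbol stream and the slope of its log rank-frequency curve (Zipf's law predicts a slope near -1 for natural language). The sketch below computes both for a text-like stream and for uniform noise; the streams and any thresholds one might apply are illustrative only.

```python
# Unigram entropy and Zipf (rank-frequency) slope as crude language signatures.
import math
import random
from collections import Counter

def entropy_and_zipf_slope(symbols):
    counts = Counter(symbols)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    # Least-squares slope of log frequency against log rank.
    freqs = sorted(counts.values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return entropy, num / den

text = list("the quick brown fox jumps over the lazy dog " * 50)
noise = [random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in range(len(text))]
print(entropy_and_zipf_slope(text))    # language-like: steeper negative slope
print(entropy_and_zipf_slope(noise))   # noise: flatter rank-frequency curve
```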
Simulation/Gaming and the Acquisition of Communicative Competence in Another Language.
ERIC Educational Resources Information Center
Garcia-Carbonell, Amparo; Rising, Beverly; Montero, Begona; Watts, Frances
2001-01-01
Discussion of communicative competence in second language acquisition focuses on a theoretical and practical meshing of simulation and gaming methodology with theories of foreign language acquisition, including task-based learning, interaction, and comprehensible input. Describes experiments conducted with computer-assisted simulations in…
Video to Text (V2T) in Wide Area Motion Imagery
2015-09-01
The legible fragments of this extract describe automated processing of microtext or documents (e.g., using Sphinx or Apache NLP), previous work in natural language full-text searching, and a natural language processing (NLP) based module whose structured text processing component is built around seven key word banks.
A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.
Grando, M Adela; Glasspool, David; Fox, John
2012-01-01
To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results to a comparable previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
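A minimal Petri-net interpreter makes the pattern encoding concrete: a transition is enabled when every input place holds a token, and firing moves tokens from inputs to outputs. The sequence/parallel-split net below is an invented toy, not a PROforma model or one of the standard patterns from the paper.

```python
# Minimal Petri-net sketch: enabling and firing of transitions over a marking.
# The example net (a parallel split followed by a join) is illustrative only.
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    marking: dict                                    # place -> token count
    transitions: dict = field(default_factory=dict)  # name -> (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet(marking={"start": 1},
               transitions={"split": (["start"], ["branch_a", "branch_b"]),
                            "join": (["branch_a", "branch_b"], ["end"])})
net.fire("split")
net.fire("join")
print(net.marking)   # {'start': 0, 'branch_a': 0, 'branch_b': 0, 'end': 1}
```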
ERIC Educational Resources Information Center
Geluso, Joe
2013-01-01
Usage-based theories of language learning suggest that native speakers of a language are acutely aware of formulaic language due in large part to frequency effects. Corpora and data-driven learning can offer useful insights into frequent patterns of naturally occurring language to second/foreign language learners who, unlike native speakers, are…
Modeling Teaching with a Computer-Based Concordancer in a TESL Preservice Teacher Education Program.
ERIC Educational Resources Information Center
Gan, Siowck-Lee; And Others
1996-01-01
This study modeled teaching with a computer-based concordancer in a Teaching English-as-a-Second-Language program. Preservice teachers were randomly assigned to work with computer concordancing software or vocabulary exercises to develop word attack skills. Pretesting and posttesting indicated that computer concordancing was more effective in…
Adult Learning in a Computer-Based ESL Acquisition Program
ERIC Educational Resources Information Center
Sanchez, Karen Renee
2013-01-01
This study explores the self-efficacy of students learning English as a Second Language on the computer-based Rosetta Stone program. The research uses a qualitative approach to explore how a readily available computer-based learning program, Rosetta Stone, can help adult immigrant students gain some English competence and so acquire a greater…
pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2014-01-01
This work presents pWeb, a new language and compiler for parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled creating unprecedented applications on the web. Low performance of the web browser, however, remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulations and image processing, compared to native applications. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
ERIC Educational Resources Information Center
Schmid, Euline Cutrim; Hegelheimer, Volker
2014-01-01
This paper presents research findings of a longitudinal empirical case study that investigated an innovative Computer Assisted Language Learning (CALL) professional development program for pre-service English as Foreign Language (EFL) teachers. The conceptualization of the program was based on the assumption that pre-service language teachers…
It's Just a Game, Right? Types of Play in Foreign Language CMC
ERIC Educational Resources Information Center
Warner, Chantelle N.
2004-01-01
This study focuses on the various playful uses of language that occurred during a semester-long study of two German language courses using one type of synchronous network-based medium, the MOO. Research and use of synchronous computer-mediated communication (CMC) have flourished in the study of second-language acquisition (SLA) since the late…
NASA Astrophysics Data System (ADS)
Wallace, Richard S.
This paper is a technical presentation of Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) and Artificial Intelligence Markup Language (AIML), set in context by historical and philosophical ruminations on human consciousness. A.L.I.C.E., the first AIML-based personality program, won the Loebner Prize as "the most human computer" at the annual Turing Test contests in 2000, 2001, and 2004. The program, and the organization that develops it, is a product of the world of free software. More than 500 volunteers from around the world have contributed to her development. This paper describes the history of A.L.I.C.E. and AIML free software since 1995, noting that the theme and strategy of deception and pretense upon which AIML is based can be traced through the history of Artificial Intelligence research. This paper goes on to show how to use AIML to create robot personalities like A.L.I.C.E. that pretend to be intelligent and self-aware. The paper winds up with a survey of some of the philosophical literature on the question of consciousness. We consider Searle's Chinese Room, and the view that natural language understanding by a computer is impossible. We note that the proposition "consciousness is an illusion" may be undermined by the paradoxes it apparently implies. We conclude that A.L.I.C.E. does pass the Turing Test, at least, to paraphrase Abraham Lincoln, for some of the people some of the time.
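The stimulus-response core of AIML can be conveyed with a tiny fragment and a naive matcher; real AIML interpreters handle wildcards, <srai> recursion, context, and much more, so the sketch below is only a caricature of how A.L.I.C.E. works.

```python
# Tiny AIML fragment and a naive exact-match responder; the categories and
# replies are invented, and this omits wildcards, <srai>, and context handling.
import xml.etree.ElementTree as ET

AIML = """
<aiml version="1.0">
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there! I am a chat robot.</template>
  </category>
  <category>
    <pattern>WHAT IS YOUR NAME</pattern>
    <template>My name is a closely guarded secret.</template>
  </category>
</aiml>
"""

categories = {c.find("pattern").text.strip(): c.find("template").text.strip()
              for c in ET.fromstring(AIML).findall("category")}

def respond(user_input):
    # AIML patterns are matched against normalized (uppercased) input.
    key = user_input.strip().upper().rstrip("?")
    return categories.get(key, "I do not understand.")

print(respond("what is your name?"))
```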
ERIC Educational Resources Information Center
Wash, Darrel Patrick
1989-01-01
Making a machine seem intelligent is not easy. As a consequence, demand has been rising for computer professionals skilled in artificial intelligence and is likely to continue to go up. These workers develop expert systems and solve the mysteries of machine vision, natural language processing, and neural networks. (Editor)
Artificial Intelligence: Underlying Assumptions and Basic Objectives.
ERIC Educational Resources Information Center
Cercone, Nick; McCalla, Gordon
1984-01-01
Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…
Code of Federal Regulations, 2012 CFR
2012-04-01
... education emphasizing literacy in language arts, mathematics, natural and physical sciences, history, and related social sciences. Bureau means the Bureau of Indian Affairs of the Department of the Interior... specified level of mastery. Computer literacy used here means the general range of skills and understanding...
Code of Federal Regulations, 2013 CFR
2013-04-01
... education emphasizing literacy in language arts, mathematics, natural and physical sciences, history, and related social sciences. Bureau means the Bureau of Indian Affairs of the Department of the Interior... specified level of mastery. Computer literacy used here means the general range of skills and understanding...
Code of Federal Regulations, 2014 CFR
2014-04-01
... education emphasizing literacy in language arts, mathematics, natural and physical sciences, history, and related social sciences. Bureau means the Bureau of Indian Affairs of the Department of the Interior... specified level of mastery. Computer literacy used here means the general range of skills and understanding...
Sustaining Multimodal Language Learner Interactions Online
ERIC Educational Resources Information Center
Satar, H. Müge
2015-01-01
Social presence is considered an important quality in computer-mediated communication as it promotes willingness in learners to take risks through participation in interpersonal exchanges (Kehrwald, 2008) and makes communication more natural (Lowenthal, 2010). While social presence has mostly been investigated through questionnaire data and…
NASA Technical Reports Server (NTRS)
Sanz, J.; Pischel, K.; Hubler, D.
1992-01-01
An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. A Parallel Virtual Machine (PVM) is used as the message-passing layer for a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled entirely by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. Only a reasonable overhead is imposed for internode communication, allowing efficient utilization of the engaged processors. Perhaps the most interesting feature of the system is its versatility, which permits use of whichever available computational resources are experiencing the least load at a given time.
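As a loose analogy only (PVM's own API is not reproduced here), the following Python sketch shows the same host/worker macro-tasking shape: a controlling process hands coarse-grained tasks to a pool of workers and gathers the results. The analyze_component task and the task list are hypothetical.

```python
# Host/worker macro-tasking sketch in the spirit of the PVM setup described above.
# multiprocessing stands in for PVM message passing; analyze_component is hypothetical.
from multiprocessing import Pool

def analyze_component(component_id: int) -> tuple[int, float]:
    # Stand-in for one coarse-grained engine-component analysis task.
    residual = 1.0 / (component_id + 1)
    return component_id, residual

if __name__ == "__main__":
    tasks = range(8)                    # one task per engine component
    with Pool(processes=4) as pool:     # the "virtual machine" of worker processes
        results = pool.map(analyze_component, tasks)
    for cid, residual in sorted(results):
        print(f"component {cid}: residual {residual:.3f}")
```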
Detecting Target Objects by Natural Language Instructions Using an RGB-D Camera
Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Tang, Hongru; Xi, Ning
2016-01-01
Controlling robots by natural language (NL) is attracting increasing attention for its versatility, convenience, and minimal training requirements for users. Grounding is a crucial challenge in this problem: enabling robots to understand NL instructions from humans. This paper explores the object grounding problem, specifically how to detect target objects specified by NL instructions using an RGB-D camera in robotic manipulation applications. In particular, a simple yet robust vision algorithm is applied to segment objects of interest. Using the metric information of all segmented objects, object attributes and relations between objects are extracted. The NL instructions, which incorporate multiple cues for object specification, are parsed into domain-specific annotations. The annotations from NL and the information extracted from the RGB-D camera are matched in a computational state-estimation framework that searches all possible object grounding states. The final grounding is obtained by selecting the states with the maximum probabilities. An RGB-D scene dataset associated with different groups of NL instructions, based on different cognition levels of the robot, is collected. Quantitative evaluations on the dataset illustrate the advantages of the proposed method. Experiments on NL-controlled object manipulation and NL-based task programming using a mobile manipulator show its effectiveness and practicability in robotic applications. PMID:27983604
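A hedged sketch of the grounding step described above: each segmented object is scored against the attributes parsed from the instruction and the most probable candidate is selected. The attribute names, the scoring rule, and the example scene are invented for illustration; the paper's state-estimation framework is richer than this.

```python
# Score each segmented object against attributes parsed from the NL instruction
# and keep the most probable match. Attribute names and the scene are invented.

def ground(instruction_attrs: dict, objects: list[dict]) -> dict:
    def score(obj):
        matches = sum(1 for k, v in instruction_attrs.items() if obj.get(k) == v)
        return matches / max(len(instruction_attrs), 1)   # crude match probability
    return max(objects, key=score)

scene = [
    {"id": "obj1", "color": "red",  "shape": "cup",   "on": "table"},
    {"id": "obj2", "color": "blue", "shape": "cup",   "on": "table"},
    {"id": "obj3", "color": "red",  "shape": "plate", "on": "shelf"},
]
parsed = {"color": "red", "shape": "cup"}    # e.g. from "pick up the red cup"
print(ground(parsed, scene)["id"])           # -> obj1
```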
Comparison of LISP and MUMPS as implementation languages for knowledge-based systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, A.C.
1984-01-01
Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS which might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language. 8 references.
Prediction of psychosis across protocols and risk cohorts using automated language analysis.
Corcoran, Cheryl M; Carrillo, Facundo; Fernández-Slezak, Diego; Bedi, Gillinder; Klim, Casimir; Javitt, Daniel C; Bearden, Carrie E; Cecchi, Guillermo A
2018-02-01
Language and speech are the primary source of data for psychiatrists to diagnose and treat mental disorders. In psychosis, the very structure of language can be disturbed, including semantic coherence (e.g., derailment and tangentiality) and syntactic complexity (e.g., concreteness). Subtle disturbances in language are evident in schizophrenia even prior to first psychosis onset, during prodromal stages. Using computer-based natural language processing analyses, we previously showed that, among English-speaking clinical (e.g., ultra) high-risk youths, baseline reduction in semantic coherence (the flow of meaning in speech) and in syntactic complexity could predict subsequent psychosis onset with high accuracy. Herein, we aimed to cross-validate these automated linguistic analytic methods in a second larger risk cohort, also English-speaking, and to discriminate speech in psychosis from normal speech. We identified an automated machine-learning speech classifier - comprising decreased semantic coherence, greater variance in that coherence, and reduced usage of possessive pronouns - that had an 83% accuracy in predicting psychosis onset (intra-protocol), a cross-validated accuracy of 79% of psychosis onset prediction in the original risk cohort (cross-protocol), and a 72% accuracy in discriminating the speech of recent-onset psychosis patients from that of healthy individuals. The classifier was highly correlated with previously identified manual linguistic predictors. Our findings support the utility and validity of automated natural language processing methods to characterize disturbances in semantics and syntax across stages of psychotic disorder. The next steps will be to apply these methods in larger risk cohorts to further test reproducibility, also in languages other than English, and identify sources of variability. This technology has the potential to improve prediction of psychosis outcome among at-risk youths and identify linguistic targets for remediation and preventive intervention. More broadly, automated linguistic analysis can be a powerful tool for diagnosis and treatment across neuropsychiatry. © 2018 World Psychiatric Association.
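A minimal sketch of the linguistic features named in the abstract: the flow of meaning approximated as cosine similarity between consecutive sentence vectors (its mean and variance), plus a possessive-pronoun rate. The embedding function here is a self-contained placeholder; the study used trained word vectors and a proper machine-learning classifier.

```python
# Coherence and pronoun features in the spirit of the classifier described above.
# sentence_vector is a placeholder pseudo-embedding so the sketch runs on its own.
import numpy as np

def sentence_vector(sentence: str, dim: int = 50) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.normal(size=dim)

def coherence_features(sentences: list[str]) -> dict:
    vecs = [sentence_vector(s) for s in sentences]
    sims = [float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
            for a, b in zip(vecs, vecs[1:])]
    words = " ".join(sentences).lower().split()
    possessives = sum(w in {"my", "your", "his", "her", "our", "their"} for w in words)
    return {"mean_coherence": float(np.mean(sims)),
            "coherence_variance": float(np.var(sims)),
            "possessive_rate": possessives / len(words)}

print(coherence_features(["I went to the store.", "My brother was there.", "We talked."]))
```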
Cross-Language Information Retrieval: An Analysis of Errors.
ERIC Educational Resources Information Center
Ruiz, Miguel E.; Srinivasan, Padmini
1998-01-01
Investigates an automatic method for Cross Language Information Retrieval (CLIR) that utilizes the multilingual Unified Medical Language System (UMLS) Metathesaurus to translate Spanish natural-language queries into English. Results indicate that for Spanish, the UMLS Metathesaurus-based CLIR method is at least equivalent to if not better than…
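The core move in this kind of CLIR is thesaurus-based query translation. A minimal sketch under invented data follows: a tiny Spanish-to-English lexicon stands in for the UMLS Metathesaurus, and untranslatable terms pass through unchanged.

```python
# Thesaurus-based query translation sketch for CLIR.
# The tiny Spanish->English lexicon is invented; the study used the UMLS Metathesaurus.
lexicon = {
    "cancer": ["cancer", "neoplasm"],
    "pulmon": ["lung"],
    "presion": ["pressure"],
    "arterial": ["arterial"],
}

def translate_query(spanish_query: str) -> list[str]:
    english_terms = []
    for term in spanish_query.lower().split():
        english_terms.extend(lexicon.get(term, [term]))   # keep unknown terms as-is
    return english_terms

print(translate_query("cancer de pulmon"))   # -> ['cancer', 'neoplasm', 'de', 'lung']
```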
Clinical and Educational Perspectives on Language Intervention for Children with Autism.
ERIC Educational Resources Information Center
Kamhi, Alan G.; And Others
The paper examines aspects of effective language intervention with autistic children. An overview is presented about the nature of language, its perception and comprehension, and the production of speech-language. Assessment strategies are considered. The second part of the paper analyzes traditional and communications-based intervention programs.…
Informal Language Learning Setting: Technology or Social Interaction?
ERIC Educational Resources Information Center
Bahrani, Taher; Sim, Tam Shu
2012-01-01
Based on the informal language learning theory, language learning can occur outside the classroom setting unconsciously and incidentally through interaction with the native speakers or exposure to authentic language input through technology. However, an EFL context lacks the social interaction which naturally occurs in an ESL context. To explore…
Applying language technology to nursing documents: pros and cons with a focus on ethics.
Suominen, Hanna; Lehtikunnas, Tuija; Back, Barbro; Karsten, Helena; Salakoski, Tapio; Salanterä, Sanna
2007-10-01
The present study discusses ethics in building and using applications based on natural language processing in electronic nursing documentation. Specifically, we first focus on the question of how patient confidentiality can be ensured in developing language technology for the nursing documentation domain. Then, we identify and theoretically analyze the ethical outcomes which arise when using natural language processing to support clinical judgement and decision-making. In total, we put forward and justify 10 claims related to ethics in applying language technology to nursing documents. A review of recent scientific articles related to ethics in electronic patient records or in the utilization of large databases was conducted. The results were then compared with ethical guidelines for nurses and the Finnish legislation covering health care and the processing of personal data. Finally, the practical experiences of the authors in applying natural language processing methods to nursing documents were appended. Patient records supplemented with natural language processing capabilities may help nurses give better, more efficient and more individualized care to their patients. In addition, language technology may improve patients' access to truthful information about their health and enhance the quality of narratives. Because of these benefits, research on the use of language technology in narratives should be encouraged. In contrast, privacy-sensitive health care documentation brings specific ethical concerns and difficulties to the natural language processing of nursing documents. Therefore, when developing natural language processing tools, patient confidentiality must be ensured. While using the tools, health care personnel should always remain responsible for clinical judgement and decision-making. One should also consider that the use of language technology in nursing narratives may threaten patients' rights by reusing documentation collected for other purposes. Applying language technology to nursing documents may, on the one hand, contribute to the quality of care but, on the other hand, threaten patient confidentiality. As an overall conclusion, natural language processing of nursing documents holds the promise of great benefits if the potential risks are taken into consideration.
Semantic Processing for Communicative Exercises in Foreign-Language Learning.
ERIC Educational Resources Information Center
Mulford, George W.
1989-01-01
Outlines the history of semantically based programs that have influenced the design of computer assisted language instruction (CALI) programs. Describes early attempts to make intelligent CALI as well as current projects, including the Foreign Language Adventure Game, developed at the University of Delaware. Describes some important…
Natural Language Processing and Game-Based Practice in iSTART
ERIC Educational Resources Information Center
Jackson, G. Tanner; Boonthum-Denecke, Chutima; McNamara, Danielle S.
2015-01-01
Intelligent Tutoring Systems (ITSs) are situated in a potential struggle between effective pedagogy and system enjoyment and engagement. iSTART, a reading strategy tutoring system in which students practice generating self-explanations and using reading strategies, employs two devices to engage the user. The first is natural language processing…
Linguistically Motivated Features for CCG Realization Ranking
ERIC Educational Resources Information Center
Rajkumar, Rajakrishnan
2012-01-01
Natural Language Generation (NLG) is the process of generating natural language text from an input, which is a communicative goal and a database or knowledge base. Informally, the architecture of a standard NLG system consists of the following modules (Reiter and Dale, 2000): content determination, sentence planning (or microplanning) and surface…
A natural language interface to databases
NASA Technical Reports Server (NTRS)
Ford, D. R.
1988-01-01
The development of a Natural Language Interface which is semantic-based and uses Conceptual Dependency representation is presented. The system was developed using Lisp and currently runs on a Symbolics Lisp machine. A key point is that the parser handles morphological analysis, which extends the range of words it can understand.
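Morphological analysis of the kind credited to the parser can be as simple as stripping known suffixes so that inflected forms map onto lexicon entries. The sketch below is illustrative only; the suffix list and lexicon are invented, and the original system worked in Lisp over a Conceptual Dependency lexicon.

```python
# Suffix-stripping sketch of morphological analysis: map inflected forms to known stems.
# LEXICON and SUFFIXES are illustrative placeholders.
LEXICON = {"launch", "orbit", "compute"}
SUFFIXES = ["ing", "ed", "es", "s"]

def analyze(word: str):
    w = word.lower()
    if w in LEXICON:
        return w
    for suffix in SUFFIXES:
        if w.endswith(suffix):
            stem = w[: -len(suffix)]
            if stem in LEXICON:
                return stem
    return None   # unknown word

print(analyze("launched"))   # -> launch
print(analyze("orbits"))     # -> orbit
```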
Two Interpretive Systems for Natural Language?
ERIC Educational Resources Information Center
Frazier, Lyn
2015-01-01
It is proposed that humans have available to them two systems for interpreting natural language. One system is familiar from formal semantics. It is a type-based system that pairs a syntactic form with its interpretation using grammatical rules of composition. This system delivers both plausible and implausible meanings. The other proposed system…
The nature of the language input affects brain activation during learning from a natural language
Plante, Elena; Patterson, Dianne; Gómez, Rebecca; Almryde, Kyle R.; White, Milo G.; Asbjørnsen, Arve E.
2015-01-01
Artificial language studies have demonstrated that learners are able to segment individual word-like units from running speech using the transitional probability information. However, this skill has rarely been examined in the context of natural languages, where stimulus parameters can be quite different. In this study, two groups of English-speaking learners were exposed to Norwegian sentences over the course of three fMRI scans. One group was provided with input in which transitional probabilities predicted the presence of target words in the sentences. This group quickly learned to identify the target words and fMRI data revealed an extensive and highly dynamic learning network. These results were markedly different from activation seen for a second group of participants. This group was provided with highly similar input that was modified so that word learning based on syllable co-occurrences was not possible. These participants showed a much more restricted network. The results demonstrate that the nature of the input strongly influenced the nature of the network that learners employ to learn the properties of words in a natural language. PMID:26257471
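The transitional-probability cue manipulated in this study can be made concrete with a short sketch: TP(a→b) = count(a,b) / count(a), and a word boundary is posited wherever the TP dips below its neighbours. The toy syllable stream below is invented and much simpler than the Norwegian sentence stimuli used in the study.

```python
# Segmentation by transitional probabilities (TPs): boundaries at local TP dips.
from collections import Counter

def segment(syllables: list[str]) -> list[str]:
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    tps = [pair_counts[(a, b)] / first_counts[a]
           for a, b in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i, syl in enumerate(syllables[1:]):
        # A local dip in TP (lower than both neighbours) is treated as a word boundary.
        dip = 0 < i < len(tps) - 1 and tps[i] < tps[i - 1] and tps[i] < tps[i + 1]
        if dip:
            words.append("".join(current))
            current = []
        current.append(syl)
    words.append("".join(current))
    return words

words_in = ["gol ab u", "pa do ti", "bi da ku"]   # toy "words", not the study's stimuli
order = [0, 1, 2, 1, 0, 2, 0, 1]
stream = [syl for w in order for syl in words_in[w].split()]
print(segment(stream))
# -> ['golabu', 'padoti', 'bidaku', 'padoti', 'golabu', 'bidaku', 'golabu', 'padoti']
```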
Choi, Jeeyae; Choi, Jeungok E
2014-01-01
To provide the best recommendations at the point of care, guidelines have been implemented in computer systems. As a prerequisite, guidelines are translated into a computer-interpretable guideline format. Since there are no specific tools for translating nursing guidelines, only a few nursing guidelines have been translated and implemented in computer systems. The Unified Modeling Language (UML) is a software modeling language known to represent end-users' perspectives well and accurately, owing to its expressive characteristics. In order to facilitate the development of computer systems for nurses' use, UML was used to translate a paper-based nursing guideline, and its ease of use and usefulness were tested through a case study of a genetic counseling guideline. UML was found to be a useful tool for nurse informaticians and sufficient for modeling a guideline in a computer program.
Introduction to the computational structural mechanics testbed
NASA Technical Reports Server (NTRS)
Lotts, C. G.; Greene, W. H.; Mccleary, S. L.; Knight, N. F., Jr.; Paulson, S. S.; Gillian, R. E.
1987-01-01
The Computational Structural Mechanics (CSM) testbed software system, based on the SPAR finite element code and the NICE system, is described. This software is denoted NICE/SPAR. NICE was developed at Lockheed Palo Alto Research Laboratory and contains data management utilities, a command language interpreter, and a command language definition for integrating engineering computational modules. SPAR is a system of programs for finite element structural analysis developed for NASA by Lockheed and Engineering Information Systems, Inc. It includes many complementary structural analysis, thermal analysis, and utility functions that communicate through a common database. The work on NICE/SPAR was motivated by requirements for a highly modular and flexible structural analysis system to use as a tool in carrying out research in computational methods and in exploring computer hardware. Analysis examples are presented which demonstrate the benefits gained from combining the NICE command language with SPAR computational modules.
ERIC Educational Resources Information Center
Pfenninger, Simone E.
2016-01-01
This study investigates the interrelation of motivation, autonomy, metacognition, and L3 gains made as a function of three months of intervention with computer software specifically designed for the private use of dyslexic Swiss German learners of Standard German as a second language (L2) and English as a third language (L3). Based on…
A Computer Assisted Method to Track Listening Strategies in Second Language Learning
ERIC Educational Resources Information Center
Roussel, Stephanie
2011-01-01
Many studies about listening strategies are based on what learners report while listening to an oral message in the second language (Vandergrift, 2003; Graham, 2006). By recording a video of the computer screen while L2 learners (L1 French) were listening to an MP3-track in German, this study uses a novel approach and recent developments in…
ERIC Educational Resources Information Center
Pu, Minran
2009-01-01
The purpose of the study was to investigate the relationship between college EFL students' autonomous learning capacity and motivation in using web-based Computer-Assisted Language Learning (CALL) in China. This study included three questionnaires: the student background questionnaire, the questionnaire on student autonomous learning capacity, and…
ERIC Educational Resources Information Center
Yue, Siwei; Wang, Xuefei
2014-01-01
Based on a corpus of 296 authentic business emails produced in computer-mediated business communication from 7 Chinese international trade enterprises, this paper addresses the language strategy applied in CMC (Computer-mediated Communication) by examining the use of hedges. With the emergence of internet, a wider range of hedges are applied…
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
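For orientation, a two-language Lotka-Volterra competition system of the kind this model builds on can be integrated numerically in a few lines. The growth rates, competition coefficients, and initial speaker fractions below are illustrative placeholders, not the impact or inheritance-rate parameters estimated in the paper.

```python
# Two-language Lotka-Volterra competition sketch:
#   dx/dt = r1*x*(1 - (x + a12*y)/K1),  dy/dt = r2*y*(1 - (y + a21*x)/K2)
# All parameter values below are illustrative, not estimates from the paper.
def simulate(r1=0.03, r2=0.02, a12=1.2, a21=0.8, K1=1.0, K2=1.0,
             x0=0.4, y0=0.6, dt=0.1, steps=2000):
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(steps):                 # simple forward-Euler integration
        dx = r1 * x * (1 - (x + a12 * y) / K1)
        dy = r2 * y * (1 - (y + a21 * x) / K2)
        x, y = x + dt * dx, y + dt * dy
        history.append((x, y))
    return history

trajectory = simulate()
print("final speaker fractions:", tuple(round(v, 3) for v in trajectory[-1]))
```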
A software architecture for multidisciplinary applications: Integrating task and data parallelism
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Mehrotra, Piyush; Vanrosendale, John; Zima, Hans
1994-01-01
Data parallel languages such as Vienna Fortran and HPF can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are of a multidisciplinary and heterogeneous nature and thus do not fit well into the data parallel paradigm. In this paper we present new Fortran 90 language extensions to fill this gap. Tasks can be spawned as asynchronous activities in a homogeneous or heterogeneous computing environment; they interact by sharing access to Shared Data Abstractions (SDA's). SDA's are an extension of Fortran 90 modules, representing a pool of common data, together with a set of Methods for controlled access to these data and a mechanism for providing persistent storage. Our language supports the integration of data and task parallelism as well as nested task parallelism and thus can be used to express multidisciplinary applications in a natural and efficient way.
Merging the Internet and Hypermedia in the English Language Arts.
ERIC Educational Resources Information Center
Reed, W. Michael; Wells, John G.
1997-01-01
Discussion of hypermedia and computer-mediated communication focuses on a project that merges a language arts Internet resource with a hypermedia-based knowledge construction approach to learning. Highlights include constructing a HyperCard-based program on Shakespeare's "Hamlet," gophers and search engines, downloading, collaborative…
Evaluating Computer-Generated Domain-Oriented Vocabularies.
ERIC Educational Resources Information Center
Damerau, Fred J.
1990-01-01
Discusses methods for automatically compiling domain-oriented vocabularies in natural language systems and describes techniques for evaluating the quality of the resulting word lists. A study is described that used subject headings from Grolier's Encyclopedia and the United Press International newswire, and filters for removing high frequency…
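One common way to compile such a domain-oriented vocabulary is to compare a term's relative frequency in a domain corpus against a general reference corpus and filter out high-frequency common words. The sketch below illustrates that idea with toy corpora standing in for the encyclopedia and newswire sources; the ratio threshold is arbitrary.

```python
# Rank candidate domain terms by relative frequency in a domain corpus versus a
# general reference corpus. Both corpora below are toy stand-ins.
from collections import Counter

domain_text = "orbit thruster orbit payload launch thruster orbit the the of"
general_text = "the of and to the of a in the of launch report the of"

def domain_vocabulary(domain: str, general: str, min_ratio: float = 2.0) -> list[str]:
    d, g = Counter(domain.split()), Counter(general.split())
    d_total, g_total = sum(d.values()), sum(g.values())
    scored = {w: (d[w] / d_total) / ((g[w] + 1) / g_total)   # +1 smooths unseen words
              for w in d}
    return sorted((w for w, s in scored.items() if s >= min_ratio),
                  key=lambda w: -scored[w])

print(domain_vocabulary(domain_text, general_text))   # -> ['orbit', 'thruster']
```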
A Grammar Library for Information Structure
ERIC Educational Resources Information Center
Song, Sanghoun
2014-01-01
This dissertation makes substantial contributions to both the theoretical and computational treatment of information structure, with an eye toward creating natural language processing applications such as multilingual machine translation systems. The aim of the present dissertation is to create a grammar library of information structure for the…
Digital Literacy and Netiquette: Awareness and Perception in EFL Learning Context
ERIC Educational Resources Information Center
Nia, Sara Farshad; Marandi, Susan
2014-01-01
With the growing popularity of digital technologies and computer-mediated communication (CMC), various types of interactive communication technology are being increasingly integrated into foreign/second language learning environments. Nevertheless, due to its nature, online communication is susceptible to misunderstandings and miscommunications,…
Knowledge Representation: A Brief Review.
ERIC Educational Resources Information Center
Vickery, B. C.
1986-01-01
Reviews different structures and techniques of knowledge representation: structure of database records and files, data structures in computer programming, syntactic and semantic structure of natural language, knowledge representation in artificial intelligence, and models of human memory. A prototype expert system that makes use of some of these…
Selecting the Best Mobile Information Service with Natural Language User Input
NASA Astrophysics Data System (ADS)
Feng, Qiangze; Qi, Hongwei; Fukushima, Toshikazu
Information services accessed via mobile phones provide information directly relevant to subscribers' daily lives and are an area of dynamic market growth worldwide. Although many information services are currently offered by mobile operators, many of the existing solutions require a unique gateway for each service, and it is inconvenient for users to have to remember a large number of such gateways. Furthermore, the Short Message Service (SMS) is very popular in China, and Chinese users would prefer to access these services in natural language via SMS. This chapter describes a Natural Language Based Service Selection System (NL3S) for use with a large number of mobile information services. The system can accept user queries in natural language and navigate the user to the required service. Since it is difficult for existing methods to achieve high accuracy and high coverage and to anticipate which other services a user might want to query, the NL3S is developed based on a Multi-service Ontology (MO) and Multi-service Query Language (MQL). The MO and MQL provide semantic and linguistic knowledge, respectively, to facilitate service selection for a user query and to provide adaptive service recommendations. Experiments show that the NL3S can achieve 75-95% accuracy and 85-95% user satisfaction for processing various styles of natural language queries. A trial involving navigation of 30 different mobile services shows that the NL3S can provide a viable commercial solution for mobile operators.
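Stripped to its core, service selection of this kind scores each registered service against the tokens of the user's query and routes to the best match. The sketch below uses an invented keyword catalogue as a stand-in for the Multi-service Ontology and ignores the MQL parsing entirely.

```python
# Keyword-overlap sketch of service selection; the catalogue stands in for the
# Multi-service Ontology and is invented here.
services = {
    "weather": {"weather", "rain", "temperature", "forecast"},
    "traffic": {"traffic", "road", "congestion", "route"},
    "flights": {"flight", "airport", "departure", "arrival"},
}

def select_service(query: str) -> str:
    tokens = set(query.lower().split())
    def overlap(name):
        return len(tokens & services[name])
    best = max(services, key=overlap)
    return best if overlap(best) > 0 else "no matching service"

print(select_service("will it rain tomorrow"))             # -> weather
print(select_service("flight departure time to beijing"))  # -> flights
```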
End-User Use of Data Base Query Language: Pros and Cons.
ERIC Educational Resources Information Center
Nicholes, Walter
1988-01-01
Man-machine interface, the concept of a computer "query," a review of database technology, and a description of the use of query languages at Brigham Young University are discussed. The pros and cons of end-user use of database query languages are explored. (Author/MLW)
A Software Development Approach for Computer Assisted Language Learning
ERIC Educational Resources Information Center
Cushion, Steve
2005-01-01
Over the last 5 years we have developed, produced, tested, and evaluated an authoring software package to produce web-based, interactive, audio-enhanced language-learning material. That authoring package has been used to produce language-learning material in French, Spanish, German, Arabic, and Tamil. We are currently working on increasing…
Developing a Multimedia, Computer-Based Spanish Placement Test
ERIC Educational Resources Information Center
Zabaleta, Francisco
2007-01-01
Placing students of a foreign language within a basic language program constitutes an ongoing problem, particularly for large university departments when they have many incoming freshmen and transfer students. This article outlines the author's experience designing and piloting a language placement test for a university level Spanish program. The…
Anxiety in Language Testing: The APTIS Case
ERIC Educational Resources Information Center
Valencia Robles, Jeannette de Fátima
2017-01-01
The requirement to hold a diploma certifying proficiency in a foreign language is increasingly common in academic and working environments. Computer-based testing has become a prevailing tendency for these and other educational purposes. Each year, large numbers of students take online language tests all over the world. In…
A Risk Management Approach to the "Insider Threat"
NASA Astrophysics Data System (ADS)
Bishop, Matt; Engle, Sophie; Frincke, Deborah A.; Gates, Carrie; Greitzer, Frank L.; Peisert, Sean; Whalen, Sean
Recent surveys indicate that the financial impact and operating losses due to insider intrusions are increasing. But these studies often disagree on what constitutes an "insider"; indeed, many define it only implicitly. In theory, appropriate selection of, and enforcement of, properly specified security policies should prevent legitimate users from abusing their access to computer systems, information, and other resources. However, even if policies could be expressed precisely, the mapping between the natural language expression of a security policy and the expression of that policy in a form that can be implemented on a computer system or network creates gaps in enforcement. This paper defines "insider" precisely, in terms of these gaps, and explores an access-based model for analyzing threats that include those usually termed "insider threats." This model enables an organization to order its resources based on the business value of each resource and of the information it contains. By identifying those users with access to high-value resources, we obtain an ordered list of users who can cause the greatest amount of damage. Concurrently, we examine psychological indicators in order to determine which users are at the greatest risk of acting inappropriately. We conclude by examining how to merge this model with one of forensic logging and auditing.
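The access-based ordering the abstract describes can be sketched directly: assign each resource a business value, then rank users by the total value they can reach. The resources, values, and access map below are invented for illustration, and the psychological-indicator side of the model is not sketched.

```python
# Rank users by the total business value of the resources they can access.
# Resource values and the access map are illustrative placeholders.
resource_value = {"payroll_db": 90, "source_repo": 70, "public_wiki": 5}
access = {
    "alice": ["payroll_db", "public_wiki"],
    "bob":   ["source_repo"],
    "carol": ["payroll_db", "source_repo", "public_wiki"],
}

def rank_users_by_exposure(access_map: dict, values: dict) -> list[tuple[str, int]]:
    exposure = {user: sum(values[r] for r in resources)
                for user, resources in access_map.items()}
    return sorted(exposure.items(), key=lambda kv: -kv[1])

for user, total in rank_users_by_exposure(access, resource_value):
    print(f"{user}: reachable value {total}")
# carol (165) tops the list, so her activity warrants the closest scrutiny.
```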