Sample records for language processing applications

  1. Natural Language Processing: Toward Large-Scale, Robust Systems.

    ERIC Educational Resources Information Center

    Haas, Stephanie W.

    1996-01-01

    Natural language processing (NLP) is concerned with getting computers to do useful things with natural language. Major applications include machine translation, text generation, information retrieval, and natural language interfaces. Reviews important developments since 1987 that have led to advances in NLP; current NLP applications; and problems…

  2. Semi-Automated Methods for Refining a Domain-Specific Terminology Base

    DTIC Science & Technology

    2011-02-01

    only as a resource for written and oral translation, but also for Natural Language Processing (NLP) applications, text retrieval, document indexing, and other knowledge management tasks. The objective of this…The National

  3. Natural Language Processing.

    ERIC Educational Resources Information Center

    Chowdhury, Gobinda G.

    2003-01-01

    Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…

  4. Survey of Natural Language Processing Techniques in Bioinformatics.

    PubMed

    Zeng, Zhiqiang; Shi, Hua; Wu, Yun; Hong, Zhiling

    2015-01-01

    Informatics methods, such as text mining and natural language processing, are often involved in bioinformatics research. In this study, we discuss text mining and natural language processing methods in bioinformatics from two perspectives. First, we aim to search for knowledge on biology, retrieve references using text mining methods, and reconstruct databases. For example, protein-protein interactions and gene-disease relationships can be mined from PubMed. Then, we analyze the applications of text mining and natural language processing techniques in bioinformatics, including predicting protein structure and function and detecting noncoding RNA. Finally, numerous methods and applications, as well as their contributions to bioinformatics, are discussed for future use by text mining and natural language processing researchers.

  5. Sciencepoetry and Language/Culture Teaching.

    ERIC Educational Resources Information Center

    Romano, James V.

    1988-01-01

    Examines Rafael Catala's notion of sciencepoetry and an application of modern scientific principles to the teaching of language and culture, the "Lange Process." This interactive language/culture learning process relates target and native languages, culture, and perceptions. (Author/LMO)

  6. A Domain Description Language for Data Processing

    NASA Technical Reports Server (NTRS)

    Golden, Keith

    2003-01-01

    We discuss an application of planning to data processing, a planning problem which poses unique challenges for domain description languages. We discuss these challenges and why the current PDDL standard does not meet them. We discuss DPADL (Data Processing Action Description Language), a language for describing planning domains that involve data processing. DPADL is a declarative, object-oriented language that supports constraints and embedded Java code, object creation and copying, explicit inputs and outputs for actions, and metadata descriptions of existing and desired data. DPADL is supported by the IMAGEbot system, which we are using to provide automation for an ecological forecasting application. We compare DPADL to PDDL and discuss changes that could be made to PDDL to make it more suitable for representing planning domains that involve data processing actions.
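
    The abstract lists DPADL's features without showing its syntax, so the following Python sketch only models the ideas it names: an action with explicit input and output data objects, constraints, and metadata describing the data. All class and field names here are invented for illustration and are not DPADL.

```python
from dataclasses import dataclass, field

# Illustrative only: DPADL's actual syntax is not given in the abstract.
# This sketch models the listed features -- actions with explicit inputs,
# outputs, constraints, and metadata descriptions of data. Names invented.

@dataclass
class DataObject:
    name: str
    metadata: dict = field(default_factory=dict)  # e.g. format, resolution

@dataclass
class Action:
    name: str
    inputs: list       # declared input data objects
    outputs: list      # data objects the action produces
    constraints: list  # predicates every input must satisfy

    def applicable(self, available):
        """An action is applicable when each declared input is available
        and satisfies all of the action's constraints."""
        by_name = {d.name: d for d in available}
        return all(
            i.name in by_name and all(c(by_name[i.name]) for c in self.constraints)
            for i in self.inputs
        )

raw = DataObject("satellite_image", {"format": "HDF"})
convert = Action(
    name="hdf_to_geotiff",
    inputs=[DataObject("satellite_image")],
    outputs=[DataObject("satellite_image_tif", {"format": "GeoTIFF"})],
    constraints=[lambda d: d.metadata.get("format") == "HDF"],
)
print(convert.applicable([raw]))  # True: an HDF input is present
```

    A planner over such descriptions would chain actions whose outputs satisfy the metadata of the desired data, which is the data-processing flavor of planning the paper describes.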

  7. Composing Models of Geographic Physical Processes

    NASA Astrophysics Data System (ADS)

    Hofer, Barbara; Frank, Andrew U.

    Processes are central for geographic information science; yet geographic information systems (GIS) lack capabilities to represent process related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, which is shown by means of an example.
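
    The equivalence of the two languages mentioned in this abstract can be made concrete with a toy example: the 1-D diffusion equation du/dt = D * d2u/dx2 rewritten as a partial difference equation and stepped explicitly on a grid. The grid size, diffusivity, and step sizes below are arbitrary illustrative choices, not from the paper.

```python
# 1-D diffusion as a partial *difference* equation:
# u_i(t+dt) = u_i + D*dt/dx^2 * (u_{i+1} - 2*u_i + u_{i-1})
# Toy parameters; boundaries are held fixed.

D = 0.1
dx, dt = 1.0, 1.0
u = [0.0, 0.0, 10.0, 0.0, 0.0]   # initial concentration spike in the middle

def step(u):
    """One explicit update of the interior grid points."""
    r = D * dt / dx ** 2
    new = u[:]                    # copy; boundary cells stay as-is
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
    return new

for _ in range(3):
    u = step(u)
print([round(v, 3) for v in u])   # the spike spreads outward symmetrically
```

    The discrete form is what a graphical process editor of the kind the paper describes could present to application specialists, one update rule per process component.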

  8. An object-oriented approach for harmonization of multimedia markup languages

    NASA Astrophysics Data System (ADS)

    Chen, Yih-Feng; Kuo, May-Chen; Sun, Xiaoming; Kuo, C.-C. Jay

    2003-12-01

    An object-oriented methodology is proposed to harmonize several different markup languages in this research. First, we adopt the Unified Modelling Language (UML) as the data model to formalize the concept and process of harmonization between eXtensible Markup Language (XML) applications. Then, we design the Harmonization eXtensible Markup Language (HXML) based on the data model and formalize the transformation between the Document Type Definitions (DTDs) of the original XML applications and HXML. The transformation between instances is also discussed. We use the harmonization of SMIL and X3D as an example to demonstrate the proposed methodology. This methodology can be generalized to various application domains.

  9. Neural Network Computing and Natural Language Processing.

    ERIC Educational Resources Information Center

    Borchardt, Frank

    1988-01-01

    Considers the application of neural network concepts to traditional natural language processing and demonstrates that neural network computing architecture can: (1) learn from actual spoken language; (2) observe rules of pronunciation; and (3) reproduce sounds from the patterns derived by its own processes. (Author/CB)

  10. Logo Talks Back.

    ERIC Educational Resources Information Center

    Bearden, Donna; Muller, Jim

    1983-01-01

    In addition to turtle graphics, the Logo programming language has list and text processing capabilities that open up opportunities for word games, language programs, word processing, and other applications. Provided are examples of these applications using both Apple and MIT Logo versions. Includes sample interactive programs. (JN)

  11. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    NASA Astrophysics Data System (ADS)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types, such as filled pauses, will require future research.
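
    The classification step described here can be illustrated with a deliberately simplified sketch: one Gaussian per class instead of a full HMM, and a single invented acoustic feature standing in for the paper's feature set. All numbers below are made up for illustration.

```python
import math

# Toy LSS/NLSS classifier: compare per-class Gaussian log-likelihoods of a
# segment's feature values. This is a one-state simplification of the HMM
# approach described above; the feature and model parameters are invented.

def gaussian_loglik(x, mean, var):
    """Log-likelihood of scalar feature x under a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(segment_features, models):
    """Pick the class whose model gives the highest total log-likelihood."""
    scores = {}
    for label, (mean, var) in models.items():
        scores[label] = sum(gaussian_loglik(x, mean, var) for x in segment_features)
    return max(scores, key=scores.get)

# Hypothetical per-class models: (mean, variance) of one acoustic feature.
models = {"LSS": (0.8, 0.04), "NLSS": (0.2, 0.04)}

print(classify([0.75, 0.82, 0.78], models))  # LSS: features near 0.8
print(classify([0.15, 0.25, 0.22], models))  # NLSS: features near 0.2
```

    A real system would model feature sequences with HMM state transitions rather than scoring frames independently, but the likelihood comparison is the same in spirit.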

  12. The Application of Natural Language Processing to Augmentative and Alternative Communication

    ERIC Educational Resources Information Center

    Higginbotham, D. Jeffery; Lesher, Gregory W.; Moulton, Bryan J.; Roark, Brian

    2012-01-01

    Significant progress has been made in the application of natural language processing (NLP) to augmentative and alternative communication (AAC), particularly in the areas of interface design and word prediction. This article will survey the current state-of-the-science of NLP in AAC and discuss its future applications for the development of next…

  13. Hybrid Applications Of Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Borchardt, Gary C.

    1988-01-01

    STAR, Simple Tool for Automated Reasoning, is an interactive, interpreted programming language for development and operation of artificial-intelligence application systems. Couples symbolic processing with compiled-language functions and data structures. Written in C language and currently available in UNIX version (NPO-16832) and VMS version (NPO-16965).

  14. The application of natural language processing to augmentative and alternative communication.

    PubMed

    Higginbotham, D Jeffery; Lesher, Gregory W; Moulton, Bryan J; Roark, Brian

    2011-01-01

    Significant progress has been made in the application of natural language processing (NLP) to augmentative and alternative communication (AAC), particularly in the areas of interface design and word prediction. This article will survey the current state-of-the-science of NLP in AAC and discuss its future applications for the development of the next generation of AAC technology.

  15. Paradigms of Evaluation in Natural Language Processing: Field Linguistics for Glass Box Testing

    ERIC Educational Resources Information Center

    Cohen, Kevin Bretonnel

    2010-01-01

    Although software testing has been well-studied in computer science, it has received little attention in natural language processing. Nonetheless, a fully developed methodology for glass box evaluation and testing of language processing applications already exists in the field methods of descriptive linguistics. This work lays out a number of…

  16. Clinical Natural Language Processing in languages other than English: opportunities and challenges.

    PubMed

    Névéol, Aurélie; Dalianis, Hercules; Velupillai, Sumithra; Savova, Guergana; Zweigenbaum, Pierre

    2018-03-30

    Natural language processing applied to clinical text or aimed at a clinical outcome has been thriving in recent years. This paper offers the first broad overview of clinical Natural Language Processing (NLP) for languages other than English. Recent studies are summarized to offer insights and outline opportunities in this area. We envision three groups of intended readers: (1) NLP researchers leveraging experience gained in other languages, (2) NLP researchers faced with establishing clinical text processing in a language other than English, and (3) clinical informatics researchers and practitioners looking for resources in their languages in order to apply NLP techniques and tools to clinical practice and/or investigation. We review work in clinical NLP in languages other than English. We classify these studies into three groups: (i) studies describing the development of new NLP systems or components de novo, (ii) studies describing the adaptation of NLP architectures developed for English to another language, and (iii) studies focusing on a particular clinical application. We show the advantages and drawbacks of each method, and highlight the appropriate application context. Finally, we identify major challenges and opportunities that will affect the impact of NLP on clinical practice and public health studies in a context that encompasses English as well as other languages.

  17. STAR - A computer language for hybrid AI applications

    NASA Technical Reports Server (NTRS)

    Borchardt, G. C.

    1986-01-01

    Constructing Artificial Intelligence application systems which rely on both symbolic and non-symbolic processing places heavy demands on the communication of data between dissimilar languages. This paper describes STAR (Simple Tool for Automated Reasoning), a computer language for the development of AI application systems which supports the transfer of data structures between a symbolic level and a non-symbolic level defined in languages such as FORTRAN, C and PASCAL. The organization of STAR is presented, followed by the description of an application involving STAR in the interpretation of airborne imaging spectrometer data.

  18. Practical Classroom Applications of Language Experience: Looking Back, Looking Forward.

    ERIC Educational Resources Information Center

    Nelson, Olga G., Ed.; Linek, Wayne M., Ed.

    The 38 essays in this book look back at language experience as an educational approach, provide practical classroom applications, and reconceptualize language experience as an overarching education process. Classroom teachers and reading specialists describe strategies in use in a variety of classroom settings and describe ways to integrate…

  19. Language Processing within the Striatum: Evidence from a PET Correlation Study in Huntington's Disease

    ERIC Educational Resources Information Center

    Teichmann, Marc; Gaura, Veronique; Demonet, Jean-Francois; Supiot, Frederic; Delliaux, Marie; Verny, Christophe; Renou, Pierre; Remy, Philippe; Bachoud-Levi, Anne-Catherine

    2008-01-01

    The role of sub-cortical structures in language processing, and more specifically of the striatum, remains controversial. In line with psycholinguistic models stating that language processing implies both the recovery of lexical information and the application of combinatorial rules, the striatum has been claimed to be involved either in the…

  20. Communication in science.

    PubMed

    Deda, H; Yakupoglu, H

    2002-01-01

    Science must have a common language. For centuries, Latin served this role, but the progress in computer technology and the internet world through the last 20 years began to produce a new language for the new century: the computer language. The information masses that need data language standardization are the following: digital libraries and medical education systems, consumer health informatics, World Wide Web applications, database systems, medical language processing, automatic indexing systems, image processing units, telemedicine, and New Generation Internet (NGI).

  1. Behavioral Signal Processing: Deriving Human Behavioral Informatics From Speech and Language: Computational techniques are presented to analyze and model expressed and perceived human behavior-variedly characterized as typical, atypical, distressed, and disordered-from speech and language cues and their applications in health, commerce, education, and beyond.

    PubMed

    Narayanan, Shrikanth; Georgiou, Panayiotis G

    2013-02-07

    The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well-being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion.

  2. Sound Evidence: The Missing Piece of the Jigsaw in Formulaic Language Research

    ERIC Educational Resources Information Center

    Lin, Phoebe M. S.

    2012-01-01

    With the ever increasing number of studies on formulaic language, we are beginning to learn more about the processing of formulaic language (e.g. Ellis et al. 2008; Siyanova et al. 2011), its use in speech (e.g. Aijmer 1996; Wood 2012) and writing (e.g. Hyland 2008a, 2008b) and its application in natural language processing (e.g. Tschichold 2000).…

  3. Review of Knowledge Enhanced Electronic Logic (KEEL) Technology

    DTIC Science & Technology

    2016-09-01

    compiled. Two KEEL Engine processing models are available for most languages: The "Normal Model" processes information as if it was processed on a... language also makes it easy to "see" the functional relationships and the dynamic (interactive) nature of the language, allows one to interact with...for the Accelerated Processing Model (Patent number 7,512,581 (3/31/2009)). In June 2006, application US 11/446,801 was submitted to support

  4. Learning Foreign Languages Using Mobile Applications

    ERIC Educational Resources Information Center

    Gafni, Ruti; Achituv, Dafni Biran; Rachmani, Gila Joyce

    2017-01-01

    Aim/Purpose: This study examines how the use of a Mobile Assisted Language Learning (MALL) application influences the learners' attitudes towards the process of learning, and more specifically in voluntary and mandatory environments. Background: Mobile devices and applications, which have become an integral part of our lives, are used for…

  5. Parallel Signal Processing and System Simulation using aCe

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2003-01-01

    Recently, networked and cluster computation have become very popular for both signal processing and system simulation. The new language aCe is ideally suited for parallel signal processing applications and system simulation, since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, this new C-based parallel language for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures, providing them with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we will focus on some fundamental features of aCe and present a signal processing application (FFT).
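
    The FFT mentioned here is a natural showcase for explicit concurrency: in the radix-2 Cooley-Tukey recursion, the even- and odd-indexed half-transforms are independent of each other. The sketch below is plain sequential Python, not aCe; the comments mark where a parallel language could schedule work concurrently.

```python
import cmath

# Minimal radix-2 Cooley-Tukey FFT (input length must be a power of two).
# Sequential sketch: the two recursive calls are independent, which is
# exactly the concurrency a parallel language can express explicitly.

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    # These two half-transforms do not depend on each other,
    # so they could be computed concurrently.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

# A constant (DC) signal puts all its energy in bin 0.
print(fft([1, 1, 1, 1]))  # ≈ [4, 0, 0, 0]
```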

  6. Managing Risk in Mobile Applications with Formal Security Policies

    DTIC Science & Technology

    2013-04-01

    Alternatively, Breaux and Powers (2009) found the Business Process Modeling Notation (BPMN), a declarative language for describing business processes, to be...the Business Process Execution Language (BPEL), preferred as the candidate formal semantics for BPMN, only works for limited classes of BPMN models

  7. Praxis language reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, J.H.

    1981-01-01

    This document is a language reference manual for the programming language Praxis. The document contains the specifications that must be met by any compiler for the language. The Praxis language was designed for systems programming in real-time process applications. Goals for the language and its implementations are: (1) highly efficient code generated by the compiler; (2) program portability; (3) completeness, that is, all programming requirements can be met by the language without needing an assembler; and (4) separate compilation to aid in design and management of large systems. The language does not provide any facilities for input/output, stack and queue handling, string operations, parallel processing, or coroutine processing. These features can be implemented as routines in the language, using machine-dependent code to take advantage of facilities in the control environment on different machines.

  8. We Need to Communicate! Helping Hearing Parents of Deaf Children Learn American Sign Language

    ERIC Educational Resources Information Center

    Weaver, Kimberly A.; Starner, Thad

    2011-01-01

    Language immersion from birth is crucial to a child's language development. However, language immersion can be particularly challenging for hearing parents of deaf children to provide as they may have to overcome many difficulties while learning American Sign Language (ASL). We are in the process of creating a mobile application to help hearing…

  9. Behavioral Signal Processing: Deriving Human Behavioral Informatics From Speech and Language

    PubMed Central

    Narayanan, Shrikanth; Georgiou, Panayiotis G.

    2013-01-01

    The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well-being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion. PMID:24039277

  10. Contracting for Computer Software in Standardized Computer Languages

    PubMed Central

    Brannigan, Vincent M.; Dayhoff, Ruth E.

    1982-01-01

    The interaction between standardized computer languages and contracts for programs which use these languages is important to the buyer or seller of software. The rationale for standardization, the problems in standardizing computer languages, and the difficulties of determining whether the product conforms to the standard are issues which must be understood. The contract law processes of delivery, acceptance testing, acceptance, rejection, and revocation of acceptance are applicable to the contracting process for standard language software. Appropriate contract language is suggested for requiring strict compliance with a standard, and an overview of remedies is given for failure to comply.

  11. STAR (Simple Tool for Automated Reasoning): Tutorial guide and reference manual

    NASA Technical Reports Server (NTRS)

    Borchardt, G. C.

    1985-01-01

    STAR is an interactive, interpreted programming language for the development and operation of Artificial Intelligence application systems. The language is intended for use primarily in the development of software application systems which rely on a combination of symbolic processing, central to the vast majority of AI algorithms, with routines and data structures defined in compiled languages such as C, FORTRAN and PASCAL. References to routines and data structures defined in compiled languages are intermixed with symbolic structures in STAR, resulting in a hybrid operating environment in which symbolic and non-symbolic processing and organization of data may interact to a high degree within the execution of particular application systems. The STAR language was developed in the course of a project involving AI techniques in the interpretation of imaging spectrometer data and is derived in part from a previous language called CLIP. The interpreter for STAR is implemented as a program defined in the language C and has been made available for distribution in source code form through NASA's Computer Software Management and Information Center (COSMIC). Contained within this report are the STAR Tutorial Guide, which introduces the language in a step-by-step manner, and the STAR Reference Manual, which provides a detailed summary of the features of STAR.
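
    STAR's defining feature, per this abstract, is intermixing references to compiled-language routines with symbolic structures. The sketch below is an analogy in Python, not STAR syntax: a symbolic expression tree holds operator symbols alongside direct references to C-implemented routines (Python's math module), and a small interpreter walks the tree.

```python
import math

# Analogy only (not STAR): symbolic organization on top, compiled
# evaluation below. "call" nodes hand off to a compiled routine; "add"
# and "num" nodes are handled symbolically by the interpreter.

expr = ("add", ("call", math.sin, ("num", 0.0)), ("num", 2.0))

def evaluate(node):
    tag = node[0]
    if tag == "num":
        return node[1]
    if tag == "call":                 # dispatch to a compiled routine
        return node[1](evaluate(node[2]))
    if tag == "add":                  # symbolic combination
        return evaluate(node[1]) + evaluate(node[2])
    raise ValueError(f"unknown node type: {tag}")

print(evaluate(expr))  # 2.0, i.e. sin(0.0) + 2.0
```

    In STAR the compiled side would be routines and data structures defined in C, FORTRAN, or PASCAL rather than library functions of the host language, but the hybrid execution pattern is the same.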

  12. An overview of computer-based natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1983-01-01

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines in natural language (like English, Japanese, or German, in contrast to formal computer languages). The doors that such an achievement can open have made this a major research area in Artificial Intelligence and Computational Linguistics. Commercial natural language interfaces to computers have recently entered the market, and the future looks bright for other applications as well. This report reviews the basic approaches to such systems, the techniques utilized, applications, the state of the art of the technology, issues and research requirements, the major participants, and, finally, future trends and expectations. It is anticipated that this report will prove useful to engineering and research managers, potential users, and others who will be affected by this field as it unfolds.

  13. A Novel Approach to Creating Disambiguated Multilingual Dictionaries

    ERIC Educational Resources Information Center

    Boguslavsky, Igor; Cardenosa, Jesus; Gallardo, Carolina

    2009-01-01

    Multilingual lexicons are needed in various applications, such as cross-lingual information retrieval, machine translation, and some others. Often, these applications suffer from the ambiguity of dictionary items, especially when an intermediate natural language is involved in the process of the dictionary construction, since this language adds…

  14. The Application of Morpho-Syntactic Language Processing to Effective Phrase Matching.

    ERIC Educational Resources Information Center

    Sheridan, Paraic; Smeaton, Alan F.

    1992-01-01

    Describes a process of morpho-syntactic language analysis for information retrieval. Tree Structured Analytics (TSA) used for text representation is summarized; the matching process developed for such structures is outlined with an example appended; and experiments carried out to evaluate the effectiveness of TSA matching are discussed. (26…

  15. Language Awareness as a Challenge for Business

    ERIC Educational Resources Information Center

    Hunerberg, Reinhard; Geile, Andrea

    2012-01-01

    The following contribution is a meta-analysis of the language awareness discipline from a practical application point of view. It is based on a keynote speech at the "10th International Conference of the Association for Language Awareness," and deals with the implications of business requirements for language use in communication processes. The…

  16. A Tutorial on Techniques and Applications for Natural Language Processing

    DTIC Science & Technology

    1983-10-17

    mentioned above as specific to context-free grammars were tackled by linguists, in particular Chomsky [21, 22], through Transformational Grammar. As shown...17 October 1983, Department of Computer Science, Carnegie-Mellon University...A Tutorial on Techniques and Applications for Natural Language Processing. Philip J. Hayes and Jaime G. Carbonell, Carnegie-Mellon University, 17

  17. The applicability of normalisation process theory to speech and language therapy: a review of qualitative research on a speech and language intervention.

    PubMed

    James, Deborah M

    2011-08-12

    The Bercow review found a high level of public dissatisfaction with speech and language services for children. Children with speech, language, and communication needs (SLCN) often have chronic complex conditions that require provision from health, education, and community services. Speech and language therapists are a small group of Allied Health Professionals with a specialist skill-set that equips them to work with children with SLCN. They work within and across the diverse range of public service providers. The aim of this review was to explore the applicability of Normalisation Process Theory (NPT) to the case of speech and language therapy. A review of qualitative research on a successfully embedded speech and language therapy intervention was undertaken to test the applicability of NPT. The review focused on two of the collective action elements of NPT (relational integration and interaction workability) using all previously published qualitative data from both parents and practitioners' perspectives on the intervention. The synthesis of the data based on the Normalisation Process Model (NPM) uncovered strengths in the interpersonal processes between the practitioners and parents, and weaknesses in how the accountability of the intervention is distributed in the health system. The analysis based on the NPM uncovered interpersonal processes between the practitioners and parents that were likely to have given rise to successful implementation of the intervention. In previous qualitative research on this intervention where the Medical Research Council's guidance on developing a design for a complex intervention had been used as a framework, the interpersonal work within the intervention had emerged as a barrier to implementation of the intervention. 
It is suggested that the design of services for children and families needs to extend beyond the consideration of benefits and barriers to embrace the social processes that appear to afford success in embedding innovation in healthcare.

  18. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1994-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. Two major advantages of the languages and system are that they allow the programming and control of multiple arms, and that they support inward/outward spatial recursions in which every computational step can be related to a transformation from one point on the mechanical robot to another.
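    The two-level translation described above can be sketched schematically. The following Python fragment is a loose illustration only: the operator names (SOLVE_IK, TRAJECTORY, APPROACH, CLOSE_GRIPPER) are invented stand-ins, not the patent's actual spatial operator algebra.

```python
# Hypothetical sketch: robot-oriented statements (MOVE, GRASP) are
# expanded into lower-level "spatial operator" statements, which a
# final stage would translate to machine-level commands. All operator
# names below are invented for illustration.

def translate_application_statement(stmt):
    """Expand one robot-oriented statement into lower-level statements."""
    op, *args = stmt.split()
    if op == "MOVE":
        target = args[0]
        # A MOVE might expand into a kinematics solve plus a trajectory.
        return [f"SOLVE_IK {target}", f"TRAJECTORY joints -> {target}"]
    if op == "GRASP":
        obj = args[0]
        return [f"APPROACH {obj}", f"CLOSE_GRIPPER {obj}"]
    raise ValueError(f"unknown operator: {op}")

def compile_program(statements):
    """Translate a whole robot-oriented program to the lower level."""
    low_level = []
    for stmt in statements:
        low_level.extend(translate_application_statement(stmt))
    return low_level

program = ["MOVE bin_A", "GRASP widget"]
print(compile_program(program))
```

    A real system would add a third stage translating these statements to machine code, plus the simulation-mode check the abstract mentions.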

  19. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1996-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. Two major advantages of the languages and system are that they allow the programming and control of multiple arms, and that they support inward/outward spatial recursions in which every computational step can be related to a transformation from one point on the mechanical robot to another.

  20. Technologies for Language Assessment.

    ERIC Educational Resources Information Center

    Burstein, Jill; And Others

    1996-01-01

    Reviews current and developing technology uses that are relevant to language assessment and discusses examples of recent linguistic applications from the laboratory at the Educational Testing Service. The processes of language test development are described and the functions they serve from the perspective of a large testing organization are…

  1. Artificial Intelligence and CALL.

    ERIC Educational Resources Information Center

    Underwood, John H.

    The potential application of artificial intelligence (AI) to computer-assisted language learning (CALL) is explored. Two areas of AI that hold particular interest to those who deal with language meaning--knowledge representation and expert systems, and natural-language processing--are described and examples of each are presented. AI contribution…

  2. What's So Hard about Understanding Language?

    ERIC Educational Resources Information Center

    Read, Walter; And Others

    A discussion of the application of artificial intelligence to natural language processing looks at several problems in language comprehension, involving semantic ambiguity, anaphoric reference, and metonymy. Examples of these problems are cited, and the importance of the computational approach in analyzing them is explained. The approach applies…

  3. 12 CFR 516.80 - What language must I use in my publication?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 5 2011-01-01 2011-01-01 false What language must I use in my publication? 516... APPLICATION PROCESSING PROCEDURES Publication Requirements § 516.80 What language must I use in my publication? (a) English. You must publish the notice in a newspaper printed in the English language. (b) Other...

  4. Natural language processing-based COTS software and related technologies survey.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stickland, Michael G.; Conrad, Gregory N.; Eaton, Shelley M.

    Natural language processing-based knowledge management software, traditionally developed for security organizations, is now becoming commercially available. An informal survey was conducted to discover and examine current NLP and related technologies and potential applications for information retrieval, information extraction, summarization, categorization, terminology management, link analysis, and visualization for possible implementation at Sandia National Laboratories. This report documents our current understanding of the technologies, lists software vendors and their products, and identifies potential applications of these technologies.

  5. The research of computer multimedia assistant in college English listening

    NASA Astrophysics Data System (ADS)

    Zhang, Qian

    2012-04-01

    With the development of network information technology, education faces increasingly serious challenges. Computer multimedia applications break with traditional foreign language teaching and bring new challenges and opportunities to education. Through multimedia, the teaching process is enriched with animation, images, voice, and text, which can improve learners' initiative and motivation and greatly increase learning efficiency. Traditional foreign language teaching relies on text-based learning; with this method, theoretical performance is good but practical application is weak. Despite the long use of computer multimedia in foreign language teaching, many teachers still hold prejudices against it, so the method has not achieved its full effect. For all of these reasons, this research has significant implications for improving the quality of foreign language teaching.

  6. My Personal Mobile Language Learning Environment: An Exploration and Classification of Language Learning Possibilities Using the iPhone

    ERIC Educational Resources Information Center

    Perifanou, Maria A.

    2011-01-01

    Mobile devices can motivate learners through moving language learning from predominantly classroom-based contexts into contexts that are free from time and space. The increasing development of new applications can offer valuable support to the language learning process and can provide a basis for a new self regulated and personal approach to…

  7. Language-Switching Costs in Bilingual Mathematics Learning

    ERIC Educational Resources Information Center

    Grabner, Roland H.; Saalbach, Henrik; Eckstein, Doris

    2012-01-01

    Behavioral studies on bilingual learning have revealed cognitive costs (lower accuracy and/or higher processing time) when the language of application differs from the language of learning. The aim of this functional magnetic resonance imaging (fMRI) study was to provide insights into the cognitive underpinnings of these costs (so-called…

  8. 7 CFR 253.5 - State agency requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... households which speak the same non-English language and which do not contain adult(s) fluent in English as a second language. If the non-English language is spoken but not written, the State agency shall... sufficient bilingual staff for the timely processing of non-English speaking applicants. (3) The State agency...

  9. Survey of Event Processing

    DTIC Science & Technology

    2007-12-01

    A Brief History of Event Processing ... history of event processing. The Applications section defines several application domains and use cases for event processing technology. ... The terms “subscription” and “subscription language” will be used where some will often use “(continuous) query” or “query language.”

  10. Advances in natural language processing.

    PubMed

    Hirschberg, Julia; Manning, Christopher D

    2015-07-17

    Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. Copyright © 2015, American Association for the Advancement of Science.

  11. Multitasking-Pascal extensions solve concurrency problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackie, P.H.

    1982-09-29

    To avoid deadlock (one process waiting for a resource that another process can't release) and indefinite postponement (one process being continually denied a resource request) in a multitasking-system application, it is possible to use a high-level development language with built-in concurrency handlers. Parallel Pascal is one such language; it extends standard Pascal with special task synchronizers: a new data type called signal, new system procedures called wait and send, and a Boolean function termed awaited. To illustrate the language's use, the author examines the problems it helps solve.
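    The signal/wait/send/awaited synchronizers described above can be mimicked in other languages. The following is a rough Python analogy (not Parallel Pascal) built on `threading.Condition`; the `Signal` class and its counting behavior are this sketch's assumptions, not a specification of Parallel Pascal's semantics.

```python
# A Python analogy of Parallel Pascal's task synchronizers: wait()
# blocks until a send() arrives, and awaited() reports whether any
# process is currently blocked in wait().
import threading

class Signal:
    def __init__(self):
        self._cond = threading.Condition()
        self._pending = 0      # sends not yet consumed by a wait
        self._waiters = 0      # processes currently blocked in wait()

    def wait(self):
        with self._cond:
            self._waiters += 1
            while self._pending == 0:
                self._cond.wait()
            self._pending -= 1
            self._waiters -= 1

    def send(self):
        with self._cond:
            self._pending += 1
            self._cond.notify()

    def awaited(self):
        with self._cond:
            return self._waiters > 0

ready = Signal()
results = []

def worker():
    ready.wait()               # block until the main thread sends
    results.append("done")

t = threading.Thread(target=worker)
t.start()
ready.send()
t.join()
print(results)                 # -> ['done']
```

    Because every send is eventually consumed by exactly one wait, a process can never be postponed indefinitely behind a lost notification.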

  12. Natural language processing pipelines to annotate BioC collections with an application to the NCBI disease corpus

    PubMed Central

    Comeau, Donald C.; Liu, Haibin; Islamaj Doğan, Rezarta; Wilbur, W. John

    2014-01-01

    BioC is a new format and associated code libraries for sharing text and annotations. We have implemented BioC natural language preprocessing pipelines in two popular programming languages: C++ and Java. The current implementations interface with the well-known MedPost and Stanford natural language processing tool sets. The pipeline functionality includes sentence segmentation, tokenization, part-of-speech tagging, lemmatization and sentence parsing. These pipelines can be easily integrated along with other BioC programs into any BioC compliant text mining systems. As an application, we converted the NCBI disease corpus to BioC format, and the pipelines have successfully run on this corpus to demonstrate their functionality. Code and data can be downloaded from http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net PMID:24935050
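    The first two pipeline stages named above, sentence segmentation and tokenization, can be illustrated in miniature. This is plain Python with naive regular expressions, not the BioC, MedPost, or Stanford tool APIs.

```python
# Minimal illustration of two NLP pipeline stages: sentence
# segmentation followed by tokenization. Real pipelines use trained
# models; these regex rules are deliberately naive.
import re

def split_sentences(text):
    """Naive segmentation: split after ., ! or ? followed by whitespace."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def tokenize(sentence):
    """Naive tokenization: words and punctuation as separate tokens."""
    return re.findall(r"\w+|[^\w\s]", sentence)

text = "BioC shares annotations. Pipelines run on the corpus."
for sent in split_sentences(text):
    print(tokenize(sent))
```

    Later stages (part-of-speech tagging, lemmatization, parsing) would consume these token lists, which is why pipeline components compose so naturally.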

  13. Applications of formal simulation languages in the control and monitoring subsystems of Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Lacovara, R. C.

    1990-01-01

    The notions, benefits, and drawbacks of numeric simulation are introduced. Two formal simulation languages, Simscript and Modsim, are introduced. The capabilities of each are discussed briefly, and then the two programs are compared. The use of simulation in the process of design engineering for the Control and Monitoring System (CMS) for Space Station Freedom is discussed. The application of the formal simulation languages to the CMS design is presented, and recommendations are made as to their use.

  14. Communicative Discourse in Second Language Classrooms: From Building Skills to Becoming Skillful

    ERIC Educational Resources Information Center

    Suleiman, Mahmoud

    2013-01-01

    Communicative discourse is a dynamic, natural process that requires the application of a wide range of skills and strategies. In particular, linguistic discourse and the interaction process have a huge impact on promoting literacy and academic skills in all students, especially English language learners (ELLs). Using interactive…

  15. A brief description and comparison of programming languages FORTRAN, ALGOL, COBOL, PL/1, and LISP 1.5 from a critical standpoint

    NASA Technical Reports Server (NTRS)

    Mathur, F. P.

    1972-01-01

    Several common higher-level programming languages are described. FORTRAN, ALGOL, COBOL, PL/1, and LISP 1.5 are summarized and compared. FORTRAN is the most widely used scientific programming language. ALGOL is a more powerful language for scientific programming. COBOL is used for most commercial programming applications. LISP 1.5 is primarily a list-processing language. PL/1 attempts to combine the desirable features of FORTRAN, ALGOL, and COBOL into a single language.

  16. Adapting existing natural language processing resources for cardiovascular risk factors identification in clinical notes.

    PubMed

    Khalifa, Abdulrahman; Meystre, Stéphane

    2015-12-01

    The 2014 i2b2 natural language processing shared task focused on identifying cardiovascular risk factors such as high blood pressure, high cholesterol levels, obesity and smoking status among other factors found in health records of diabetic patients. In addition, the task involved detecting medications and time information associated with the extracted data. This paper presents the development and evaluation of a natural language processing (NLP) application conceived for this i2b2 shared task. For increased efficiency, the application's main components were adapted from two existing NLP tools implemented in the Apache UIMA framework: Textractor (for dictionary-based lookup) and cTAKES (for preprocessing and smoking status detection). The application achieved a final (micro-averaged) F1-measure of 87.5% on the final evaluation test set. Our attempt was mostly based on existing tools adapted with minimal changes and allowed for satisfactory performance with limited development effort. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge based, cooperative decision making model utilizing both rule based and procedural experts.

  18. Clips as a knowledge based language

    NASA Technical Reports Server (NTRS)

    Harrington, James B.

    1987-01-01

    CLIPS is a language for writing expert systems applications on a personal or small computer. Here, the CLIPS programming language is described and compared to three other artificial intelligence (AI) languages (LISP, Prolog, and OPS5) with regard to the processing they provide for the implementation of a knowledge based system (KBS). A discussion is given on how CLIPS would be used in a control system.
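    CLIPS programs are sets of rules that fire against a fact base. As a loose illustration of that style (in Python, not CLIPS syntax), here is a tiny forward-chaining loop; the rules and fact names are invented for this sketch.

```python
# Toy forward-chaining engine in the spirit of a rule-based KBS:
# a rule fires when its condition facts are all present, asserting
# its conclusion as a new fact, until nothing more can fire.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires, asserting a new fact
                changed = True
    return facts

rules = [
    ({"temperature_high"}, "alarm"),
    ({"alarm", "operator_present"}, "shutdown_initiated"),
]
print(forward_chain({"temperature_high", "operator_present"}, rules))
```

    A production system like CLIPS adds efficient pattern matching (the Rete algorithm) and conflict resolution on top of this basic match-fire cycle.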

  19. Processing of Regular and Irregular Past Tense Morphology in Highly Proficient Second Language Learners of English: A Self-Paced Reading Study

    ERIC Educational Resources Information Center

    Pliatsikas, Christos; Marinis, Theodoros

    2013-01-01

    Dual-system models suggest that English past tense morphology involves two processing routes: rule application for regular verbs and memory retrieval for irregular verbs. In second language (L2) processing research, Ullman suggested that both verb types are retrieved from memory, but more recently Clahsen and Felser and Ullman argued that past…

  20. Transferred L1 Strategies and L2 Syntactic Structure in L2 Sentence Comprehension.

    ERIC Educational Resources Information Center

    Koda, Keiko

    1993-01-01

    The application of language processing skills between 2 languages with dissimilar morphosyntactic features was investigated with 72 American university students learning Japanese. Results suggest that learners' first- and second-language knowledge both play a significant role and that the linguistic knowledge and coding capability for text…

  1. Automatic Item Generation via Frame Semantics: Natural Language Generation of Math Word Problems.

    ERIC Educational Resources Information Center

    Deane, Paul; Sheehan, Kathleen

    This paper is an exploration of the conceptual issues that have arisen in the course of building a natural language generation (NLG) system for automatic test item generation. While natural language processing techniques are applicable to general verbal items, mathematics word problems are particularly tractable targets for natural language…

  2. A Morphological Analyzer for Vocalized or Not Vocalized Arabic Language

    NASA Astrophysics Data System (ADS)

    El Amine Abderrahim, Med; Breksi Reguig, Fethi

    This research presents the realization of a morphological analyzer for the Arabic language (vocalized or not vocalized). The analyzer is based upon our object model for Arabic Natural Language Processing (NLP) and can be exploited by NLP applications such as machine translation, orthographic correction, and information retrieval.

  3. Language processing within the striatum: evidence from a PET correlation study in Huntington's disease.

    PubMed

    Teichmann, Marc; Gaura, Véronique; Démonet, Jean-François; Supiot, Frédéric; Delliaux, Marie; Verny, Christophe; Renou, Pierre; Remy, Philippe; Bachoud-Lévi, Anne-Catherine

    2008-04-01

    The role of sub-cortical structures in language processing, and more specifically of the striatum, remains controversial. In line with psycholinguistic models stating that language processing implies both the recovery of lexical information and the application of combinatorial rules, the striatum has been claimed to be involved either in the former component or in the latter. The present study reconciles these conflicting views by showing the striatum's involvement in both language processes, depending on distinct striatal sub-regions. Using PET scanning in a model of striatal disorders, namely Huntington's disease (HD), we correlated metabolic data of 31 early stage HD patients regarding different striatal sub-regions with behavioural scores on three rule/lexicon tasks drawn from word morphology, syntax and from a non-linguistic domain, namely arithmetic. Behavioural results reflected impairment on both processing aspects, while deficits predominated on rule application. Both correlated with the left striatum but involved distinct striatal sub-regions. We suggest that the left striatum encompasses linguistic and arithmetic circuits, which differ with respect to their anatomical and functional specification, comprising ventrally located regions dedicated to rule computations and more dorsal portions pertaining to lexical devices.

  4. Innovative Second Language Speaking Practice with Interactive Videos in a Rich Internet Application Environment

    ERIC Educational Resources Information Center

    Pereira, Juan A.; Sanz-Santamaría, Silvia; Montero, Raúl; Gutiérrez, Julián

    2012-01-01

    Attaining a satisfactory level of oral communication in a second language is a laborious process. In this action research paper we describe a new method applied through the use of interactive videos and the Babelium Project Rich Internet Application (RIA), which allows students to practice speaking skills through a variety of exercises. We present…

  5. pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2014-01-01

    This work presents pWeb, a new language and compiler for parallelizing client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled unprecedented applications on the web. Low performance of the web browser, however, remains the bottleneck for computationally intensive applications compared to native ones, including visualization of complex scenes, real-time physical simulations, and image processing. The proposed language is built upon web workers for multithreaded programming in HTML5. The language provides fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
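    The fork/join model that pWeb layers on top of web workers can be illustrated in another language. The sketch below uses Python's thread pool (not pWeb): "fork" submits independent tasks, and "join" collects all their results before the program continues.

```python
# Fork/join illustration: fork N independent tasks, then join on all
# of them before proceeding. The per-element task is a stand-in for
# compute-intensive work such as shading or physics.
from concurrent.futures import ThreadPoolExecutor

def shade_pixel(i):
    # Stand-in for a compute-intensive per-element computation.
    return i * i

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(shade_pixel, i) for i in range(8)]   # fork
    results = [f.result() for f in futures]                     # join

print(results)  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

    Plain web workers give you the fork (spawn a worker) but no structured join, which is the gap the pWeb compiler fills.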

  6. Processing Academic Language through Four Corners Vocabulary Chart Applications

    ERIC Educational Resources Information Center

    Smith, Sarah; Sanchez, Claudia; Betty, Sharon; Davis, Shiloh

    2016-01-01

    4 Corners Vocabulary Charts (FCVCs) are explored as a multipurpose vehicle for processing academic language in a 5th-grade classroom. FCVCs typically display a vocabulary word, an illustration of the word, synonyms associated with the word, a sentence using a given vocabulary word, and a definition of the term in students' words. The use of…

  7. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.
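    One of the core AI topics listed above, search-oriented problem solving, can be sketched as a breadth-first search over a state graph. This is a generic textbook example, not taken from the surveyed report.

```python
# Breadth-first search: explore states level by level, so the first
# path reaching the goal is a shortest one (in edge count).
from collections import deque

def bfs_path(graph, start, goal):
    """Return a shortest path from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(graph, "A", "E"))  # -> ['A', 'B', 'D', 'E']
```

    Planning and expert-system inference can both be framed as search of this kind over much larger, implicitly defined state spaces.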

  8. Natural language processing pipelines to annotate BioC collections with an application to the NCBI disease corpus.

    PubMed

    Comeau, Donald C; Liu, Haibin; Islamaj Doğan, Rezarta; Wilbur, W John

    2014-01-01

    BioC is a new format and associated code libraries for sharing text and annotations. We have implemented BioC natural language preprocessing pipelines in two popular programming languages: C++ and Java. The current implementations interface with the well-known MedPost and Stanford natural language processing tool sets. The pipeline functionality includes sentence segmentation, tokenization, part-of-speech tagging, lemmatization and sentence parsing. These pipelines can be easily integrated along with other BioC programs into any BioC compliant text mining systems. As an application, we converted the NCBI disease corpus to BioC format, and the pipelines have successfully run on this corpus to demonstrate their functionality. Code and data can be downloaded from http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net. © The Author(s) 2014. Published by Oxford University Press.

  9. An amodal shared resource model of language-mediated visual attention

    PubMed Central

    Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk

    2013-01-01

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967

  10. Transforming English Language Learners' Work Readiness: Case Studies in Explicit, Work-Specific Vocabulary Instruction

    ERIC Educational Resources Information Center

    Madrigal-Hopes, Diana L.; Villavicencio, Edna; Foote, Martha M.; Green, Chris

    2014-01-01

    This qualitative study examined the impact of a six-step framework for work-specific vocabulary instruction in adult English language learners (ELLs). Guided by research in English as a second language (ESL) methodology and the transactional theory, the researchers sought to unveil how these processes supported the acquisition and application of…

  11. Specification and Design Methodologies for High-Speed Fault-Tolerant Array Algorithms and Structures for VLSI.

    DTIC Science & Technology

    1987-06-01

    ... evaluation and chip layout planning for VLSI digital systems. A high-level applicative (functional) language, implemented at UCLA, allows combining of ... operating system. The complexity of VLSI requires the application of CAD tools at all levels of the design process. In order to be effective, these tools must be adaptive to the specific design. In this project we studied a design method based on the use of applicative languages.

  12. Brain signal variability as a window into the bidirectionality between music and language processing: moving from a linear to a nonlinear model

    PubMed Central

    Hutka, Stefanie; Bidelman, Gavin M.; Moreno, Sylvain

    2013-01-01

    There is convincing empirical evidence for bidirectional transfer between music and language, such that experience in either domain can improve mental processes required by the other. This music-language relationship has been studied using linear models (e.g., comparing mean neural activity) that conceptualize brain activity as a static entity. The linear approach limits how we can understand the brain’s processing of music and language because the brain is a nonlinear system. Furthermore, there is evidence that the networks supporting music and language processing interact in a nonlinear manner. We therefore posit that the neural processing and transfer between the domains of language and music are best viewed through the lens of a nonlinear framework. Nonlinear analysis of neurophysiological activity may yield new insight into the commonalities, differences, and bidirectionality between these two cognitive domains not measurable in the local output of a cortical patch. We thus propose a novel application of brain signal variability (BSV) analysis, based on mutual information and signal entropy, to better understand the bidirectionality of music-to-language transfer in the context of a nonlinear framework. This approach will extend current methods by offering a nuanced, network-level understanding of the brain complexity involved in music-language transfer. PMID:24454295

  13. Brain signal variability as a window into the bidirectionality between music and language processing: moving from a linear to a nonlinear model.

    PubMed

    Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain

    2013-12-30

    There is convincing empirical evidence for bidirectional transfer between music and language, such that experience in either domain can improve mental processes required by the other. This music-language relationship has been studied using linear models (e.g., comparing mean neural activity) that conceptualize brain activity as a static entity. The linear approach limits how we can understand the brain's processing of music and language because the brain is a nonlinear system. Furthermore, there is evidence that the networks supporting music and language processing interact in a nonlinear manner. We therefore posit that the neural processing and transfer between the domains of language and music are best viewed through the lens of a nonlinear framework. Nonlinear analysis of neurophysiological activity may yield new insight into the commonalities, differences, and bidirectionality between these two cognitive domains not measurable in the local output of a cortical patch. We thus propose a novel application of brain signal variability (BSV) analysis, based on mutual information and signal entropy, to better understand the bidirectionality of music-to-language transfer in the context of a nonlinear framework. This approach will extend current methods by offering a nuanced, network-level understanding of the brain complexity involved in music-language transfer.

  14. The Role of the Striatum in Sentence Processing: Evidence from a Priming Study in Early Stages of Huntington's Disease

    ERIC Educational Resources Information Center

    Teichmann, Marc; Dupoux, Emmanuel; Cesaro, Pierre; Bachoud-Levi, Anne-Catherine

    2008-01-01

    The role of sub-cortical structures such as the striatum in language remains a controversial issue. Based on linguistic claims that language processing implies both recovery of lexical information and application of combinatorial rules it has been shown that striatal damaged patients have difficulties applying conjugation rules while lexical…

  15. Evaluation of Automated Natural Language Processing in the Further Development of Science Information Retrieval. String Program Reports No. 10.

    ERIC Educational Resources Information Center

    Sager, Naomi

    This investigation matches the emerging techniques in computerized natural language processing against emerging needs for such techniques in the information field to evaluate and extend such techniques for future applications and to establish a basis and direction for further research toward these goals. An overview describes developments in the…

  16. Standardized languages and notations for graphical modelling of patient care processes: a systematic review.

    PubMed

    Mincarone, Pierpaolo; Leo, Carlo Giacomo; Trujillo-Martín, Maria Del Mar; Manson, Jan; Guarino, Roberto; Ponzini, Giuseppe; Sabina, Saverio

    2018-04-01

    The importance of working toward quality improvement in healthcare implies an increasing interest in analysing, understanding and optimizing process logic and sequences of activities embedded in healthcare processes. Their graphical representation promotes faster learning, higher retention and better compliance. The study identifies standardized graphical languages and notations applied to patient care processes and investigates their usefulness in the healthcare setting. Peer-reviewed literature up to 19 May 2016. Information complemented by a questionnaire sent to the authors of selected studies. Systematic review conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. Five authors extracted results of selected studies. Ten articles met the inclusion criteria. One notation and language for healthcare process modelling were identified with an application to patient care processes: Business Process Model and Notation and Unified Modeling Language™. One of the authors of every selected study completed the questionnaire. Users' comprehensibility and facilitation of inter-professional analysis of processes have been recognized, in the filled in questionnaires, as major strengths for process modelling in healthcare. Both the notation and the language could increase the clarity of presentation thanks to their visual properties, the capacity of easily managing macro and micro scenarios, the possibility of clearly and precisely representing the process logic. Both could increase guidelines/pathways applicability by representing complex scenarios through charts and algorithms hence contributing to reduce unjustified practice variations which negatively impact on quality of care and patient safety.

  17. Text Manipulation Techniques and Foreign Language Composition.

    ERIC Educational Resources Information Center

    Walker, Ronald W.

    1982-01-01

    Discusses an approach to teaching second language composition which emphasizes (1) careful analysis of model texts from a limited, but well-defined perspective and (2) the application of text manipulation techniques developed by the word processing industry to student compositions. (EKN)

  18. A development framework for distributed artificial intelligence

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Cottman, Bruce H.

    1989-01-01

The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization, coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.

  19. QATT: a Natural Language Interface for QPE. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    White, Douglas Robert-Graham

    1989-01-01

QATT, a natural language interface developed for the Qualitative Process Engine (QPE) system, is presented. The major goal was to evaluate the use of a preexisting natural language understanding system designed to be tailored for query processing in multiple domains of application. The other goal of QATT is to provide a comfortable environment in which to query envisionments in order to gain insight into the qualitative behavior of physical systems. It is shown that the use of the preexisting system made possible the development of a reasonably useful interface in a few months.

  20. A Guide to IRUS-II Application Development

    DTIC Science & Technology

    1989-09-01

Stallard (editors). Research and Development in Natural Language Understanding as Part of the Strategic Computing Program, chapter 3, pages 27-34...Development in Natural Language Processing in the Strategic Computing Program. Computational Linguistics 12(2):132-136, April-June 1986. [24] Sidner, C.L...assist developers interested in adapting IRUS-II to new application domains. Chapter 2 provides a general introduction and overview. Chapter 3 describes

  1. Automated identification of wound information in clinical notes of patients with heart diseases: Developing and validating a natural language processing application.

    PubMed

    Topaz, Maxim; Lai, Kenneth; Dowding, Dawn; Lei, Victor J; Zisberg, Anna; Bowles, Kathryn H; Zhou, Li

    2016-12-01

Electronic health records are being increasingly used by nurses, with up to 80% of the health data recorded as free text. However, only a few studies have developed nursing-relevant tools that help busy clinicians identify the information they need at the point of care. This study developed and validated one of the first automated natural language processing applications to extract wound information (wound type, pressure ulcer stage, wound size, anatomic location, and wound treatment) from free-text clinical notes. First, two human annotators manually reviewed a purposeful training sample (n=360) and a random test sample (n=1100) of clinical notes (including 50% discharge summaries and 50% outpatient notes), identified wound cases, and created a gold standard dataset. We then trained and tested our natural language processing system (known as MTERMS) to process the wound information. Finally, we assessed our automated approach by comparing system-generated findings against the gold standard. We also compared the prevalence of wound cases identified from free-text data with coded diagnoses in the structured data. The testing dataset included 101 notes (9.2%) with wound information. The overall system performance was good (F-measure, a combined measure of the system's accuracy, of 92.7%), with best results for wound treatment (F-measure=95.7%) and poorest results for wound size (F-measure=81.9%). Only 46.5% of wound notes had a structured code for a wound diagnosis. The natural language processing system achieved good performance on a subset of randomly selected discharge summaries and outpatient notes. In more than half of the wound notes there were no coded wound diagnoses, which highlights the significance of using natural language processing to enrich clinical decision making. Our future steps will include expansion of the application's information coverage to other relevant wound factors and validation of the model with external data. Copyright © 2016 Elsevier Ltd. 
All rights reserved.
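The F-measure reported in the record above is the harmonic mean of precision and recall. A minimal sketch of how such a score is computed when comparing system output against a gold standard follows; the counts are invented for illustration and are not the study's data:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F-measure from raw counts of
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 90 true positives, 8 false positives, 6 false negatives
p, r, f = precision_recall_f1(90, 8, 6)
print(f"precision={p:.3f} recall={r:.3f} F-measure={f:.3f}")
```

With these invented counts the F-measure comes out near the 92.7% figure in the abstract, which illustrates how a single "accuracy" number summarizes the precision/recall trade-off.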

  2. GOAL - A test engineer oriented language. [Ground Operations Aerospace Language for coding automatic test

    NASA Technical Reports Server (NTRS)

    Mitchell, T. R.

    1974-01-01

The development of a test engineer oriented language has been under way at the Kennedy Space Center for several years. The result of this effort is the Ground Operations Aerospace Language, GOAL, a self-documenting, high-order language suitable for coding automatic test, checkout and launch procedures. GOAL is a highly readable, writable, retainable language that is easily learned by engineers without a programming background. It is sufficiently powerful for use at all levels of Space Shuttle ground processing, from line replaceable unit checkout to integrated launch day operations. This paper relates the language's development and describes GOAL and its applications.

  3. Definition of an auxiliary processor dedicated to real-time operating system kernels

    NASA Technical Reports Server (NTRS)

    Halang, Wolfgang A.

    1988-01-01

In order to increase the efficiency of process control data processing, it is necessary to enhance the productivity of real-time high-level languages and to automate task administration, because presently 60 percent or more of the applications are still programmed in assembly languages. This may be achieved by migrating suitable functions for the support of process-control-oriented languages into the hardware, i.e., by new architectures. Whereas numerous high-level languages have already been defined or realized, there are as yet no investigations of hardware-assisted implementation of real-time features. The requirements to be fulfilled by languages and operating systems in hard real-time environments are summarized. A comparison of the most prominent languages, viz. Ada, HAL/S, LTR, and Pearl, as well as the real-time extensions of FORTRAN and PL/1, reveals how existing languages meet these demands and which features still need to be incorporated to enable the development of reliable software with predictable program behavior, thus making it possible to carry out a technical safety approval. Accordingly, Pearl proved to be the closest match to the mentioned requirements.

  4. Listening from the Inside Out.

    ERIC Educational Resources Information Center

    Joiner, Elizabeth G.

    1984-01-01

    Examines studies in brain research which are closely related to language learning. Discusses Asher's Total Physical Response and Lozanov's Suggestopedia as approaches which activate the right brain hemisphere and involve it in the language learning process. Discusses practical applications for what is currently known about listening. (SED)

  5. Applying Standard Interfaces to a Process-Control Language

    NASA Technical Reports Server (NTRS)

    Berthold, Richard T.

    2005-01-01

    A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.

  6. Product Definition Data Interface (PDDI) Product Specification

    DTIC Science & Technology

    1991-07-01

syntax of the language gives a precise specification of the data without interpretation of it. M - Constituent Read Block. CSECT - Control Section, the...to conform to the PDDI Access Software's internal data representation so that it may be further processed. JCL - Job Control Language - IBM language...software development and life cycle phases. QUALITY CONTROL - The planned and systematic application of all actions (management/technical) necessary to

  7. STAR- A SIMPLE TOOL FOR AUTOMATED REASONING SUPPORTING HYBRID APPLICATIONS OF ARTIFICIAL INTELLIGENCE (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Borchardt, G. C.

    1994-01-01

    The Simple Tool for Automated Reasoning program (STAR) is an interactive, interpreted programming language for the development and operation of artificial intelligence (AI) application systems. STAR provides an environment for integrating traditional AI symbolic processing with functions and data structures defined in compiled languages such as C, FORTRAN and PASCAL. This type of integration occurs in a number of AI applications including interpretation of numerical sensor data, construction of intelligent user interfaces to existing compiled software packages, and coupling AI techniques with numerical simulation techniques and control systems software. The STAR language was created as part of an AI project for the evaluation of imaging spectrometer data at NASA's Jet Propulsion Laboratory. Programming in STAR is similar to other symbolic processing languages such as LISP and CLIP. STAR includes seven primitive data types and associated operations for the manipulation of these structures. A semantic network is used to organize data in STAR, with capabilities for inheritance of values and generation of side effects. The AI knowledge base of STAR can be a simple repository of records or it can be a highly interdependent association of implicit and explicit components. The symbolic processing environment of STAR may be extended by linking the interpreter with functions defined in conventional compiled languages. These external routines interact with STAR through function calls in either direction, and through the exchange of references to data structures. The hybrid knowledge base may thus be accessed and processed in general by either side of the application. STAR is initially used to link externally compiled routines and data structures. It is then invoked to interpret the STAR rules and symbolic structures. 
In a typical interactive session, the user enters an expression to be evaluated, STAR parses the input, evaluates the expression, performs any file input/output required, and displays the results. The STAR interpreter is written in the C language for interactive execution. It has been implemented on a VAX 11/780 computer operating under VMS, and the UNIX version has been implemented on a Sun Microsystems 2/170 workstation. STAR has a memory requirement of approximately 200K of 8 bit bytes, excluding externally compiled functions and application-dependent symbolic definitions. This program was developed in 1985.
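The hybrid pattern STAR relies on, an interpreted symbolic layer calling routines compiled in languages such as C, can be illustrated in modern terms with Python's ctypes module. This is an analogy to the interpreted/compiled split the record describes, not STAR itself:

```python
import ctypes
import ctypes.util

# Load the system C math library and call a compiled routine from
# the interpreted side -- the interpreter and the compiled code
# exchange plain C doubles, much as STAR exchanged data structures
# with externally compiled functions.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))
```

The declared `restype`/`argtypes` play the role of the interface description that lets the interpreted side invoke the compiled routine safely in either direction.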

  8. STAR- A SIMPLE TOOL FOR AUTOMATED REASONING SUPPORTING HYBRID APPLICATIONS OF ARTIFICIAL INTELLIGENCE (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Borchardt, G. C.

    1994-01-01

    The Simple Tool for Automated Reasoning program (STAR) is an interactive, interpreted programming language for the development and operation of artificial intelligence (AI) application systems. STAR provides an environment for integrating traditional AI symbolic processing with functions and data structures defined in compiled languages such as C, FORTRAN and PASCAL. This type of integration occurs in a number of AI applications including interpretation of numerical sensor data, construction of intelligent user interfaces to existing compiled software packages, and coupling AI techniques with numerical simulation techniques and control systems software. The STAR language was created as part of an AI project for the evaluation of imaging spectrometer data at NASA's Jet Propulsion Laboratory. Programming in STAR is similar to other symbolic processing languages such as LISP and CLIP. STAR includes seven primitive data types and associated operations for the manipulation of these structures. A semantic network is used to organize data in STAR, with capabilities for inheritance of values and generation of side effects. The AI knowledge base of STAR can be a simple repository of records or it can be a highly interdependent association of implicit and explicit components. The symbolic processing environment of STAR may be extended by linking the interpreter with functions defined in conventional compiled languages. These external routines interact with STAR through function calls in either direction, and through the exchange of references to data structures. The hybrid knowledge base may thus be accessed and processed in general by either side of the application. STAR is initially used to link externally compiled routines and data structures. It is then invoked to interpret the STAR rules and symbolic structures. 
In a typical interactive session, the user enters an expression to be evaluated, STAR parses the input, evaluates the expression, performs any file input/output required, and displays the results. The STAR interpreter is written in the C language for interactive execution. It has been implemented on a VAX 11/780 computer operating under VMS, and the UNIX version has been implemented on a Sun Microsystems 2/170 workstation. STAR has a memory requirement of approximately 200K of 8 bit bytes, excluding externally compiled functions and application-dependent symbolic definitions. This program was developed in 1985.

  9. Applying language technology to nursing documents: pros and cons with a focus on ethics.

    PubMed

    Suominen, Hanna; Lehtikunnas, Tuija; Back, Barbro; Karsten, Helena; Salakoski, Tapio; Salanterä, Sanna

    2007-10-01

The present study discusses ethics in building and using applications based on natural language processing in electronic nursing documentation. Specifically, we first focus on the question of how patient confidentiality can be ensured in developing language technology for the nursing documentation domain. Then, we identify and theoretically analyze the ethical outcomes which arise when using natural language processing to support clinical judgement and decision-making. In total, we put forward and justify 10 claims related to ethics in applying language technology to nursing documents. A review of recent scientific articles related to ethics in electronic patient records or in the utilization of large databases was conducted. Then, the results were compared with ethical guidelines for nurses and the Finnish legislation covering health care and processing of personal data. Finally, the practical experiences of the authors in applying the methods of natural language processing to nursing documents were appended. Patient records supplemented with natural language processing capabilities may help nurses give better, more efficient and more individualized care to their patients. In addition, language technology may improve patients' ability to receive truthful information about their health and improve the nature of narratives. Because of these benefits, research on the use of language technology in narratives should be encouraged. In contrast, privacy-sensitive health care documentation brings specific ethical concerns and difficulties to the natural language processing of nursing documents. Therefore, when developing natural language processing tools, patient confidentiality must be ensured. While using the tools, health care personnel should always be responsible for the clinical judgement and decision-making. 
One should also consider that the use of language technology in nursing narratives may threaten patients' rights by using documentation collected for other purposes. Applying language technology to nursing documents may, on the one hand, contribute to the quality of care, but, on the other hand, threaten patient confidentiality. As an overall conclusion, natural language processing of nursing documents holds the promise of great benefits if the potential risks are taken into consideration.

  10. Large-scale evidence of dependency length minimization in 37 languages

    PubMed Central

    Futrell, Richard; Mahowald, Kyle; Gibson, Edward

    2015-01-01

    Explaining the variation between human languages and the constraints on that variation is a core goal of linguistics. In the last 20 y, it has been claimed that many striking universals of cross-linguistic variation follow from a hypothetical principle that dependency length—the distance between syntactically related words in a sentence—is minimized. Various models of human sentence production and comprehension predict that long dependencies are difficult or inefficient to process; minimizing dependency length thus enables effective communication without incurring processing difficulty. However, despite widespread application of this idea in theoretical, empirical, and practical work, there is not yet large-scale evidence that dependency length is actually minimized in real utterances across many languages; previous work has focused either on a small number of languages or on limited kinds of data about each language. Here, using parsed corpora of 37 diverse languages, we show that overall dependency lengths for all languages are shorter than conservative random baselines. The results strongly suggest that dependency length minimization is a universal quantitative property of human languages and support explanations of linguistic variation in terms of general properties of human information processing. PMID:26240370
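The quantity studied in the record above can be made concrete: given a dependency parse in which each word records the index of its syntactic head, the total dependency length is the sum of linear distances between heads and dependents. The toy sentence and head indices below are my own illustration, not data from the paper's corpora:

```python
def total_dependency_length(heads):
    """Sum of |dependent - head| distances over a sentence.
    heads[i] is the 1-based index of word i+1's head; 0 marks the
    root, which has no incoming dependency and is not counted."""
    return sum(abs((i + 1) - h) for i, h in enumerate(heads) if h != 0)

# "John threw out the trash" with 'threw' as root:
#   John->threw, threw->ROOT, out->threw, the->trash, trash->threw
heads = [2, 0, 2, 5, 2]
print(total_dependency_length(heads))  # 1 + 1 + 1 + 3 = 6
```

Comparing this sum against the same value for random reorderings of the words is, in spirit, the paper's test of whether real utterances keep dependencies shorter than chance.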

  11. Application programs written by using customizing tools of a computer-aided design system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X.; Huang, R.; Juricic, D.

    1995-12-31

Customizing tools of Computer-Aided Design systems have been developed to such a degree as to become equivalent to powerful higher-level programming languages that are especially suitable for graphics applications. Two examples of application programs written by using AutoCAD's customizing tools are given in some detail to illustrate their power. One tool uses the AutoLISP list-processing language to develop an application program that produces four views of a given solid model. The other uses the AutoCAD Development System, based on program modules written in C, to produce an application program that renders a freehand sketch from a given CAD drawing.

  12. Proceedings of the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology

    NASA Technical Reports Server (NTRS)

    Hyde, Patricia R.; Loftin, R. Bowen

    1993-01-01

    The volume 2 proceedings from the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology are presented. Topics discussed include intelligent computer assisted training (ICAT) systems architectures, ICAT educational and medical applications, virtual environment (VE) training and assessment, human factors engineering and VE, ICAT theory and natural language processing, ICAT military applications, VE engineering applications, ICAT knowledge acquisition processes and applications, and ICAT aerospace applications.

  13. Integrated Task And Data Parallel Programming: Language Design

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. 
Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.

  14. Enhancement of Automatization through Vocabulary Learning Using CALL: Can Prompt Language Processing Lead to Better Comprehension in L2 Reading?

    ERIC Educational Resources Information Center

    Sato, Takeshi; Matsunuma, Mitsuyasu; Suzuki, Akio

    2013-01-01

    Our study aims to optimize a multimedia application for vocabulary learning for English as a Foreign Language (EFL). Our study is based on the concept that difficulty in reading a text in a second language is due to the need for more working memory for word decoding skills, although the working memory must also be used for text comprehension…

  15. Open-Source Multi-Language Audio Database for Spoken Language Processing Applications

    DTIC Science & Technology

    2012-12-01

Mandarin, and Russian. Approximately 30 hours of speech were collected for each language. Each passage has been carefully transcribed at the...manual and automatic methods. The Russian passages have not yet been marked at the phonetic level. Another phase of the work was to explore...YouTube. 300 passages were collected in each of three languages—English, Mandarin, and Russian. Approximately 30 hours of speech were

  16. Spanish Language Processing at University of Maryland: Building Infrastructure for Multilingual Applications

    DTIC Science & Technology

    2001-09-01

translation of the Spanish original sentence. Acquiring bilingual dictionary entries. In addition to building and applying the more sophisticated LCS...porting LCS lexicons to new languages, as described above, and are also useful by themselves in improving dictionary-based cross-language information...hold much of the time. Moreover, lexical dependencies have proven to be instrumental in advances in monolingual syntactic analysis (e.g. I-erg MY

  17. The Comparison of Inductive Reasoning under Risk Conditions between Chinese and Japanese Based on Computational Models: Toward the Application to CAE for Foreign Language

    ERIC Educational Resources Information Center

    Zhang, Yujie; Terai, Asuka; Nakagawa, Masanori

    2013-01-01

    Inductive reasoning under risk conditions is an important thinking process not only for sciences but also in our daily life. From this viewpoint, it is very useful for language learning to construct computational models of inductive reasoning which realize the CAE for foreign languages. This study proposes the comparison of inductive reasoning…

  18. A perspective on the advancement of natural language processing tasks via topological analysis of complex networks. Comment on "Approaching human language with complex networks" by Cong and Liu

    NASA Astrophysics Data System (ADS)

    Amancio, Diego Raphael

    2014-12-01

Concepts and methods of complex networks have been applied to probe the properties of a myriad of real systems [1]. The finding that written texts modeled as graphs share several properties of other completely different real systems has inspired the study of language as a complex system [2]. Indeed, language can be represented as a complex network at its several levels of complexity. As a consequence, morphological, syntactical and semantical properties have been employed in the construction of linguistic networks [3]. Even the character level has been useful to unfold particular patterns [4,5]. In the review by Cong and Liu [6], the authors emphasize the need to use the topological information of complex networks modeling the various spheres of language to better understand its origins, evolution and organization. In addition, the authors cite the use of networks in applications aiming at holistic typology and stylistic variations. In this context, I will discuss some possible directions that could be followed in future research directed toward the understanding of language via topological characterization of complex linguistic networks. In addition, I will comment on the use of network models for language processing applications. Further prospects for practical research lines will also be discussed in this comment.

  19. 78 FR 75528 - Federal Government Participation in the Automated Clearing House

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... requirements for authorization of ACH entries, adopting the language of Regulation E that an authorization must... revised specific language within the NACHA Operating Rules regarding the application and expiration of a... that prevents automated check processing or creating of an image that may be used to produce a...

  20. First Toronto Conference on Database Users. Systems that Enhance User Performance.

    ERIC Educational Resources Information Center

    Doszkocs, Tamas E.; Toliver, David

    1987-01-01

    The first of two papers discusses natural language searching as a user performance enhancement tool, focusing on artificial intelligence applications for information retrieval and problems with natural language processing. The second presents a conceptual framework for further development and future design of front ends to online bibliographic…

  1. Executable medical guidelines with Arden Syntax-Applications in dermatology and obstetrics.

    PubMed

    Seitinger, Alexander; Rappelsberger, Andrea; Leitich, Harald; Binder, Michael; Adlassnig, Klaus-Peter

    2016-08-12

Clinical decision support systems (CDSSs) are being developed to assist physicians in processing extensive data and new knowledge based on recent scientific advances. Structured medical knowledge in the form of clinical alerts or reminder rules, decision trees or tables, clinical protocols or practice guidelines, score algorithms, and others constitutes the core of CDSSs. Several medical knowledge representation and guideline languages have been developed for the formal computerized definition of such knowledge. One of these languages is Arden Syntax for Medical Logic Systems, an International Health Level Seven (HL7) standard whose development started in 1989. Its latest version is 2.10, which was presented in 2014. In the present report we discuss Arden Syntax as a modern medical knowledge representation and processing language, and show that this language is not only well suited to defining clinical alerts, reminders, and recommendations, but can also be used to implement and process computerized medical practice guidelines. We describe how contemporary software, such as Java, server software, web services, and XML, is used to implement CDSSs based on Arden Syntax. Special emphasis is given to clinical decision support (CDS) that employs practice guidelines as its clinical knowledge base. Two guideline-based applications using Arden Syntax for medical knowledge representation and processing were developed. The first is a software platform for implementing practice guidelines from dermatology. This application employs fuzzy set theory and logic to represent linguistic and propositional uncertainty in medical data, knowledge, and conclusions. The second application implements a reminder system based on clinically published standard operating procedures in obstetrics to prevent deviations from state-of-the-art care. A to-do list with necessary actions specifically tailored to the gestational week/labor/delivery is generated. 
Today, with the latest versions of Arden Syntax and the application of contemporary software development methods, Arden Syntax has become a powerful and versatile medical knowledge representation and processing language, well suited to implement a large range of CDSSs, including clinical-practice-guideline-based CDSSs. Moreover, such CDS is provided and can be shared as a service by different medical institutions, redefining the sharing of medical knowledge. Arden Syntax is also highly flexible and provides developers the freedom to use up-to-date software design and programming patterns for external patient data access. Copyright © 2016. Published by Elsevier B.V.
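Arden Syntax encodes medical knowledge as event-triggered if-then rules grouped into Medical Logic Modules with data, logic, and action slots. The obstetrics reminder behavior described in the record can be sketched, purely as an illustration of that rule style, in Python; the field name, thresholds, and action texts below are hypothetical and are not taken from the study's applications:

```python
# Hypothetical sketch of a reminder rule in the spirit of an Arden
# Syntax Medical Logic Module: read data, evaluate logic, emit actions.
def gestational_reminder(patient):
    """Return a to-do list for a given gestational week (illustrative).
    'gestational_week' and the week ranges are invented for this sketch."""
    week = patient["gestational_week"]   # data slot (hypothetical field)
    actions = []                         # action slot output
    if 24 <= week <= 28:                 # logic slot (invented threshold)
        actions.append("Offer gestational diabetes screening")
    if week >= 36:
        actions.append("Confirm fetal presentation")
    return actions

print(gestational_reminder({"gestational_week": 26}))
```

In a real Arden Syntax deployment the equivalent logic would live in an MLM evaluated by an Arden engine against the patient record, rather than in application code.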

  2. Process for selecting engineering tools : applied to selecting a SysML tool.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Spain, Mark J.; Post, Debra S.; Taylor, Jeffrey L.

    2011-02-01

Process for Selecting Engineering Tools outlines the process and tools used to select a SysML (Systems Modeling Language) tool. The process is general in nature, and users can apply it to select most engineering tools and software applications.

  3. oRis: multiagents approach for image processing

    NASA Astrophysics Data System (ADS)

    Rodin, Vincent; Harrouet, Fabrice; Ballet, Pascal; Tisseau, Jacques

    1998-09-01

    In this article, we present a parallel image processing system based on the concept of reactive agents. In our system, each agent has a very simple behavior that allows it to make a decision (find an edge, a region, ...) according to its position in the image and the information contained there. Our system is built on the oRis language, which makes it possible to describe agents' behaviors very finely and simply. oRis is an interpreted, dynamic multiagent language. First of all, oRis is an object language, with classes grouping attributes and methods; the syntax is close to that of C++ and includes multiple inheritance. oRis is also an agent language: every object with a `main()' method becomes an agent. This method is cyclically executed by the system scheduler and corresponds to the agent's behavior. We also present an application written in oRis that detects concentric striae on various natural `objects' (age rings of trees, fish otolith growth rings, striae of some minerals, ...). The stopping of the multiagent system is implemented through a technique borrowed from immunology: apoptosis.
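The reactive-agent idea can be sketched outside oRis. Below is a minimal Python illustration in which each agent's `main()` makes a purely local edge decision, run for a single scheduler cycle; the class, the contrast threshold, and the image encoding (a list of rows of grey levels) are all assumptions for illustration, not the paper's system.

```python
# Illustrative sketch of the reactive-agent idea (not oRis itself): each
# agent sits on one pixel, and its cyclically executed behaviour makes a
# local decision -- here, "am I on an edge?" from neighbour contrast.

class EdgeAgent:
    def __init__(self, image, x, y, threshold=50):
        self.image, self.x, self.y, self.threshold = image, x, y, threshold
        self.is_edge = False

    def main(self):  # the behaviour a scheduler would execute each cycle
        rows, cols = len(self.image), len(self.image[0])
        here = self.image[self.y][self.x]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = self.x + dx, self.y + dy
            if 0 <= nx < cols and 0 <= ny < rows:
                if abs(self.image[ny][nx] - here) > self.threshold:
                    self.is_edge = True
                    return

def detect_edges(image, threshold=50):
    agents = [EdgeAgent(image, x, y, threshold)
              for y in range(len(image)) for x in range(len(image[0]))]
    for agent in agents:       # one scheduler cycle suffices here
        agent.main()
    return {(a.x, a.y) for a in agents if a.is_edge}
```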

  4. Adaptive Plasticity in the Healthy Language Network: Implications for Language Recovery after Stroke

    PubMed Central

    2016-01-01

    Across the last three decades, the application of noninvasive brain stimulation (NIBS) has substantially increased the current knowledge of the brain's potential to undergo rapid short-term reorganization on the systems level. A large number of studies applied transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) in the healthy brain to probe the functional relevance and interaction of specific areas for different cognitive processes. NIBS is also increasingly being used to induce adaptive plasticity in motor and cognitive networks and shape cognitive functions. Recently, NIBS has been combined with electrophysiological techniques to modulate neural oscillations of specific cortical networks. In this review, we will discuss recent advances in the use of NIBS to modulate neural activity and effective connectivity in the healthy language network, with a special focus on the combination of NIBS and neuroimaging or electrophysiological approaches. Moreover, we outline how these results can be transferred to the lesioned brain to unravel the dynamics of reorganization processes in poststroke aphasia. We conclude with a critical discussion on the potential of NIBS to facilitate language recovery after stroke and propose a phase-specific model for the application of NIBS in language rehabilitation. PMID:27830094

  5. A Distributed Operating System for BMD Applications.

    DTIC Science & Technology

    1982-01-01

    Defense) applications executing on distributed hardware with local and shared memories. The objective was to develop real-time operating system functions...make the Basic Real-Time Operating System, and the set of new EPL language primitives that provide BMD application processes with efficient mechanisms

  6. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic, factors “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, particularly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  7. Integrated Task and Data Parallel Programming

    NASA Technical Reports Server (NTRS)

    Grimshaw, A. S.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications.

    1995 Research Accomplishments: In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset.

    1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.
Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.
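The structure the abstract describes can be sketched briefly. The actual work targets an object-oriented task parallel language over Legion, not Python; the sketch below only illustrates the shape of the integration: two distinct tasks run concurrently (task parallelism), and each one internally applies the same operation to every data chunk (data parallelism).

```python
# Illustrative sketch of integrated task and data parallelism (assumed
# structure, not the Legion-based language from the abstract).
from concurrent.futures import ThreadPoolExecutor

def data_parallel_sum(chunks, pool):
    # data parallelism: the same operation applied to every chunk
    return sum(pool.map(sum, chunks))

def data_parallel_max(chunks, pool):
    return max(pool.map(max, chunks))

def run_model(chunks):
    # enough workers for both outer tasks plus their inner maps
    with ThreadPoolExecutor(max_workers=8) as pool:
        # task parallelism: two different computations in flight at once
        total = pool.submit(data_parallel_sum, chunks, pool)
        peak = pool.submit(data_parallel_max, chunks, pool)
        return total.result(), peak.result()
```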

  8. Developing tools and resources for the biomedical domain of the Greek language.

    PubMed

    Vagelatos, Aristides; Mantzari, Elena; Pantazara, Mavina; Tsalidis, Christos; Kalamara, Chryssoula

    2011-06-01

    This paper presents the design and implementation of terminological and specialized textual resources that were produced in the framework of the Greek research project "IATROLEXI". The aim of the project was to create the critical infrastructure for the Greek language, i.e. linguistic resources and tools for use in high-level Natural Language Processing (NLP) applications in the domain of biomedicine. The project was built upon existing resources developed by the project partners and further enhanced within its framework, i.e. a Greek morphological lexicon of about 100,000 words, and language processing tools such as a lemmatiser and a morphosyntactic tagger. Additionally, it developed new assets, such as a specialized corpus of biomedical texts and an ontology of medical terminology.

  9. Grounded theory as a method for research in speech and language therapy.

    PubMed

    Skeat, J; Perry, A

    2008-01-01

    The use of qualitative methodologies in speech and language therapy has grown over the past two decades, and there is now a body of literature, both generally describing qualitative research, and detailing its applicability to health practice(s). However, there has been only limited profession-specific discussion of qualitative methodologies and their potential application to speech and language therapy. To describe the methodology of grounded theory, and to explain how it might usefully be applied to areas of speech and language research where theoretical frameworks or models are lacking. Grounded theory as a methodology for inductive theory-building from qualitative data is explained and discussed. Some differences between 'modes' of grounded theory are clarified and areas of controversy within the literature are highlighted. The past application of grounded theory to speech and language therapy, and its potential for informing research and clinical practice, are examined. This paper provides an in-depth critique of a qualitative research methodology, including an overview of the main difference between two major 'modes'. The article supports the application of a theory-building approach in the profession, which is sometimes complex to learn and apply, but worthwhile in its results. Grounded theory as a methodology has much to offer speech and language therapists and researchers. Although the majority of research and discussion around this methodology has rested within sociology and nursing, grounded theory can be applied by researchers in any field, including speech and language therapists. The benefit of the grounded theory method to researchers and practitioners lies in its application to social processes and human interactions. The resulting theory may support further research in the speech and language therapy profession.

  10. Searching Process with Raita Algorithm and its Application

    NASA Astrophysics Data System (ADS)

    Rahim, Robbi; Saleh Ahmar, Ansari; Abdullah, Dahlan; Hartama, Dedy; Napitupulu, Darmawan; Putera Utama Siahaan, Andysah; Hasan Siregar, Muhammad Noor; Nasution, Nurliana; Sundari, Siti; Sriadhi, S.

    2018-04-01

    Searching is a common process performed by many computer users, and the Raita algorithm is one algorithm that can be used to match and locate information according to the patterns entered. The Raita algorithm was applied to a file search application written in the Java programming language; testing showed that the application searches files quickly, returns accurate results, and supports many data types.
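The Raita algorithm itself is well documented in the string-matching literature. Independently of the paper's Java application, a compact Python version looks like this: it is a Boyer-Moore-Horspool variant that compares the pattern's last, first, and middle characters before checking the rest, and shifts by a bad-character table on mismatch.

```python
def raita_search(text, pattern):
    """Return the start indices of every occurrence of pattern in text."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    # Horspool bad-character shifts for all but the last pattern character
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    last, first, middle = pattern[-1], pattern[0], pattern[m // 2]
    matches, pos = [], 0
    while pos <= n - m:
        tail = text[pos + m - 1]
        # Raita's ordering: last, first, middle, then the remaining characters
        if (tail == last and text[pos] == first
                and text[pos + m // 2] == middle
                and text[pos + 1:pos + m - 1] == pattern[1:m - 1]):
            matches.append(pos)
        pos += shift.get(tail, m)  # slide by the bad-character rule
    return matches
```

The character-order heuristic pays off on natural text, where the last and first characters of a window usually disagree with the pattern early.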

  11. Striatal Degeneration Impairs Language Learning: Evidence from Huntington's Disease

    ERIC Educational Resources Information Center

    De Diego-Balaguer, R.; Couette, M.; Dolbeau, G.; Durr, A.; Youssov, K.; Bachoud-Levi, A.-C.

    2008-01-01

    Although the role of the striatum in language processing is still largely unclear, a number of recent proposals have outlined its specific contribution. Different studies report evidence converging to a picture where the striatum may be involved in those aspects of rule-application requiring non-automatized behaviour. This is the main…

  12. Structural Technology Evaluation and Analysis Program (STEAP). Delivery Order 0035: Dynamics and Control and Computational Design of Flapping Wing Micro Air Vehicles

    DTIC Science & Technology

    2012-10-01

    library as a principal Requestor. The M3CT requestor is written in Java, leveraging the cross-platform deployment capabilities needed for a broadly...each application to the Java programming language, the independently generated sources are wrapped with JNA or Groovy. The Java wrapping process...Once the underlying product is available to the Java source as a library, the application leverages

  13. The road to language learning is iconic: evidence from British Sign Language.

    PubMed

    Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella

    2012-12-01

    An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.

  14. Laboratory process control using natural language commands from a personal computer

    NASA Technical Reports Server (NTRS)

    Will, Herbert A.; Mackin, Michael A.

    1989-01-01

    PC software is described which provides flexible natural language process control capability with an IBM PC or compatible machine. Hardware requirements include the PC, and suitable hardware interfaces to all controlled devices. Software required includes the Microsoft Disk Operating System (MS-DOS) operating system, a PC-based FORTRAN-77 compiler, and user-written device drivers. Instructions for use of the software are given as well as a description of an application of the system.
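The flavor of such a natural-language process-control front end can be sketched in a few lines. The original system is FORTRAN-77 with user-written device drivers; the verb table, device names, and dictionary-based "driver calls" below are hypothetical stand-ins for illustration.

```python
# Illustrative sketch of a natural-language command interpreter for
# laboratory process control. Devices and verbs are hypothetical.

DEVICES = {"pump": {"state": "off"}, "heater": {"state": "off"}}

VERBS = {"start": "on", "turn on": "on", "open": "on",
         "stop": "off", "turn off": "off", "close": "off"}

def interpret(command):
    words = command.lower()
    # try longer verb phrases first so "turn off" wins over "turn on" etc.
    for phrase, state in sorted(VERBS.items(), key=lambda kv: -len(kv[0])):
        if phrase in words:
            for name, device in DEVICES.items():
                if name in words:
                    device["state"] = state   # stand-in for a driver call
                    return f"{name} {state}"
    return "command not understood"
```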

  15. Classifying free-text triage chief complaints into syndromic categories with natural language processing.

    PubMed

    Chapman, Wendy W; Christensen, Lee M; Wagner, Michael M; Haug, Peter J; Ivanov, Oleg; Dowling, John N; Olszewski, Robert T

    2005-01-01

    Develop and evaluate a natural language processing application for classifying chief complaints into syndromic categories for syndromic surveillance. Much of the input data for artificial intelligence applications in the medical field are free-text patient medical records, including dictated medical reports and triage chief complaints. To be useful for automated systems, the free-text must be translated into encoded form. We implemented a biosurveillance detection system from Pennsylvania to monitor the 2002 Winter Olympic Games. Because input data was in free-text format, we used a natural language processing text classifier to automatically classify free-text triage chief complaints into syndromic categories used by the biosurveillance system. The classifier was trained on 4700 chief complaints from Pennsylvania. We evaluated the ability of the classifier to classify free-text chief complaints into syndromic categories with a test set of 800 chief complaints from Utah. The classifier produced the following areas under the ROC curve: Constitutional = 0.95; Gastrointestinal = 0.97; Hemorrhagic = 0.99; Neurological = 0.96; Rash = 1.0; Respiratory = 0.99; Other = 0.96. Using information stored in the system's semantic model, we extracted from the Respiratory classifications lower respiratory complaints and lower respiratory complaints with fever with a precision of 0.97 and 0.96, respectively. Results suggest that a trainable natural language processing text classifier can accurately extract data from free-text chief complaints for biosurveillance.
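The classification task can be illustrated with a drastically simplified stand-in. The study used a trained probabilistic NLP classifier, not keyword rules; the category keyword lists below are invented for illustration only.

```python
# Much-simplified stand-in for the paper's trained classifier: keyword
# rules mapping a free-text chief complaint to one syndromic category.
# Keyword lists are illustrative, not the study's actual model.

SYNDROME_KEYWORDS = {
    "Respiratory": ["cough", "shortness of breath", "sob", "wheezing"],
    "Gastrointestinal": ["vomiting", "diarrhea", "nausea", "abd pain"],
    "Neurological": ["headache", "dizziness", "seizure"],
    "Constitutional": ["fever", "chills", "fatigue"],
}

def classify_complaint(text):
    text = text.lower()
    for syndrome, keywords in SYNDROME_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return syndrome
    return "Other"
```

A trainable classifier of the kind evaluated in the paper replaces these hand-written rules with probabilities learned from labeled chief complaints, which is what makes the approach portable across institutions.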

  16. Design Automation Using Script Languages. High-Level CAD Templates in Non-Parametric Programs

    NASA Astrophysics Data System (ADS)

    Moreno, R.; Bazán, A. M.

    2017-10-01

    The main purpose of this work is to study the advantages offered by applying traditional technical drawing techniques to design automation processes in non-parametric CAD programs equipped with scripting languages. Given that an example drawing can be solved with traditional step-by-step detailed procedures, it is possible to do the same with CAD applications and to generalize the procedure later, incorporating references. Today’s CAD applications show striking absences of solutions for building engineering: oblique projections (military and cavalier), 3D modelling of complex stairs, roofs, furniture, and so on. The use of geometric references (using variables in script languages) and their incorporation into high-level CAD templates allows the automation of processes. Instead of repeatedly creating similar designs or modifying their data, users should be able to use these templates to generate future variations of the same design. This paper presents the automation process of several complex drawing examples based on CAD script files aided by parametric geometry calculation tools. The proposed method allows us to solve complex geometry designs not currently covered by CAD applications and to subsequently create new derivatives without user intervention. Automation in the generation of complex designs not only saves time but also increases the quality of the presentations and reduces the possibility of human error.
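The template idea can be illustrated with a small parametric-geometry sketch. The paper works inside CAD scripting languages; this Python fragment only shows the principle that a script computes the geometry from a few parameters, so variants of the same design (here, one of the paper's named examples, a staircase) need no redrawing.

```python
# Illustrative sketch of a high-level CAD template: a staircase section
# outline computed from parameters. Output is a plain (x, y) polyline
# that a CAD script could then draw.

def stair_profile(steps, tread, riser):
    """Vertices of a staircase section outline, starting at the origin."""
    points = [(0.0, 0.0)]
    x = y = 0.0
    for _ in range(steps):
        y += riser                      # vertical riser edge
        points.append((x, y))
        x += tread                      # horizontal tread edge
        points.append((x, y))
    return points
```

Changing `steps`, `tread`, or `riser` regenerates the whole outline, which is exactly the kind of derivative-without-user-intervention the abstract describes.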

  17. Computers for symbolic processing

    NASA Technical Reports Server (NTRS)

    Wah, Benjamin W.; Lowrie, Matthew B.; Li, Guo-Jie

    1989-01-01

    A detailed survey on the motivations, design, applications, current status, and limitations of computers designed for symbolic processing is provided. Symbolic processing computations are performed at the word, relation, or meaning levels, and the knowledge used in symbolic applications may be fuzzy, uncertain, indeterminate, and ill represented. Various techniques for knowledge representation and processing are discussed from both the designers' and users' points of view. The design and choice of a suitable language for symbolic processing and the mapping of applications into a software architecture are then considered. The process of refining the application requirements into hardware and software architectures is treated, and state-of-the-art sequential and parallel computers designed for symbolic processing are discussed.

  18. The preliminary SOL (Sizing and Optimization Language) reference manual

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1989-01-01

    The Sizing and Optimization Language (SOL), a high-level, special-purpose computer language, has been developed to expedite the application of numerical optimization to design problems and to make the process less error-prone. This document is a reference manual for those wishing to write SOL programs. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler and runtime library routines. An overview of SOL appears in NASA TM 100565.

  19. Graphical modeling and query language for hospitals.

    PubMed

    Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris

    2013-01-01

    So far there has been little evidence that implementation of health information technologies (HIT) is leading to health care cost savings. One of the reasons for this lack of impact by HIT likely lies in the complexity of business process ownership in hospitals. The goal of our research is to develop a business model-based method for hospital use which would allow doctors to retrieve ad-hoc information directly from various hospital databases. We have developed a special domain-specific process modelling language called MedMod. Formally, we define the MedMod language as a profile on UML Class diagrams, but we also demonstrate it on examples, where we explain the semantics of all its elements informally. Moreover, we have developed the Process Query Language (PQL), which is based on the MedMod process definition language. The purpose of PQL is to allow a doctor to query (filter) runtime data of hospital processes described using MedMod. The MedMod language tries to overcome deficiencies in existing process modeling languages by allowing users to specify the loosely defined sequence of steps to be performed in a clinical process. The main advantages of PQL lie in two main areas, usability and efficiency: 1) a view of the data through the "glasses" of a familiar process; 2) simple and easy-to-perceive means of setting filtering conditions that require no more expertise than using spreadsheet applications; 3) a dynamic response to each step in the construction of the complete query, which shortens the learning curve greatly and reduces the error rate; and 4) means of filtering and data retrieval that allow queries to execute in O(n) time with respect to the size of the dataset. We are about to continue developing this project with three further steps. First, we are planning to develop user-friendly graphical editors for the MedMod process modeling and query languages.
The second step is to evaluate the usability of the proposed language and tool with physicians from several hospitals in Latvia, working with real data from these hospitals. Our third step is to develop an efficient implementation of the query language.
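The O(n) filtering claim can be illustrated with a toy sketch. MedMod and PQL are graphical languages defined over UML profiles; the record layout (one list of step records per process instance) and the condition form below are assumptions made only for illustration.

```python
# Illustrative sketch of process-based filtering in O(n): each process
# instance is a list of step records, and a query is a set of per-step
# conditions evaluated in one linear pass over the dataset.

def run_query(instances, conditions):
    """Keep instances where every named step exists and satisfies its condition."""
    result = []
    for steps in instances:                       # one linear scan
        by_name = {s["step"]: s for s in steps}
        if all(name in by_name and check(by_name[name])
               for name, check in conditions.items()):
            result.append(steps)
    return result
```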

  20. A high-order language for a system of closely coupled processing elements

    NASA Technical Reports Server (NTRS)

    Feyock, S.; Collins, W. R.

    1986-01-01

    The research reported in this paper was occasioned by the requirements of the Real-Time Digital Simulator (RTDS) project under way at NASA Lewis Research Center. The RTDS simulation scheme employs a network of CPUs running lock-step cycles in the parallel computation of jet airplane simulations. The project's need for a high-order language (HOL) that would allow non-experts to write simulation applications, and that could be implemented on a possibly varying network, can best be fulfilled by using the programming language Ada. We describe how the simulation problems can be modeled in Ada, how to map a single, multi-processing Ada program into code for individual processors regardless of network reconfiguration, and why some Ada language features are particularly well suited to network simulations.

  1. A basic interpretation of the technical language of radiation processing

    NASA Astrophysics Data System (ADS)

    Deeley, Catherine M.

    2004-09-01

    For the food producer contemplating the purchase of radiation processing equipment, the task of evaluating the strengths and weaknesses of the available technologies, electron beam (E-beam), X-ray and gamma, to determine the best option for their application is onerous. Not only is the level of investment daunting but also, to be sure of comparing like with like, the evaluator requires a basic understanding of the science underpinning radiation processing. There have been many papers published that provide technical specialists with a rigorous interpretation of this science (In: Gaughran, E.R.L., Goudie, A.J. (Eds.), Technical Developments and Prospects of Sterilization by Ionizing Radiation, International Conference, Vienna. Multiscience Publications Ltd., pp. 145-172). The objective of this paper is to give non-specialists an introduction to the language of radiation processing and to clarify some of the terminology associated with the use of radioactive sources for this application.

  2. An Intelligent Computer-Based System for Sign Language Tutoring

    ERIC Educational Resources Information Center

    Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy

    2012-01-01

    A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…

  3. Using WhatsApp to Create a Space of Language and Content for Students of International Relations

    ERIC Educational Resources Information Center

    Keogh, Conor

    2017-01-01

    For language learners of this generation, the smart phone represents a key cultural artefact that complements the learning process. Instant messaging applications such as WhatsApp are widely used in personal, professional and, increasingly, academic circles to maintain constant contact among friends, colleagues, or classmates. This study seeks to…

  4. Detailed Phonetic Labeling of Multi-language Database for Spoken Language Processing Applications

    DTIC Science & Technology

    2015-03-01

    which contains about 60 interfering speakers as well as background music in a bar. The top panel is again clean training/noisy testing settings, and...recognition system for Mandarin was developed and tested. Character recognition rates as high as 88% were obtained, using an approximately 40 training ...

  5. A Model of Identity and Language Orientations: The Case of Immigrant Students from the Former Soviet Union in Israel

    ERIC Educational Resources Information Center

    Golan-Cook, Pnina; Olshtain, Elite

    2011-01-01

    A theoretical model featuring the relationship between identity and language orientations within the broader constellation of variables impacting immigration and acculturation processes was proposed within the framework of the current study and its applicability was tested with regards to 152 immigrant university students from the Former Soviet…

  6. Psychological and Pedagogical Conditions for Effective Application of Dialogic Communication among Teenagers

    ERIC Educational Resources Information Center

    Niyetbaeva, Gulmira; Shalabayeva, Laura; Zhigitbekova, Bakyt; Abdullayeva, Gulzira; Bekmuratova, Gulzhanar

    2016-01-01

    Our personality develops gradually, and this process is influenced by various factors, with language being one of the most important. We need to communicate with other people, to speak as much, as we need occupation, and this need determines the development of our personalities. Language is deeply embedded in our conscious and subconscious.…

  7. Natural Language Processing as a Discipline at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Firpo, M A

    The field of Natural Language Processing (NLP) is described as it applies to the needs of LLNL in handling free-text. The state of the practice is outlined with the emphasis placed on two specific aspects of NLP: Information Extraction and Discourse Integration. A brief description is included of the NLP applications currently being used at LLNL. A gap analysis provides a look at where the technology needs work in order to meet the needs of LLNL. Finally, recommendations are made to meet these needs.

  8. SyllabO+: A new tool to study sublexical phenomena in spoken Quebec French.

    PubMed

    Bédard, Pascale; Audet, Anne-Marie; Drouin, Patrick; Roy, Johanna-Pascale; Rivard, Julie; Tremblay, Pascale

    2017-10-01

    Sublexical phonotactic regularities in language have a major impact on language development, as well as on speech processing and production throughout the entire lifespan. To understand the impact of phonotactic regularities on speech and language functions at the behavioral and neural levels, it is essential to have access to oral language corpora to study these complex phenomena in different languages. Yet, probably because of their complexity, oral language corpora remain less common than written language corpora. This article presents the first corpus and database of spoken Quebec French syllables and phones: SyllabO+. This corpus contains phonetic transcriptions of over 300,000 syllables (over 690,000 phones) extracted from recordings of 184 healthy adult native Quebec French speakers, ranging in age from 20 to 97 years. To ensure the representativeness of the corpus, these recordings were made in both formal and familiar communication contexts. Phonotactic distributional statistics (e.g., syllable and co-occurrence frequencies, percentages, percentile ranks, transition probabilities, and pointwise mutual information) were computed from the corpus. An open-access online application to search the database was developed, and is available at www.speechneurolab.ca/syllabo . In this article, we present a brief overview of the corpus, as well as the syllable and phone databases, and we discuss their practical applications in various fields of research, including cognitive neuroscience, psycholinguistics, neurolinguistics, experimental psychology, phonetics, and phonology. Nonacademic practical applications are also discussed, including uses in speech-language pathology.
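The distributional statistics named above are standard and easy to reproduce on a toy sequence. The sketch below uses made-up syllables rather than SyllabO+ data, ignores edge effects in the transition estimate, and defines pointwise mutual information as PMI(a, b) = log2( p(a, b) / (p(a) p(b)) ).

```python
# Illustrative computation of bigram transition probabilities and PMI
# over a toy syllable sequence (not SyllabO+ data).
import math
from collections import Counter

def syllable_stats(syllables):
    unigrams = Counter(syllables)
    bigrams = Counter(zip(syllables, syllables[1:]))
    n_uni, n_bi = len(syllables), len(syllables) - 1
    # p(b | a) approximated as count(a, b) / count(a); edge effects ignored
    transition = {(a, b): c / unigrams[a] for (a, b), c in bigrams.items()}
    # PMI(a, b) = log2( p(a, b) / (p(a) * p(b)) )
    pmi = {(a, b): math.log2((c / n_bi) /
                             ((unigrams[a] / n_uni) * (unigrams[b] / n_uni)))
           for (a, b), c in bigrams.items()}
    return transition, pmi
```

On real corpora these counts are taken over hundreds of thousands of phones, and percentile ranks and co-occurrence percentages are derived from the same tables.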

  9. A new programming metaphor for image processing procedures

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

    Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment; it is barely adequate for real-time data acquisition and processing; it has a fairly steep learning curve; and its user interface is very inefficient, especially when compared to the graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above.
Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application from both the system's and the user's points of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
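A minimal sketch of the factory metaphor follows, using Python threads and queues as the "pipes". The original work concerns the API design for PcIPS, not Python; everything here (the sentinel protocol, one worker per stage) is an illustrative assumption.

```python
# Illustrative "factory": stages are concurrent workers connected by
# pipes (queues); images flow through the pipeline as they arrive.
import threading
import queue

SENTINEL = None  # end-of-stream marker travelling down the pipe

def stage(func, inbox, outbox):
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)   # pass shutdown downstream
            return
        outbox.put(func(item))

def run_factory(images, funcs):
    pipes = [queue.Queue() for _ in range(len(funcs) + 1)]
    workers = [threading.Thread(target=stage, args=(f, pipes[i], pipes[i + 1]))
               for i, f in enumerate(funcs)]
    for w in workers:
        w.start()
    for image in images:
        pipes[0].put(image)        # feed the first pipe
    pipes[0].put(SENTINEL)
    results = []
    while (item := pipes[-1].get()) is not SENTINEL:
        results.append(item)
    for w in workers:
        w.join()
    return results
```

Because each stage starts processing an image as soon as it arrives, the pipeline naturally exploits a multitasking environment, which is exactly where the abstract argues command languages fall short.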

  10. GALEN: a third generation terminology tool to support a multipurpose national coding system for surgical procedures.

    PubMed

    Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H

    2000-09-01

    Generalised architecture for languages, encyclopedias and nomenclatures in medicine (GALEN) has developed a new generation of terminology tools based on a language-independent model describing the semantics, allowing computer processing, multiple reuses, and natural language understanding applications, to facilitate the sharing and maintenance of consistent medical knowledge. During the European Union 4th Framework Programme project GALEN-IN-USE, and later within two contracts with the national health authorities, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures named CCAM in a minority-language country, France. On one hand, we contributed to a language-independent knowledge repository and multilingual semantic dictionaries for multicultural Europe. On the other hand, we supported the traditional process for creating a new coding system in medicine, which is very labour-intensive, with artificial intelligence tools using a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW (for classification workbench) to process French professional medical language rubrics, produced by the national colleges of surgeons domain experts, into intermediate dissections and then into the GRAIL reference ontology model representation. From this language-independent concept model representation, on one hand, we generate with the LNAT natural language generator controlled French natural language to support the finalization of the linguistic labels (first generation) in relation to the meanings of the conceptual system structure. On the other hand, the CLAW classification manager proves very powerful for retrieving the initial domain experts' rubrics list with different categories of concepts (second generation) within a semantically structured representation (third generation) bridged to the electronic patient record's detailed terminology.

  11. Further developments in generating type-safe messaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neswold, R.; King, C.; /Fermilab

    2011-11-01

    At ICALEPCS 09, we introduced a source code generator that allows processes to communicate safely using data types native to each host language. In this paper, we discuss further development that has occurred since the conference in Kobe, Japan, including the addition of three more client languages, an optimization in network packet size, and the addition of a new protocol data type. The protocol compiler is continuing to prove itself as an easy and robust way to get applications written in different languages, hosted on different computer architectures, to communicate. We have two active Erlang projects that are using the protocol compiler to access ACNET data at high data rates. We also used the protocol compiler output to deliver ACNET data to an iPhone/iPad application. Since it takes an average of two weeks to support a new language, we're willing to expand the protocol compiler to support new languages that our community uses.
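    The idea of type-safe messaging can be illustrated with a toy example of the kind of code such a compiler might emit: a typed record whose wire layout both ends agree on. The `Reading` message and its field layout are invented here and are not part of the Fermilab protocol compiler.

    ```python
    # Toy sketch of generated, type-safe message code. The "Reading"
    # message and its layout are invented for illustration.

    import struct
    from dataclasses import dataclass

    _FORMAT = "!Hd"  # network byte order: uint16 device id, float64 value

    @dataclass
    class Reading:
        device: int
        value: float

        def pack(self) -> bytes:
            return struct.pack(_FORMAT, self.device, self.value)

        @classmethod
        def unpack(cls, data: bytes) -> "Reading":
            device, value = struct.unpack(_FORMAT, data)
            return cls(device, value)

    # Because both hosts link against generated code, a type mismatch
    # surfaces as a compile or unpack error rather than corrupted data.
    wire = Reading(7, 3.25).pack()
    roundtrip = Reading.unpack(wire)
    ```

    A real protocol compiler emits the equivalent declarations in each client language, so Erlang, Java, and Python processes all share one wire format.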

  12. UMLS content views appropriate for NLP processing of the biomedical literature vs. clinical text.

    PubMed

    Demner-Fushman, Dina; Mork, James G; Shooshan, Sonya E; Aronson, Alan R

    2010-08-01

    Identification of medical terms in free text is a first step in such Natural Language Processing (NLP) tasks as automatic indexing of biomedical literature and extraction of patients' problem lists from the text of clinical notes. Many tools developed to perform these tasks use biomedical knowledge encoded in the Unified Medical Language System (UMLS) Metathesaurus. We continue our exploration of automatic approaches to creation of subsets (UMLS content views) which can support NLP processing of either the biomedical literature or clinical text. We found that suppression of highly ambiguous terms in the conservative AutoFilter content view can partially replace manual filtering for literature applications, and suppression of two character mappings in the same content view achieves 89.5% precision at 78.6% recall for clinical applications. Published by Elsevier Inc.
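    The precision and recall figures quoted above come from the authors' evaluation; the mechanics of that kind of term-level scoring can be sketched as follows. The sample term sets are invented toy data, not drawn from the UMLS.

    ```python
    # Sketch of term-level precision/recall scoring for a content-view
    # filter. The gold and predicted term sets are invented examples.

    def precision_recall(predicted: set, gold: set) -> tuple:
        """Score predicted terms against a gold-standard annotation."""
        tp = len(predicted & gold)          # true positives
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        return precision, recall

    gold = {"aspirin", "diabetes", "cough", "fever"}
    predicted = {"aspirin", "diabetes", "cold"}  # "cold" is a false positive

    p, r = precision_recall(predicted, gold)
    ```

    Suppressing highly ambiguous terms trades recall for precision, which is exactly the trade-off the content views above are tuned for.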

  13. Research accomplished at the Knowledge Based Systems Lab: IDEF3, version 1.0

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Menzel, Christopher P.; Mayer, Paula S. D.

    1991-01-01

    An overview is presented of the foundations and content of the evolving IDEF3 process flow and object state description capture method. This method is currently in beta test. Ongoing efforts in the formulation of formal semantics models for descriptions captured in the outlined form and in the actual application of this method can be expected to cause an evolution in the method language. A language is described for the representation of process and object state centered system description. IDEF3 is a scenario driven process flow modeling methodology created specifically for these types of descriptive activities.

  14. Zsyntax: a formal language for molecular biology with projected applications in text mining and biological prediction.

    PubMed

    Boniolo, Giovanni; D'Agostino, Marcello; Di Fiore, Pier Paolo

    2010-03-03

    We propose a formal language that allows for transposing biological information precisely and rigorously into machine-readable information. This language, which we call Zsyntax (where Z stands for the Greek word ζωή, life), is grounded on a particular type of non-classical logic, and it can be used to write algorithms and computer programs. We present it as a first step towards a comprehensive formal language for molecular biology in which any biological process can be written and analyzed as a sort of logical "deduction". Moreover, we illustrate the potential value of this language, both in the field of text mining and in that of biological prediction.

  15. Computer Assisted Reading in German as a Foreign Language, Developing and Testing an NLP-Based Application

    ERIC Educational Resources Information Center

    Wood, Peter

    2011-01-01

    "QuickAssist," the program presented in this paper, uses natural language processing (NLP) technologies. It places a range of NLP tools at the disposal of learners, intended to enable them to independently read and comprehend a German text of their choice while they extend their vocabulary, learn about different uses of particular words,…

  16. Reading English as a Second Language: Moving from Theory. Monographs in Teaching and Learning Number 4.

    ERIC Educational Resources Information Center

    Twyford, C. W., Ed.; And Others

    Because application of reading research and theory development to the English as a second language (ESL) classroom has not always been forthcoming, this monograph is aimed at helping teachers develop better, sounder reading instruction in the ESL classroom through a better understanding of the reading processes, of the factors that affect reading…

  17. The Blended Learning Environment on the Foreign Language Learning Process: A Balance for Motivation and Achievement

    ERIC Educational Resources Information Center

    Isiguzel, Bahar

    2014-01-01

    The purpose of this study is to determine the effects on motivation and success within the application of blended learning environments in the foreign language class. The research sample is formed by third grade students studying in the tourism and hotel management programs of the faculty for tourism and the faculty of economics and administrative…

  18. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based, language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  19. Linear- and Repetitive Feature Detection Within Remotely Sensed Imagery

    DTIC Science & Technology

    2017-04-01

    applicable to Python or other programming languages with image-processing capabilities. 4.1 Classification machine learning: The first methodology uses...remotely sensed images that are in panchromatic or true-color formats. Image-processing techniques, including Hough transforms, machine learning, and...data fusion; 6.3 Context-based processing

  20. Formulaic Language in Alzheimer’s Disease

    PubMed Central

    Bridges, Kelly Ann; Van Lancker Sidtis, Diana

    2013-01-01

    Background Studies of productive language in Alzheimer’s disease (AD) have focused on formal testing of syntax and semantics but have directed less attention to naturalistic discourse and formulaic language. Clinical observations suggest that individuals with AD retain the ability to produce formulaic language long after other cognitive abilities have deteriorated. Aims This study quantifies production of formulaic expressions in the spontaneous speech of individuals with AD. Persons with early- and late-onset forms of the disease were compared. Methods & Procedures Conversational language samples of individuals with early- (n = 5) and late-onset (n = 6) AD and healthy controls (n = 5) were analyzed to determine whether formulaic language, as measured by the number of words in formulaic expressions, differs between groups. Outcomes & Results Results indicate that individuals with AD, regardless of age of onset, used significantly more formulaic expressions than healthy controls. The early- and late-onset AD groups did not differ on formulaic language measures. Conclusions These findings contribute to a dual process model of cerebral function, which proposes differing processing principles for formulaic and novel expressions. In this model, subcortical areas, which remain intact into late in the progression of Alzheimer’s disease, play an important role in the production of formulaic language. Applications to clinical practice include identifying preserved formulaic language and providing informed counseling to patient and family. PMID:24187417

  1. PyPele Rewritten To Use MPI

    NASA Technical Reports Server (NTRS)

    Hockney, George; Lee, Seungwon

    2008-01-01

    A computer program known as PyPele, originally written as a Python-language extension module of a C++ language program, has been rewritten in pure Python. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission-design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell, a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses the Message Passing Interface (MPI) [an unofficial de facto standard language-independent application programming interface for message-passing on a parallel computer] while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written previously for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without needing to rewrite those programs.
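    The dispatch pattern described above can be sketched with the standard library alone: a coordinator farms independent tasks out to workers and gathers results. In real PyPele, MPI ranks on a cluster (e.g. via mpi4py) play the worker role; the thread pool and the `evaluate_design` task here are a self-contained stand-in, not the actual PyPele API.

    ```python
    # Stand-alone analogy for embarrassingly parallel dispatch: the
    # coordinator maps independent tasks over a worker pool. In PyPele
    # the pool would be MPI ranks; evaluate_design is an invented task.

    from concurrent.futures import ThreadPoolExecutor

    def evaluate_design(params: int) -> int:
        # stand-in for one independent mission-analysis run
        return params * params

    def dispatch(all_params):
        """Fan tasks out to workers and gather results in order."""
        with ThreadPoolExecutor(max_workers=2) as pool:
            return list(pool.map(evaluate_design, all_params))

    results = dispatch([1, 2, 3, 4])
    ```

    Because the tasks share no state, swapping the pool for MPI ranks changes the transport without changing the application code, which is why existing PyPele programs needed no rewrite.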

  2. Block Architecture Problem with Depth First Search Solution and Its Application

    NASA Astrophysics Data System (ADS)

    Rahim, Robbi; Abdullah, Dahlan; Simarmata, Janner; Pranolo, Andri; Saleh Ahmar, Ansari; Hidayat, Rahmat; Napitupulu, Darmawan; Nurdiyanto, Heri; Febriadi, Bayu; Zamzami, Z.

    2018-01-01

    Searching is a common task performed by many computer users, and the Raita algorithm is one algorithm that can be used to match and find information according to the patterns entered. The Raita algorithm was applied to a file search application written in the Java programming language; testing showed that the file search is fast and accurate and supports many data types.
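    The matching idea behind the Raita algorithm can be sketched briefly: compare the pattern's last, first, and middle characters against the text window before checking the rest, and shift with a Horspool-style bad-character table. This is a simplified illustration, not the paper's Java implementation.

    ```python
    # Sketch of Raita string matching: test last, first, and middle
    # characters of the window first, then the full pattern, shifting
    # by a Horspool-style bad-character table on a mismatch.

    def raita_search(text: str, pattern: str) -> int:
        """Return the index of the first occurrence of pattern, or -1."""
        m, n = len(pattern), len(text)
        if m == 0 or m > n:
            return -1 if m else 0
        # bad-character shift: distance from each char to the pattern end
        shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
        first, middle, last = pattern[0], pattern[m // 2], pattern[-1]
        i = 0
        while i <= n - m:
            window = text[i:i + m]
            if (window[-1] == last and window[0] == first
                    and window[m // 2] == middle
                    and window == pattern):
                return i
            i += shift.get(window[-1], m)
        return -1

    pos = raita_search("natural language processing", "language")
    ```

    The cheap three-character pre-check rejects most windows before the full comparison, which is where Raita gains over plain Boyer-Moore-Horspool in practice.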

  3. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    PubMed Central

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
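    The document model described above, original text retained alongside a structured component whose elements point back into that text, can be sketched with the standard XML tooling. The element and attribute names below are invented for illustration; they are not the authors' actual DTD.

    ```python
    # Sketch of an XML document model that keeps the original report
    # text and links structured findings to character spans within it.
    # Element/attribute names are invented, not the authors' DTD.

    import xml.etree.ElementTree as ET

    report_text = "Chest x-ray shows no acute infiltrate."
    term = "infiltrate"
    start = report_text.index(term)

    doc = ET.Element("report")
    ET.SubElement(doc, "text").text = report_text
    structured = ET.SubElement(doc, "structured")
    ET.SubElement(structured, "finding",
                  concept=term, negated="true",
                  start=str(start), end=str(start + len(term)))

    xml_out = ET.tostring(doc, encoding="unicode")
    ```

    Queries run against the structured component, while the span attributes let a reviewer highlight the supporting phrase in the original report.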

  4. PrismTech Data Distribution Service Java API Evaluation

    NASA Technical Reports Server (NTRS)

    Riggs, Cortney

    2008-01-01

    My internship duties with Launch Control Systems required me to start performance testing of an Object Management Group's (OMG) Data Distribution Service (DDS) specification implementation by PrismTech Limited through the Java programming language application programming interface (API). DDS is a networking middleware for Real-Time Data Distribution. The performance testing involves latency, redundant publishers, extended duration, redundant failover, and read performance. Time constraints allowed only for a data throughput test. I have designed the testing applications to perform all performance tests when time is allowed. Performance evaluation data such as megabits per second and central processing unit (CPU) time consumption were not easily attainable through the Java programming language; they required new methods and classes created in the test applications. Evaluation of this product showed the rate that data can be sent across the network. Performance rates are better on Linux platforms than AIX and Sun platforms. Compared to previous C++ programming language API, the performance evaluation also shows the language differences for the implementation. The Java API of the DDS has a lower throughput performance than the C++ API.

  5. Constraint processing in our extensible language for cooperative imaging system

    NASA Astrophysics Data System (ADS)

    Aoki, Minoru; Murao, Yo; Enomoto, Hajime

    1996-02-01

    The extensible WELL (Window-based elaboration language) has been developed using the concept of a common platform, where both client and server can communicate with each other with support from a communication manager. This extensible language is based on an object-oriented design and introduces constraint processing. Any kind of service in the extensible language, including imaging, is controlled by constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied by cooperative processes using constraints. Constraints are treated similarly to data, because the system should have flexibility in the execution of many kinds of services. A similar control process is defined using intensional logic. There are two kinds of constraints: temporal and modal constraints. In rendering the constraints, the predicate format, as the relation between attribute values, warrants the validity of entities as data. As an imaging example, a processing procedure of interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system and how the constraints work for generating moving pictures.

  6. Experimenting with C2 Applications and Federated Infrastructures for Integrated Full-Spectrum Operational Environments in Support of Collaborative Planning and Interoperable Execution

    DTIC Science & Technology

    2004-06-01

    Situation Understanding) Common Operational Pictures Planning & Decision Support Capabilities Message & Order Processing Common Operational...Pictures Planning & Decision Support Capabilities Message & Order Processing Common Languages & Data Models Modeling & Simulation Domain

  7. Application specific serial arithmetic arrays

    NASA Technical Reports Server (NTRS)

    Winters, K.; Mathews, D.; Thompson, T.

    1990-01-01

    High-performance systolic arrays of serial-parallel multiplier elements may be rapidly constructed for specific applications by applying hardware description language techniques to a library of full-custom CMOS building blocks. Single-clock pre-charged circuits have been implemented for these arrays at clock rates in excess of 100 MHz using economical 2-micron (minimum feature size) CMOS processes, which may be quickly configured for a variety of applications. A number of application-specific arrays are presented, including a 2-D convolver for image processing, an integer polynomial solver, and a finite-field polynomial solver.

  8. Space Station Mission Planning System (MPS) development study. Volume 2

    NASA Technical Reports Server (NTRS)

    Klus, W. J.

    1987-01-01

    The process and existing software used for Spacelab payload mission planning were studied. A complete baseline definition of the Spacelab payload mission planning process was established, along with a definition of existing software capabilities for potential extrapolation to the Space Station. This information was used as a basis for defining system requirements to support Space Station mission planning. The Space Station mission planning concept was reviewed for the purpose of identifying areas where artificial intelligence concepts might offer substantially improved capability. Three specific artificial intelligence concepts were to be investigated for applicability: natural language interfaces; expert systems; and automatic programming. The advantages and disadvantages of interfacing an artificial intelligence language with existing FORTRAN programs or of converting totally to a new programming language were identified.

  9. Dual Sticky Hierarchical Dirichlet Process Hidden Markov Model and Its Application to Natural Language Description of Motions.

    PubMed

    Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen

    2017-09-25

    In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.

  10. Instruments speak global language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nudo, L.

    1993-07-01

    If all goes as planned, companies that use instruments for measurement and control will get more complete, reliable and repeatable information about their processes with advanced digital devices that speak a global language. That language, in technical terms, is known as international fieldbus. But it's not much different from English's role as the international language of business. Companies that use a remote measurement device for environmental applications, such as pH control and fugitive emissions control, are candidates for fieldbus devices, which are much faster and measure more process variables than their counterpart analog devices. With the advent of a global fieldbus, users will see digital valves, solenoids and multivariable transmitters. Fieldbus technology redefines the roles of the control system and field devices. The control system still serves as a central clearinghouse, but field devices will handle more control and reporting functions and generate data that can be used for trending and preventive maintenance.

  11. User's guide to the LLL BASIC interpreter. [For 8080-based MCS-80 microcomputer system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allison, T.; Eckard, R.; Barber, J.

    1977-06-09

    Scientists are finding increased applications for microcomputers as process controllers in their experiments. However, while microcomputers are small and inexpensive, they are difficult to program in machine or assembly language. A high-level language is needed to enable scientists to develop their own microcomputer programs for their experiments on location. Recognizing this need, LLL contracted to have such a language developed. This report describes the result--the LLL BASIC interpreter, which operates with LLL's 8080-based MCS-80 microcomputer system. 4 tables.

  12. Computer Music

    NASA Astrophysics Data System (ADS)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  13. Using Java for distributed computing in the Gaia satellite data processing

    NASA Astrophysics Data System (ADS)

    O'Mullane, William; Luri, Xavier; Parsons, Paul; Lammers, Uwe; Hoar, John; Hernandez, Jose

    2011-10-01

    In recent years Java has matured to a stable easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999 they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1,000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. This has been successfully running since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.

  14. Predictors of word-level literacy amongst Grade 3 children in five diverse languages.

    PubMed

    Smythe, Ian; Everatt, John; Al-Menaye, Nasser; He, Xianyou; Capellini, Simone; Gyarmathy, Eva; Siegel, Linda S

    2008-08-01

    Groups of Grade 3 children were tested on measures of word-level literacy and undertook tasks that required the ability to associate sounds with letter sequences and that involved visual, auditory and phonological-processing skills. These groups came from different language backgrounds in which the language of instruction was Arabic, Chinese, English, Hungarian or Portuguese. Similar measures were used across the groups, with tests being adapted to be appropriate for the language of the children. Findings indicated that measures of decoding and phonological-processing skills were good predictors of word reading and spelling among Arabic- and English-speaking children, but were less able to predict variability in these same early literacy skills among Chinese- and Hungarian-speaking children, and were better at predicting variability in Portuguese word reading than spelling. Results were discussed with reference to the relative transparency of the script and issues of dyslexia assessment across languages. Overall, the findings argue for the need to take account of features of the orthography used to represent a language when developing assessment procedures for a particular language and that assessment of word-level literacy skills and a phonological perspective of dyslexia may not be universally applicable across all language contexts. Copyright 2008 John Wiley & Sons, Ltd.

  15. Integrating language models into classifiers for BCI communication: a review

    NASA Astrophysics Data System (ADS)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could realize the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  16. Integrating language models into classifiers for BCI communication: a review.

    PubMed

    Speier, W; Arnold, C; Pouratian, N

    2016-06-01

    The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could realize the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.
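    The core fusion idea in the two reviews above, weighting the classifier's noisy evidence by a language-model prior over the next symbol, can be sketched as a simple Bayesian update. The probabilities below are invented toy values, not figures from any BCI study.

    ```python
    # Sketch of language-model/classifier fusion for BCI spelling:
    # posterior over the next letter ∝ LM prior × classifier likelihood.
    # All probability values here are invented for illustration.

    def fuse(prior: dict, likelihood: dict) -> dict:
        """Combine a language-model prior with classifier evidence."""
        unnorm = {c: prior[c] * likelihood.get(c, 0.0) for c in prior}
        z = sum(unnorm.values())
        return {c: p / z for c, p in unnorm.items()}

    # After the user has typed "TH", a language model favours "E"...
    prior = {"E": 0.7, "A": 0.2, "Q": 0.1}
    # ...while the classifier alone is ambiguous between "E" and "A".
    likelihood = {"E": 0.5, "A": 0.5, "Q": 0.0}

    posterior = fuse(prior, likelihood)
    best = max(posterior, key=posterior.get)
    ```

    The prior resolves ambiguity the signal cannot, which is why language-model integration improves selection accuracy and typing speed in these systems.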

  17. Pedagogy and Related Criteria: The Selection of Software for Computer Assisted Language Learning

    ERIC Educational Resources Information Center

    Samuels, Jeffrey D.

    2013-01-01

    Computer-Assisted Language Learning (CALL) is an established field of academic inquiry with distinct applications for second language teaching and learning. Many CALL professionals direct language labs or language resource centers (LRCs) in which CALL software applications and generic software applications support language learning programs and…

  18. OIL—Output input language for data connectivity between geoscientific software applications

    NASA Astrophysics Data System (ADS)

    Amin Khan, Khalid; Akhter, Gulraiz; Ahmad, Zulfiqar

    2010-05-01

    Geoscientific computing has become so complex that no single software application can perform all the processing steps required to get the desired results. Thus for a given set of analyses, several specialized software applications are required, which must be interconnected for electronic flow of data. In this network of applications the outputs of one application become inputs of other applications. Each of these applications usually involves more than one data type and may have its own data format, making it incompatible with other applications in terms of data connectivity. Consequently, several data format conversion utilities are developed in-house to provide data connectivity between applications. In practice there is no end to this problem: each time a new application is added to the system, a new set of data conversion utilities must be developed. This paper presents a flexible data format engine, programmable through a platform-independent, interpreted language named Output Input Language (OIL). Its unique architecture allows input and output formats to be defined independently of each other by two separate programs. Thus the read and write routines for each format are coded only once, and a data connectivity link between two formats is established by combining their read and write programs. This results in fewer programs with no redundancy and maximum reuse, enabling rapid application development and easy maintenance of data connectivity links.
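
    The decoupling described in the abstract — one read program and one write program per format, combined on demand — can be sketched in Python. The registry, format names, and sample data below are invented for illustration; OIL itself is an interpreted language with its own syntax, not a Python API.

```python
# Sketch of the read/write decoupling idea: each format gets exactly one
# reader and one writer; any converter is just a (reader, writer) pair,
# so N readers and M writers yield N*M conversions from N+M programs.
readers = {}
writers = {}

def reader(fmt):
    def register(fn):
        readers[fmt] = fn
        return fn
    return register

def writer(fmt):
    def register(fn):
        writers[fmt] = fn
        return fn
    return register

@reader("csv")
def read_csv(text):
    # Parse comma-separated text into a neutral row-list representation.
    return [line.split(",") for line in text.splitlines()]

@writer("tsv")
def write_tsv(rows):
    # Serialize the neutral representation as tab-separated text.
    return "\n".join("\t".join(row) for row in rows)

def convert(text, src, dst):
    # A connectivity link is the composition of a read and a write program.
    return writers[dst](readers[src](text))

print(convert("a,b\nc,d", "csv", "tsv"))  # prints "a\tb\nc\td"
```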

  19. Benchmarking natural-language parsers for biological applications using dependency graphs.

    PubMed

    Clegg, Andrew B; Shepherd, Adrian J

    2007-01-25

    Interest is growing in the application of syntactic parsers to natural language processing problems in biology, but assessing their performance is difficult because differences in linguistic convention can falsely appear to be errors. We present a method for evaluating their accuracy using an intermediate representation based on dependency graphs, in which the semantic relationships important in most information extraction tasks are closer to the surface. We also demonstrate how this method can be easily tailored to various application-driven criteria. Using the GENIA corpus as a gold standard, we tested four open-source parsers which have been used in bioinformatics projects. We first present overall performance measures, and test the two leading tools, the Charniak-Lease and Bikel parsers, on subtasks tailored to reflect the requirements of a system for extracting gene expression relationships. These two tools clearly outperform the other parsers in the evaluation, and achieve accuracy levels comparable to or exceeding native dependency parsers on similar tasks in previous biological evaluations. Evaluating using dependency graphs allows parsers to be tested easily on criteria chosen according to the semantics of particular biological applications, drawing attention to important mistakes and soaking up many insignificant differences that would otherwise be reported as errors. Generating high-accuracy dependency graphs from the output of phrase-structure parsers also provides access to the more detailed syntax trees that are used in several natural-language processing techniques.
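
    The evaluation idea — reducing parser output to dependency relations and scoring them against a gold standard — can be sketched as follows. The (head, relation, dependent) triple representation, the example sentence, and the scoring code are generic illustrations, not the authors' actual evaluation software.

```python
# Score a parser's dependency graph against a gold-standard graph by
# treating each graph as a set of (head, relation, dependent) triples
# and computing precision, recall, and F1 over the triples.
def score(gold, predicted):
    tp = len(gold & predicted)  # triples the parser got exactly right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented example: "the recombinant gene expresses protein"
gold = {("expresses", "nsubj", "gene"),
        ("expresses", "dobj", "protein"),
        ("gene", "amod", "recombinant")}
pred = {("expresses", "nsubj", "gene"),
        ("expresses", "dobj", "protein"),
        ("protein", "amod", "recombinant")}  # one wrong attachment

p, r, f = score(gold, pred)  # each is 2/3
```

    Scoring over relation triples is what lets an evaluation "soak up" insignificant bracketing differences between parsers: two phrase-structure trees that disagree on linguistic convention can still map to identical triples.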

  20. Benchmarking natural-language parsers for biological applications using dependency graphs

    PubMed Central

    Clegg, Andrew B; Shepherd, Adrian J

    2007-01-01

    Background Interest is growing in the application of syntactic parsers to natural language processing problems in biology, but assessing their performance is difficult because differences in linguistic convention can falsely appear to be errors. We present a method for evaluating their accuracy using an intermediate representation based on dependency graphs, in which the semantic relationships important in most information extraction tasks are closer to the surface. We also demonstrate how this method can be easily tailored to various application-driven criteria. Results Using the GENIA corpus as a gold standard, we tested four open-source parsers which have been used in bioinformatics projects. We first present overall performance measures, and test the two leading tools, the Charniak-Lease and Bikel parsers, on subtasks tailored to reflect the requirements of a system for extracting gene expression relationships. These two tools clearly outperform the other parsers in the evaluation, and achieve accuracy levels comparable to or exceeding native dependency parsers on similar tasks in previous biological evaluations. Conclusion Evaluating using dependency graphs allows parsers to be tested easily on criteria chosen according to the semantics of particular biological applications, drawing attention to important mistakes and soaking up many insignificant differences that would otherwise be reported as errors. Generating high-accuracy dependency graphs from the output of phrase-structure parsers also provides access to the more detailed syntax trees that are used in several natural-language processing techniques. PMID:17254351

  1. Design Of Computer Based Test Using The Unified Modeling Language

    NASA Astrophysics Data System (ADS)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    The admission selection of Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint selection admission test for state polytechnics (SB-UMPN), and the independent test (UM-Polbeng) was conducted using Paper-Based Tests (PBT). The Paper-Based Test model has several weaknesses: it wastes paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a Computer-Based Test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams, and sequence diagrams. During the design of the application, attention must be paid to password-protecting the test questions before they are shown, through encryption and decryption; the RSA cryptography algorithm was used for this purpose. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used in the Computer-Based Test application was a client-server model over a Local Area Network (LAN). The result of the design was a Computer-Based Test application for admission selection at Politeknik Negeri Bengkalis.
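
    The question-randomization step in the abstract names the Fisher-Yates shuffle. A minimal generic sketch in Python (not the authors' implementation, which runs inside their CBT application):

```python
import random

def fisher_yates_shuffle(items):
    """Return a new list with the items in uniformly random order.

    Walks the list from the last index down to 1, swapping each
    element with one chosen uniformly at random from the unvisited
    prefix (including itself) -- the classic Fisher-Yates algorithm.
    """
    shuffled = list(items)  # copy so the question bank itself is untouched
    for i in range(len(shuffled) - 1, 0, -1):
        j = random.randint(0, i)  # 0 <= j <= i
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    return shuffled

question_bank = [f"Q{n}" for n in range(1, 11)]  # hypothetical questions
paper = fisher_yates_shuffle(question_bank)
# same questions, new order
assert sorted(paper) == sorted(question_bank)
```

    Each candidate can thus receive the questions in a different order, which is one of the anti-leakage measures the abstract describes.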

  2. Using Analytic Hierarchy Process in Textbook Evaluation

    ERIC Educational Resources Information Center

    Kato, Shigeo

    2014-01-01

    This study demonstrates the application of the analytic hierarchy process (AHP) in English language teaching materials evaluation, focusing in particular on its potential for systematically integrating different components of evaluation criteria in a variety of teaching contexts. AHP is a measurement procedure wherein pairwise comparisons are made…

  3. EVA - A Textual Data Processing Tool.

    ERIC Educational Resources Information Center

    Jakopin, Primoz

    EVA, a text processing tool designed to be self-contained and useful for a variety of languages, is described briefly, and its extensive coded character set is illustrated. Features, specifications, and database functions are noted. Its application in development of a Slovenian literary dictionary is also described. (MSE)

  4. Aphasia

    MedlinePlus

    ... of speech-generating applications on mobile devices like tablets can also provide an alternative way to communicate ... on using advanced imaging methods, such as functional magnetic resonance imaging (fMRI), to explore how language is processed in ...

  5. Enabling model checking for collaborative process analysis: from BPMN to `Network of Timed Automata'

    NASA Astrophysics Data System (ADS)

    Mallek, Sihem; Daclin, Nicolas; Chapurlat, Vincent; Vallespir, Bruno

    2015-04-01

    Interoperability is a prerequisite for partners involved in a collaboration; as a consequence, the lack of interoperability is now considered a major obstacle. The research work presented in this paper aims to develop an approach that allows specifying and verifying a set of interoperability requirements to be satisfied by each partner in the collaborative process prior to process implementation. To enable the verification of these interoperability requirements, it is necessary first and foremost to generate a model of the targeted collaborative process; for this research effort, the standardised language BPMN 2.0 is used. Afterwards, a verification technique must be introduced, and model checking is the preferred option herein. This paper focuses on application of the model checker UPPAAL in order to verify interoperability requirements for the given collaborative process model. First, this entails translating the collaborative process model from BPMN into a UPPAAL modelling language called 'Network of Timed Automata'. Second, it becomes necessary to formalise interoperability requirements into properties with the dedicated UPPAAL language, i.e. the temporal logic TCTL.

  6. Reliable Electronic Text: The Elusive Prerequisite for a Host of Human Language Technologies

    DTIC Science & Technology

    2010-09-30

    is not always the case—for example, ligatures in Latin-fonts, and glyphs in Arabic fonts (King, 2008; Carrier, 2009). This complexity, and others...such effects can render electronic text useless for natural language processing ( NLP ). Typically, file converters do not expose the details of the...the many component NLP technologies typically used inside information extraction and text categorization applications, such as tokenization, part-of

  7. Quantifying the driving factors for language shift in a bilingual region.

    PubMed

    Prochazka, Katharina; Vogl, Gero

    2017-04-25

    Many of the world's around 6,000 languages are in danger of disappearing as people give up use of a minority language in favor of the majority language in a process called language shift. Language shift can be monitored on a large scale through the use of mathematical models by way of differential equations, for example, reaction-diffusion equations. Here, we use a different approach: we propose a model for language dynamics based on the principles of cellular automata/agent-based modeling and combine it with very detailed empirical data. Our model makes it possible to follow language dynamics over space and time, whereas existing models based on differential equations average over space and consequently provide no information on local changes in language use. Additionally, cellular automata models can be used even in cases where models based on differential equations are not applicable, for example, in situations where one language has become dispersed and retreated to language islands. Using data from a bilingual region in Austria, we show that the most important factor in determining the spread and retreat of a language is the interaction with speakers of the same language. External factors like bilingual schools or parish language have only a minor influence.
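
    The local-interaction principle behind such cellular-automaton models can be illustrated with a toy one-dimensional majority-rule automaton. This is a deliberate simplification invented for illustration; the paper's model is two-dimensional, agent-based, and driven by detailed empirical data.

```python
# Toy 1-D cellular automaton for language shift: each cell speaks
# language "A" or "B" and, at each step, adopts the majority language of
# its neighborhood (itself plus left and right neighbors, wrapping
# around). Interaction with same-language speakers is the only driver.
def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        neigh = [cells[(i - 1) % n], cells[i], cells[(i + 1) % n]]
        out.append("A" if neigh.count("A") >= 2 else "B")
    return out

# Two contiguous language regions are a fixed point of the rule:
# compact "language islands" persist.
stable = step(["A"] * 6 + ["B"] * 6)
assert stable == ["A"] * 6 + ["B"] * 6

# A lone minority-language speaker is assimilated in one step.
assert step(["B"] + ["A"] * 5) == ["A"] * 6
```

    Even this toy rule reproduces the qualitative behavior the abstract reports: what matters for the spread or retreat of a language is interaction with speakers of the same language in the immediate neighborhood.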

  8. An ontology model for nursing narratives with natural language generation technology.

    PubMed

    Min, Yul Ha; Park, Hyeoun-Ae; Jeon, Eunjoo; Lee, Joo Yun; Jo, Soo Jung

    2013-01-01

    The purpose of this study was to develop an ontology model to generate nursing narratives as natural as human language from the entity-attribute-value triplets of a detailed clinical model using natural language generation technology. The model was based on the types of information and their documentation time along the nursing process. The types of information are data characterizing the patient status, inferences made by the nurse from the patient data, and nursing actions selected by the nurse to change the patient status. This information was linked to the nursing process based on the time of documentation. We describe a case study illustrating the application of this model in an acute-care setting. The proposed model provides a strategy for designing an electronic nursing record system.

  9. Scientific Programming Using Java: A Remote Sensing Example

    NASA Technical Reports Server (NTRS)

    Prados, Don; Mohamed, Mohamed A.; Johnson, Michael; Cao, Changyong; Gasser, Jerry

    1999-01-01

    This paper presents results of a project to port remote sensing code from the C programming language to Java. The advantages and disadvantages of using Java versus C as a scientific programming language in remote sensing applications are discussed. Remote sensing applications deal with voluminous data that require effective memory management, such as buffering operations, when processed. Some of these applications also implement complex computational algorithms, such as Fast Fourier Transformation analysis, that are very performance intensive. Factors considered include performance, precision, complexity, rapidity of development, ease of code reuse, ease of maintenance, memory management, and platform independence. Performance results for radiometric calibration code, using Java for the graphical user interface and C for the domain model, are also presented.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Steven Adriel

    The following discussion contains a high-level description of methods used to implement software for data processing. It describes the directory structures and file handling required to use Excel's Visual Basic for Applications programming language, explains how to identify shot, test, and capture types to appropriately process data, and describes how to interface with the software.

  11. Virtual Vocabulary: Research and Learning in Lexical Processing

    ERIC Educational Resources Information Center

    Schuetze, Ulf; Weimer-Stuckmann, Gerlinde

    2010-01-01

    This article presents the concept development, research programming, and learning design of a lexical processing web application, Virtual Vocabulary, which was developed using theories in both cognitive psychology and second language acquisition (SLA). It is being tested with first-year students of German at the University of Victoria in Canada,…

  12. Early Childhood Classrooms and Computers: Programs with Promise.

    ERIC Educational Resources Information Center

    Hoot, James L.; Kimler, Michele

    Word processing and the LOGO programing language are two microcomputer applications that are beginning to show benefits as learning tools in elementary school classrooms. Word processing packages are especially useful with beginning writers, whose lack of motor coordination often slows down their acquisition of competence in written communication.…

  13. Extraction of phenotypic traits from taxonomic descriptions for the tree of life using natural language processing.

    PubMed

    Endara, Lorena; Cui, Hong; Burleigh, J Gordon

    2018-03-01

    Phenotypic data sets are necessary to elucidate the genealogy of life, but assembling phenotypic data for taxa across the tree of life can be technically challenging and prohibitively time consuming. We describe a semi-automated protocol to facilitate and expedite the assembly of phenotypic character matrices of plants from formal taxonomic descriptions. This pipeline uses new natural language processing (NLP) techniques and a glossary of over 9000 botanical terms. Our protocol includes the Explorer of Taxon Concepts (ETC), an online application that assembles taxon-by-character matrices from taxonomic descriptions, and MatrixConverter, a Java application that enables users to evaluate and discretize the characters extracted by ETC. We demonstrate this protocol using descriptions from Araucariaceae. The NLP pipeline unlocks the phenotypic data found in taxonomic descriptions and makes them usable for evolutionary analyses.
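
    A toy sketch of the extraction idea — mapping phrases in a formal taxonomic description to cells of a taxon-by-character matrix. The patterns, character names, and description text below are invented; ETC itself uses much richer NLP techniques and a glossary of over 9000 botanical terms rather than hand-written regular expressions.

```python
import re

# Hypothetical taxonomic description (not from Araucariaceae).
description = "Leaves spirally arranged, 3-4 cm long; bark thick, resinous."

# Map each phenotypic character to a pattern that captures its value.
patterns = {
    "leaf_arrangement": r"Leaves (\w+) arranged",
    "leaf_length_cm": r"(\d+-\d+) cm long",
    "bark_texture": r"bark (\w+)",
}

# One row of a taxon-by-character matrix for this taxon.
matrix_row = {}
for character, pattern in patterns.items():
    m = re.search(pattern, description)
    if m:
        matrix_row[character] = m.group(1)

# matrix_row -> {'leaf_arrangement': 'spirally',
#                'leaf_length_cm': '3-4', 'bark_texture': 'thick'}
```

    A downstream step analogous to MatrixConverter would then discretize values such as the "3-4" length range into character states usable in evolutionary analyses.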

  14. Formal Analysis of BPMN Models Using Event-B

    NASA Astrophysics Data System (ADS)

    Bryans, Jeremy W.; Wei, Wei

    The use of business process models has gone far beyond documentation purposes. In the development of business applications, they can play the role of an artifact on which high level properties can be verified and design errors can be revealed in an effort to reduce overhead at later software development and diagnosis stages. This paper demonstrates how formal verification may add value to the specification, design and development of business process models in an industrial setting. The analysis of these models is achieved via an algorithmic translation from the de-facto standard business process modeling language BPMN to Event-B, a widely used formal language supported by the Rodin platform which offers a range of simulation and verification technologies.

  15. Polyglot Programming in Applications Used for Genetic Data Analysis

    PubMed Central

    Nowak, Robert M.

    2014-01-01

    Applications used for the analysis of genetic data process large volumes of data with complex algorithms. High performance, flexibility, and a user interface with a web browser are required by these solutions, which can be achieved by using multiple programming languages. In this study, I developed a freely available framework for building software to analyze genetic data, which uses C++, Python, JavaScript, and several libraries. This system was used to build a number of genetic data processing applications and it reduced the time and costs of development. PMID:25197633

  16. Polyglot programming in applications used for genetic data analysis.

    PubMed

    Nowak, Robert M

    2014-01-01

    Applications used for the analysis of genetic data process large volumes of data with complex algorithms. High performance, flexibility, and a user interface with a web browser are required by these solutions, which can be achieved by using multiple programming languages. In this study, I developed a freely available framework for building software to analyze genetic data, which uses C++, Python, JavaScript, and several libraries. This system was used to build a number of genetic data processing applications and it reduced the time and costs of development.

  17. Development and evaluation of a dynamic web-based application.

    PubMed

    Hsieh, Yichuan; Brennan, Patricia Flatley

    2007-10-11

    Traditional consumer health informatics (CHI) applications developed for the lay public on the Web were commonly written in Hypertext Markup Language (HTML). As genetics knowledge advances rapidly and information must be updated in a timely fashion, a different content structure is needed to facilitate information delivery. This poster presents the process of developing a dynamic, database-driven Web CHI application.

  18. The digital language of amino acids.

    PubMed

    Kurić, L

    2007-11-01

    The subject of this paper is a digital approach to the investigation of the biochemical basis of genetic processes. The digital mechanism of nucleic acid and protein bio-syntheses, the evolution of biomacromolecules and, especially, the biochemical evolution of genetic language have been analyzed by the application of cybernetic methods, information theory and system theory, respectively. This paper reports the discovery of new methods for developing the new technologies in genetics. It is about the most advanced digital technology which is based on program, cybernetics and informational systems and laws. The results in the practical application of the new technology could be useful in bioinformatics, genetics, biochemistry, medicine and other natural sciences.

  19. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  20. VHP - An environment for the remote visualization of heuristic processes

    NASA Technical Reports Server (NTRS)

    Crawford, Stuart L.; Leiner, Barry M.

    1991-01-01

    A software system called VHP is introduced which permits the visualization of heuristic algorithms on both resident and remote hardware platforms. The VHP is based on the DCF tool for interprocess communication and is applicable to remote algorithms which can be on different types of hardware and in languages other than VHP. The VHP system is of particular interest to systems in which the visualization of remote processes is required such as robotics for telescience applications.

  1. Aspiring to Unintended Consequences of Natural Language Processing: A Review of Recent Developments in Clinical and Consumer-Generated Text Processing.

    PubMed

    Demner-Fushman, D; Elhadad, N

    2016-11-10

    This paper reviews work over the past two years in Natural Language Processing (NLP) applied to clinical and consumer-generated texts. We included any application or methodological publication that leverages text to facilitate healthcare and address the health-related needs of consumers and populations. Many important developments in clinical text processing, both foundational and task-oriented, were addressed in community-wide evaluations and discussed in corresponding special issues that are referenced in this review. These focused issues and in-depth reviews of several other active research areas, such as pharmacovigilance and summarization, allowed us to discuss in greater depth disease modeling and predictive analytics using clinical texts, and text analysis in social media for healthcare quality assessment, trends towards online interventions based on rapid analysis of health-related posts, and consumer health question answering, among other issues. Our analysis shows that although clinical NLP continues to advance towards practical applications and more NLP methods are used in large-scale live health information applications, more needs to be done to make NLP use in clinical applications a routine widespread reality. Progress in clinical NLP is mirrored by developments in social media text analysis: the research is moving from capturing trends to addressing individual health-related posts, thus showing potential to become a tool for precision medicine and a valuable addition to the standard healthcare quality evaluation tools.

  2. Engineering and Scientific Applications: Using MatLab(Registered Trademark) for Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    MatLab(TradeMark) (MATrix LABoratory) is a numerical computation and simulation tool used by thousands of scientists and engineers in many countries. MatLab performs purely numerical calculations and can be used as a glorified calculator or as an interpreted programming language; its real strength is in matrix manipulations. Computer algebra functionalities are achieved within the MatLab environment using the "symbolic" toolbox. This feature is similar to computer algebra programs, such as Maple or Mathematica, that calculate with mathematical equations using symbolic operations. In its interpreted programming-language form (command interface), MatLab is similar to well-known programming languages such as C/C++, and supports data structures and cell arrays to define classes in object-oriented programming. As such, MatLab is equipped with most of the essential constructs of a higher programming language. MatLab is packaged with an editor and debugging functionality useful for analyzing large MatLab programs and finding errors. We believe there are many ways to approach real-world problems; prescribed methods that ensure sound solutions are incorporated in the design and analysis of data processing and visualization can benefit engineers and scientists in gaining wider insight into the actual implementation of their experiments. This presentation will focus on data processing and visualization aspects of engineering and scientific applications. Specifically, it will discuss methods and techniques to perform intermediate-level data processing covering engineering and scientific problems. MatLab programming techniques will be discussed, including reading various data file formats to produce customized publication-quality graphics, importing engineering and/or scientific data, organizing data in tabular format, exporting data to be used by other software programs such as Microsoft Excel, data presentation, and visualization.

  3. Automated monitoring of medical protocols: a secure and distributed architecture.

    PubMed

    Alsinet, T; Ansótegui, C; Béjar, R; Fernández, C; Manyà, F

    2003-03-01

    The control of the correct application of medical protocols is a key issue in hospital environments. For the automated monitoring of medical protocols, we need a domain-independent language for their representation and a fully or semi-autonomous system that understands the protocols and supervises their application. In this paper we describe a specification language and a multi-agent system architecture for monitoring medical protocols. We model medical services in hospital environments as specialized domain agents and interpret a medical protocol as a negotiation process between agents. A medical service can be involved in multiple medical protocols, so specialized domain agents are independent of negotiation processes and autonomous system agents perform the monitoring tasks. We present the detailed architecture of the system agents and of an important domain agent, the database broker agent, which is responsible for obtaining relevant information about the clinical history of patients. We also describe how we tackle the problems of privacy, integrity, and authentication during the process of exchanging information between agents.

  4. Parallel VLSI architecture emulation and the organization of APSA/MPP

    NASA Technical Reports Server (NTRS)

    Odonnell, John T.

    1987-01-01

    The Applicative Programming System Architecture (APSA) combines an applicative language interpreter with a novel parallel computer architecture that is well suited for Very Large Scale Integration (VLSI) implementation. The Massively Parallel Processor (MPP) can simulate VLSI circuits by allocating one processing element in its square array to an area on a square VLSI chip. As long as there are not too many long data paths, the MPP can simulate a VLSI clock cycle very rapidly. The APSA circuit contains a binary tree with a few long paths and many short ones. A skewed H-tree layout allows every processing element to simulate a leaf cell and up to four tree nodes, with no loss in parallelism. Emulation of a key APSA algorithm on the MPP resulted in performance 16,000 times faster than a Vax. This speed will make it possible for the APSA language interpreter to run fast enough to support research in parallel list processing algorithms.

  5. Designing Specification Languages for Process Control Systems: Lessons Learned and Steps to the Future

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G.; Heimdahl, Mats P. E.; Reese, Jon Damon

    1999-01-01

    Previously, we defined a blackbox formal system modeling language called RSML (Requirements State Machine Language). The language was developed over several years while specifying the system requirements for a collision avoidance system for commercial passenger aircraft. During the language development, we received continual feedback and evaluation by FAA employees and industry representatives, which helped us to produce a specification language that is easily learned and used by application experts. Since the completion of the RSML project, we have continued our research on specification languages. This research is part of a larger effort to investigate the more general problem of providing tools to assist in developing embedded systems. Our latest experimental toolset is called SpecTRM (Specification Tools and Requirements Methodology), and the formal specification language is SpecTRM-RL (SpecTRM Requirements Language). This paper describes what we have learned from our use of RSML and how those lessons were applied to the design of SpecTRM-RL. We discuss our goals for SpecTRM-RL and the design features that support each of these goals.

  6. An XML-Based Manipulation and Query Language for Rule-Based Information

    NASA Astrophysics Data System (ADS)

    Mansour, Essam; Höpfner, Hagen

    Rules are utilized to assist in the monitoring required in activities such as disease management and customer relationship management, and are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules; few focus on managing the rules as a single object with a management life-cycle. This paper presents our manipulation and query language, developed to facilitate the maintenance of this object during its life-cycle and to query the information it contains. The language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.

  7. Spanish language generation engine to enhance the syntactic quality of AAC systems

    NASA Astrophysics Data System (ADS)

    Narváez A., Cristian; Sastoque H., Sebastián.; Iregui G., Marcela

    2015-12-01

    People with Complex Communication Needs (CCN) face difficulties communicating their ideas, feelings, and needs. Augmentative and Alternative Communication (AAC) approaches aim to provide support to enhance the socialization of these individuals. However, current applications have many limitations related to system operation, target scenarios, and language consistency. This work presents an AAC approach that enhances the produced messages by applying elements of Natural Language Generation. Specifically, a Spanish language engine, composed of a grammar ontology and a set of linguistic rules, is proposed to improve naturalness in the communication process when persons with CCN tell stories about their daily activities to non-disabled receivers. The assessment of the proposed method confirms the validity of the model for improving message quality.

  8. Quantifying the driving factors for language shift in a bilingual region

    PubMed Central

    Prochazka, Katharina; Vogl, Gero

    2017-01-01

    Many of the world’s around 6,000 languages are in danger of disappearing as people give up use of a minority language in favor of the majority language in a process called language shift. Language shift can be monitored on a large scale through the use of mathematical models by way of differential equations, for example, reaction–diffusion equations. Here, we use a different approach: we propose a model for language dynamics based on the principles of cellular automata/agent-based modeling and combine it with very detailed empirical data. Our model makes it possible to follow language dynamics over space and time, whereas existing models based on differential equations average over space and consequently provide no information on local changes in language use. Additionally, cellular automata models can be used even in cases where models based on differential equations are not applicable, for example, in situations where one language has become dispersed and retreated to language islands. Using data from a bilingual region in Austria, we show that the most important factor in determining the spread and retreat of a language is the interaction with speakers of the same language. External factors like bilingual schools or parish language have only a minor influence. PMID:28298530

  9. 75 FR 13058 - Approval and Promulgation of Implementation Plans; Idaho

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-18

    ... for the control of nonmetallic mineral processing plants (IDEQ Docket 58-0101-0002 and a portion of..., 204 and 205 now include language stating that the applicable Federal regulations are incorporated by....01.01.200. IDAPA 58.01.01.225 Permit to Construct Processing Fee, was revised for consistency with...

  10. Common data buffer

    NASA Technical Reports Server (NTRS)

    Byrne, F.

    1981-01-01

    Time-shared interface speeds data processing in distributed computer network. Two-level high-speed scanning approach routes information to buffer, portion of which is reserved for series of "first-in, first-out" memory stacks. Buffer address structure and memory are protected from noise or failed components by error correcting code. System is applicable to any computer or processing language.
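A toy software analogue of the buffering idea, assuming a dictionary of per-destination first-in, first-out queues; the actual NASA design is a hardware buffer with a protected address structure and error-correcting codes, none of which is modeled here.

```python
from collections import deque

class CommonDataBuffer:
    """Illustrative sketch (not the NASA design): a shared buffer whose
    reserved region is a set of FIFO queues, one per destination
    processor in the distributed network."""

    def __init__(self, destinations):
        self.fifos = {d: deque() for d in destinations}

    def route(self, destination, word):
        self.fifos[destination].append(word)      # scanner deposits data

    def read(self, destination):
        return self.fifos[destination].popleft()  # consumer drains in order

buf = CommonDataBuffer(["cpu_a", "cpu_b"])
buf.route("cpu_a", 0x1F)
buf.route("cpu_a", 0x2E)
first = buf.read("cpu_a")  # first word in is the first word out
```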

  11. Proceedings of the Fourth Annual Workshop on the Use of Digital Computers in Process Control.

    ERIC Educational Resources Information Center

    Smith, Cecil L., Ed.

    Contents: Computer hardware testing (results of vendor-user interaction); CODIL (a new language for process control programing); the design and implementation of control systems utilizing CRT display consoles; the systems contractor - valuable professional or unnecessary middle man; power station digital computer applications; from inspiration to…

  12. Word Recognition Processing Efficiency as a Component of Second Language Listening

    ERIC Educational Resources Information Center

    Joyce, Paul

    2013-01-01

    This study investigated the application of the speeded lexical decision task to L2 aural processing efficiency. One hundred and twenty Japanese university students completed an aural word/nonword task. When the coefficient of variation (CV) of lexical decision time was correlated with reaction time (RT), the results suggested that the single-word recognition…
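In this line of research the CV is simply the standard deviation of reaction time divided by its mean, used as an index of processing (in)efficiency alongside raw RT. A minimal computation, with made-up reaction times, might look like:

```python
import statistics

def coefficient_of_variation(reaction_times_ms):
    """CV = standard deviation / mean of the reaction times."""
    mean = statistics.mean(reaction_times_ms)
    return statistics.stdev(reaction_times_ms) / mean

rts = [612, 580, 655, 601, 690, 575]  # hypothetical lexical decision RTs (ms)
cv = coefficient_of_variation(rts)
```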

  13. Neural Mechanisms Underlying Musical Pitch Perception and Clinical Applications including Developmental Dyslexia

    PubMed Central

    Yuskaitis, Christopher J.; Parviz, Mahsa; Loui, Psyche; Wan, Catherine Y.; Pearl, Phillip L.

    2017-01-01

    Music production and perception invoke a complex set of cognitive functions that rely on the integration of sensory-motor, cognitive, and emotional pathways. Pitch is a fundamental perceptual attribute of sound and a building block for both music and speech. Although the cerebral processing of pitch is not completely understood, recent advances in imaging and electrophysiology have provided insight into the functional and anatomical pathways of pitch processing. This review examines the current understanding of pitch processing, behavioral and neural variations that give rise to difficulties in pitch processing, and potential applications of music education for language processing disorders such as dyslexia. PMID:26092314

  14. QPA-CLIPS: A language and representation for process control

    NASA Technical Reports Server (NTRS)

    Freund, Thomas G.

    1994-01-01

    QPA-CLIPS is an extension of CLIPS oriented toward process control applications. Its constructs define a dependency network of process actions driven by sensor information. The language consists of three basic constructs: TASK, SENSOR, and FILTER. TASKs define the dependency network describing alternative state transitions for a process. SENSORs and FILTERs define the sensor information sources used to activate state transitions within the network. Deftemplates define these constructs, and their run-time environment is an interpreter knowledge base that performs pattern matching on sensor information and thereby activates TASKs in the dependency network. The pattern-matching technique is based on the repeatable occurrence of a sensor data pattern. QPA-CLIPS has been successfully tested on a SPARCstation providing supervisory control to an Allen-Bradley PLC 5 controller driving molding equipment.

  15. Prospective English Language Teachers' Experiences in Facebook: Adoption, Use and Educational Use in Turkish Context

    ERIC Educational Resources Information Center

    Balcikanli, Cem

    2015-01-01

    There has been increasing attention given to the role of social networking in educational settings. Teacher education is no exception, for it is approaching social media on two fronts: a) applications to enhance learning in the process of teacher preparation or professional development and b) applications in classrooms…

  16. Research into Practice: Scaffolding Learning Processes to Improve Speaking Performance

    ERIC Educational Resources Information Center

    Goh, Christine C. M.

    2017-01-01

    This article is a personal view of the application of results from three areas of research that I believe are relevant to developing second language speaking in the classroom: task repetition, pre-task planning and communication strategies. I will discuss these three areas in terms of level of research application--where research is not applied…

  17. Online Survey, Enrollment, and Examination: Special Internet Applications in Teacher Education.

    ERIC Educational Resources Information Center

    Tu, Jho-Ju; Babione, Carolyn; Chen, Hsin-Chu

    The Teachers College at Emporia State University in Kansas is now utilizing World Wide Web technology for automating the application procedure for student teaching. The general concepts and some of the key terms that are important for understanding the process involved in this project include: a client-server model, HyperText Markup Language,…

  18. Preface to FP-UML 2009

    NASA Astrophysics Data System (ADS)

    Trujillo, Juan; Kim, Dae-Kyoo

    The Unified Modeling Language (UML) has been widely accepted as the standard object-oriented (OO) modeling language for modeling various aspects of software and information systems. The UML is an extensible language, in the sense that it provides mechanisms to introduce new elements for specific domains when necessary, such as web applications, database applications, business modeling, software development processes, and data warehouses. Furthermore, the latest version, UML 2.0, has grown even bigger and more complicated, with more diagrams added for good reasons. Although UML provides different diagrams for modeling different aspects of a software system, not all of them need to be applied in most cases. Therefore, heuristics, design guidelines, and lessons learned from experience are extremely important for the effective use of UML 2.0 and for avoiding unnecessary complication. Approaches are also needed to better manage UML 2.0 and its extensions so that they do not become too complex to manage in the end.

  19. Data-Flow Based Model Analysis

    NASA Technical Reports Server (NTRS)

    Saad, Christian; Bauer, Bernhard

    2010-01-01

    The concept of (meta) modeling combines an intuitive way of formalizing the structure of an application domain with a high expressiveness that makes it suitable for a wide variety of use cases, and it has therefore become an integral part of many areas in computer science. While the definition of modeling languages through the use of meta models, e.g. in the Unified Modeling Language (UML), is a well-understood process, their validation and the extraction of behavioral information is still a challenge. In this paper we present a novel approach for dynamic model analysis along with several fields of application. Examining the propagation of information along the edges and nodes of the model graph makes it possible to extend and simplify the definition of semantic constraints in comparison to the capabilities offered by, e.g., the Object Constraint Language. Performing a flow-based analysis also enables the simulation of dynamic behavior, thus providing an "abstract interpretation"-like analysis method for the modeling domain.
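The propagation idea can be illustrated with a minimal fixed-point iteration over a toy model graph. The node names and the "tainted" fact below are invented for illustration and are not from the paper, which targets full (meta) models rather than bare graphs.

```python
def propagate(nodes, edges, init):
    """Minimal fixed-point data-flow analysis over a graph: each node's
    value is the union of its own initial facts and everything flowing
    in along incoming edges; iterate until nothing changes."""
    value = {n: set(init.get(n, ())) for n in nodes}
    changed = True
    while changed:
        changed = False
        for src, dst in edges:
            before = len(value[dst])
            value[dst] |= value[src]       # propagate facts along the edge
            if len(value[dst]) != before:
                changed = True
    return value

nodes = ["A", "B", "C"]
edges = [("A", "B"), ("B", "C"), ("C", "B")]  # note the B<->C cycle
facts = propagate(nodes, edges, {"A": {"tainted"}})
```

Because the loop only stops once a full pass makes no change, cycles in the graph (as in B and C above) are handled naturally, which is the property that makes this style of analysis suitable for simulating dynamic behavior.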

  20. The application of the unified modeling language in object-oriented analysis of healthcare information systems.

    PubMed

    Aggarwal, Vinod

    2002-10-01

    This paper concerns itself with the beneficial effects of the Unified Modeling Language (UML), a nonproprietary object modeling standard, in specifying, visualizing, constructing, documenting, and communicating the model of a healthcare information system from the user's perspective. The author outlines the process of object-oriented analysis (OOA) using the UML and illustrates this with healthcare examples to demonstrate the practicality of application of the UML by healthcare personnel to real-world information system problems. The UML will accelerate advanced uses of object-orientation such as reuse technology, resulting in significantly higher software productivity. The UML is also applicable in the context of a component paradigm that promises to enhance the capabilities of healthcare information systems and simplify their management and maintenance.

  1. High-temperature Superconductivity in Diamond Films - from Fundamentals to Device Applications

    DTIC Science & Technology

    2014-12-20

    film is later removed by acid boiling in nitric acid. The laser cutting process is completely based on CNC machine language. Therefore arbitrary...designed Hall bar shapes and converted them in CNC language. Fig 6. Laser Cutter (Alpha) to create holes in the diamond plates (Oxford Lasers). [5...diamond density is not uniform throughout the plate as it appears lighter on the right side. This could be caused by the plasma being of different

  2. Rapid Prototyping of Application Specific Signal Processors (RASSP)

    DTIC Science & Technology

    1993-12-23

    Compilers 2-9 - Cadre Teamwork 2-13 - CodeCenter (Centerline) 2-15 - dbx/dbxtool (UNIXm) 2-17 - Falcon (Mentor) 2-19 - FrameMaker (Frame Tech) 2-21 - gprof...UNIXm C debuggers Falcon Mentor ECAD Framework FrameMaker Frame Tech Word Processing gcc GNU CIC++ compiler gprof GNU Software profiling tool...organization can put their own documentation on-line using the BOLD Com- poser for Framemaker . " The AMPLE programming language is a C like language used for

  3. Natural Language Processing Technologies in Radiology Research and Clinical Applications.

    PubMed

    Cai, Tianrun; Giannopoulos, Andreas A; Yu, Sheng; Kelil, Tatiana; Ripley, Beth; Kumamaru, Kanako K; Rybicki, Frank J; Mitsouras, Dimitrios

    2016-01-01

    The migration of imaging reports to electronic medical record systems holds great potential in terms of advancing radiology research and practice by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the heterogeneity of how these data are formatted. Indeed, although there is movement toward structured reporting in radiology (ie, hierarchically itemized reporting with use of standardized terminology), the majority of radiology reports remain unstructured and use free-form language. To effectively "mine" these large datasets for hypothesis testing, a robust strategy for extracting the necessary information is needed. Manual extraction of information is a time-consuming and often unmanageable task. "Intelligent" search engines that instead rely on natural language processing (NLP), a computer-based approach to analyzing free-form text or speech, can be used to automate this data mining task. The overall goal of NLP is to translate natural human language into a structured format (ie, a fixed collection of elements), each with a standardized set of choices for its value, that is easily manipulated by computer programs to (among other things) order into subcategories or query for the presence or absence of a finding. The authors review the fundamentals of NLP and describe various techniques that constitute NLP in radiology, along with some key applications. ©RSNA, 2016.
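As a toy illustration of turning free-form report text into a fixed collection of elements, consider a keyword-and-negation matcher. The report snippet and patterns here are hypothetical, and production NLP systems (with section parsing, full negation detection, and terminology mapping) are far more sophisticated than this sketch.

```python
import re

report = "IMPRESSION: No evidence of pulmonary embolism. Small left pleural effusion."

def find_finding(text, finding):
    """Return 'absent' if the finding is negated, 'present' if mentioned,
    None if not mentioned at all (illustrative negation list only)."""
    negated = re.compile(
        r"(no evidence of|without|negative for)\s+[^.]*" + re.escape(finding),
        re.IGNORECASE)
    if negated.search(text):
        return "absent"
    if re.search(re.escape(finding), text, re.IGNORECASE):
        return "present"
    return None

# Map free text onto a structured element with a standardized set of values
structured = {
    "pulmonary embolism": find_finding(report, "pulmonary embolism"),
    "pleural effusion": find_finding(report, "pleural effusion"),
}
```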

  4. Natural Language Processing Technologies in Radiology Research and Clinical Applications

    PubMed Central

    Cai, Tianrun; Giannopoulos, Andreas A.; Yu, Sheng; Kelil, Tatiana; Ripley, Beth; Kumamaru, Kanako K.; Rybicki, Frank J.

    2016-01-01

    The migration of imaging reports to electronic medical record systems holds great potential in terms of advancing radiology research and practice by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the heterogeneity of how these data are formatted. Indeed, although there is movement toward structured reporting in radiology (ie, hierarchically itemized reporting with use of standardized terminology), the majority of radiology reports remain unstructured and use free-form language. To effectively “mine” these large datasets for hypothesis testing, a robust strategy for extracting the necessary information is needed. Manual extraction of information is a time-consuming and often unmanageable task. “Intelligent” search engines that instead rely on natural language processing (NLP), a computer-based approach to analyzing free-form text or speech, can be used to automate this data mining task. The overall goal of NLP is to translate natural human language into a structured format (ie, a fixed collection of elements), each with a standardized set of choices for its value, that is easily manipulated by computer programs to (among other things) order into subcategories or query for the presence or absence of a finding. The authors review the fundamentals of NLP and describe various techniques that constitute NLP in radiology, along with some key applications. ©RSNA, 2016 PMID:26761536

  5. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure for dealing with this problem has been to choose a subset of the original sample which seems to best represent each language, the selection being made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology to estimate a model which represents the main law for each language. Our findings agree with the linguistic conjecture related to the rhythm of the languages included in our dataset.
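The relative-entropy comparison at the heart of the selection strategy can be sketched with a plug-in estimator on symbol frequencies. This toy version with add-one smoothing ignores the (variable length) Markov structure the paper actually models; it only shows how samples from the same law yield small divergences while contaminated samples stand out.

```python
import math
from collections import Counter

def empirical_kl(sample_p, sample_q, alphabet):
    """Plug-in estimate of D(P||Q) from two symbol sequences, with
    add-one smoothing so every symbol has nonzero probability."""
    def dist(s):
        c = Counter(s)
        total = len(s) + len(alphabet)
        return {a: (c[a] + 1) / total for a in alphabet}
    p, q = dist(sample_p), dist(sample_q)
    return sum(p[a] * math.log(p[a] / q[a]) for a in alphabet)

# Samples from the same alternating law diverge little; a deviant
# sample is flagged by a larger relative entropy.
same = empirical_kl("ababab", "abab", "ab")
diff = empirical_kl("ababab", "aaaa", "ab")
```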

  6. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    NASA Astrophysics Data System (ADS)

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

    This paper proposes an integrated framework for the high level design of high performance implementations of signal processing algorithms on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.

  7. What Artificial Intelligence Is Doing for Training.

    ERIC Educational Resources Information Center

    Kirrane, Peter R.; Kirrane, Diane E.

    1989-01-01

    Discusses the three areas of research and application of artificial intelligence: (1) robotics, (2) natural language processing, and (3) knowledge-based or expert systems. Focuses on what expert systems can do, especially in the area of training. (JOW)

  8. Workshop on using natural language processing applications for enhancing clinical decision making: an executive summary

    PubMed Central

    Pai, Vinay M; Rodgers, Mary; Conroy, Richard; Luo, James; Zhou, Ruixia; Seto, Belinda

    2014-01-01

    In April 2012, the National Institutes of Health organized a two-day workshop entitled ‘Natural Language Processing: State of the Art, Future Directions and Applications for Enhancing Clinical Decision-Making’ (NLP-CDS). This report is a summary of the discussions during the second day of the workshop. Collectively, the workshop presenters and participants emphasized the need for unstructured clinical notes to be included in the decision making workflow and the need for individualized longitudinal data tracking. The workshop also discussed the need to: (1) combine evidence-based literature and patient records with machine-learning and prediction models; (2) provide trusted and reproducible clinical advice; (3) prioritize evidence and test results; and (4) engage healthcare professionals, caregivers, and patients. The overall consensus of the NLP-CDS workshop was that there are promising opportunities for NLP and CDS to deliver cognitive support for healthcare professionals, caregivers, and patients. PMID:23921193

  9. Proposed Framework for the Evaluation of Standalone Corpora Processing Systems: An Application to Arabic Corpora

    PubMed Central

    Al-Thubaity, Abdulmohsen; Alqifari, Reem

    2014-01-01

    Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into consideration their suitability for Arabic corpora. While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance were substantially different with respect to support for the Arabic language and N-gram profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help the developers of the evaluated systems to enhance their systems, and will provide developers of new corpora processing systems with a reference framework. PMID:25610910

  10. Proposed framework for the evaluation of standalone corpora processing systems: an application to Arabic corpora.

    PubMed

    Al-Thubaity, Abdulmohsen; Al-Khalifa, Hend; Alqifari, Reem; Almazrua, Manal

    2014-01-01

    Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into consideration their suitability for Arabic corpora. While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance were substantially different with respect to support for the Arabic language and N-gram profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help the developers of the evaluated systems to enhance their systems, and will provide developers of new corpora processing systems with a reference framework.
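N-gram profile generation, one of the features on which the evaluated systems differed, can be sketched as a character n-gram counter; real corpora processing systems additionally handle tokenization, word-level n-grams, and Arabic-specific normalization, none of which this toy shows.

```python
from collections import Counter

def ngram_profile(text, n=2):
    """Character n-gram frequency profile of the kind corpora
    processing tools generate (simplified illustration)."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    return Counter(grams)

profile = ngram_profile("natural language processing", n=2)
top = profile.most_common(3)  # most frequent bigrams in the text
```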

  11. A Simulation Model Articulation of the REA Ontology

    NASA Astrophysics Data System (ADS)

    Laurier, Wim; Poels, Geert

    This paper demonstrates how the REA enterprise ontology can be used to construct simulation models for business processes, value chains and collaboration spaces in supply chains. These models support various high-level and operational management simulation applications, e.g. the analysis of enterprise sustainability and day-to-day planning. First, the basic constructs of the REA ontology and the ExSpect modelling language for simulation are introduced. Second, collaboration space, value chain and business process models and their conceptual dependencies are shown, using the ExSpect language. Third, an exhibit demonstrates the use of value chain models in predicting the financial performance of an enterprise.

  12. Morpheme matching based text tokenization for a scarce resourced language.

    PubMed

    Rehman, Zobia; Anwar, Waqas; Bajwa, Usama Ijaz; Xuan, Wang; Chaoying, Zhou

    2013-01-01

    Text tokenization is a fundamental pre-processing step for almost all information processing applications. This task is nontrivial for scarce-resourced languages such as Urdu, as there is inconsistent use of space between words. In this paper a morpheme matching based approach is proposed for Urdu text tokenization, along with some other algorithms to solve the additional issues of boundary detection of compound words, affixation, reduplication, names, and abbreviations. This study resulted in 97.28% precision, 93.71% recall, and a 95.46% F1-measure while tokenizing a corpus of 57,000 words using a morpheme list with 6,400 entries.
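The core morpheme-matching idea can be sketched as greedy longest-match lookup against a morpheme list. The Latin-script example below stands in for a real Urdu morpheme list, and the paper's additional algorithms for compounds, affixation, reduplication, names, and abbreviations are omitted.

```python
def tokenize(text, morphemes):
    """Greedy longest-match tokenization against a morpheme set
    (simplified illustration of the matching idea)."""
    text = text.replace(" ", "")  # spaces are unreliable word boundaries in Urdu
    tokens, i = [], 0
    max_len = max(map(len, morphemes))
    while i < len(text):
        for L in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + L] in morphemes:
                tokens.append(text[i:i + L])
                i += L
                break
        else:
            tokens.append(text[i])  # unknown character: emit as-is
            i += 1
    return tokens

# Toy Latin-script stand-in for a real Urdu morpheme list
morphemes = {"un", "break", "able", "ship"}
tokens = tokenize("unbreakable", morphemes)
```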

  13. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first class citizens. SciQL lowers the entrance fee of adopting relational DBMS (RDBMS) in scientific domains, because it includes functionality often only found in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope. In particular, how the LOFAR light-curve database of all detected sources can be constructed, by correlating sources across the spatial, frequency, time and polarisation domains.

  14. Morpheme Matching Based Text Tokenization for a Scarce Resourced Language

    PubMed Central

    Rehman, Zobia; Anwar, Waqas; Bajwa, Usama Ijaz; Xuan, Wang; Chaoying, Zhou

    2013-01-01

    Text tokenization is a fundamental pre-processing step for almost all information processing applications. This task is nontrivial for scarce-resourced languages such as Urdu, as there is inconsistent use of space between words. In this paper a morpheme matching based approach is proposed for Urdu text tokenization, along with some other algorithms to solve the additional issues of boundary detection of compound words, affixation, reduplication, names, and abbreviations. This study resulted in 97.28% precision, 93.71% recall, and a 95.46% F1-measure while tokenizing a corpus of 57,000 words using a morpheme list with 6,400 entries. PMID:23990871

  15. Channel Access in Erlang

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicklaus, Dennis J.

    2013-10-13

    We have developed an Erlang language implementation of the Channel Access protocol. Included are low-level functions for encoding and decoding Channel Access protocol network packets as well as higher level functions for monitoring or setting EPICS process variables. This provides access to EPICS process variables for the Fermilab Acnet control system via our Erlang-based front-end architecture without having to interface to C/C++ programs and libraries. Erlang is a functional programming language originally developed for real-time telecommunications applications. Its network programming features and list management functions make it particularly well-suited for the task of managing multiple Channel Access circuits and PV monitors.

  16. ObsPy: A Python Toolbox for Seismology - Recent Developments and Applications

    NASA Astrophysics Data System (ADS)

    Megies, T.; Krischer, L.; Barsch, R.; Sales de Andrade, E.; Beyreuther, M.

    2014-12-01

    ObsPy (http://www.obspy.org) is a community-driven, open-source project dedicated to building a bridge for seismology into the scientific Python ecosystem. It offers a) read and write support for essentially all commonly used waveform, station, and event metadata file formats with a unified interface, b) a comprehensive signal processing toolbox tuned to the needs of seismologists, c) integrated access to all large data centers, web services and databases, and d) convenient wrappers to legacy codes like libtau and evalresp. Python, currently the most popular language for teaching introductory computer science courses at top-ranked U.S. departments, is a full-blown programming language with the flexibility of an interactive scripting language. Its extensive standard library and large variety of freely available high quality scientific modules cover most needs in developing scientific processing workflows. Together with packages like NumPy, SciPy, Matplotlib, IPython, Pandas, lxml, and PyQt, ObsPy enables the construction of complete workflows in Python. These vary from reading locally stored data or requesting data from one or more different data centers, through signal analysis and data processing, to visualizations in GUI and web applications, output of modified/derived data, and the creation of publication-quality figures. ObsPy enjoys a large world-wide rate of adoption in the community. Applications successfully using it include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions, to name a few examples. All functionality is extensively documented, and the ObsPy tutorial and gallery give a good impression of the wide range of possible use cases. We will present the basic features of ObsPy, new developments and applications, and a roadmap for the near future, and discuss the sustainability of our open-source development model.

  17. LLL 8080 BASIC-II interpreter user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGoldrick, P.R.; Dickinson, J.; Allison, T.G.

    1978-04-03

    Scientists are finding increased applications for microprocessors as process controllers in their experiments. However, while microprocessors are small and inexpensive, they are difficult to program in machine or assembly language. A high-level language is needed to enable scientists to develop their own microcomputer programs for their experiments on location. Recognizing this need, LLL contracted to have such a language developed. This report describes the resulting LLL BASIC interpreter, which operates with LLL's 8080-based MCS-8 microcomputer system. All numerical operations are done using Advanced Micro Devices' Am9511 arithmetic processor chip or, optionally, a software simulation of that chip. 1 figure.

  18. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)

    PubMed Central

    Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in predictive model markup language (PMML). PMML is an extensible-markup-language (XML) -based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed to various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125

  19. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    PubMed

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard used to represent data-mining and predictive analytics models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms, which are widely used for constructing predictive models that take the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability to represent complex input-output relationships without predefining a set of basis functions, and to predict a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
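
    The two quantities the PMML 4.3 GPR model is designed to carry, a point prediction plus its uncertainty, can be computed from scratch. The following sketch implements GPR predictive mean and variance with an RBF kernel in pure Python; it illustrates the underlying mathematics only and is not a PMML serialization.

```python
# Minimal GPR predictive mean/variance with an RBF kernel, in pure Python.
# Sketches the estimate-plus-uncertainty output that PMML 4.3's GPR model
# and confidence bounds encode; not a PMML implementation.
import math

def rbf(a, b, lengthscale=1.0, variance=1.0):
    return variance * math.exp(-0.5 * ((a - b) / lengthscale) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gpr_predict(X, y, x_star, noise=1e-6):
    """Posterior mean and variance of a GP at x_star given data (X, y)."""
    n = len(X)
    K = [[rbf(X[i], X[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [rbf(x, x_star) for x in X]
    alpha = solve(K, y)                      # K^{-1} y
    v = solve(K, k_star)                     # K^{-1} k_*
    mean = sum(k * a for k, a in zip(k_star, alpha))
    var = rbf(x_star, x_star) - sum(k * vi for k, vi in zip(k_star, v))
    return mean, var
```

    Near the training data the predictive variance collapses toward the noise level; far from the data it returns to the prior variance, which is exactly the behavior the confidence bounds in PMML 4.3 are meant to convey.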

  20. Overview of the DART project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, K.R.; Hansen, F.R.; Napolitano, L.M.

    1992-01-01

    DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language (C or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) groundstation.

  2. Natural Language Processing in Radiology: A Systematic Review.

    PubMed

    Pons, Ewoud; Braun, Loes M M; Hunink, M G Myriam; Kors, Jan A

    2016-05-01

    Radiological reporting has generated large quantities of digital content within the electronic health record, which is potentially a valuable source of information for improving clinical care and supporting research. Although radiology reports are stored for communication and documentation of diagnostic imaging, harnessing their potential requires efficient and automated information extraction: they exist mainly as free-text clinical narrative, from which it is a major challenge to obtain structured data. Natural language processing (NLP) provides techniques that aid the conversion of text into a structured representation, and thus enables computers to derive meaning from human (ie, natural language) input. Used on radiology reports, NLP techniques enable automatic identification and extraction of information. By exploring the various purposes for their use, this review examines how radiology benefits from NLP. A systematic literature search identified 67 relevant publications describing NLP methods that support practical applications in radiology. This review takes a close look at the individual studies in terms of tasks (ie, the extracted information), the NLP methodology and tools used, and their application purpose and performance results. Additionally, limitations, future challenges, and requirements for advancing NLP in radiology will be discussed. © RSNA, 2016. Online supplemental material is available for this article.
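
    One simple family of methods covered by such reviews is rule-based extraction. The sketch below pulls measured findings and naively flagged negations out of free text; the report sentence and patterns are invented for illustration, and real clinical NLP systems are far richer.

```python
# A minimal rule-based sketch of extracting structured data from free-text
# radiology narrative: measured findings plus naive negation detection.
# The example report text and patterns are invented for illustration.
import re

MEASUREMENT = re.compile(
    r"(?P<size>\d+(?:\.\d+)?)\s*(?P<unit>mm|cm)\s+(?P<finding>[a-z]+)",
    re.IGNORECASE,
)
NEGATION = re.compile(
    r"\bno (?:evidence of|signs? of)\s+(?P<finding>[a-z]+)",
    re.IGNORECASE,
)

def extract_findings(report):
    """Return (finding, size, unit) tuples and a list of negated findings."""
    found = [(m["finding"].lower(), float(m["size"]), m["unit"].lower())
             for m in MEASUREMENT.finditer(report)]
    negated = [m["finding"].lower() for m in NEGATION.finditer(report)]
    return found, negated

report = ("There is a 6 mm nodule in the right upper lobe. "
          "No evidence of effusion.")
```

    Even this toy pipeline shows why structured output matters: "6 mm nodule" becomes a queryable record, and the negated "effusion" is kept apart from positive findings.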

  3. Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application.

    PubMed

    Rudner, Mary

    2018-01-01

    Linguistic manual gestures are the basis of sign languages used by deaf individuals. Working memory and language processing are intimately connected and thus when language is gesture-based, it is important to understand related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. But above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings relating to cross-modal plasticity in deaf individuals. It suggests that in linguistic gesture-based working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants as well as other clinical populations.

  4. Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application

    PubMed Central

    Rudner, Mary

    2018-01-01

    Linguistic manual gestures are the basis of sign languages used by deaf individuals. Working memory and language processing are intimately connected and thus when language is gesture-based, it is important to understand related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. But above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings relating to cross-modal plasticity in deaf individuals. It suggests that in linguistic gesture-based working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants as well as other clinical populations. PMID:29867655

  5. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    PubMed

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the Extensible Markup Language (XML) that facilitates the integration in current information systems or coding software, taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.
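
    The general idea, a hierarchical classification encoded in XML and traversed with standard tools, can be sketched as follows. The element and attribute names here are invented for illustration and do not follow the actual CEN/TC 251 schema.

```python
# A sketch of an XML encoding of a classification fragment and a code lookup
# over it, using only Python's standard library. Element/attribute names are
# invented; they are not the CEN/TC 251 standard representation.
import xml.etree.ElementTree as ET

ICD_FRAGMENT = """
<classification system="ICD-10" language="en">
  <chapter code="X" title="Diseases of the respiratory system">
    <block code="J40-J47" title="Chronic lower respiratory diseases">
      <category code="J45" title="Asthma"/>
      <category code="J44" title="Other chronic obstructive pulmonary disease"/>
    </block>
  </chapter>
</classification>
"""

def lookup(xml_text, code):
    """Find a category by code; return its title and its ancestors' titles."""
    root = ET.fromstring(xml_text)
    for chapter in root.iter("chapter"):
        for block in chapter.iter("block"):
            for cat in block.iter("category"):
                if cat.get("code") == code:
                    return {
                        "title": cat.get("title"),
                        "block": block.get("title"),
                        "chapter": chapter.get("title"),
                    }
    return None
```

    Because the hierarchy is explicit in the markup, the same document can drive coding software, multilingual variants (via the `language` attribute), and version comparison.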

  6. System and method for deriving a process-based specification

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael Gerard (Inventor); Rouff, Christopher A. (Inventor); Rash, James Larry (Inventor)

    2009-01-01

    A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.

  7. Reconciliation of ontology and terminology to cope with linguistics.

    PubMed

    Baud, Robert H; Ceusters, Werner; Ruch, Patrick; Rassinoux, Anne-Marie; Lovis, Christian; Geissbühler, Antoine

    2007-01-01

    To discuss the relationships between ontologies, terminologies and language in the context of Natural Language Processing (NLP) applications in order to show the negative consequences of confusing them. The viewpoints of the terminologist and (computational) linguist are developed separately, and then compared, leading to the presentation of reconciliation among these points of view, with consideration of the role of the ontologist. In order to encourage appropriate usage of terminologies, guidelines are presented advocating the simultaneous publication of pragmatic vocabularies supported by terminological material based on adequate ontological analysis. Ontologies, terminologies and natural languages each have their own purpose. Ontologies support machine understanding, natural languages support human communication, and terminologies should form the bridge between them. Therefore, future terminology standards should be based on sound ontology and do justice to the diversities in natural languages. Moreover, they should support local vocabularies, in order to be easily adaptable to local needs and practices.

  8. Functional description of a command and control language tutor

    NASA Technical Reports Server (NTRS)

    Elke, David R.; Seamster, Thomas L.; Truszkowski, Walter

    1990-01-01

    The status of an ongoing project to explore the application of Intelligent Tutoring System (ITS) technology to NASA command and control languages is described. The primary objective of the current phase of the project is to develop a user interface for an ITS to assist NASA control center personnel in learning Systems Test and Operations Language (STOL). Although this ITS will be developed for Gamma Ray Observatory operators, it will be designed with sufficient flexibility so that its modules may serve as an ITS for other control languages such as the User Interface Language (UIL). The focus of this phase is to develop at least one other form of STOL representation to complement the operational STOL interface. Such an alternative representation would be adaptively employed during the tutoring session to facilitate the learning process. This is a key feature of this ITS which distinguishes it from a simulator that is only capable of representing the operational environment.

  9. The Application of Timing in Therapy of Children and Adults with Language Disorders

    PubMed Central

    Szelag, Elzbieta; Dacewicz, Anna; Szymaszek, Aneta; Wolak, Tomasz; Senderski, Andrzej; Domitrz, Izabela; Oron, Anna

    2015-01-01

    A growing body of evidence has revealed a link between temporal information processing (TIP) and language. Both literature data and results of our studies indicate an overlap between deficient TIP and disordered language, pointing to the existence of an association between these two functions. Against this background, the new approach is to apply such knowledge in the therapy of patients suffering from language disorders. In two studies we asked the following questions: (1) can temporal training reduce language deficits in aphasic patients (Study 1) or in children with specific language impairment (SLI, Study 2)? (2) can such training also ameliorate other cognitive functions? Each of these studies employed pre-training assessment, training application, and post-training and follow-up assessment. In Study 1 we tested 28 patients suffering from post-stroke aphasia. They were assigned either to temporal training in the millisecond range (Group A, n = 15) or to non-temporal training (Group B, n = 13). Following the training we found, only in Group A, improved TIP, accompanied by a transfer of improvement to language and working memory functions. In Study 2 we tested 32 children aged from 5 to 8 years affected by SLI, who were classified into temporal training (Group A, n = 17) or non-temporal training (Group B, n = 15). Group A underwent the multileveled audio-visual computer training Dr. Neuronowski®, recently developed in our laboratory. Group B performed computer speech therapy exercises extended by playing computer games. As in Study 1, in Group A we found significant improvements in TIP, auditory comprehension and working memory. These results indicate benefits of temporal training for the amelioration of language and other cognitive functions in both aphasic patients and children with SLI. These novel, powerful therapy tools show promise for future clinical applications. PMID:26617547

  10. Development of C++ Application Program for Solving Quadratic Equation in Elementary School in Nigeria

    ERIC Educational Resources Information Center

    Bandele, Samuel Oye; Adekunle, Adeyemi Suraju

    2015-01-01

    The study was conducted to design, develop and test a C++ application program, CAP-QUAD, for solving quadratic equations in elementary school in Nigeria. The package was developed in C++, an object-oriented programming language; another program utilized during the development process was the Dev-C++ compiler, which was used for…
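
    The core computation such a package implements is the quadratic formula with a discriminant test. A sketch (in Python rather than the package's C++, since the source is not reproduced here):

```python
# Solve ax^2 + bx + c = 0 via the discriminant, covering the two-root,
# repeated-root, and complex-conjugate-root cases. An illustrative sketch,
# not the CAP-QUAD source code.
import cmath
import math

def solve_quadratic(a, b, c):
    """Return the two roots of ax^2 + bx + c = 0 (a must be nonzero)."""
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    disc = b * b - 4 * a * c
    if disc >= 0:
        root = math.sqrt(disc)       # real roots (equal when disc == 0)
    else:
        root = cmath.sqrt(disc)      # complex conjugate roots
    return (-b + root) / (2 * a), (-b - root) / (2 * a)
```

    For example, `solve_quadratic(1, -3, 2)` returns the roots 2.0 and 1.0 of x² − 3x + 2.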

  11. webpic: A flexible web application for collecting distance and count measurements from images

    PubMed Central

    2018-01-01

    Despite increasing ability to store and analyze large amounts of data for organismal and ecological studies, the process of collecting distance and count measurements from images has largely remained time consuming and error-prone, particularly for tasks for which automation is difficult or impossible. Improving the efficiency of these tasks, which allows for more high quality data to be collected in a shorter amount of time, is therefore a high priority. The open-source web application, webpic, implements common web languages and widely available libraries and productivity apps to streamline the process of collecting distance and count measurements from images. In this paper, I introduce the framework of webpic and demonstrate one readily available feature of this application, linear measurements, using fossil leaf specimens. This application fills the gap between workflows accomplishable by individuals through existing software and those accomplishable by large, unmoderated crowds. It demonstrates that flexible web languages can be used to streamline time-intensive research tasks without the use of specialized equipment or proprietary software and highlights the potential for web resources to facilitate data collection in research tasks and outreach activities with improved efficiency. PMID:29608592

  12. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
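
    A representative algorithm from the library's toolbox is Otsu's threshold (exposed in scikit-image as `skimage.filters.threshold_otsu`). Sketched here in pure Python so it runs without the library installed:

```python
# Otsu's method, one of the classic algorithms scikit-image implements:
# choose the gray level that maximizes between-class variance, splitting an
# image's pixels into background and foreground.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])            # background pixel count
        w1 = total - w0               # foreground pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

    The library version is vectorized over NumPy arrays; this quadratic-time loop is only meant to make the algorithm itself visible.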

  13. Adaptation of a Control Center Development Environment for Industrial Process Control

    NASA Technical Reports Server (NTRS)

    Killough, Ronnie L.; Malik, James M.

    1994-01-01

    In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.

  14. Interactive Vulnerability Analysis Enhancement Results

    DTIC Science & Technology

    2012-12-01

    from Java EE web-based applications to other non-web-based Java programs. Technology developed in this effort should be generally applicable to other...Generating a rule is a 2-click process that requires no input from the user. • Task 3: Added support for non-Java EE applications Aspect's...investigated a variety of Java-based technologies and how IAST can support them. We were successful in adding support for Scala, a popular new language, and

  15. The Sizing and Optimization Language, (SOL): Computer language for design problems

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.
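
    The kind of problem an OPTIMIZE description states, minimize an objective over bounded design variables, can be mirrored in plain code. In this sketch a golden-section search stands in for the ADS optimization software; the objective is an invented one-variable example.

```python
# Minimize an objective over one bounded design variable, the shape of
# problem a SOL OPTIMIZE description declares. Golden-section search here
# is an illustrative stand-in for the ADS algorithms SOL actually drives.
import math

def golden_section_minimize(f, lo, hi, tol=1e-6):
    """Minimize a unimodal function f on [lo, hi] by golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# e.g. minimize the mass-like objective (x - 3)^2 + 2 subject to 0 <= x <= 10
x_best = golden_section_minimize(lambda x: (x - 3) ** 2 + 2, 0.0, 10.0)
```

    SOL's contribution was letting designers write the objective and bounds in near-mathematical notation and compile-check them, rather than hand-coding a driver like this.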

  16. Modern Technology in Foreign Language Education: Applications and Projects. The ACTFL Foreign Language Education Series.

    ERIC Educational Resources Information Center

    Smith, Wm. Flint, Ed.

    This book, the second of two volumes devoted to instructional media in second language instruction, focuses on specific applications of advanced technology in the classroom. The first part, "Applications," contains seven chapters. They are: "The Language Laboratory in the Computer Age" (S. E. K. Otto); "Television…

  17. Aspiring to Unintended Consequences of Natural Language Processing: A Review of Recent Developments in Clinical and Consumer-Generated Text Processing

    PubMed Central

    Elhadad, N.

    2016-01-01

    Summary Objectives This paper reviews work over the past two years in Natural Language Processing (NLP) applied to clinical and consumer-generated texts. Methods We included any application or methodological publication that leverages text to facilitate healthcare and address the health-related needs of consumers and populations. Results Many important developments in clinical text processing, both foundational and task-oriented, were addressed in community-wide evaluations and discussed in corresponding special issues that are referenced in this review. These focused issues and in-depth reviews of several other active research areas, such as pharmacovigilance and summarization, allowed us to discuss in greater depth disease modeling and predictive analytics using clinical texts, and text analysis in social media for healthcare quality assessment, trends towards online interventions based on rapid analysis of health-related posts, and consumer health question answering, among other issues. Conclusions Our analysis shows that although clinical NLP continues to advance towards practical applications and more NLP methods are used in large-scale live health information applications, more needs to be done to make NLP use in clinical applications a routine widespread reality. Progress in clinical NLP is mirrored by developments in social media text analysis: the research is moving from capturing trends to addressing individual health-related posts, thus showing potential to become a tool for precision medicine and a valuable addition to the standard healthcare quality evaluation tools. PMID:27830255

  18. Influence of musical expertise and musical training on pitch processing in music and language.

    PubMed

    Besson, Mireille; Schön, Daniele; Moreno, Sylvain; Santos, Andréia; Magne, Cyrille

    2007-01-01

    We review a series of experiments aimed at studying pitch processing in music and speech. These studies were conducted with musician and non-musician adults and children. We found that musical expertise improved pitch processing not only in music but also in speech. Demonstrating transfer of training between music and language has interesting applications for second language learning. We also addressed the issue of whether the positive effects of musical expertise are linked with specific predispositions for music or with extensive musical practice. Results of longitudinal studies argue for the latter. Finally, we also examined pitch processing in dyslexic children and found that they had difficulties discriminating strong pitch changes that are easily discriminated by normal readers. These results argue for a strong link between basic auditory perception abilities and reading abilities. We used conjointly the behavioral method (reaction times and error rates) and the electrophysiological method (recording of the changes in brain electrical activity time-locked to stimulus presentation, Event-Related brain Potentials or ERPs). A set of common processes may be responsible for pitch processing in music and in speech, and these processes are shaped by musical practice. These data add evidence in favor of brain plasticity and open interesting perspectives for the remediation of dyslexia using musical training.

  19. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    PubMed

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties to get locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates the access to registered tools by providing front-end and back-end web services. Programmers can install applications in HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running in simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requisitions, and automatically creates a web page that disposes the registered applications and clients. Bioinformatics open web services registered applications can be accessed from virtually any programming language through web services, or using standard java clients. The back-end can run in HPC clusters, allowing bioinformaticians to remotely run high-processing demand applications directly from their machines.
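
    The interaction pattern the paper describes, a front-end service that accepts jobs and a back-end worker that polls for them and posts results, can be sketched with an in-memory broker. The class, method names, and example tool below are invented stand-ins for the actual BOWS web services.

```python
# A sketch of the BOWS pattern: clients submit jobs through a front-end
# service; an HPC-side worker polls the back-end service for pending jobs
# and posts results. This in-memory broker stands in for the web services;
# all names here are invented for illustration.
import itertools

class JobBroker:
    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}            # job_id -> (tool, parameters)
        self._results = {}            # job_id -> result

    # --- front-end service (consumed by the user's client) ---
    def submit(self, tool, params):
        job_id = next(self._ids)
        self._pending[job_id] = (tool, params)
        return job_id

    def read_result(self, job_id):
        return self._results.get(job_id)   # None until the worker posts

    # --- back-end service (consumed by the HPC worker) ---
    def fetch_job(self):
        if not self._pending:
            return None
        job_id = next(iter(self._pending))
        return job_id, self._pending.pop(job_id)

    def post_result(self, job_id, result):
        self._results[job_id] = result

broker = JobBroker()
jid = broker.submit("reverse-complement", {"seq": "ATGC"})
_, (tool, params) = broker.fetch_job()     # the worker picks up the job
rc = params["seq"][::-1].translate(str.maketrans("ATGC", "TACG"))
broker.post_result(jid, rc)
```

    Decoupling submission from execution this way is what lets the compute side run in any language on the cluster while clients stay simple.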

  20. Creating Web-Based Scientific Applications Using Java Servlets

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Arnold, James O. (Technical Monitor)

    2001-01-01

    There are many advantages to developing web-based scientific applications. Any number of people can access the application concurrently. The application can be accessed from a remote location. The application becomes essentially platform-independent because it can be run from any machine that has internet access and can run a web browser. Maintenance and upgrades to the application are simplified since only one copy of the application exists in a centralized location. This paper details the creation of web-based applications using Java servlets. Java is a powerful, versatile programming language that is well suited to developing web-based programs. A Java servlet provides the interface between the central server and the remote client machines. The servlet accepts input data from the client, runs the application on the server, and sends the output back to the client machine. The type of servlet that supports the HTTP protocol will be discussed in depth. Among the topics the paper will discuss are how to write an HTTP servlet, how the servlet can run applications written in Java and other languages, and how to set up a Java web server. The entire process will be demonstrated by building a web-based application to compute stagnation point heat transfer.
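
    The servlet pattern described here, accept input from the client, run the computation on the server, send the output back, is sketched below with Python's standard `http.server` (the paper's Java source is not reproduced in this record). The `/compute` endpoint, its `x` parameter, and the squaring computation are invented for illustration.

```python
# The accept-input / compute / return-output servlet pattern, sketched with
# Python's standard library. Endpoint, parameter, and computation invented.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class ComputeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        try:
            x = float(query["x"][0])          # input data from the client
        except (KeyError, ValueError):
            self.send_response(400)
            self.end_headers()
            return
        body = json.dumps({"square": x * x}).encode()   # run the computation
        self.send_response(200)                          # send output back
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):             # keep the example quiet
        pass

def start_server():
    """Start the service on a free local port; returns the HTTPServer."""
    server = HTTPServer(("127.0.0.1", 0), ComputeHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def fetch_square(port, x):
    """Client side: call the service and decode its JSON reply."""
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/compute?x={x}") as r:
        return json.loads(r.read())["square"]
```

    As the abstract notes, the client needs only a browser or HTTP library; the single server-side copy of the application is what makes maintenance and upgrades simple.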

  1. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have actively been developed based on the achievement of advanced high-speed communication networks, computer processing technologies, and digital contents-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager' which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and performs a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval. Retrieval can effectively be performed by cooperation of processing among multiple domains. Communication language and protocols are also defined in the system; these are used in every action for communications in the system. A language interpreter in each machine translates the communication language into an internal language used in that machine. Using the language interpreter, such internal modules as the DBMS and user interface modules can freely be selected. A concept of 'content-set' is also introduced. A content-set is defined as a package of contents that are related to each other, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents, referring to data indicating the relation of the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built. The results of this experiment indicate that the proposed system can effectively retrieve the objective contents under the control of a number of distributed domains. The results also indicate that the system can effectively work even if the system becomes large.
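
    The retrieval-manager idea, one logical database in front of several domain databases, can be sketched as a fan-out-and-merge. The domain names and records below are invented for illustration.

```python
# A sketch of the paper's 'retrieval manager': the user queries one logical
# database while the manager fans the query out to domain databases and
# merges the results. Domains and records are invented for illustration.
class DomainDatabase:
    def __init__(self, name, records):
        self.name = name
        self.records = records        # list of dicts describing contents

    def search(self, keyword):
        return [r for r in self.records if keyword in r["title"].lower()]

class RetrievalManager:
    """Presents a set of distributed domains as one logical database."""
    def __init__(self, domains):
        self.domains = domains

    def search(self, keyword):
        merged = []
        for domain in self.domains:
            for record in domain.search(keyword.lower()):
                merged.append({**record, "domain": domain.name})
        return merged

paintings = DomainDatabase("paintings", [{"title": "Portrait study"}])
sculpture = DomainDatabase("sculpture", [{"title": "Bronze portrait head"}])
manager = RetrievalManager([paintings, sculpture])
```

    In the paper's design the fan-out also involves per-domain directory data and a communication protocol; this sketch shows only the merging that makes the domains look like one database.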

  2. Analysis and Development of a Web-Enabled Planning and Scheduling Database Application

    DTIC Science & Technology

    2013-09-01

    establishes an entity-relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for...development, develop, design, process, re-engineering, reengineering, MySQL, structured query language, SQL, myPHPadmin. 15. NUMBER OF PAGES 107 16...relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for the population of
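
    The kind of entity-relationship design such a thesis turns into an operable database can be expressed as SQL DDL. In this sketch SQLite stands in for MySQL so the example is self-contained; the table and column names are invented, not the thesis schema.

```python
# An illustrative planning/scheduling schema: tasks, resources, and an
# assignment table resolving their many-to-many relationship. SQLite stands
# in for MySQL here; names are invented, not the thesis's actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE task (
        task_id   INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        due_date  TEXT
    );
    CREATE TABLE resource (
        resource_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE assignment (              -- resolves the many-to-many link
        task_id     INTEGER REFERENCES task(task_id),
        resource_id INTEGER REFERENCES resource(resource_id),
        PRIMARY KEY (task_id, resource_id)
    );
""")
conn.execute("INSERT INTO task VALUES (1, 'Draft schedule', '2013-09-30')")
conn.execute("INSERT INTO resource VALUES (1, 'Planner A')")
conn.execute("INSERT INTO assignment VALUES (1, 1)")

rows = conn.execute("""
    SELECT t.name, r.name FROM task t
    JOIN assignment a ON a.task_id = t.task_id
    JOIN resource r  ON r.resource_id = a.resource_id
""").fetchall()
```

    A web front end (the thesis uses PHP with phpMyAdmin for administration) would issue exactly this kind of join to display who is assigned to what.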

  3. Creating affordable Internet map server applications for regional scale applications.

    PubMed

    Lembo, Arthur J; Wagenet, Linda P; Schusler, Tania; DeGloria, Stephen D

    2007-12-01

    This paper presents an overview and process for developing an Internet Map Server (IMS) application for a local volunteer watershed group using an Internal Internet Map Server (IIMS) strategy. The paper illustrates that modern GIS architectures utilizing an internal Internet map server coupled with a spatial SQL command language allow for rapid development of IMS applications. The implication is that powerful IMS applications can be rapidly and affordably developed for volunteer organizations that lack significant funds or a full-time information technology staff.

  4. Software life cycle methodologies and environments

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest

    1991-01-01

    Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology in the form of environments, such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an Intelligent User Interface for cost avoidance in setting up operational computer runs, a Framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and in the form of methodologies, such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common-sense reasoning and for solving expert system problems when only approximate truths are known.

  5. Formal ontology for natural language processing and the integration of biomedical databases.

    PubMed

    Simon, Jonathan; Dos Santos, Mariana; Fielding, James; Smith, Barry

    2006-01-01

    The central hypothesis underlying this communication is that the methodology and conceptual rigor of a philosophically inspired formal ontology can bring significant benefits in the development and maintenance of application ontologies [A. Flett, M. Dos Santos, W. Ceusters, Some Ontology Engineering Procedures and their Supporting Technologies, EKAW2002, 2003]. This hypothesis has been tested in the collaboration between Language and Computing (L&C), a company specializing in software for supporting natural language processing especially in the medical field, and the Institute for Formal Ontology and Medical Information Science (IFOMIS), an academic research institution concerned with the theoretical foundations of ontology. In the course of this collaboration L&C's ontology, LinKBase, which is designed to integrate and support reasoning across a plurality of external databases, has been subjected to a thorough auditing on the basis of the principles underlying IFOMIS's Basic Formal Ontology (BFO) [B. Smith, Basic Formal Ontology, 2002. http://ontology.buffalo.edu/bfo]. The goal is to transform a large terminology-based ontology into one with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase are standardized in the framework of first-order logic. In this paper we describe how this principles-based standardization has led to a greater degree of internal coherence of the LinKBase structure, and how it has facilitated the construction of mappings between external databases using LinKBase as translation hub. We argue that the collaboration here described represents a new phase in the quest to solve the so-called "Tower of Babel" problem of ontology integration [F. Montayne, J. Flanagan, Formal Ontology: The Foundation for Natural Language Processing, 2003. http://www.landcglobal.com/].

  6. GeoGML - a Mark-up Language for 4-dimensional geomorphic objects and processes

    NASA Astrophysics Data System (ADS)

    Löwner, M.-O.

    2009-04-01

    We developed a use-oriented, GML3-based data model that enables researchers to share 4-dimensional information about landforms and their process-related interaction. Using the Unified Modelling Language, it is implemented as a GML3-based application schema available on the Internet. As the science of the land's surface, geomorphology investigates landforms, their change, and the processes causing this change. The main problem in comparing research results in geomorphology is that the objects under investigation are composed of 3-dimensional geometries that change in time due to processes of material fluxes, e.g. soil erosion or mass movements. They have internal properties, e.g. soil texture or bulk density, that determine the effectiveness of these processes but are themselves subject to change as well. Worldwide, geographical data can be shared over the Internet using Web Feature Services. The precondition is the development of a semantic model or ontology based on international standards like GML3, an implementation of ISO 19107, and others. Here we present a GML3-based mark-up language, or application schema, for geomorphic purposes that fulfils the following requirements. First, an object-oriented view of landforms with a true 3-dimensional geometric data format was established. Second, the internal structure and attributes of landforms can be stored. Third, the interaction of processes and landforms is represented. Fourth, the change of all these attributes over time was considered. The presented application schema is available on the Internet and is therefore a first step toward enabling researchers to share information using an OGC Web Feature Service. In this vein, comparing modelling results of landscape evolution with other scientists' observations becomes possible. Compared to prevalent data concepts, the presented model makes it possible to store information about landforms, their geometry, and their characteristics in more detail. It represents the 3D geometry, the set of material properties, and the genesis of a landform by associating processes with a geoobject; thus, time slices of a geomorphic system can be represented, as well as scenarios of landscape modelling. Commercial GI software is not adapted to the needs of geomorphology. Therefore, the development of an application model, i.e. a formal description of semantics, is imperative to partake in technologies like Web Feature Services supporting interoperable data transfer.
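
The four requirements listed above (3-D geometry, internal attributes, associated processes, and change over time) can be caricatured as plain data classes. The names below are invented for illustration; the real model is a GML3 application schema, not Python:

```python
# Toy data model for 4-dimensional landform objects: each time slice
# records geometry, material properties, and the acting processes.
from dataclasses import dataclass, field


@dataclass
class Process:
    name: str            # e.g. "soil erosion"
    rate: float          # material flux, illustrative units


@dataclass
class LandformState:
    time: float                       # time slice
    geometry: list                    # 3-D vertices (x, y, z)
    material: dict                    # e.g. {"bulk_density": 1.4}
    processes: list = field(default_factory=list)


@dataclass
class Landform:
    name: str
    history: list = field(default_factory=list)   # states over time

    def at(self, time):
        # Latest recorded state at or before the requested time.
        past = [s for s in self.history if s.time <= time]
        return max(past, key=lambda s: s.time) if past else None


dune = Landform("dune", [
    LandformState(0.0, [(0, 0, 0), (1, 0, 2.0)], {"bulk_density": 1.4},
                  [Process("deflation", 0.1)]),
    LandformState(1.0, [(0, 0, 0), (1, 0, 1.8)], {"bulk_density": 1.5}),
])
print(dune.at(0.5).material)  # state from t=0.0
```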

  7. Use of Co-occurrences for Temporal Expressions Annotation

    NASA Astrophysics Data System (ADS)

    Craveiro, Olga; Macedo, Joaquim; Madeira, Henrique

    The annotation or extraction of temporal information from text documents is becoming increasingly important in many natural language processing applications such as text summarization, information retrieval, and question answering. This paper presents an original method for easy recognition of temporal expressions in text documents. The method creates semantically classified temporal patterns, using word co-occurrences obtained from training corpora and a pre-defined set of seed keywords derived from the temporal references of the language in question. Participation in a Portuguese named-entity evaluation contest showed promising effectiveness and efficiency results. This approach can be adapted to recognize other types of expressions or other languages, within other contexts, by defining suitable word sets and training corpora.
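
The recognition step can be sketched as below: seed keywords anchor a match, and words that frequently co-occur with them extend it. The seed and co-occurrence sets here are hand-written for illustration, whereas the paper derives them from training corpora:

```python
# Toy seed-keyword temporal expression spotter.
import re

SEED_KEYWORDS = {"january", "monday", "yesterday", "week", "year"}
CO_OCCURRING = {"last", "next", "every", "this"}  # frequent modifiers


def find_temporal_expressions(text):
    clean = [re.sub(r"\W", "", t) for t in text.lower().split()]
    spans = []
    for i, word in enumerate(clean):
        if word in SEED_KEYWORDS:
            start = i
            # Absorb a preceding co-occurring modifier ("last week", ...).
            if i > 0 and clean[i - 1] in CO_OCCURRING:
                start = i - 1
            spans.append(" ".join(clean[start:i + 1]))
    return spans


print(find_temporal_expressions("She arrived last week and leaves on Monday."))
# ['last week', 'monday']
```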

  8. Natural Language Processing Techniques for Extracting and Categorizing Finding Measurements in Narrative Radiology Reports.

    PubMed

    Sevenster, M; Buurman, J; Liu, P; Peters, J F; Chang, P J

    2015-01-01

    Accumulating quantitative outcome parameters may contribute to constructing a healthcare organization in which outcomes of clinical procedures are reproducible and predictable. In imaging studies, measurements are the principal category of quantitative parameters. The purpose of this work is to develop and evaluate two natural language processing engines that extract finding and organ measurements from narrative radiology reports, and to categorize extracted measurements by their "temporality". The measurement extraction engine is developed as a set of regular expressions and was evaluated against a manually created ground truth. Automated categorization of measurement temporality is defined as a machine learning problem. A ground truth was manually developed based on a corpus of radiology reports, and a maximum entropy model was created using features that characterize the measurement itself and its narrative context. The model was evaluated in a ten-fold cross-validation protocol. The measurement extraction engine has precision 0.994 and recall 0.991; the accuracy of the measurement classification engine is 0.960. The work contributes to machine understanding of radiology reports and may find use in software that processes medical data.
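
A measurement-extraction engine built from regular expressions can look roughly like the following. The pattern here is illustrative only; the paper's actual rule set is far more extensive:

```python
# Toy regular-expression extractor for measurements in report text.
import re

# Matches e.g. "3.2 cm", "14 mm", and 2-D sizes like "3.2 x 1.4 cm".
MEASUREMENT = re.compile(
    r"(\d+(?:\.\d+)?)"               # first dimension
    r"(?:\s*x\s*(\d+(?:\.\d+)?))?"   # optional second dimension
    r"\s*(mm|cm)\b",                 # unit
    re.IGNORECASE,
)


def extract_measurements(report):
    return [m.group(0) for m in MEASUREMENT.finditer(report)]


report = "A 3.2 x 1.4 cm nodule, previously 14 mm, is again identified."
print(extract_measurements(report))  # ['3.2 x 1.4 cm', '14 mm']
```

The "temporality" classification would then decide, from the narrative context, that "14 mm" describes a prior measurement and "3.2 x 1.4 cm" the current one; that step is the machine-learning part of the paper and is not sketched here.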

  9. Multiprocessor architecture: Synthesis and evaluation

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application-specific architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization, covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as software or hardware related, although this distinction is not clear or even appropriate in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward removing the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  10. Tensoral for post-processing users and simulation authors

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  11. Enterprise Pattern: integrating the business process into a unified enterprise model of modern service company

    NASA Astrophysics Data System (ADS)

    Li, Ying; Luo, Zhiling; Yin, Jianwei; Xu, Lida; Yin, Yuyu; Wu, Zhaohui

    2017-01-01

    A modern service company (MSC), an enterprise in special domains such as the financial industry, the information service industry, and the technology development industry, depends heavily on information technology. Modelling of such enterprises has attracted much research attention because it promises to help enterprise managers analyse basic business strategies (e.g. the pricing strategy) and even optimise the business process (BP) to gain benefits. While the existing models proposed by economists cover the economic elements, they fail to address the basic BP and its relationship with the economic characteristics. Those proposed in computer science, despite achieving great success in BP modelling, perform poorly in supporting economic analysis. Therefore, the existing approaches fail to satisfy the requirement of enterprise modelling for MSCs, which demands simultaneous consideration of both economic analysis and business processing. In this article, we provide a unified enterprise modelling approach named Enterprise Pattern (EP) which bridges the gap between the BP model and the enterprise economic model of the MSC. Proposing a language named Enterprise Pattern Description Language (EPDL) covering all the basic language elements of EP, we formulate the language syntaxes and two basic extraction rules assisting economic analysis. Furthermore, we extend Business Process Model and Notation (BPMN) to support EPDL, named BPMN for Enterprise Pattern (BPMN4EP). The example of a mobile application platform is studied in detail for a better understanding of EPDL.

  12. Space Shuttle Usage of z/OS

    NASA Technical Reports Server (NTRS)

    Green, Jan

    2009-01-01

    This viewgraph presentation gives a detailed description of the avionics associated with the Space Shuttle's data processing system and its usage of z/OS. The contents include: 1) Mission, Products, and Customers; 2) Facility Overview; 3) Shuttle Data Processing System; 4) Languages and Compilers; 5) Application Tools; 6) Shuttle Flight Software Simulator; 7) Software Development and Build Tools; and 8) Fun Facts and Acronyms.

  13. An overview of artificial intelligence and robotics. Volume 1: Artificial intelligence. Part B: Applications

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1983-01-01

    Artificial Intelligence (AI) is an emerging technology that has recently attracted considerable attention. Many applications are now under development. This report, Part B of a three part report on AI, presents overviews of the key application areas: Expert Systems, Computer Vision, Natural Language Processing, Speech Interfaces, and Problem Solving and Planning. The basic approaches to such systems, the state-of-the-art, existing systems and future trends and expectations are covered.

  14. Development of a Mandarin-English Bilingual Speech Recognition System for Real World Music Retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Qingqing; Pan, Jielin; Lin, Yang; Shao, Jian; Yan, Yonghong

    In recent decades, there has been a great deal of research into the problem of bilingual speech recognition: developing a recognizer that can handle inter- and intra-sentential language switching between two languages. This paper presents our recent work on the development of a grammar-constrained, Mandarin-English bilingual Speech Recognition System (MESRS) for real-world music retrieval. Two of the main difficult issues in building bilingual speech recognition systems for real-world applications are tackled in this paper. One is to balance the performance and the complexity of the bilingual speech recognition system; the other is to effectively deal with matrix-language accents in the embedded language. In order to process intra-sentential language switching and reduce the amount of data required to robustly estimate statistical models, a compact single set of bilingual acoustic models derived by phone set merging and clustering is developed instead of using two separate monolingual models for each language. In our study, a novel two-pass phone clustering method based on a confusion matrix (TCM) is presented and compared with the log-likelihood measure method. Experiments show that TCM can achieve better performance. Since potential system users' native language is Mandarin, which is regarded as the matrix language in our application, their pronunciations of English, the embedded language, usually contain Mandarin accents. In order to deal with matrix-language accents in the embedded language, different non-native adaptation approaches are investigated. Experiments show that the model retraining method outperforms other common adaptation methods such as Maximum A Posteriori (MAP).
With the effective incorporation of approaches on phone clustering and non-native adaptation, the Phrase Error Rate (PER) of MESRS for English utterances was reduced by 24.47% relatively compared to the baseline monolingual English system while the PER on Mandarin utterances was comparable to that of the baseline monolingual Mandarin system. The performance for bilingual utterances achieved 22.37% relative PER reduction.

  15. In Pursuit of Artificial Intelligence.

    ERIC Educational Resources Information Center

    Watstein, Sarah; Kesselman, Martin

    1986-01-01

    Defines artificial intelligence and reviews current research in natural language processing, expert systems, and robotics and sensory systems. Discussion covers current commercial applications of artificial intelligence and projections of uses and limitations in library technical and public services, e.g., in cataloging and online information and…

  16. Image databases: Problems and perspectives

    NASA Technical Reports Server (NTRS)

    Gudivada, V. Naidu

    1989-01-01

    With the increasing number of computer graphics, image processing, and pattern recognition applications, economical storage, efficient representation and manipulation, and powerful and flexible query languages for the retrieval of image data are of paramount importance. These and related issues pertinent to image databases are examined.

  17. X-Graphs: Language and Algorithms for Heterogeneous Graph Streams

    DTIC Science & Technology

    2017-09-01

    METHODS, ASSUMPTIONS, AND PROCEDURES: software abstractions for graph analytic applications; high-performance platforms for graph processing, where data is stored in a distributed file system; and implementations of novel methods for network analysis, including several methods for detection of overlapping communities, personalized PageRank, and node embeddings into a d

  18. Tensor Arithmetic, Geometric and Mathematic Principles of Fluid Mechanics in Implementation of Direct Computational Experiments

    NASA Astrophysics Data System (ADS)

    Bogdanov, Alexander; Khramushin, Vasily

    2016-02-01

    The architecture of a digital computing system determines the technical foundation of a unified mathematical language for exact arithmetic-logical description of phenomena and laws of continuum mechanics for applications in fluid mechanics and theoretical physics. The deep parallelization of the computing processes results in functional programming at a new technological level, providing traceability of the computing processes with automatic application of multiscale hybrid circuits and adaptive mathematical models for the true reproduction of the fundamental laws of physics and continuum mechanics.

  19. tOWL: Integrating Time in OWL

    NASA Astrophysics Data System (ADS)

    Frasincar, Flavius; Milea, Viorel; Kaymak, Uzay

    The Web Ontology Language (OWL) is the most expressive standard language for modeling ontologies on the Semantic Web. In this chapter, we present the temporal OWL (tOWL) language: a temporal extension of the OWL DL language. tOWL is based on three layers added on top of OWL DL. The first layer is the Concrete Domains layer, which allows the representation of restrictions using concrete domain binary predicates. The second layer is the Time Representation layer, which adds time points, intervals, and Allen's 13 interval relations. The third layer is the Change Representation layer which supports a perdurantist view on the world, and allows the representation of complex temporal axioms, such as state transitions. A Leveraged Buyout process is used to exemplify the different tOWL constructs and show the tOWL applicability in a business context.
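
The Time Representation layer above builds on Allen's 13 interval relations. As a reminder of what those relations are, a small sketch with intervals as (start, end) pairs; the function name is invented for this illustration:

```python
# Allen's 13 qualitative relations between two proper intervals
# a = (a1, a2) and b = (b1, b2), with a1 < a2 and b1 < b2.

def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    if a1 < b1 < a2 < b2: return "overlaps"
    return "overlapped-by"


print(allen_relation((1, 3), (3, 5)))   # meets
print(allen_relation((2, 4), (1, 6)))   # during
print(allen_relation((1, 4), (2, 6)))   # overlaps
```

In tOWL these relations are not computed over numbers but asserted as properties between interval individuals, which is what lets a reasoner draw temporal conclusions.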

  20. Time is of the Essence: A Review of Electroencephalography (EEG) and Event-Related Brain Potentials (ERPs) in Language Research.

    PubMed

    Beres, Anna M

    2017-12-01

    The discovery of electroencephalography (EEG) over a century ago has changed the way we understand brain structure and function, in terms of both clinical and research applications. This paper starts with a short description of EEG and then focuses on event-related brain potentials (ERPs) and their use in experimental settings. It describes the typical set-up of an ERP experiment and presents a number of ERP components typically involved in language research. Finally, the advantages and disadvantages of using ERPs in language research are discussed. EEG is in extensive use in today's world, including medical, psychological, and linguistic research. The excellent temporal resolution of EEG allows one to track a brain response in milliseconds and therefore makes it uniquely suited to research concerning language processing.

  1. LogScope

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Smith, Margaret H.; Barringer, Howard; Groce, Alex

    2012-01-01

    LogScope is a software package for analyzing log files. The intended use is offline post-processing of such logs after the execution of the system under test; LogScope can, however, in principle also be used to monitor systems online during their execution. Logs are checked against requirements formulated as monitors expressed in a rule-based specification language. This language has similarities to a state machine language, but is more expressive, for example in its handling of data parameters. The specification language is user-friendly, simple, and yet expressive enough for many practical scenarios. The LogScope software was initially developed specifically to assist in testing JPL's Mars Science Laboratory (MSL) flight software, but it is very generic in nature and can be applied to any application that produces some form of logging information (which almost any software does).
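
A monitor of the kind described, a rule with state and data parameters checked against a log, can be caricatured in a few lines. The rule and event names below are invented; the syntax is not LogScope's specification language:

```python
# Toy log checker for the rule "every COMMAND(x) must eventually be
# followed by SUCCESS(x)". The data parameter x is what a plain state
# machine cannot easily track; here it is just carried in a set.

def check_command_rule(events):
    pending = set()  # data parameters of commands not yet succeeded
    for name, arg in events:
        if name == "COMMAND":
            pending.add(arg)
        elif name == "SUCCESS":
            pending.discard(arg)
    return pending  # empty set: the log satisfies the rule


good = [("COMMAND", "TURN"), ("SUCCESS", "TURN")]
bad = [("COMMAND", "TURN"), ("COMMAND", "STOP"), ("SUCCESS", "TURN")]
print(check_command_rule(good))  # set()
print(check_command_rule(bad))   # {'STOP'}
```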

  2. Webulous and the Webulous Google Add-On--a web service and application for ontology building from templates.

    PubMed

    Jupp, Simon; Burdett, Tony; Welter, Danielle; Sarntivijai, Sirarat; Parkinson, Helen; Malone, James

    2016-01-01

    Authoring bio-ontologies is a task that has traditionally been undertaken by skilled experts trained in understanding complex languages such as the Web Ontology Language (OWL), in tools designed for such experts. As requests for new terms are made, the need for expert ontologists represents a bottleneck in the development process. Furthermore, the ability to rigorously enforce ontology design patterns in large, collaboratively developed ontologies is difficult with existing ontology authoring software. We present Webulous, an application suite for supporting ontology creation by design patterns. Webulous provides infrastructure to specify templates for populating ontology design patterns that get transformed into OWL assertions in a target ontology. Webulous provides programmatic access to the template server and a client application has been developed for Google Sheets that allows templates to be loaded, populated and resubmitted to the Webulous server for processing. The development and delivery of ontologies to the community requires software support that goes beyond the ontology editor. Building ontologies by design patterns and providing simple mechanisms for the addition of new content helps reduce the overall cost and effort required to develop an ontology. The Webulous system provides support for this process and is used as part of the development of several ontologies at the European Bioinformatics Institute.
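
The core template idea above, a design pattern with placeholders that each spreadsheet row fills in before the result is asserted into the target ontology, can be sketched as follows. The pattern string and row contents are illustrative only and are not Webulous's actual output format:

```python
# Toy version of populating an ontology design pattern from rows.
# Each row fills the placeholders; the result is a Manchester-style
# class assertion string (illustrative, not Webulous output).

PATTERN = "Class: {term} SubClassOf: part_of some {whole}"

rows = [
    {"term": "mitochondrion", "whole": "cell"},
    {"term": "nucleus", "whole": "cell"},
]

assertions = [PATTERN.format(**row) for row in rows]
for a in assertions:
    print(a)
```

The point of the pattern is that curators only ever supply `term` and `whole`; the axiom shape is enforced centrally, which is the design-pattern rigor the abstract argues is hard to maintain by hand.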

  3. Semantic biomedical resource discovery: a Natural Language Processing framework.

    PubMed

    Sfakianaki, Pepi; Koumakis, Lefteris; Sfakianakis, Stelios; Iatraki, Galatia; Zacharioudakis, Giorgos; Graf, Norbert; Marias, Kostas; Tsiknakis, Manolis

    2015-09-30

    A plethora of publicly available biomedical resources currently exist and are constantly increasing at a fast rate. In parallel, specialized repositories are being developed, indexing numerous clinical and biomedical tools. The main drawback of such repositories is the difficulty of locating appropriate resources for a clinical or biomedical decision task, especially for users who are not information technology experts. Moreover, although NLP research in the clinical domain has been active since the 1960s, progress in the development of NLP applications has been slow and lags behind progress in the general NLP domain. The aim of the present study is to investigate the use of semantics for annotating biomedical resources with domain-specific ontologies, and to exploit Natural Language Processing methods to empower non-expert users to efficiently search for biomedical resources using natural language. A Natural Language Processing engine which can "translate" free text into targeted queries, automatically transforming a clinical research question into a request description that contains only ontology terms, has been implemented. The implementation is based on information extraction techniques for natural language text, guided by integrated ontologies. Furthermore, knowledge from robust text mining methods has been incorporated to map descriptions into suitable domain ontologies, ensuring that the biomedical resource descriptions are domain oriented and enhancing the accuracy of service discovery. The framework is freely available as a web application at ( http://calchas.ics.forth.gr/ ). For our experiments, a range of clinical questions was established based on descriptions of clinical trials from the ClinicalTrials.gov registry as well as recommendations from clinicians. Domain experts manually identified the tools in a tools repository which are suitable for addressing the clinical questions at hand, either individually or as a set of tools forming a computational pipeline, and the results were compared with those obtained from an automated discovery of candidate biomedical tools. Precision and recall measurements were used for the evaluation. Our results indicate that the proposed framework has high precision and low recall, implying that the system returns essentially more relevant results than irrelevant ones. Adequate biomedical ontologies are already available, and existing NLP tools and biomedical annotation systems are sufficient for the implementation of a biomedical resource discovery framework based on the semantic annotation of resources and the use of NLP techniques. The results of the present study demonstrate the clinical utility of the proposed framework, which aims to bridge the gap between a clinical question in natural language and efficient dynamic biomedical resource discovery.
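
The evaluation above reports precision and recall over retrieved tools. For reference, the two measures are computed as follows; the tool names are made up:

```python
# Precision: fraction of retrieved items that are relevant.
# Recall: fraction of relevant items that were retrieved.

def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall


retrieved = ["toolA", "toolB", "toolC"]
relevant = ["toolA", "toolB", "toolD", "toolE"]
p, r = precision_recall(retrieved, relevant)
print(p, r)  # high precision (2/3), lower recall (1/2)
```

A system with this profile, as the abstract describes, returns few wrong tools but misses some of the relevant ones.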

  4. Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence.

    PubMed

    Parton, Becky Sue

    2006-01-01

    In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.

  5. An integrated domain specific language for post-processing and visualizing electrophysiological signals in Java.

    PubMed

    Strasser, T; Peters, T; Jagle, H; Zrenner, E; Wilke, R

    2010-01-01

    Electrophysiology of vision, especially the electroretinogram (ERG), is used as a non-invasive way of functionally testing the visual system. The ERG is a combined electrical response generated by neural and non-neuronal cells in the retina in response to light stimulation. This response can be recorded and used for the diagnosis of numerous disorders. For both clinical practice and clinical trials, it is important to process those signals in an accurate and fast way and to provide the results as structured, consistent reports. Therefore, we developed a freely available, open-source framework in Java (http://www.eye.uni-tuebingen.de/project/idsI4sigproc). The framework is focused on easy integration with existing applications. By leveraging well-established software patterns like pipes-and-filters and fluent interfaces, and by designing the application programming interface (API) as an integrated domain specific language (DSL), the framework provides a smooth learning curve. It already contains several processing methods and visualization features and can be extended easily by implementing the provided interfaces. In this way, not only can new processing methods be added, but the framework can also be adopted for other areas of signal processing. This article describes in detail the structure and implementation of the framework and demonstrates its application through the software package used in clinical practice and clinical trials at the University Eye Hospital Tuebingen, one of the largest departments in the field of visual electrophysiology in Europe.
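
The two patterns the framework combines, pipes-and-filters and a fluent interface, read roughly as follows in a toy sketch (in Python for brevity; the actual framework is in Java, and the class and filter names below are invented):

```python
# Each method is a filter returning a new Signal, so calls chain into a
# left-to-right processing pipeline (pipes-and-filters behind a fluent API).

class Signal:
    def __init__(self, samples):
        self.samples = list(samples)

    def offset(self, value):              # filter 1: remove a DC offset
        return Signal(s - value for s in self.samples)

    def clip(self, lo, hi):               # filter 2: limit the amplitude
        return Signal(min(max(s, lo), hi) for s in self.samples)

    def smooth(self):                     # filter 3: 2-point moving average
        s = self.samples
        return Signal([(a + b) / 2 for a, b in zip(s, s[1:])] or s)


# The fluent style reads like a description of the processing chain:
result = Signal([10, 14, 30, 12]).offset(10).clip(0, 15).smooth()
print(result.samples)  # [2.0, 9.5, 8.5]
```

Because each filter returns a fresh `Signal`, stages can be reordered or inserted without touching the others, which is what makes such a DSL easy to extend with new processing methods.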

  6. 76 FR 18538 - Applications for New Awards; National Professional Development Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-04

    ... DEPARTMENT OF EDUCATION [CFDA 84.195N] Applications for New Awards; National Professional Development Program AGENCY: Office of English Language Acquisition, Language Enhancement, and Academic... Language Acquisition, Language Enhancement and Academic Achievement for Limited English Proficient Students...

  7. Structuring Broadcast Audio for Information Access

    NASA Astrophysics Data System (ADS)

    Gauvain, Jean-Luc; Lamel, Lori

    2003-12-01

    One rapidly expanding application area for state-of-the-art speech recognition technology is the automatic processing of broadcast audiovisual data for information access. Since much of the linguistic information is found in the audio channel, speech recognition is a key enabling technology which, when combined with information retrieval techniques, can be used for searching large audiovisual document collections. Audio indexing must take into account the specificities of audio data such as needing to deal with the continuous data stream and an imperfect word transcription. Other important considerations are dealing with language specificities and facilitating language portability. At Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), broadcast news transcription systems have been developed for seven languages: English, French, German, Mandarin, Portuguese, Spanish, and Arabic. The transcription systems have been integrated into prototype demonstrators for several application areas such as audio data mining, structuring audiovisual archives, selective dissemination of information, and topic tracking for media monitoring. As examples, this paper addresses the spoken document retrieval and topic tracking tasks.

  8. Presentation of the Coding and Assessment System for Narratives of Trauma (CASNOT): Application in Spanish Battered Women and Preliminary Analyses.

    PubMed

    Fernández-Lansac, Violeta; Crespo, María

    2017-07-26

    This study introduces a new coding system, the Coding and Assessment System for Narratives of Trauma (CASNOT), to analyse several language domains in narratives of autobiographical memories, especially in trauma narratives. The development of the coding system is described. It was applied to assess positive and traumatic/negative narratives in 50 battered women (trauma-exposed group) and 50 nontrauma-exposed women (control group). Three blind raters coded each narrative. Inter-rater reliability analyses were conducted for the CASNOT language categories (multirater Kfree coefficients) and dimensions (intraclass correlation coefficients). High levels of inter-rater agreement were found for most of the language domains. Categories that did not reach the expected reliability were mainly those related to cognitive processes, which reflects difficulties in operationalizing constructs such as lack of control or helplessness, control or planning, and rationalization or memory elaboration. Applications and limitations of the CASNOT are discussed to enhance narrative measures for autobiographical memories.

  9. x-y-recording in transmission electron microscopy. A versatile and inexpensive interface to personal computers with application to stereology.

    PubMed

    Rickmann, M; Siklós, L; Joó, F; Wolff, J R

    1990-09-01

    An interface for IBM XT/AT-compatible computers is described which has been designed to read the actual specimen stage position of electron microscopes. The complete system consists of (i) optical incremental encoders attached to the x- and y-stage drivers of the microscope, (ii) two keypads for operator input, (iii) an interface card fitted to the bus of the personal computer, (iv) a standard configuration IBM XT (or compatible) personal computer optionally equipped with a (v) HP Graphic Language controllable colour plotter. The small size of the encoders and their connection to the stage drivers by simple ribbed belts allows an easy adaptation of the system to most electron microscopes. Operation of the interface card itself is supported by any high-level language available for personal computers. By the modular concept of these languages, the system can be customized to various applications, and no computer expertise is needed for actual operation. The present configuration offers an inexpensive attachment, which covers a wide range of applications from a simple notebook to high-resolution (200-nm) mapping of tissue. Since section coordinates can be processed in real-time, stereological estimations can be derived directly "on microscope". This is exemplified by an application in which particle numbers were determined by the disector method.

  10. Linguistic measures of the referential process in psychodynamic treatment: the English and Italian versions.

    PubMed

    Mariani, Rachele; Maskit, Bernard; Bucci, Wilma; De Coro, Alessandra

    2013-01-01

    The referential process is defined in the context of Bucci's multiple code theory as the process by which nonverbal experience is connected to language. The English computerized measures of the referential process, which have been applied in psychotherapy research, include the Weighted Referential Activity Dictionary (WRAD), and measures of Reflection, Affect and Disfluency. This paper presents the development of the Italian version of the IWRAD by modeling Italian texts scored by judges, and shows the application of the IWRAD and other Italian measures in three psychodynamic treatments evaluated for personality change using the Shedler-Westen Assessment Procedure (SWAP-200). Clinical predictions based on applications of the English measures were supported.

  11. Research and application of online measurement system of tire tread profile in automobile tire production

    NASA Astrophysics Data System (ADS)

    Wang, Pengyao; Chen, Xiangguang; Yang, Kai; Liu, Xuejiao

    2017-01-01

    To improve the efficiency of measuring the width and thickness of tire tread during automobile tire production, the actual conditions of the tire production process are analyzed, and a fast online measurement system for moving tire tread is established in this paper. The coordinate data of the tire tread profile are acquired by a 3D laser sensor, and the client program was developed in C#, an object-oriented programming language. The system provides a real-time display of the tire tread profile and the data required in the tire production process. Experimental results demonstrate that the measuring precision of the system is ≤ 1 mm, which meets the measurement requirements of the production process, and that the system is easy to install and test and operates stably.

  12. Specification, Design, and Analysis of Advanced HUMS Architectures

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    2004-01-01

    During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas. They are: (a) To improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) To improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) To evaluate the software architecture. 2) We have defined a new architectural language called HADL or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using the currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) Selection of solutions from a large space of designs; (b) Synthesis of designs. However, the automation process is not an absolute Artificial Intelligence (AI) approach though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solve the problems rather than creating a new design in the literal sense. 
Since searching is adopted as the main technique, the challenges involved are: (a) to minimize the effort of searching a database in which a very large number of possibilities exist; (b) to develop representations that conveniently depict design knowledge accumulated over many years; and (c) to capture the information required to aid the automation process.
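
    Because HADL is XML-based, specifications can be read with stock XML parsers, as the abstract notes. A sketch of such parsing follows (the element and attribute names here are invented for illustration, not the actual HADL schema):

```python
# Parse a toy XML architecture description with the standard library.
# The schema below is hypothetical, in the spirit of HADL/xADL.
import xml.etree.ElementTree as ET

hadl_doc = """
<architecture name="hums-monitor">
  <component id="sensor" type="accelerometer"/>
  <component id="analyzer" type="fft"/>
  <connector from="sensor" to="analyzer"/>
</architecture>
"""

root = ET.fromstring(hadl_doc)
# Collect components and the connectors that link them.
components = {c.get("id"): c.get("type") for c in root.iter("component")}
links = [(c.get("from"), c.get("to")) for c in root.iter("connector")]
```

    This is the portability argument in miniature: any XML toolchain can consume the specification without purpose-built parsing software.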

  13. Native Language Processing using Exegy Text Miner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Compton, J

    2007-10-18

    Lawrence Livermore National Laboratory's New Architectures Testbed recently evaluated Exegy's Text Miner appliance to assess its applicability to high-performance, automated native language analysis. The evaluation was performed with support from the Computing Applications and Research Department, in close collaboration with Global Security programs and institutional activities in native language analysis. The Exegy Text Miner is a special-purpose device for detecting and flagging user-supplied patterns of characters, whether in streaming text or in collections of documents, at very high rates. Patterns may consist of simple lists of words or complex expressions with sub-patterns linked by logical operators. These searches are accomplished through a combination of specialized hardware (i.e., one or more field-programmable gate arrays in addition to general-purpose processors) and proprietary software that exploits these individual components in an optimal manner (through parallelism and pipelining). For this application the Text Miner has performed accurately and reproducibly at high speeds approaching those documented by Exegy in its technical specifications. The Exegy Text Miner is primarily intended for the single-byte ASCII characters used in English, but at a technical level its capabilities are language-neutral and can be applied to multi-byte character sets such as those found in Arabic and Chinese. The system is used for searching databases or tracking streaming text with respect to one or more lexicons. In a real operational environment it is likely that data would need to be processed separately for each lexicon or search technique. However, the searches would be so fast that multiple passes should not be considered a limitation a priori. Indeed, it is conceivable that large databases could be searched as often as necessary if new queries were deemed worthwhile.
This project evaluated the Exegy Text Miner installed in the New Architectures Testbed running under software version 2.0. The concrete goals of the evaluation were to test the speed and accuracy of the Exegy appliance and to explore ways it could be employed in current or future text-processing projects at Lawrence Livermore National Laboratory (LLNL). The study also extended beyond these goals to evaluate the appliance's suitability for processing foreign-language sources. The scope of the study was limited to the capabilities of the Exegy Text Miner in file-search mode and did not attempt to simulate the streaming mode. Since the capabilities of the machine are invariant to the choice of input mode, and since timing should not depend on this choice, the added effort was judged unnecessary for this restricted study.

  14. Understanding Language Learning: Review of the Application of the Interaction Model in Foreign Language Contexts

    ERIC Educational Resources Information Center

    Dixon, L. Quentin; Wu, Shuang

    2014-01-01

    Purpose: This paper examined the application of the input-interaction-output model in English-as-Foreign-Language (EFL) learning environments with four specific questions: (1) How do the three components function in the model? (2) Does interaction in the foreign language classroom seem to be effective for foreign language acquisition? (3) What…

  15. A Grammar Library for Information Structure

    ERIC Educational Resources Information Center

    Song, Sanghoun

    2014-01-01

    This dissertation makes substantial contributions to both the theoretical and computational treatment of information structure, with an eye toward creating natural language processing applications such as multilingual machine translation systems. The aim of the present dissertation is to create a grammar library of information structure for the…

  16. Incorporating Translation in Qualitative Studies: Two Case Studies in Education

    ERIC Educational Resources Information Center

    Sutrisno, Agustian; Nguyen, Nga Thanh; Tangen, Donna

    2014-01-01

    Cross-language qualitative research in education continues to increase. However, there has been inadequate discussion in the literature concerning the translation process that ensures research trustworthiness applicable for bilingual researchers. Informed by the literature on evaluation criteria for qualitative data translation, this paper…

  17. Crowdsourcing and curation: perspectives from biology and natural language processing

    DOE PAGES

    Hirschman, Lynette; Fort, Karën; Boué, Stéphanie; ...

    2016-08-08

    Crowdsourcing is increasingly utilized for performing tasks in both natural language processing and biocuration. Although there have been many applications of crowdsourcing in these fields, there have been fewer high-level discussions of the methodology and its applicability to biocuration. This paper explores crowdsourcing for biocuration through several case studies that highlight different ways of leveraging ‘the crowd’; these raise issues about the kind(s) of expertise needed, the motivations of participants, and questions related to feasibility, cost and quality. The paper is an outgrowth of a panel session held at BioCreative V (Seville, September 9–11, 2015). The session consisted of four short talks, followed by a discussion. In their talks, the panelists explored the role of expertise and the potential to improve crowd performance by training; the challenge of decomposing tasks to make them amenable to crowdsourcing; and the capture of biological data and metadata through community editing.

  19. A methodology for extending domain coverage in SemRep.

    PubMed

    Rosemblat, Graciela; Shin, Dongwook; Kilicoglu, Halil; Sneiderman, Charles; Rindflesch, Thomas C

    2013-12-01

    We describe a domain-independent methodology to extend SemRep coverage beyond the biomedical domain. SemRep, a natural language processing application originally designed for biomedical texts, uses the knowledge sources provided by the Unified Medical Language System (UMLS©). Ontological and terminological extensions to the system are needed in order to support other areas of knowledge. We extended SemRep's application by developing a semantic representation of a previously unsupported domain. This was achieved by adapting well-known ontology engineering phases and integrating them with the UMLS knowledge sources on which SemRep crucially depends. While the process to extend SemRep coverage has been successfully applied in earlier projects, this paper presents in detail the step-wise approach we followed and the mechanisms implemented. A case study in the field of medical informatics illustrates how the ontology engineering phases have been adapted for optimal integration with the UMLS. We provide qualitative and quantitative results, which indicate the validity and usefulness of our methodology. Published by Elsevier Inc.

  20. caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research.

    PubMed

    Crowley, Rebecca S; Castine, Melissa; Mitchell, Kevin; Chavan, Girish; McSherry, Tara; Feldman, Michael

    2010-01-01

    The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.

  1. Application of a model of the auditory primal sketch to cross-linguistic differences in speech rhythm: Implications for the acquisition and recognition of speech

    NASA Astrophysics Data System (ADS)

    Todd, Neil P. M.; Lee, Christopher S.

    2002-05-01

    It has long been noted that the world's languages vary considerably in their rhythmic organization. Different languages seem to privilege different phonological units as their basic rhythmic unit, and there is now a large body of evidence that such differences have important consequences for crucial aspects of language acquisition and processing. The most fundamental finding is that the rhythmic structure of a language strongly influences the process of spoken-word recognition. This finding, together with evidence that infants are sensitive from birth to rhythmic differences between languages and exploit rhythmic cues to segmentation at an earlier developmental stage than other cues, prompted the claim that rhythm is the key that allows infants to begin building a lexicon and then go on to acquire syntax. It is therefore of interest to determine how differences in rhythmic organization arise at the acoustic/auditory level. In this paper, it is shown how an auditory model of the primitive representation of sound provides just such an account of rhythmic differences. Its performance is evaluated on a data set of French and English sentences and compared with the results yielded by the phonetic accounts of Frank Ramus and his colleagues and Esther Grabe and her colleagues.
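
    The phonetic account of Ramus and colleagues summarizes speech rhythm with interval measures such as %V, the proportion of utterance duration that is vocalic, and deltaC, the standard deviation of consonantal-interval durations. A minimal sketch, with invented interval durations in seconds:

```python
# Compute two standard rhythm metrics from segmented interval durations.
import statistics

def rhythm_metrics(vowel_intervals, consonant_intervals):
    # %V: share of total duration spent in vocalic intervals.
    total = sum(vowel_intervals) + sum(consonant_intervals)
    pct_v = 100.0 * sum(vowel_intervals) / total
    # deltaC: population standard deviation of consonantal intervals.
    delta_c = statistics.pstdev(consonant_intervals)
    return pct_v, delta_c

# Invented durations for a short utterance (seconds).
pct_v, delta_c = rhythm_metrics([0.08, 0.12, 0.10], [0.05, 0.15, 0.10])
```

    Syllable-timed languages such as French tend toward higher %V and lower deltaC than stress-timed languages such as English, which is the contrast the paper's auditory model seeks to reproduce without phonetic segmentation.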

  2. Specifying real-time systems with interval logic

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1988-01-01

    Pure temporal logic makes no reference to time. An interval temporal logic and an extension to that logic which includes real-time constraints are described. The application of this logic is demonstrated by giving a specification for the well-known lift (elevator) example. It is shown how interval logic can be extended to include a notion of process, and how the specification language and verification environment of EHDM could be enhanced to support this logic. A specification of the alternating bit protocol in this extended version of the specification language of EHDM is given.

  3. EChem++--an object-oriented problem solving environment for electrochemistry. 2. The kinetic facilities of Ecco--a compiler for (electro-)chemistry.

    PubMed

    Ludwig, Kai; Speiser, Bernd

    2004-01-01

    We describe a modeling software component, Ecco, implemented in the C++ programming language. It assists in the formulation of physicochemical systems including, in particular, electrochemical processes within general geometries. Ecco's kinetic part then translates any user-defined reaction mechanism into an object-oriented representation and generates the corresponding mathematical model equations. The input language, its grammar, the object-oriented design of Ecco based on design patterns, and its integration into the open-source software project EChem++ are discussed. Application strategies are given.
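
    The translation Ecco performs, from a textual reaction mechanism to mass-action rate equations, can be sketched as follows (in Python rather than Ecco's C++, with a deliberately minimal mechanism format):

```python
# Turn a textual reaction mechanism into mass-action rate equations.
# The input syntax here is a toy, not Ecco's actual input language.

def parse_reaction(line):
    # "A + B -> C" becomes (["A", "B"], ["C"]).
    lhs, rhs = line.split("->")
    reactants = [s.strip() for s in lhs.split("+")]
    products = [s.strip() for s in rhs.split("+")]
    return reactants, products

def rate_equations(mechanism, k, conc):
    # d[X]/dt for every species under mass-action kinetics:
    # each reaction proceeds at k * product of reactant concentrations.
    dcdt = {s: 0.0 for s in conc}
    for (reactants, products), rate_const in zip(mechanism, k):
        rate = rate_const
        for r in reactants:
            rate *= conc[r]
        for r in reactants:
            dcdt[r] -= rate
        for p in products:
            dcdt[p] += rate
    return dcdt

mech = [parse_reaction("A + B -> C")]
derivs = rate_equations(mech, k=[2.0], conc={"A": 1.0, "B": 0.5, "C": 0.0})
```

    A compiler like Ecco would additionally build an object model of species and reactions and hand the generated equations to a numerical solver; the sketch shows only the mechanism-to-equations step.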

  4. A Framework for Distributed Mixed Language Scientific Applications

    NASA Astrophysics Data System (ADS)

    Quarrie, D. R.

    The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.

  5. La Description des langues naturelles en vue d'applications linguistiques: Actes du colloque (The Description of Natural Languages with a View to Linguistic Applications: Conference Papers). Publication K-10.

    ERIC Educational Resources Information Center

    Ouellon, Conrad, Comp.

    Presentations from a colloquium on applications of research on natural languages to computer science address the following topics: (1) analysis of complex adverbs; (2) parser use in computerized text analysis; (3) French language utilities; (4) lexicographic mapping of official language notices; (5) phonographic codification of Spanish; (6)…

  6. Automatic classification of written descriptions by healthy adults: An overview of the application of natural language processing and machine learning techniques to clinical discourse analysis.

    PubMed

    Toledo, Cíntia Matsuda; Cunha, Andre; Scarton, Carolina; Aluísio, Sandra

    2014-01-01

    Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two Natural Language Processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones; two extracted manually; description time; type of scene described, simple or complex; presentation order, i.e., which type of picture was described first; and age). In this study, the descriptions by 144 of the subjects studied in Toledo 18, which included 200 healthy Brazilians of both genders, were used. A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods.
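
    The classifier in the study is an SVM with an RBF kernel over automatically extracted linguistic features. As a dependency-free sketch in the same spirit, here is a kernel perceptron with that RBF kernel (a simpler kernel classifier, not the paper's actual SVM), with toy two-dimensional vectors standing in for the real linguistic features:

```python
# Kernel perceptron with an RBF kernel on toy "feature vectors".
import math

def rbf(x, z, gamma=1.0):
    # Radial basis function kernel: exp(-gamma * ||x - z||^2).
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def train_kernel_perceptron(X, y, epochs=10, gamma=1.0):
    # alpha[i] counts how often example i was used to correct a mistake.
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i, xi in enumerate(X):
            pred = sum(a * yj * rbf(xj, xi, gamma)
                       for a, xj, yj in zip(alpha, X, y))
            if y[i] * pred <= 0:  # misclassified: strengthen this example
                alpha[i] += 1.0
    return alpha

def predict(alpha, X, y, x, gamma=1.0):
    s = sum(a * yj * rbf(xj, x, gamma) for a, xj, yj in zip(alpha, X, y))
    return 1 if s > 0 else -1

# Toy data: two "education-level groups" separated in feature space.
X = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1)]
y = [-1, -1, 1, 1]
alpha = train_kernel_perceptron(X, y)
```

    The RBF kernel lets such classifiers draw nonlinear boundaries in the original feature space, which is why it suits feature sets mixing counts, times, and categorical indicators.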

  7. Software-safety and software quality assurance in real-time applications Part 2: Real-time structures and languages

    NASA Astrophysics Data System (ADS)

    Schoitsch, Erwin

    1988-07-01

    Our society is depending more and more on the reliability of embedded (real-time) computer systems, even in everyday life. Considering the complexity of the real world, this might become a severe threat. Real-time programming is a discipline important not only in process control and data-acquisition systems, but also in fields like communication, office automation, interactive databases, interactive graphics, and operating-systems development. General concepts of concurrent programming and constructs for process synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, and interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions, or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other and with respect to their potential for quality and safety.

  8. Automated speech understanding: the next generation

    NASA Astrophysics Data System (ADS)

    Picone, J.; Ebel, W. J.; Deshmukh, N.

    1995-04-01

    Modern speech understanding systems merge interdisciplinary technologies from Signal Processing, Pattern Recognition, Natural Language, and Linguistics into a unified statistical framework. These systems, which have applications in a wide range of signal processing problems, represent a revolution in Digital Signal Processing (DSP). Once a field dominated by vector-oriented processors and linear algebra-based mathematics, the current generation of DSP-based systems rely on sophisticated statistical models implemented using a complex software paradigm. Such systems are now capable of understanding continuous speech input for vocabularies of several thousand words in operational environments. The current generation of deployed systems, based on small vocabularies of isolated words, will soon be replaced by a new technology offering natural language access to vast information resources such as the Internet, and provide completely automated voice interfaces for mundane tasks such as travel planning and directory assistance.
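
    The unified statistical framework referred to rests on hidden Markov model decoding. A toy Viterbi pass over two hypothetical acoustic states ("sil" vs. "speech", with invented probabilities, nothing from any real acoustic model) illustrates the machinery:

```python
# Viterbi decoding of the most likely hidden state sequence in an HMM.

def viterbi(observations, states, start_p, trans_p, emit_p):
    # best[s] = probability of the most likely path ending in state s.
    best = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        new_best, new_path = {}, {}
        for s in states:
            # Pick the predecessor that maximizes the path probability.
            prev = max(states, key=lambda p: best[p] * trans_p[p][s])
            new_best[s] = best[prev] * trans_p[prev][s] * emit_p[s][obs]
            new_path[s] = path[prev] + [s]
        best, path = new_best, new_path
    winner = max(states, key=lambda s: best[s])
    return path[winner]

states = ("sil", "speech")
decoded = viterbi(
    ["quiet", "loud", "loud"], states,
    start_p={"sil": 0.8, "speech": 0.2},
    trans_p={"sil": {"sil": 0.6, "speech": 0.4},
             "speech": {"sil": 0.3, "speech": 0.7}},
    emit_p={"sil": {"quiet": 0.9, "loud": 0.1},
            "speech": {"quiet": 0.2, "loud": 0.8}},
)
```

    Real recognizers run the same dynamic program over thousands of context-dependent phone states with continuous acoustic likelihoods, but the recursion is the one above.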

  9. Support for linguistic macrofamilies from weighted sequence alignment

    PubMed Central

    Jäger, Gerhard

    2015-01-01

    Computational phylogenetics is in the process of revolutionizing historical linguistics. Recent applications have shed new light on controversial issues, such as the location and time depth of language families and the dynamics of their spread. So far, these approaches have been limited to single-language families because they rely on a large body of expert cognacy judgments or grammatical classifications, which is currently unavailable for most language families. The present study pursues a different approach. Starting from raw phonetic transcription of core vocabulary items from very diverse languages, it applies weighted string alignment to track both phonetic and lexical change. Applied to a collection of ∼1,000 Eurasian languages and dialects, this method, combined with phylogenetic inference, leads to a classification in excellent agreement with established findings of historical linguistics. Furthermore, it provides strong statistical support for several putative macrofamilies contested in current historical linguistics. In particular, there is a solid signal for the Nostratic/Eurasiatic macrofamily. PMID:26403857
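
    Weighted string alignment of the kind the study applies can be sketched as dynamic programming with a substitution-cost function; the cost values below are invented for illustration, not the paper's learned sound-class weights.

```python
# Weighted Needleman-Wunsch alignment cost between two strings.

def weighted_alignment(a, b, sub_cost, gap=1.0):
    # d[i][j] = minimal cost of aligning a[:i] with b[:j].
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * gap
    for j in range(1, m + 1):
        d[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]),  # substitute
                d[i - 1][j] + gap,                               # delete
                d[i][j - 1] + gap,                               # insert
            )
    return d[n][m]

# Phonetically similar segments substitute cheaply, dissimilar ones do not.
def sub_cost(x, y):
    if x == y:
        return 0.0
    similar = {frozenset("td"), frozenset("pb"), frozenset("kg")}
    return 0.3 if frozenset((x, y)) in similar else 1.0

cost = weighted_alignment("hand", "hant", sub_cost)
```

    Aggregating such pairwise costs across core vocabulary items yields the language distances fed to phylogenetic inference.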

  10. Enhancing Web applications in radiology with Java: estimating MR imaging relaxation times.

    PubMed

    Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J

    1998-01-01

    Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.
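
    A relaxation-time estimate of the kind the tool performs can be sketched (here in Python rather than the article's Java) as a log-linear least-squares fit of the spin-echo decay S(TE) = S0 * exp(-TE / T2); the data points below are synthetic.

```python
# Estimate T2 from signal intensities measured at several echo times.
import math

def estimate_t2(te_values, signals):
    # Taking logs turns the exponential decay into a straight line:
    # ln S = ln S0 - TE / T2, so the fitted slope is -1 / T2.
    xs, ys = te_values, [math.log(s) for s in signals]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -1.0 / slope

# Synthetic measurements with S0 = 1000 and a true T2 of 80 ms.
te = [20.0, 40.0, 60.0, 80.0]
signal = [1000.0 * math.exp(-t / 80.0) for t in te]
t2 = estimate_t2(te, signal)
```

    The confidence intervals and P values the article mentions come from the residuals of the same fit applied to noisy region-of-interest measurements.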

  11. The Design of Lexical Database for Indonesian Language

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Amalia, A.

    2017-03-01

    Kamus Besar Bahasa Indonesia (KBBI), the official dictionary of the Indonesian language, provides lists of words with their meanings. The online version can be accessed via the Internet. Another online dictionary is Kateglo. KBBI online and Kateglo only provide an interface for humans; a machine cannot easily retrieve data from the dictionary without advanced techniques. However, lexical information about words is required in research and application development related to natural language processing, text mining, information retrieval, and sentiment analysis. To address this requirement, we need to build a lexical database which provides well-defined, structured information about words. A well-known lexical database is WordNet, which provides the relations among words in English. This paper proposes the design of a lexical database for the Indonesian language based on the combination of the KBBI 4th edition, Kateglo, and the WordNet structure. Knowledge representation utilizing semantic networks depicts the relations among words and provides a new structure of lexical database for the Indonesian language. The result of this design can be used as the foundation to build the lexical database for Indonesian.
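
    A toy sketch of the WordNet-style relational structure such a design implies; the entries and relation names below are invented examples, not actual KBBI or Kateglo data.

```python
# A miniature lexical database: each entry carries structured fields
# and typed relations to other entries, forming a semantic network.

lexicon = {
    "kucing": {"pos": "noun", "gloss": "cat",
               "hypernym": "hewan", "synonyms": ["meong"]},
    "hewan":  {"pos": "noun", "gloss": "animal",
               "hypernym": None, "synonyms": ["binatang"]},
}

def hypernym_chain(word):
    # Follow hypernym links upward, a WordNet-style traversal that a
    # machine can perform but a human-oriented dictionary page cannot.
    chain = []
    while word is not None:
        chain.append(word)
        word = lexicon[word]["hypernym"]
    return chain
```

    The point of the design is exactly this machine-traversability: relations such as synonymy and hypernymy become queryable edges rather than prose definitions.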

  12. Educational and interpersonal uses of home computers by adolescents with and without specific language impairment.

    PubMed

    Durkin, Kevin; Conti-Ramsden, Gina; Walker, Allan; Simkin, Zoë

    2009-03-01

    Many uses of new media entail processing language content, yet little is known about the relationship between language ability and media use in young people. This study compares educational versus interpersonal uses of home computers in adolescents with and without a history of specific language impairment (SLI). Participants were 55 17-year-olds with SLI and 72 typically developing peers. Measures of frequency and ease of computer use were obtained as well as assessments of participants' psycholinguistic skills. Results showed a strong preference for interpersonal computer use in both groups. Virtually all participants engaged with interpersonal new media, finding them relatively easy to use. In contrast, one third of adolescents with SLI did not use educational applications during a typical week. Regression analyses revealed that lower frequency of educational use was associated with poorer language and literacy skills. However, in adolescents with SLI, this association was mediated by perceived ease of use. The findings show that language ability contributes to new media use and that adolescents with SLI are at a greater risk of low levels of engagement with educational technology.

  13. Understanding Language: An Information-Processing Analysis of Speech Perception, Reading, and Psycholinguistics.

    ERIC Educational Resources Information Center

    Massaro, Dominic W., Ed.

    In an information-processing approach to language processing, language processing is viewed as a sequence of psychological stages that occur between the initial presentation of the language stimulus and the meaning in the mind of the language processor. This book defines each of the processes and structures involved, explains how each of them…

  14. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention

    ERIC Educational Resources Information Center

    Medwetsky, Larry

    2011-01-01

    Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…

  15. Applicability of the Compensatory Encoding Model in Foreign Language Reading: An Investigation with Chinese College English Language Learners

    PubMed Central

    Han, Feifei

    2017-01-01

    While some first language (L1) reading models suggest that inefficient word recognition and small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and small working memory do not normally hinder reading comprehension, because readers can deploy metacognitive strategies to compensate for inefficient word recognition and limited working memory as long as they can process a reading task without time constraint. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, the model has not been tested in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one tested whether time constraint affects the relationship between word recognition, working memory, and reading comprehension. Students completed a computerized English word recognition test, a computerized operation span task, and reading comprehension tasks under time-constrained and non-time-constrained conditions. Correlation and regression analyses showed that the associations between word recognition, working memory, and reading comprehension were much stronger under the time-constrained condition than under the non-time-constrained condition. Study two examined whether FL readers were able to use metacognitive reading strategies to compensate for inefficient word recognition and limited working memory in non-time-constrained reading. The participants completed the same computerized English word recognition test and operation span test, thought aloud while reading, and answered comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified into language-oriented strategies, content-oriented strategies, re-reading, pausing, and meta-comment. Correlation analyses showed that word recognition and working memory were significantly related only to the frequency of language-oriented strategies, re-reading, and pausing, not to reading comprehension. Viewed jointly, the results of the two studies, which complement each other, support the applicability of the Compensatory Encoding Model in FL reading with Chinese college ELLs. PMID:28522984

  16. Applicability of the Compensatory Encoding Model in Foreign Language Reading: An Investigation with Chinese College English Language Learners.

    PubMed

    Han, Feifei

    2017-01-01

    While some first language (L1) reading models suggest that inefficient word recognition and small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and small working memory do not normally hinder reading comprehension, because readers can deploy metacognitive strategies to compensate for inefficient word recognition and limited working memory as long as they can process a reading task without time constraint. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, the model has not been tested in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one tested whether time constraint affects the relationship between word recognition, working memory, and reading comprehension. Students completed a computerized English word recognition test, a computerized operation span task, and reading comprehension tasks under time-constrained and non-time-constrained conditions. Correlation and regression analyses showed that the associations between word recognition, working memory, and reading comprehension were much stronger under the time-constrained condition than under the non-time-constrained condition. Study two examined whether FL readers were able to use metacognitive reading strategies to compensate for inefficient word recognition and limited working memory in non-time-constrained reading. The participants completed the same computerized English word recognition test and operation span test, thought aloud while reading, and answered comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified into language-oriented strategies, content-oriented strategies, re-reading, pausing, and meta-comment. Correlation analyses showed that word recognition and working memory were significantly related only to the frequency of language-oriented strategies, re-reading, and pausing, not to reading comprehension. Viewed jointly, the results of the two studies, which complement each other, support the applicability of the Compensatory Encoding Model in FL reading with Chinese college ELLs.

  17. Extracting important information from Chinese Operation Notes with natural language processing methods.

    PubMed

    Wang, Hui; Zhang, Weide; Zeng, Qiang; Li, Zuofeng; Feng, Kaiyan; Liu, Lei

    2014-04-01

    Extracting information from unstructured clinical narratives is valuable for many clinical applications. Although Natural Language Processing (NLP) methods have been studied extensively for electronic medical records (EMRs), few studies have explored NLP for extracting information from Chinese clinical narratives. In this study, we report the development and evaluation of methods for extracting tumor-related information from operation notes of hepatic carcinomas written in Chinese. Using 86 operation notes manually annotated by physicians as the training set, we explored both rule-based and supervised machine-learning approaches. Evaluated on 29 unseen operation notes, our best approach yielded 69.6% precision, 58.3% recall, and a 63.5% F-score. Copyright © 2014 Elsevier Inc. All rights reserved.
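
    The F-score reported here is the standard harmonic mean of precision and recall; as a quick sanity check, the three figures in the abstract are mutually consistent:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

score = f1(0.696, 0.583)  # ~0.635, matching the reported 63.5% F-score
```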

  18. The Relationship between User Expertise and Structural Ontology Characteristics

    ERIC Educational Resources Information Center

    Waldstein, Ilya Michael

    2014-01-01

    Ontologies are commonly used to support application tasks such as natural language processing, knowledge management, learning, browsing, and search. Literature recommends considering specific context during ontology design, and highlights that a different context is responsible for problems in ontology reuse. However, there is still no clear…

  19. Artificial Intelligence in ADA: Pattern-Directed Processing. Final Report.

    ERIC Educational Resources Information Center

    Reeker, Larry H.; And Others

    To demonstrate to computer programmers that the programming language Ada provides superior facilities for use in artificial intelligence applications, the three papers included in this report investigate the capabilities that exist within Ada for "pattern-directed" programming. The first paper (Larry H. Reeker, Tulane University) is…

  20. Null Element Restoration

    ERIC Educational Resources Information Center

    Gabbard, Ryan

    2010-01-01

    Understanding the syntactic structure of a sentence is a necessary preliminary to understanding its semantics and therefore for many practical applications. The field of natural language processing has achieved a high degree of accuracy in parsing, at least in English. However, the syntactic structures produced by the most commonly used parsers…

  1. 40 CFR 69.52 - Non-motor vehicle diesel fuel.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... heating oil shall include the language specified in 40 CFR 80.590(a) applicable to undyed diesel fuel for the appropriate sulfur level, and the following additional language as applicable: (1) For exempt NRLM... motor vehicle diesel fuel shall contain the following language for the applicable sulfur level and time...

  2. Engaging Language Learners through Technology Integration: Theory, Applications, and Outcomes

    ERIC Educational Resources Information Center

    Li, Shuai, Ed.; Swanson, Peter, Ed.

    2014-01-01

    Web 2.0 technologies, open source software platforms, and mobile applications have transformed teaching and learning of second and foreign languages. Language teaching has transitioned from a teacher-centered approach to a student-centered approach through the use of Computer-Assisted Language Learning (CALL) and new teaching approaches.…

  3. 77 FR 65927 - 60-Day Notice of Proposed Information Collection: Office of Language Services Contractor...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-31

    ... of Language Services Contractor Application Form ACTION: Notice of request for public comment... Collection: Office of Language Services Contractor Application Form. OMB Control Number: 1405-0191. Type of... become contractors for the U.S. Department of State, Office of Language Services, the information...

  4. The politics of psycholinguistics.

    PubMed

    Cohen-Cole, Jamie

    2015-01-01

    This article narrates the history of the interdisciplinary field of psycholinguistics from its modern organization in the 1950s to its application and influence in the field of reading instruction. Beginning as a combination of structural linguistics, behaviorist psychology, and information theory, the field was revolutionized by the collaboration of the psychologist George Miller and the linguist Noam Chomsky. This transformation was, at root, the adoption of the view that humans should be best understood as creative users of language and the rejection of behaviorist or machine models. Under their influence the field came to treat humans as creative, nonmechanical learners and users of language who, like scientists, hypothesize in order to understand and even perceive the world. This vision of language as a nondeterministic process shaped the field of reading instruction by providing the central model to advocates of the whole-language pedagogical method. © 2014 Wiley Periodicals, Inc.

  5. Plan recognition and generalization in command languages with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Yared, Wael I.; Sheridan, Thomas B.

    1991-01-01

    A method for pragmatic inference as a necessary accompaniment to command languages is proposed. The approach taken focuses on the modeling and recognition of the human operator's intent, which relates sequences of domain actions ('plans') to changes in some model of the task environment. The salient feature of this module is that it captures some of the physical and linguistic contextual aspects of an instruction. This provides a basis for generalization and reinterpretation of the instruction in different task environments. The theoretical development is founded on previous work in computational linguistics and some recent models in the theory of action and intention. To illustrate these ideas, an experimental command language to a telerobot is implemented. The program consists of three different components: a robot graphic simulation, the command language itself, and the domain-independent pragmatic inference module. Examples of task instruction processes are provided to demonstrate the benefits of this approach.

  6. The Bilingual Language Interaction Network for Comprehension of Speech*

    PubMed Central

    Marian, Viorica

    2013-01-01

    During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602
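
    The abstract does not specify the self-organizing maps BLINCS builds on, but the core SOM update they rely on is standard: find the best-matching unit for an input, then pull every unit toward that input, weighted by a neighborhood function centered on the winner. The 1-D map and values below are illustrative only, not the BLINCS architecture.

```python
import math

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One update of a 1-D self-organizing map: locate the best-matching
    unit (BMU), then move each unit toward input x, scaled by a Gaussian
    neighborhood centered on the BMU."""
    bmu = min(range(len(weights)),
              key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
    for i in range(len(weights)):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
        weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return bmu

units = [[0.0, 0.0], [0.9, 0.9]]
winner = som_step(units, [1.0, 1.0])  # unit 1 wins; its neighbor also moves
```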

  7. A verification strategy for web services composition using enhanced stacked automata model.

    PubMed

    Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali

    2015-01-01

    Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique for its implementation is web services. An individual service offered by a service provider may represent only limited business functionality; however, by composing individual services from different providers, a composite service describing the entire business process of an enterprise can be built. Many standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides an Extensible Markup Language (XML) specification language for defining and implementing business process workflows for web services. The main problem with most realistic approaches to service composition is the verification of the composed services, which must rely on formal verification methods to ensure their correctness. A few research works in the literature have addressed verification of web services for deterministic systems, but the existing models do not address verification properties such as dead transitions, deadlock, reachability, and safety. In this paper, a new model for verifying composed web services, the Enhanced Stacked Automata Model (ESAM), is proposed. The correctness properties of the non-deterministic system are evaluated with respect to dead transitions, deadlock, safety, liveness, and reachability. Web services are first composed using the Business Process Execution Language for Web Services (BPEL4WS), converted into an ESAM (a combination of a Muller Automaton (MA) and a Push Down Automaton (PDA)), and then transformed into Promela, the input language of the Simple Promela Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results show better performance in finding dead transitions and deadlocks than the existing models.
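
    The properties listed in this record can be illustrated on a plain finite transition system (setting aside ESAM's stack machinery): reachability is a graph search, a deadlock is a reachable non-final state with no outgoing transition, and a dead transition is one that can never fire. The toy state and label names below are invented for illustration.

```python
from collections import deque

def analyze(transitions, start, finals):
    """transitions: state -> list of (label, next_state) pairs.
    Returns (reachable states, deadlocked states, dead transition labels)."""
    reachable, fired = set(), set()
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s in reachable:
            continue
        reachable.add(s)
        for label, t in transitions.get(s, []):
            fired.add(label)       # this transition can fire
            queue.append(t)
    # Deadlock: reachable, not final, no way out.
    deadlocks = {s for s in reachable
                 if not transitions.get(s) and s not in finals}
    all_labels = {l for succs in transitions.values() for l, _ in succs}
    return reachable, deadlocks, all_labels - fired

sys = {"s0": [("invoke", "s1")],
       "s1": [("reply", "s2"), ("fault", "stuck")],
       "stuck": [], "s2": [],
       "s3": [("orphan", "s0")]}          # s3 is unreachable
reachable, deadlocks, dead = analyze(sys, "s0", finals={"s2"})
```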

  8. Towards programming languages for genetic engineering of living cells

    PubMed Central

    Pedersen, Michael; Phillips, Andrew

    2009-01-01

    Synthetic biology aims at producing novel biological systems to carry out some desired and well-defined functions. An ultimate dream is to design these systems at a high level of abstraction using engineering-based tools and programming languages, press a button, and have the design translated to DNA sequences that can be synthesized and put to work in living cells. We introduce such a programming language, which allows logical interactions between potentially undetermined proteins and genes to be expressed in a modular manner. Programs can be translated by a compiler into sequences of standard biological parts, a process that relies on logic programming and prototype databases that contain known biological parts and protein interactions. Programs can also be translated to reactions, allowing simulations to be carried out. While current limitations on available data prevent full use of the language in practical applications, the language can be used to develop formal models of synthetic systems, which are otherwise often presented by informal notations. The language can also serve as a concrete proposal on which future language designs can be discussed, and can help to guide the emerging standard of biological parts which so far has focused on biological, rather than logical, properties of parts. PMID:19369220

  9. Towards programming languages for genetic engineering of living cells.

    PubMed

    Pedersen, Michael; Phillips, Andrew

    2009-08-06

    Synthetic biology aims at producing novel biological systems to carry out some desired and well-defined functions. An ultimate dream is to design these systems at a high level of abstraction using engineering-based tools and programming languages, press a button, and have the design translated to DNA sequences that can be synthesized and put to work in living cells. We introduce such a programming language, which allows logical interactions between potentially undetermined proteins and genes to be expressed in a modular manner. Programs can be translated by a compiler into sequences of standard biological parts, a process that relies on logic programming and prototype databases that contain known biological parts and protein interactions. Programs can also be translated to reactions, allowing simulations to be carried out. While current limitations on available data prevent full use of the language in practical applications, the language can be used to develop formal models of synthetic systems, which are otherwise often presented by informal notations. The language can also serve as a concrete proposal on which future language designs can be discussed, and can help to guide the emerging standard of biological parts which so far has focused on biological, rather than logical, properties of parts.

  10. Employing mobile technology to improve language skills of young students with language-based disabilities.

    PubMed

    Rodríguez, Cathi Draper; Cumming, Therese M

    2017-01-01

    This exploratory study investigated the effects of a language building iPad application on the language skills (i.e., receptive vocabulary, expressive vocabulary, and sentence formation) of young students with language-based disabilities. The study utilized a pre-test-post-test control group design. Students in the treatment group used the iPad language building application, Language Builder, for 30 minutes a day. Participants were 31 first-grade to third-grade students with identified language-based disabilities. Students were assigned to two groups for the 8-week intervention. Data indicated that students in the treatment group made significantly greater gains in the area of sentence formation than the control group. Results revealed no significant difference between the two groups in the areas of expressive and receptive vocabulary. A short intervention of using Language Builder via the iPad may increase the sentence formation skills of young students with language delays. Additionally, discussion regarding the usefulness of iPad applications in education is presented.

  11. Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones.

    PubMed

    Cardin, Velia; Orfanidou, Eleni; Kästner, Lena; Rönnberg, Jerker; Woll, Bencie; Capek, Cheryl M; Rudner, Mary

    2016-01-01

    The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.

  12. Machine Learning and Radiology

    PubMed Central

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer-aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Developments in machine learning and radiology will benefit each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077

  13. Talking Cure Models: A Framework of Analysis

    PubMed Central

    Marx, Christopher; Benecke, Cord; Gumz, Antje

    2017-01-01

    Psychotherapy is commonly described as a “talking cure,” a treatment method that operates through linguistic action and interaction. The operative specifics of therapeutic language use, however, are insufficiently understood, mainly due to a multitude of disparate approaches that advance different notions of what “talking” means and what “cure” implies in the respective context. Accordingly, a clarification of the basic theoretical structure of “talking cure models,” i.e., models that describe therapeutic processes with a focus on language use, is a desideratum of language-oriented psychotherapy research. Against this background the present paper suggests a theoretical framework of analysis which distinguishes four basic components of “talking cure models”: (1) a foundational theory (which suggests how linguistic activity can affect and transform human experience), (2) an experiential problem state (which defines the problem or pathology of the patient), (3) a curative linguistic activity (which defines linguistic activities that are supposed to effectuate a curative transformation of the experiential problem state), and (4) a change mechanism (which defines the processes and effects involved in such transformations). The purpose of the framework is to establish a terminological foundation that allows for systematically reconstructing basic properties and operative mechanisms of “talking cure models.” To demonstrate the applicability and utility of the framework, five distinct “talking cure models” which spell out the details of curative “talking” processes in terms of (1) catharsis, (2) symbolization, (3) narrative, (4) metaphor, and (5) neurocognitive inhibition are introduced and discussed in terms of the framework components. 
In summary, we hope that our framework will prove useful for the objective of clarifying the theoretical underpinnings of language-oriented psychotherapy research and help to establish a more comprehensive understanding of how curative language use contributes to the process of therapeutic change. PMID:28955286

  14. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  15. "Language Is the Skin of My Thought": Integrating Wikipedia and AI to Support a Guillotine Player

    NASA Astrophysics Data System (ADS)

    Lops, Pasquale; Basile, Pierpaolo; de Gemmis, Marco; Semeraro, Giovanni

    This paper describes OTTHO (On the Tip of my THOught), a system designed for solving a language game, called Guillotine, which demands knowledge covering a broad range of topics, such as movies, politics, literature, history, proverbs, and popular culture. The rule of the game is simple: the player observes five words, generally unrelated to each other, and in one minute she has to provide a sixth word, semantically connected to the others. The system exploits several knowledge sources, such as a dictionary, a set of proverbs, and Wikipedia to realize a knowledge infusion process. The paper describes the process of modeling these sources and the reasoning mechanism to find the solution of the game. The main motivation for designing an artificial player for Guillotine is the challenge of providing the machine with the cultural and linguistic background knowledge which makes it similar to a human being, with the ability of interpreting natural language documents and reasoning on their content. Experiments carried out showed promising results. Our feeling is that the presented approach has a great potential for other more practical applications besides solving a language game.

  16. Declarative Business Process Modelling and the Generation of ERP Systems

    NASA Astrophysics Data System (ADS)

    Schultz-Møller, Nicholas Poul; Hølmer, Christian; Hansen, Michael R.

    We present an approach to the construction of Enterprise Resource Planning (ERP) Systems, which is based on the Resources, Events and Agents (REA) ontology. This framework deals with processes involving exchange and flow of resources in a declarative, graphically-based manner describing what the major entities are rather than how they engage in computations. We show how to develop a domain-specific language on the basis of REA, and a tool which automatically can generate running web-applications. A main contribution is a proof-of-concept showing that business-domain experts can generate their own applications without worrying about implementation details.

  17. Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.

    ERIC Educational Resources Information Center

    Conlon, Sumali Pin-Ngern; And Others

    1993-01-01

    Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)

  18. Representational Issues in Systemic Functional Grammar and Systemic Grammar and Functional Unification Grammar. ISI Reprint Series.

    ERIC Educational Resources Information Center

    Matthiessen, Christian; Kasper, Robert

    Consisting of two separate papers, "Representational Issues in Systemic Functional Grammar," by Christian Matthiessen and "Systemic Grammar and Functional Unification Grammar," by Robert Kasper, this document deals with systemic aspects of natural language processing and linguistic theory and with computational applications of…

  19. Simple Logic for Big Problems: An Inside Look at Relational Databases.

    ERIC Educational Resources Information Center

    Seba, Douglas B.; Smith, Pat

    1982-01-01

    Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…
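    The normalization idea summarized above can be illustrated with a toy relational schema (the table, journal, and column names here are invented for illustration, not taken from the cited article): repeated associations in a flat record are replaced by two tables linked through a key, and the flat view is recovered with a join.

```python
import sqlite3

# Minimal normalization sketch for hypothetical serials-control data.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Journal facts are stored exactly once in one relation...
cur.execute("CREATE TABLE journal (id INTEGER PRIMARY KEY, title TEXT, publisher TEXT)")
# ...and the repeating per-issue data in another, referencing the journal key.
cur.execute("CREATE TABLE issue (journal_id INTEGER REFERENCES journal(id), "
            "volume INTEGER, year INTEGER)")

cur.execute("INSERT INTO journal VALUES (1, 'Acta Exemplaria', 'Example Press')")
cur.executemany("INSERT INTO issue VALUES (?, ?, ?)",
                [(1, 10, 1981), (1, 11, 1982)])

# The original two-dimensional tabular view is recovered with a join.
rows = cur.execute(
    "SELECT j.title, i.volume, i.year "
    "FROM journal j JOIN issue i ON i.journal_id = j.id "
    "ORDER BY i.volume"
).fetchall()
print(rows)  # [('Acta Exemplaria', 10, 1981), ('Acta Exemplaria', 11, 1982)]
```

    Storing the journal title once, rather than once per issue, is exactly the removal of repeated associations that the abstract's definition of normalization describes.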

  20. Impairment: The Case of Phonotactic Probability and Nonword Repetition

    ERIC Educational Resources Information Center

    McKean, Cristina; Letts, Carolyn; Howard, David

    2013-01-01

    Purpose: In this study, the authors aimed to explore the relationship between lexical and phonological knowledge in children with primary language impairment (PLI) through the application of a developmental methodology. Specifically, they tested whether there is evidence for an impairment in the process of phonological abstraction in this group of…

  1. Overcoming the Grammar Deficit: The Role of Information Technology in Teaching German Grammar to Undergraduates.

    ERIC Educational Resources Information Center

    Hall, Christopher

    1998-01-01

    Examines how computer-assisted language learning (CALL) and information technology can be applied to overcome the "grammar deficit" seen in many British undergraduate students of German. A combination of explicit, implicit, and exploratory grammar teaching approaches uses diverse resources, including word processing packages,…

  2. 25 CFR 537.1 - Applications for approval.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... designated by the Commission. (b) For each natural person identified in paragraph (a) of this section, the..., citizenship, and gender; (ii) A current photograph, driver's license number, and a list of all languages... voluntary. However, failure to supply a SSN may result in errors in processing the information provided. (5...

  3. 25 CFR 537.1 - Applications for approval.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... designated by the Commission. (b) For each natural person identified in paragraph (a) of this section, the..., citizenship, and gender; (ii) A current photograph, driver's license number, and a list of all languages... voluntary. However, failure to supply a SSN may result in errors in processing the information provided. (5...

  4. 25 CFR 537.1 - Applications for approval.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... designated by the Commission. (b) For each natural person identified in paragraph (a) of this section, the..., citizenship, and gender; (ii) A current photograph, driver's license number, and a list of all languages... voluntary. However, failure to supply a SSN may result in errors in processing the information provided. (5...

  5. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  6. 25 CFR 537.1 - Applications for approval.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... designated by the Commission. (b) For each natural person identified in paragraph (a) of this section, the..., citizenship, and gender; (ii) A current photograph, driver's license number, and a list of all languages... voluntary. However, failure to supply a SSN may result in errors in processing the information provided. (5...

  7. 25 CFR 537.1 - Applications for approval.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... designated by the Commission. (b) For each natural person identified in paragraph (a) of this section, the..., citizenship, and gender; (ii) A current photograph, driver's license number, and a list of all languages... voluntary. However, failure to supply a SSN may result in errors in processing the information provided. (5...

  8. Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal

    DTIC Science & Technology

    2013-09-30

    during the Empire Challenge 2008 and 2009 (EC08/09) field experiments and for numerous other field experiments of new technologies during Trident Warrior...Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-2000) (pp. 63–70). Retrieved from http://nlp.stanford.edu/manning

  9. "Intelligent" Computer Assisted Instruction (CAI) Applications. Interim Report.

    ERIC Educational Resources Information Center

    Brown, John Seely; And Others

    Interim work is documented describing efforts to modify computer techniques used to recognize and process English language requests to an instructional simulator. The conversion from a hand-coded to a table-driven technique is described in detail. Other modifications to a simulation-based computer assisted instruction program to allow a gaming…

  10. 78 FR 75580 - Information Collection Request; Submission for OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... information collection request to the Office of Management and Budget (OMB) for review and approval. The... individuals, including technical and language skills, and availability for Peace Corps service. The Peace... skills and interests. Due to this change in the way applicants are processed and an overall agency effort...

  11. Native-language N400 and P600 predict dissociable language-learning abilities in adults

    PubMed Central

    Qi, Zhenghan; Beach, Sara D.; Finn, Amy S.; Minas, Jennifer; Goetz, Calvin; Chan, Brian; Gabrieli, John D.E.

    2018-01-01

    Language learning aptitude during adulthood varies markedly across individuals. An individual’s native-language ability has been associated with success in learning a new language as an adult. However, little is known about how native-language processing affects learning success and what neural markers of native-language processing, if any, are related to success in learning. We therefore related variation in electrophysiology during native-language processing to success in learning a novel artificial language. Event-related potentials (ERPs) were recorded while native English speakers judged the acceptability of English sentences prior to learning an artificial language. There was a trend towards a double dissociation between native-language ERPs and their relationships to novel syntax and vocabulary learning. Individuals who exhibited a greater N400 effect when processing English semantics showed better future learning of the artificial language overall. The N400 effect was related to syntax learning via its specific relationship to vocabulary learning. In contrast, the P600 effect size when processing English syntax predicted future syntax learning but not vocabulary learning. These findings show that distinct neural signatures of native-language processing relate to dissociable abilities for learning novel semantic and syntactic information. PMID:27737775

  12. Hebrew Brain vs. English Brain: Language Modulates the Way It Is Processed

    ERIC Educational Resources Information Center

    Bick, Atira S.; Goelman, Gadi; Frost, Ram

    2011-01-01

    Is language processing universal? How do the specific properties of each language influence the way it is processed? In this study, we compare the neural correlates of morphological processing in Hebrew--a Semitic language with a rich and systematic morphology, to those revealed in English--an Indo-European language with a linear morphology. Using…

  13. Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications

    PubMed Central

    Masanz, James J; Ogren, Philip V; Zheng, Jiaping; Sohn, Sunghwan; Kipper-Schuler, Karin C; Chute, Christopher G

    2010-01-01

    We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at http://www.ohnlp.org. The cTAKES builds on existing open-source technologies—the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text. PMID:20819853

  14. Process Management and Exception Handling in Multiprocessor Operating Systems Using Object-Oriented Design Techniques. Revised Sep. 1988

    NASA Technical Reports Server (NTRS)

    Russo, Vincent; Johnston, Gary; Campbell, Roy

    1988-01-01

    The programming of the interrupt handling mechanisms, process switching primitives, scheduling mechanism, and synchronization primitives of an operating system for a multiprocessor requires both efficient code, in order to support the needs of high-performance or real-time applications, and careful organization, to facilitate maintenance. Although many advantages have been claimed for object-oriented class hierarchical languages and their corresponding design methodologies, the application of these techniques to the design of the primitives within an operating system has not been widely demonstrated. To investigate the role of class hierarchical design in systems programming, the authors have constructed the Choices multiprocessor operating system architecture using the C++ programming language. During the implementation, it was found that many operating system design concerns can be represented advantageously using a class hierarchical approach, including: the separation of mechanism and policy; the organization of an operating system into layers, each of which represents an abstract machine; and the notions of process and exception management. In this paper, we discuss an implementation of the low-level primitives of this system and outline the strategy by which we developed our solution.

  15. Language and vertical space: on the automaticity of language action interconnections.

    PubMed

    Dudschig, Carolin; de la Vega, Irmgard; De Filippis, Monica; Kaup, Barbara

    2014-09-01

    Grounded models of language processing propose a strong connection between language and sensorimotor processes (Barsalou, 1999, 2008; Glenberg & Kaschak, 2002). However, it remains unclear how functional and automatic these connections are for understanding diverse sets of words (Ansorge, Kiefer, Khalid, Grassl, & König, 2010). Here, we investigate whether words referring to entities with a typical location in the upper or lower visual field (e.g., sun, ground) automatically influence subsequent motor responses even when language-processing levels are kept minimal. The results show that even subliminally presented words influence subsequent actions, as can be seen in a reversed compatibility effect. These findings have several implications for grounded language processing models. Specifically, these results suggest that language-action interconnections are not only the result of strategic language processes, but already play an important role during pre-attentional language processing stages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. XML — an opportunity for data standards in the geosciences

    NASA Astrophysics Data System (ADS)

    Houlding, Simon W.

    2001-08-01

    Extensible markup language (XML) is a recently introduced meta-language standard on the Web. It provides the rules for development of metadata (markup) standards for information transfer in specific fields. XML allows development of markup languages that describe what information is rather than how it should be presented. This allows computer applications to process the information in intelligent ways. In contrast, hypertext markup language (HTML), which fuelled the initial growth of the Web, is a metadata standard concerned exclusively with presentation of information. Besides its potential for revolutionizing Web activities, XML provides an opportunity for development of meaningful data standards in specific application fields. The rapid endorsement of XML by science, industry and e-commerce has already spawned new metadata standards in such fields as mathematics, chemistry, astronomy, multi-media and Web micro-payments. Development of XML-based data standards in the geosciences would significantly reduce the effort currently wasted on manipulating and reformatting data between different computer platforms and applications and would ensure compatibility with the new generation of Web browsers. This paper explores the evolution, benefits and status of XML and related standards in the more general context of Web activities and uses this as a platform for discussion of its potential for development of data standards in the geosciences. Some of the advantages of XML are illustrated by a simple, browser-compatible demonstration of XML functionality applied to a borehole log dataset. The XML dataset and the associated stylesheet and schema declarations are available for FTP download.
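    The self-describing character of XML that the abstract emphasizes can be sketched with a toy borehole-log fragment (the element and attribute names below are invented for illustration; the paper's actual dataset and schema are not reproduced here). Because the markup says what each value *is*, an application can extract the data by name rather than by screen position:

```python
import xml.etree.ElementTree as ET

# Hypothetical borehole-log markup: depths and lithology are named,
# not formatted for display.
doc = """
<boreholeLog id="BH-01">
  <interval top="0.0" base="2.5" lithology="clay"/>
  <interval top="2.5" base="7.0" lithology="sand"/>
</boreholeLog>
"""

root = ET.fromstring(doc)
# Extracting the log reduces to reading named attributes.
units = [(float(i.get("top")), float(i.get("base")), i.get("lithology"))
         for i in root.iter("interval")]
print(units)  # [(0.0, 2.5, 'clay'), (2.5, 7.0, 'sand')]
```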

  17. Sequence and batch language programs and alarm-related "C" programs for the 242-A MCS. Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger, J.F.

    1995-03-01

    A Distributive Process Control system was purchased by Project B-534, "242-A Evaporator/Crystallizer Upgrades". This control system, called the Monitor and Control System (MCS), was installed in the 242-A Evaporator located in the 200 East Area. The purpose of the MCS is to monitor and control the Evaporator and monitor a number of alarms and other signals from various Tank Farm facilities. Applications software for the MCS was developed by the Waste Treatment Systems Engineering (WTSE) group of Westinghouse. The standard displays and alarm scheme provide for control and monitoring, but do not directly indicate the signal location or depict the overall process. To do this, WTSE developed a second alarm scheme which uses special programs, annunciator keys, and process graphics. The special programs are written in two languages: Sequence and Batch Language (SABL), and the "C" language. The WTSE-developed alarm scheme works as described below: SABL relates signals and alarms to the annunciator keys, called SKID keys. When an alarm occurs, a SABL program causes a SKID key to flash, and if the alarm is of yellow or white priority then a "C" program turns on an audible horn (the D/3 system uses a different audible horn for the red priority alarms). The horn and flashing key draw the attention of the operator.

  18. Real-Time Language Processing in School-Age Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Montgomery, James W.

    2006-01-01

    Background: School-age children with specific language impairment (SLI) exhibit slower real-time (i.e. immediate) language processing relative to same-age peers and younger, language-matched peers. Results of the few studies that have been done seem to indicate that the slower language processing of children with SLI is due to inefficient…

  19. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as those on EO-1, have limited computing power and extremely limited active storage capability and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
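    One reason a linear-kernel SVM maps well onto FPGA hardware is that its inference step reduces to a dot product plus a bias, followed by a sign test. A minimal sketch of that decision function (the weights, bias, band values, and class labels below are illustrative assumptions, not the EO-1 model):

```python
# Linear-kernel SVM decision: score = w . x + b, classify by sign.
def svm_decide(x, w, b):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "snow" if score > 0 else "water"

w = [0.8, -0.5, 0.3]     # hypothetical per-band weights learned offline
b = -0.1                 # hypothetical bias term
pixel = [0.9, 0.2, 0.4]  # hypothetical band reflectances for one pixel

# score = 0.8*0.9 - 0.5*0.2 + 0.3*0.4 - 0.1 = 0.64 > 0
print(svm_decide(pixel, w, b))  # snow
```

    The multiply-accumulate structure of the score is what an FPGA can pipeline and parallelize far more aggressively than a general-purpose onboard processor.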

  20. A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories

    NASA Astrophysics Data System (ADS)

    Brown, Christa L.

    National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.

  1. Usability of English Note-Taking Applications in a Foreign Language Learning Context

    ERIC Educational Resources Information Center

    Roy, Debopriyo; Brine, John; Murasawa, Fuyuki

    2016-01-01

    The act of note-taking offloads cognitive pressure and note-taking applications could be used as an important tool for foreign language acquisition. Its use, importance, and efficacy in a foreign language learning context could be justifiably debated. However, existing computer-assisted language learning literature is almost silent on the topic.…

  2. Comparison of the recovery patterns of language and cognitive functions in patients with post-traumatic language processing deficits and in patients with aphasia following a stroke.

    PubMed

    Vukovic, Mile; Vuksanovic, Jasmina; Vukovic, Irena

    2008-01-01

    In this study we investigated the recovery patterns of language and cognitive functions in patients with post-traumatic language processing deficits and in patients with aphasia following a stroke. The correlation of specific language functions and cognitive functions was analyzed in the acute phase and 6 months later. Significant recovery of the tested functions was observed in both groups. However, in patients with post-traumatic language processing deficits the degree of recovery of most language functions and some cognitive functions was higher. A significantly greater correlation was revealed within language and cognitive functions, as well as between language functions and other aspects of cognition in patients with post-traumatic language processing deficits than in patients with aphasia following a stroke. Our results show that patients with post-traumatic language processing deficits have a different recovery pattern and a different pattern of correlation between language and cognitive functions compared to patients with aphasia following a stroke. (1) Better understanding of the differences in recovery of language and cognitive functions in patients who have suffered strokes and those who have experienced traumatic brain injury. (2) Better understanding of the relationship between language and cognitive functions in patients with post-traumatic language processing deficits and in patients with aphasia following a stroke. (3) Better understanding of the factors influencing recovery.

  3. Computer Music

    NASA Astrophysics Data System (ADS)

    Cook, Perry

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.). Although most people would think that analog synthesizers and electronic music substantially predate the use of computers in music, many experiments and complete computer music systems were being constructed and used as early as the 1950s.

  4. An Application of Epidemiological Modeling to Information Diffusion

    NASA Astrophysics Data System (ADS)

    McCormack, Robert; Salter, William

    Messages often spread within a population through unofficial - particularly web-based - media. Such ideas have been termed "memes." To impede the flow of terrorist messages and to promote counter messages within a population, intelligence analysts must understand how messages spread. We used statistical language processing technologies to operationalize "memes" as latent topics in electronic text and applied epidemiological techniques to describe and analyze patterns of message propagation. We developed our methods and applied them to English-language newspapers and blogs in the Arab world. We found that a relatively simple epidemiological model can reproduce some dynamics of observed empirical relationships.
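    Epidemiological treatment of message spread can be sketched with a standard discrete-time SIR model. The abstract does not specify the authors' actual model, so the compartments and rates below are illustrative assumptions only: "susceptible" people have not seen the message, "infected" people are repeating it, and "recovered" people have stopped.

```python
# Discrete-time SIR sketch of meme diffusion (illustrative parameters).
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One day of spread: S -> I on contact, I -> R on losing interest."""
    new_infections = beta * s * i  # fraction newly exposed to the message
    new_recoveries = gamma * i     # fraction that stops repeating it
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

s, i, r = 0.99, 0.01, 0.0  # population fractions at the message's release
peak = 0.0
for _ in range(200):       # simulate 200 days
    s, i, r = sir_step(s, i, r)
    peak = max(peak, i)

# The message rises, peaks, and burns out; r approaches the final reach.
print(round(peak, 3), round(r, 3))
```

    Fitting such curves to topic frequencies in newspapers and blogs is the kind of analysis the abstract describes; the functional form, not the parameter values, is the point here.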

  5. An introduction to scripting in Ruby for biologists.

    PubMed

    Aerts, Jan; Law, Andy

    2009-07-16

    The Ruby programming language has a lot to offer to any scientist with electronic data to process. Not only is the initial learning curve very shallow, but its reflection and meta-programming capabilities allow for the rapid creation of relatively complex applications while still keeping the code short and readable. This paper provides a gentle introduction to this scripting language for researchers without formal informatics training such as many wet-lab scientists. We hope this will provide such researchers an idea of how powerful a tool Ruby can be for their data management tasks and encourage them to learn more about it.

  6. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-01

    The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize the progression from modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right-hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing, with activation in the inferior parietal lobule (i.e., angular gyrus), was observed even for late learners. As such, the present study is the first to track L2 sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Modeling languages for biochemical network simulation: reaction vs equation based approaches.

    PubMed

    Wiechert, Wolfgang; Noack, Stephan; Elsheikh, Atya

    2010-01-01

    Biochemical network modeling and simulation is an essential task in any systems biology project. The systems biology markup language (SBML) was established as a standardized model exchange language for mechanistic models. A specific strength of SBML is that numerous tools for formulating, processing, simulating, and analyzing models are freely available. Interestingly, in the field of multidisciplinary simulation, the problem of model exchange between different simulation tools arose much earlier. Several general modeling languages, like Modelica, were developed in the 1990s. Modelica enables an equation-based, modular specification of arbitrary hierarchical differential algebraic equation models. Moreover, libraries for special application domains can be rapidly developed. This contribution compares the reaction-based approach of SBML with the equation-based approach of Modelica and explains the specific strengths of both tools. Several biological examples illustrating essential SBML and Modelica concepts are given. The chosen criteria for tool comparison are flexibility of constraint specification, different modeling flavors, and hierarchical, modular and multidisciplinary modeling. Additionally, support for spatially distributed systems, event handling and network analysis features is discussed. As a major result, it is shown that the choice of modeling tool has a strong impact on the expressivity of the specified models but also strongly depends on the requirements of the application context.
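    The reaction-based vs. equation-based contrast can be sketched in a few lines: a declared reaction (the SBML style) can be mechanically translated into the differential equations it implies (the Modelica style). The single reaction S -> P with mass-action rate constant k below is an illustrative toy, not a model from the chapter:

```python
# Reaction-based declaration: (reactants, products, rate constant).
# One reaction S -> P with mass-action kinetics, k = 0.5 (illustrative).
reactions = [({"S": 1}, {"P": 1}, 0.5)]

def derivatives(conc):
    """Translate the reaction list into the ODE right-hand sides:
    here dS/dt = -k*S and dP/dt = +k*S."""
    d = {species: 0.0 for species in conc}
    for reactants, products, k in reactions:
        rate = k
        for s, n in reactants.items():
            rate *= conc[s] ** n        # mass-action law
        for s, n in reactants.items():
            d[s] -= n * rate
        for s, n in products.items():
            d[s] += n * rate
    return d

# Forward-Euler integration of the generated equations to t = 10.
conc = {"S": 1.0, "P": 0.0}
dt = 0.01
for _ in range(1000):
    d = derivatives(conc)
    conc = {s: c + dt * d[s] for s, c in conc.items()}
print(round(conc["S"], 3), round(conc["P"], 3))
```

    The translation step in `derivatives` is what SBML-aware simulators do internally; an equation-based language like Modelica lets the modeler write the resulting ODEs directly.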

  8. Implicit Schemata and Categories in Memory-Based Language Processing

    ERIC Educational Resources Information Center

    van den Bosch, Antal; Daelemans, Walter

    2013-01-01

    Memory-based language processing (MBLP) is an approach to language processing based on exemplar storage during learning and analogical reasoning during processing. From a cognitive perspective, the approach is attractive as a model for human language processing because it does not make any assumptions about the way abstractions are shaped, nor any…

  9. Analysis of matters associated with the transferring of object-oriented applications to platform .Net using C# programming language

    NASA Astrophysics Data System (ADS)

    Sarsimbayeva, S. M.; Kospanova, K. K.

    2015-11-01

    The article discusses problems associated with porting object-oriented Windows applications written in C++ to the .Net platform using the C# programming language. C++ has long been considered one of the best languages for software development, but the subtle mistakes the language permits can lead to memory leaks and other errors. The .Net platform and the C# language, made by Microsoft, are solutions to the issues mentioned above. Industry still depends heavily on applications developed in C++, but the new language, with its stability and portability to .Net, brings many advantages. An example is presented using applications that simulate the work of queuing systems: the authors solved the problem of porting an application simulating seaport operations from C++ to the .Net platform using C# in Visual Studio.

  10. Intelligent Text Retrieval and Knowledge Acquisition from Texts for NASA Applications: Preprocessing Issues

    NASA Technical Reports Server (NTRS)

    2001-01-01

    In this contract, which is a component of a larger contract that we plan to submit in the coming months, we plan to study the preprocessing issues that arise in applying natural language processing techniques to NASA-KSC problem reports. The goals of this work are to deal with the issues of: a) automatically obtaining the problem reports from NASA-KSC databases, b) the format of these reports, and c) the conversion of these reports to a format that will be adequate for our natural language software. At the end of this contract, we expect that these problems will be solved and that we will be ready to apply our natural language software to a text database of over 1000 KSC problem reports.

  11. Semantic message oriented middleware for publish/subscribe networks

    NASA Astrophysics Data System (ADS)

    Li, Han; Jiang, Guofei

    2004-09-01

    The publish/subscribe paradigm of Message Oriented Middleware provides a loosely coupled communication model between distributed applications. Traditional publish/subscribe middleware uses keywords to match advertisements and subscriptions and does not support deep semantic matching. To this end, we designed and implemented a Semantic Message Oriented Middleware system to provide such capabilities for semantic description and matching. We adopted the DARPA Agent Markup Language and Ontology Inference Layer, a formal knowledge representation language for expressing sophisticated classifications and enabling automated inference, as the topic description language in our middleware system. A simple description logic inference system was implemented to handle the matching process between the subscriptions of subscribers and the advertisements of publishers. Moreover, our middleware system also has a security architecture to support secure communication and user privilege control.
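    The gap between keyword matching and the deeper semantic matching the abstract describes can be sketched with a toy subclass hierarchy standing in for the middleware's ontology-based descriptions (all topic names below are invented):

```python
# Toy ontology: each topic's direct superclass, standing in for a
# formal class hierarchy in an ontology language.
subclass_of = {"sedan": "car", "car": "vehicle"}

def is_a(topic, ancestor):
    """Walk the subclass chain upward, a trivial subsumption check."""
    while topic is not None:
        if topic == ancestor:
            return True
        topic = subclass_of.get(topic)
    return False

def match(advertisement, subscription, semantic=True):
    if semantic:
        # Semantic matching: an ad for 'sedan' satisfies a
        # subscription to 'vehicle' via class subsumption.
        return is_a(advertisement, subscription)
    # Keyword matching: only an exact string match succeeds.
    return advertisement == subscription

print(match("sedan", "vehicle", semantic=False))  # False
print(match("sedan", "vehicle", semantic=True))   # True
```

    A real description-logic reasoner handles far richer class expressions than this linear chain, but the subsumption test is the same idea.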

  12. An analysis of the Petri net based model of the human body iron homeostasis process.

    PubMed

    Sackmann, Andrea; Formanowicz, Dorota; Formanowicz, Piotr; Koch, Ina; Blazewicz, Jacek

    2007-02-01

    In this paper, a Petri net based model of human body iron homeostasis is presented and analyzed. Body iron homeostasis is an important but not fully understood complex process. The model of the process presented in the paper is expressed in the language of Petri net theory. Applying this theory to the description of biological processes allows for very precise analysis of the resulting models. Here, such an analysis of the body iron homeostasis model from a mathematical point of view is given.
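
    The core Petri net formalism used in such models reduces to a simple firing rule: a transition is enabled when every input place holds a token, and firing moves tokens from inputs to outputs. The two-transition net below is only an illustrative sketch, not the paper's iron-homeostasis model:

```python
# Minimal place/transition Petri net. Place and transition names are
# invented for illustration.
transitions = {
    "absorb": {"in": ["gut_iron"], "out": ["plasma_iron"]},
    "store":  {"in": ["plasma_iron"], "out": ["ferritin_iron"]},
}
marking = {"gut_iron": 1, "plasma_iron": 0, "ferritin_iron": 0}

def enabled(t):
    # A transition is enabled iff every input place holds a token.
    return all(marking[p] >= 1 for p in transitions[t]["in"])

def fire(t):
    assert enabled(t), f"transition {t} is not enabled"
    for p in transitions[t]["in"]:
        marking[p] -= 1
    for p in transitions[t]["out"]:
        marking[p] += 1

fire("absorb")
fire("store")
print(marking)  # {'gut_iron': 0, 'plasma_iron': 0, 'ferritin_iron': 1}
```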

  13. Processing of speech signals for physical and sensory disabilities.

    PubMed Central

    Levitt, H

    1995-01-01

    Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities. PMID:7479816

  14. CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines.

    PubMed

    Soysal, Ergin; Wang, Jingqi; Jiang, Min; Wu, Yonghui; Pakhomov, Serguei; Liu, Hongfang; Xu, Hua

    2017-11-24

    Existing general clinical natural language processing (NLP) systems such as MetaMap and Clinical Text Analysis and Knowledge Extraction System have been successfully applied to information extraction from clinical text. However, end users often have to customize existing systems for their individual tasks, which can require substantial NLP skills. Here we present CLAMP (Clinical Language Annotation, Modeling, and Processing), a newly developed clinical NLP toolkit that provides not only state-of-the-art NLP components, but also a user-friendly graphic user interface that can help users quickly build customized NLP pipelines for their individual applications. Our evaluation shows that the CLAMP default pipeline achieved good performance on named entity recognition and concept encoding. We also demonstrate the efficiency of the CLAMP graphic user interface in building customized, high-performance NLP pipelines with 2 use cases, extracting smoking status and lab test values. CLAMP is publicly available for research use, and we believe it is a unique asset for the clinical NLP community. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
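
    As a generic illustration of the pipeline idea behind one of the evaluated use cases (extracting smoking status from clinical text), here is a toy rule-based extractor. The patterns and status labels are invented; this is not the CLAMP API, only a sketch of the kind of component such a pipeline assembles:

```python
import re

# Hypothetical patterns mapping note text to a smoking-status label.
SMOKING_PATTERNS = {
    "current smoker": re.compile(r"\b(current(ly)? smok\w+)", re.I),
    "former smoker":  re.compile(r"\b(former smoker|quit smoking)", re.I),
    "non-smoker":     re.compile(r"\b(never smok\w+|non-?smoker)", re.I),
}

def smoking_status(note: str) -> str:
    """Return the first matching smoking-status label, else 'unknown'."""
    for status, pattern in SMOKING_PATTERNS.items():
        if pattern.search(note):
            return status
    return "unknown"

print(smoking_status("Patient reports he quit smoking in 2010."))  # former smoker
```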

  15. Processing of Speech Signals for Physical and Sensory Disabilities

    NASA Astrophysics Data System (ADS)

    Levitt, Harry

    1995-10-01

    Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities.

  16. A Proposal of 3-dimensional Self-organizing Memory and Its Application to Knowledge Extraction from Natural Language

    NASA Astrophysics Data System (ADS)

    Sakakibara, Kai; Hagiwara, Masafumi

    In this paper, we propose a 3-dimensional self-organizing memory and describe its application to knowledge extraction from natural language. First, the proposed system extracts relations between words using JUMAN (a morpheme analysis system) and KNP (a syntax analysis system) and stores them in short-term memory. In the short-term memory, the relations are attenuated as processing proceeds; however, relations with a high frequency of appearance are stored in the long-term memory without attenuation. The relations in the long-term memory are placed into the proposed 3-dimensional self-organizing memory. We used a new learning algorithm called "Potential Firing" in the learning phase. In the recall phase, the proposed system recalls relational knowledge from the learned knowledge based on the input sentence, using a new recall algorithm called "Waterfall Recall". We added a function to respond to questions in natural language with "yes/no" in order to confirm the validity of the proposed system by evaluating the number of correct answers.
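
    The short-term/long-term storage scheme described above (decay for rarely seen relations, promotion for frequent ones) can be sketched as follows. The decay rate, survival cutoff, and promotion threshold are illustrative assumptions, not values from the paper:

```python
DECAY = 0.5        # attenuation factor per processing step (assumed)
PROMOTE_AT = 3     # appearances needed for long-term storage (assumed)

short_term = {}    # relation -> (strength, frequency)
long_term = set()  # relations stored without attenuation

def observe(relation):
    """Refresh a relation's strength; promote it once seen often enough."""
    _, freq = short_term.get(relation, (0.0, 0))
    short_term[relation] = (1.0, freq + 1)
    if freq + 1 >= PROMOTE_AT:
        long_term.add(relation)

def step():
    """Attenuate everything in short-term memory; forget weak relations."""
    for rel, (strength, freq) in list(short_term.items()):
        strength *= DECAY
        if strength < 0.1:
            del short_term[rel]
        else:
            short_term[rel] = (strength, freq)

for _ in range(3):
    observe(("dog", "is-a", "animal"))
    step()

print(("dog", "is-a", "animal") in long_term)  # True
```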

  17. Harnessing QbD, Programming Languages, and Automation for Reproducible Biology.

    PubMed

    Sadowski, Michael I; Grant, Chris; Fell, Tim S

    2016-03-01

    Building robust manufacturing processes from biological components is a highly complex task that requires sophisticated tools to describe processes, inputs, and measurements and to administer the management of knowledge, data, and materials. We argue that for bioengineering to fully access biological potential, it will require the application of statistically designed experiments to derive detailed empirical models of underlying systems. This requires execution of large-scale structured experimentation, for which laboratory automation is necessary. This in turn requires development of expressive, high-level languages that allow reusability of protocols, characterization of their reliability, and a change in focus from implementation details to functional properties. We review recent developments in these areas and identify what we believe is an exciting trend that promises to revolutionize biotechnology. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Meaning matters: a clinician's/student's guide to general sign theory and its applicability in clinical settings.

    PubMed

    Oller, Stephen D

    2005-01-01

    The pragmatic mapping process and its variants have proven effective in second language learning and teaching. The goal of this paper is to show that the same process applies in teaching and intervention with disordered populations. A secondary goal, ultimately more important, is to give clinicians, teachers, and other educators a tool-kit, or a framework, from which they can evaluate and implement interventions. What is offered is an introduction to a general theory of signs and some examples of how it can be applied in treating communication disorders. (1) Readers will be able to relate the three theoretical consistency requirements to language teaching and intervention. (2) Readers will be introduced to a general theory of signs that provides a basis for evaluating and implementing interventions.

  19. Native-language N400 and P600 predict dissociable language-learning abilities in adults.

    PubMed

    Qi, Zhenghan; Beach, Sara D; Finn, Amy S; Minas, Jennifer; Goetz, Calvin; Chan, Brian; Gabrieli, John D E

    2017-04-01

    Language learning aptitude during adulthood varies markedly across individuals. An individual's native-language ability has been associated with success in learning a new language as an adult. However, little is known about how native-language processing affects learning success and what neural markers of native-language processing, if any, are related to success in learning. We therefore related variation in electrophysiology during native-language processing to success in learning a novel artificial language. Event-related potentials (ERPs) were recorded while native English speakers judged the acceptability of English sentences prior to learning an artificial language. There was a trend towards a double dissociation between native-language ERPs and their relationships to novel syntax and vocabulary learning. Individuals who exhibited a greater N400 effect when processing English semantics showed better future learning of the artificial language overall. The N400 effect was related to syntax learning via its specific relationship to vocabulary learning. In contrast, the P600 effect size when processing English syntax predicted future syntax learning but not vocabulary learning. These findings show that distinct neural signatures of native-language processing relate to dissociable abilities for learning novel semantic and syntactic information. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Automatic classification of written descriptions by healthy adults: An overview of the application of natural language processing and machine learning techniques to clinical discourse analysis

    PubMed Central

    Toledo, Cíntia Matsuda; Cunha, Andre; Scarton, Carolina; Aluísio, Sandra

    2014-01-01

    Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. Objective The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. Methods The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two Natural Language Processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually, besides description time; type of scene described – simple or complex; presentation order – which type of picture was described first; and age). In this study, the descriptions by 144 of the subjects studied in Toledo18 were used, which included 200 healthy Brazilians of both genders. Results and Conclusion A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods. PMID:29213908
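
    The RBF kernel at the heart of the recommended classifier is simply κ(x, y) = exp(−γ‖x − y‖²). The sketch below computes that kernel and uses it in a toy kernel-similarity classifier; this is not a full SVM (which would also learn support-vector weights via quadratic optimization), and the feature vectors and class labels are invented:

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel: exp(-gamma * squared Euclidean distance)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def classify(x, labelled_points, gamma=1.0):
    """Assign x to the class whose examples are most RBF-similar on average."""
    scores = {
        label: sum(rbf(x, p, gamma) for p in pts) / len(pts)
        for label, pts in labelled_points.items()
    }
    return max(scores, key=scores.get)

train = {
    "low_education":  [(2.0, 0.1), (3.0, 0.2)],   # hypothetical feature vectors
    "high_education": [(8.0, 0.9), (9.0, 0.8)],
}
print(classify((8.5, 0.85), train))  # high_education
```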

  1. When language comprehension goes wrong for the right reasons: Good-enough, underspecified, or shallow language processing.

    PubMed

    Christianson, Kiel

    2016-01-01

    This paper contains an overview of language processing that can be described as "good enough", "underspecified", or "shallow". The central idea is that a nontrivial proportion of misunderstanding, misinterpretation, and miscommunication can be attributed not to random error, but instead to processing preferences of the human language processing system. In other words, the very architecture of the language processor favours certain types of processing errors because in a majority of instances, this "fast and frugal", less effortful processing is good enough to support communication. By way of historical background, connections are made between this relatively recent facet of psycholinguistic study, other recent language processing models, and related concepts in other areas of cognitive science. Finally, the nine papers included in this special issue are introduced as representative of novel explorations of good-enough, or underspecified, language processing.

  2. ScaMo: Realisation of an OO-functional DSL for cross platform mobile applications development

    NASA Astrophysics Data System (ADS)

    Macos, Dragan; Solymosi, Andreas

    2013-10-01

    The software market is dynamically changing: the Internet is going mobile, and software applications are shifting from desktop hardware onto mobile devices. The largest markets are mobile applications for iOS, Android, and Windows Phone, and the typical programming languages for these platforms are Objective-C, Java, and C#. The realization of native applications implies integrating the developed software into the environments of the mentioned mobile operating systems to enable access to different hardware components of the devices: GPS module, display, GSM module, etc. This paper deals with the definition and possible implementation of an environment for the automatic generation of applications for multiple mobile platforms. It is based on a DSL for mobile application development, which comprises the programming language Scala and a DSL defined in Scala. As part of a multi-stage cross-compiling algorithm, this language is translated into the language of the target mobile platform. The advantage of our method lies in the expressiveness of the defined language and the transparent source code translation between different languages, which brings, for example, advantages in debugging and in developing the generated code.

  3. Multicriteria framework for selecting a process modelling language

    NASA Astrophysics Data System (ADS)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM), since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate language for process modelling has become a difficult task because of the large number of modelling languages available and the lack of guidelines for evaluating and comparing languages to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach for selecting the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but rather demonstrates how two existing approaches can be combined to solve the problem of selecting a modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.
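
    A minimal additive-weighting sketch conveys the multicriteria idea: score each candidate language on quality criteria, weight the criteria by modelling purpose, and rank. The criteria names, weights, and scores below are invented for illustration, and the paper's actual MCDA method may differ (e.g. an outranking method rather than weighted sums):

```python
# Hypothetical SEQUAL-style criteria weighted for a particular purpose.
weights = {"expressiveness": 0.5, "comprehensibility": 0.3, "tool_support": 0.2}
scores = {
    "BPMN":       {"expressiveness": 8, "comprehensibility": 7, "tool_support": 9},
    "Petri nets": {"expressiveness": 9, "comprehensibility": 5, "tool_support": 6},
    "EPC":        {"expressiveness": 6, "comprehensibility": 8, "tool_support": 7},
}

def rank(scores, weights):
    """Order candidate languages by weighted-sum score, best first."""
    total = lambda lang: sum(weights[c] * scores[lang][c] for c in weights)
    return sorted(scores, key=total, reverse=True)

print(rank(scores, weights))  # ['BPMN', 'Petri nets', 'EPC']
```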

  4. Pragmatic service development and customisation with the CEDA OGC Web Services framework

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Stephens, Ag; Lowe, Dominic

    2010-05-01

    The CEDA OGC Web Services framework (COWS) emphasises rapid service development by providing a lightweight layer of OGC web service logic on top of Pylons, a mature web application framework for the Python language. This approach gives developers a flexible web service development environment without compromising access to the full range of web application tools and patterns: Model-View-Controller paradigm, XML templating, Object-Relational-Mapper integration and authentication/authorization. We have found this approach useful for exploring evolving standards and implementing protocol extensions to meet the requirements of operational deployments. This paper outlines how COWS is being used to implement customised WMS, WCS, WFS and WPS services in a variety of web applications from experimental prototypes to load-balanced cluster deployments serving 10-100 simultaneous users. In particular we will cover 1) The use of Climate Science Modeling Language (CSML) in complex-feature aware WMS, WCS and WFS services, 2) Extending WMS to support applications with features specific to earth system science and 3) A cluster-enabled Web Processing Service (WPS) supporting asynchronous data processing. The COWS WPS underpins all backend services in the UK Climate Projections User Interface where users can extract, plot and further process outputs from a multi-dimensional probabilistic climate model dataset. The COWS WPS supports cluster job execution, result caching, execution time estimation and user management. The COWS WMS and WCS components drive the project-specific NCEO and QESDI portals developed by the British Atmospheric Data Centre. These portals use CSML as a backend description format and implement features such as multiple WMS layer dimensions and climatology axes that are beyond the scope of general purpose GIS tools and yet vital for atmospheric science applications.

  5. A pattern-based analysis of clinical computer-interpretable guideline modeling languages.

    PubMed

    Mulyar, Nataliya; van der Aalst, Wil M P; Peleg, Mor

    2007-01-01

    Languages used to specify computer-interpretable guidelines (CIGs) differ in their approaches to addressing particular modeling challenges. The main goals of this article are: (1) to examine the expressive power of CIG modeling languages, and (2) to define the differences, from the control-flow perspective, between process languages in workflow management systems and modeling languages used to design clinical guidelines. The pattern-based analysis was applied to the guideline modeling languages Asbru, EON, GLIF, and PROforma. We focused on control-flow and left other perspectives out of consideration. We evaluated the selected CIG modeling languages and identified their degree of support of 43 control-flow patterns. We used a set of explicitly defined evaluation criteria to determine whether each pattern is supported directly, indirectly, or not at all. PROforma offers direct support for 22 of 43 patterns, Asbru 20, GLIF 17, and EON 11. All four directly support basic control-flow patterns, cancellation patterns, and some advanced branching and synchronization patterns. None support multiple-instances patterns. They offer varying levels of support for synchronizing merge patterns and state-based patterns. Some support a few scenarios not covered by the 43 control-flow patterns. CIG modeling languages are remarkably close to traditional workflow languages from the control-flow perspective, but cover many fewer workflow patterns. CIG languages offer some flexibility that supports modeling of complex decisions and provide ways for modeling some decisions not covered by workflow management systems. Workflow management systems may be suitable for clinical guideline applications.
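
    The direct-support counts reported in the abstract can be restated as data, with a one-liner ranking the languages by coverage of the 43 control-flow patterns:

```python
TOTAL_PATTERNS = 43
# Direct-support counts as reported in the abstract.
direct_support = {"PROforma": 22, "Asbru": 20, "GLIF": 17, "EON": 11}

coverage = {lang: n / TOTAL_PATTERNS for lang, n in direct_support.items()}
ranking = sorted(coverage, key=coverage.get, reverse=True)
print(ranking)  # ['PROforma', 'Asbru', 'GLIF', 'EON']
```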

  6. Simulated learning environments in speech-language pathology: an Australian response.

    PubMed

    MacBean, Naomi; Theodoros, Deborah; Davidson, Bronwyn; Hill, Anne E

    2013-06-01

    The rising demand for health professionals to service the Australian population is placing pressure on traditional approaches to clinical education in the allied health professions. Existing research suggests that simulated learning environments (SLEs) have the potential to increase student placement capacity while providing quality learning experiences with comparable or superior outcomes to traditional methods. This project investigated the current use of SLEs in Australian speech-language pathology curricula, and the potential future applications of SLEs to the clinical education curricula through an extensive consultative process with stakeholders (all 10 Australian universities offering speech-language pathology programs in 2010, Speech Pathology Australia, members of the speech-language pathology profession, and current student body). Current use of SLEs in speech-language pathology education was found to be limited, with additional resources required to further develop SLEs and maintain their use within the curriculum. Perceived benefits included: students' increased clinical skills prior to workforce placement, additional exposure to specialized areas of speech-language pathology practice, inter-professional learning, and richer observational experiences for novice students. Stakeholders perceived SLEs to have considerable potential for clinical learning. A nationally endorsed recommendation for SLE development and curricula integration was prepared.

  7. Best Case Practices of Technology at Eastern New Mexico University.

    ERIC Educational Resources Information Center

    DeWitt, Calvin W.; Nutter, Scott; Ayala, Mary; Hall, Debra

    This paper presents examples of best case practices of technology use in classes at Eastern New Mexico University (ENMU). The examples include successful and not-so-successful applications, with insights on the overall process of incorporating technology into the classroom. The paper focuses on the authors' experience in languages, business, and…

  8. Statistics and Style. Mathematical Linguistics and Automatic Language Processing No. 6.

    ERIC Educational Resources Information Center

    Dolezel, Lubomir, Ed.; Bailey, Richard W., Ed.

    This collection of 17 articles concerning the application of mathematical models and techniques to the study of literary style is an attempt to overcome the communication barriers that exist between scholars in the various fields that find their meeting ground in statistical stylistics. The articles selected were chosen to represent the best…

  9. Interlingual Machine Translation: Prospects and Setbacks

    ERIC Educational Resources Information Center

    Acikgoz, Firat; Sert, Olcay

    2006-01-01

    This study, in an attempt to rise above the intricacy of "being informed on the verge of globalization," is founded on the premise that Machine Translation (MT) applications searching for an ideal key to find a universal foundation for all natural languages have a restricted say over the translation process at various discourse levels. Our paper…

  10. Computer science, artificial intelligence, and cybernetics: Applied artificial intelligence in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubinger, B.

    1988-01-01

    This sourcebook provides information on the developments in artificial intelligence originating in Japan. Spanning such innovations as software productivity, natural language processing, CAD, and parallel inference machines, this volume lists leading organizations conducting research or implementing AI systems, describes AI applications being pursued, illustrates current results achieved, and highlights sources reporting progress.

  11. 76 FR 71276 - Common Crop Insurance Regulations; Pecan Revenue Crop Insurance Provisions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-17

    ... CFR part 400, subpart J for the informal administrative review process of good farming practices as... farming operation. For instance, all producers are required to submit an application and acreage report to... and discourages good management practices. Language in sections 3(d)(3) and 6(b) provides consequences...

  12. Striatal degeneration impairs language learning: evidence from Huntington's disease.

    PubMed

    De Diego-Balaguer, R; Couette, M; Dolbeau, G; Dürr, A; Youssov, K; Bachoud-Lévi, A-C

    2008-11-01

    Although the role of the striatum in language processing is still largely unclear, a number of recent proposals have outlined its specific contribution. Different studies report evidence converging to a picture where the striatum may be involved in those aspects of rule-application requiring non-automatized behaviour. This is the main characteristic of the earliest phases of language acquisition that require the online detection of distant dependencies and the creation of syntactic categories by means of rule learning. Learning of sequences and categorization processes in non-language domains has been known to require striatal recruitment. Thus, we hypothesized that the striatum should play a prominent role in the extraction of rules in learning a language. We studied 13 pre-symptomatic gene-carriers and 22 early stage patients of Huntington's disease (pre-HD), both characterized by a progressive degeneration of the striatum, and 21 late-stage patients with Huntington's disease (18 stage II, two stage III and one stage IV), in whom cortical degeneration accompanies striatal degeneration. When presented with a simplified artificial language where words and rules could be extracted, early stage Huntington's disease patients (stage I) were impaired in the learning test, demonstrating a greater impairment in rule than word learning compared to the 20 age- and education-matched controls. Huntington's disease patients at later stages were impaired both on word and rule learning. While spared in their overall performance, gene-carriers having learned a set of abstract artificial language rules were then impaired in the transfer of those rules to similar artificial language structures. The correlation analyses among several neuropsychological tests assessing executive function showed that rule learning correlated with tests requiring working memory and attentional control, while word learning correlated with a test involving episodic memory. These learning impairments significantly correlated with the bicaudate ratio. The overall results support striatal involvement in rule extraction from speech and suggest that language acquisition requires several aspects of memory and executive functions for word and rule learning.

  13. Wireless access to a pharmaceutical database: a demonstrator for data driven Wireless Application Protocol (WAP) applications in medical information processing.

    PubMed

    Schacht Hansen, M; Dørup, J

    2001-01-01

    The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control.
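
    The per-brand "card" backed by interrelated tables can be sketched in a few lines of SQL. The abstract describes 35 interrelated MySQL tables queried from PHP3; the two-table schema and drug entries below are invented for illustration, with SQLite standing in for MySQL:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE brand (id INTEGER PRIMARY KEY, name TEXT, info TEXT);
    CREATE TABLE substance (brand_id INTEGER, name TEXT);
    INSERT INTO brand VALUES (1, 'Panodil', 'Analgesic, 500 mg tablets');
    INSERT INTO substance VALUES (1, 'paracetamol');
""")

def brand_card(name):
    """Join general drug info with its active substance, as on a brand card."""
    row = db.execute(
        "SELECT b.info, s.name FROM brand b "
        "JOIN substance s ON s.brand_id = b.id WHERE b.name = ?",
        (name,),
    ).fetchone()
    return {"info": row[0], "substance": row[1]} if row else None

print(brand_card("Panodil"))
```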

  14. Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Protocol applications in medical information processing

    PubMed Central

    Hansen, Michael Schacht

    2001-01-01

    Background The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. Objectives To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. Methods We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. Results A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. Conclusions We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control. PMID:11720946

  15. Enabling Future Robotic Missions with Multicore Processors

    NASA Technical Reports Server (NTRS)

    Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.

    2011-01-01

    Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.

  16. Challenges in adapting existing clinical natural language processing systems to multiple, diverse health care settings.

    PubMed

    Carrell, David S; Schoen, Robert E; Leffler, Daniel A; Morris, Michele; Rose, Sherri; Baer, Andrew; Crockett, Seth D; Gourevitch, Rebecca A; Dean, Katie M; Mehrotra, Ateev

    2017-09-01

    Widespread application of clinical natural language processing (NLP) systems requires taking existing NLP systems and adapting them to diverse and heterogeneous settings. We describe the challenges faced and lessons learned in adapting an existing NLP system for measuring colonoscopy quality. We used colonoscopy and pathology reports from 4 settings during 2013-2015, varying by geographic location, practice type, compensation structure, and electronic health record. Though successful, adaptation required considerably more time and effort than anticipated. Typical NLP challenges in assembling corpora, diverse report structures, and idiosyncratic linguistic content were greatly magnified. Strategies for addressing adaptation challenges include assessing site-specific diversity, setting realistic timelines, leveraging local electronic health record expertise, and undertaking extensive iterative development. More research is needed on how to make it easier to adapt NLP systems to new clinical settings. A key challenge in widespread application of NLP is adapting existing systems to new clinical settings. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.

  17. From Sour Grapes to Low-Hanging Fruit: A Case Study Demonstrating a Practical Strategy for Natural Language Processing Portability.

    PubMed

    Johnson, Stephen B; Adekkanattu, Prakash; Campion, Thomas R; Flory, James; Pathak, Jyotishman; Patterson, Olga V; DuVall, Scott L; Major, Vincent; Aphinyanaphongs, Yindalon

    2018-01-01

    Natural Language Processing (NLP) holds potential for patient care and clinical research, but a gap exists between promise and reality. While some studies have demonstrated portability of NLP systems across multiple sites, challenges remain. Strategies to mitigate these challenges can tackle complex NLP problems with advanced methods (hard-to-reach fruit) or focus on simple NLP problems with practical methods (low-hanging fruit). This paper investigates a practical strategy for NLP portability using extraction of left ventricular ejection fraction (LVEF) as a use case. We used a tool developed at the Department of Veterans Affairs (VA) to extract LVEF values from free-text echocardiograms in the MIMIC-III database. The approach showed an accuracy of 98.4%, a sensitivity of 99.4%, a positive predictive value of 98.7%, and an F-score of 99.0%. This experience, in which a simple NLP solution proved highly portable with excellent performance, illustrates the point that simple NLP applications may be easier to disseminate and adapt, and in the short term may prove more useful, than complex applications.
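The kind of simple, portable extraction the paper advocates can be illustrated in a few lines of regular-expression code. This sketch is not the VA tool used in the study; the pattern and the report phrasings it handles are hypothetical examples of the low-hanging-fruit approach.

```python
import re

# Illustrative LVEF extractor for free-text echocardiogram reports.
# NOT the VA tool from the study; pattern and phrasings are hypothetical.
LVEF_RE = re.compile(
    r"(?:LVEF|ejection fraction)\s*(?:is|of|[:=])?\s*"
    r"(\d{1,2})\s*(?:-|to)?\s*(\d{1,2})?\s*%",
    re.IGNORECASE)

def extract_lvef(report):
    """Return the LVEF as a float (midpoint of a range), or None if absent."""
    m = LVEF_RE.search(report)
    if not m:
        return None
    lo = float(m.group(1))
    hi = float(m.group(2)) if m.group(2) else lo
    return (lo + hi) / 2.0
```

For example, "The LVEF is 55%." yields 55.0, and a range such as "ejection fraction of 50-55 %" yields its midpoint.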

  18. Language Policing: Micro-Level Language Policy-in-Process in the Foreign Language Classroom

    ERIC Educational Resources Information Center

    Amir, Alia; Musk, Nigel

    2013-01-01

    This article examines what we call "micro-level language policy-in-process"--that is, how a target-language-only policy emerges "in situ" in the foreign language classroom. More precisely, we investigate the role of "language policing", the mechanism deployed by the teacher and/or pupils to (re-)establish the…

  19. Lexical prediction via forward models: N400 evidence from German Sign Language.

    PubMed

    Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias

    2013-09-01

    Models of language processing in the human brain often emphasize the prediction of upcoming input, for example to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward model-based prediction even though they are semantically empty. Native speakers of DGS watched videos of naturally signed DGS sentences which ended with either an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby clearly anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension. © 2013 Elsevier Ltd. All rights reserved.

  20. Neural Language Processing in Adolescent First-Language Learners: Longitudinal Case Studies in American Sign Language

    PubMed Central

    Ferjan Ramirez, Naja; Leonard, Matthew K.; Davenport, Tristan S.; Torres, Christina; Halgren, Eric; Mayberry, Rachel I.

    2016-01-01

    One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772–2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience. PMID:25410427

  1. Oral-diadochokinesis rates across languages: English and Hebrew norms.

    PubMed

    Icht, Michal; Ben-David, Boaz M

    2014-01-01

    Oro-facial and speech motor control disorders represent a variety of speech and language pathologies. Early identification of such problems is important and carries clinical implications. A common and simple tool for gauging the presence and severity of speech motor control impairments is oral-diadochokinesis (oral-DDK). Surprisingly, norms for adult performance are missing from the literature. The goals of this study were: (1) to establish a norm for oral-DDK rate for (young to middle-age) adult English speakers, by collecting data from the literature (five studies, N = 141); (2) to investigate the possible effect of language (and culture) on oral-DDK performance, by analyzing studies conducted in other languages (five studies, N = 140), alongside the English norm; and (3) to find a new norm for adult Hebrew speakers, by testing 115 speakers. We first offer an English norm with a mean of 6.2 syllables/s (SD = 0.8), and a lower boundary of 5.4 syllables/s that can be used to indicate possible abnormality. Next, we found significant differences between four tested languages (English, Portuguese, Farsi and Greek) in oral-DDK rates. Results suggest the need to set language- and culture-sensitive norms for the application of the oral-DDK task world-wide. Finally, we found the oral-DDK performance for adult Hebrew speakers to be 6.4 syllables/s (SD = 0.8), not significantly different from the English norms. This implies possible phonological similarities between English and Hebrew. We further note that no gender effects were found in our study. We recommend using oral-DDK as an important tool in the speech-language pathologist's arsenal. Yet, application of this task should be done carefully, comparing individual performance to a set norm within the specific language.
Readers will be able to: (1) describe the speech-language pathologist's assessment process using the oral-DDK task, comparing an individual's performance to the present English norm; (2) describe the impact of language on oral-DDK performance; and (3) accurately assess Hebrew-speaking patients using this tool. Copyright © 2014 Elsevier Inc. All rights reserved.
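The comparison of an individual rate against a language-specific norm can be sketched as a small helper. The English figures (mean 6.2 syllables/s, SD 0.8, lower boundary 5.4) and the Hebrew mean and SD (6.4, 0.8) come from the abstract; the Hebrew lower boundary (mean minus one SD, by analogy) and the screening rule itself are illustrative assumptions, not clinical standards.

```python
# Norms from the abstract; the Hebrew lower boundary (mean - 1 SD) and the
# z-score screening rule are illustrative assumptions, not clinical standards.
NORMS = {
    "english": {"mean": 6.2, "sd": 0.8, "lower": 5.4},
    "hebrew":  {"mean": 6.4, "sd": 0.8, "lower": 6.4 - 0.8},  # assumed boundary
}

def screen_ddk(rate_syll_per_s, language):
    """Compare an oral-DDK rate (syllables/s) to its language-specific norm."""
    norm = NORMS[language]
    z = (rate_syll_per_s - norm["mean"]) / norm["sd"]
    return {"z": round(z, 2),
            "possibly_abnormal": rate_syll_per_s < norm["lower"]}
```

For example, an English speaker at 5.0 syllables/s falls below the 5.4 boundary and would be flagged for follow-up assessment.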

  2. Culpability and pain management/control in peripheral vascular disease using the ethics of principles and care.

    PubMed

    Omery, A

    1991-09-01

    The purposes of this article were to provide insight into the process of ethics and ethical inquiry and to explore the ethical issues of culpability and pain management/control. Critical care nurses who currently care for vascular patients identified these issues as occurring frequently in their practice. Authors in critical care nursing generally have limited the process of ethical inquiry to a theoretical framework built around an ethic of principles. The message many critical care nurses heard was that this one type of theoretical ethical framework was the totality of ethics. The application of these principles was ethical inquiry. For some nurses, the ethic of principles is sufficient. For others, an ethic of principles is either incomplete or foreign. This second group of nurses may believe that they have no moral voice if the language of ethics is only the language of principles. The language of principles, however, is not the only theoretical framework available. There is also the ethic of care, and ethical inquiry can include the application of that framework. Indeed, the language of the ethic of care may give a voice to nurses who previously felt morally mute. In fact, these two theoretical frameworks are not the only frameworks available to nurses. There is also virtue ethics, a framework not discussed in this article. A multiplicity of ethical frameworks is available for nurses to use in analyzing their professional and personal dilemmas. Recognizing that multiplicity, nurses can analyze their ethical dilemmas more comprehensively and effectively. Applying differing ethical frameworks can result in the same conclusions. This was the case for the issue of culpability.(ABSTRACT TRUNCATED AT 250 WORDS)

  3. Computer Programming Languages for Health Care

    PubMed Central

    O'Neill, Joseph T.

    1979-01-01

    This paper advocates the use of standard high level programming languages for medical computing. It recommends that U.S. Government agencies having health care missions implement coordinated policies that encourage the use of existing standard languages and the development of new ones, thereby enabling them and the medical computing community at large to share state-of-the-art application programs. Examples are based on a model that characterizes language and language translator influence upon the specification, development, test, evaluation, and transfer of application programs.

  4. A Research on Second Language Acquisition and College English Teaching

    ERIC Educational Resources Information Center

    Li, Changyu

    2009-01-01

    It was in the 1970s that American linguist S.D. Krashen created the theory of "language acquisition". The theories on second language acquisition were proposed based on the study on the second language acquisition process and its rules. Here, the second language acquisition process refers to the process in which a learner with the…

  5. Input Processing at First Exposure to a Sign Language

    ERIC Educational Resources Information Center

    Ortega, Gerardo; Morgan, Gary

    2015-01-01

    There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back…

  6. Help Me Please!: Designing and Developing Application for Emergencies

    NASA Astrophysics Data System (ADS)

    Hong, Ng Ken; Hafit, Hanayanti; Wahid, Norfaradilla; Kasim, Shahreen; Yusof, Munirah Mohd

    2017-08-01

    Help Me Please! is an Android-platform emergency-button application designed to transmit emergency messages with real-time information to target receivers. The purpose of developing this application is to help people report emergency circumstances via Short Message Service (SMS) on the Android platform. The application obtains the current location from the Global Positioning System (GPS) and the current time from the mobile device, and sends this information to the receivers when the user presses the emergency button. The application then keeps sending emergency alerts to the receivers, and updating the database, at the time interval set by the user until the user stops the function. The Object-Oriented Software Development model was employed to guide the development of this application, using the Java language and Android Studio. In conclusion, this application plays an important role in the rescue process when emergency circumstances happen. Rescue becomes more effective when others are notified of the emergency and the user's current location in the early hours.
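The repeat-alert behaviour described above can be sketched in platform-neutral form. The real application is written in Java for Android; here the GPS fix is a placeholder function, the phone numbers and message format are hypothetical, and the user-configured delay between rounds is omitted for clarity.

```python
# Platform-neutral sketch of the repeat-alert loop described above.
# `current_location` stands in for a GPS fix; numbers and message
# format are hypothetical. The real app is Java/Android.
def current_location():
    return (1.8554, 103.0810)  # hypothetical latitude/longitude

def emergency_alerts(receivers, rounds, get_location=current_location):
    """Build the (receiver, message) pairs the app would transmit via SMS.

    In the running app, each round is separated by the time interval the
    user configured, and the loop ends when the user stops the function.
    """
    alerts = []
    for i in range(rounds):
        lat, lon = get_location()
        msg = "HELP! Location: %.4f, %.4f (alert #%d)" % (lat, lon, i + 1)
        for number in receivers:
            alerts.append((number, msg))
    return alerts
```

For example, two rounds to two receivers produces four location-stamped alerts.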

  7. Engineering and Scientific Applications: Using MatLab(Registered Trademark) for Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    MatLab(R) (MATrix LABoratory) is a numerical computation and simulation tool used by thousands of scientists and engineers in many countries. MatLab does purely numerical calculations and can be used as a glorified calculator or as an interpreted programming language; its real strength is in matrix manipulations. Computer algebra functionality is available within the MatLab environment through the Symbolic toolbox. This feature is similar to computer algebra programs, such as Maple or Mathematica, that calculate with mathematical equations using symbolic operations. In its interpreted-language form (command interface), MatLab is similar to well-known programming languages such as C/C++, and it supports data structures and cell arrays for defining classes in object-oriented programming. As such, MatLab is equipped with most of the essential constructs of a higher-level programming language. MatLab is packaged with an editor and debugging functionality useful for analyzing large MatLab programs and finding errors. We believe there are many ways to approach real-world problems; prescribed methods to ensure the resulting solutions are incorporated in the design and analysis of data processing and visualization can help engineers and scientists gain wider insight into the actual implementation of their experiments. This presentation will focus on the data processing and visualization aspects of engineering and scientific applications. Specifically, it will discuss methods and techniques for intermediate-level data processing covering engineering and scientific problems. MatLab programming techniques will be discussed, including reading various data file formats to produce customized publication-quality graphics, importing engineering and/or scientific data, organizing data in tabular format, exporting data for use by other software programs such as Microsoft Excel, and data presentation and visualization.
The presentation will emphasize creating practical scripts (programs) that extend the basic features of MatLab. Topics include: (1) matrix and vector analysis and manipulation; (2) mathematical functions; (3) symbolic calculations and functions; (4) importing/exporting data files; (5) program logic and flow control; (6) writing functions and passing parameters; (7) test application programs.
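The import-organize-derive-export pipeline described above can be shown in miniature. This sketch uses Python's standard library in place of MatLab; the "data file" contents and column names are invented for illustration.

```python
import csv
import io

# Miniature of the import -> organize -> derive -> export pipeline described
# above, in Python's stdlib rather than MatLab. Data and column names are
# invented for illustration.
raw = "time,temp\n0,20.1\n1,20.8\n2,21.4\n"      # stand-in for an input data file

rows = list(csv.DictReader(io.StringIO(raw)))    # import into tabular form
temps = [float(r["temp"]) for r in rows]
mean_temp = sum(temps) / len(temps)              # a simple derived statistic

out = io.StringIO()                              # export as Excel-readable CSV
writer = csv.writer(out)
writer.writerow(["time", "temp", "deviation_from_mean"])
for r, t in zip(rows, temps):
    writer.writerow([r["time"], t, round(t - mean_temp, 3)])
```

In a MatLab workflow the same steps would use functions like file import, matrix operations, and spreadsheet export, followed by plotting for visualization.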

  8. Unity and disunity in evolutionary sciences: process-based analogies open common research avenues for biology and linguistics.

    PubMed

    List, Johann-Mattis; Pathmanathan, Jananan Sylvestre; Lopez, Philippe; Bapteste, Eric

    2016-08-20

    For a long time biologists and linguists have been noticing surprising similarities between the evolution of life forms and languages. Most of the proposed analogies have been rejected. Some, however, have persisted, and some even turned out to be fruitful, inspiring the transfer of methods and models between biology and linguistics up to today. Most proposed analogies were based on a comparison of the research objects rather than the processes that shaped their evolution. Focusing on process-based analogies, however, has the advantage of minimizing the risk of overstating similarities, while at the same time reflecting the common strategy to use processes to explain the evolution of complexity in both fields. We compared important evolutionary processes in biology and linguistics and identified processes specific to only one of the two disciplines as well as processes which seem to be analogous, potentially reflecting core evolutionary processes. These new process-based analogies support novel methodological transfer, expanding the application range of biological methods to the field of historical linguistics. We illustrate this by showing (i) how methods dealing with incomplete lineage sorting offer an introgression-free framework to analyze highly mosaic word distributions across languages; (ii) how sequence similarity networks can be used to identify composite and borrowed words across different languages; (iii) how research on partial homology can inspire new methods and models in both fields; and (iv) how constructive neutral evolution provides an original framework for analyzing convergent evolution in languages resulting from common descent (Sapir's drift). Apart from new analogies between evolutionary processes, we also identified processes which are specific to either biology or linguistics. This shows that general evolution cannot be studied from within one discipline alone. 
In order to get a full picture of evolution, biologists and linguists need to complement their studies, trying to identify cross-disciplinary and discipline-specific evolutionary processes. The fact that we found many process-based analogies favoring transfer from biology to linguistics further shows that certain biological methods and models have a broader scope than previously recognized. This opens fruitful paths for collaboration between the two disciplines. This article was reviewed by W. Ford Doolittle and Eugene V. Koonin.

  9. Template construction grammar: from visual scene description to language comprehension and agrammatism.

    PubMed

    Barrès, Victor; Lee, Jinyong

    2014-01-01

    How does the language system coordinate with our visual system to yield flexible integration of linguistic, perceptual, and world-knowledge information when we communicate about the world we perceive? Schema theory is a computational framework that allows the simulation of perceptuo-motor coordination programs on the basis of known brain operating principles such as cooperative computation and distributed processing. We first present its application to a model of language production, SemRep/TCG, which combines a semantic representation of visual scenes (SemRep) with Template Construction Grammar (TCG) as a means to generate verbal descriptions of a scene from its associated SemRep graph. SemRep/TCG combines the neurocomputational framework of schema theory with the representational format of construction grammar in a model linking eye-tracking data to visual scene descriptions. We then offer a conceptual extension of TCG to include language comprehension and address data on the role of both world knowledge and grammatical semantics in the comprehension performances of agrammatic aphasic patients. This extension introduces a distinction between heavy and light semantics. The TCG model of language comprehension offers a computational framework to quantitatively analyze the distributed dynamics of language processes, focusing on the interactions between grammatical, world-knowledge, and visual information. In particular, it reveals interesting implications for the understanding of the various patterns of comprehension performances of agrammatic aphasics measured using sentence-picture matching tasks. This new step in the life cycle of the model serves as a basis for exploring the specific challenges that neurolinguistic computational modeling poses to the neuroinformatics community.

  10. Novel methodology to examine cognitive and experiential factors in language development: combining eye-tracking and LENA technology

    PubMed Central

    Odean, Rosalie; Nazareth, Alina; Pruden, Shannon M.

    2015-01-01

    Developmental systems theory posits that development cannot be segmented by influences acting in isolation, but should be studied through a scientific lens that highlights the complex interactions between these forces over time (Overton, 2013a). This poses a unique challenge for developmental psychologists studying complex processes like language development. In this paper, we advocate for the combining of highly sophisticated data collection technologies in an effort to move toward a more systemic approach to studying language development. We investigate the efficiency and appropriateness of combining eye-tracking technology and the LENA (Language Environment Analysis) system, an automated language analysis tool, in an effort to explore the relation between language processing in early development, and external dynamic influences like parent and educator language input in the home and school environments. Eye-tracking allows us to study language processing via eye movement analysis; these eye movements have been linked to both conscious and unconscious cognitive processing, and thus provide one means of evaluating cognitive processes underlying language development that does not require the use of subjective parent reports or checklists. The LENA system, on the other hand, provides automated language output that describes a child’s language-rich environment. In combination, these technologies provide critical information not only about a child’s language processing abilities but also about the complexity of the child’s language environment. Thus, when used in conjunction these technologies allow researchers to explore the nature of interacting systems involved in language development. PMID:26379591

  11. Age-Dependent Effects of Catechol-O-Methyltransferase (COMT) Gene Val158Met Polymorphism on Language Function in Developing Children.

    PubMed

    Sugiura, Lisa; Toyota, Tomoko; Matsuba-Kurita, Hiroko; Iwayama, Yoshimi; Mazuka, Reiko; Yoshikawa, Takeo; Hagiwara, Hiroko

    2017-01-01

    The genetic basis controlling language development remains elusive. Previous studies of the catechol-O-methyltransferase (COMT) Val158Met genotype and cognition have focused on prefrontally guided executive functions involving dopamine. However, COMT may further influence posterior cortical regions implicated in language perception. We investigated whether COMT influences language ability and cortical language processing involving the posterior language regions in 246 children aged 6-10 years. We assessed language ability using a language test and cortical responses recorded during language processing using a word repetition task and functional near-infrared spectroscopy. The COMT genotype had significant effects on language performance and processing. Importantly, Met carriers outperformed Val homozygotes in language ability during the early elementary school years (6-8 years), whereas Val homozygotes exhibited significant language development during the later elementary school years. Both genotype groups exhibited equal language performance at approximately 10 years of age. Val homozygotes exhibited significantly less cortical activation compared with Met carriers during word processing, particularly at older ages. These findings regarding dopamine transmission efficacy may be explained by a hypothetical inverted U-shaped curve. Our findings indicate that the effects of the COMT genotype on language ability and cortical language processing may change in a narrow age window of 6-10 years. © The Author 2016. Published by Oxford University Press.

  12. Dynamic Server-Based KML Code Generator Method for Level-of-Detail Traversal of Geospatial Data

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory; Mixon, Brian; Linger, TIm

    2013-01-01

    Web-based geospatial client applications such as Google Earth and NASA World Wind must listen to data requests, access appropriate stored data, and compile a data response to the requesting client application. This process occurs repeatedly to support multiple client requests and application instances. Newer Web-based geospatial clients also provide user-interactive functionality that is dependent on fast and efficient server responses. With massively large datasets, server-client interaction can become severely impeded because the server must determine the best way to assemble data to meet the client application's request. In client applications such as Google Earth, the user interactively wanders through the data using visually guided panning and zooming actions. With these actions, the client application is continually issuing data requests to the server without knowledge of the server's data structure or extraction/assembly paradigm. A method for efficiently controlling the networked access of a Web-based geospatial browser to server-based datasets, in particular massively sized datasets, has been developed. The method specifically uses the Keyhole Markup Language (KML), an Open Geospatial Consortium (OGC) standard used by Google Earth and other KML-compliant geospatial client applications. The innovation is based on establishing a dynamic cascading KML strategy that is initiated by a KML launch file provided by a data server host to a Google Earth or similar KML-compliant geospatial client application user. Upon execution, the launch KML code issues a request for image data covering an initial geographic region. The server responds with the requested data along with subsequent dynamically generated KML code that directs the client application to make follow-on requests for higher level-of-detail (LOD) imagery to replace the initial imagery as the user navigates into the dataset.
The approach provides an efficient data traversal path and mechanism that can be flexibly established for any dataset regardless of size or other characteristics. The method yields significant improvements in user-interactive geospatial client and data server interaction and in the associated network bandwidth requirements. The innovation uses a C- or PHP-like grammar that provides a high degree of processing flexibility. A set of language lexer and parser elements is provided that offers a complete language grammar for writing and executing language directives. A script is wrapped and passed to the geospatial data server by a client application as a component of a standard KML-compliant statement. The approach provides an efficient means for a geospatial client application to request server preprocessing of data prior to client delivery. Data is structured in a quadtree format. As the user zooms into the dataset, geographic regions are subdivided into four child regions. Conversely, as the user zooms out, four child regions collapse into a single, lower-LOD region.
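The quadtree level-of-detail traversal described above can be sketched as follows. The four-way region split follows the text; the minimal KML NetworkLink skeleton is a hand-written illustration, not the innovation's actual generated code.

```python
# Sketch of the quadtree subdivision described above: each lat/lon bounding
# box splits into four children as the user zooms in. The NetworkLink
# skeleton is a minimal hand-written illustration.
def subdivide(region):
    """Split (south, west, north, east) into its four child quadrants."""
    s, w, n, e = region
    mid_lat, mid_lon = (s + n) / 2.0, (w + e) / 2.0
    return [
        (s, w, mid_lat, mid_lon),        # SW child
        (s, mid_lon, mid_lat, e),        # SE child
        (mid_lat, w, n, mid_lon),        # NW child
        (mid_lat, mid_lon, n, e),        # NE child
    ]

def network_link(region, href):
    """Emit a minimal KML NetworkLink requesting higher-LOD data for a region."""
    s, w, n, e = region
    return ("<NetworkLink><Region><LatLonAltBox>"
            "<south>%g</south><west>%g</west><north>%g</north><east>%g</east>"
            "</LatLonAltBox></Region><Link><href>%s</href></Link></NetworkLink>"
            % (s, w, n, e, href))
```

In the cascading strategy, the server would emit one such NetworkLink per child region with each data response, so the client automatically requests finer imagery as the user zooms in.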

  13. An Abstract Plan Preparation Language

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Cesar A.

    2006-01-01

    This paper presents a new planning language that is more abstract than most existing planning languages such as the Planning Domain Definition Language (PDDL) or the New Domain Description Language (NDDL). The goal of this language is to simplify the formal analysis and specification of planning problems that are intended for safety-critical applications such as power management or automated rendezvous in future manned spacecraft. The new language has been named the Abstract Plan Preparation Language (APPL). A translator from APPL to NDDL has been developed in support of the Spacecraft Autonomy for Vehicles and Habitats Project (SAVH) sponsored by the Exploration Technology Development Program, which is seeking to mature autonomy technology for application to the new Crew Exploration Vehicle (CEV) that will replace the Space Shuttle.

  14. Machine learning and radiology.

    PubMed

    Wang, Shijun; Summers, Ronald M

    2012-07-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer-aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. Copyright © 2012. Published by Elsevier B.V.

  15. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
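    The recasting step described above can be sketched in plain Python. The model below is a hypothetical thin-wall two-bar truss (the load, material constants, and the brute-force search are illustrative stand-ins, not SOL or the CONMIN/ADS/NPSOL optimizers the abstract names): weight is the objective, tube diameter and truss height are the design variables, and stress and Euler buckling are the constraints.

    ```python
    import math

    # Hypothetical two-bar truss model (illustrative constants, not from the paper):
    # two tubes of diameter d and wall thickness t span a base of half-width B
    # to a joint at height h carrying load P.
    P, B, t, rho, E = 33e3, 30.0, 0.1, 0.3, 30e6  # load, half-span, wall, density, modulus
    S_allow = 100e3                               # allowable stress

    def weight(d, h):
        L = math.sqrt(B**2 + h**2)                # member length
        return 2 * rho * math.pi * d * t * L      # thin-wall tube weight (objective)

    def constraints(d, h):
        L = math.sqrt(B**2 + h**2)
        area = math.pi * d * t
        stress = P * L / (2 * area * h)           # member stress under load P
        buckling = math.pi**2 * E * (d**2 + t**2) / (8 * L**2)  # Euler buckling stress
        return [S_allow - stress, buckling - stress]  # feasible when both >= 0

    def grid_search(ds, hs):
        """Brute-force stand-in for a formal optimizer: lightest feasible design."""
        best = None
        for d in ds:
            for h in hs:
                if all(g >= 0 for g in constraints(d, h)):
                    w = weight(d, h)
                    if best is None or w < best[0]:
                        best = (w, d, h)
        return best

    w, d, h = grid_search([0.5 + 0.05 * i for i in range(60)],
                          [10 + 0.5 * j for j in range(80)])
    print(f"lightest feasible design: d={d:.2f} in, h={h:.1f} in, weight={w:.1f} lb")
    ```

    A real SOL program would express only the objective, variables, and constraints and leave the search to the linked optimizer; the point here is the shape of the problem statement, not the solver.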

  16. Applications of Quality Management in Language Education

    ERIC Educational Resources Information Center

    Heyworth, Frank

    2013-01-01

    This review examines applications of quality management (QM) in language education. QM approaches have been adapted from methodologies developed in industrial and commercial settings, and these are briefly described. Key aspects of QM in language education are the definition of purpose, descriptions of principles and practice, including various…

  17. Theory and Application in English Language Teaching.

    ERIC Educational Resources Information Center

    Kitao, S. Kathleen

    A collection of papers, all by the author, looks at a variety of theories and theoretical approaches from linguistics, sociolinguistics, and psycholinguistics and their applications to the teaching of English as a second language. Two studies are also presented. Titles include: "Content Schemata and Second Language Learning";…

  18. From General Game Descriptions to a Market Specification Language for General Trading Agents

    NASA Astrophysics Data System (ADS)

    Thielscher, Michael; Zhang, Dongmo

    The idea behind General Game Playing is to build systems that, instead of being programmed for one specific task, are intelligent and flexible enough to negotiate an unknown environment solely on the basis of the rules which govern it. In this paper, we argue that this principle has great potential to bring artificially intelligent systems in other application areas to a new level as well. Our specific interest lies in General Trading Agents, which are able to understand the rules of unknown markets and then to actively participate in them without human intervention. To this end, we extend the general Game Description Language into a language that allows arbitrary markets to be described formally in such a way that these specifications can be automatically processed by a computer. We present both syntax and a transition-based semantics for this Market Specification Language and illustrate its expressive power by presenting axiomatizations of several well-known auction types.
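    A transition-based semantics of the kind described can be loosely pictured as a state plus a legality test plus an update rule. The toy English-auction rules below are invented for illustration (the actual Market Specification Language is a GDL extension, not Python):

    ```python
    # Toy transition system for a simple English auction: an agent that can read
    # such rules can participate in the market without being programmed for it.
    def legal(state, bid):
        return bid > state["price"]                 # only strictly higher bids allowed

    def update(state, bidder, bid):
        """Transition: a legal bid makes the bidder the new leader at that price."""
        assert legal(state, bid)
        return {"price": bid, "leader": bidder}

    state = {"price": 0, "leader": None}
    for bidder, bid in [("a", 5), ("b", 7), ("a", 9)]:
        if legal(state, bid):
            state = update(state, bidder, bid)
    print(state)
    ```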

  19. Design of Instant Messaging System of Multi-language E-commerce Platform

    NASA Astrophysics Data System (ADS)

    Yang, Heng; Chen, Xinyi; Li, Jiajia; Cao, Yaru

    2017-09-01

    This paper studies the message subsystem of an instant messaging system for a multi-language e-commerce platform, with the goals of designing an instant messaging system for a multi-language environment, presenting information with national characteristics, and applying national languages to e-commerce. In order to develop a beautiful and friendly interface for the front end of the message system and reduce the development cost, the mature jQuery framework is adopted in this paper. The high-performance server Tomcat is adopted at the back end to process user requests, and MySQL database is adopted for data storage to persistently store user data, and meanwhile Oracle database is adopted as the message buffer for system optimization. Moreover, AJAX technology is adopted for the client to actively pull the newest data from the server at specified intervals. In practical application, the system has strong reliability, good expansibility, short response time, high system throughput capacity and high user concurrency.
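    The buffer-plus-polling pattern described above can be sketched in a few lines. This is a minimal in-memory stand-in (the class and method names are invented; the paper's system uses a Tomcat back end with an Oracle message buffer and AJAX clients):

    ```python
    import time
    from collections import defaultdict

    class MessageBuffer:
        """In-memory stand-in for the server-side message buffer."""
        def __init__(self):
            self._inbox = defaultdict(list)   # recipient -> [(timestamp, sender, text)]

        def send(self, sender, recipient, text):
            self._inbox[recipient].append((time.time(), sender, text))

        def poll(self, recipient, since):
            """Return messages newer than `since`, as an AJAX poll handler would."""
            return [m for m in self._inbox[recipient] if m[0] > since]

    buf = MessageBuffer()
    buf.send("alice", "bob", "你好")          # multi-language payloads are plain strings
    last_seen = 0.0
    new = buf.poll("bob", last_seen)          # client pulls the newest data at an interval
    for ts, sender, text in new:
        print(f"{sender}: {text}")
    last_seen = max(ts for ts, _, _ in new)   # advance the client's watermark
    ```

    Polling against a timestamp watermark is what lets the client "actively pull the newest data" without the server keeping per-client connection state.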

  20. Ideas on Learning a New Language Intertwined with the Current State of Natural Language Processing and Computational Linguistics

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    2015-01-01

    In 2014, in conjunction with doing research in natural language processing and attending a global conference on computational linguistics, the author decided to learn a new foreign language, Greek, that uses a non-English character set. This paper/session will present/discuss an overview of the current state of natural language processing and…

  1. Teaching computer interfacing with virtual instruments in an object-oriented language.

    PubMed Central

    Gulotta, M

    1995-01-01

    LabVIEW is a graphic object-oriented computer language developed to facilitate hardware/software communication. LabVIEW is a complete computer language that can be used like Basic, FORTRAN, or C. In LabVIEW one creates virtual instruments (VIs) that aesthetically look like real instruments but are controlled by sophisticated computer programs. There are several levels of data acquisition VIs that make it easy to control data flow, and many signal processing and analysis algorithms come with the software as premade VIs. In the classroom, the similarity between virtual and real instruments helps students understand how information is passed between the computer and attached instruments. The software may be used in the absence of hardware so that students can work at home as well as in the classroom. This article demonstrates how LabVIEW can be used to control data flow between computers and instruments, points out important features for signal processing and analysis, and shows how virtual instruments may be used in place of physical instrumentation. Applications of LabVIEW to the teaching laboratory are also discussed, and a plausible course outline is given. PMID:8580361

  2. Natural language processing and advanced information management

    NASA Technical Reports Server (NTRS)

    Hoard, James E.

    1989-01-01

    Integrating diverse information sources and application software in a principled and general manner will require a very capable advanced information management (AIM) system. In particular, such a system will need a comprehensive addressing scheme to locate the material in its docuverse. It will also need a natural language processing (NLP) system of great sophistication. It seems that the NLP system must serve three functions. First, it provides a natural language interface (NLI) for the users. Second, it serves as the core component that understands and makes use of the real-world interpretations (RWIs) contained in the docuverse. Third, it enables the reasoning specialists (RSs) to arrive at conclusions that can be transformed into procedures that will satisfy the users' requests. The best candidate for an intelligent agent that can satisfactorily make use of RSs and transform documents (TDs) appears to be an object-oriented database (OODB). OODBs have, apparently, an inherent capacity to use the large numbers of RSs and TDs that will be required by an AIM system and an inherent capacity to use them in an effective way.

  3. Unified modeling language and design of a case-based retrieval system in medical imaging.

    PubMed Central

    LeBozec, C.; Jaulent, M. C.; Zapletal, E.; Degoulet, P.

    1998-01-01

    One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case-base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM--Images and Diagnosis from Examples in Medicine--a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspect of this approach was selecting the relevant objects of the system according to user requirements and making visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required but UML seems to be a promising formalism, improving the communication between the developers and users. PMID:9929346

  4. Unified modeling language and design of a case-based retrieval system in medical imaging.

    PubMed

    LeBozec, C; Jaulent, M C; Zapletal, E; Degoulet, P

    1998-01-01

    One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case-base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM--Images and Diagnosis from Examples in Medicine--a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspect of this approach was selecting the relevant objects of the system according to user requirements and making visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required but UML seems to be a promising formalism, improving the communication between the developers and users.

  5. Teaching computer interfacing with virtual instruments in an object-oriented language.

    PubMed

    Gulotta, M

    1995-11-01

    LabVIEW is a graphic object-oriented computer language developed to facilitate hardware/software communication. LabVIEW is a complete computer language that can be used like Basic, FORTRAN, or C. In LabVIEW one creates virtual instruments (VIs) that aesthetically look like real instruments but are controlled by sophisticated computer programs. There are several levels of data acquisition VIs that make it easy to control data flow, and many signal processing and analysis algorithms come with the software as premade VIs. In the classroom, the similarity between virtual and real instruments helps students understand how information is passed between the computer and attached instruments. The software may be used in the absence of hardware so that students can work at home as well as in the classroom. This article demonstrates how LabVIEW can be used to control data flow between computers and instruments, points out important features for signal processing and analysis, and shows how virtual instruments may be used in place of physical instrumentation. Applications of LabVIEW to the teaching laboratory are also discussed, and a plausible course outline is given.

  6. Application of the informational reference system OZhUR to the automated processing of data from satellites of the Kosmos series

    NASA Technical Reports Server (NTRS)

    Pokras, V. M.; Yevdokimov, V. P.; Maslov, V. D.

    1978-01-01

    The structure and potential of the information reference system OZhUR designed for the automated data processing systems of scientific space vehicles (SV) is considered. The system OZhUR ensures control of the extraction phase of processing with respect to a concrete SV and the exchange of data between phases. The practical application of the system OZhUR is exemplified in the construction of a data processing system for satellites of the Cosmos series. As a result of automating the operations of exchange and control, the volume of manual preparation of data is significantly reduced, and there is no longer any need for individual logs which fix the status of data processing. The system OZhUR is included in the automated data processing system Nauka which is realized in language PL-1 in a binary one-address system one-state (BOS OS) electronic computer.

  7. Towards a semantic lexicon for biological language processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verspoor, K.

    It is well understood that natural language processing (NLP) applications require sophisticated lexical resources to support their processing goals. In the biomedical domain, we are privileged to have access to extensive terminological resources in the form of controlled vocabularies and ontologies, which have been integrated into the framework of the National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. However, the existence of such terminological resources does not guarantee their utility for NLP. In particular, we have two core requirements for lexical resources for NLP in addition to the basic enumeration of important domain terms: representation of morphosyntactic information about those terms, specifically part-of-speech information and inflectional patterns to support parsing and lemma assignment, and representation of semantic information indicating general categorical information about terms and significant relations between terms to support text understanding and inference (Hahn et al., 1999). Biomedical vocabularies by and large leave out morphosyntactic information, and where they address semantic considerations, they often do so in an unprincipled manner, for instance by indicating a relation between two concepts without indicating the type of that relation. But all is not lost. The UMLS knowledge sources include two additional resources which are relevant: the SPECIALIST lexicon, a lexicon addressing our morphosyntactic requirements, and the Semantic Network, a representation of core conceptual categories in the biomedical domain. The coverage of these two knowledge sources with respect to the full coverage of the Metathesaurus is, however, not entirely clear. Furthermore, when our goals are specifically to process biological text, and often more specifically text in the molecular biology domain, it is difficult to say whether the coverage of these resources is meaningful.
The utility of the UMLS knowledge sources for medical language processing (MLP) has been explored (Johnson, 1999; Friedman et al., 2001); the time has now come to repeat these experiments with respect to biological language processing (BLP). To that end, this paper presents an analysis of the UMLS resources, specifically with an eye towards constructing lexical resources suitable for BLP. We follow the paradigm presented in Johnson (1999) for medical language, exploring overlap between the UMLS Metathesaurus and SPECIALIST lexicon to construct a morphosyntactically and semantically specified lexicon, and then further explore the overlap with a relevant domain corpus for molecular biology.
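    The overlap analysis described is, at heart, set intersection over term inventories. The sketch below uses toy term sets standing in for the real Metathesaurus, SPECIALIST lexicon, and a molecular-biology corpus vocabulary (the terms themselves are invented):

    ```python
    # Toy stand-ins for the UMLS knowledge sources and a domain corpus.
    metathesaurus = {"protein kinase", "gene expression", "apoptosis", "myocardium"}
    specialist = {"protein kinase", "apoptosis", "myocardium", "fever"}
    corpus_vocab = {"protein kinase", "gene expression", "apoptosis"}

    # Terms with both a concept entry and morphosyntactic information:
    lexicalized = metathesaurus & specialist
    # ...and that actually occur in molecular-biology text:
    usable = lexicalized & corpus_vocab

    print(f"{len(usable)}/{len(metathesaurus)} Metathesaurus terms are "
          f"lexicalized and corpus-attested")
    ```

    In the real analysis the sets hold hundreds of thousands of strings, but the coverage question being asked is exactly this intersection.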

  8. MCAT Verbal Reasoning score: less predictive of medical school performance for English language learners.

    PubMed

    Winegarden, Babbi; Glaser, Dale; Schwartz, Alan; Kelly, Carolyn

    2012-09-01

    Medical College Admission Test (MCAT) scores are widely used as part of the decision-making process for selecting candidates for admission to medical school. Applicants who learned English as a second language may be at a disadvantage when taking tests in their non-native language. Preliminary research found significant differences between English language learners (ELLs), applicants who learned English after the age of 11 years, and non-ELL examinees on the Verbal Reasoning (VR) sub-test of the MCAT. The purpose of this study was to determine if relationships between VR sub-test scores and measures of medical school performance differed between ELL and non-ELL students. Scores on the MCAT VR sub-test and student performance outcomes (grades, examination scores, and markers of distinction and difficulty) were extracted from University of California San Diego School of Medicine admissions files and the Association of American Medical Colleges database for 924 students who matriculated in 1998-2005 (graduation years 2002-2009). Regression models were fitted to determine whether MCAT VR sub-test scores predicted medical school performance similarly for ELLs and non-ELLs. For several outcomes, including pre-clerkship grades, academic distinction, US Medical Licensing Examination Step 2 Clinical Knowledge scores and two clerkship shelf examinations, ELL status significantly affects the ability of the VR score to predict performance. Higher correlations between VR score and medical school performance emerged for non-ELL students than for ELL students for each of these outcomes. The MCAT VR score should be used with discretion when assessing ELL applicants for admission to medical school. © Blackwell Publishing Ltd 2012.

  9. HDL Based FPGA Interface Library for Data Acquisition and Multipurpose Real Time Algorithms

    NASA Astrophysics Data System (ADS)

    Fernandes, Ana M.; Pereira, R. C.; Sousa, J.; Batista, A. J. N.; Combo, A.; Carvalho, B. B.; Correia, C. M. B. A.; Varandas, C. A. F.

    2011-08-01

    The inherent parallelism of the logic resources, the flexibility in its configuration and the performance at high processing frequencies make the field programmable gate array (FPGA) the most suitable device to be used both for real time algorithm processing and data transfer in instrumentation modules. Moreover, the reconfigurability of these FPGA based modules enables exploiting different applications on the same module. When using a reconfigurable module for various applications, the availability of a common interface library for easier implementation of the algorithms on the FPGA leads to more efficient development. The FPGA configuration is usually specified in a hardware description language (HDL) or other higher level descriptive language. The critical paths, such as the management of internal hardware clocks, which require deep knowledge of the module behavior, shall be implemented in HDL to optimize the timing constraints. The common interface library should include these critical paths, freeing the application designer from hardware complexity and allowing any of the available high-level abstraction languages to be chosen for the algorithm implementation. With this purpose a modular Verilog code was developed for the Virtex 4 FPGA of the in-house Transient Recorder and Processor (TRP) hardware module, based on the Advanced Telecommunications Computing Architecture (ATCA), with eight channels sampling at up to 400 MSamples/s (MSPS). The TRP was designed to perform real time Pulse Height Analysis (PHA), Pulse Shape Discrimination (PSD) and Pile-Up Rejection (PUR) algorithms at a high count rate (few Mevent/s). A brief description of this modular code is presented and examples of its use as an interface with end user algorithms, including a PHA with PUR, are described.
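    To make the PHA-with-PUR idea concrete, here is a software sketch of the same logic the module implements in hardware: record the height of each peak above a threshold, but reject peaks that sit too close to a neighbour (pile-up). The threshold, spacing, and sample trace are assumptions for illustration, not the TRP's parameters:

    ```python
    def pha_with_pur(samples, threshold=50, min_separation=5):
        """Return heights of isolated pulses; reject peaks closer than min_separation."""
        peaks = [i for i in range(1, len(samples) - 1)
                 if samples[i] > threshold
                 and samples[i] >= samples[i - 1] and samples[i] > samples[i + 1]]
        heights = []
        for k, i in enumerate(peaks):
            prev_ok = k == 0 or i - peaks[k - 1] >= min_separation
            next_ok = k == len(peaks) - 1 or peaks[k + 1] - i >= min_separation
            if prev_ok and next_ok:            # pile-up rejection: keep isolated peaks
                heights.append(samples[i])
        return heights

    # Two overlapping pulses (80 and 65) followed by an isolated pulse (90):
    trace = [0, 10, 80, 20, 0, 60, 65, 30, 0, 0, 0, 0, 0, 0, 90, 10]
    # Both piled-up peaks are rejected; only the isolated 90 survives.
    print(pha_with_pur(trace))
    ```

    On the FPGA this runs in parallel on the sample stream at up to 400 MSPS; the Python version only shows the decision logic.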

  10. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  11. Quicksilver: Middleware for Scalable Self-Regenerative Systems

    DTIC Science & Technology

    2006-04-01

    Applications can be coded in any of about 25 programming languages ranging from the obvious ones to some very obscure languages, such as OCaml ...technology. Like Tempest, Quicksilver can support applications written in any of a wide range of programming languages supported by .NET. However, whereas...so that developers can work in standard languages and with standard tools and still exploit those solutions. Vendors need to see some success

  12. Generalizing the Arden Syntax to a Common Clinical Application Language.

    PubMed

    Kraus, Stefan

    2018-01-01

    The Arden Syntax for Medical Logic Systems is a standard for encoding and sharing knowledge in the form of Medical Logic Modules (MLMs). Although the Arden Syntax has been designed to meet the requirements of data-driven clinical event monitoring, multiple studies suggest that its language constructs may be suitable for use outside the intended application area and even as a common clinical application language. Such a broader context, however, requires reconsidering some language features. The purpose of this paper is to outline the related modifications on the basis of a generalized Arden Syntax version. The implemented prototype provides multiple adjustments to the standard, such as an option to use programming language constructs without the frame-like MLM structure, a JSON compliant data type system, a means to use MLMs as user-defined functions, and native support of restful web services with integrated data mapping. This study does not aim to promote an entirely new language, but a more generic version of the proven Arden Syntax standard. Such an easy-to-understand domain-specific language for common clinical applications might cover multiple additional medical subdomains and serve as a lingua franca for arbitrary clinical algorithms, thereby avoiding a patchwork of multiple all-purpose languages between, and even within, institutions.
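    The "MLM as a user-defined function over JSON-compliant data" idea can be sketched outside Arden Syntax entirely. The rule, field names, and threshold below are invented for illustration; a real MLM would carry maintenance and library slots in addition to the logic:

    ```python
    import json

    def creatinine_alert(patient):
        """An MLM reduced to a plain function: data in, logic, action out."""
        value = patient.get("creatinine_mg_dl")
        if value is None:
            return None                      # no data available: no alert
        if value > 1.3:                      # illustrative threshold, not clinical advice
            return {"alert": "elevated creatinine", "value": value}
        return None

    # JSON-compliant data flows in and out unchanged, which is what lets such
    # functions compose with web services and with each other.
    record = json.loads('{"name": "pat1", "creatinine_mg_dl": 2.1}')
    print(creatinine_alert(record))
    ```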

  13. PLEXIL-DL: Language and Runtime for Context-Aware Robot Behaviour

    NASA Astrophysics Data System (ADS)

    Moser, Herwig; Reichelt, Toni; Oswald, Norbert; Förster, Stefan

    Faced with the growing complexity of application scenarios social robots are involved with, the perception of environmental circumstances and the sentient reactions are becoming more and more important abilities. Rather than regarding both abilities in isolation, the entire transformation process, from context-awareness to purposive behaviour, forms a robot’s adaptivity. While attaining context-awareness has received much attention in literature so far, translating it into appropriate actions still lacks a comprehensive approach. In this paper, we present PLEXIL-DL, an expressive language allowing complex context expressions as an integral part of constructs that define sophisticated behavioural reactions. Our approach extends NASA’s PLEXIL language by Description Logic queries, both in syntax and formal semantics. A prototypical implementation of a PLEXIL-DL interpreter shows the basic mechanisms facilitating the robot’s adaptivity through context-awareness.

  14. Ruby on Rails Applications

    NASA Technical Reports Server (NTRS)

    Hochstadt, Jake

    2011-01-01

    Ruby on Rails is an open source web application framework for the Ruby programming language. The first application I built was a web application to manage and authenticate other applications. One of the main requirements for this application was a single sign-on service. This allowed authentication to be built in one location and be implemented in many different applications. For example, users would be able to login using their existing credentials, and be able to access other NASA applications without authenticating again. The second application I worked on was an internal qualification plan application. Previously, the viewing of employee qualifications was managed through Excel spreadsheets. I built a database driven application to streamline the process of managing qualifications. Employees would be able to login securely to view, edit and update their personal qualifications.
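    The single sign-on requirement boils down to one service issuing a credential that every other application can verify without re-authenticating the user. The HMAC-signed token below is a generic sketch of that idea (the key, token format, and username are assumptions, not the NASA implementation, which was built in Rails):

    ```python
    import hashlib
    import hmac

    # Key shared between the auth service and the participating applications.
    SECRET = b"shared-sso-key"

    def issue_token(username):
        """Auth service: sign the username once at login."""
        sig = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
        return f"{username}:{sig}"

    def verify_token(token):
        """Any application: accept the token without re-authenticating."""
        username, _, sig = token.partition(":")
        expected = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
        return username if hmac.compare_digest(sig, expected) else None

    t = issue_token("jdoe")          # hypothetical user
    print(verify_token(t))           # every app accepts the same token
    print(verify_token("eve:forged"))
    ```

    Production SSO schemes add expiry, audience, and revocation on top of this; the sketch shows only the sign-once, verify-everywhere core.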

  15. Neural Language Processing in Adolescent First-Language Learners: Longitudinal Case Studies in American Sign Language.

    PubMed

    Ferjan Ramirez, Naja; Leonard, Matthew K; Davenport, Tristan S; Torres, Christina; Halgren, Eric; Mayberry, Rachel I

    2016-03-01

    One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. An intuitive Python interface for Bioconductor libraries demonstrates the utility of language translators.

    PubMed

    Gautier, Laurent

    2010-12-21

    Computer languages can be domain-related, and in the case of multidisciplinary projects, knowledge of several languages will be needed in order to implement ideas quickly. Moreover, each computer language has relative strong points, making some languages better suited than others for a given task. The Bioconductor project, based on the R language, has become a reference for the numerical processing and statistical analysis of data coming from high-throughput biological assays, providing a rich selection of methods and algorithms to the research community. At the same time, Python has matured as a rich and reliable language for the agile development of prototypes or final implementations, as well as for handling large data sets. The data structures and functions from Bioconductor can be exposed to Python as a regular library. This allows a fully transparent and native use of Bioconductor from Python, without one having to know the R language and with only a small community of translators required to know both. To demonstrate this, we have implemented such Python representations for key infrastructure packages in Bioconductor, letting a Python programmer handle annotation data, microarray data, and next-generation sequencing data. Bioconductor is thus no longer reserved solely for R users. Building a Python application using Bioconductor functionality can be done just as if Bioconductor were a Python package. Moreover, similar principles can be applied to other languages and libraries. Our Python package is available at: http://pypi.python.org/pypi/rpy2-bioconductor-extensions/.
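    The translator principle can be illustrated without R installed: wrap a "foreign" object behind a class that speaks the host language's idioms, so users never see the other language's conventions. Both classes below are invented stand-ins (ForeignMatrix mimics R's 1-based, column-major access; PyMatrix is the Python-side translator):

    ```python
    class ForeignMatrix:
        """Stand-in for an R-side object with 1-based, column-major access."""
        def __init__(self, columns):
            self.columns = columns            # list of columns
        def element(self, i, j):              # 1-based indices, R style
            return self.columns[j - 1][i - 1]

    class PyMatrix:
        """Python-side translator: native 0-based m[i, j] indexing."""
        def __init__(self, foreign):
            self._f = foreign
        def __getitem__(self, ij):
            i, j = ij
            return self._f.element(i + 1, j + 1)   # translate the convention once

    m = PyMatrix(ForeignMatrix([[1, 2], [3, 4]]))  # columns (1, 2) and (3, 4)
    print(m[0, 1])   # row 0, col 1 -> first element of the second column
    ```

    The convention translation lives in one place, which is why only a small community of translators needs to know both languages.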

  17. 34 CFR 655.4 - What definitions apply to the International Education Programs?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are defined in 34 CFR part 77: Acquisition Applicant Application Award Budget Contract EDGAR Equipment... higher education for the purpose of carrying out a common objective on their behalf. Critical languages means each of the languages contained in the list of critical languages designated by the Secretary...

  18. 34 CFR 655.4 - What definitions apply to the International Education Programs?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined in 34 CFR part 77: Acquisition Applicant Application Award Budget Contract EDGAR Equipment... higher education for the purpose of carrying out a common objective on their behalf. Critical languages means each of the languages contained in the list of critical languages designated by the Secretary...

  19. 34 CFR 655.4 - What definitions apply to the International Education Programs?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined in 34 CFR part 77: Acquisition Applicant Application Award Budget Contract EDGAR Equipment... higher education for the purpose of carrying out a common objective on their behalf. Critical languages means each of the languages contained in the list of critical languages designated by the Secretary...

  20. 34 CFR 655.4 - What definitions apply to the International Education Programs?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined in 34 CFR part 77: Acquisition Applicant Application Award Budget Contract EDGAR Equipment... higher education for the purpose of carrying out a common objective on their behalf. Critical languages means each of the languages contained in the list of critical languages designated by the Secretary...

  1. 34 CFR 655.4 - What definitions apply to the International Education Programs?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined in 34 CFR part 77: Acquisition Applicant Application Award Budget Contract EDGAR Equipment... higher education for the purpose of carrying out a common objective on their behalf. Critical languages means each of the languages contained in the list of critical languages designated by the Secretary...

  2. 78 FR 13394 - 30-Day Notice of Proposed Information Collection: Office of Language Services Contractor...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-27

    ... of Language Services Contractor Application Form ACTION: Notice of request for public comment and...: Title of Information Collection: Office of Language Services Contractor Application Form. OMB Control... States. If candidates successfully become contractors for the U.S. Department of State, Office of...

  3. A Study of Multimedia Application-Based Vocabulary Acquisition

    ERIC Educational Resources Information Center

    Shao, Jing

    2012-01-01

    The development of computer-assisted language learning (CALL) has created the opportunity for exploring the effects of the multimedia application on foreign language vocabulary acquisition in recent years. This study provides an overview the computer-assisted language learning (CALL) and detailed a developing result of CALL--multimedia. With the…

  4. vSPARQL: A View Definition Language for the Semantic Web

    PubMed Central

    Shaw, Marianne; Detwiler, Landon T.; Noy, Natalya; Brinkley, James; Suciu, Dan

    2010-01-01

    Translational medicine applications would like to leverage the biological and biomedical ontologies, vocabularies, and data sets available on the semantic web. We present a general solution for RDF information set reuse inspired by database views. Our view definition language, vSPARQL, allows applications to specify the exact content that they are interested in and how that content should be restructured or modified. Applications can access relevant content by querying against these view definitions. We evaluate the expressivity of our approach by defining views for practical use cases and comparing our view definition language to existing query languages. PMID:20800106
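
    The abstract above describes defining database-style views over RDF content so that applications query the view rather than the raw graph. The sketch below illustrates that idea in Python over a toy triple set; it is not vSPARQL syntax, and the triples, predicate names, and rule format are invented for illustration.

```python
# Illustrative sketch (not vSPARQL syntax): database-style "views" over a
# small RDF-like triple store, where a view is a derived set of triples
# that applications query instead of the raw graph.

def make_view(triples, view_rules):
    """Materialize a view: keep and restructure triples according to rules."""
    view = set()
    for s, p, o in triples:
        for match_p, rewrite_p in view_rules:
            if p == match_p:
                view.add((s, rewrite_p, o))  # expose under the view's vocabulary
    return view

def query(view, subject=None, predicate=None):
    """Query against the view definition, not the underlying graph."""
    return {(s, p, o) for s, p, o in view
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)}

# Hypothetical anatomy triples, loosely in the spirit of the paper's use cases.
graph = {
    ("heart", "hasPart", "left_ventricle"),
    ("heart", "hasPart", "right_ventricle"),
    ("heart", "annotatedBy", "curator_17"),
}

# The view exposes only structural content, renamed to a simpler predicate;
# curation metadata is filtered out.
anatomy_view = make_view(graph, [("hasPart", "part")])
print(query(anatomy_view, subject="heart"))
```

    In vSPARQL itself, a view would be specified declaratively as a query whose results form the queryable content; the function above only mimics that filter-and-rename behavior.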

  5. Building model analysis applications with the Joint Universal Parameter IdenTification and Evaluation of Reliability (JUPITER) API

    USGS Publications Warehouse

    Banta, E.R.; Hill, M.C.; Poeter, E.; Doherty, J.E.; Babendreier, J.

    2008-01-01

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input and output conventions allow application users to access various applications and the analysis methods they embody with a minimum of time and effort. Process models simulate, for example, physical, chemical, and (or) biological systems of interest using phenomenological, theoretical, or heuristic approaches. The types of model analyses supported by the JUPITER API include, but are not limited to, sensitivity analysis, data needs assessment, calibration, uncertainty analysis, model discrimination, and optimization. The advantages provided by the JUPITER API for users and programmers allow for rapid programming and testing of new ideas. Application-specific coding can be in languages other than the Fortran-90 of the API. This article briefly describes the capabilities and utility of the JUPITER API, lists existing applications, and uses UCODE_2005 as an example.
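
    Among the analysis types the JUPITER API supports is sensitivity analysis of a process model's output to its parameters. The sketch below shows that general idea with forward finite differences in Python; it is an illustration only, not the Fortran-90 API, and the "drawdown" model and its parameter names are invented.

```python
# Generic sketch of one analysis type listed above: sensitivity analysis of a
# process model's output with respect to its parameters, via forward finite
# differences. Not the JUPITER API itself.

def sensitivities(model, params, rel_step=1e-6):
    """Return d(output)/d(param) for each named parameter."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        step = abs(value) * rel_step or rel_step
        perturbed = dict(params, **{name: value + step})
        sens[name] = (model(perturbed) - base) / step
    return sens

# Hypothetical process model: steady drawdown ~ pumping rate / transmissivity.
def drawdown(p):
    return p["pumping_rate"] / p["transmissivity"]

s = sensitivities(drawdown, {"pumping_rate": 500.0, "transmissivity": 250.0})
print(s)  # larger magnitudes flag the parameters the output depends on most
```

    A calibration code like UCODE_2005 uses such sensitivities (computed far more carefully) to decide which parameters observations can constrain.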

  6. Improved compliance by BPM-driven workflow automation.

    PubMed

    Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin

    2014-12-01

    Using methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of model-driven development is the improved transparency of processes and the alignment of process diagrams and technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve standardization of laboratory workflows. Model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs for quality assurance. As an automation language that can control every kind of activity or subprocess, BPMN 2.0 is directed at complete workflows in end-to-end relationships. BPMN is applicable as a system-independent and cross-disciplinary graphical language to document all methods in laboratories (i.e., screening procedures or analytical processes). With the BPMN standard, a method for sharing process knowledge among laboratories is thus also available. © 2014 Society for Laboratory Automation and Screening.

  7. 5 CFR 838.302 - Language not acceptable for processing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Language not acceptable for processing... Affecting Employee Annuities § 838.302 Language not acceptable for processing. (a) Qualifying Domestic... accordance with the terminology used in this part. (3) Although any language satisfying the requirements of...

  8. 5 CFR 838.302 - Language not acceptable for processing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Language not acceptable for processing... Affecting Employee Annuities § 838.302 Language not acceptable for processing. (a) Qualifying Domestic... accordance with the terminology used in this part. (3) Although any language satisfying the requirements of...

  9. Query Language for Location-Based Services: A Model Checking Approach

    NASA Astrophysics Data System (ADS)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach differs from existing research in that it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
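
    The core idea above, evaluating a location query as a reachability check over a tree-structured symbolic space model, can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation; the place names and entity identifier are invented.

```python
# Minimal sketch: a symbolic location model organized as a tree, with a query
# that checks whether an entity is located somewhere within a region -- the
# kind of containment question the paper casts as model checking.

class Place:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.occupants = set()

    def satisfies_within(self, entity):
        """True if `entity` occupies this place or any descendant
        (reachability over the tree, in the spirit of modal-logic evaluation)."""
        if entity in self.occupants:
            return True
        return any(child.satisfies_within(entity) for child in self.children)

# Hypothetical building model: building > floor1 > {room_a, room_b}.
room_a, room_b = Place("room_a"), Place("room_b")
floor1 = Place("floor1", [room_a, room_b])
building = Place("building", [floor1])

room_b.occupants.add("printer_42")
print(building.satisfies_within("printer_42"))  # query evaluated over the tree
```

    A full hybrid-logic query language adds nominals and operators over this structure; the recursion above is only the containment core.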

  10. Speaking in Multiple Languages: Neural Correlates of Language Proficiency in Multilingual Word Production

    ERIC Educational Resources Information Center

    Videsott, Gerda; Herrnberger, Barbel; Hoenig, Klaus; Schilly, Edgar; Grothe, Jo; Wiater, Werner; Spitzer, Manfred; Kiefer, Markus

    2010-01-01

    The human brain has the fascinating ability to represent and to process several languages. Although the first and further languages activate partially different brain networks, the linguistic factors underlying these differences in language processing have to be further specified. We investigated the neural correlates of language proficiency in a…

  11. Global Language Identities and Ideologies in an Indonesian University Context

    ERIC Educational Resources Information Center

    Zentz, Lauren

    2012-01-01

    This ethnographic study of language use and English language learners in Central Java, Indonesia examines globalization processes within and beyond language; processes of language shift and change in language ecologies; and critical and comprehensive approaches to the teaching of English around the world. From my position as teacher-researcher and…

  12. Parental Ethnotheories and Family Language Policy in Transnational Adoptive Families

    ERIC Educational Resources Information Center

    Fogle, Lyn Wright

    2013-01-01

    Family language policy refers to explicit and overt decisions parents make about language use and language learning as well as implicit processes that legitimize certain language and literacy practices over others in the home. Studies in family language policy have emphasized the ways in which family-internal processes are shaped by and shape…

  13. Preface to MOST-ONISW 2009

    NASA Astrophysics Data System (ADS)

    Doerr, Martin; Freitas, Fred; Guizzardi, Giancarlo; Han, Hyoil

    Ontology is a cross-disciplinary field concerned with the study of concepts and theories that can be used for representing shared conceptualizations of specific domains. Ontological Engineering is a discipline in computer and information science concerned with the development of techniques, methods, languages and tools for the systematic construction of concrete artifacts capturing these representations, i.e., models (e.g., domain ontologies) and metamodels (e.g., upper-level ontologies). In recent years, there has been a growing interest in the application of formal ontology and ontological engineering to solve modeling problems in diverse areas in computer science such as software and data engineering, knowledge representation, natural language processing, information science, among many others.

  14. Building an information model (with the help of PSL/PSA). [Problem Statement Language/Problem Statement Analyzer

    NASA Technical Reports Server (NTRS)

    Callender, E. D.; Farny, A. M.

    1983-01-01

    Problem Statement Language/Problem Statement Analyzer (PSL/PSA) applications, which were once a one-step process in which product system information was immediately translated into PSL statements, have in light of experience been shown to result in inconsistent representations. These shortcomings have prompted the development of an intermediate step, designated the Product System Information Model (PSIM), which provides a basis for the mutual understanding of customer terminology and the formal, conceptual representation of that product system in a PSA data base. The PSIM is initially captured as a paper diagram, followed by formal capture in the PSL/PSA data base.

  15. An introduction to scripting in Ruby for biologists

    PubMed Central

    Aerts, Jan; Law, Andy

    2009-01-01

    The Ruby programming language has a lot to offer to any scientist with electronic data to process. Not only is the initial learning curve very shallow, but its reflection and meta-programming capabilities allow for the rapid creation of relatively complex applications while still keeping the code short and readable. This paper provides a gentle introduction to this scripting language for researchers without formal informatics training such as many wet-lab scientists. We hope this will provide such researchers an idea of how powerful a tool Ruby can be for their data management tasks and encourage them to learn more about it. PMID:19607723

  16. Using PHP/MySQL to Manage Potential Mass Impacts

    NASA Technical Reports Server (NTRS)

    Hager, Benjamin I.

    2010-01-01

    This paper presents a new application using commercially available software to manage mass properties for spaceflight vehicles. PHP/MySQL (PHP: Hypertext Preprocessor and MySQL: My Structured Query Language) are a web scripting language and a database language commonly used in concert with each other. They open up new opportunities to develop cutting-edge mass properties tools, in particular tools for the management of potential mass impacts (threats and opportunities). The paper begins with an overview of the functions and capabilities of PHP/MySQL. Its focus is on how PHP/MySQL are being used to develop an advanced "web accessible" database system for identifying and managing mass impacts on NASA's Ares I Upper Stage program, managed by the Marshall Space Flight Center. To fully describe this application, examples of the data, search functions, and views are provided to demonstrate not only the function but also the security, ease of use, simplicity, and eye-appeal of this new application. The paper concludes with an overview of other potential mass properties applications and tools that could be developed using PHP/MySQL. The premise behind this paper is that PHP/MySQL are software tools that are easy to use and readily available for the development of cutting-edge mass properties applications. These tools are capable of providing "real-time" searching and status of an active database, automated report generation, and other capabilities that streamline and enhance mass properties management. By using PHP/MySQL, proven existing methods for managing mass properties can be adapted to present-day information technology to accelerate mass properties data gathering, analysis, and reporting, allowing mass properties management to keep pace with today's fast-paced design and development processes.
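
    The kind of database the abstract describes, a table of potential mass impacts queried for "real-time" status, can be sketched as below. This uses Python's built-in sqlite3 in place of the PHP/MySQL stack, and the table schema, column names, and rows are invented stand-ins, not the NASA tool's actual design.

```python
import sqlite3

# Illustrative sketch only: a tiny mass-impacts ("threats and opportunities")
# table with a status roll-up query. sqlite3 stands in for MySQL; the schema
# and data are hypothetical.

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE mass_impact (
        id INTEGER PRIMARY KEY,
        description TEXT,
        kind TEXT CHECK (kind IN ('threat', 'opportunity')),
        delta_kg REAL,
        status TEXT DEFAULT 'open'
    )
""")
con.executemany(
    "INSERT INTO mass_impact (description, kind, delta_kg) VALUES (?, ?, ?)",
    [("Thicker tank wall", "threat", 12.5),
     ("Lighter avionics box", "opportunity", -4.0)],
)

# "Real-time" status roll-up: net open mass impact by kind.
for kind, total in con.execute(
        "SELECT kind, SUM(delta_kg) FROM mass_impact "
        "WHERE status = 'open' GROUP BY kind ORDER BY kind"):
    print(kind, total)
```

    In the web tool described above, queries like this final SELECT would sit behind PHP pages that render searchable status reports.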

  17. Programming Language Software For Graphics Applications

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.

    1993-01-01

    New approach reduces repetitive development of features common to different applications. High-level programming language and interactive environment with access to graphical hardware and software created by adding graphical commands and other constructs to standardized, general-purpose programming language, "Scheme". Designed for use in developing other software incorporating interactive computer-graphics capabilities into application programs. Provides alternative to programming entire applications in C or FORTRAN, specifically ameliorating design and implementation of complex control and data structures typifying applications with interactive graphics. Enables experimental programming and rapid development of prototype software, and yields high-level programs serving as executable versions of software-design documentation.

  18. The Influence of Working Memory and Phonological Processing on English Language Learner Children's Bilingual Reading and Language Acquisition

    ERIC Educational Resources Information Center

    Swanson, H. Lee; Orosco, Michael J.; Lussier, Cathy M.; Gerber, Michael M.; Guzman-Orth, Danielle A.

    2011-01-01

    In this study, we explored whether the contribution of working memory (WM) to children's (N = 471) 2nd language (L2) reading and language acquisition was best accounted for by processing efficiency at a phonological level and/or by executive processes independent of phonological processing. Elementary school children (Grades 1, 2, & 3) whose…

  19. Second Language Processing: When Are First and Second Languages Processed Similarly?

    ERIC Educational Resources Information Center

    Sabourin, Laura; Stowe, Laurie A.

    2008-01-01

    In this article we investigate the effects of first language (L1) on second language (L2) neural processing for two grammatical constructions (verbal domain dependency and grammatical gender), focusing on the event-related potential P600 effect, which has been found in both L1 and L2 processing. Native Dutch speakers showed a P600 effect for both…

  20. Translated Versions of Voice Handicap Index (VHI)-30 across Languages: A Systematic Review

    PubMed Central

    SEIFPANAHI, Sadegh; JALAIE, Shohreh; NIKOO, Mohammad Reza; SOBHANI-RAD, Davood

    2015-01-01

    Background: The aim of this systematic review is to compare different VHI-30 versions across languages with regard to their validity, reliability, and translation process. Methods: Articles were extracted systematically from several major databases, including Cochrane, Google Scholar, MEDLINE (via PubMed), ScienceDirect, and Web of Science, and from their reference lists, using the keyword "Voice Handicap Index" with limitations only on title and time of publication (1997 to 2014). Other exclusions (e.g., non-English papers and other VHI versions) were applied manually after the papers were studied. Three authors appraised the methodology of the papers using the 12-item diagnostic test checklist from the "Critical Appraisal Skills Programme" (CASP) site. After all screenings, papers that met the eligibility criteria, reporting translation, validity, and reliability processes, were included in this review. Results: Twelve non-duplicate articles in different languages remained. All of them reported validity, reliability, and translation methods, which are presented in detail in this review. Conclusion: The preferred translation method in the gathered papers was Brislin's classic back-translation model (1970); although the procedure was rarely performed completely, it was more prominent than other translation procedures. High test-retest reliability, high internal consistency, and moderate construct validity across languages with regard to all three VHI-30 domains confirm the applicability of translated VHI-30 versions across languages. PMID:26056664

  1. Bilingualism, Mind, and Brain.

    PubMed

    Kroll, Judith F; Dussias, Paola E; Bice, Kinsey; Perrotti, Lauren

    2015-01-01

    The use of two or more languages is common in most of the world. Yet, until recently, bilingualism was considered to be a complicating factor for language processing, cognition, and the brain. The past 20 years have witnessed an upsurge of research on bilingualism to examine language acquisition and processing, their cognitive and neural bases, and the consequences that bilingualism holds for cognition and the brain over the life span. Contrary to the view that bilingualism complicates the language system, this new research demonstrates that all of the languages that are known and used become part of the same language system. The interactions that arise when two languages are in play have consequences for the mind and the brain and, indeed, for language processing itself, but those consequences are not additive. Thus, bilingualism helps reveal the fundamental architecture and mechanisms of language processing that are otherwise hidden in monolingual speakers.

  2. Bilingualism, Mind, and Brain

    PubMed Central

    Dussias, Paola E.; Bice, Kinsey; Perrotti, Lauren

    2016-01-01

    The use of two or more languages is common in most of the world. Yet, until recently, bilingualism was considered to be a complicating factor for language processing, cognition, and the brain. The past 20 years have witnessed an upsurge of research on bilingualism to examine language acquisition and processing, their cognitive and neural bases, and the consequences that bilingualism holds for cognition and the brain over the life span. Contrary to the view that bilingualism complicates the language system, this new research demonstrates that all of the languages that are known and used become part of the same language system. The interactions that arise when two languages are in play have consequences for the mind and the brain and, indeed, for language processing itself, but those consequences are not additive. Thus, bilingualism helps reveal the fundamental architecture and mechanisms of language processing that are otherwise hidden in monolingual speakers. PMID:28642932

  3. Is Vocabulary Growth Influenced by the Relations among Words in a Language Learner's Vocabulary?

    ERIC Educational Resources Information Center

    Sailor, Kevin M.

    2013-01-01

    Several recent studies have explored the applicability of the preferential attachment principle to account for vocabulary growth. According to this principle, network growth can be described by a process in which existing nodes recruit new nodes with a probability that is an increasing function of their connectivity within the existing network.…
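
    The preferential attachment principle named above, new nodes attach to existing nodes with probability proportional to their connectivity, can be simulated in a few lines. This is a generic illustration of the principle, not the study's model; the seed and sizes are arbitrary.

```python
import random

# Sketch of preferential attachment: each new "word" (node) links to an
# existing word chosen with probability proportional to its current degree,
# so well-connected words tend to recruit more new words.

def grow_vocabulary(n_words, seed=0):
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}      # start from two connected words
    edges = [(0, 1)]
    for new in range(2, n_words):
        # Sample an attachment target weighted by degree (the "preference").
        nodes, weights = zip(*degree.items())
        target = rng.choices(nodes, weights=weights)[0]
        edges.append((new, target))
        degree[new] = 1
        degree[target] += 1
    return degree, edges

degree, edges = grow_vocabulary(200)
print(len(degree), len(edges), max(degree.values()))
```

    Run repeatedly, this process yields a few high-degree "hub" words and many low-degree ones, the skewed connectivity pattern such network-growth accounts predict.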

  4. Applying a Qualitative Modeling Shell to Process Diagnosis: The Caster System. ONR Technical Report #16.

    ERIC Educational Resources Information Center

    Thompson, Timothy F.; Clancey, William J.

    This report describes the application of a shell expert system from the medical diagnostic system, Neomycin, to Caster, a diagnostic system for malfunctions in industrial sandcasting. This system was developed to test the hypothesis that starting with a well-developed classification procedure and a relational language for stating the…

  5. Evaluating English Language Teaching Software for Kids: Education or Entertainment or Both?

    ERIC Educational Resources Information Center

    Kazanci, Zekeriya; Okan, Zuhal

    2009-01-01

    The purpose of this study is to offer a critical consideration of instructional software designed particularly for children. Since the early 1990s computer applications integrating education with entertainment have been adopted on a large scale by both educators and parents. It is expected that through edutainment software the process of learning…

  6. Optical Inference Machines

    DTIC Science & Technology

    1988-06-27

    Keywords: optical artificial intelligence; optical inference engines; optical logic; optical information processing. ... common. They arise in areas such as expert systems and other artificial intelligence systems. In recent years, the computer science language PROLOG has ... optical processors should in principle be well suited for artificial intelligence applications ... symbolic logic processing ...

  7. Get That Job! A Project on the German Job Application Process

    ERIC Educational Resources Information Center

    Magedera-Hofhansl, Hanna

    2016-01-01

    With decreasing numbers of students studying German at Higher Education Institutions in the United Kingdom, there is an increasing demand for graduate Germanists. This project, designed for C1/C2 level students according to the Common European Framework of Reference for languages, prepares finalist students for a job market in which UK and German…

  8. Intercultural Communication and the Decision-Making Process: Americans and Malaysians in a Cooperative University Setting.

    ERIC Educational Resources Information Center

    Wilhelm, Kim Hughes

    A study investigated the application of Geert Hofstede's theory of cultural dimensions in management to the situation of Malaysian (n=8) and American (n=4) instructors in implementing a new English-as-a-Second-Language curriculum in Malaysia. American and Malaysian cultures are compared on four dimensions: social differentiation by gender; desire…

  9. Mastering Overdetection and Underdetection in Learner-Answer Processing: Simple Techniques for Analysis and Diagnosis

    ERIC Educational Resources Information Center

    Blanchard, Alexia; Kraif, Olivier; Ponton, Claude

    2009-01-01

    This paper presents a "didactic triangulation" strategy to cope with the problem of reliability of NLP applications for computer-assisted language learning (CALL) systems. It is based on the implementation of basic but well mastered NLP techniques and puts the emphasis on an adapted gearing between computable linguistic clues and didactic features…

  10. Identifying Head Start Children for Higher Tiers of Language and Literacy Instruction

    ERIC Educational Resources Information Center

    Albritton, Kizzy; Stuckey, Adrienne; Patton Terry, Nicole

    2017-01-01

    The application of Response to Intervention (RtI) to early childhood settings presents many opportunities and challenges; however, it remains unclear how best to implement this framework in settings in which children at risk of academic difficulty are overrepresented, like Head Start. One of the first steps in implementing any RtI process is the…

  11. Auditory processing theories of language disorders: past, present, and future.

    PubMed

    Miller, Carol A

    2011-07-01

    The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory processing disorder (APD), specific language impairment (SLI), and dyslexia. The history of auditory processing theories of these 3 disorders is described, points of convergence and controversy within and among the different branches of research literature are considered, and the influence of research on practice is discussed. The theoretical and clinical contributions of neurophysiological methods are also reviewed, and suggested approaches for critical reading of the research literature are provided. Research on the role of auditory processing in communication disorders springs from a variety of theoretical perspectives and assumptions, and this variety, combined with controversies over the interpretation of research results, makes it difficult to draw clinical implications from the literature. Neurophysiological research methods are a promising route to better understanding of auditory processing. Progress in theory development and its clinical application is most likely to be made when researchers from different disciplines and theoretical perspectives communicate clearly and combine the strengths of their approaches.

  12. Five heads are better than one: preliminary results of team-based learning in a communication disorders graduate course.

    PubMed

    Epstein, Baila

    2016-01-01

    Clinical problem-solving is fundamental to the role of the speech-language pathologist in both the diagnostic and treatment processes. The problem-solving often involves collaboration with clients and their families, supervisors, and other professionals. Considering the importance of cooperative problem-solving in the profession, graduate education in speech-language pathology should provide experiences to foster the development of these skills. One evidence-based pedagogical approach that directly targets these abilities is team-based learning (TBL). TBL is a small-group instructional method that focuses on students' in-class application of conceptual knowledge in solving complex problems that they will likely encounter in their future clinical careers. The purpose of this pilot study was to investigate the educational outcomes and students' perceptions of TBL in a communication disorders graduate course on speech and language-based learning disabilities. Nineteen graduate students (mean age = 26 years, SD = 4.93), divided into three groups of five students and one group of four students, who were enrolled in a required graduate course, participated by fulfilling the key components of TBL: individual student preparation; individual and team readiness assurance tests (iRATs and tRATs) that assessed preparedness to apply course content; and application activities that challenged teams to solve complex and authentic clinical problems using course material. Performance on the tRATs was significantly higher than the individual students' scores on the iRATs (p < .001, Cohen's d = 4.08). Students generally reported favourable perceptions of TBL on an end-of-semester questionnaire. Qualitative analysis of responses to open-ended questions, organized thematically, indicated students' high satisfaction with application activities, discontent with the RATs, and recommendations for increased lecture time in the TBL process. The outcomes of this pilot study suggest the effectiveness of TBL as an instructional method that provides student teams with opportunities to apply course content in problem-solving activities followed by immediate feedback. This research also addresses the dearth of empirical information on how graduate programmes in speech-language pathology bridge students' didactic learning and clinical practice. Future studies should examine the utility of this approach in other courses within the field and with more heterogeneous student populations. © 2015 Royal College of Speech and Language Therapists.

  13. Language Is a Complex Adaptive System: Position Paper

    ERIC Educational Resources Information Center

    Beckner, Clay; Blythe, Richard; Bybee, Joan; Christiansen, Morten H.; Croft, William; Ellis, Nick C.; Holland, John; Ke, Jinyun; Larsen-Freeman, Diane; Schoenemann, Tom

    2009-01-01

    Language has a fundamentally social function. Processes of human interaction along with domain-general cognitive processes shape the structure and knowledge of language. Recent research in the cognitive sciences has demonstrated that patterns of use strongly affect how language is acquired, is used, and changes. These processes are not independent…

  14. Thread, Web and Tapestry-making: Processes of Development and Language.

    ERIC Educational Resources Information Center

    Robinson, Clinton D. W.

    1999-01-01

    Reviews the major features of participatory development, asking how far similar processes are applied in promoting the use of local languages. Argues that language development processes must figure into participatory approaches to develop multilingual environments and that attention to language must proceed along similar participatory lines. (CMK)

  15. Testing the Shallow Structure Hypothesis in L2 Japanese

    ERIC Educational Resources Information Center

    Smith, Megan

    2016-01-01

    Language processing heuristics are one of the possible sources of divergence between first and second language systems. The Shallow Structure Hypothesis (SSH) (Clahsen and Felser, 2006) proposes that non-native language processing relies primarily on semantic, and not syntactic, information, and that second language (L2) processing is therefore…

  16. 5 CFR 838.803 - Language not acceptable for processing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Language not acceptable for processing... Awarding Former Spouse Survivor Annuities § 838.803 Language not acceptable for processing. (a) Qualifying... drafted in accordance with the terminology used in this part. (3) Although any language satisfying the...

  17. 5 CFR 838.803 - Language not acceptable for processing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Language not acceptable for processing... Awarding Former Spouse Survivor Annuities § 838.803 Language not acceptable for processing. (a) Qualifying... drafted in accordance with the terminology used in this part. (3) Although any language satisfying the...

  18. BPMN as a Communication Language for the Process- and Event-Oriented Perspectives in Fact-Oriented Conceptual Models

    NASA Astrophysics Data System (ADS)

    Bollen, Peter

    In this paper we will show how the OMG specification of BPMN (Business Process Modeling Notation) can be used to model the process- and event-oriented perspectives of an application subject area. We will illustrate how the fact-oriented conceptual models for the information-, process- and event perspectives can be used in a 'bottom-up' approach for creating a BPMN model in combination with other approaches, e.g. the use of a textual description. We will use the common doctor's office example as a running example in this article.

  19. The multilingual matrix test: Principles, applications, and comparison across languages: A review.

    PubMed

    Kollmeier, Birger; Warzybok, Anna; Hochmuth, Sabine; Zokoll, Melanie A; Uslar, Verena; Brand, Thomas; Wagener, Kirsten C

    2015-01-01

    A review of the development, evaluation, and application of the so-called 'matrix sentence test' for speech intelligibility testing in a multilingual society is provided. The format allows for repeated use with the same patient in her or his native language even if the experimenter does not understand the language. Using a closed-set format, the syntactically fixed, semantically unpredictable sentences (e.g. 'Peter bought eight white ships') provide a vocabulary of 50 words (10 alternatives for each position in the sentence). The principles (i.e. construction, optimization, evaluation, and validation) for 14 different languages are reviewed. Studies of the influence of talker, language, noise, the training effect, open vs. closed conduct of the test, and the subjects' language proficiency are reported and application examples are discussed. The optimization principles result in a steep intelligibility function and a high homogeneity of the speech materials presented and test lists employed, yielding a high efficiency and excellent comparability across languages. The characteristics of speakers generally dominate the differences across languages. The matrix test format with the principles outlined here is recommended for producing efficient, reliable, and comparable speech reception thresholds across different languages.
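
    The construction described above, a base matrix of 10 alternatives for each of 5 sentence positions (50 words) from which syntactically fixed, semantically unpredictable sentences are drawn, can be sketched as below. The word lists are invented English stand-ins, not an official test matrix from any of the 14 languages reviewed.

```python
import random

# Sketch of matrix-sentence generation: one word is drawn per position, so
# syntax is fixed ("Name verb number adjective objects") while semantics
# remain unpredictable. Word lists are hypothetical stand-ins.

MATRIX = {
    "name":   ["Peter", "Kathy", "Lucas", "Nina", "Alan",
               "Doris", "Steven", "Rachel", "Thomas", "Ulrich"],
    "verb":   ["bought", "sees", "got", "gives", "kept",
               "sold", "wins", "has", "likes", "ordered"],
    "number": ["two", "three", "four", "five", "six",
               "seven", "eight", "nine", "twelve", "some"],
    "adjective": ["white", "big", "old", "dark", "green",
                  "cheap", "heavy", "new", "small", "red"],
    "object": ["ships", "chairs", "rings", "stones", "bikes",
               "cups", "shoes", "books", "spoons", "flowers"],
}

def matrix_sentence(rng):
    """Draw one word per position; closed-set scoring checks each position."""
    return " ".join(rng.choice(words) for words in MATRIX.values())

rng = random.Random(1)
print(matrix_sentence(rng))  # e.g. a sentence shaped like 'Peter bought eight white ships'
```

    Because every test sentence comes from the same 50-word closed set, the same listener can be retested repeatedly, which is the property the review highlights for clinical use.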

  20. APGEN Scheduling: 15 Years of Experience in Planning Automation

    NASA Technical Reports Server (NTRS)

    Maldague, Pierre F.; Wissler, Steve; Lenda, Matthew; Finnerty, Daniel

    2014-01-01

    In this paper, we discuss the scheduling capability of APGEN (Activity Plan Generator), a multi-mission planning application that is part of the NASA AMMOS (Advanced Multi- Mission Operations System), and how APGEN scheduling evolved over its applications to specific Space Missions. Our analysis identifies two major reasons for the successful application of APGEN scheduling to real problems: an expressive DSL (Domain-Specific Language) for formulating scheduling algorithms, and a well-defined process for enlisting the help of auxiliary modeling tools in providing high-fidelity, system-level simulations of the combined spacecraft and ground support system.

  1. Research in speech communication.

    PubMed

    Flanagan, J

    1995-10-24

    Advances in digital speech processing are now supporting application and deployment of a variety of speech technologies for human/machine communication. In fact, new businesses are rapidly forming around these technologies. But these capabilities are of little use unless society can afford them. Happily, explosive advances in microelectronics over the past two decades have assured affordable access to this sophistication as well as to the underlying computing technology. The research challenges in speech processing remain in the traditionally identified areas of recognition, synthesis, and coding. These three areas have typically been addressed individually, often with significant isolation among the efforts. But they are all facets of the same fundamental issue--how to represent and quantify the information in the speech signal. This implies deeper understanding of the physics of speech production, the constraints that the conventions of language impose, and the mechanism for information processing in the auditory system. In ongoing research, therefore, we seek more accurate models of speech generation, better computational formulations of language, and realistic perceptual guides for speech processing--along with ways to coalesce the fundamental issues of recognition, synthesis, and coding. Successful solutions will yield the long-sought dictation machine, high-quality synthesis from text, and the ultimate in low bit-rate transmission of speech. They will also open the door to language-translating telephony, where the synthetic foreign translation can be in the voice of the originating talker.

  2. GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing

    NASA Astrophysics Data System (ADS)

    Johl, John T.; Baker, Nick C.

    1988-10-01

    The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.
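
    The 2-D spatial convolution mentioned above illustrates why such workloads vectorize well: every output element is an independent multiply-accumulate. A minimal reference implementation (correlation form, valid mode, plain Python rather than the MDAC assembly language) looks like:

```python
def convolve2d(image, kernel):
    # Valid-mode cross-correlation (the flip-free form most image-processing
    # pipelines use). Each output element is an independent multiply-accumulate,
    # which is what makes the operation easy to parallelize within a vector
    # instruction.
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(ow)]
            for i in range(oh)]
```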

  3. Neurolinguistic approach to natural language processing with applications to medical text analysis.

    PubMed

    Duch, Włodzisław; Matykiewicz, Paweł; Pestian, John

    2008-12-01

    Understanding written or spoken language presumably involves spreading neural activation in the brain. This process may be approximated by spreading activation in semantic networks, providing enhanced representations that involve concepts not found directly in the text. The approximation of this process is of great practical and theoretical interest. Although activations of the neural circuits involved in word representation change rapidly in time, snapshots of these activations spreading through associative networks may be captured in a vector model. Concepts of a similar type activate larger clusters of neurons, priming areas in the left and right hemispheres. Analysis of recent brain imaging experiments shows the importance of non-verbal clusterization in the right hemisphere. Medical ontologies enable development of a large-scale practical algorithm to re-create pathways of spreading neural activations. First, concepts of a specific semantic type are identified in the text, and then all related concepts of the same type are added to the text, providing expanded representations. To avoid rapid growth of the extended feature space, after each step only the most useful features that increase document clusterization are retained. Short hospital discharge summaries are used to illustrate how this process works on real, very noisy data. Expanded texts show significantly improved clustering and may be classified with much higher accuracy. Although better approximations to the spreading of neural activations may be devised, the practical approach presented in this paper helps to discover pathways used by the brain to process specific concepts, and may be used in large-scale applications.
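
    The expansion step described above can be sketched as repeated lookup in a concept graph. The miniature ontology below is entirely hypothetical (the paper uses large medical ontologies); it only shows the shape of the algorithm: each step adds related concepts of the same semantic type to the document's feature set:

```python
# Hypothetical miniature ontology: concept -> related concepts of the same
# semantic type. One expansion step approximates one wave of spreading
# activation through the associative network.
ONTOLOGY = {
    "aspirin": {"ibuprofen", "nsaid"},
    "nsaid": {"analgesic"},
    "fever": {"pyrexia"},
}

def expand(features, ontology, steps=1):
    """Grow a document's feature set by following ontology links."""
    features = set(features)
    for _ in range(steps):
        added = set()
        for concept in features:
            added |= ontology.get(concept, set())
        if added <= features:  # fixed point reached; stop early
            break
        features |= added
    return features
```

    In the paper's full algorithm, a feature-selection pass follows each step so that only expansions which improve document clusterization are retained; that filter is omitted here.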

  4. The Now-or-Never bottleneck: A fundamental constraint on language.

    PubMed

    Christiansen, Morten H; Chater, Nick

    2016-01-01

    Memory is fleeting. New material rapidly obliterates previous material. How, then, can the brain deal successfully with the continual deluge of linguistic input? We argue that, to deal with this "Now-or-Never" bottleneck, the brain must compress and recode linguistic input as rapidly as possible. This observation has strong implications for the nature of language processing: (1) the language system must "eagerly" recode and compress linguistic input; (2) as the bottleneck recurs at each new representational level, the language system must build a multilevel linguistic representation; and (3) the language system must deploy all available information predictively to ensure that local linguistic ambiguities are dealt with "Right-First-Time"; once the original input is lost, there is no way for the language system to recover. This is "Chunk-and-Pass" processing. Similarly, language learning must also occur in the here and now, which implies that language acquisition is learning to process, rather than inducing, a grammar. Moreover, this perspective provides a cognitive foundation for grammaticalization and other aspects of language change. Chunk-and-Pass processing also helps explain a variety of core properties of language, including its multilevel representational structure and duality of patterning. This approach promises to create a direct relationship between psycholinguistics and linguistic theory. More generally, we outline a framework within which to integrate often disconnected inquiries into language processing, language acquisition, and language change and evolution.

  5. Psycholinguistics: a cross-language perspective.

    PubMed

    Bates, E; Devescovi, A; Wulfeck, B

    2001-01-01

    Cross-linguistic studies are essential to the identification of universal processes in language development, language use, and language breakdown. Comparative studies in all three areas are reviewed, demonstrating powerful differences across languages in the order in which specific structures are acquired by children, the sparing and impairment of those structures in aphasic patients, and the structures that normal adults rely upon most heavily in real-time word and sentence processing. It is proposed that these differences reflect a cost-benefit trade-off among universal mechanisms for learning and processing (perception, attention, motor planning, memory) that are critical for language, but are not unique to language.

  6. Model-based query language for analyzing clinical processes.

    PubMed

    Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris

    2013-01-01

    Nowadays, large databases of clinical process data exist in hospitals. However, these data are rarely used to their full extent. In order to perform queries on hospital processes, one must either choose from predefined queries or develop queries using an MS Excel-type software system, which is not always a trivial task. In this paper we propose a new query language for analyzing clinical processes that is easily comprehensible to non-IT professionals as well. We develop this language on top of a process modeling language, which is also described in this paper. Prototypes of both languages have already been verified using real examples from hospitals.

  7. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL is an open standard adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code-generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
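
    The meta-programming idea can be illustrated with a toy generator: one abstract kernel body rendered as either CUDA or OpenCL source text. BOAST itself is a Ruby-based EDSL, so everything below is a simplified sketch of the concept, not its actual API:

```python
# One abstract kernel body, two concrete targets. Only the launch-index
# expression and address-space qualifiers differ between CUDA and OpenCL.
BODY = "if (i < n) out[i] = a * x[i] + y[i];"

TEMPLATES = {
    "cuda": """__global__ void axpy(int n, float a, const float *x,
                     const float *y, float *out) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  {body}
}""",
    "opencl": """__kernel void axpy(int n, float a, __global const float *x,
                   __global const float *y, __global float *out) {
  int i = get_global_id(0);
  {body}
}""",
}

def render(target, body=BODY):
    # Plain string substitution keeps the literal braces of the C code intact.
    return TEMPLATES[target].replace("{body}", body)
```

    A real generator such as BOAST additionally explores unrolling, vectorization, and tiling variants of the body and benchmarks them per target; the fixed templates here only show the single-source, dual-target principle.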

  8. Integrating UIMA annotators in a web-based text processing framework.

    PubMed

    Chen, Xiang; Arnold, Corey W

    2013-01-01

    The Unstructured Information Management Architecture (UIMA) [1] framework is a growing platform for natural language processing (NLP) applications. However, such applications may be difficult for non-technical users to deploy. This project presents a web-based framework that wraps UIMA-based annotator systems in a graphical user interface for researchers and clinicians, and in a web service for developers. An annotator that extracts data elements from lung cancer radiology reports is presented to illustrate the use of the system. Annotation results from the web system can be exported to multiple formats for users to employ in other aspects of their research and workflow. This project demonstrates the benefits of a lay-user interface for complex NLP applications. Efforts such as this can lead to increased interest in and support for NLP work in the clinical domain.

  9. Common Problems of Mobile Applications for Foreign Language Testing

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus; Magal-Royo, Teresa; Lopez, Jose Luis Gimenez

    2011-01-01

    As the use of mobile learning applications has become common throughout the world, new concerns have appeared in the classroom, in human interaction in software engineering, and in ergonomics. New foreign-language tests for a range of purposes have also become increasingly common. However, studies interrelating language tests…

  10. Visual Basic Applications to Physics Teaching

    ERIC Educational Resources Information Center

    Chitu, Catalin; Inpuscatu, Razvan Constantin; Viziru, Marilena

    2011-01-01

    Derived from the BASIC language, VB (Visual Basic) is a programming language focused on the visual interface component. With graphics and functional components implemented, the programmer is able to combine and reuse these components to build the desired application in a relatively short time. VB is a useful tool in physics teaching by creating…

  11. State of the App: A Taxonomy and Framework for Evaluating Language Learning Mobile Applications

    ERIC Educational Resources Information Center

    Rosell-Aguilar, Fernando

    2017-01-01

    The widespread growth in availability and use of smartphones and tablets has facilitated an unprecedented avalanche of new software applications with language learning and teaching capabilities. However, little has been published in terms of effective design and evaluation of language learning apps. This article reviews current research about the…

  12. The Effects of Global Education in the English Language Conversation Classroom

    ERIC Educational Resources Information Center

    Omidvar, Reza; Sukumar, Benjamin

    2013-01-01

    Global education is the backbone of balanced teaching. This is also applicable in the second language teaching domain where its application could result in enhancing global awareness and the linguistic competence of learners. It is, however, important to consider the platform of teaching English to speakers of other languages where the…

  13. Managing Scientific Software Complexity with Bocca and CCA

    DOE PAGES

    Allan, Benjamin A.; Norris, Boyana; Elwasif, Wael R.; ...

    2008-01-01

    In high-performance scientific software development, the emphasis is often on short time to first solution. Even when the development of new components mostly reuses existing components or libraries and only small amounts of new code must be created, dealing with the component glue code and software build processes to obtain complete applications is still tedious and error-prone. Component-based software meant to reduce complexity at the application level increases complexity to the extent that the user must learn and remember the interfaces and conventions of the component model itself. To address these needs, we introduce Bocca, the first tool to enable application developers to perform rapid component prototyping while maintaining robust software-engineering practices suitable to HPC environments. Bocca provides project management and a comprehensive build environment for creating and managing applications composed of Common Component Architecture components. Of critical importance for high-performance computing (HPC) applications, Bocca is designed to operate in a language-agnostic way, simultaneously handling components written in any of the languages commonly used in scientific applications: C, C++, Fortran, Python and Java. Bocca automates the tasks related to the component glue code, freeing the user to focus on the scientific aspects of the application. Bocca embraces the philosophy pioneered by Ruby on Rails for web applications: start with something that works, and evolve it to the user's purpose.

  14. One Mind, Two Languages: Bilingual Language Processing. Explaining Linguistics.

    ERIC Educational Resources Information Center

    Nicol, Janet L., Ed.

    This collection of papers presents research on language processing among second language learners and bilinguals. The nine papers include the following: (1) "The Bilingual's Language Modes" (Francois Grosjean); (2) "The Voicing Contrast in English and Spanish: The Relationship between Perception and Production" (Mary L. Zampini…

  15. 75 FR 26942 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-13

    ... Management. Office of English Language Acquisitions Type of Review: Reinstatement. Title: Application for Grants Under English Language Acquisition and Language Enhancement: Native American and Alaska Native... Grants Under English Language Acquisition and Language Enhancement: Native American and Alaska Native...

  16. Information and Language for Effective Communication

    ERIC Educational Resources Information Center

    Pitoy, Sammy P.

    2012-01-01

    Information and Language for Effective Communication (ILEC) is a language teaching approach emphasizing learners' extensive exposure in different language communicative sources. In ILEC, the language learners will first receive instructions of ILEC principles and application. Afterwards, they will receive autonomous, direct, purposeful, and…

  17. Infant discrimination of rapid auditory cues predicts later language impairment.

    PubMed

    Benasich, April A; Tallal, Paula

    2002-10-17

    The etiology and mechanisms of specific language impairment (SLI) in children are unknown. Differences in basic auditory processing abilities have been suggested to underlie their language deficits. Studies suggest that the neuropathology implicated in such impairments, such as atypical patterns of cerebral lateralization and cortical cellular anomalies, likely arises early in life. Such anomalies may play a part in the rapid processing deficits seen in this disorder. However, the prospective, longitudinal studies in infant populations that are critical to examining these hypotheses have not been done. In the study described, performance on brief, rapidly presented, successive auditory processing and perceptual-cognitive tasks was assessed in two groups of infants: normal control infants with no family history of language disorders and infants from families with a positive family history of language impairment. Initial assessments were obtained when infants were 6-9 months of age (M = 7.5 months), and the sample was then followed through age 36 months. At the first visit, infants' processing of rapid auditory cues as well as global processing speed and memory were assessed. Significant differences in mean thresholds were seen in infants born into families with a history of SLI as compared with controls. Examination of relations between infant processing abilities and emerging language through 24 months of age revealed that the threshold for rapid auditory processing at 7.5 months was the single best predictor of language outcome. At age 3, rapid auditory processing threshold and being male together predicted 39-41% of the variance in language outcome. Thus, early deficits in rapid auditory processing abilities both precede and predict subsequent language delays. These findings support an essential role for basic nonlinguistic, central auditory processes, particularly rapid spectrotemporal processing, in early language development. Further, these findings provide a temporal diagnostic window during which future language impairments may be addressed.
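
    The "39-41% of the variance" figure above is a coefficient of determination (R²). A minimal single-predictor version of that computation, on synthetic data rather than the study's, can be written as:

```python
def r_squared(x, y):
    # Fit y ~ a + b*x by ordinary least squares and return the proportion
    # of variance in y explained by x -- the sense in which the study
    # reports that auditory threshold plus sex predicted 39-41% of the
    # variance in language outcome (their model uses two predictors).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```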

  18. A Cognitive Approach to the Development of Early Language

    ERIC Educational Resources Information Center

    Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.

    2009-01-01

    A controversial issue in the field of language development is whether language emergence and growth is dependent solely on processes specifically tied to language or could also depend on basic cognitive processes that affect all aspects of cognitive competence (domain-general processes). The present article examines this issue using a large…

  19. The Promise of NLP and Speech Processing Technologies in Language Assessment

    ERIC Educational Resources Information Center

    Chapelle, Carol A.; Chung, Yoo-Ree

    2010-01-01

    Advances in natural language processing (NLP) and automatic speech recognition and processing technologies offer new opportunities for language testing. Despite their potential uses on a range of language test item types, relatively little work has been done in this area, and it is therefore not well understood by test developers, researchers or…

  20. Flexibility in Young Second-Language Learners: Examining the Language Specificity of Orthographic Processing

    ERIC Educational Resources Information Center

    Deacon, S. Helene; Wade-Woolley, Lesly; Kirby, John R.

    2009-01-01

    This study examines whether orthographic processing transfers across languages to reading when the writing systems under acquisition are sufficiently related. We conducted a study with 76 7-year-old English-first-language children in French immersion. Measures of English and French orthographic processing (orthographic choice tasks) and…

  1. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The benefits of automatic application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is shown.
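
    The general shape of such a translator — declarative tabular rows in, imperative controller logic out — can be sketched as follows. The row format and names below are entirely hypothetical (the LCS tabular spec and its DSL are not reproduced in this abstract):

```python
# Hypothetical tabular-spec rows: each row names a step, a guard
# condition, and an action, in the spirit of a table a systems engineer
# could fill in without programming experience.
SPEC = [
    {"step": 1, "when": "valve_cmd == OPEN", "do": "energize(valve_relay)"},
    {"step": 2, "when": "pressure > limit",  "do": "close_all_valves()"},
]

def generate(spec):
    """Render the rows as guarded actions, ordered by step number."""
    lines = ["# auto-generated controller logic"]
    for row in sorted(spec, key=lambda r: r["step"]):
        lines.append(f"if {row['when']}:")
        lines.append(f"    {row['do']}")
    return "\n".join(lines)
```

    A production generator would of course emit ladder logic for the target PLC rather than guarded statements, and would validate the table against the DSL's keywords first; the sketch only shows the table-to-code direction of the translation.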

  2. Evidence for shared cognitive processing of pitch in music and language.

    PubMed

    Perrachione, Tyler K; Fedorenko, Evelina G; Vinke, Louis; Gibson, Edward; Dilley, Laura C

    2013-01-01

    Language and music epitomize the complex representational and computational capacities of the human mind. Strikingly similar in their structural and expressive features, a longstanding question is whether the perceptual and cognitive mechanisms underlying these abilities are shared or distinct--either from each other or from other mental processes. One prominent feature shared between language and music is signal encoding using pitch, conveying pragmatics and semantics in language and melody in music. We investigated how pitch processing is shared between language and music by measuring consistency in individual differences in pitch perception across language, music, and three control conditions intended to assess basic sensory and domain-general cognitive processes. Individuals' pitch perception abilities in language and music were most strongly related, even after accounting for performance in all control conditions. These results provide behavioral evidence, based on patterns of individual differences, that is consistent with the hypothesis that cognitive mechanisms for pitch processing may be shared between language and music.

  3. An application of software design and documentation language. [Galileo spacecraft command and data subsystem

    NASA Technical Reports Server (NTRS)

    Callender, E. D.; Clarkson, T. B.; Frasier, C. E.

    1980-01-01

    The software design and documentation language (SDDL) is a general-purpose processor supporting a language for the description of any system, structure, concept, or procedure that may be presented from the viewpoint of a collection of hierarchical entities linked together by means of binary connections. The language comprises a set of rules of syntax, primitive construct classes (module, block, and module invocation), and language control directives. The result is a language with a fixed grammar, variable alphabet and punctuation, and an extendable vocabulary. The application of SDDL to the detailed software design of the Command and Data Subsystem for the Galileo spacecraft is discussed. A set of constructs was developed and applied. These constructs are evaluated and examples of their application are considered.

  4. Shuttle-Data-Tape XML Translator

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
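
    The abstract describes an XML file that defines how each record and field is parsed. A toy configuration in that spirit (the actual JSDTImport schema is not public, so the element and attribute names below are invented) might map fixed-width ASCII columns to named fields:

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration: each <field> gives a name and the character
# columns it occupies in a fixed-width ASCII record.
CONFIG = """
<record name="measurement">
  <field name="id"    start="0" end="4"/>
  <field name="value" start="4" end="10"/>
</record>
"""

def parse_record(line, config_xml=CONFIG):
    """Slice one ASCII record into named fields per the XML definition."""
    root = ET.fromstring(config_xml)
    return {f.get("name"): line[int(f.get("start")):int(f.get("end"))].strip()
            for f in root.findall("field")}
```

    Driving the parser from a declarative file like this is what lets users change the record layout and the resulting database structure without modifying the translator's code.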

  5. vSPARQL: a view definition language for the semantic web.

    PubMed

    Shaw, Marianne; Detwiler, Landon T; Noy, Natalya; Brinkley, James; Suciu, Dan

    2011-02-01

    Translational medicine applications would like to leverage the biological and biomedical ontologies, vocabularies, and data sets available on the semantic web. We present a general solution for RDF information set reuse inspired by database views. Our view definition language, vSPARQL, allows applications to specify the exact content that they are interested in and how that content should be restructured or modified. Applications can access relevant content by querying against these view definitions. We evaluate the expressivity of our approach by defining views for practical use cases and comparing our view definition language to existing query languages.

  6. TurboTech Technical Evaluation Automated System

    NASA Technical Reports Server (NTRS)

    Tiffany, Dorothy J.

    2009-01-01

    TurboTech software is a Web-based process that simplifies and semiautomates technical evaluation of NASA proposals for Contracting Officer's Technical Representatives (COTRs). At the time of this reporting, there have been no set standards or systems for training new COTRs in technical evaluations. This new process provides boilerplate text in response to interview-style questions. This text is collected into a Microsoft Word document that can then be further edited to conform to specific cases. By providing technical language and a structured format, TurboTech allows the COTRs to concentrate more on the actual evaluation, and less on deciding what language would be most appropriate. Since the actual word choice is one of the more time-consuming parts of a COTR's job, this process should allow for an increase in the quantity of proposals evaluated. TurboTech is applicable to composing technical evaluations of contractor proposals, task and delivery orders, change order modifications, requests for proposals, new work modifications, and task assignments, as well as any changes to existing contracts.

  7. Real-time lexical comprehension in young children learning American Sign Language.

    PubMed

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality.

  8. The Jupyter/IPython architecture: a unified view of computational research, from interactive exploration to communication and publication.

    NASA Astrophysics Data System (ADS)

    Ragan-Kelley, M.; Perez, F.; Granger, B.; Kluyver, T.; Ivanov, P.; Frederic, J.; Bussonnier, M.

    2014-12-01

    IPython has provided terminal-based tools for interactive computing in Python since 2001. The notebook document format and multi-process architecture introduced in 2011 have expanded the applicable scope of IPython into teaching, presenting, and sharing computational work, in addition to interactive exploration. The new architecture also allows users to work in any language, with implementations in Python, R, Julia, Haskell, and several other languages. The language-agnostic parts of IPython have been renamed Jupyter, to better capture the notion that a cross-language design can encapsulate commonalities present in computational research regardless of the programming language being used. This architecture offers components such as the web-based Notebook interface, which supports rich documents that combine code and computational results with text narratives, mathematics, images, video, and any media that a modern browser can display. This interface can be used not only in research, but also for publication and education, as notebooks can be converted to a variety of output formats, including HTML and PDF. Recent developments in the Jupyter project include a multi-user environment for hosting notebooks for a class or research group, live notebook collaboration via Google Docs, and better support for languages other than Python.
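
    A minimal sketch of the notebook document format mentioned above: an .ipynb file is a JSON document whose cells mix narrative and code. The fields follow my reading of the nbformat 4 schema; verify against the official nbformat documentation before depending on exact field names.

```python
import json

# Build a two-cell notebook (markdown + code) as plain JSON. Field names
# reflect the nbformat 4 schema as understood here; cell contents are invented.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": "# Analysis\nNarrative text sits next to the code."},
        {"cell_type": "code", "execution_count": None, "metadata": {},
         "outputs": [], "source": "print(2 + 2)"},
    ],
}

serialized = json.dumps(notebook, indent=1)  # this string is the .ipynb file
print(len(json.loads(serialized)["cells"]))  # 2
```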

  9. Incorporating advanced language models into the P300 speller using particle filtering

    NASA Astrophysics Data System (ADS)

    Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.

    2015-08-01

    Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event related potentials in a subject’s electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
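
    An illustrative sketch (not the authors' implementation) of sequential importance resampling as described above: particles propose the next character from a language-model prior and are reweighted by classifier evidence. The alphabet, bigram model, and "EEG" likelihoods are all invented toy values.

```python
import random
from collections import Counter

LM = {  # toy bigram language model: P(next char | previous char)
    "a": {"a": 0.1, "b": 0.6, "c": 0.2, "d": 0.1},
    "b": {"a": 0.3, "b": 0.1, "c": 0.5, "d": 0.1},
}

def sample_from(dist):
    """Draw one character from a discrete distribution."""
    r, acc = random.random(), 0.0
    for ch, p in dist.items():
        acc += p
        if r <= acc:
            return ch
    return ch  # guard against floating-point rounding

def sir_step(evidence, prev_char, n_particles=500):
    # 1. Propose next characters from the language-model prior.
    proposals = [sample_from(LM[prev_char]) for _ in range(n_particles)]
    # 2. Weight each particle by the classifier's evidence for its character.
    weights = [evidence.get(ch, 0.01) for ch in proposals]
    # 3. Resample in proportion to weight (the posterior approximation).
    return random.choices(proposals, weights=weights, k=n_particles)

random.seed(0)
evidence = {"c": 0.9, "a": 0.05, "b": 0.03, "d": 0.02}  # classifier favors 'c'
particles = sir_step(evidence, prev_char="b")
print(Counter(particles).most_common(1)[0][0])  # prior and evidence agree on 'c'
```

    The posterior is approximated without enumerating the model's full state space, which is the property that lets sampling methods handle language models too complex for dynamic programming.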

  10. Interactive Cohort Identification of Sleep Disorder Patients Using Natural Language Processing and i2b2.

    PubMed

    Chen, W; Kowatch, R; Lin, S; Splaingard, M; Huang, Y

    2015-01-01

    Nationwide Children's Hospital established an i2b2 (Informatics for Integrating Biology & the Bedside) application for sleep disorder cohort identification. Discrete data were gleaned from semi-structured sleep study reports. The system was shown to work more efficiently than the traditional manual chart review method, and it also enabled searching capabilities that were previously not possible. We report on the development and implementation of the sleep disorder i2b2 cohort identification system using natural language processing of semi-structured documents. We developed a natural language processing approach to automatically parse concepts and their values from semi-structured sleep study documents. Two parsers were developed: a regular expression parser for extracting numeric concepts and an NLP-based tree parser for extracting textual concepts. Concepts were further organized into i2b2 ontologies based on document structures and in-domain knowledge. In total, 26,550 concepts were extracted, 99% of them textual. 1.01 million facts, including demographic information, sleep study lab results, medications, procedures, and diagnoses, were extracted from the sleep study documents. The average accuracy of terminology parsing was over 83% when compared with parses produced by experts. The system is capable of capturing both standard and non-standard terminologies. The time for cohort identification has been reduced significantly, from a few weeks to a few seconds. Natural language processing was shown to be powerful for quickly converting large amounts of semi-structured or unstructured clinical data into discrete concepts, which, in combination with intuitive domain-specific ontologies, allow fast and effective interactive cohort identification through the i2b2 platform for research and clinical use.
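
    The abstract's regular-expression parser for numeric concepts can be sketched as below. This is an illustration, not the authors' code: the report snippet, field names, and pattern are invented.

```python
import re

# Invented fragment of a semi-structured sleep study report.
REPORT = """
Total sleep time: 412.5 minutes
Sleep efficiency: 88.2 %
Apnea-hypopnea index: 6.4 events/hour
"""

# One "concept: value unit" fact per line.
NUMERIC_CONCEPT = re.compile(
    r"^(?P<concept>[A-Za-z][A-Za-z \-]+?):\s*"  # concept name before the colon
    r"(?P<value>\d+(?:\.\d+)?)\s*"              # numeric value
    r"(?P<unit>[^\s].*)?$",                     # optional unit string
    re.MULTILINE,
)

facts = {
    m.group("concept").strip(): (float(m.group("value")), (m.group("unit") or "").strip())
    for m in NUMERIC_CONCEPT.finditer(REPORT)
}
print(facts["Sleep efficiency"])  # (88.2, '%')
```

    Each extracted (concept, value, unit) triple maps naturally onto a discrete fact that an i2b2-style ontology can index for cohort queries.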

  11. Interactive Cohort Identification of Sleep Disorder Patients Using Natural Language Processing and i2b2

    PubMed Central

    Chen, W.; Kowatch, R.; Lin, S.; Splaingard, M.

    2015-01-01

    Summary Nationwide Children’s Hospital established an i2b2 (Informatics for Integrating Biology & the Bedside) application for sleep disorder cohort identification. Discrete data were gleaned from semi-structured sleep study reports. The system was shown to work more efficiently than the traditional manual chart review method, and it also enabled searching capabilities that were previously not possible. Objective We report on the development and implementation of the sleep disorder i2b2 cohort identification system using natural language processing of semi-structured documents. Methods We developed a natural language processing approach to automatically parse concepts and their values from semi-structured sleep study documents. Two parsers were developed: a regular expression parser for extracting numeric concepts and an NLP-based tree parser for extracting textual concepts. Concepts were further organized into i2b2 ontologies based on document structures and in-domain knowledge. Results In total, 26,550 concepts were extracted, 99% of them textual. 1.01 million facts, including demographic information, sleep study lab results, medications, procedures, and diagnoses, were extracted from the sleep study documents. The average accuracy of terminology parsing was over 83% when compared with parses produced by experts. The system is capable of capturing both standard and non-standard terminologies. The time for cohort identification has been reduced significantly, from a few weeks to a few seconds. Conclusion Natural language processing was shown to be powerful for quickly converting large amounts of semi-structured or unstructured clinical data into discrete concepts, which, in combination with intuitive domain-specific ontologies, allow fast and effective interactive cohort identification through the i2b2 platform for research and clinical use. PMID:26171080

  12. Cultural and climatic changes shape the evolutionary history of the Uralic languages.

    PubMed

    Honkola, T; Vesakoski, O; Korhonen, K; Lehtinen, J; Syrjänen, K; Wahlberg, N

    2013-06-01

    Quantitative phylogenetic methods have been used to study the evolutionary relationships and divergence times of biological species, and recently these have also been applied to linguistic data to elucidate the evolutionary history of language families. In biology, the factors driving macroevolutionary processes are assumed to be either mainly biotic (the Red Queen model) or mainly abiotic (the Court Jester model) or a combination of both. The applicability of these models is assumed to depend on the temporal and spatial scale observed, since biotic factors act on species divergence faster and on smaller spatial scales than abiotic factors. Here, we used the Uralic language family to investigate whether both 'biotic' interactions (i.e. cultural interactions) and abiotic changes (i.e. climatic fluctuations) are also connected to language diversification. We estimated the times of divergence using Bayesian phylogenetics with a relaxed-clock method and related our results to climatic, historical and archaeological information. Our timing results paralleled the previous linguistic studies but suggested a later divergence of Finno-Ugric, Finnic and Saami languages. Some of the divergences co-occurred with climatic fluctuations, and some with cultural interactions and migrations of populations. Thus, we suggest that both 'biotic' and abiotic factors contribute either directly or indirectly to the diversification of languages and that both models can be applied when studying language evolution. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.

  13. Development of a Zulu speech reception threshold test for Zulu first language speakers in Kwa Zulu-Natal.

    PubMed

    Panday, Seema; Kathard, Harsha; Pillay, Mershen; Govender, Cyril

    2007-01-01

    The measurement of speech reception threshold (SRT) is best evaluated in an individual's first language. The present study focused on the development of a Zulu SRT word list, according to criteria adapted for SRT in Zulu. The aim of this paper is to present the process involved in the development of the Zulu word list. In acquiring the data to realize this aim, 131 common bisyllabic Zulu words were identified by two Zulu-speaking language interpreters and two tertiary-level educators. Eighty-two percent of these words were described as bisyllabic verbs. Thereafter, using a three-point Likert scale, 58 bisyllabic verbs were rated by five linguistic experts as being familiar, phonetically dissimilar, and low-tone. According to Kendall's coefficient of concordance at the 95% level of confidence, the agreement among the raters was good for each criterion. The results highlighted the importance of adapting the criteria for SRT to suit the structure of the language. An important research implication emerging from the study is the set of theoretical guidelines proposed for the development of SRT material in other African languages. Furthermore, the importance of using speech material appropriate to the language was also highlighted. The developed Zulu SRT word list is applicable to adult first-language Zulu speakers in KZN.
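
    The rater-agreement statistic used above, Kendall's coefficient of concordance (W), can be computed as a short sketch. The rankings below are invented; each row is one rater's ranking of the same five candidate words.

```python
rankings = [
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 4, 5],
]

def kendalls_w(rankings):
    """Kendall's W = 12*S / (m^2 * (n^3 - n)), in [0, 1]; 1 = full agreement."""
    m = len(rankings)      # number of raters
    n = len(rankings[0])   # number of items ranked
    # Sum of the ranks each item received across raters.
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = sum(rank_sums) / n
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)  # squared deviations
    return 12 * s / (m ** 2 * (n ** 3 - n))

print(round(kendalls_w(rankings), 3))  # 0.889 -> strong agreement
```

    The standard formula shown assumes no tied ranks; a tie correction term is needed otherwise.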

  14. Advanced software techniques for data management systems. Volume 3: Programming language characteristics and comparison reference

    NASA Technical Reports Server (NTRS)

    James, T. A.; Hall, B. C.; Newbold, P. M.

    1972-01-01

    A comparative evaluation was made of eight higher order languages of general interest in the aerospace field: PL/1; HAL; JOVIAL/J3; SPL/J6; CLASP; ALGOL 60; FORTRAN 4; and MAC360. A summary of the functional requirements for a language for general use in manned aerodynamic applications is presented. The evaluation supplies background material to be used in assessing the worth of each language for some particular application.

  15. Role des congeneres interlinguaux dans le developpement du vocabulaire receptif: Application au francais langue seconde (The Role of Interlingual Cognates in the Development of Receptive Vocabulary: Application to French as a Second Language).

    ERIC Educational Resources Information Center

    Treville, Marie-Claude

    This study investigated the effects of systematic use of similarities between the native and second languages on the lexical competence of second language learners. Subjects were 209 first- and second-year English-speaking university students in French language classes. The students were pre- and post-tested for their visual recognition of…

  16. Sensitivity and Specificity of French Language and Processing Measures for the Identification of Primary Language Impairment at Age 5

    ERIC Educational Resources Information Center

    Thordardottir, Elin; Kehayia, Eva; Mazer, Barbara; Lessard, Nicole; Majnemer, Annette; Sutton, Ann; Trudeau, Natacha; Chilingaryan, Gevorg

    2011-01-01

    Purpose: Research on the diagnostic accuracy of different language measures has focused primarily on English. This study examined the sensitivity and specificity of a range of measures of language knowledge and language processing for the identification of primary language impairment (PLI) in French-speaking children. Because of the lack of…

  17. A Query Language for Handling Big Observation Data Sets in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Autermann, Christian; Stasch, Christoph; Jirka, Simon; Koppe, Roland

    2017-04-01

    The Sensor Web provides a framework for the standardized Web-based sharing of environmental observations and sensor metadata. While these standards address the issue of varying data formats and protocols, the fast-growing size of observational data poses new challenges for their application. Most solutions for handling big observational datasets currently focus on remote sensing applications, while big in-situ datasets relying on vector features still lack a solid approach. Conventional Sensor Web technologies may not be adequate, as the sheer size of the data transmitted and the amount of metadata accumulated may render traditional OGC Sensor Observation Services (SOS) unusable. Besides novel approaches to storing and processing observation data in place, e.g. by harnessing big data technologies from mainstream IT, the access layer has to be amended to utilize and integrate these large observational data archives into applications and to enable analysis. For this, an extension to the SOS will be discussed that establishes a query language to dynamically process and filter observations at the storage level, similar to the OGC Web Coverage Service (WCS) and its Web Coverage Processing Service (WCPS) extension. This will enable applications to request, for example, spatially or temporally aggregated data sets at a resolution they can display or require. The approach will be developed and implemented in cooperation with the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, whose catalogue of data comprises marine observations of physical, chemical and biological phenomena from a wide variety of sensors, both mobile (such as research vessels, aircraft or underwater vehicles) and stationary (such as buoys or research stations). Observations are made with a high temporal resolution, and the resulting time series may span multiple decades.
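
    A hypothetical sketch of the kind of server-side temporal aggregation such a query language could request: downsampling a dense time series to daily means before it is transmitted to the client. Timestamps and values are invented.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

observations = [  # (ISO timestamp, measured value), e.g. water temperature
    ("2017-03-01T00:10:00", 7.9),
    ("2017-03-01T12:40:00", 8.3),
    ("2017-03-02T06:05:00", 7.1),
    ("2017-03-02T18:55:00", 7.5),
]

def daily_means(observations):
    # Bucket observations by calendar day, then aggregate each bucket.
    buckets = defaultdict(list)
    for ts, value in observations:
        day = datetime.fromisoformat(ts).date().isoformat()
        buckets[day].append(value)
    return {day: round(mean(vals), 2) for day, vals in sorted(buckets.items())}

print(daily_means(observations))  # {'2017-03-01': 8.1, '2017-03-02': 7.3}
```

    Performing this reduction at the storage level, rather than shipping the raw series, is what keeps decade-long high-resolution records usable over a Web service.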

  18. A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.

    PubMed

    Grando, M Adela; Glasspool, David; Fox, John

    2012-01-01

    To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results to a comparable previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
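
    The Petri net semantics in which the workflow patterns above are stated can be sketched in a few lines (a toy illustration, not the authors' formalization): a transition fires only when every input place holds a token, consuming its inputs and producing its outputs.

```python
def enabled(marking, transition):
    """A transition is enabled when every input place has at least one token."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Fire a transition: consume one token per input, add one per output."""
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    inputs, outputs = transition
    marking = dict(marking)  # keep the original marking unchanged
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] = marking.get(p, 0) + 1
    return marking

# The "parallel split" workflow pattern: one token in 'start' becomes
# tokens in two concurrent branches.
and_split = (["start"], ["branch_a", "branch_b"])
m = fire({"start": 1}, and_split)
print(m)  # {'start': 0, 'branch_a': 1, 'branch_b': 1}
```

    Showing that a guideline language can (or cannot) reproduce this token-game behavior for each pattern is the essence of the expressiveness proofs described above.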

  19. An Efficient Universal Trajectory Language

    NASA Technical Reports Server (NTRS)

    Hagen, George E.; Guerreiro, Nelson M.; Maddalon, Jeffrey M.; Butler, Ricky W.

    2017-01-01

    The Efficient Universal Trajectory Language (EUTL) is a language for specifying and representing trajectories for Air Traffic Management (ATM) concepts such as Trajectory-Based Operations (TBO). In these concepts, the communication of a trajectory between an aircraft and ground automation is fundamental. Historically, this trajectory exchange has not been done, leading to trajectory definitions that have been centered around particular application domains and, therefore, are not well suited for TBO applications. The EUTL trajectory language has been defined in the Prototype Verification System (PVS) formal specification language, which provides an operational semantics for the EUTL language. The hope is that EUTL will provide a foundation for mathematically verified algorithms that manipulate trajectories. Additionally, the EUTL language provides well-defined methods to unambiguously determine position and velocity information between the reported trajectory points. In this paper, we present the EUTL trajectory language in mathematical detail.
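
    One way to determine position between reported trajectory points is linear interpolation in time between 4D points, sketched below. EUTL's actual interpolation rules are defined in its PVS specification; this example only illustrates the general idea, and all coordinates are invented.

```python
from datetime import datetime, timedelta

def interpolate(p0, p1, t):
    """Linearly interpolate state between two (time, lat, lon, alt) points."""
    t0, t1 = p0[0], p1[0]
    frac = (t - t0).total_seconds() / (t1 - t0).total_seconds()
    return tuple(a + frac * (b - a) for a, b in zip(p0[1:], p1[1:]))

t0 = datetime(2017, 1, 1, 12, 0, 0)
p0 = (t0, 37.0, -76.0, 10000.0)                        # time, lat, lon, alt (ft)
p1 = (t0 + timedelta(minutes=10), 37.5, -75.0, 12000.0)

print(interpolate(p0, p1, t0 + timedelta(minutes=5)))  # (37.25, -75.5, 11000.0)
```

    A well-defined rule like this is what makes a trajectory exchanged between aircraft and ground automation unambiguous at times between the reported points.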

  20. Rethinking clinical language mapping approaches: discordant receptive and expressive hemispheric language dominance in epilepsy surgery candidates.

    PubMed

    Gage, Nicole M; Eliashiv, Dawn S; Isenberg, Anna L; Fillmore, Paul T; Kurelowech, Lacey; Quint, Patti J; Chung, Jeffrey M; Otis, Shirley M

    2011-06-01

    Neuroimaging studies have shed light on cortical language organization, with findings implicating the left and right temporal lobes in speech processing converging to a left-dominant pattern. Findings highlight the fact that the state of theoretical language knowledge is ahead of current clinical language mapping methods, motivating a rethinking of these approaches. The authors used magnetoencephalography and multiple tasks in seven candidates for resective epilepsy surgery to investigate language organization. The authors scanned 12 control subjects to investigate the time course of bilateral receptive speech processes. Laterality indices were calculated for left and right hemisphere late fields ∼150 to 400 milliseconds. The authors report that (1) in healthy adults, speech processes activated superior temporal regions bilaterally converging to a left-dominant pattern, (2) in four of six patients, this was reversed, with bilateral processing converging to a right-dominant pattern, and (3) in three of four of these patients, receptive and expressive language processes were laterally discordant. Results provide evidence that receptive and expressive language may have divergent hemispheric dominance. Right-sided receptive language dominance in epilepsy patients emphasizes the need to assess both receptive and expressive language. Findings indicate that it is critical to use multiple tasks tapping separable aspects of language function to provide sensitive and specific estimates of language localization in surgical patients.
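
    Laterality indices like those computed above are conventionally defined as a normalized left-right contrast; the abstract does not give the exact formula, so the sketch below uses the common (L - R) / (L + R) formulation with invented activation values.

```python
def laterality_index(left, right):
    """LI in [-1, 1]: positive = left-dominant, negative = right-dominant."""
    return (left - right) / (left + right)

# Invented late-field activation strengths for left and right
# superior temporal regions (arbitrary units).
li = laterality_index(left=8.0, right=4.0)
print(round(li, 3))  # 0.333 -> left-dominant
```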
