ERIC Educational Resources Information Center
Chowdhury, Gobinda G.
2003-01-01
Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…
Overcoming Learning Time and Space Constraints through Technological Tool
ERIC Educational Resources Information Center
Zarei, Nafiseh; Hussin, Supyan; Rashid, Taufik
2015-01-01
Today, the use of technological tools has become part of the evolution of language learning and language acquisition. Many instructors and lecturers believe that integrating Web-based learning tools into language courses allows pupils to become active learners during the learning process. This study investigates how the Learning Management Blog (LMB) overcomes…
Analyzing Discourse Processing Using a Simple Natural Language Processing Tool
ERIC Educational Resources Information Center
Crossley, Scott A.; Allen, Laura K.; Kyle, Kristopher; McNamara, Danielle S.
2014-01-01
Natural language processing (NLP) provides a powerful approach for discourse processing researchers. However, there remains a notable degree of hesitation by some researchers to consider using NLP, at least on their own. The purpose of this article is to introduce and make available a "simple" NLP (SiNLP) tool. The overarching goal of…
Process for selecting engineering tools: applied to selecting a SysML tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Spain, Mark J.; Post, Debra S.; Taylor, Jeffrey L.
2011-02-01
Process for Selecting Engineering Tools outlines the process and tools used to select a SysML (Systems Modeling Language) tool. The process is general in nature, and users could apply it to select most engineering tools and software applications.
The ALICE System: A Workbench for Learning and Using Language.
ERIC Educational Resources Information Center
Levin, Lori; And Others
1991-01-01
ALICE, a multimedia framework for intelligent computer-assisted language instruction (ICALI) at Carnegie Mellon University (PA), consists of a set of tools for building a number of different types of ICALI programs in any language. Its Natural Language Processing tools for syntactic error detection, morphological analysis, and generation of…
Emerging Approach of Natural Language Processing in Opinion Mining: A Review
NASA Astrophysics Data System (ADS)
Kim, Tai-Hoon
Natural language processing (NLP) is a subfield of artificial intelligence and computational linguistics. It studies the problems of automated generation and understanding of natural human languages. This paper outlines a framework that uses computer and natural language techniques to help learners at various levels learn foreign languages in a computer-based learning environment. We propose some ideas for using the computer as a practical tool for learning a foreign language, where most of the courseware is generated automatically. We then describe how to build computer-based learning tools, discuss their effectiveness, and conclude with some possibilities for using on-line resources.
ERIC Educational Resources Information Center
Crossley, Scott A.
2013-01-01
This paper provides an agenda for replication studies focusing on second language (L2) writing and the use of natural language processing (NLP) tools and machine learning algorithms. Specifically, it introduces a range of the available NLP tools and machine learning algorithms and demonstrates how these could be used to replicate seminal studies…
Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H
1999-01-01
GALEN has developed a new generation of terminology tools based on a language-independent concept reference model that uses a compositional formalism allowing computer processing and multiple reuses. During the 4th Framework Programme project GALEN-IN-USE we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures (CCAM) in France. On the one hand, we contributed to a language-independent knowledge repository for multicultural Europe. On the other hand, we support the traditional process of creating a new coding system in medicine, which is very labour-intensive, with artificial intelligence tools that use a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW to process French professional medical language rubrics, produced by the national colleges of surgeons, into intermediate dissections and into the Grail reference ontology model representation. From this language-independent concept model representation we generate, on the one hand, controlled French natural language to support the finalization of the linguistic labels in relation to the meanings of the conceptual system structure. On the other hand, the third-generation classification manager proves very powerful for retrieving the initial professional rubrics with different categories of concepts within a semantic network.
Rodrigues, J M; Trombert-Paviot, B; Baud, R; Wagner, J; Meusnier-Carriot, F
1998-01-01
GALEN has developed a language-independent common reference model based on a medically oriented ontology, together with practical tools and techniques for managing healthcare terminology, including natural language processing. GALEN-IN-USE is the current phase, which applied the modelling and the tools to the development or updating of coding systems for surgical procedures in different national coding centres co-operating within the European Federation of Coding Centres (EFCC) to create a language-independent knowledge repository for multicultural Europe. We used an integrated set of artificial intelligence terminology tools named the CLAssification Manager workbench to process French professional medical language rubrics into intermediate dissections and into the Grail reference ontology model representation. From this language-independent concept model representation we generate controlled French natural language. The French national coding centre is then able to retrieve the initial professional rubrics with different categories of concepts, to compare the professional language proposed by expert clinicians to the French generated controlled vocabulary, and to finalize the linguistic labels of the coding system in relation to the meanings of the conceptual system structure.
Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H
2000-09-01
Generalised architecture for languages, encyclopedia and nomenclatures in medicine (GALEN) has developed a new generation of terminology tools based on a language-independent model describing the semantics, allowing computer processing and multiple reuses as well as natural language understanding applications, to facilitate the sharing and maintenance of consistent medical knowledge. During the European Union 4th Framework Programme project GALEN-IN-USE, and later within two contracts with the national health authorities, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures named CCAM in a minority-language country, France. On the one hand, we contributed to a language-independent knowledge repository and multilingual semantic dictionaries for multicultural Europe. On the other hand, we support the traditional process of creating a new coding system in medicine, which is very labour-intensive, with artificial intelligence tools that use a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW (for classification workbench) to process French professional medical language rubrics, produced by the national colleges of surgeons domain experts, into intermediate dissections and into the Grail reference ontology model representation. From this language-independent concept model representation we generate, on the one hand, controlled French natural language with the LNAT natural language generator to support the finalization of the linguistic labels (first generation) in relation to the meanings of the conceptual system structure. On the other hand, the CLAW classification manager proves very powerful for retrieving the initial domain experts' rubrics list with different categories of concepts (second generation) within a semantic structured representation (third generation) that bridges to the detailed terminology of the electronic patient record.
WebQuests as Language-Learning Tools
ERIC Educational Resources Information Center
Aydin, Selami
2016-01-01
This study presents a review of the literature that examines WebQuests as tools for second-language acquisition and foreign language-learning processes to guide teachers in their teaching activities and researchers in further research on the issue. The study first introduces the theoretical background behind WebQuest use in the mentioned…
Uomini, Natalie Thaïs; Meyer, Georg Friedrich
2013-01-01
The popular theory that complex tool-making and language co-evolved in the human lineage rests on the hypothesis that both skills share underlying brain processes and systems. However, language and stone tool-making have so far only been studied separately using a range of neuroimaging techniques and diverse paradigms. We present the first-ever study of brain activation that directly compares active Acheulean tool-making and language. Using functional transcranial Doppler ultrasonography (fTCD), we measured brain blood flow lateralization patterns (hemodynamics) in subjects who performed two tasks designed to isolate the planning component of Acheulean stone tool-making and cued word generation as a language task. We show highly correlated hemodynamics in the initial 10 seconds of task execution. Stone tool-making and cued word generation cause common cerebral blood flow lateralization signatures in our participants. This is consistent with a shared neural substrate for prehistoric stone tool-making and language, and is compatible with language evolution theories that posit a co-evolution of language and manual praxis. In turn, our results support the hypothesis that aspects of language might have emerged as early as 1.75 million years ago, with the start of Acheulean technology.
Developing tools and resources for the biomedical domain of the Greek language.
Vagelatos, Aristides; Mantzari, Elena; Pantazara, Mavina; Tsalidis, Christos; Kalamara, Chryssoula
2011-06-01
This paper presents the design and implementation of terminological and specialized textual resources that were produced in the framework of the Greek research project "IATROLEXI". The aim of the project was to create the critical infrastructure for the Greek language, i.e. linguistic resources and tools for use in high level Natural Language Processing (NLP) applications in the domain of biomedicine. The project was built upon existing resources developed by the project partners and further enhanced within its framework, i.e. a Greek morphological lexicon of about 100,000 words, and language processing tools such as a lemmatiser and a morphosyntactic tagger. Additionally, it developed new assets, such as a specialized corpus of biomedical texts and an ontology of medical terminology.
Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.
Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J
2015-08-21
In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).
Teaching a Foreign Language to Deaf People via Vodcasting & Web 2.0 Tools
NASA Astrophysics Data System (ADS)
Drigas, Athanasios; Vrettaros, John; Tagoulis, Alexandors; Kouremenos, Dimitris
This paper presents the design and development of an e-learning course for teaching a foreign language to deaf people whose first language is sign language. The course is based on e-material, vodcasting, and Web 2.0 tools such as social networking and blogs. The course has been designed especially for deaf people, and it explores the possibilities that e-learning material, vodcasting, and Web 2.0 tools can offer to enhance the learning process and achieve more effective learning results.
You Are Your Words: Modeling Students' Vocabulary Knowledge with Natural Language Processing Tools
ERIC Educational Resources Information Center
Allen, Laura K.; McNamara, Danielle S.
2015-01-01
The current study investigates the degree to which the lexical properties of students' essays can inform stealth assessments of their vocabulary knowledge. In particular, we used indices calculated with the natural language processing tool, TAALES, to predict students' performance on a measure of vocabulary knowledge. To this end, two corpora were…
Steele, James; Ferrari, Pier Francesco; Fogassi, Leonardo
2012-01-01
The papers in this Special Issue examine tool use and manual gestures in primates as a window on the evolution of the human capacity for language. Neurophysiological research has supported the hypothesis of a close association between some aspects of human action organization and of language representation, in both phonology and semantics. Tool use provides an excellent experimental context to investigate analogies between action organization and linguistic syntax. Contributors report and contextualize experimental evidence from monkeys, great apes, humans and fossil hominins, and consider the nature and the extent of overlaps between the neural representations of tool use, manual gestures and linguistic processes. PMID:22106422
BioC: a minimalist approach to interoperability for biomedical text processing
Comeau, Donald C.; Islamaj Doğan, Rezarta; Ciccarese, Paolo; Cohen, Kevin Bretonnel; Krallinger, Martin; Leitner, Florian; Lu, Zhiyong; Peng, Yifan; Rinaldi, Fabio; Torii, Manabu; Valencia, Alfonso; Verspoor, Karin; Wiegers, Thomas C.; Wu, Cathy H.; Wilbur, W. John
2013-01-01
A vast amount of scientific information is encoded in natural language text, and the quantity of such text has become so great that it is no longer economically feasible to have a human as the first step in the search process. Natural language processing and text mining tools have become essential to facilitate the search for and extraction of information from text. This has led to vigorous research efforts to create useful tools and to create humanly labeled text corpora, which can be used to improve such tools. To encourage combining these efforts into larger, more powerful and more capable systems, a common interchange format to represent, store and exchange the data in a simple manner between different language processing systems and text mining tools is highly desirable. Here we propose a simple extensible mark-up language format to share text documents and annotations. The proposed annotation approach allows a large number of different annotations to be represented including sentences, tokens, parts of speech, named entities such as genes or diseases and relationships between named entities. In addition, we provide simple code to hold this data, read it from and write it back to extensible mark-up language files and perform some sample processing. We also describe completed as well as ongoing work to apply the approach in several directions. Code and data are available at http://bioc.sourceforge.net/. Database URL: http://bioc.sourceforge.net/ PMID:24048470
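As an illustration of the interchange idea described above, here is a minimal Python sketch that builds a BioC-style collection with one document, one passage, and one named-entity annotation. Element and attribute names follow commonly published BioC conventions, but they should be checked against the DTD at http://bioc.sourceforge.net/ before being relied on.

```python
import xml.etree.ElementTree as ET

# Build a minimal BioC-style collection: one document, one passage,
# one named-entity annotation. Names mirror common BioC usage; verify
# against the official DTD at http://bioc.sourceforge.net/.
collection = ET.Element("collection")
ET.SubElement(collection, "source").text = "example"
document = ET.SubElement(collection, "document")
ET.SubElement(document, "id").text = "doc-1"
passage = ET.SubElement(document, "passage")
ET.SubElement(passage, "offset").text = "0"
ET.SubElement(passage, "text").text = "BRCA1 is a human tumor suppressor gene."
annotation = ET.SubElement(passage, "annotation", id="T1")
ET.SubElement(annotation, "infon", key="type").text = "gene"
ET.SubElement(annotation, "location", offset="0", length="5")
ET.SubElement(annotation, "text").text = "BRCA1"

ET.dump(collection)  # serialize the annotated collection to stdout
```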
NPTool: Towards Scalability and Reliability of Business Process Management
NASA Astrophysics Data System (ADS)
Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton
Currently, an important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more accentuated when the execution control involves countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data flow management.
ERIC Educational Resources Information Center
Miller, Jon F.; Iglesias, Aquiles; Rojas, Raul
2010-01-01
Assessing the language development of bilingual children can be a challenge--too often, children in the complex process of learning both Spanish and English are under- or over-diagnosed with language disorders. SLPs can change that with "SALT 2010 Bilingual S/E Version" for grades K-3, the first tool to comprehensively assess children's language…
Assessing Group Interaction with Social Language Network Analysis
NASA Astrophysics Data System (ADS)
Scholand, Andrew J.; Tausczik, Yla R.; Pennebaker, James W.
In this paper we discuss a new methodology, social language network analysis (SLNA), that combines tools from social language processing and network analysis to assess socially situated working relationships within a group. Specifically, SLNA aims to identify and characterize the nature of working relationships by processing artifacts generated with computer-mediated communication systems, such as instant message texts or emails. Because social language processing is able to identify psychological, social, and emotional processes that individuals are not able to fully mask, social language network analysis can clarify and highlight complex interdependencies between group members, even when these relationships are latent or unrecognized.
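To make the idea concrete, here is a toy Python sketch in the spirit of SLNA: each group member gets a language-category profile from their messages, and each pair of members is weighted by profile similarity. The category word lists and the similarity measure are illustrative assumptions, not the authors' method.

```python
from itertools import combinations

# Toy category lexicon; a real analysis would use a validated dictionary.
CATEGORIES = {
    "we_words": {"we", "us", "our"},
    "i_words": {"i", "me", "my"},
}

def profile(messages):
    """Rate of each language category per token in a member's messages."""
    tokens = [w.strip(".,!?").lower() for m in messages for w in m.split()]
    total = max(len(tokens), 1)
    return {c: sum(tokens.count(w) for w in ws) / total
            for c, ws in CATEGORIES.items()}

def similarity(p, q):
    """1 minus the mean absolute difference of category rates."""
    return 1 - sum(abs(p[c] - q[c]) for c in p) / len(p)

logs = {
    "ana": ["We should merge our branches.", "I think we are close."],
    "ben": ["We can ship this together.", "Our tests pass."],
    "cal": ["I prefer my own approach.", "Leave me out of it."],
}
profiles = {name: profile(msgs) for name, msgs in logs.items()}
for a, b in combinations(logs, 2):  # candidate edges in the working network
    print(a, b, round(similarity(profiles[a], profiles[b]), 2))
```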
Natural language processing and the Now-or-Never bottleneck.
Gómez-Rodríguez, Carlos
2016-01-01
Researchers, motivated by the need to improve the efficiency of natural language processing tools to handle web-scale data, have recently arrived at models that remarkably match the expected features of human language processing under the Now-or-Never bottleneck framework. This provides additional support for said framework and highlights the research potential in the interaction between applied computational linguistics and cognitive science.
ERIC Educational Resources Information Center
Lawrence, Geoff
2002-01-01
Outlines reasons why electronic mail, and specifically e-mail exchanges, are valuable tools for promoting authentic target language interaction in the second language (L2) classroom. Research examining the use of e-mail exchanges on the L2 learning process is outlined, followed by one specific example of an e-mail exchange in a secondary core…
Gaining insights from social media language: Methodologies and challenges.
Kern, Margaret L; Park, Gregory; Eichstaedt, Johannes C; Schwartz, H Andrew; Sap, Maarten; Smith, Laura K; Ungar, Lyle H
2016-12-01
Language data available through social media provide opportunities to study people at an unprecedented scale. However, little guidance is available to psychologists who want to enter this area of research. Drawing on tools and techniques developed in natural language processing, we first introduce psychologists to social media language research, identifying descriptive and predictive analyses that language data allow. Second, we describe how raw language data can be accessed and quantified for inclusion in subsequent analyses, exploring personality as expressed on Facebook to illustrate. Third, we highlight challenges and issues to be considered, including accessing and processing the data, interpreting effects, and ethical issues. Social media has become a valuable part of social life, and there is much we can learn by bringing together the tools of computer science with the theories and insights of psychology.
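As a sketch of the "quantify raw language for subsequent analyses" step, assuming plain-text posts have already been downloaded: tokenize each user's posts and compute relative word frequencies, the basic representation behind both the descriptive and predictive analyses the paper surveys.

```python
from collections import Counter

def relative_frequencies(posts):
    """Turn a user's raw posts into relative word frequencies."""
    tokens = [w.strip(".,!?\"'").lower() for p in posts for w in p.split()]
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Hypothetical users; real data would come from a platform API or archive.
users = {
    "user_a": ["Had a great day with friends!", "Feeling happy and grateful."],
    "user_b": ["Deadlines everywhere.", "So stressed about work again."],
}
features = {u: relative_frequencies(p) for u, p in users.items()}
for user, freqs in features.items():
    top = sorted(freqs.items(), key=lambda kv: -kv[1])[:3]
    print(user, top)  # per-user language features for descriptive/predictive use
```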
Language Management Theory as a Basis for the Dynamic Concept of EU Language Law
ERIC Educational Resources Information Center
Dovalil, Vít
2015-01-01
Language law is a tool used to manage problems of linguistic diversity in the EU. The paper analyzes the processes in which language law is found in the discursive practice of agents addressing the Court of Justice of the European Union with their language problems. The theoretical-methodological basis for the research is Language Management…
2012-01-01
Background: We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for performing sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus. Results: Many biomedical natural language processing systems demonstrated large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely with respect to their ability to build high-performing models based on this data. Conclusions: The finding that some systems were able to train high-performing models based on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work needs to be done to enable natural language processing systems to work well when the input is full-text journal articles. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluation and training of new models for biomedical full text publications. PMID:22901054
Can Computers Be Used for Whole Language Approaches to Reading and Language Arts?
ERIC Educational Resources Information Center
Balajthy, Ernest
Holistic approaches to the teaching of reading and writing, most notably the Whole Language movement, reject the philosophy that language skills can be taught. Instead, holistic teachers emphasize process, and they structure the students' classroom activities to be rich in language experience. Computers can be used as tools for whole language…
ERIC Educational Resources Information Center
Shenoy, Sunaina
2014-01-01
English language learners (ELLs) who are in the process of acquiring English as a second language for academic purposes, are often misidentified as having Language Learning Disabilities (LLDs). Policies regarding the assessment of ELLs have undergone many changes through the years, such as the introduction of a Response to Intervention (RTI)…
Google Docs as a Tool for Collaborative Writing in the Middle School Classroom
ERIC Educational Resources Information Center
Woodrich, Megan; Fan, Yanan
2017-01-01
Aim/Purpose: In this study, the authors examine how an online word processing tool can be used to encourage participation among students of different language backgrounds, including English Language Learners. To be exact, the paper discusses whether student participation in anonymous collaborative writing via Google Docs can lead to more…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharp, J.K.
1997-11-01
This seminar describes a process and methodology that uses structured natural language to enable the construction of precise information requirements directly from users, experts, and managers. The main focus of this natural language approach is to create the precise information requirements and to do it in such a way that the business and technical experts are fully accountable for the results. These requirements can then be implemented using appropriate tools and technology. This requirement set is also a universal learning tool because it has all of the knowledge that is needed to understand a particular process (e.g., expense vouchers, project management, budget reviews, tax, laws, machine function).
Applying language technology to nursing documents: pros and cons with a focus on ethics.
Suominen, Hanna; Lehtikunnas, Tuija; Back, Barbro; Karsten, Helena; Salakoski, Tapio; Salanterä, Sanna
2007-10-01
The present study discusses ethics in building and using applications based on natural language processing in electronic nursing documentation. Specifically, we first focus on the question of how patient confidentiality can be ensured in developing language technology for the nursing documentation domain. Then, we identify and theoretically analyze the ethical outcomes which arise when using natural language processing to support clinical judgement and decision-making. In total, we put forward and justify 10 claims related to ethics in applying language technology to nursing documents. A review of recent scientific articles related to ethics in electronic patient records or in the utilization of large databases was conducted. Then, the results were compared with ethical guidelines for nurses and the Finnish legislation covering health care and processing of personal data. Finally, the practical experiences of the authors in applying the methods of natural language processing to nursing documents were appended. Patient records supplemented with natural language processing capabilities may help nurses give better, more efficient and more individualized care for their patients. In addition, language technology may facilitate patients' possibility to receive truthful information about their health and improve the nature of narratives. Because of these benefits, research about the use of language technology in narratives should be encouraged. In contrast, privacy-sensitive health care documentation brings specific ethical concerns and difficulties to the natural language processing of nursing documents. Therefore, when developing natural language processing tools, patient confidentiality must be ensured. While using the tools, health care personnel should always be responsible for the clinical judgement and decision-making. One should also consider that the use of language technology in nursing narratives may threaten patients' rights by using documentation collected for other purposes. Applying language technology to nursing documents may, on the one hand, contribute to the quality of care, but, on the other hand, threaten patient confidentiality. As an overall conclusion, natural language processing of nursing documents holds the promise of great benefits if the potential risks are taken into consideration.
Modes of Learning in Religious Education
ERIC Educational Resources Information Center
Afdal, Geir
2015-01-01
This article is a contribution to the discussion of learning processes in religious education (RE) classrooms. Sociocultural theories of learning, understood here as tool-mediated processes, are used in an analysis of three RE classroom conversations. The analysis focuses on the language tools that are used in conversations; how the tools mediate;…
Divergence Measures Tool:An Introduction with Brief Tutorial
2014-03-01
…in detecting differences across a wide range of Arabic-language text files (they varied by genre, domain, spelling variation, size, etc.), our…other. These measures have been put to many uses in natural language processing (NLP). In the evaluation of machine translation (MT)…files uploaded into the tool must be .txt files in ASCII or UTF-8 format. This tool has been tested on English and Arabic script, but should…
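The snippet above does not preserve which divergence measures the tool implements, so as a generic worked example: a Python sketch of the Jensen-Shannon divergence between the word distributions of two texts, one standard measure for this kind of file comparison.

```python
import math
from collections import Counter

def word_distribution(text):
    """Relative frequency of each token in a text."""
    counts = Counter(text.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl(p, q):
    """Kullback-Leibler divergence D(p || q); q must cover p's support."""
    return sum(pw * math.log2(pw / q[w]) for w, pw in p.items())

def jensen_shannon(p, q):
    """Symmetric, finite divergence between two word distributions."""
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in set(p) | set(q)}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

a = word_distribution("the cat sat on the mat")
b = word_distribution("the dog lay on the rug")
print(jensen_shannon(a, b))  # 0.0 for identical texts, up to 1.0 (log base 2)
```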
Hybrid Applications Of Artificial Intelligence
NASA Technical Reports Server (NTRS)
Borchardt, Gary C.
1988-01-01
STAR, Simple Tool for Automated Reasoning, is interactive, interpreted programming language for development and operation of artificial-intelligence application systems. Couples symbolic processing with compiled-language functions and data structures. Written in C language and currently available in UNIX version (NPO-16832), and VMS version (NPO-16965).
Requirements Specification Language (RSL) and supporting tools
NASA Technical Reports Server (NTRS)
Frincke, Deborah; Wolber, Dave; Fisher, Gene; Cohen, Gerald C.
1992-01-01
This document describes a general purpose Requirement Specification Language (RSL). RSL is a hybrid of features found in several popular requirement specification languages. The purpose of RSL is to describe precisely the external structure of a system comprised of hardware, software, and human processing elements. To overcome the deficiencies of informal specification languages, RSL includes facilities for mathematical specification. Two RSL interface tools are described. The Browser view contains a complete document with all details of the objects and operations. The Dataflow view is a specialized, operation-centered depiction of a specification that shows how specified operations relate in terms of inputs and outputs.
Using natural language processing techniques to inform research on nanotechnology.
Lewinski, Nastassja A; McInnes, Bridget T
2015-01-01
Literature in the field of nanotechnology is exponentially increasing with more and more engineered nanomaterials being created, characterized, and tested for performance and safety. With the deluge of published data, there is a need for natural language processing approaches to semi-automate the cataloguing of engineered nanomaterials and their associated physico-chemical properties, performance, exposure scenarios, and biological effects. In this paper, we review the different informatics methods that have been applied to patent mining, nanomaterial/device characterization, nanomedicine, and environmental risk assessment. Nine natural language processing (NLP)-based tools were identified: NanoPort, NanoMapper, TechPerceptor, a Text Mining Framework, a Nanodevice Analyzer, a Clinical Trial Document Classifier, Nanotoxicity Searcher, NanoSifter, and NEIMiner. We conclude with recommendations for sharing NLP-related tools through online repositories to broaden participation in nanoinformatics.
Children's Foreign Language Anxiety Scale: Preliminary Tests of Reliability and Validity
ERIC Educational Resources Information Center
Aydin, Selami; Harputlu, Leyla; Güzel, Serhat; Ustuk, Özgehan; Savran Çelik, Seyda; Genç, Deniz
2016-01-01
Foreign language anxiety (FLA), which constitutes a serious problem in the foreign language learning process, has mainly been seen as a research issue regarding adult language learners, while it has been overlooked in children. This is because there is no appropriate tool to measure FLA among children, whereas there are many studies on the…
The Children's Foreign Language Anxiety Scale: Reliability and Validity
ERIC Educational Resources Information Center
Aydin, Selami; Harputlu, Leyla; Ustuk, Özgehan; Güzel, Serhat; Çelik, Seyda Savran
2017-01-01
Foreign language anxiety (FLA) has been mainly associated with adult language learners. Although FLA forms a serious problem in the foreign language learning process for all learners, the effects of FLA on children have been mainly overlooked. The underlying reason is that there is a lack of an appropriate measurement tool for FLA among children.…
Language translation, domain specific languages and ANTLR
NASA Technical Reports Server (NTRS)
Craymer, Loring; Parr, Terence
2002-01-01
We will discuss the features of ANTLR that make it an attractive tool for rapid development of domain specific language translators and present some practical examples of its use: extraction of information from the Cassini Command Language specification, the processing of structured binary data, and IVL, an English-like language for generating VRML scene graphs, which is used in configuring the jGuru.com server.
Testing framework for embedded languages
NASA Astrophysics Data System (ADS)
Leskó, Dániel; Tejfel, Máté
2012-09-01
Embedding a new programming language into an existing one is a widely used technique, because it speeds up the development process and provides part of a language infrastructure for free (e.g. lexical and syntactical analyzers). In this paper we present a new advantage of this development approach: adding testing support for these new languages. Tool support for testing is a crucial point for a newly designed programming language. It could be done the hard way, by creating a testing tool from scratch, or we could try to reuse existing testing tools by extending them with an interface to our new language. The second approach requires less work, and it also fits the embedded approach very well. The problem is that the creation of such interfaces is not straightforward at all, because existing testing tools were mostly not designed to be extendable or to deal with new languages. This paper presents an extendable and modular model of a testing framework, in which the most basic design decision was to keep this interface creation simple and straightforward. Other important aspects of our model are test data generation, the oracle problem, and the customizability of the whole testing phase.
An Experience of Social Rising of Logical Tools in a Primary School Classroom: The Role of Language
ERIC Educational Resources Information Center
Coppola, Cristina; Mollo, Monica; Pacelli, Tiziana
2011-01-01
In this paper we explore the relationship between language and the developmental processes of logical tools through the analysis, at different levels, of some "linguistic-manipulative" activities in a primary school classroom. We believe that this kind of activity can spur in children a reflection on and a change in their language…
Data-Informed Language Learning
ERIC Educational Resources Information Center
Godwin-Jones, Robert
2017-01-01
Although data collection has been used in language learning settings for some time, it is only in recent decades that large corpora have become available, along with efficient tools for their use. Advances in natural language processing (NLP) have enabled rich tagging and annotation of corpus data, essential for their effective use in language…
Halim, Zahid; Abbas, Ghulam
2015-01-01
Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. For this, a gadget based on image processing and pattern recognition can provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and later translating each gesture into a vocal language. For the purpose of recognizing a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. Microsoft Kinect is the primary tool used to capture the video stream of a user. The proposed method is capable of successfully detecting gestures stored in the dictionary with an accuracy of 91%. The proposed system has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
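The abstract names Dynamic Time Warping as the matcher; here is a textbook DTW sketch in Python over 1-D feature sequences. Real gesture recognition would compare multi-dimensional Kinect joint trajectories, so the scalar distance function here is a simplifying assumption.

```python
def dtw_distance(seq_a, seq_b, dist=lambda x, y: abs(x - y)):
    """Textbook dynamic time warping distance between two sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A stored template gesture vs. a slower live performance of the same gesture:
template = [0.0, 1.0, 2.0, 1.0, 0.0]
observed = [0.0, 0.5, 1.0, 2.0, 2.0, 1.0, 0.0]
print(dtw_distance(template, observed))  # small value -> likely the same gesture
```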
ERIC Educational Resources Information Center
Liou, Hsien-Chin; Chang, Jason S; Chen, Hao-Jan; Lin, Chih-Cheng; Liaw, Meei-Ling; Gao, Zhao-Ming; Jang, Jyh-Shing Roger; Yeh, Yuli; Chuang, Thomas C.; You, Geeng-Neng
2006-01-01
This paper describes the development of an innovative web-based environment for English language learning with advanced data-driven and statistical approaches. The project uses various corpora, including a Chinese-English parallel corpus ("Sinorama") and various natural language processing (NLP) tools to construct effective English…
ERIC Educational Resources Information Center
Guerrero, Mario
2012-01-01
The rapid growth and interest of college students in Computer Mediated Communication and social media have impacted the second language learning and teaching process. This article reports on a pilot project that attempts to analyze the use of Skype as a synchronous communication tool in regard to the attitudes of students in learning a foreign…
Asmuri, Siti Noraini; Brown, Ted; Broom, Lisa J
2016-07-01
Valid translations of time use scales are needed by occupational therapists for use in different cross-cultural contexts to gather relevant data to inform practice and research. The purpose of this study was to describe the process of translating, adapting, and validating the Time Use Diary from its current English language edition into a Malay language version. Five steps of the cross-cultural adaptation process were completed: (i) translation from English into the Malay language by a qualified translator, (ii) synthesis of the translated Malay version, (iii) back-translation from Malay to English by three bilingual speakers, (iv) expert committee review and discussion, and (v) pilot testing of the Malay language version with two participant groups. The translated version was found to be a reliable and valid tool for identifying changes and potential challenges in the time use of older adults. This provides Malaysian occupational therapists with a useful tool for gathering time use data in practice settings and for research purposes.
Narrative Inquiry: A Dynamic Relationship between Culture, Language and Education
ERIC Educational Resources Information Center
Chan, Esther Yim Mei
2017-01-01
Human development is a cultural process, and language, serving as a cultural tool, is closely related to virtually all cognitive changes. The author addresses issues of language in education and suggests that changing the medium of instruction should not be understood as a purely pedagogical decision. The connection between culture and language…
NASA Technical Reports Server (NTRS)
1988-01-01
A NASA-developed software package has played a part in the technical education of students who major in Mechanical Engineering Technology at William Rainey Harper College. Professor Hack has been using Automatically Programmed Tool (APT) software since 1969 in his CAD/CAM (Computer Aided Design and Manufacturing) curriculum. Professor Hack teaches the use of APT programming languages for the control of metal cutting machines. Machine tool instructions are geometry definitions written in the APT language to constitute a "part program." The part program is processed by the machine tool. CAD/CAM students go from writing a program to cutting steel in the course of a semester.
EVA - A Textual Data Processing Tool.
ERIC Educational Resources Information Center
Jakopin, Primoz
EVA, a text processing tool designed to be self-contained and useful for a variety of languages, is described briefly, and its extensive coded character set is illustrated. Features, specifications, and database functions are noted. Its application in development of a Slovenian literary dictionary is also described. (MSE)
Intentions and actions in molecular self-assembly: perspectives on students' language use
NASA Astrophysics Data System (ADS)
Höst, Gunnar E.; Anward, Jan
2017-04-01
Learning to talk science is an important aspect of learning to do science. Given that scientists' language frequently includes intentions and purposes in explanations of unobservable objects and events, teachers must interpret whether learners' use of such language reflects a scientific understanding or inaccurate anthropomorphism and teleology. In the present study, a framework consisting of three 'stances' (Dennett, 1987), intentional, design, and physical, is presented as a powerful tool for analysing students' language use. The aim was to investigate how the framework can be differentiated and used analytically for interpreting students' talk about a molecular process. Semi-structured group discussions and individual interviews about the molecular self-assembly process were conducted with engineering biology/chemistry (n = 15) and biology/chemistry teacher students (n = 6). Qualitative content analysis of transcripts showed that all three stances were employed by students. The analysis also identified subcategories for each stance, and revealed that intentional language with respect to molecular movement and assumptions about design requirements may be potentially problematic areas. Students' exclusion of physical stance explanations may indicate literal anthropomorphic interpretations. Implications for practice include providing teachers with a tool for scaffolding their use of metaphorical language and for supporting students' metacognitive development as scientific language users.
Advances in natural language processing.
Hirschberg, Julia; Manning, Christopher D
2015-07-17
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area.
NLPReViz: an interactive tool for natural language processing on clinical text.
Trivedi, Gaurav; Pham, Phuong; Chapman, Wendy W; Hwa, Rebecca; Wiebe, Janyce; Hochheiser, Harry
2018-01-01
The gap between domain experts and natural language processing expertise is a barrier to extracting understanding from clinical text. We describe a prototype tool for interactive review and revision of natural language processing models of binary concepts extracted from clinical notes. We evaluated our prototype in a user study involving 9 physicians, who used our tool to build and revise models for 2 colonoscopy quality variables. We report changes in performance relative to the quantity of feedback. Using initial training sets as small as 10 documents, expert review led to final F1 scores for the "appendiceal-orifice" variable between 0.78 and 0.91 (with improvements ranging from 13.26% to 29.90%). F1 for "biopsy" ranged between 0.88 and 0.94 (-1.52% to 11.74% improvements). The average System Usability Scale score was 70.56. Subjective feedback also suggests possible design improvements.
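For readers unfamiliar with the reported metric, a minimal sketch of how an F1 score is computed from raw counts; the counts in the example are invented, not taken from the study.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 78 true positives, 10 false positives, 8 misses -> F1 of about 0.90
print(round(f1_score(78, 10, 8), 2))
```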
Construct Validity in TOEFL iBT Speaking Tasks: Insights from Natural Language Processing
ERIC Educational Resources Information Center
Kyle, Kristopher; Crossley, Scott A.; McNamara, Danielle S.
2016-01-01
This study explores the construct validity of speaking tasks included in the TOEFL iBT (e.g., integrated and independent speaking tasks). Specifically, advanced natural language processing (NLP) tools, MANOVA difference statistics, and discriminant function analyses (DFA) are used to assess the degree to which and in what ways responses to these…
Algorithms and programming tools for image processing on the MPP
NASA Technical Reports Server (NTRS)
Reeves, A. P.
1985-01-01
Topics addressed include: data mapping and rotational algorithms for the Massively Parallel Processor (MPP); Parallel Pascal language; documentation for the Parallel Pascal Development system; and a description of the Parallel Pascal language used on the MPP.
p3d--Python module for structural bioinformatics.
Fufezan, Christian; Specht, Michael
2009-08-21
High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this, the Python scripting language is an optimal choice, since its philosophy is to write understandable source code. p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of (a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, (b) set theory, and (c) functions that combine (a) and (b) and use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.
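A hypothetical usage sketch of the kind of human-readable spatial query the abstract describes. The module layout, class name, and query syntax below are assumptions rather than verified p3d API, so consult the p3d documentation before use.

```python
# Hypothetical p3d usage; names and query syntax are assumed from the
# abstract's description, not verified against the real p3d API.
from p3d import protein

pdb = protein.Protein("1crn.pdb")  # parse a PDB file; a BSP tree indexes the atoms

# Human-readable query combining set theory with fast spatial lookup:
# all protein atoms within 5 Angstroms of any water residue.
hits = pdb.query("protein and within 5 of resname HOH")
for atom in hits:
    print(atom)
```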
A Process for the Representation of openEHR ADL Archetypes in OWL Ontologies.
Porn, Alex Mateus; Peres, Leticia Mara; Didonet Del Fabro, Marcos
2015-01-01
ADL is a formal language for expressing archetypes, independent of standards or domain. However, its specification is not precise enough with respect to the specialization and semantics of archetypes, which leads to implementation difficulties and few available tools. Archetypes may be implemented using other languages such as XML or OWL, increasing integration with Semantic Web tools. Exchanging and transforming data can be better implemented with semantics-oriented models, for example using OWL, which is a language to define and instantiate Web ontologies defined by the W3C. OWL permits the user to define significant, detailed, precise and consistent distinctions among classes, properties and relations, ensuring the consistency of knowledge better than ADL techniques do. This paper presents a process for representing openEHR ADL archetypes in OWL ontologies. The process consists of converting ADL archetypes into OWL ontologies and validating the resulting OWL ontologies using mutation testing.
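A minimal sketch of the target representation, emitting one OWL class for a toy archetype with rdflib. The namespace IRI and class names are invented for illustration, and the paper's actual ADL-to-OWL mapping is far richer than this.

```python
# Emit one OWL class for a toy archetype node. The namespace and names
# below are hypothetical; the real mapping covers full archetype structure.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EHR = Namespace("http://example.org/openehr#")  # hypothetical namespace
g = Graph()
g.bind("ehr", EHR)
g.bind("owl", OWL)

g.add((EHR.BloodPressureObservation, RDF.type, OWL.Class))
g.add((EHR.BloodPressureObservation, RDFS.subClassOf, EHR.Observation))
g.add((EHR.BloodPressureObservation, RDFS.label,
       Literal("openEHR-EHR-OBSERVATION.blood_pressure.v1")))

print(g.serialize(format="turtle"))  # the resulting OWL fragment in Turtle
```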
SignMT: An Alternative Language Learning Tool
ERIC Educational Resources Information Center
Ditcharoen, Nadh; Naruedomkul, Kanlaya; Cercone, Nick
2010-01-01
Learning a second language is very difficult, especially, for the disabled; the disability may be a barrier to learn and to utilize information written in text form. We present the SignMT, Thai sign to Thai machine translation system, which is able to translate from Thai sign language into Thai text. In the translation process, SignMT takes into…
Assessing Young Children's Oral Language: Recommendations for Classroom Practice and Policy
ERIC Educational Resources Information Center
Malec, Alesia; Peterson, Shelley Stagg; Elshereif, Heba
2017-01-01
A systematic review of research on oral language assessments for four-to-eight-year-old children was undertaken to support a six-year action research project aimed toward co-creating classroom oral language assessment tools with teachers in northern rural and Indigenous Canadian communities. Through an extensive screening process, 10 studies were…
Swartz, Jordan; Koziatek, Christian; Theobald, Jason; Smith, Silas; Iturrate, Eduardo
2017-05-01
Testing for venous thromboembolism (VTE) is associated with cost and risk to patients (e.g. radiation). To assess the appropriateness of imaging utilization at the provider level, it is important to know that provider's diagnostic yield (percentage of tests positive for the diagnostic entity of interest). However, determining diagnostic yield typically requires either time-consuming, manual review of radiology reports or the use of complex and/or proprietary natural language processing software. The objectives of this study were twofold: 1) to develop and implement a simple, user-configurable, and open-source natural language processing tool to classify radiology reports with high accuracy and 2) to use the results of the tool to design a provider-specific VTE imaging dashboard, consisting of both utilization rate and diagnostic yield. Two physicians reviewed a training set of 400 lower extremity ultrasound (UTZ) and computed tomography pulmonary angiogram (CTPA) reports to understand the language used in VTE-positive and VTE-negative reports. The insights from this review informed the arguments to the five modifiable parameters of the NLP tool. A validation set of 2,000 studies was then independently classified by the reviewers and by the tool; the classifications were compared and the performance of the tool was calculated. The tool was highly accurate in classifying the presence and absence of VTE for both the UTZ (sensitivity 95.7%; 95% CI 91.5-99.8, specificity 100%; 95% CI 100-100) and CTPA reports (sensitivity 97.1%; 95% CI 94.3-99.9, specificity 98.6%; 95% CI 97.8-99.4). The diagnostic yield was then calculated at the individual provider level and the imaging dashboard was created. We have created a novel NLP tool designed for users without a background in computer programming, which has been used to classify venous thromboembolism reports with a high degree of accuracy. The tool is open-source and available for download at http://iturrate.com/simpleNLP. Results obtained using this tool can be applied to enhance quality by presenting information about utilization and yield to providers via an imaging dashboard.
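A minimal sketch of the kind of configurable, rule-based report classification the abstract describes. The phrase lists and the negation window are illustrative assumptions, not the authors' actual parameters; the real tool is available at the URL above.

```python
# Illustrative rule-based classifier in the spirit of the abstract's tool.
# POSITIVE_PHRASES, NEGATION_CUES, and NEGATION_WINDOW are assumed values.
POSITIVE_PHRASES = ["pulmonary embolism", "deep vein thrombosis", "occlusive thrombus"]
NEGATION_CUES = ["no evidence of", "negative for", "without"]
NEGATION_WINDOW = 40  # characters scanned before a finding for a negation cue

def classify_report(report: str) -> str:
    """Label a report VTE-positive if any finding appears un-negated."""
    text = " ".join(report.lower().split())
    for phrase in POSITIVE_PHRASES:
        idx = text.find(phrase)
        while idx != -1:
            window = text[max(0, idx - NEGATION_WINDOW):idx]
            if not any(cue in window for cue in NEGATION_CUES):
                return "positive"
            idx = text.find(phrase, idx + 1)
    return "negative"

print(classify_report("No evidence of acute pulmonary embolism."))       # negative
print(classify_report("Occlusive thrombus in the right femoral vein."))  # positive
```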
Quantifiable and objective approach to organizational performance enhancement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholand, Andrew Joseph; Tausczik, Yla R.
This report describes a new methodology, social language network analysis (SLNA), that combines tools from social language processing and network analysis to identify socially situated relationships between individuals which, though subtle, are highly influential. Specifically, SLNA aims to identify and characterize the nature of working relationships by processing artifacts generated with computer-mediated communication systems, such as instant message texts or emails. Because social language processing is able to identify psychological, social, and emotional processes that individuals are not able to fully mask, social language network analysis can clarify and highlight complex interdependencies between group members, even when these relationships are latent or unrecognized. This report outlines the philosophical antecedents of SLNA, the mechanics of preprocessing, processing, and post-processing stages, and some example results obtained by applying this approach to a 15-month corporate discussion archive.
Software Engineering Laboratory (SEL) compendium of tools, revision 1
NASA Technical Reports Server (NTRS)
1982-01-01
A set of programs used to aid software product development is listed. Known as software tools, such programs include requirements analyzers, design languages, precompilers, code auditors, code analyzers, and software librarians. Abstracts, resource requirements, documentation, processing summaries, and availability are indicated for most tools.
Modeling biochemical transformation processes and information processing with Narrator.
Mandel, Johannes J; Fuss, Hendrik; Palfreyman, Niall M; Dubitzky, Werner
2007-03-27
Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact with, and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of support for an integrative representation of transport, transformation, and biological information processing. Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as a Java software program and is available as open source from http://www.narrator-tool.org.
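As a generic example of the ODE formalism that tools like Narrator can map models onto (not output generated by Narrator itself): a two-step transformation A -> B -> C under assumed mass-action kinetics, integrated with SciPy.

```python
# Generic ODE mapping of a two-step biochemical transformation A -> B -> C.
# Rate constants and initial concentrations are assumed example values.
from scipy.integrate import odeint
import numpy as np

k1, k2 = 0.5, 0.2  # assumed mass-action rate constants

def model(y, t):
    a, b, c = y
    return [-k1 * a,           # A is consumed
            k1 * a - k2 * b,   # B is produced from A and consumed
            k2 * b]            # C accumulates

t = np.linspace(0, 20, 100)
trajectory = odeint(model, [1.0, 0.0, 0.0], t)
print(trajectory[-1])  # concentrations of A, B, C at t = 20
```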
Odean, Rosalie; Nazareth, Alina; Pruden, Shannon M.
2015-01-01
Developmental systems theory posits that development cannot be segmented by influences acting in isolation, but should be studied through a scientific lens that highlights the complex interactions between these forces over time (Overton, 2013a). This poses a unique challenge for developmental psychologists studying complex processes like language development. In this paper, we advocate for the combining of highly sophisticated data collection technologies in an effort to move toward a more systemic approach to studying language development. We investigate the efficiency and appropriateness of combining eye-tracking technology and the LENA (Language Environment Analysis) system, an automated language analysis tool, in an effort to explore the relation between language processing in early development, and external dynamic influences like parent and educator language input in the home and school environments. Eye-tracking allows us to study language processing via eye movement analysis; these eye movements have been linked to both conscious and unconscious cognitive processing, and thus provide one means of evaluating cognitive processes underlying language development that does not require the use of subjective parent reports or checklists. The LENA system, on the other hand, provides automated language output that describes a child’s language-rich environment. In combination, these technologies provide critical information not only about a child’s language processing abilities but also about the complexity of the child’s language environment. Thus, when used in conjunction these technologies allow researchers to explore the nature of interacting systems involved in language development. PMID:26379591
Application programs written by using customizing tools of a computer-aided design system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, X.; Huang, R.; Juricic, D.
1995-12-31
Customizing tools of computer-aided design systems have been developed to such a degree that they have become equivalent to powerful higher-level programming languages especially suitable for graphics applications. Two examples of application programs written using AutoCAD's customizing tools are given in some detail to illustrate their power. One uses the AutoLISP list-processing language to develop an application program that produces four views of a given solid model. The other uses the AutoCAD Development System, based on program modules written in C, to produce an application program that renders a freehand sketch from a given CAD drawing.
Talking the Test: Using Verbal Report Data in Looking at the Processing of Cloze Tasks.
ERIC Educational Resources Information Center
Gibson, Bob
1997-01-01
The use of verbal report procedures as a research tool for gaining insight into the language learning process is discussed. Specifically, having second language students complete think-aloud protocols when they take cloze tests can provide useful information about what is being measured and how it has been learned. Use of such introspective…
A Qualitative Analysis Framework Using Natural Language Processing and Graph Theory
ERIC Educational Resources Information Center
Tierney, Patrick J.
2012-01-01
This paper introduces a method of extending natural language-based processing of qualitative data analysis with the use of a very quantitative tool--graph theory. It is not an attempt to convert qualitative research to a positivist approach with a mathematical black box, nor is it a "graphical solution". Rather, it is a method to help qualitative…
ERIC Educational Resources Information Center
Duran, Nicholas D.; Hall, Charles; McCarthy, Philip M.; McNamara, Danielle S.
2010-01-01
The words people use and the way they use them can reveal a great deal about their mental states when they attempt to deceive. The challenge for researchers is how to reliably distinguish the linguistic features that characterize these hidden states. In this study, we use a natural language processing tool called Coh-Metrix to evaluate deceptive…
Tool-use-associated sound in the evolution of language.
Larsson, Matz
2015-09-01
Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. In the present paper, it is hypothesized that the production and perception of sound, particularly of incidental sound of locomotion (ISOL) and tool-use sound (TUS), also contributed. Human bipedalism resulted in rhythmic and more predictable ISOL. It has been proposed that this stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations and to mimic natural sounds. Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use. A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties and/or meaning could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved. ISOL and tool-use-related sound are worth further exploration.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
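As a rough illustration of one of the global sensitivity measures named above, the sketch below computes partial rank correlation coefficients (PRCC) for a toy model in Python; the function names and the toy model are illustrative assumptions, not SBML-SAT's actual interface.

```python
# Sketch of a PRCC computation on a toy model; hypothetical helper names.
import numpy as np
from scipy.stats import rankdata

def prcc(samples, output):
    """PRCC of each sampled parameter against a scalar model output.

    samples: (n, k) array of sampled parameter values
    output:  (n,) array of corresponding model outputs
    """
    n, k = samples.shape
    ranks = np.column_stack([rankdata(samples[:, j]) for j in range(k)])
    y = rankdata(output)
    coeffs = np.empty(k)
    for j in range(k):
        # Partial out the (linear) effect of the other parameters' ranks.
        others = np.column_stack([np.ones(n), np.delete(ranks, j, axis=1)])
        rx = ranks[:, j] - others @ np.linalg.lstsq(others, ranks[:, j], rcond=None)[0]
        ry = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
        coeffs[j] = np.corrcoef(rx, ry)[0, 1]
    return coeffs

# Toy "model": strongly driven by parameter 0, independent of parameter 2.
rng = np.random.default_rng(0)
theta = rng.uniform(0.1, 10.0, size=(1000, 3))
y = theta[:, 0] ** 2 + 0.3 * theta[:, 1]
print(prcc(theta, y))  # |PRCC| near 1 for parameter 0, near 0 for parameter 2
```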
Natural Language as a Tool for Analyzing the Proving Process: The Case of Plane Geometry Proof
ERIC Educational Resources Information Center
Robotti, Elisabetta
2012-01-01
In the field of human cognition, language plays a special role that is connected directly to thinking and mental development (e.g., Vygotsky, "1938"). Thanks to "verbal thought", language allows humans to go beyond the limits of immediately perceived information, to form concepts and solve complex problems (Luria, "1975"). So, it appears language…
ERIC Educational Resources Information Center
Baser, Derya; Kopcha, Theodore J.; Ozden, M. Yasar
2016-01-01
This paper reports the development and validation process of a self-assessment survey that examines technological pedagogical content knowledge (TPACK) among preservice teachers learning to teach English as a foreign language (EFL). The survey, called TPACK-EFL, aims to provide an assessment tool for preservice foreign language teachers that…
Deburring: an annotated bibliography. Volume V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillespie, L.K.
1978-01-01
An annotated summary of 204 articles and publications on burrs, burr prevention and deburring is presented. Thirty-seven deburring processes are listed. Entries cited include English, Russian, French, Japanese and German language articles. Entries are indexed by deburring processes, author, and language. Indexes also indicate which references discuss equipment and tooling, how to use a process, economics, burr properties, and how to design to minimize burr problems. Research studies are identified as are the materials deburred.
Extraction of UMLS® Concepts Using Apache cTAKES™ for German Language.
Becker, Matthias; Böckmann, Britta
2016-01-01
Automatic information extraction of medical concepts and their classification with semantic standards from medical reports is useful for standardization and for clinical research. This paper presents an approach to UMLS concept extraction with a customized natural language processing pipeline for German clinical notes using Apache cTAKES. The objective is to test whether the natural language processing tool is suitable for German, i.e., whether it can identify UMLS concepts and map them to SNOMED CT. The German UMLS database and German OpenNLP models extended the natural language processing pipeline, so the pipeline can normalize to domain ontologies such as SNOMED CT using the German concepts. For testing, the ShARe/CLEF eHealth 2013 training dataset translated into German was used. The implemented algorithms were tested with a set of 199 German reports, obtaining an average F1 measure of 0.36 without German stemming or pre- and post-processing of the reports.
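A minimal sketch of the dictionary-lookup step such a pipeline performs is shown below; real cTAKES is a Java/UIMA system, so this Python stand-in, including the tiny term table, is purely illustrative.

```python
# Dictionary-lookup sketch of concept extraction; the term table is a tiny
# illustrative stand-in for the German UMLS database, not cTAKES itself.
GERMAN_UMLS = {
    "herzinfarkt": ("C0027051", "22298006"),    # myocardial infarction
    "bluthochdruck": ("C0020538", "38341003"),  # hypertension
}

def extract_concepts(text, max_span=4):
    """Greedy longest-match lookup of dictionary terms in a clinical note."""
    tokens = text.lower().split()
    hits, i = [], 0
    while i < len(tokens):
        for j in range(min(len(tokens), i + max_span), i, -1):
            span = " ".join(tokens[i:j])
            if span in GERMAN_UMLS:
                hits.append((span, *GERMAN_UMLS[span]))
                i = j
                break
        else:
            i += 1  # no dictionary term starts at this token
    return hits

print(extract_concepts("Patient mit Bluthochdruck und früherem Herzinfarkt"))
```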
Processing sequence annotation data using the Lua programming language.
Ueno, Yutaka; Arita, Masanori; Kumagai, Toshitaka; Asai, Kiyoshi
2003-01-01
The data processing language in a graphical software tool that manages sequence annotation data from genome databases should provide flexible functions for the tasks in molecular biology research. Among currently available languages we adopted the Lua programming language. It fulfills our requirements to perform computational tasks for sequence map layouts, i.e. the handling of data containers, symbolic reference to data, and a simple programming syntax. Upon importing a foreign file, the original data are first decomposed in the Lua language while maintaining the original data schema. The converted data are parsed by the Lua interpreter and the contents are stored in our data warehouse. Then, portions of annotations are selected and arranged into our catalog format to be depicted on the sequence map. Our sequence visualization program was successfully implemented, embedding the Lua language for processing of annotation data and layout script. The program is available at http://staff.aist.go.jp/yutaka.ueno/guppy/.
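The decompose-into-the-host-language pattern described above can be sketched in Python (the paper itself embeds Lua): a foreign record is rewritten as a literal of the host language and parsed by the interpreter itself. The GFF-like record layout is an assumption.

```python
# Data-as-code sketch: an imported annotation record becomes a host-language
# literal, so the interpreter does the parsing while the schema is kept.
import ast

record_src = "{'seqid': 'chr1', 'type': 'gene', 'start': 1300, 'end': 9000}"
record = ast.literal_eval(record_src)  # the interpreter parses the record
print(record["type"], record["end"] - record["start"])
```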
1987-06-01
…evaluation and chip layout planning for VLSI digital systems. A high-level applicative (functional) language, implemented at UCLA, allows combining of … operating system. The complexity of VLSI requires the application of CAD tools at all levels of the design process. In order to be effective, these tools must be adaptive to the specific design. In this project we studied a design method based on the use of applicative languages
NASA Technical Reports Server (NTRS)
Leveson, Nancy G.; Heimdahl, Mats P. E.; Reese, Jon Damon
1999-01-01
Previously, we defined a blackbox formal system modeling language called RSML (Requirements State Machine Language). The language was developed over several years while specifying the system requirements for a collision avoidance system for commercial passenger aircraft. During the language development, we received continual feedback and evaluation by FAA employees and industry representatives, which helped us to produce a specification language that is easily learned and used by application experts. Since the completion of the RSML project, we have continued our research on specification languages. This research is part of a larger effort to investigate the more general problem of providing tools to assist in developing embedded systems. Our latest experimental toolset is called SpecTRM (Specification Tools and Requirements Methodology), and the formal specification language is SpecTRM-RL (SpecTRM Requirements Language). This paper describes what we have learned from our use of RSML and how those lessons were applied to the design of SpecTRM-RL. We discuss our goals for SpecTRM-RL and the design features that support each of these goals.
Electronic processing of informed consents in a global pharmaceutical company environment.
Vishnyakova, Dina; Gobeill, Julien; Oezdemir-Zaech, Fatma; Kreim, Olivier; Vachon, Therese; Clade, Thierry; Haenning, Xavier; Mikhailov, Dmitri; Ruch, Patrick
2014-01-01
We present an electronic capture tool to process informed consents, which must be recorded when running a clinical trial. This tool aims at extracting information expressing the duration of the consent given by the patient to authorize the exploitation of biomarker-related information collected during clinical trials. The system integrates a language detection module (LDM) to route a document to the appropriate information extraction module (IEM). The IEM is based on language-specific sets of linguistic rules for the identification of relevant textual facts. The achieved accuracy of both the LDM and IEM is 99%. The architecture of the system is described in detail.
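The route-then-extract architecture can be sketched as follows; the stopword lists and the single extraction rule per language are toy assumptions, not the authors' actual rule sets.

```python
# Route-then-extract sketch: an LDM guesses the language, then a language-
# specific IEM rule extracts the consent duration.
import re

STOPWORDS = {
    "en": {"the", "and", "consent", "of", "for"},
    "fr": {"le", "et", "consentement", "de", "pour"},
}

RULES = {
    "en": re.compile(r"for a period of (\d+) years?"),
    "fr": re.compile(r"pour une durée de (\d+) ans?"),
}

def detect_language(text):
    words = set(text.lower().split())
    return max(STOPWORDS, key=lambda lang: len(words & STOPWORDS[lang]))

def extract_consent_duration(text):
    lang = detect_language(text)               # LDM routes the document
    match = RULES[lang].search(text.lower())   # IEM applies its rules
    return lang, int(match.group(1)) if match else None

print(extract_consent_duration(
    "The patient grants consent to biomarker use for a period of 15 years."))
```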
Development of a comprehensive software engineering environment
NASA Technical Reports Server (NTRS)
Hartrum, Thomas C.; Lamont, Gary B.
1987-01-01
The generation of a set of tools for the software lifecycle is a recurring theme in the software engineering literature. The development of such tools and their integration into a software development environment is a difficult task because of the magnitude (number of variables) and the complexity (combinatorics) of the software lifecycle process. Development of a global approach was initiated in 1982 as the Software Development Workbench (SDW). Continuing efforts focus on tool development, tool integration, human interfacing, data dictionaries, and testing algorithms. Current efforts are emphasizing natural language interfaces, expert system software development associates and distributed environments with Ada as the target language. The current implementation of the SDW is on a VAX-11/780. Other software development tools are being networked through engineering workstations.
Modeling biochemical transformation processes and information processing with Narrator
Mandel, Johannes J; Fuß, Hendrik; Palfreyman, Niall M; Dubitzky, Werner
2007-01-01
Background Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact with, and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of support for an integrative representation of transport, transformation and biological information processing. Results Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the systems biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Conclusion Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as a Java software program and is available as open source from http://www.narrator-tool.org. PMID:17389034
Clinical Natural Language Processing in languages other than English: opportunities and challenges.
Névéol, Aurélie; Dalianis, Hercules; Velupillai, Sumithra; Savova, Guergana; Zweigenbaum, Pierre
2018-03-30
Natural language processing applied to clinical text or aimed at a clinical outcome has been thriving in recent years. This paper offers the first broad overview of clinical Natural Language Processing (NLP) for languages other than English. Recent studies are summarized to offer insights and outline opportunities in this area. We envision three groups of intended readers: (1) NLP researchers leveraging experience gained in other languages, (2) NLP researchers faced with establishing clinical text processing in a language other than English, and (3) clinical informatics researchers and practitioners looking for resources in their languages in order to apply NLP techniques and tools to clinical practice and/or investigation. We review work in clinical NLP in languages other than English. We classify these studies into three groups: (i) studies describing the development of new NLP systems or components de novo, (ii) studies describing the adaptation of NLP architectures developed for English to another language, and (iii) studies focusing on a particular clinical application. We show the advantages and drawbacks of each method, and highlight the appropriate application context. Finally, we identify major challenges and opportunities that will affect the impact of NLP on clinical practice and public health studies in a context that encompasses English as well as other languages.
Stimulating Language: Insights from TMS
ERIC Educational Resources Information Center
Devlin, Joseph T.; Watkins, Kate E.
2007-01-01
Fifteen years ago, Pascual-Leone and colleagues used transcranial magnetic stimulation (TMS) to investigate speech production in pre-surgical epilepsy patients and in doing so, introduced a novel tool into language research. TMS can be used to non-invasively stimulate a specific cortical region and transiently disrupt information processing. These…
Validation of a Videoconferenced Speaking Test
ERIC Educational Resources Information Center
Kim, Jungtae; Craig, Daniel A.
2012-01-01
Videoconferencing offers new opportunities for language testers to assess speaking ability in low-stakes diagnostic tests. To be considered a trusted testing tool in language testing, a test should be examined employing appropriate validation processes [Chapelle, C.A., Jamieson, J., & Hegelheimer, V. (2003). "Validation of a web-based ESL…
ERIC Educational Resources Information Center
Bergil, Ayfer Su; Sariçoban, Arif
2017-01-01
The current practices in the field of foreign language teacher education have a heavy inclination to make use of traditional means especially throughout the assessment process of student teachers at foreign language departments. Observing the world in terms of teacher education makes it urgent to include more reflective and objective tools in…
ERIC Educational Resources Information Center
Alzaidiyeen, Naser Jamil
2017-01-01
The role of educational technologies, in the current processes of teaching and learning is becoming more prevalent and accepted in terms of being a mainstream pedagogical tool. During the past three decades, ICT has found its way into English language classrooms. In this study, a quantitative design was used to examine the attitudes of the English…
Lopopolo, Alessandro; Frank, Stefan L; van den Bosch, Antal; Willems, Roel M
2017-01-01
Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words form the basis of probabilistic measures such as surprisal and perplexity, which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words, their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.
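As a concrete illustration of the model-derived measures mentioned above, the sketch below computes word surprisal and perplexity from an add-one-smoothed bigram model; the toy corpus and the smoothing choice are illustrative assumptions, not the study's language models.

```python
# Bigram surprisal and perplexity on a toy corpus.
import math
from collections import Counter

corpus = "the dog chased the cat the cat ran away".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)  # vocabulary size for add-one smoothing

def cond_prob(w, prev):
    # Smoothed conditional probability P(w | prev)
    return (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

def surprisal(w, prev):
    # Surprisal in bits: -log2 P(w | prev); high = unexpected word
    return -math.log2(cond_prob(w, prev))

sentence = "the cat chased the dog".split()
s = [surprisal(w, prev) for prev, w in zip(sentence, sentence[1:])]
print(s)
print(2 ** (sum(s) / len(s)))  # perplexity = 2^(mean surprisal)
```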
Analysing Culture and Interculture in Saudi EFL Textbooks: A Corpus Linguistic Approach
ERIC Educational Resources Information Center
Almujaiwel, Sultan
2018-01-01
This paper combines corpus processing tools to investigate the cultural elements of Saudi education of English as a foreign language (EFL). The latest Saudi EFL textbooks (2016 onwards) are available in researchable PDF formats. This helps process them through corpus search software tools. The method adopted is based on analysing 20 cultural…
Khalifa, Abdulrahman; Meystre, Stéphane
2015-12-01
The 2014 i2b2 natural language processing shared task focused on identifying cardiovascular risk factors such as high blood pressure, high cholesterol levels, obesity and smoking status among other factors found in health records of diabetic patients. In addition, the task involved detecting medications and time information associated with the extracted data. This paper presents the development and evaluation of a natural language processing (NLP) application conceived for this i2b2 shared task. For increased efficiency, the application's main components were adapted from two existing NLP tools implemented in the Apache UIMA framework: Textractor (for dictionary-based lookup) and cTAKES (for preprocessing and smoking status detection). The application achieved a final (micro-averaged) F1-measure of 87.5% on the final evaluation test set. Our attempt was mostly based on existing tools adapted with minimal changes and allowed for satisfying performance with limited development effort.
The writing process: A powerful approach for the language-disabled student.
Moulton, J R; Bader, M S
1985-01-01
Our understanding of the writing process can be a powerful tool for teaching language-disabled students the "how" of writing. Direct, explicit instruction in writing process helps these students learn to explore their ideas and to manage the multiple demands of writing. A case study of one student, Jeff, demonstrates how we structure the stages of writing: prewriting, planning, drafting, revising, and proofreading. When these stages are clearly defined and involve specific skills, language-disabled students can reach beyond their limitations and strengthen their expression. The case study of Jeff reveals the development of his sense of control and his regard for himself as a writer.
Digital Stories: A 21st-Century Communication Tool for the English Language Classroom
ERIC Educational Resources Information Center
Brenner, Kathy
2014-01-01
Digital storytelling can motivate and engage students and create a community in the classroom. This article lays out a 12-week digital storytelling project, describing the process in detail, including assessment, and pinpointing issues and challenges as well as benefits the project affords English language students.
Cryptanalysis on classical cipher based on Indonesian language
NASA Astrophysics Data System (ADS)
Marwati, R.; Yulianti, K.
2018-05-01
Cryptanalysis is the process of breaking a cipher, i.e., decrypting a message without authorized access to the key. This paper discusses the encryption of text with some classical ciphers, the breaking of substitution and stream ciphers, and ways of increasing their security. Encryption and ciphering are based on Indonesian-language text. Microsoft Word and Microsoft Excel were chosen as the ciphering and breaking tools.
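A classical frequency-analysis attack on a monoalphabetic substitution cipher can be sketched as follows; the expected letter ranking is a rough illustrative assumption, not a measured Indonesian letter distribution.

```python
# Letter-frequency attack on a monoalphabetic substitution cipher.
from collections import Counter

EXPECTED = "aneitkmsudrlgpbohycjwfvzxq"  # most frequent plain letters first

def frequency_key_guess(ciphertext):
    letters = [c for c in ciphertext.lower() if c.isalpha()]
    observed = [c for c, _ in Counter(letters).most_common()]
    # Map each cipher letter to the expected plain letter of the same rank.
    return {c: p for c, p in zip(observed, EXPECTED)}

def decrypt(ciphertext, key):
    return "".join(key.get(c, c) for c in ciphertext.lower())

# Frequency ranks only stabilize on long ciphertexts; on short strings the
# guessed key is mostly wrong and must be refined by hand.
key = frequency_key_guess("zkz gzkzs wzm zmzp")
print(decrypt("zkz gzkzs wzm zmzp", key))
```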
Language Supports for Journal Abstract Writing across Disciplines
ERIC Educational Resources Information Center
Liou, H.-C.; Yang, P.-C.; Chang, J. S.
2012-01-01
Various writing assistance tools have been developed through efforts in the areas of natural language processing with different degrees of success of curriculum integration depending on their functional rigor and pedagogical designs. In this paper, we developed a system, WriteAhead, that provides six types of suggestions when non-native graduate…
First Toronto Conference on Database Users. Systems that Enhance User Performance.
ERIC Educational Resources Information Center
Doszkocs, Tamas E.; Toliver, David
1987-01-01
The first of two papers discusses natural language searching as a user performance enhancement tool, focusing on artificial intelligence applications for information retrieval and problems with natural language processing. The second presents a conceptual framework for further development and future design of front ends to online bibliographic…
XML: A Publisher's Perspective.
ERIC Educational Resources Information Center
Andrews, Timothy M.
1999-01-01
Explains eXtensible Markup Language (XML) and describes how Dow Jones Interactive is using it to improve the news-gathering and dissemination process through intranets and the World Wide Web. Discusses benefits of using XML, the relationship to HyperText Markup Language (HTML), lack of available software tools and industry support, and future…
Self-Regulated Learning in the Digital Age: An EFL Perspective
ERIC Educational Resources Information Center
Sahin Kizil, Aysel; Savran, Zehra
2016-01-01
Research on the role of Information and Communication Technologies (ICT) in language learning has ascertained heretofore various potentials ranging from metacognitive domain to skill-based practices. One area in which the potentials of ICT tools requires further exploration is self-regulated language learning, an active, constructive process in…
Service Oriented Architecture for Coast Guard Command and Control
2007-03-01
… BPEL4WS: Business Process Execution Language for Web Services; BPMN: Business Process Modeling Notation; CASP: Computer Aided Search Planning … Business Process Modeling Notation (BPMN) provides a standardized graphical notation for drawing business processes in a workflow. Software tools…
ERIC Educational Resources Information Center
Ambrose, Regina Maria; Palpanathan, Shanthini
2017-01-01
Computer-assisted language learning (CALL) has evolved through various stages in both technology as well as the pedagogical use of technology (Warschauer & Healey, 1998). Studies show that the CALL trend has facilitated students in their English language writing with useful tools such as computer based activities and word processing. Students…
NASA Technical Reports Server (NTRS)
Green, Jan
2009-01-01
This viewgraph presentation gives a detailed description of the avionics associated with the Space Shuttle's data processing system and its usage of z/OS. The contents include: 1) Mission, Products, and Customers; 2) Facility Overview; 3) Shuttle Data Processing System; 4) Languages and Compilers; 5) Application Tools; 6) Shuttle Flight Software Simulator; 7) Software Development and Build Tools; and 8) Fun Facts and Acronyms.
Kulhánek, Tomáš; Ježek, Filip; Mateják, Marek; Šilar, Jan; Kofránek, Jiří
2015-08-01
This work reports experiences of teaching modeling and simulation to graduate students in the field of biomedical engineering. We emphasize the acausal, object-oriented modeling technique and have moved from teaching the block-oriented tool MATLAB Simulink to the acausal, object-oriented Modelica language, which can express the structure of the system rather than a process of computation. However, the block-oriented approach is also possible in Modelica, and students have a tendency to express the process of computation. Using exemplar acausal domains and approaches allows students to understand the modeled problems much more deeply. The causality of the computation is derived automatically by the simulation tool.
Using bio.tools to generate and annotate workbench tool descriptions
Hillion, Kenzo-Hugo; Kuzmin, Ivan; Khodak, Anton; Rasche, Eric; Crusoe, Michael; Peterson, Hedi; Ison, Jon; Ménager, Hervé
2017-01-01
Workbench and workflow systems such as Galaxy, Taverna, Chipster, or Common Workflow Language (CWL)-based frameworks facilitate access to bioinformatics tools in a user-friendly, scalable and reproducible way. Still, the integration of tools in such environments remains a cumbersome, time-consuming and error-prone process. A major consequence is the incomplete or outdated description of tools that are often missing important information, including parameters and metadata such as publication or links to documentation. ToolDog (Tool DescriptiOn Generator) facilitates the integration of tools - which have been registered in the ELIXIR tools registry (https://bio.tools) - into workbench environments by generating tool description templates. ToolDog includes two modules. The first module analyses the source code of the bioinformatics software with language-specific plugins, and generates a skeleton for a Galaxy XML or CWL tool description. The second module is dedicated to the enrichment of the generated tool description, using metadata provided by bio.tools. This last module can also be used on its own to complete or correct existing tool descriptions with missing metadata. PMID:29333231
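The template-generation idea can be sketched as below: a bio.tools-style metadata record is turned into a minimal CWL CommandLineTool skeleton. The record and field names mimic a registry entry and are illustrative assumptions, not ToolDog's actual output.

```python
# Turning registry metadata into a CWL tool-description skeleton.
import json

biotools_entry = {
    "name": "samtools",
    "description": "Utilities for the SAM/BAM sequence alignment formats.",
    "homepage": "http://www.htslib.org/",
}

def cwl_skeleton(entry):
    """Build a minimal CWL CommandLineTool description from registry metadata."""
    return {
        "cwlVersion": "v1.0",
        "class": "CommandLineTool",
        "baseCommand": entry["name"],
        "label": entry["name"],
        "doc": entry["description"],  # enrichment from registry metadata
        "inputs": {},                 # left for the tool author to complete
        "outputs": {},
    }

print(json.dumps(cwl_skeleton(biotools_entry), indent=2))
```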
Semantic biomedical resource discovery: a Natural Language Processing framework.
Sfakianaki, Pepi; Koumakis, Lefteris; Sfakianakis, Stelios; Iatraki, Galatia; Zacharioudakis, Giorgos; Graf, Norbert; Marias, Kostas; Tsiknakis, Manolis
2015-09-30
A plethora of publicly available biomedical resources currently exist and are constantly increasing at a fast rate. In parallel, specialized repositories are being developed, indexing numerous clinical and biomedical tools. The main drawback of such repositories is the difficulty of locating appropriate resources for a clinical or biomedical decision task, especially for users who are not Information Technology experts. Moreover, although NLP research in the clinical domain has been active since the 1960s, progress in the development of NLP applications has been slow and lags behind progress in the general NLP domain. The aim of the present study is to investigate the use of semantics for annotating biomedical resources with domain-specific ontologies, and to exploit Natural Language Processing methods to empower non-expert users to search efficiently for biomedical resources using natural language. A Natural Language Processing engine which can "translate" free text into targeted queries, automatically transforming a clinical research question into a request description that contains only ontology terms, has been implemented. The implementation is based on information extraction techniques for natural language text, guided by integrated ontologies. Furthermore, knowledge from robust text mining methods has been incorporated to map descriptions onto suitable domain ontologies, ensuring that the biomedical resource descriptions are domain oriented and enhancing the accuracy of service discovery. The framework is freely available as a web application at http://calchas.ics.forth.gr/. For our experiments, a range of clinical questions was established based on descriptions of clinical trials from the ClinicalTrials.gov registry as well as recommendations from clinicians. Domain experts manually identified the tools in a tool repository that are suitable for addressing the clinical questions at hand, either individually or as a set of tools forming a computational pipeline. The results were compared with those obtained from an automated discovery of candidate biomedical tools, using precision and recall measurements for the evaluation. Our results indicate that the proposed framework has high precision and low recall, implying that the returned results are mostly relevant but that some relevant resources are missed. Adequate biomedical ontologies are already available, and existing NLP tools and biomedical annotation systems are sufficient for implementing a biomedical resource discovery framework based on the semantic annotation of resources and the use of NLP techniques. The results of the present study demonstrate the clinical utility of the proposed framework, which aims to bridge the gap between clinical questions in natural language and efficient dynamic biomedical resource discovery.
Modeling languages for biochemical network simulation: reaction vs equation based approaches.
Wiechert, Wolfgang; Noack, Stephan; Elsheikh, Atya
2010-01-01
Biochemical network modeling and simulation is an essential task in any systems biology project. The systems biology markup language (SBML) was established as a standardized model exchange language for mechanistic models. A specific strength of SBML is that numerous tools for formulating, processing, simulating and analyzing models are freely available. Interestingly, in the field of multidisciplinary simulation, the problem of model exchange between different simulation tools occurred much earlier. Several general modeling languages like Modelica were developed in the 1990s. Modelica enables an equation-based modular specification of arbitrary hierarchical differential algebraic equation models. Moreover, libraries for special application domains can be rapidly developed. This contribution compares the reaction-based approach of SBML with the equation-based approach of Modelica and explains the specific strengths of both tools. Several biological examples illustrating essential SBML and Modelica concepts are given. The chosen criteria for tool comparison are flexibility of constraint specification, different modeling flavors, and hierarchical, modular and multidisciplinary modeling. Additionally, support for spatially distributed systems, event handling and network analysis features is discussed. As a major result it is shown that the choice of the modeling tool has a strong impact on the expressivity of the specified models but also strongly depends on the requirements of the application context.
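The relationship between the two styles can be sketched in Python: a reaction-based description (SBML's flavor) is mechanically translated into the equation-based view (Modelica's flavor) as an ODE right-hand side. The toy reaction and the integrator choice are illustrative assumptions.

```python
# Reaction-based spec translated into an equation-based ODE system.
# Toy reaction A -> B with mass-action kinetics, rate constant 0.5.
import numpy as np
from scipy.integrate import solve_ivp

species = ["A", "B"]
reactions = [
    {"stoich": {"A": -1, "B": +1}, "rate": lambda c: 0.5 * c["A"]},
]

def rhs(t, y):
    # Equation-based view: dy_i/dt = sum_j stoich_ij * v_j(y)
    c = dict(zip(species, y))
    dydt = np.zeros(len(species))
    for rxn in reactions:
        v = rxn["rate"](c)
        for s, coeff in rxn["stoich"].items():
            dydt[species.index(s)] += coeff * v
    return dydt

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0])
print(sol.y[:, -1])  # A decays toward 0 while B rises toward 1
```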
Deburring: an annotated bibliography. Volume VI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillespie, L.K.
1980-07-01
An annotated summary of 138 articles and publications on burrs, burr prevention and deburring is presented. Thirty-seven deburring processes are listed. Entries cited include English, Russian, French, Japanese, and German language articles. Entries are indexed by deburring processes, author, and language. Indexes also indicate which references discuss equipment and tooling, how to use a process, economics, burr properties, and how to design to minimize burr problems. Research studies are identified as are the materials deburred.
Adaptation of a Control Center Development Environment for Industrial Process Control
NASA Technical Reports Server (NTRS)
Killough, Ronnie L.; Malik, James M.
1994-01-01
In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.
Alternative Outlining Techniques for ESL Composition.
ERIC Educational Resources Information Center
Hubbard, Philip
Two methods of outlining are suggested for college-level students of English as a second language (ESL) who need the tools to master rhetorical patterns of academic written English that may be very different from those in their native languages. The two outlining techniques separate the four logically distinct tasks in the process of outlining:…
Formulaic Language in Computer-Supported Communication: Theory Meets Reality.
ERIC Educational Resources Information Center
Wray, Alison
2002-01-01
Attempts to validate a psycholinguistic model of language processing. One experiment designed to provide insight into the model is TALK, a system developed to promote conversational fluency in non-speaking individuals, designed primarily for people with cerebral palsy and motor neuron disease. TALK is demonstrated to be a viable tool for…
Video-Sharing Websites: Tools for Developing Pattern Languages in Children
ERIC Educational Resources Information Center
An, Heejung; Seplocha, Holly
2010-01-01
Children and their families and teachers use video-sharing websites for new types of learning and information sharing. With the expansion of the World Wide Web, the ability to freely exchange pattern-based information has grown exponentially. As noted by Alexander, "pattern language development" is a process in which communities freely share…
Bilingualism--A Sanguine Step in ELT
ERIC Educational Resources Information Center
Anil, Beena
2014-01-01
Bilingualism can be used as a teaching aid in teaching and learning English language in an Indian classroom and to improve the language accuracy, fluency, and clarity of learners. Bilingualism can aid the teaching and learning process productively in the classroom. In India, most of the students consider English as a subject rather than a tool of…
Computer-Mediated Communication as an Autonomy-Enhancement Tool for Advanced Learners of English
ERIC Educational Resources Information Center
Wach, Aleksandra
2012-01-01
This article examines the relevance of modern technology for the development of learner autonomy in the process of learning English as a foreign language. Computer-assisted language learning and computer-mediated communication (CMC) appear to be particularly conducive to fostering autonomous learning, as they naturally incorporate many elements of…
Fujishiro, Kaori; Gong, Fang; Baron, Sherry; Jacobson, C Jeffery; DeLaney, Sheli; Flynn, Michael; Eggerth, Donald E
2010-02-01
The increasing ethnic diversity of the US workforce has created a need for research tools that can be used with multi-lingual worker populations. Developing multi-language questionnaire items is a complex process; however, very little has been documented in the literature. Commonly used English items from the Job Content Questionnaire and Quality of Work Life Questionnaire were translated by two interdisciplinary bilingual teams and cognitively tested in interviews with English-, Spanish-, and Chinese-speaking workers. Common problems across languages mainly concerned response format. Language-specific problems required more conceptual than literal translations. Some items were better understood by non-English speakers than by English speakers. De-centering (i.e., modifying the English original to correspond with translation) produced better understanding for one item. Translating questionnaire items and achieving equivalence across languages require various kinds of expertise. Backward translation itself is not sufficient. More research efforts should be concentrated on qualitative approaches to developing useful research tools.
Capturing Communication Supporting Classrooms: The Development of a Tool and Feasibility Study
ERIC Educational Resources Information Center
Dockrell, Julie E.; Bakopoulou, Ioanna; Law, James; Spencer, Sarah; Lindsay, Geoff
2015-01-01
There is an increasing emphasis on supporting the oral language needs of children in the classroom. A variety of different measures have been developed to assist this process but few have been derived systematically from the available research evidence. A Communication Supporting Classrooms Observation Tool (CsC Observation Tool) for children aged…
APT - NASA ENHANCED VERSION OF AUTOMATICALLY PROGRAMMED TOOL SOFTWARE - STAND-ALONE VERSION
NASA Technical Reports Server (NTRS)
Premo, D. A.
1994-01-01
The APT code is one of the most widely used software tools for complex numerically controlled (N/C) machining. APT is an acronym for Automatically Programmed Tools and is used to denote both a language and the computer software that processes that language. Development of the APT language and software system was begun over twenty years ago as a U.S. government-sponsored industry and university research effort. APT is a "problem oriented" language that was developed for the explicit purpose of aiding the programming of N/C machine tools. Machine-tool instructions and geometry definitions are written in the APT language to constitute a "part program." The APT part program is processed by the APT software to produce a cutter location (CL) file. This CL file may then be processed by user supplied post processors to convert the CL data into a form suitable for a particular N/C machine tool. This June, 1989 offering of the APT system represents an adaptation, with enhancements, of the public domain version of APT IV/SSX8 to the DEC VAX-11/780 for use by the Engineering Services Division of the NASA Goddard Space Flight Center. Enhancements include the super pocket feature which allows concave and convex polygon shapes of up to 40 points including shapes that overlap, that leave islands of material within the pocket, and that have one or more arcs as part of the pocket boundary. Recent modifications to APT include a rework of the POCKET subroutine and correction of an error that prevented the use within a macro of a macro variable cutter move statement combined with macro variable double check surfaces. Former modifications included the expansion of array and buffer sizes to accommodate larger part programs, and the insertion of a few user-friendly error messages. The APT system software on the DEC VAX-11/780 is organized into two separate programs: the load complex and the APT processor. The load complex handles the table initiation phase and is usually only run when changes to the APT processor capabilities are made. This phase initializes character recognition and syntax tables for the APT processor by creating FORTRAN block data programs. The APT processor consists of four components: the translator, the execution complex, the subroutine library, and the CL editor. The translator examines each APT statement in the part program for recognizable structure and generates a new statement, or series of statements, in an intermediate language. The execution complex processes all of the definition, motion, and related statements to generate cutter location coordinates. The subroutine library contains routines defining the algorithms required to process the sequenced list of intermediate language commands generated by the translator. The CL editor re-processes the cutter location coordinates according to user supplied commands to generate a final CL file. A sample post processor is also included which translates a CL file into a form for use with a Wales Strippit Fabramatic Model 30/30 sheet metal punch. The user should be able to readily develop post processors for other N/C machine tools. The APT language is a statement-oriented, sequence-dependent language. With the exception of such programming techniques as looping and macros, statements in an APT program are executed in a strict first-to-last sequence.
In order to provide programming capability for the broadest possible range of parts and of machine tools, APT input (and output) is generalized, as represented by 3-dimensional geometry and tools, and arbitrarily uniform, as represented by the moving tool concept and output data in absolute coordinates. A command procedure allows the user to select the desired part program, ask for a graphics file of cutter motions in IGES format, and submit the procedure as a batch job, if desired. The APT system software is written in FORTRAN 77 for batch and interactive execution and has been implemented on a DEC VAX series computer under VMS 4.4. The enhancements for this version of APT were last updated in June, 1989. The NASA adaptation, with enhancements, of the public domain version of the APT IV/SSX8 software to the DEC VAX-11/780 is available by license for a period of ten (10) years to approved licensees. The licensed program product delivered includes the APT IV/SSX8 system source code, object code, executable images, and command procedures and one set of supporting documentation. Additional copies of the supporting documentation may be purchased at any time at the price indicated below.
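The post-processing step described in this record can be sketched as follows: cutter location (CL) records are turned into machine-specific commands. The CL record layout and the G-code-like output dialect are invented for illustration and do not match APT's actual CL file format.

```python
# Toy post-processor: cutter location (CL) records to machine commands.
cl_records = [
    ("GOTO", (0.0, 0.0, 5.0)),
    ("GOTO", (10.0, 0.0, 5.0)),
    ("GOTO", (10.0, 10.0, 5.0)),
]

def post_process(records):
    lines = []
    for i, (op, (x, y, z)) in enumerate(records):
        if op == "GOTO":  # linear move to an absolute coordinate
            lines.append(f"N{10 * (i + 1)} G01 X{x:.3f} Y{y:.3f} Z{z:.3f}")
    return "\n".join(lines)

print(post_process(cl_records))
```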
Modeling the Emergence of Contact Languages
Tria, Francesca; Servedio, Vito D.P.; Mufwene, Salikoko S.; Loreto, Vittorio
2015-01-01
Contact languages are born out of the non-trivial interaction of two (or more) parent languages. Nowadays, the enhanced possibility of mobility and communication allows for a strong mixing of languages and cultures, thus raising the issue of whether there are any pure languages or cultures that are unaffected by contact with others. As with bacteria or viruses in biological evolution, the evolution of languages is marked by horizontal transmission; but to date no reliable quantitative tools to investigate these phenomena have been available. An interesting and well documented example of contact language is the emergence of creole languages, which originated in the contacts of European colonists and slaves during the 17th and 18th centuries in exogenous plantation colonies of especially the Atlantic and Indian Ocean. Here, we focus on the emergence of creole languages to demonstrate a dynamical process that mimics the process of creole formation in American and Caribbean plantation ecologies. Inspired by the Naming Game (NG), our modeling scheme incorporates demographic information about the colonial population in the framework of a non-trivial interaction network including three populations: Europeans, Mulattos/Creoles, and Bozal slaves. We show how this sole information makes it possible to discriminate territories that produced modern creoles from those that did not, with a surprising accuracy. The generality of our approach provides valuable insights for further studies on the emergence of languages in contact ecologies as well as to test specific hypotheses about the peopling and the population structures of the relevant territories. We submit that these tools could be relevant to addressing problems related to contact phenomena in many cultural domains: e.g., emergence of dialects, language competition and hybridization, globalization phenomena. PMID:25875371
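A minimal Naming Game in the spirit of the model can be sketched as below; the population size and pairing rule are toy assumptions rather than the paper's calibrated demographic data, and the three-population structure is omitted.

```python
# Minimal Naming Game: agents negotiate names for a single object.
import random

def naming_game(n_agents=50, steps=20000, seed=1):
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for _ in range(steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(next_word)  # speaker invents a name
            next_word += 1
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:
            # Success: both agents drop competing names and keep the winner.
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:
            inventories[hearer].add(word)        # failure: hearer learns it
    return inventories

final = naming_game()
print(len(set().union(*final)))  # typically 1: consensus on a single name
```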
MIA - A free and open source software for gray scale medical image analysis.
Wollny, Gert; Kellman, Peter; Ledesma-Carbayo, María-Jesus; Skinner, Matthew M; Hublin, Jean-Jaques; Hierl, Thomas
2013-10-11
Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks; specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for the prototyping of new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface, they are usually quite task specific, and they don't provide a clear approach for shaping a new command line tool from a prototype shell script. The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype by using the corresponding shell scripting language. Since the hard disk becomes the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. In this article, we describe the general design of MIA, a general purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
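The prototyping style described above, chaining single-task command-line tools with the hard disk as intermediate storage, can be sketched as follows; the tool names and flags are hypothetical placeholders, not MIA's actual command set.

```python
# Chain single-task command line tools; intermediate results live on disk.
import shlex
import shutil
import subprocess

pipeline = [
    "mia-filter --in scan.png --filter gauss --out smooth.png",
    "mia-segment --in smooth.png --out mask.png",
]

for step in pipeline:
    cmd = shlex.split(step)
    if shutil.which(cmd[0]):
        # Each step reads its input from disk and writes its result back,
        # so working memory stays small while prototyping.
        subprocess.run(cmd, check=True)
    else:
        print("tool not installed, would run:", step)
```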
Software engineering and Ada in design
NASA Technical Reports Server (NTRS)
Oneill, Don
1986-01-01
Modern software engineering promises significant reductions in software costs and improvements in software quality. The Ada language is the focus for these software methodology and tool improvements. The IBM FSD approach, including the software engineering practices that guide the systematic design and development of software products and the management of the software process are examined. The revised Ada design language adaptation is revealed. This four level design methodology is detailed including the purpose of each level, the management strategy that integrates the software design activity with the program milestones, and the technical strategy that maps the Ada constructs to each level of design. A complete description of each design level is provided along with specific design language recording guidelines for each level. Finally, some testimony is offered on education, tools, architecture, and metrics resulting from project use of the four level Ada design language adaptation.
Woodward-Kron, Robyn; Stevens, Mary; Flynn, Eleanor
2011-05-01
Frameworks for clinical communication assist educators in making explicit the principles of good communication and providing feedback to medical trainees. However, existing frameworks rarely take into account the roles of culture and language in communication, which can be important for international medical graduates (IMGs) whose first language is not English. This article describes the collaboration by a medical educator, a discourse analyst, and a phonetician to develop a communication and language feedback methodology to assist IMG trainees at a Victorian hospital in Australia with developing their doctor-patient communication skills. The Communication and Language Feedback (CaLF) methodology incorporates a written tool and video recording of role-plays of doctor-patient interactions in a classroom setting or in an objective structured clinical examination (OSCE) practice session with a simulated patient. IMG trainees receive verbal feedback from their hospital-based medical clinical educator, the simulated patient, and linguists. The CaLF tool was informed by a model of language in context, observation of IMG communication training, and process evaluation by IMG participants during January to August 2009. The authors provided participants with a feedback package containing their practice video (which included verbal feedback) and the completed CaLF tool. The CaLF methodology provides a tool for medical educators and language practitioners to work collaboratively with IMGs to enhance communication and language skills. The ongoing interdisciplinary collaboration also provides much-needed applied research opportunities in intercultural health communication, an area the authors believe cannot be adequately addressed from the perspective of one discipline alone. Copyright © by the Association of American Medical Colleges.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
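The gate set named above (AND, OR, EXCLUSIVE OR, INVERT, M OF N) is small enough to evaluate directly for independent basic events. The following is a minimal sketch of that calculation, not the FTC input language or its algorithm; the example tree and probabilities are invented.

```python
# Top-event probability for a fault tree with independent basic events.
from itertools import combinations
from math import prod

def evaluate(node, p):
    """node: ('basic', name), (gate, [children]) or ('MOFN', m, [children])."""
    kind = node[0]
    if kind == "basic":
        return p[node[1]]
    if kind == "AND":
        return prod(evaluate(c, p) for c in node[1])
    if kind == "OR":
        return 1.0 - prod(1.0 - evaluate(c, p) for c in node[1])
    if kind == "INVERT":
        return 1.0 - evaluate(node[1], p)
    if kind == "XOR":  # exactly one of two inputs occurs
        a, b = (evaluate(c, p) for c in node[1])
        return a * (1.0 - b) + b * (1.0 - a)
    if kind == "MOFN":  # at least m of the n children occur
        m, children = node[1], node[2]
        probs = [evaluate(c, p) for c in children]
        n = len(probs)
        return sum(prod(probs[i] if i in sub else 1.0 - probs[i] for i in range(n))
                   for k in range(m, n + 1)
                   for sub in combinations(range(n), k))
    raise ValueError(f"unknown gate {kind}")

tree = ("OR", [("AND", [("basic", "pump"), ("basic", "valve")]),
               ("MOFN", 2, [("basic", "s1"), ("basic", "s2"), ("basic", "s3")])])
print(evaluate(tree, {"pump": 1e-3, "valve": 2e-3, "s1": 0.01, "s2": 0.01, "s3": 0.01}))
```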
STAR - A computer language for hybrid AI applications
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1986-01-01
Constructing Artificial Intelligence application systems which rely on both symbolic and non-symbolic processing places heavy demands on the communication of data between dissimilar languages. This paper describes STAR (Simple Tool for Automated Reasoning), a computer language for the development of AI application systems which supports the transfer of data structures between a symbolic level and a non-symbolic level defined in languages such as FORTRAN, C and PASCAL. The organization of STAR is presented, followed by the description of an application involving STAR in the interpretation of airborne imaging spectrometer data.
Plazzotta, Fernando; Otero, Carlos; Luna, Daniel; de Quiros, Fernan Gonzalez Bernaldo
2013-01-01
Physicians do not always keep the problem list accurate, complete and updated. To analyze natural language processing (NLP) techniques and inference rules as strategies to maintain completeness and accuracy of the problem list in EHRs. Non-systematic literature review in PubMed, covering the last 10 years. Strategies to maintain the EHR problem list were analyzed in two ways: inputting and removing problems from the problem list. NLP and inference rules have acceptable performance for inputting problems into the problem list. No studies using these techniques for removing problems were published. Conclusion: Both tools, NLP and inference rules, have had acceptable results as tools for maintaining the completeness and accuracy of the problem list.
ERIC Educational Resources Information Center
Mississippi State Dept. of Education, Jackson. Bureau of School Improvement.
This document is a decision-making tool on the instructional process in Mississippi. It attempts to standardize curriculum content by identifying core skills that must be included in subject areas in kindergarten through grade 12. Subjects covered are reading, English/language arts, mathematics, art, computer education, foreign languages, health…
ERIC Educational Resources Information Center
Enkin, Elizabeth
2016-01-01
The maze task is a psycholinguistic experimental procedure that measures real-time incremental sentence processing. The task has recently been tested as a language learning tool with promising results. Therefore, the present study examines the merits of a contextualized version of this task: the story maze. The findings are consistent with…
ERIC Educational Resources Information Center
Zoreda, Margaret Lee
Foreign Language education will play an important role in the broadening and globalization of higher education for the 21st century. Where else will educators find the tools to "dialog" with--to engage--the "other" as part of the enriching process that accompanies cultural exchange, cultural broadening? This paper sheds light on these issues, and…
Detailed Phonetic Labeling of Multi-language Database for Spoken Language Processing Applications
2015-03-01
which contains about 60 interfering speakers as well as background music in a bar. The top panel is again clean training/noisy testing settings, and... recognition system for Mandarin was developed and tested. Character recognition rates as high as 88% were obtained, using an approximately 40 training...
ERIC Educational Resources Information Center
Egan-Robertson, Ann, Ed.; Bloome, David, Ed.
This book presents new directions in classroom education generated by using ethnography and sociolinguistics as teaching tools, the theory behind these efforts, and the classroom practices involved. The chapters are organized to highlight three issues of recent concern to K-12 educators: how student ethnographic and sociolinguistic research can be…
Diaries as a Reflective Tool in Pre-Service Language Teacher Education
ERIC Educational Resources Information Center
Kömür, Sevki; Çepik, Hazal
2015-01-01
This study presents and analyzes the positive and negative reflections of ten pre-service English teachers who kept diaries on their own learning and teaching processes and daily lives. The participants were students in an English Language Teacher Education Program who took an on-campus methodology course and voluntarily agreed to keep diaries.…
Talking to Texts and Sketches: The Function of Written and Graphic Mediation in Engineering Design.
ERIC Educational Resources Information Center
Lewis, Barbara
2000-01-01
Describes the author's research that explores the role of language, particularly texts, in the engineering design process. Notes that results of this case study support a new "mediated" model of engineering design as an inventional activity in which designers use talk, written language, and other symbolic representations as tools to think about…
Smuggling Language into the Teaching of Reading.
ERIC Educational Resources Information Center
Heilman, Arthur W.; Holmes, Elizabeth Ann
Techniques and procedures for teaching reading as a meaning-making, language-oriented process are the focus of this book. The underlying premise is that children are taught to read so that they have an important tool for developing and expanding concepts. In order to accomplish this aim, children must be exposed to the precision and ambiguities of…
ERIC Educational Resources Information Center
Shtyrov, Yury; Smith, Marie L.; Horner, Aidan J.; Henson, Richard; Nathan, Pradeep J.; Bullmore, Edward T.; Pulvermuller, Friedemann
2012-01-01
Previous research indicates that, under explicit instructions to listen to spoken stimuli or in speech-oriented behavioural tasks, the brain's responses to senseless pseudowords are larger than those to meaningful words; the reverse is true in non-attended conditions. These differential responses could be used as a tool to trace linguistic…
ArdenML: The Arden Syntax Markup Language (or Arden Syntax: It's Not Just Text Any More!)
Sailors, R. Matthew
2001-01-01
It is no longer necessary to think of Arden Syntax as simply a text-based knowledge base format. The development of ArdenML (Arden Syntax Markup Language), an XML-based markup language, allows structured access to most of the maintenance and library categories without the need to write or buy a compiler, and may lead to the development of simple commercial and freeware tools for processing Arden Syntax Medical Logic Modules (MLMs).
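The practical point above, that XML makes MLM slots readable without an Arden compiler, can be shown with a stock XML parser. The element names below are illustrative assumptions, not the normative ArdenML schema.

```python
# Reading maintenance/library slots from an ArdenML-like XML document
# with the standard library; no Arden Syntax compiler involved.
import xml.etree.ElementTree as ET

mlm_xml = """
<mlm>
  <maintenance>
    <title>Penicillin allergy alert</title>
    <version>1.0</version>
  </maintenance>
  <library>
    <purpose>Warn when penicillin is ordered for an allergic patient</purpose>
  </library>
</mlm>
"""

root = ET.fromstring(mlm_xml)
print(root.findtext("maintenance/title"))    # Penicillin allergy alert
print(root.findtext("maintenance/version"))  # 1.0
print(root.findtext("library/purpose"))
```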
Applying a visual language for image processing as a graphical teaching tool in medical imaging
NASA Astrophysics Data System (ADS)
Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled 'Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
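The two clinician-facing operations used as teaching examples, window width/level adjustment and unsharp masking, reduce to short array operations. A minimal NumPy sketch (not the NeXTstep implementation; array sizes and parameters are arbitrary):

```python
# Window/level adjustment and unsharp masking as plain array operations.
import numpy as np

def window_level(img, level, width):
    """Map [level - width/2, level + width/2] onto the 0..255 display range."""
    lo, hi = level - width / 2.0, level + width / 2.0
    out = (np.clip(img, lo, hi) - lo) / max(hi - lo, 1e-9)
    return (out * 255.0).astype(np.uint8)

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the difference from a 3x3 box blur."""
    f = img.astype(float)
    blur = f.copy()
    blur[1:-1, 1:-1] = sum(
        f[1 + dy:f.shape[0] - 1 + dy, 1 + dx:f.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return f + amount * (f - blur)

ct = np.random.randint(-1000, 1000, size=(64, 64))  # fake CT numbers
display = window_level(unsharp_mask(ct), level=40, width=400)
print(display.shape, display.min(), display.max())
```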
An object-oriented description method of EPMM process
NASA Astrophysics Data System (ADS)
Jiang, Zuo; Yang, Fan
2017-06-01
In order to use mature object-oriented tools and languages in software process modelling, and to make the software process model accord better with industrial standards, it is necessary to study the object-oriented modelling of software processes. Based on the formal process definition in EPMM, and considering that Petri nets are mainly a formal modelling tool, this paper combines Petri net modelling with object-oriented modelling ideas and provides an implementation method to convert EPMM models based on Petri nets into object models based on object-oriented description.
A programming language for composable DNA circuits
Phillips, Andrew; Cardelli, Luca
2009-01-01
Recently, a range of information-processing circuits have been implemented in DNA by using strand displacement as their main computational mechanism. Examples include digital logic circuits and catalytic signal amplification circuits that function as efficient molecular detectors. As new paradigms for DNA computation emerge, the development of corresponding languages and tools for these paradigms will help to facilitate the design of DNA circuits and their automatic compilation to nucleotide sequences. We present a programming language for designing and simulating DNA circuits in which strand displacement is the main computational mechanism. The language includes basic elements of sequence domains, toeholds and branch migration, and assumes that strands do not possess any secondary structure. The language is used to model and simulate a variety of circuits, including an entropy-driven catalytic gate, a simple gate motif for synthesizing large-scale circuits and a scheme for implementing an arbitrary system of chemical reactions. The language is a first step towards the design of modelling and simulation tools for DNA strand displacement, which complements the emergence of novel implementation strategies for DNA computing. PMID:19535415
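A language like this ultimately compiles circuits down to chemical kinetics that can be simulated stochastically. As a hedged illustration, here is a tiny Gillespie simulation of an abstract two-step catalytic gate; the species names and rate constants are invented and only echo the entropy-driven catalytic gate idea, they are not from the paper.

```python
# Gillespie simulation of an abstract catalytic gate:
#   Gate + Catalyst -> Intermediate            (rate k1)
#   Intermediate    -> Output + Catalyst       (rate k2, catalyst recycled)
import math
import random

state = {"Gate": 1000, "Catalyst": 10, "Intermediate": 0, "Output": 0}
reactions = [  # (reactants, products, rate constant)
    ({"Gate": 1, "Catalyst": 1}, {"Intermediate": 1}, 1e-3),
    ({"Intermediate": 1}, {"Output": 1, "Catalyst": 1}, 0.1),
]

t, t_end = 0.0, 200.0
while t < t_end:
    # Propensity of each reaction under mass-action kinetics.
    props = [k * math.prod(state[sp] for sp in reactants)
             for reactants, _, k in reactions]
    total = sum(props)
    if total == 0.0:
        break
    t += random.expovariate(total)      # time to next event
    r = random.uniform(0.0, total)      # pick one of the two reactions
    chosen = reactions[0] if r < props[0] else reactions[1]
    for sp, n in chosen[0].items():
        state[sp] -= n
    for sp, n in chosen[1].items():
        state[sp] += n

print(round(t, 1), state)
```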
Doan, Son; Maehara, Cleo K; Chaparro, Juan D; Lu, Sisi; Liu, Ruiling; Graham, Amanda; Berry, Erika; Hsu, Chun-Nan; Kanegaye, John T; Lloyd, David D; Ohno-Machado, Lucila; Burns, Jane C; Tremoulet, Adriana H
2016-05-01
Delayed diagnosis of Kawasaki disease (KD) may lead to serious cardiac complications. We sought to create and test the performance of a natural language processing (NLP) tool, the KD-NLP, in the identification of emergency department (ED) patients for whom the diagnosis of KD should be considered. We developed an NLP tool that recognizes the KD diagnostic criteria based on standard clinical terms and medical word usage using 22 pediatric ED notes augmented by Unified Medical Language System vocabulary. With high suspicion for KD defined as fever and three or more KD clinical signs, KD-NLP was applied to 253 ED notes from children ultimately diagnosed with either KD or another febrile illness. We evaluated KD-NLP performance against ED notes manually reviewed by clinicians and compared the results to a simple keyword search. KD-NLP identified high-suspicion patients with a sensitivity of 93.6% and specificity of 77.5% compared to notes manually reviewed by clinicians. The tool outperformed a simple keyword search (sensitivity = 41.0%; specificity = 76.3%). KD-NLP showed comparable performance to clinician manual chart review for identification of pediatric ED patients with a high suspicion for KD. This tool could be incorporated into the ED electronic health record system to alert providers to consider the diagnosis of KD. KD-NLP could serve as a model for decision support for other conditions in the ED. © 2016 by the Society for Academic Emergency Medicine.
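The high-suspicion definition used in the study (fever plus three or more KD clinical signs) makes the core decision rule easy to sketch. The phrase lists below are toy stand-ins for KD-NLP's UMLS-augmented terminology; a real system would also handle negation and misspellings.

```python
# Rule sketch: "high suspicion" = fever plus >= 3 of 5 KD clinical signs.
CRITERIA = {
    "conjunctivitis": ("conjunctivitis", "red eyes", "conjunctival injection"),
    "oral_changes": ("strawberry tongue", "cracked lips", "red lips"),
    "rash": ("rash", "exanthem"),
    "extremity_changes": ("swollen hands", "swollen feet", "palmar erythema"),
    "lymphadenopathy": ("cervical lymphadenopathy", "enlarged neck node"),
}
FEVER = ("fever", "febrile")

def high_suspicion(note: str) -> bool:
    text = note.lower()
    has_fever = any(term in text for term in FEVER)
    n_signs = sum(any(term in text for term in terms)
                  for terms in CRITERIA.values())
    return has_fever and n_signs >= 3

note = "5-day history of fever with diffuse rash, red eyes, and cracked lips."
print(high_suspicion(note))  # True: fever plus three clinical signs
```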
Plat, Rika; Lowie, Wander; de Bot, Kees
2017-01-01
Reaction time data have long been collected in order to gain insight into the underlying mechanisms involved in language processing. Means analyses often attempt to break down what factors relate to what portion of the total reaction time. From a dynamic systems theory perspective or an interaction dominant view of language processing, it is impossible to isolate discrete factors contributing to language processing, since these continually and interactively play a role. Non-linear analyses offer the tools to investigate the underlying process of language use in time, without having to isolate discrete factors. Patterns of variability in reaction time data may disclose the relative contribution of automatic (grapheme-to-phoneme conversion) processing and attention-demanding (semantic) processing. The presence of a fractal structure in the variability of a reaction time series indicates automaticity in the mental structures contributing to a task. A decorrelated pattern of variability will indicate a higher degree of attention-demanding processing. A focus on variability patterns allows us to examine the relative contribution of automatic and attention-demanding processing when a speaker is using the mother tongue (L1) or a second language (L2). A word naming task conducted in the L1 (Dutch) and L2 (English) shows L1 word processing to rely more on automatic spelling-to-sound conversion than L2 word processing. A word naming task with a semantic categorization subtask showed more reliance on attention-demanding semantic processing when using the L2. A comparison to L1 English data shows this was not only due to the amount of language use or language dominance, but also to the difference in orthographic depth between Dutch and English. An important implication of this finding is that when the same task is used to test and compare different languages, one cannot straightforwardly assume that the same cognitive sub-processes are involved to an equal degree in different languages.
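One standard way to quantify the fractal-versus-decorrelated distinction described above is detrended fluctuation analysis of the reaction-time series. A compact sketch of first-order DFA follows; the paper's exact analysis pipeline may differ.

```python
# First-order detrended fluctuation analysis (DFA) of a time series.
import numpy as np

def dfa_exponent(series, scales=(8, 16, 32, 64)):
    x = np.cumsum(np.asarray(series, float) - np.mean(series))  # profile
    flucts = []
    for s in scales:
        rms = []
        for w in range(len(x) // s):
            seg = x[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope  # ~0.5 decorrelated; ~1.0 fractal (1/f-like)

rt = np.random.randn(1024)  # white-noise stand-in for a reaction-time series
print(round(dfa_exponent(rt), 2))  # close to 0.5
```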
ERIC Educational Resources Information Center
Wood, Peter
2011-01-01
"QuickAssist," the program presented in this paper, uses natural language processing (NLP) technologies. It places a range of NLP tools at the disposal of learners, intended to enable them to independently read and comprehend a German text of their choice while they extend their vocabulary, learn about different uses of particular words,…
"What More Is Literacy?" The Language of Secondary Preservice Teachers about Reading and Content
ERIC Educational Resources Information Center
McArthur, Kerry Gordon
2007-01-01
Reform in the fields of adolescent and content area literacy have focused on broadening a definition of literacy beyond the ability to read and write. In a broader definition the language processes of reading, writing, speaking and listening become literacy tools to engage students in the learning of concepts and afford the learner ways to…
ERIC Educational Resources Information Center
Kadyrova, Alina A.; Valeev, Agzam A.
2016-01-01
There is a determined number of trends in the process of intensification of high school training, including the integration of professional, linguistic and cultural training of professionals in unity with the development of their personal qualities. For this reason, modern educational technologies serve as a tool for practical implementation…
KSC Space Station Operations Language (SSOL)
NASA Technical Reports Server (NTRS)
1985-01-01
The Space Station Operations Language (SSOL) will serve a large community of diverse users dealing with the integration and checkout of Space Station modules. Kennedy Space Center's plan to achieve Level A specification of the SSOL system, encompassing both its language and its automated support environment, is presented in the format of a briefing. The SSOL concept is a collection of fundamental elements that span languages, operating systems, software development, software tools and several user classes. The approach outlines a thorough process that combines the benefits of rapid prototyping with a coordinated requirements gathering effort, yielding a Level A specification of the SSOL requirements.
Comparing Noun Phrasing Techniques for Use with Medical Digital Library Tools.
ERIC Educational Resources Information Center
Tolle, Kristin M.; Chen, Hsinchun
2000-01-01
Describes a study that investigated the use of a natural language processing technique called noun phrasing to determine whether it is a viable technique for medical information retrieval. Evaluates four noun phrase generation tools for their ability to isolate noun phrases from medical journal abstracts, focusing on precision and recall.…
SoS Notebook: An Interactive Multi-Language Data Analysis Environment.
Peng, Bo; Wang, Gao; Ma, Jun; Leong, Man Chong; Wakefield, Chris; Melott, James; Chiu, Yulun; Du, Di; Weinstein, John N
2018-05-22
Complex bioinformatic data analysis workflows involving multiple scripts in different languages can be difficult to consolidate, share, and reproduce. An environment that streamlines the entire process of data collection, analysis, visualization and reporting of such multi-language analyses is currently lacking. We developed Script of Scripts (SoS) Notebook, a web-based notebook environment that allows the use of multiple scripting languages in a single notebook, with data flowing freely within and across languages. SoS Notebook enables researchers to perform sophisticated bioinformatic analysis using the most suitable tools for different parts of the workflow, without the limitations of a particular language or complications of cross-language communications. SoS Notebook is hosted at http://vatlab.github.io/SoS/ and is distributed under a BSD license. bpeng@mdanderson.org.
A verification strategy for web services composition using enhanced stacked automata model.
Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali
2015-01-01
Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique of its implementation is web services. An individual service offered by a service provider may represent limited business functionality; however, by composing individual services from different service providers, a composite service describing the intact business process of an enterprise can be made. Many new standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides initial groundwork for forming an Extensible Markup Language (XML) specification language for defining and implementing business practice workflows for web services. The problem with most realistic approaches to service composition is the verification of the composed web services: one has to depend on formal verification methods to ensure the correctness of composed services. A few research works have been carried out in the literature on the verification of web services for deterministic systems. Moreover, the existing models did not address verification properties such as dead transition, deadlock, reachability and safety. In this paper, a new model to verify composed web services using an Enhanced Stacked Automata Model (ESAM) has been proposed. The correctness properties of the non-deterministic system have been evaluated based on properties such as dead transition, deadlock, safety, liveness and reachability. Initially, web services are composed using the Business Process Execution Language for Web Services (BPEL4WS), converted into ESAM (a combination of Muller Automata (MA) and Push Down Automata (PDA)), and then transformed into Promela, the input language of the Simple ProMeLa Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results revealed better performance in finding dead transitions and deadlocks in contrast to the existing models.
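Independently of the ESAM machinery, the listed properties (reachability, dead transitions, deadlock) are simple to check on an explicit state space. A sketch on a toy labeled transition system (not a BPEL4WS translation):

```python
# Reachability, deadlock, and dead-transition checks on an explicit LTS.
from collections import deque

transitions = {  # state -> {action: next state}
    "start":   {"receive": "pending"},
    "pending": {"approve": "shipped", "reject": "closed"},
    "shipped": {"invoice": "closed"},
    "closed":  {},
    "orphan":  {"noop": "orphan"},  # unreachable on purpose
}
final_states = {"closed"}

def reachable(init):
    seen, todo = {init}, deque([init])
    while todo:
        for nxt in transitions[todo.popleft()].values():
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

live = reachable("start")
deadlocks = [s for s in live if not transitions[s] and s not in final_states]
dead_states = sorted(set(transitions) - live)  # their transitions never fire
print("reachable:", sorted(live))
print("deadlocks:", deadlocks)        # none: only 'closed' lacks successors
print("unreachable:", dead_states)    # ['orphan']
```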
Computational Investigations of Multiword Chunks in Language Learning.
McCauley, Stewart M; Christiansen, Morten H
2017-07-01
Second-language learners rarely arrive at native proficiency in a number of linguistic domains, including morphological and syntactic processing. Previous approaches to understanding the different outcomes of first- versus second-language learning have focused on cognitive and neural factors. In contrast, we explore the possibility that children and adults may rely on different linguistic units throughout the course of language learning, with specific focus on the granularity of those units. Following recent psycholinguistic evidence for the role of multiword chunks in online language processing, we explore the hypothesis that children rely more heavily on multiword units in language learning than do adults learning a second language. To this end, we take an initial step toward using large-scale, corpus-based computational modeling as a tool for exploring the granularity of speakers' linguistic units. Employing a computational model of language learning, the Chunk-Based Learner, we compare the usefulness of chunk-based knowledge in accounting for the speech of second-language learners versus children and adults speaking their first language. Our findings suggest that while multiword units are likely to play a role in second-language learning, adults may learn less useful chunks, rely on them to a lesser extent, and arrive at them through different means than children learning a first language. Copyright © 2017 Cognitive Science Society, Inc.
Improvement of Computer Software Quality through Software Automated Tools.
1986-08-31
requirement for increased emphasis on software quality assurance has led to the creation of various methods of verification and validation. Experience...result was a vast array of methods, systems, languages and automated tools to assist in the process. Given that the primary role of quality assurance is...Unfortunately, there is no single method, tool or technique that can ensure accurate, reliable and cost effective software. Therefore, government and industry
Enabling international adoption of LOINC through translation
Vreeman, Daniel J.; Chiaravalloti, Maria Teresa; Hook, John; McDonald, Clement J.
2012-01-01
Interoperable health information exchange depends on adoption of terminology standards, but international use of such standards can be challenging because of language differences between local concept names and the standard terminology. To address this important barrier, we describe the evolution of an efficient process for constructing translations of LOINC term names, the foreign language functions in RELMA, and the current state of translations in LOINC. We also present the development of the Italian translation to illustrate how translation is enabling adoption in international contexts. We built a tool that finds the unique list of LOINC Parts that make up a given set of LOINC terms. This list enables translation of smaller pieces like the core component “hepatitis c virus” separately from all the suffixes that could appear with it, such as “Ab.IgG”, “DNA”, and “RNA”. We built another tool that generates a translation of a full LOINC name from all of these atomic pieces. As of version 2.36 (June 2011), LOINC terms have been translated into 9 languages from 15 linguistic variants other than its native English. The five largest linguistic variants have all used the Part-based translation mechanism. However, even with efficient tools and processes, translation of standard terminology is a complex undertaking. Two of the prominent linguistic challenges that translators have faced include: the approach to handling acronyms and abbreviations, and the differences in linguistic syntax (e.g. word order) between languages. LOINC’s open and customizable approach has enabled many different groups to create translations that met their needs and matched their resources. Distributing the standard and its many language translations at no cost worldwide accelerates LOINC adoption globally, and is an important enabler of interoperable health information exchange. PMID:22285984
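The Part-based mechanism can be pictured as translating each atomic Part once and then assembling full term names from the translated pieces. A sketch with a toy Italian part table (not actual RELMA content):

```python
# Part-based term translation: translate atoms once, assemble names.
PART_IT = {
    "Hepatitis C virus": "Virus dell'epatite C",
    "Ab.IgG": "Ab.IgG",   # abbreviations are often kept as-is
    "Serum": "Siero",
    "RNA": "RNA",
}

def translate_term(parts, table, sep=":"):
    """Join per-part translations, falling back to English for missing parts."""
    return sep.join(table.get(p, p) for p in parts)

print(translate_term(["Hepatitis C virus", "Ab.IgG", "Serum"], PART_IT))
# -> Virus dell'epatite C:Ab.IgG:Siero
```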
Integrated Speech and Language Technology for Intelligence, Surveillance, and Reconnaissance (ISR)
2017-07-01
applying submodularity techniques to address computing challenges posed by large datasets in speech and language processing. MT and speech tools were...aforementioned research-oriented activities, the IT system administration team provided necessary support to laboratory computing and network operations...operations of SCREAM Lab computer systems and networks. Other miscellaneous activities in relation to Task Order 29 are presented in an additional fourth
ERIC Educational Resources Information Center
Van Laere, Evelien; Rosiers, Kirsten; Van Avermaet, Piet; Slembrouck, Stef; van Braak, Johan
2017-01-01
Computer-based learning environments (CBLEs) have the potential to integrate the linguistic diversity present in classrooms as a resourceful tool in pupils' learning process. Particularly for pupils who speak a language at home other than the language which is used at school, more understanding is needed on how CBLEs offering multilingual content…
ERIC Educational Resources Information Center
Farver, JoAnn M.; Nakamoto, Jonathan; Lonigan, Christopher J.
2007-01-01
This study investigated the ability of the English and Spanish versions of the "Get Ready to Read!" Screener (E-GRTR and S-GRTR) administered at the beginning of the preschool year to predict the oral language and phonological and print processing skills of Spanish-speaking English-language learners (ELLs) and English-only speaking children (EO)…
Statistical physics of language dynamics
NASA Astrophysics Data System (ADS)
Loreto, Vittorio; Baronchelli, Andrea; Mukherjee, Animesh; Puglisi, Andrea; Tria, Francesca
2011-04-01
Language dynamics is a rapidly growing field that focuses on all processes related to the emergence, evolution, change and extinction of languages. Recently, the study of self-organization and evolution of language and meaning has led to the idea that a community of language users can be seen as a complex dynamical system, which collectively solves the problem of developing a shared communication framework through the back-and-forth signaling between individuals. We shall review some of the progress made in the past few years and highlight potential future directions of research in this area. In particular, the emergence of a common lexicon and of a shared set of linguistic categories will be discussed, as examples corresponding to the early stages of a language. The extent to which synthetic modeling is nowadays contributing to the ongoing debate in cognitive science will be pointed out. In addition, the burst of growth of the web is providing new experimental frameworks. It makes available a huge amount of resources, both as novel tools and data to be analyzed, allowing quantitative and large-scale analysis of the processes underlying the emergence of a collective information and language dynamics.
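A concrete example of the synthetic modeling this review surveys is the minimal naming game, in which repeated pairwise interactions drive a population toward a shared name for a single object. A compact sketch (population size and interaction cap are arbitrary):

```python
# Minimal naming game: emergence of a shared lexicon by local interactions.
import random

def naming_game(n_agents=50, max_steps=200_000):
    vocab = [set() for _ in range(n_agents)]
    fresh = 0
    for step in range(max_steps):
        s, h = random.sample(range(n_agents), 2)  # speaker, hearer
        if not vocab[s]:                          # invent a new name
            vocab[s].add(f"word{fresh}")
            fresh += 1
        word = random.choice(sorted(vocab[s]))
        if word in vocab[h]:                      # success: both collapse
            vocab[s] = {word}
            vocab[h] = {word}
        else:                                     # failure: hearer learns it
            vocab[h].add(word)
        if all(len(v) == 1 for v in vocab) and len(set.union(*vocab)) == 1:
            return step + 1                       # global consensus reached
    return None

print("consensus after", naming_game(), "interactions")
```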
ANTLR Tree Grammar Generator and Extensions
NASA Technical Reports Server (NTRS)
Craymer, Loring
2005-01-01
A computer program implements two extensions of ANTLR (Another Tool for Language Recognition), which is a set of software tools for translating source codes between different computing languages. ANTLR supports predicated-LL(k) lexer and parser grammars, a notation for annotating parser grammars to direct tree construction, and predicated tree grammars. [LL(k) signifies left-right, leftmost derivation with k tokens of look-ahead, referring to certain characteristics of a grammar.] One of the extensions is a syntax for tree transformations. The other extension is the generation of tree grammars from annotated parser or input tree grammars. These extensions can simplify the process of generating source-to-source language translators and they make possible an approach, called "polyphase parsing," to translation between computing languages. The typical approach to translator development is to identify high-level semantic constructs such as "expressions," "declarations," and "definitions" as fundamental building blocks in the grammar specification used for language recognition. The polyphase approach is to lump ambiguous syntactic constructs during parsing and then disambiguate the alternatives in subsequent tree transformation passes. Polyphase parsing is believed to be useful for generating efficient recognizers for C++ and other languages that, like C++, have significant ambiguities.
Comeau, Donald C.; Liu, Haibin; Islamaj Doğan, Rezarta; Wilbur, W. John
2014-01-01
BioC is a new format and associated code libraries for sharing text and annotations. We have implemented BioC natural language preprocessing pipelines in two popular programming languages: C++ and Java. The current implementations interface with the well-known MedPost and Stanford natural language processing tool sets. The pipeline functionality includes sentence segmentation, tokenization, part-of-speech tagging, lemmatization and sentence parsing. These pipelines can be easily integrated along with other BioC programs into any BioC compliant text mining systems. As an application, we converted the NCBI disease corpus to BioC format, and the pipelines have successfully run on this corpus to demonstrate their functionality. Code and data can be downloaded from http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net PMID:24935050
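The listed pipeline stages (sentence segmentation and tokenization) can be mocked up end-to-end to show the shape of the data. The regexes below are toy substitutes for the MedPost/Stanford components, and the nested dictionaries only approximate BioC's format.

```python
# Toy sentence splitter + tokenizer emitting a BioC-like nested structure.
import re

def to_bioc_passage(text, offset=0):
    passage = {"offset": offset, "sentences": []}
    for m in re.finditer(r"[^.!?]+[.!?]?", text):
        raw = m.group()
        lead = len(raw) - len(raw.lstrip())
        sent = raw.strip()
        if not sent:
            continue
        start = offset + m.start() + lead
        tokens = [{"text": t.group(), "offset": start + t.start()}
                  for t in re.finditer(r"\w+|[^\w\s]", sent)]
        passage["sentences"].append(
            {"offset": start, "text": sent, "tokens": tokens})
    return passage

doc = "BioC is a new format. It has C++ and Java pipelines."
for s in to_bioc_passage(doc)["sentences"]:
    print(s["offset"], repr(s["text"]), "tokens:", len(s["tokens"]))
```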
The Benefits of Executive Control Training and the Implications for Language Processing
Hussey, Erika K.; Novick, Jared M.
2012-01-01
Recent psycholinguistics research suggests that the executive function (EF) skill known as conflict resolution – the ability to adjust behavior in the service of resolving among incompatible representations – is important for several language processing tasks such as lexical and syntactic ambiguity resolution, verbal fluency, and common-ground assessment. Here, we discuss work showing that various EF skills can be enhanced through consistent practice with working-memory tasks that tap these EFs, and, moreover, that improvements on the training tasks transfer across domains to novel tasks that may rely on shared underlying EFs. These findings have implications for language processing and could launch new research exploring if EF training, within a “process-specific” framework, could be used as a remediation tool for improving general language use. Indeed, work in our lab demonstrates that EF training that increases conflict-resolution processes has selective benefits on an untrained sentence-processing task requiring syntactic ambiguity resolution, which relies on shared conflict-resolution functions. Given claims that conflict-resolution abilities contribute to a range of linguistic skills, EF training targeting this process could theoretically yield wider performance gains beyond garden-path recovery. We offer some hypotheses on the potential benefits of EF training as a component of interventions to mitigate general difficulties in language processing. However, there are caveats to consider as well, which we also address. PMID:22661962
Computer Aided Management for Information Processing Projects.
ERIC Educational Resources Information Center
Akman, Ibrahim; Kocamustafaogullari, Kemal
1995-01-01
Outlines the nature of information processing projects and discusses some project management programming packages. Describes an in-house interface program developed to utilize a selected project management package (TIMELINE) by using Oracle Data Base Management System tools and Pascal programming language for the management of information system…
ERIC Educational Resources Information Center
Weeber, Marc; Klein, Henny; de Jong-van den Berg, Lolkje T. W.; Vos, Rein
2001-01-01
Proposes a two-step model of discovery in which new scientific hypotheses can be generated and subsequently tested. Applying advanced natural language processing techniques to find biomedical concepts in text, the model is implemented in a versatile interactive discovery support tool. This tool is used to successfully simulate Don R. Swanson's…
Teacher Evaluation as a Tool for Professional Development: A Case of Saudi Arabia
ERIC Educational Resources Information Center
Hakim, Badia Muntazir
2015-01-01
This study reports on the use of teacher evaluation and appraisal process as a tool for professional development. A group of 30 teachers from seven different nationalities with diverse qualifications and teaching experiences participated in this case study at the English Language Institute (ELI) at King Abdulaziz University (KAU), Saudi Arabia.…
A Summary of Some Discrete-Event System Control Problems
NASA Astrophysics Data System (ADS)
Rudie, Karen
A summary of the area of control of discrete-event systems is given. In this research area, automata and formal language theory is used as a tool to model physical problems that arise in technological and industrial systems. The key ingredients to discrete-event control problems are a process that can be modeled by an automaton, events in that process that cannot be disabled or prevented from occurring, and a controlling agent that manipulates the events that can be disabled to guarantee that the process under control either generates all the strings in some prescribed language or as many strings as possible in some prescribed language. When multiple controlling agents act on a process, decentralized control problems arise. In decentralized discrete-event systems, it is presumed that the agents effecting control cannot each see all event occurrences. Partial observation leads to some problems that cannot be solved in polynomial time and some others that are not even decidable.
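The key synthesis step in this framework is that a controllable event may be disabled while an uncontrollable one may not, so any state from which an uncontrollable event reaches a bad state is itself bad. A minimal sketch on an invented plant:

```python
# Supervisor synthesis kernel: propagate badness over uncontrollable events.
transitions = {  # state -> [(event, controllable?, next state)]
    "idle": [("start_fast", True, "fast"), ("start_slow", True, "slow")],
    "fast": [("finish", False, "idle"), ("jam", False, "fault")],
    "slow": [("finish", False, "idle")],
    "fault": [],
}
bad = {"fault"}

changed = True
while changed:
    changed = False
    for state, edges in transitions.items():
        if state not in bad and any(not ctrl and dst in bad
                                    for _, ctrl, dst in edges):
            bad.add(state)
            changed = True

# The supervisor permits only events that keep the plant in safe states.
policy = {s: [e for e, _, dst in edges if dst not in bad]
          for s, edges in transitions.items() if s not in bad}
print("bad states:", bad)         # {'fault', 'fast'}: jam cannot be prevented
print("allowed events:", policy)  # from idle, only start_slow is enabled
```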
Tools reference manual for a Requirements Specification Language (RSL), version 2.0
NASA Technical Reports Server (NTRS)
Fisher, Gene L.; Cohen, Gerald C.
1993-01-01
This report describes a general-purpose Requirements Specification Language, RSL. The purpose of RSL is to specify precisely the external structure of a mechanized system and to define requirements that the system must meet. A system can be comprised of a mixture of hardware, software, and human processing elements. RSL is a hybrid of features found in several popular requirements specification languages, such as SADT (Structured Analysis and Design Technique), PSL (Problem Statement Language), and RMF (Requirements Modeling Framework). While languages such as these have useful features for structuring a specification, they generally lack formality. To overcome the deficiencies of informal requirements languages, RSL has constructs for formal mathematical specification. These constructs are similar to those found in formal specification languages such as EHDM (Enhanced Hierarchical Development Methodology), Larch, and OBJ3.
Generating and Executing Complex Natural Language Queries across Linked Data.
Hamon, Thierry; Mougin, Fleur; Grabar, Natalia
2015-01-01
With the recent and intensive research in the biomedical area, the knowledge accumulated is disseminated through various knowledge bases. Links between these knowledge bases are needed in order to use them jointly. Linked Data, the SPARQL language, and Natural Language question-answering interfaces provide interesting solutions for querying such knowledge bases. We propose a method for translating natural language questions into SPARQL queries. We use Natural Language Processing tools, semantic resources, and the RDF triples description. The method is designed on 50 questions over 3 biomedical knowledge bases, and evaluated on 27 questions. It achieves 0.78 F-measure on the test set. The method for translating natural language questions into SPARQL queries is implemented as a Perl module available at http://search.cpan.org/~thhamon/RDF-NLP-SPARQLQuery.
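The translation idea can be miniaturized to a single question pattern mapped onto a SPARQL template. The authors' implementation is the Perl module above; this Python fragment, with a toy pattern, predicate table, and namespace, is only illustrative.

```python
# Pattern-based question-to-SPARQL translation, reduced to one pattern.
import re

PREDICATES = {"side effects": "ex:hasSideEffect",
              "ingredients": "ex:hasIngredient"}

def to_sparql(question: str) -> str:
    m = re.match(r"what are the (.+) of (.+)\?", question.lower())
    if not m or m.group(1) not in PREDICATES:
        raise ValueError("unsupported question shape")
    predicate = PREDICATES[m.group(1)]
    entity = m.group(2).replace(" ", "_")
    return f"SELECT ?x WHERE {{ ex:{entity} {predicate} ?x . }}"

print(to_sparql("What are the side effects of aspirin?"))
# SELECT ?x WHERE { ex:aspirin ex:hasSideEffect ?x . }
```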
Early Childhood Classrooms and Computers: Programs with Promise.
ERIC Educational Resources Information Center
Hoot, James L.; Kimler, Michele
Word processing and the LOGO programing language are two microcomputer applications that are beginning to show benefits as learning tools in elementary school classrooms. Word processing packages are especially useful with beginning writers, whose lack of motor coordination often slows down their acquisition of competence in written communication.…
NASA Astrophysics Data System (ADS)
Zhou, Yuping; Zhang, Qi
2018-04-01
In the information environment, digital and information processing of Li brocade patterns is an important means of revealing Li ethnic style and inheriting the national culture. In this paper, Adobe Illustrator CS3 and the Java language were used to apply "variation" processing to Li brocade patterns and generate "Li brocade pattern mutant genes". The generation of pattern mutant genes includes color mutation, shape mutation, adding and missing transforms, and twisted transforms, etc. The research shows that Li brocade pattern mutant genes can be generated by using Adobe Illustrator CS3 and image processing tools written in the Java language.
ERIC Educational Resources Information Center
Siriwittayakorn, Teeranoot
2018-01-01
In typological literature, there has been disagreement as to whether there should be distinction between relative clauses (RCs) and nominal sentential complements (NSCs) in pro-drop languages such as Japanese, Chinese, Korean, Khmer and Thai. In pro-drop languages, nouns can be dropped when its reference can be retrieved from context. Therefore,…
2015-06-01
and tools, called model-integrated computing (MIC) [3] relies on the use of domain-specific modeling languages for creating models of the system to be...hence giving reflective capabilities to it. We have followed the MIC method here: we designed a domain-specific modeling language for modeling...are produced one-off and not for the mass market, the scope for price reduction based on the market demands is non-existent. Processes to create
STILTS -- Starlink Tables Infrastructure Library Tool Set
NASA Astrophysics Data System (ADS)
Taylor, Mark
STILTS is a set of command-line tools for processing tabular data. It has been designed for, but is not restricted to, use on astronomical data such as source catalogues. It contains both generic (format-independent) table processing tools and tools for processing VOTable documents. Facilities offered include crossmatching, format conversion, format validation, column calculation and rearrangement, row selection, sorting, plotting, statistical calculations and metadata display. Calculations on cell data can be performed using a powerful and extensible expression language. The package is written in pure Java and based on STIL, the Starlink Tables Infrastructure Library. This gives it high portability, support for many data formats (including FITS, VOTable, text-based formats and SQL databases), extensibility and scalability. Where possible the tools are written to accept streamed data so the size of tables which can be processed is not limited by available memory. As well as the tutorial and reference information in this document, detailed on-line help is available from the tools themselves. STILTS is available under the GNU General Public Licence.
[Development of a Text-Data Based Learning Tool That Integrates Image Processing and Displaying].
Shinohara, Hiroyuki; Hashimoto, Takeyuki
2015-01-01
We developed a text-data based learning tool that integrates image processing and displaying by Excel. Knowledge required for programming this tool is limited to using absolute, relative, and composite cell references and learning approximately 20 mathematical functions available in Excel. The new tool is capable of resolution translation, geometric transformation, spatial-filter processing, Radon transform, Fourier transform, convolutions, correlations, deconvolutions, wavelet transform, mutual information, and simulation of proton density-, T1-, and T2-weighted MR images. The processed images of 128 x 128 pixels or 256 x 256 pixels are observed directly within Excel worksheets without using any particular image display software. The results of image processing using this tool were compared with those using the C language, and the new tool was judged to have sufficient accuracy to be practically useful. The images displayed on Excel worksheets were compared with images using binary-data display software. This comparison indicated that the image quality of the Excel worksheets was nearly equal to the latter in visual impressions. Since image processing is performed by using text-data, the process is visible and facilitates making contrasts by using mathematical equations within the program. We concluded that the newly developed tool is adequate as a computer-assisted learning tool for use in medical image processing.
Graphical modeling and query language for hospitals.
Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris
2013-01-01
So far there has been little evidence that implementation of health information technologies (HIT) is leading to health care cost savings. One of the reasons for this lack of impact by the HIT likely lies in the complexity of the business process ownership in hospitals. The goal of our research is to develop a business model-based method for hospital use which would allow doctors to retrieve ad-hoc information directly from various hospital databases. We have developed a special domain-specific process modelling language called MedMod. Formally, we define the MedMod language as a profile on UML Class diagrams, but we also demonstrate it on examples, where we explain the semantics of all its elements informally. Moreover, we have developed the Process Query Language (PQL), which is based on the MedMod process definition language. The purpose of PQL is to allow a doctor to query (filter) runtime data of the hospital's processes described using MedMod. The MedMod language tries to overcome deficiencies in existing process modeling languages by allowing the specification of the loosely-defined sequence of steps to be performed in the clinical process. The main advantages of PQL lie in two areas - usability and efficiency. They are: 1) the view on data through the "glasses" of a familiar process; 2) the simple and easy-to-perceive means of setting filtering conditions, which require no more expertise than using spreadsheet applications; 3) the dynamic response to each step in the construction of the complete query, which shortens the learning curve greatly and reduces the error rate; and 4) the selected means of filtering and data retrieval, which allow queries to be executed in O(n) time with respect to the size of the dataset. We are about to continue developing this project with three further steps. First, we are planning to develop user-friendly graphical editors for the MedMod process modeling and query languages. The second step is to evaluate the usability of the proposed language and tool, involving physicians from several hospitals in Latvia and working with real data from these hospitals. Our third step is to develop an efficient implementation of the query language.
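The claimed O(n) execution follows from compiling each query to a single linear scan that tests all filter conditions per record. A sketch with invented field names (not MedMod/PQL syntax):

```python
# One linear pass over process-instance records: O(n) query evaluation.
from datetime import date

episodes = [
    {"ward": "surgery", "admitted": date(2013, 3, 1), "los_days": 12, "readmitted": False},
    {"ward": "surgery", "admitted": date(2013, 3, 9), "los_days": 3, "readmitted": True},
    {"ward": "cardio", "admitted": date(2013, 4, 2), "los_days": 7, "readmitted": False},
]

def run_query(records, conditions):
    """Keep records satisfying every condition; single pass over the data."""
    return [r for r in records if all(cond(r) for cond in conditions)]

query = [lambda r: r["ward"] == "surgery",
         lambda r: r["los_days"] > 5 or r["readmitted"]]
for row in run_query(episodes, query):
    print(row)
```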
Broadening the Notion of Text: An Exploration of an Artistic Composing Process.
ERIC Educational Resources Information Center
Smagorinsky, Peter; Coppock, John
In language arts classes a "composition" generally refers to a written text. Semiotic theory based on C. S. Peirce's work suggests that writing is only one of many forms of composition available for mediating thought and activity. According to J. V. Wertsch (1991), writing should be one tool in a tool kit of mediational means available…
Event-Driven Process Chains (EPC)
NASA Astrophysics Data System (ADS)
Mendling, Jan
This chapter provides a comprehensive overview of Event-driven Process Chains (EPCs) and introduces a novel definition of EPC semantics. EPCs became popular in the 1990s as a conceptual business process modeling language in the context of reference modeling. Reference modeling refers to the documentation of generic business operations in a model such as service processes in the telecommunications sector, for example. It is claimed that reference models can be reused and adapted as best-practice recommendations in individual companies (see [230, 168, 229, 131, 400, 401, 446, 127, 362, 126]). The roots of reference modeling can be traced back to the Kölner Integrationsmodell (KIM) [146, 147] that was developed in the 1960s and 1970s. In the 1990s, the Institute of Information Systems (IWi) in Saarbrücken worked on a project with SAP to define a suitable business process modeling language to document the processes of the SAP R/3 enterprise resource planning system. There were two results from this joint effort: the definition of EPCs [210] and the documentation of the SAP system in the SAP Reference Model (see [92, 211]). The extensive database of this reference model contains almost 10,000 sub-models: 604 of them non-trivial EPC business process models. The SAP Reference model had a huge impact with several researchers referring to it in their publications (see [473, 235, 127, 362, 281, 427, 415]) as well as motivating the creation of EPC reference models in further domains including computer integrated manufacturing [377, 379], logistics [229] or retail [52]. The wide-spread application of EPCs in business process modeling theory and practice is supported by their coverage in seminal text books for business process management and information systems in general (see [378, 380, 49, 384, 167, 240]). EPCs are frequently used in practice due to a high user acceptance [376] and extensive tool support. Some examples of tools that support EPCs are ARIS Toolset by IDS Scheer AG, AENEIS by ATOSS Software AG, ADONIS by BOC GmbH, Visio by Microsoft Corp., Nautilus by Gedilan Consulting GmbH, and Bonapart by Pikos GmbH. In order to facilitate the interchange of EPC business process models between these tools, there is a tool neutral interchange format called EPC Markup Language (EPML) [283, 285, 286, 287, 289, 290, 291].
Basic Numeracy Abilities of Xhosa Reception Year Students in South Africa: Language Policy Issues
ERIC Educational Resources Information Center
Feza, Nosisi Nellie
2016-01-01
Language in mathematics learning and teaching has a significant role in influencing performance. Literature on language in mathematics learning has evolved from language as a barrier to language as a cultural tool, and recently more research has argued for use of home language as an instructional tool in mathematics classrooms. However, the…
Using Language Sample Databases
ERIC Educational Resources Information Center
Heilmann, John J.; Miller, Jon F.; Nockerts, Ann
2010-01-01
Purpose: Over the past 50 years, language sample analysis (LSA) has evolved from a powerful research tool that is used to document children's linguistic development into a powerful clinical tool that is used to identify and describe the language skills of children with language impairment. The Systematic Analysis of Language Transcripts (SALT; J.…
Words as cultivators of others minds.
Schilhab, Theresa S S
2015-01-01
The embodied-grounded view of cognition and language holds that sensorimotor experiences in the form of 're-enactments' or 'simulations' are significant to the individual's development of concepts and competent language use. However, a typical objection to the explanatory force of this view is that, in everyday life, we engage in linguistic exchanges about much more than might be directly accessible to our senses. For instance, when knowledge-sharing occurs as part of deep conversations between a teacher and student, language is the salient tool by which to obtain understanding, through the unfolding of explanations. Here, the acquisition of knowledge is realized through language, and the constitution of knowledge seems entirely linguistic. In this paper, based on a review of selected studies within contemporary embodied cognitive science, I propose that such linguistic exchanges, though occurring independently of direct experience, are in fact disguised forms of embodied cognition, leading to the reconciliation of the opposing views. I suggest that, in conversation, interlocutors use Words as Cultivators (WAC) of other minds as a direct result of their embodied-grounded origin, rendering WAC a radical interpretation of the Words as social Tools (WAT) proposal. The WAC hypothesis endorses the view of language as dynamic, continuously integrating with, and negotiating, cognitive processes in the individual. One such dynamic feature results from the 'linguification process', a term by which I refer to the socially produced mapping of a word to its referent which, mediated by the interlocutor, turns words into cultivators of others minds. In support of the linguification process hypothesis and WAC, I review relevant embodied-grounded research, and selected studies of instructed fear conditioning and guided imagery.
Programming languages for synthetic biology.
Umesh, P; Naveen, F; Rao, Chanchala Uma Maheswara; Nair, Achuthsankar S
2010-12-01
In the backdrop of accelerated efforts for creating synthetic organisms, the nature and scope of an ideal programming language for scripting synthetic organisms in silico has been receiving increasing attention. A few programming languages for synthetic biology capable of defining, constructing, networking, editing and delivering genome-scale models of cellular processes have recently been attempted. All these represent important points in a spectrum of possibilities. This paper introduces Kera, a state-of-the-art programming language for synthetic biology which is arguably ahead of similar languages or tools such as GEC, Antimony and GenoCAD. Kera is a full-fledged object-oriented programming language which is tempered by a biopart rule library named Samhita that captures the knowledge regarding the interaction of genome components and catalytic molecules. Prominent features of the language are demonstrated through a toy example, and a road map for the future development of Kera is also presented.
NASA Astrophysics Data System (ADS)
Hecht, Erin
2016-03-01
As Arbib [1] notes, the two-streams hypothesis [5] has provided a powerful explanatory framework for understanding visual processing. The inferotemporal ventral stream recognizes objects and agents - 'what' one is seeing. The dorsal 'how' or 'where' stream through parietal cortex processes motion, spatial location, and visuo-proprioceptive relationships - 'vision for action.' Hickok and Poeppel's [3] extension of this model to the auditory system raises the question of deeper, multi- or supra-sensory themes in dorsal vs. ventral processing. Petrides and Pandya [10] postulate that the evolution of language may have been influenced by the fact that the dorsal stream terminates in posterior Broca's area (BA44) while the ventral stream terminates in anterior Broca's area (BA45). In an intriguing potential parallel, a recent ALE meta-analysis of 54 fMRI studies found that semantic processing is located more anteriorly and superiorly than syntactic processing in Broca's area [13]. But clearly, macaques do not have language, nor other likely pre- or co-adaptations to language, such as complex imitation and tool use. What changed in the brain that enabled these functions to evolve?
Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo
2014-01-01
Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise, and paediatric collaboration, key content for an office-based tool was developed. The tool aimed to help physicians achieve three main goals: early and accurate identification of speech and language delays as well as children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society’s Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children’s speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool represents a practical interim treatment while the family is waiting for formal speech and language therapy consultation. PMID:24627648
A UMLS-based spell checker for natural language processing in vaccine safety
Tolentino, Herman D; Matters, Michael D; Walop, Wikke; Law, Barbara; Tong, Wesley; Liu, Fang; Fontelo, Paul; Kohl, Katrin; Payne, Daniel C
2007-01-01
Background The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. Methods We developed spell checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. Results We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74–75), 100% (95% CI: 100–100), and 47% (95% CI: 46%–48%), respectively. Conclusion We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS Specialist Lexicon as the primary source of dictionary terms and the WordNet lexicon as a secondary source. We used the UMLS as a domain-specific source of dictionary terms to compare potentially misspelled words in the corpus. The prototype sensitivity was comparable to currently available tools, but the specificity was much superior. The slow processing speed may be improved by trimming it down to the most useful component algorithms. Other investigators may find the methods we developed useful for cleaning text using lexicons specific to their area of interest. PMID:17295907
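The four-step pipeline described above (error detection, word list generation, disambiguation, correction) can be illustrated compactly. The sketch below is a minimal dictionary-based corrector in Python; the toy word list stands in for the UMLS Specialist Lexicon and WordNet resources the authors actually used.

```python
from difflib import get_close_matches

# Toy stand-in for a domain lexicon (the paper uses the UMLS Specialist
# Lexicon plus WordNet); any word not in it is flagged as a spelling error.
LEXICON = {"fever", "rash", "injection", "site", "swelling", "vaccine"}

def spell_correct(tokens):
    corrected = []
    for tok in tokens:
        if tok in LEXICON:                  # step 1: error detection
            corrected.append(tok)
            continue
        # step 2: word list generation (candidate corrections)
        candidates = get_close_matches(tok, LEXICON, n=3, cutoff=0.7)
        # steps 3-4: disambiguate (here: best similarity) and correct
        corrected.append(candidates[0] if candidates else tok)
    return corrected

print(spell_correct("feverr and sweling at injection sitee".split()))
# -> ['fever', 'and', 'swelling', 'at', 'injection', 'site']
```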
ERIC Educational Resources Information Center
Albaiz, Tahany
2016-01-01
Teaching English to ESL teachers is a challenging task for a number of reasons, the lack of connection between the target language and the native one being one of the most challenging factors (Ferlazzo & Sypnieski, 2013). Therefore, teachers are supposed to be innovators in creating the tools that could boost the learning process, as well as…
Louhi 2010: Special issue on Text and Data Mining of Health Documents
2011-01-01
The papers presented in this supplement focus and reflect on computer use in everyday clinical work in hospitals and clinics, such as electronic health record systems, pre-processing for computer-aided summaries, clinical coding, computer decision systems, as well as related ethical concerns and security. Much of this work necessarily concerns the incorporation and development of language processing tools and methods, and as such this supplement aims to provide an arena for reporting on developments in a diversity of languages. In the supplement we can read about some of the challenges identified above. PMID:21992545
NASA Astrophysics Data System (ADS)
Hoebelheinrich, N. J.; Lynnes, C.; West, P.; Ferritto, M.
2014-12-01
Two problems common to many geoscience domains are the difficulty of finding tools to work with a given dataset collection and, conversely, the difficulty of finding data for a known tool. A collaborative team from the Earth Science Information Partnership (ESIP) came together to design and create a web service, called ToolMatch, to address these problems. The team began their efforts by defining an initial, relatively simple conceptual model that addressed the two use cases briefly described above. The conceptual model is expressed as an ontology using OWL (Web Ontology Language) and DCterms (Dublin Core Terms), and utilizing standard ontologies such as DOAP (Description of a Project), FOAF (Friend of a Friend), SKOS (Simple Knowledge Organization System) and DCAT (Data Catalog Vocabulary). The ToolMatch service takes advantage of various Semantic Web and Web standards, such as OpenSearch, RESTful web services, SWRL (Semantic Web Rule Language) and SPARQL (SPARQL Protocol and RDF Query Language). The first version of the ToolMatch service was deployed in early fall 2014. While more complete testing is required, a number of communities besides ESIP member organizations have expressed interest in collaborating to create, test and use the service and incorporate it into their own web pages, tools and/or services, including the USGS Data Catalog service, DataONE, the Deep Carbon Observatory, Virtual Solar Terrestrial Observatory (VSTO), and the U.S. Global Change Research Program. In this session, presenters will discuss the inception and development of the ToolMatch service, the collaborative process used to design, refine, and test the service, and future plans for the service.
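To make the ToolMatch idea concrete, here is a minimal sketch using the rdflib Python library: a tiny graph asserts which tools can read which data formats, and a SPARQL query answers the "find tools for my dataset" use case. The tm: namespace and the canRead property are invented for illustration and are not the project's published vocabulary.

```python
from rdflib import Graph, Literal, Namespace

# Hypothetical vocabulary in the spirit of ToolMatch; names are illustrative.
TM = Namespace("http://example.org/toolmatch#")

g = Graph()
g.add((TM.Panoply, TM.canRead, Literal("NetCDF")))
g.add((TM.GDAL, TM.canRead, Literal("GeoTIFF")))

# "Which tools can work with my NetCDF dataset?"
q = """
PREFIX tm: <http://example.org/toolmatch#>
SELECT ?tool WHERE { ?tool tm:canRead "NetCDF" . }
"""
for row in g.query(q):
    print(row.tool)  # -> http://example.org/toolmatch#Panoply
```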
Improving Cognitive Processes in Preschool Children: The COGEST Programme
ERIC Educational Resources Information Center
Mayoral-Rodríguez, Silvia; Timoneda-Gallart, Carme; Pérez-Álvarez, Federico; Das, J. P.
2015-01-01
The present study provides empirical evidence to support the hypothesis that pre-school children's cognitive functions can be developed by virtue of a training tool named COGENT (Cognitive Enhancement Training). We assumed that COGENT (COGEST in Spain), which is embedded in speech and language, will enhance the core cognitive processes that are…
Video Recording and the Research Process
ERIC Educational Resources Information Center
Leung, Constant; Hawkins, Margaret R.
2011-01-01
This is a two-part discussion. Part 1 is titled "English Language Learning in Subject Lessons", and Part 2 is titled "Video as a Research Tool/Counterpoint". Working with different research concerns, the authors attempt to draw attention to a set of methodological and theoretical issues that have emerged in the research process using video data.…
Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang
1999-01-01
Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
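A rough illustration of such a document model, built with Python's standard ElementTree: a structured component whose elements point back into the original narrative via character offsets, with a round-trip parse standing in for validation. The tag names are invented for illustration; they are not the authors' DTD.

```python
import xml.etree.ElementTree as ET

report_text = "No acute infiltrate. Mild cardiomegaly."

# Structured component whose elements link to spans of the original text
# via character offsets; tag and attribute names are illustrative only.
doc = ET.Element("report")
ET.SubElement(doc, "text").text = report_text
structured = ET.SubElement(doc, "structured")
ET.SubElement(structured, "finding",
              code="cardiomegaly", certainty="moderate",
              start=str(report_text.index("Mild")),
              end=str(len(report_text) - 1))

xml_bytes = ET.tostring(doc)
ET.fromstring(xml_bytes)  # round-trip parse: raises if not well-formed XML
print(xml_bytes.decode())
```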
Tools for automating spacecraft ground systems: The Intelligent Command and Control (ICC) approach
NASA Technical Reports Server (NTRS)
Stoffel, A. William; Mclean, David
1996-01-01
The practical application of scripting languages and World Wide Web tools to the support of spacecraft ground system automation is reported on. The mission activities and the automation tools used at the Goddard Space Flight Center (MD) are reviewed. The use of the Tool Command Language (TCL) and the Practical Extraction and Report Language (PERL) scripting tools for automating mission operations is discussed, together with the application of different tools for the Compton Gamma Ray Observatory ground system.
ERIC Educational Resources Information Center
L'Homme, Marie-Claude
The evolution of "language utilities," a concept confined largely to the francophone world and relating to the uses of language in computer science and the use of computer science for languages, is chronicled. The language utilities are of three types: (1) tools for language development, primarily dictionary databases and related tools;…
Gálvez, Jorge A; Pappas, Janine M; Ahumada, Luis; Martin, John N; Simpao, Allan F; Rehman, Mohamed A; Witmer, Char
2017-10-01
Venous thromboembolism (VTE) is a potentially life-threatening condition that includes both deep vein thrombosis (DVT) and pulmonary embolism. We sought to improve detection and reporting of children with a new diagnosis of VTE by applying natural language processing (NLP) tools to radiologists' reports. We validated the performance of an NLP tool, Reveal NLP (Health Fidelity Inc, San Mateo, CA), and an inference rules engine in identifying reports with deep venous thrombosis using a curated set of ultrasound reports. We then configured the NLP tool to scan all available radiology reports on a daily basis for studies that met criteria for VTE between July 1, 2015, and March 31, 2016. The NLP tool and inference rules engine correctly identified 140 out of 144 reports with positive DVT findings and 98 out of 106 negative reports in the validation set. The tool's sensitivity was 97.2% (95% CI 93-99.2%), and its specificity was 92.5% (95% CI 85.7-96.7%). Subsequently, the NLP tool and inference rules engine processed 6373 radiology reports from 3371 hospital encounters, identifying 178 positive reports and 3193 negative reports with a sensitivity of 82.9% (95% CI 74.8-89.2) and specificity of 97.5% (95% CI 96.9-98). The system functions well as a safety net to screen patients for hospital-acquired VTE (HA-VTE) on a daily basis and offers value as an automated, redundant system. To our knowledge, this is the first pediatric study to apply NLP technology in a prospective manner for HA-VTE identification.
Dogac, Asuman; Kabak, Yildiray; Namli, Tuncay; Okcan, Alper
2008-11-01
Integrating the Healthcare Enterprise (IHE) specifies integration profiles describing selected real-world use cases to facilitate the interoperability of healthcare information resources. While realizing a complex real-world scenario, IHE profiles are combined by grouping the related IHE actors. Grouping IHE actors implies that the associated business processes (IHE profiles) in which the actors are involved must be combined, that is, the choreography of the resulting collaborative business process must be determined by deciding on the execution sequence of transactions coming from different profiles. There are many IHE profiles, and each user or vendor may support a different set of IHE profiles that fits its business need. However, determining the precedence of all the involved transactions manually for each possible combination of the profiles is a very tedious task. In this paper, we describe how to obtain the overall business process automatically when IHE actors are grouped. For this purpose, we represent the IHE profiles in a standard, machine-processable language, namely, the Organization for the Advancement of Structured Information Standards (OASIS) ebXML Business Process Specification Language (ebBP). We define the precedence rules among the transactions of the IHE profiles, again in a machine-processable way. Then, through a graphical tool, we allow users to select the actors to be grouped and automatically produce the overall business process in a machine-processable format.
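Stripped to its core, deriving one global execution order from pairwise precedence rules over transactions is a topological sort. The sketch below shows that idea in Python on a handful of IHE-flavored transaction names; the names and rules are invented for illustration and do not reproduce the paper's ebBP machinery.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Invented transactions from two hypothetical grouped profiles; the dict
# maps each transaction to the transactions that must precede it.
precedence = {
    "RegisterDocumentSet": {"ProvideAndRegister"},
    "RetrieveDocument":    {"RegisterDocumentSet", "PatientIdentityFeed"},
    "ProvideAndRegister":  {"PatientIdentityFeed"},
    "PatientIdentityFeed": set(),
}

# One valid choreography for the combined business process.
print(list(TopologicalSorter(precedence).static_order()))
# e.g. ['PatientIdentityFeed', 'ProvideAndRegister',
#       'RegisterDocumentSet', 'RetrieveDocument']
```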
Domain-Specific Languages and Diagram Customization for a Concurrent Engineering Environment
NASA Technical Reports Server (NTRS)
Cole, Bjorn; Dubos, Greg; Banazadeh, Payam; Reh, Jonathan; Case, Kelley; Wang, Yeou-Fang; Jones, Susan; Picha, Frank
2013-01-01
A major open question for advocates of Model-Based Systems Engineering (MBSE) is the question of how system and subsystem engineers will work together. The Systems Modeling Language (SysML), like any language intended for a large audience, is in tension between the desires for simplicity and for expressiveness. In order to be more expressive, many specialized language elements may be introduced, which will unfortunately make a complete understanding of the language a more daunting task. While this may be acceptable for systems modelers, it will increase the challenge of including subsystem engineers in the modeling effort. One possible answer to this situation is the use of Domain-Specific Languages (DSL), which are fully supported by the Unified Modeling Language (UML). SysML is in fact a DSL for systems engineering. The expressive power of a DSL can be enhanced through the use of diagram customization. Various domains have already developed their own schematic vocabularies. Within the space engineering community, two excellent examples are the propulsion and telecommunication subsystems. A return to simple box-and-line diagrams (e.g., the SysML Internal Block Diagram) is in many ways a step backward. In order to allow subsystem engineers to contribute directly to the model, it is necessary to make a system modeling tool at least approximate in accessibility to drawing tools like Microsoft PowerPoint and Visio. The challenge is made more extreme in a concurrent engineering environment, where designs must often be drafted in an hour or two. In the case of the Jet Propulsion Laboratory's Team X concurrent design team, a subsystem is specified using a combination of PowerPoint for drawing and Excel for calculation. A pilot has been undertaken in order to meld the drawing portion and the production of master equipment lists (MELs) via a SysML authoring tool, MagicDraw. Team X currently interacts with its customers in a process of sharing presentations. There are several inefficiencies that arise from this situation. The first is that a customer team must wait two weeks to a month (which is 2-4 times the duration of most Team X studies themselves) for a finalized, detailed design description. Another is that this information must be re-entered by hand into the set of engineering artifacts and design tools that the mission concept team uses after a study is complete. Further, there is no persistent connection to Team X or institutionally shared formulation design tools and data after a given study, again reducing the direct reuse of designs created in a Team X study. This paper presents the underpinnings of subsystem DSLs as they were developed for this pilot. This includes specialized semantics for different domains as well as the process by which major categories of objects were derived in support of defining the DSLs. The feedback given to us by the domain experts on usability, along with a pilot study with the partial inclusion of these tools, is also discussed.
Primate vocal communication: a useful tool for understanding human speech and language evolution?
Fedurek, Pawel; Slocombe, Katie E
2011-04-01
Language is a uniquely human trait, and questions of how and why it evolved have been intriguing scientists for years. Nonhuman primates (primates) are our closest living relatives, and their behavior can be used to estimate the capacities of our extinct ancestors. As humans and many primate species rely on vocalizations as their primary mode of communication, the vocal behavior of primates has been an obvious target for studies investigating the evolutionary roots of human speech and language. By studying the similarities and differences between human and primate vocalizations, comparative research has the potential to clarify the evolutionary processes that shaped human speech and language. This review examines some of the seminal and recent studies that contribute to our knowledge regarding the link between primate calls and human language and speech. We focus on three main aspects of primate vocal behavior: functional reference, call combinations, and vocal learning. Studies in these areas indicate that despite important differences, primate vocal communication exhibits some key features characterizing human language. They also indicate, however, that some critical aspects of speech, such as vocal plasticity, are not shared with our primate cousins. We conclude that comparative research on primate vocal behavior is a very promising tool for deepening our understanding of the evolution of human speech and language, but much is still to be done as many aspects of monkey and ape vocalizations remain largely unexplored.
The Language Exposure Assessment Tool: Quantifying Language Exposure in Infants and Children
ERIC Educational Resources Information Center
DeAnda, Stephanie; Bosch, Laura; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2016-01-01
Purpose: The aim of this study was to develop the Language Exposure Assessment Tool (LEAT) and to examine its cross-linguistic validity, reliability, and utility. The LEAT is a computerized interview-style assessment that requests parents to estimate language exposure. The LEAT yields an automatic calculation of relative language exposure and…
Graphical CONOPS Prototype to Demonstrate Emerging Methods, Processes, and Tools at ARDEC
2013-07-17
Concept Engineering Framework (ICEF): an extensive literature review was conducted to discover metrics that exist for evaluating concept engineering. [Front-matter residue; the recoverable table listings are a mapping of language to ICEF to SysML, artifact metrics (Table 5), and collaboration metrics (Table 6).]
STAR (Simple Tool for Automated Reasoning): Tutorial guide and reference manual
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1985-01-01
STAR is an interactive, interpreted programming language for the development and operation of Artificial Intelligence application systems. The language is intended for use primarily in the development of software application systems which rely on a combination of symbolic processing, central to the vast majority of AI algorithms, with routines and data structures defined in compiled languages such as C, FORTRAN and PASCAL. References to routines and data structures defined in compiled languages are intermixed with symbolic structures in STAR, resulting in a hybrid operating environment in which symbolic and non-symbolic processing and organization of data may interact to a high degree within the execution of particular application systems. The STAR language was developed in the course of a project involving AI techniques in the interpretation of imaging spectrometer data and is derived in part from a previous language called CLIP. The interpreter for STAR is implemented as a program defined in the language C and has been made available for distribution in source code form through NASA's Computer Software Management and Information Center (COSMIC). Contained within this report are the STAR Tutorial Guide, which introduces the language in a step-by-step manner, and the STAR Reference Manual, which provides a detailed summary of the features of STAR.
A remote sensing computer-assisted learning tool developed using the unified modeling language
NASA Astrophysics Data System (ADS)
Friedrich, J.; Karslioglu, M. O.
The goal of this work has been to create an easy-to-use and simple-to-make learning tool for remote sensing at an introductory level. Many students struggle to comprehend what seems to be a very basic knowledge of digital images, image processing and image arithmetic, for example. Because professional programs are generally too complex and overwhelming for beginners and often not tailored to the specific needs of a course regarding functionality, a computer-assisted learning (CAL) program was developed based on the unified modeling language (UML), the present standard for object-oriented (OO) system development. A major advantage of this approach is an easier transition from modeling to coding of such an application, if modern UML tools are being used. After introducing the constructed UML model, its implementation is briefly described followed by a series of learning exercises. They illustrate how the resulting CAL tool supports students taking an introductory course in remote sensing at the author's institution.
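The image arithmetic such an introductory tool teaches reduces to elementwise array operations. A minimal NumPy sketch, with synthetic two-band data standing in for real imagery, computing the classic NDVI band ratio:

```python
import numpy as np

# Synthetic 4x4 "red" and "near-infrared" bands standing in for real imagery.
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.2, size=(4, 4))
nir = rng.uniform(0.3, 0.6, size=(4, 4))

# Band arithmetic: the normalized difference vegetation index (NDVI),
# a standard introductory remote-sensing computation.
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
```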
Evaluation and revision of questionnaires for use among low-literacy immigrant Latinos
D'Alonzo, Karen T.
2011-01-01
As more Spanish speaking immigrants participate in and become the focus of research studies, questions arise about the appropriateness of existing research tools. Questionnaires have often been adapted from English language instruments and tested among college-educated Hispanic-Americans. Little has been written regarding the testing and evaluation of research tools among less educated Latino immigrants. The purpose of this study was to revise and evaluate the appropriateness of a battery of existing Spanish-language questionnaires for a physical activity intervention for immigrant Hispanic women. A three-step process was utilized to evaluate, adapt and test Spanish versions of the Self-Efficacy and Exercise Habits Survey, an abbreviated version of the Hispanic Stress Inventory-Immigrant version and the Latina Values Scale. The revised tools demonstrated acceptable validity and reliability. The adaptations improved the readability of the tools, resulting in a greater response rate, less missing data and fewer extreme responses. Psychometric limitations to the adaptation of Likert scales are discussed. PMID:22030592
Rapid Prototyping of Application Specific Signal Processors (RASSP)
1993-12-23
[Front-matter residue from the tool listing: Cadre Teamwork; CodeCenter (CenterLine); dbx/dbxtool (UNIX C debuggers); Falcon (Mentor ECAD framework); FrameMaker (Frame Tech word processing); gcc (GNU C/C++ compiler); gprof (GNU software profiling tool).] An organization can put its own documentation on-line using the BOLD Composer for FrameMaker. The AMPLE programming language is a C-like language used for…
A review on adult pragmatic assessments
Sobhani Rad, Davood
2014-01-01
Pragmatics is defined as appropriate use of language either to comprehend ideas or to interact in social situations effectively. Pragmatic competence, which is processed in the right hemisphere, comprises a number of interrelated skills that manifest in a range of adaptive behaviors. Due to the widespread influence of language in communication, studying pragmatic profiles, by developing appropriate tools, is of importance. Here, a range of pragmatic theories and assessment instruments available for use in adult patients is reviewed. PMID:25422728
The KIT Motion-Language Dataset.
Plappert, Matthias; Mandery, Christian; Asfour, Tamim
2016-12-01
Linking human motion and natural language is of great interest for the generation of semantic representations of human activities as well as for the generation of robot activities based on natural language input. However, although there have been years of research in this area, no standardized and openly available data set exists to support the development and evaluation of such systems. We, therefore, propose the Karlsruhe Institute of Technology (KIT) Motion-Language Dataset, which is large, open, and extensible. We aggregate data from multiple motion capture databases and include them in our data set using a unified representation that is independent of the capture system or marker set, making it easy to work with the data regardless of its origin. To obtain motion annotations in natural language, we apply a crowd-sourcing approach and a web-based tool that was specifically built for this purpose, the Motion Annotation Tool. We thoroughly document the annotation process itself and discuss gamification methods that we used to keep annotators motivated. We further propose a novel method, perplexity-based selection, which systematically selects motions for further annotation that are either under-represented in our data set or that have erroneous annotations. We show that our method mitigates the two aforementioned problems and ensures a systematic annotation process. We provide an in-depth analysis of the structure and contents of our resulting data set, which, as of October 10, 2016, contains 3911 motions with a total duration of 11.23 hours and 6278 annotations in natural language that contain 52,903 words. We believe this makes our data set an excellent choice that enables more transparent and comparable research in this important area.
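Perplexity-based selection, as described above, ranks motions by how surprising their annotations are under a language model, so under-represented or anomalous items surface first. A toy unigram version of that idea (the authors' actual model is not specified here):

```python
import math
from collections import Counter

annotations = {
    "motion_1": "a person walks forward",
    "motion_2": "a person walks forward slowly",
    "motion_3": "subject performs zigzag cartwheel",  # unusual wording
}

# Unigram model with add-one smoothing over all annotation text.
tokens = [w for t in annotations.values() for w in t.split()]
counts, vocab, total = Counter(tokens), len(set(tokens)), len(tokens)

def perplexity(text):
    ws = text.split()
    logp = sum(math.log((counts[w] + 1) / (total + vocab)) for w in ws)
    return math.exp(-logp / len(ws))

# Highest-perplexity motions would be selected for further annotation.
for m, t in sorted(annotations.items(), key=lambda kv: -perplexity(kv[1])):
    print(m, round(perplexity(t), 1))
```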
Using a simulation assistant in modeling manufacturing systems
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, S. X.; Wolfsberger, John W.
1988-01-01
Numerous simulation languages exist for modeling discrete event processes, and many have now been ported to microcomputers. Graphic and animation capabilities were added to many of these languages to help users build models and evaluate simulation results. Even with all these languages and added features, the user is still burdened with learning the simulation language. Furthermore, the time to construct and then validate the simulation model is always greater than originally anticipated. One approach to minimizing the time requirement is to use pre-defined macros that describe various common processes or operations in a system. The development of a simulation assistant for modeling discrete event manufacturing processes is presented. A simulation assistant is defined as an interactive intelligent software tool that assists the modeler in writing a simulation program by translating the modeler's symbolic description of the problem and then automatically generating the corresponding simulation code. The discussion emphasizes an overview of the simulation assistant, its elements, and the five manufacturing simulation generators. A typical manufacturing system is modeled using the simulation assistant, and the advantages and disadvantages are discussed.
A Review of Language: The Cultural Tool by Daniel L. Everett
Weitzman, Raymond S.
2013-01-01
Language: The Cultural Tool by Daniel Everett covers a broad spectrum of issues concerning the nature of language from the perspective of an anthropological linguist who has had considerable fieldwork experience studying the language and culture of the Pirahã, an indigenous Amazonian tribe in Brazil, as well as a number of other indigenous languages and cultures. This review focuses mainly on the key elements of his approach to language: language as a solution to the communication problem; Everett's conception of language; what makes language possible; how language and culture influence each other.
ERIC Educational Resources Information Center
Templar, Bill
2002-01-01
Foreign language educators need creative tools to empower students to make the languages they are learning truly their own. One such new instrument well worth inventive appropriation in Asia is the European Language Portfolio (ELP), launched by the Council of Europe in 2001. This report focuses on the three parts of ELP--a language passport,…
Using Films in Vocabulary Teaching of Turkish as a Foreign Language
ERIC Educational Resources Information Center
Iscan, Adem
2017-01-01
The use of auditory and visual tools in language teaching is common practice, and films are one such tool. It has been found that using films in language teaching is also effective in the development of vocabulary of foreign language learners. The literature review reveals that while films are used in foreign language teaching…
Oral-diadochokinesis rates across languages: English and Hebrew norms.
Icht, Michal; Ben-David, Boaz M
2014-01-01
Oro-facial and speech motor control disorders represent a variety of speech and language pathologies. Early identification of such problems is important and carries clinical implications. A common and simple tool for gauging the presence and severity of speech motor control impairments is oral-diadochokinesis (oral-DDK). Surprisingly, norms for adult performance are missing from the literature. The goals of this study were: (1) to establish a norm for oral-DDK rate for (young to middle-aged) adult English speakers, by collecting data from the literature (five studies, N=141); (2) to investigate the possible effect of language (and culture) on oral-DDK performance, by analyzing studies conducted in other languages (five studies, N=140), alongside the English norm; and (3) to find a new norm for adult Hebrew speakers, by testing 115 speakers. We first offer an English norm with a mean of 6.2 syllables/s (SD=.8), and a lower boundary of 5.4 syllables/s that can be used to indicate possible abnormality. Next, we found significant differences between four tested languages (English, Portuguese, Farsi and Greek) in oral-DDK rates. Results suggest the need to set language- and culture-sensitive norms for the application of the oral-DDK task worldwide. Finally, we found the oral-DDK performance for adult Hebrew speakers to be 6.4 syllables/s (SD=.8), not significantly different from the English norms. This implies possible phonological similarities between English and Hebrew. We further note that no gender effects were found in our study. We recommend using oral-DDK as an important tool in the speech-language pathologist's arsenal. Yet, application of this task should be done carefully, comparing individual performance to a set norm within the specific language. Readers will be able to: (1) identify the speech-language pathologist's assessment process using the oral-DDK task, by comparing an individual performance to the present English norm, (2) describe the impact of language on oral-DDK performance, and (3) accurately assess Hebrew-speaking patients using this tool.
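The "lower boundary" quoted above is simply one standard deviation below the mean (6.2 − 0.8 = 5.4 syllables/s). A tiny helper that flags an individual rate against the pooled English norm, assuming that one-SD convention:

```python
ENGLISH_MEAN, ENGLISH_SD = 6.2, 0.8  # syllables/s, from the pooled norm above

def ddk_flag(rate_syll_per_s, cutoff_sd=1.0):
    """Return a z-score and whether the rate falls below mean - cutoff_sd*SD."""
    z = (rate_syll_per_s - ENGLISH_MEAN) / ENGLISH_SD
    return z, rate_syll_per_s < ENGLISH_MEAN - cutoff_sd * ENGLISH_SD

print(ddk_flag(5.0))  # (-1.5, True): below the 5.4 syllables/s boundary
```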
Misersky, Julia; Gygax, Pascal M; Canal, Paolo; Gabriel, Ute; Garnham, Alan; Braun, Friederike; Chiarini, Tania; Englund, Kjellrun; Hanulikova, Adriana; Ottl, Anton; Valdrova, Jana; Von Stockhausen, Lisa; Sczesny, Sabine
2014-09-01
We collected norms on the gender stereotypicality of an extensive list of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak, to be used as a basis for the selection of stimulus materials in future studies. We present a Web-based tool (available at https://www.unifr.ch/lcg/ ) that we developed to collect these norms and that we expect to be useful for other researchers, as well. In essence, we provide (a) gender stereotypicality norms across a number of languages and (b) a tool to facilitate cross-language as well as cross-cultural comparisons when researchers are interested in the investigation of the impact of stereotypicality on the processing of role nouns.
ERIC Educational Resources Information Center
Yoon, Su-Youn; Lee, Chong Min; Houghton, Patrick; Lopez, Melissa; Sakano, Jennifer; Loukina, Anastasia; Krovetz, Bob; Lu, Chi; Madani, Nitin
2017-01-01
In this study, we developed assistive tools and resources to support TOEIC® Listening test item generation. There has recently been an increased need for a large pool of items for these tests. This need has, in turn, inspired efforts to increase the efficiency of item generation while maintaining the quality of the created items. We aimed to…
BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.
Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel
2015-06-02
Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates the access to registered tools by providing front-end and back-end web services. Programmers can install applications in HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. BOWS-registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run in HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
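A client of such a service reduces to two HTTP calls. The sketch below uses the requests library against hypothetical endpoints and payload fields; the real BOWS URLs and JSON schema are not specified here, so treat every name as a placeholder.

```python
import requests

BASE = "https://bows.example.org/api"  # placeholder; the real service URL differs

def submit_job(tool_id, params):
    """Submit a job to a BOWS-style front-end service (hypothetical endpoint)."""
    resp = requests.post(f"{BASE}/jobs", json={"tool": tool_id, "params": params})
    resp.raise_for_status()
    return resp.json()["job_id"]

def fetch_result(job_id):
    """Retrieve a finished job's result from the front-end (hypothetical endpoint)."""
    resp = requests.get(f"{BASE}/jobs/{job_id}/result")
    resp.raise_for_status()
    return resp.text

# Example usage (requires a live service):
#   job = submit_job("blast", {"db": "nr"})
#   print(fetch_result(job))
```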
GoCxx: a tool to easily leverage C++ legacy code for multicore-friendly Go libraries and frameworks
NASA Astrophysics Data System (ADS)
Binet, Sébastien
2012-12-01
Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a ‘single-thread’ processing model naturally emerged but the implicit assumptions it encouraged are greatly impairing our abilities to scale in a multicore/manycore world. Writing scalable code in C++ for multicore architectures, while doable, is no panacea. Sure, C++11 will improve on the current situation (by standardizing on std::thread, introducing lambda functions and defining a memory model) but it will do so at the price of complicating further an already quite sophisticated language. This level of sophistication has probably already strongly motivated analysis groups to migrate to CPython, hoping for its current limitations with respect to multicore scalability to be either lifted (Global Interpreter Lock removal) or for the advent of a new Python VM better tailored for this kind of environment (PyPy, Jython, …) Could HENP migrate to a language with none of the deficiencies of C++ (build time, deployment, low level tools for concurrency) and with the fast turn-around time, simplicity and ease of coding of Python? This paper will try to make the case for Go - a young open source language with built-in facilities to easily express and expose concurrency - being such a language. We introduce GoCxx, a tool leveraging gcc-xml's output to automate the tedious work of creating Go wrappers for foreign languages, a critical task for any language wishing to leverage legacy and field-tested code. We will conclude with the first results of applying GoCxx to real C++ code.
Transforming Language Ideologies through Action Research: A Case Study of Bilingual Science Learning
NASA Astrophysics Data System (ADS)
Yang, Eunah
This qualitative case study explored a third grade bilingual teacher's transformative language ideologies through participation in a collaborative action research project. By merging language ideologies theory, Cultural Historical Activity Theory (CHAT), and action research, I was able to identify the analytic focus of this study. I analyzed how one teacher and I, the researcher, collaboratively reflected on classroom language practices during the video analysis meetings and focus groups. Further, I analyzed twelve videos that we coded together to see the changes in the teacher's language practices over time. My unit of analysis was the discourse practice mediated by additive language ideologies. Throughout the collaborative action research process, we both critically reflected on classroom language use. We also developed a critical consciousness about the participatory shifts and learning of focal English Learner (EL) students. Finally, the teacher made changes to her classroom language practices. The results of this study contribute theoretical, methodological, and practical insights to the literacy education research field. The integration of language ideologies, CHAT, and action research can help educational practitioners, researchers, and policy makers understand the importance of transforming teachers' language ideologies in designing additive learning contexts for ELs. From a methodological perspective, the collaborative teacher-researcher video analysis process offers a unique contribution to the literature on language ideologies in education, with analytic triangulation. As a practical implication, this study suggests action research can be one of the teacher education tools to help teachers transform language ideologies for EL education.
Neural Cognition and Affective Computing on Cyber Language.
Huang, Shuang; Zhou, Xuan; Xue, Ke; Wan, Xiqiong; Yang, Zhenyi; Xu, Duo; Ivanović, Mirjana; Yu, Xueer
2015-01-01
Characterized by its customary symbol system and simple and vivid expression patterns, cyber language acts not only as a tool for convenient communication but also as a carrier of abundant emotions, and it attracts considerable attention in public opinion analysis, internet marketing, service feedback monitoring, and social emergency management. Based on our multidisciplinary research, this paper presents a classification of the emotional symbols in cyber language, analyzes the cognitive characteristics of different symbols, and puts forward a mechanism model to show the dominant neural activities in that process. Through a comparative study of Chinese, English, and Spanish, the languages used by the largest populations in the world, this paper discusses the expressive patterns of emotions in international cyber languages and proposes an intelligent method for affective computing on cyber language in a unified PAD (Pleasure-Arousal-Dominance) emotional space.
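In a PAD representation like the one proposed above, each emotional symbol carries a (Pleasure, Arousal, Dominance) triple, and a message's affect can be estimated by aggregating the triples of the symbols it contains. A toy sketch with invented coordinates (real lexicons use empirically derived values):

```python
# Invented PAD coordinates for a few emotional symbols; illustrative only.
PAD = {
    ":)":  ( 0.8, 0.4,  0.5),
    ":(":  (-0.7, 0.3, -0.4),
    "!!!": ( 0.1, 0.9,  0.2),
}

def message_pad(tokens):
    """Average the PAD triples of all known emotional symbols in a message."""
    triples = [PAD[t] for t in tokens if t in PAD]
    if not triples:
        return (0.0, 0.0, 0.0)
    n = len(triples)
    return tuple(round(sum(dim) / n, 2) for dim in zip(*triples))

print(message_pad("great news :) !!!".split()))  # -> (0.45, 0.65, 0.35)
```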
Web-Based Machine Translation as a Tool for Promoting Electronic Literacy and Language Awareness
ERIC Educational Resources Information Center
Williams, Lawrence
2006-01-01
This article addresses a pervasive problem of concern to teachers of many foreign languages: the use of Web-Based Machine Translation (WBMT) by students who do not understand the complexities of this relatively new tool. Although networked technologies have greatly increased access to many language and communication tools, WBMT is still…
Sharma, Vivekanand; Law, Wayne; Balick, Michael J.; Sarkar, Indra Neil
2017-01-01
The growing amount of data describing historical medicinal uses of plants from digitization efforts provides the opportunity to develop systematic approaches for identifying potential plant-based therapies. However, the task of cataloguing plant use information from natural language text is a challenging task for ethnobotanists. To date, there have been only limited adoption of informatics approaches used for supporting the identification of ethnobotanical information associated with medicinal uses. This study explored the feasibility of using biomedical terminologies and natural language processing approaches for extracting relevant plant-associated therapeutic use information from historical biodiversity literature collection available from the Biodiversity Heritage Library. The results from this preliminary study suggest that there is potential utility of informatics methods to identify medicinal plant knowledge from digitized resources as well as highlight opportunities for improvement. PMID:29854223
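A crude version of the extraction step, scanning digitized text for co-occurrences of plant names and therapeutic-use terms, can be sketched with plain set intersection; both word lists below are toy stand-ins for the biomedical terminologies the study draws on.

```python
import re

# Toy stand-ins for a plant-name list and a therapeutic-use vocabulary.
PLANTS = {"cinchona", "willow", "foxglove"}
USES = {"fever", "pain", "dropsy"}

def extract_pairs(sentence):
    """Return (plant, use) pairs co-occurring in one sentence."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    return [(p, u) for p in PLANTS & words for u in USES & words]

text = "Bark of the cinchona tree was employed against fever."
print(extract_pairs(text))  # -> [('cinchona', 'fever')]
```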
Baneyx, Audrey; Charlet, Jean; Jaulent, Marie-Christine
2007-01-01
Pathologies and acts are classified in thesauri to help physicians code their activity. In practice, the use of thesauri is not sufficient to reduce variability in coding, and thesauri are not suitable for computer processing. We think the automation of the coding task requires a conceptual modeling of medical items: an ontology. Our task is to help lung specialists code acts and diagnoses with software that represents the medical knowledge of the specialty concerned by means of an ontology. The objective of the reported work was to build an ontology of pulmonary diseases dedicated to the coding process. To carry out this objective, we developed a precise methodological process for the knowledge engineer in order to build various types of medical ontologies. This process is based on the need to express precisely in natural language the meaning of each concept using differential semantics principles. A differential ontology is a hierarchy of concepts and relationships organized according to their similarities and differences. Our main research hypothesis is to apply natural language processing tools to corpora to develop the resources needed to build the ontology. We consider two corpora, one composed of patient discharge summaries and the other a teaching book. We propose to combine two approaches to enrich the ontology building: (i) a method which consists of building terminological resources through distributional analysis and (ii) a method based on the observation of corpus sequences in order to reveal semantic relationships. Our ontology currently includes 1550 concepts, and the software implementing the coding process is still under development. Results show that the proposed approach is operational and indicate that the combination of these methods and the comparison of the resulting terminological structures give interesting clues to a knowledge engineer for the building of an ontology.
Symbolic dynamic filtering and language measure for behavior identification of mobile robots.
Mallapragada, Goutham; Ray, Asok; Jin, Xin
2012-06-01
This paper presents a procedure for behavior identification of mobile robots, which requires limited or no domain knowledge of the underlying process. While the features of robot behavior are extracted by symbolic dynamic filtering of the observed time series, the behavior patterns are classified based on language measure theory. The behavior identification procedure has been experimentally validated on a networked robotic test bed by comparison with commonly used tools, namely, principal component analysis for feature extraction and Bayesian risk analysis for pattern classification.
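Symbolic dynamic filtering, as used above, discretizes a time series into a symbol sequence and summarizes it by symbol-transition statistics, which then serve as a feature vector for classification. A minimal sketch with a uniform partition (the paper's partitioning and language-measure classification are more sophisticated):

```python
import numpy as np

def sdf_features(series, n_symbols=4):
    """Discretize a series into symbols and return the row-normalized
    symbol-transition matrix, flattened into a feature vector."""
    edges = np.linspace(series.min(), series.max(), n_symbols + 1)[1:-1]
    symbols = np.digitize(series, edges)          # values -> 0..n_symbols-1
    counts = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):   # count symbol transitions
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return (counts / np.where(rows == 0, 1, rows)).ravel()

t = np.linspace(0, 4 * np.pi, 200)
print(sdf_features(np.sin(t)).round(2))
```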
ERIC Educational Resources Information Center
Cote Parra, Gabriel Eduardo
2015-01-01
The purpose of this action research was to explore the types of interactions that foreign language learners experience while using a wiki as a supporting tool for a face-to-face research course. This design allowed me to play a dual role: first, I studied my own classroom setting and students. Second, I implemented a pedagogical intervention based…
Zheng, Kai; Mei, Qiaozhu; Yang, Lei; Manion, Frank J.; Balis, Ulysses J.; Hanauer, David A.
2011-01-01
In this study, we comparatively examined the linguistic properties of narrative clinician notes created through voice dictation versus those directly entered by clinicians via a computer keyboard. Intuitively, the nature of voice-dictated notes would resemble that of natural language, while typed-in notes may demonstrate distinctive language features for reasons such as intensive usage of acronyms. The study analyses were based on an empirical dataset retrieved from our institutional electronic health records system. The dataset contains 30,000 voice-dictated notes and 30,000 notes that were entered manually; both were encounter notes generated in ambulatory care settings. The results suggest that between the narrative clinician notes created via these two different methods, there exists a considerable amount of lexical and distributional differences. Such differences could have a significant impact on the performance of natural language processing tools, necessitating these two different types of documents being differentially treated. PMID:22195229
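The abstract reports lexical and distributional differences between dictated and typed notes without naming a metric; a minimal sketch of one common way to quantify such differences is the Jensen-Shannon distance between the two corpora's unigram distributions. The toy "notes" below are invented and this is not the authors' actual analysis.

    from collections import Counter
    import numpy as np
    from scipy.spatial.distance import jensenshannon

    dictated = "the patient reports chest pain radiating to the left arm".split()
    typed = "pt c/o cp radiating l arm hx htn".split()

    vocab = sorted(set(dictated) | set(typed))
    def dist(tokens):
        c = Counter(tokens)
        return np.array([c[w] for w in vocab], dtype=float) / len(tokens)

    # Jensen-Shannon distance: 0 for identical unigram distributions,
    # 1 (base 2) for distributions with no overlap.
    print(jensenshannon(dist(dictated), dist(typed), base=2))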
Scriptwriting as a Tool for Learning Stylistic Variation
ERIC Educational Resources Information Center
Saugera, Valerie
2011-01-01
A film script is a useful tool for allowing students to experiment with language variation. Scripts of love stories comprise a range of language contexts, each triggering a different style on a formal-neutral-informal linguistic continuum: (1) technical cinematographic language in camera directions; (2) narrative language in exposition of scenes,…
Facebook Groups as a Supporting Tool for Language Classrooms
ERIC Educational Resources Information Center
Ekoc, Arzu
2014-01-01
This paper attempts to present a review of Facebook group pages as an educational tool for language learning. One of the primary needs of foreign language learners is to gain the opportunity to use the target language outside the classroom practice. Social media communication provides occasions for learners to receive input and produce output…
The role of the cerebellum in the regulation of language functions.
Starowicz-Filip, Anna; Chrobak, Adrian Andrzej; Moskała, Marek; Krzyżewski, Roger M; Kwinta, Borys; Kwiatkowski, Stanisław; Milczarek, Olga; Rajtar-Zembaty, Anna; Przewoźnik, Dorota
2017-08-29
The present paper is a review of studies on the role of the cerebellum in the regulation of language functions. This brain structure, until recently associated chiefly with motor skills, visual-motor coordination and balance, proves to be significant also for cognitive functioning. With regard to language functions, studies show that the cerebellum determines verbal fluency (both semantic and formal), expressive and receptive grammar processing, the ability to identify and correct language mistakes, and writing skills. Cerebellar damage is a possible cause of aphasia or the cerebellar mutism syndrome (CMS). Decreased cerebellocortical connectivity as well as anomalies in the structure of the cerebellum are emphasized in numerous developmental dyslexia theories. The cerebellum is characterized by linguistic lateralization. From the neuroanatomical perspective, its right hemisphere and dentate nucleus, having multiple cerebellocortical connections with the cerebral cortical language areas, are particularly important for language functions. Usually, language deficits that develop as a result of cerebellar damage have subclinical intensity and require sensitive neuropsychological diagnostic tools designed to assess higher verbal functions.
Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T
2013-08-01
As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
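The validated algorithm itself is not published in the abstract; below is a minimal rule-based sketch of the task it performs, namely flagging radiology reports that contain a recommendation for additional imaging. The trigger patterns and reports are invented for illustration, and the actual algorithm, refined over three training cycles, is surely more elaborate.

    import re

    # Hypothetical trigger patterns for recommendations for additional imaging.
    RFAI = re.compile(
        r"\b(recommend(ed|s)?|suggest(ed|s)?|advis(e|ed))\b.{0,60}"
        r"\b(follow[- ]?up|repeat|dedicated|further)\b.{0,40}"
        r"\b(ct|mri|imaging|ultrasound|radiograph)\b",
        re.IGNORECASE | re.DOTALL)

    reports = [
        "4 mm lung nodule. Recommend follow-up chest CT in 6 months.",
        "No acute cardiopulmonary process.",
    ]
    for r in reports:
        print(bool(RFAI.search(r)), "-", r[:50])
    # prints True for the first report and False for the second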
What Box: A task for assessing language lateralization in young children.
Badcock, Nicholas A; Spooner, Rachael; Hofmann, Jessica; Flitton, Atlanta; Elliott, Scott; Kurylowicz, Lisa; Lavrencic, Louise M; Payne, Heather M; Holt, Georgina K; Holden, Anneka; Churches, Owen F; Kohler, Mark J; Keage, Hannah A D
2018-07-01
The assessment of active language lateralization in infants and toddlers is challenging. It requires an imaging tool that is unintimidating, quick to set up, and robust to movement, in addition to an engaging and cognitively simple language processing task. Functional Transcranial Doppler Ultrasound (fTCD) offers a suitable technique, and here we report on a suitable method to elicit active language production in young children. The 34-second "What Box" trial presents an animated face "searching" for an object. The face "finds" a box that opens to reveal a to-be-labelled object. In a sample of 95 children (1 to 5 years of age), 81% completed the task, and 32% completed ≥10 trials. The task was validated (ρ = 0.4) against the gold-standard Word Generation task in a group of older adults (n = 65, 60-85 years of age), though it was less likely to categorize lateralization as left or right, indicative of greater measurement variability. Existing methods for active language production have been used with 2-year-old children, while passive listening has been conducted with sleeping 6-month-olds. This is the first active method to be successfully employed with infants through to pre-schoolers, forming a useful tool for populations in which complex instructions are problematic.
Improving the Plasticity of LIMS Implementation: LIMS Extension through Microsoft Excel
NASA Technical Reports Server (NTRS)
Culver, Mark
2017-01-01
A Laboratory Information Management System (LIMS) is database software with many built-in tools ideal for handling and documenting most laboratory processes in an accurate and consistent manner, making it an indispensable tool for the modern laboratory. However, many LIMS end users will find that, for analyses with unique considerations such as standard curves, multi-stage incubations, or logical considerations, a base LIMS distribution may not ideally suit their needs. These considerations bring about the need for extension languages, which can extend the functionality of a LIMS. While these languages do provide the implementation team the functionality required to accommodate these special laboratory analyses, they are usually too complex for the end user to modify to compensate for natural changes in laboratory operations. The LIMS utilized by our laboratory offers a unique and easy-to-use choice for an extension language, one that is already heavily relied upon not only in science but also in most academic and business pursuits: Microsoft Excel. The validity of Microsoft Excel as a pseudo programming language and its usability and versatility as a LIMS extension language will be discussed. The NELAC implications and overall drawbacks of this LIMS configuration will also be discussed.
Carter, Julie Anne; Murira, Grace; Gona, Joseph; Tumaini, Judy; Lees, Janet; Neville, Brian George; Newton, Charles Richard
2013-01-01
This study sought to adapt a battery of Western speech and language assessment tools to a rural Kenyan setting. The tool was developed for children whose first language was KiGiryama, a Bantu language. A total of 539 Kenyan children (271 males, 268 females; 100% KiGiryama) participated. Data were collected from 303 children admitted to hospital with severe malaria and 206 age-matched children recruited from the village communities. The language assessments were based upon the Content, Form and Use (C/F/U) model. The assessment was based upon adapted versions of the Peabody Picture Vocabulary Test, Test for the Reception of Grammar, Renfrew Action Picture Test, Pragmatics Profile of Everyday Communication Skills in Children, Test of Word Finding, and language-specific tests of lexical semantics and higher-level language. Preliminary measures of construct validity suggested that the theoretical assumptions behind the construction of the assessments were appropriate, and re-test and inter-rater reliability scores were acceptable. These findings illustrate the potential to adapt Western speech and language assessments to other languages and settings, particularly those in which there is a paucity of standardised tools. PMID:24294109
ERIC Educational Resources Information Center
Kidd, Ross; Byram, Martin
Popular theatre that speaks to the common man in his own language and deals with directly relevant problems can be an effective adult education tool in the process Paulo Freire calls conscientization--a process aiming to radically transform social reality and improve people's lives. It can also serve as a medium for participatory research. Popular…
ERIC Educational Resources Information Center
Wolff, William I.
2008-01-01
How do we know when an educational organization, process, or courseware tool is "innovative"? How do we define the processes that encourage change or the ways in which faculty "develop" new courseware "innovations"? The terms "innovation", "change", and "development" have been overused in so many contexts that they now seem to have lost their…
Raster Metafile And Raster Metafile Translator Programs
NASA Technical Reports Server (NTRS)
Randall, Donald P.; Gates, Raymond L.; Skeens, Kristi M.
1994-01-01
Raster Metafile (RM) computer program is generic raster-image-format program, and Raster Metafile Translator (RMT) program is assortment of software tools for processing images prepared in this format. Processing includes reading, writing, and displaying RM images. Such other image-manipulation features as minimal compositing operator and resizing option available under RMT command structure. RMT written in FORTRAN 77 and C language.
Individual Differences in the Real-Time Comprehension of Children with ASD
Venker, Courtney E.; Eernisse, Elizabeth R.; Saffran, Jenny R.; Weismer, Susan Ellis
2013-01-01
Lay Abstract Spoken language processing is related to language and cognitive skills in typically developing children, but very little is known about how children with autism spectrum disorders (ASD) comprehend words in real time. Studying this area is important because it may help us understand why many children with autism have delayed language comprehension. Thirty-four children with ASD (3–6 years old) participated in this study. They took part in a language comprehension task that involved looking at pictures on a screen and listening to questions about familiar nouns (e.g., Where’s the shoe?). Children as a group understood the familiar words, but accuracy and processing speed varied considerably across children. The children who were more accurate were also faster to process the familiar words. Children’s language processing accuracy was related to processing speed and language comprehension on a standardized test; nonverbal cognition did not explain additional information after accounting for these factors. Additionally, lexical processing accuracy at age 5½ was related to children’s vocabulary comprehension three years earlier, at age 2½. Autism severity and years of maternal education were unrelated to language processing. Words typically acquired earlier in life were processed more quickly than words acquired later. These findings point to similarities in patterns of language development in typically developing children and children with ASD. Studying real-time comprehension in children with ASD may help us better understand mechanisms of language comprehension in this population. Future work may help explain why some children with ASD develop age-appropriate language skills, whereas others experience lasting deficits. Scientific Abstract Many children with autism spectrum disorders (ASD) demonstrate deficits in language comprehension, but little is known about how they process spoken language as it unfolds. Real-time lexical comprehension is associated with language and cognition in children without ASD, suggesting that this may also be the case for children with ASD. This study adopted an individual differences approach to characterizing real-time comprehension of familiar words in a group of 34 three- to six-year-olds with ASD. The looking-while-listening paradigm was employed; it measures online accuracy and latency through language-mediated eye movements and has limited task demands. On average, children demonstrated comprehension of the familiar words, but considerable variability emerged. Children with better accuracy were faster to process the familiar words. In combination, processing speed and comprehension on a standardized language assessment explained 63% of the variance in online accuracy. Online accuracy was not correlated with autism severity or maternal education, and nonverbal cognition did not explain unique variance. Notably, online accuracy at age 5½ was related to vocabulary comprehension three years earlier. The words typically learned earliest in life were processed most quickly. Consistent with a dimensional view of language abilities, these findings point to similarities in patterns of language acquisition in typically developing children and those with ASD. Overall, our results emphasize the value of examining individual differences in real-time language comprehension in this population. We propose that the looking-while-listening paradigm is a sensitive and valuable methodological tool that can be applied across many areas of autism research. 
PMID:23696214
ERIC Educational Resources Information Center
Boerma, Tessel; Chiat, Shula; Leseman, Paul; Timmermeister, Mona; Wijnen, Frank; Blom, Elma
2015-01-01
Purpose: This study evaluated a newly developed quasi-universal nonword repetition task (Q-U NWRT) as a diagnostic tool for bilingual children with language impairment (LI) who have Dutch as a 2nd language. The Q-U NWRT was designed to be minimally influenced by knowledge of 1 specific language in contrast to a language-specific NWRT with which it…
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
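The workflow itself is implemented in SBOLDesigner, the Virtual Parts Repository API, and iBioSim; as a rough Python sketch of the two endpoints of that pipeline, the snippet below fetches a part's SBOL description from SynBioHub over HTTP and emits a skeletal SBML model with libsbml. The URL pattern and part name are assumptions, and the enrichment step performed by the Virtual Parts Repository is not reproduced.

    import requests
    import libsbml

    # Fetch SBOL for a (hypothetical) part from the SynBioHub design repository.
    part_uri = "https://synbiohub.org/public/igem/BBa_B0034/1"
    sbol_xml = requests.get(part_uri + "/sbol", timeout=30).text

    # Build a skeletal SBML model; a real workflow would translate the SBOL
    # structure (promoters, CDSs, interactions) into species and reactions.
    doc = libsbml.SBMLDocument(3, 1)
    model = doc.createModel()
    comp = model.createCompartment()
    comp.setId("cell"); comp.setConstant(True); comp.setSize(1.0)
    species = model.createSpecies()
    species.setId("BBa_B0034_product"); species.setCompartment("cell")
    species.setInitialAmount(0.0); species.setConstant(False)
    species.setBoundaryCondition(False); species.setHasOnlySubstanceUnits(False)
    print(libsbml.writeSBMLToString(doc)[:200])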
Design and Implementation of a Tool for Teaching Programming.
ERIC Educational Resources Information Center
Goktepe, Mesut; And Others
1989-01-01
Discussion of the use of computers in education focuses on a graphics-based system for teaching the Pascal programing language for problem solving. Topics discussed include user interface; notification based systems; communication processes; object oriented programing; workstations; graphics architecture; and flowcharts. (18 references) (LRW)
Applying Modeling Tools to Ground System Procedures
NASA Technical Reports Server (NTRS)
Di Pasquale, Peter
2012-01-01
As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.
Clinical nursing informatics. Developing tools for knowledge workers.
Ozbolt, J G; Graves, J R
1993-06-01
Current research in clinical nursing informatics is proceeding along three important dimensions: (1) identifying and defining nursing's language and structuring its data; (2) understanding clinical judgment and how computer-based systems can facilitate and not replace it; and (3) discovering how well-designed systems can transform nursing practice. A number of efforts are underway to find and use language that accurately represents nursing and that can be incorporated into computer-based information systems. These efforts add to understanding nursing problems, interventions, and outcomes, and provide the elements for databases from which nursing's costs and effectiveness can be studied. Research on clinical judgment focuses on how nurses (perhaps with different levels of expertise) assess patient needs, set goals, and plan and deliver care, as well as how computer-based systems can be developed to aid these cognitive processes. Finally, investigators are studying not only how computers can help nurses with the mechanics and logistics of processing information but also and more importantly how access to informatics tools changes nursing care.
Joiner, Kevin L; Sternberg, Rosa Maria; Kennedy, Christine; Chen, Jyu-Lin; Fukuoka, Yoshimi; Janson, Susan L
2016-12-01
Create a Spanish-language version of the Risk Perception Survey for Developing Diabetes (RPS-DD) and assess its psychometric properties. The Spanish-language version was created through translation, harmonization, and presentation to the tool's original author. It was field tested in a foreign-born Latino sample, and its properties were evaluated in principal components analysis. Personal Control, Optimistic Bias, and Worry multi-item Likert subscale responses did not cluster together. A clean solution was obtained after removing two Personal Control subscale items. Neither the Personal Disease Risk scale nor the Environmental Health Risk scale responses loaded onto single factors. Reliabilities ranged from .54 to .88. Performance on the test of knowledge varied by item. This study contributes evidence of the validation of a Spanish-language RPS-DD in foreign-born Latinos.
ERIC Educational Resources Information Center
Musk, Nigel
2014-01-01
The integration of translation tools into the Google search engine has led to a huge increase in the visibility and accessibility of such tools, with potentially far-reaching implications for the English language classroom. Although these translation tools are the focus of this study, using them is in fact only one way in which English language…
ERIC Educational Resources Information Center
Bunting, John David
2013-01-01
Despite claims that the use of corpus tools can have a major impact in language classrooms (e.g., Conrad, 2000, 2004; Davies, 2004; O'Keefe, McCarthy, & Carter, 2007; Sinclair, 2004b; Tsui, 2004), many language teachers express apparent apathy or even resistance towards adding corpus tools to their repertoire (Cortes, 2013b). This study…
A mask quality control tool for the OSIRIS multi-object spectrograph
NASA Astrophysics Data System (ADS)
López-Ruiz, J. C.; Vaz Cedillo, Jacinto Javier; Ederoclite, Alessandro; Bongiovanni, Ángel; González Escalera, Víctor
2012-09-01
The OSIRIS multi-object spectrograph uses a set of user-customised masks, which are manufactured on demand. The manufacturing process consists of drilling the specified slits on the mask with the required accuracy. Ensuring that slits are in the right place when observing is of vital importance. We present a tool for checking the quality of the mask manufacturing process, based on analyzing the instrument images obtained with the manufactured masks in place. The tool extracts the slit information from these images, relates specifications to the extracted slit information, and finally reports to the operator whether the manufactured mask fulfills the expectations of the mask designer. The proposed tool has been built using scripting languages and standard libraries such as opencv, pyraf and scipy. The software architecture, advantages and limits of this tool in the lifecycle of a multi-object acquisition are presented.
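A minimal sketch of the image-analysis step, assuming bright slits on a dark background: threshold the instrument image, find slit contours with OpenCV, and compare their bounding boxes against the design specification. The tolerances, expected positions, and file name are invented; the actual tool's checks are not reproduced.

    import cv2

    # Hypothetical design specification: expected (x, y, w, h) of each slit.
    SPEC = [(120, 80, 40, 6), (120, 160, 40, 6)]
    TOL = 3  # allowed deviation in pixels (illustrative)

    img = cv2.imread("mask_through_instrument.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    found = sorted(cv2.boundingRect(c) for c in contours)

    for expected, measured in zip(sorted(SPEC), found):
        ok = all(abs(e - m) <= TOL for e, m in zip(expected, measured))
        print("OK" if ok else "OUT OF TOLERANCE", expected, measured)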
SLIPTA e-Tool improves laboratory audit process in Vietnam and Cambodia.
Nguyen, Thuong T; McKinney, Barbara; Pierson, Antoine; Luong, Khue N; Hoang, Quynh T; Meharwal, Sandeep; Carvalho, Humberto M; Nguyen, Cuong Q; Nguyen, Kim T; Bond, Kyle B
2014-01-01
The Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) checklist is used worldwide to drive quality improvement in laboratories in developing countries and to assess the effectiveness of interventions such as the Strengthening Laboratory Management Toward Accreditation (SLMTA) programme. However, the paper-based format of the checklist makes administration cumbersome and limits timely analysis and communication of results. In early 2012, the SLMTA team in Vietnam developed an electronic SLIPTA checklist tool. The e-Tool was pilot tested in Vietnam in mid-2012 and revised. It was used during SLMTA implementation in Vietnam and Cambodia in 2012 and 2013 and further revised based on auditors' feedback about usability. The SLIPTA e-Tool enabled rapid turn-around of audit results, reduced workload and language barriers and facilitated analysis of national results. Benefits of the e-Tool will be magnified with in-country scale-up of laboratory quality improvement efforts and potential expansion to other countries.
Podcasting: a new tool for student retention?
Greenfield, Sue
2011-02-01
Emerging mobile technologies offer nursing faculty a broader armamentarium with which to support traditionally at-risk students. Podcasting, a type of mobile learning, uses technology that allows students to access and listen to recorded classroom audio files from a computer, MP3 player, or iPod. Podcasting also offers particular promise for non-native English speakers. This article describes how podcasting was used to offer academic support to students in a medical-surgical nursing course and to report the postimplementation test grade improvement among English as a second language nursing students. This article also discusses tips for implementing podcasting within the educational arena. Developing innovative ways to improve student retention is an ongoing process. Podcasting is one tool that should be considered for English as a second language nursing students. Copyright 2011, SLACK Incorporated.
The semantic web and computer vision: old AI meets new AI
NASA Astrophysics Data System (ADS)
Mundy, J. L.; Dong, Y.; Gilliam, A.; Wagner, R.
2018-04-01
There has been vast progress in linking semantic information across the billions of web pages through the use of ontologies encoded in the Web Ontology Language (OWL) based on the Resource Description Framework (RDF). A prime example is Wikipedia, where the knowledge contained in its more than four million pages is encoded in an ontological database called DBPedia (http://wiki.dbpedia.org/). Web-based query tools can retrieve semantic information from DBPedia encoded in interlinked ontologies that can be accessed using natural language. This paper will show how this vast context can be used to automate the process of querying images and other geospatial data in support of reporting changes in structures and activities. Computer vision algorithms are selected and provided with context based on natural language requests for monitoring and analysis. The resulting reports provide semantically linked observations from images and 3D surface models.
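As a minimal sketch of the kind of DBPedia query such tools rely on, the snippet below uses the SPARQLWrapper library against the public DBpedia endpoint; the specific query is illustrative, not one from the paper.

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?bridge ?length WHERE {
            ?bridge a dbo:Bridge ;
                    dbo:length ?length .
        } LIMIT 5
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["bridge"]["value"], row["length"]["value"])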
ERIC Educational Resources Information Center
Christie, Colin
2016-01-01
This article reports on the findings of a study into the conditions which promote spontaneous learner talk in the target language in the modern foreign languages (MFL) classroom. A qualitative case study approach was adopted. French lessons, with school students aged 11-16 years old, were observed and analysed with the aim of identifying tools and…
ERIC Educational Resources Information Center
de Ramirez, Lori Langer
2013-01-01
Webtools provide language students a uniquely authentic audience with which to share their creativity and growing proficiency in the target language. Students tend to write/speak more--and better--when using these tools in the language classroom. Webtools form an enjoyable and pedagogically sound way of getting students to create and have fun with…
ERIC Educational Resources Information Center
Darot, Mireille
1983-01-01
The usefulness of classifications within and comparisons among languages as a means of discovering the commonalities of human language is discussed. Metalinguistics offers not only the potential for analyzing the specifics of each language, but also the tools for teaching across languages. (MSE)
Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing
da Rocha, Armando Freitas; Foz, Flávia Benevides; Pereira, Alfredo
2015-01-01
Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism, which proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers were listening to or reading small texts and had to select pictures that translate the meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (s_i) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(e_i) provided by each electrode of the 10/20 system about the identified s_i. Principal Component Analysis (PCA) of H(e_i) was used to study the temporal and spatial activation of these sources s_i. This analysis evidenced 4 different patterns of H(e_i) covariation that are generated by neurons located at different cortical locations. These results show that the distributed character of language processing is clearly evidenced by combining available EEG technologies. PMID:26713089
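A minimal sketch of the analysis style described, channel-by-channel correlation followed by PCA to find covarying patterns across electrodes. The random data below stands in for real EEG; the channel count matches the 10/20 system, but the component count and induced correlation are illustrative.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    # Fake 'EEG': 19 channels (10/20 system) x 2000 samples.
    eeg = rng.standard_normal((19, 2000))
    eeg[4] = 0.7 * eeg[2] + 0.3 * eeg[4]   # induce correlated activity

    corr = np.corrcoef(eeg)                # channel-by-channel correlation
    pca = PCA(n_components=4).fit(eeg.T)   # 4 covariation patterns, cf. the study
    print(pca.explained_variance_ratio_)   # share of variance per pattern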
Formian 2 and a Formian Function for Processing Polyhedric Configurations
NASA Technical Reports Server (NTRS)
Nooshin, H.; Disney, P. L.; Champion, O. C.
1996-01-01
The work began in October 1994 with the following objectives: (1) to produce an improved version of the programming language Formian; and (2) to create a means for computer aided handling of polyhedric configurations including the geodesic forms of all kinds. A new version of Formian, referred to as Formian 2, is being implemented to operate in the Windows 95 environment. It is an ideal tool for configuration management in a convenient and user-friendly manner. The second objective was achieved by creating a standard Formian function that allows convenient handling of all types of polyhedric configurations. In particular, the focus of attention is on polyhedric configurations that are of importance in architectural and structural engineering fields. The natural medium for processing of polyhedric configurations is a programming language that incorporates the concepts of 'formex algebra'. Formian is such a programming language in which the processing of polyhedric configurations can be carried out using the standard elements of the language. A description of this function is included in a chapter for a book entitled 'Beyond the Cube: the Architecture of space Frames and Polyhedra'. A copy of this chapter is appended.
Global Situational Awareness with Free Tools
2015-01-15
Slide fragments from this presentation list data sources for situational awareness, including Snort (via Snorby on Security Onion), Nagios, SharePoint RSS, and network flow records, and note the use of standard data formats such as Keyhole Markup Language.
Nursing informatics, outcomes, and quality improvement.
Charters, Kathleen G
2003-08-01
Nursing informatics actively supports nursing by providing standard language systems, databases, decision support, readily accessible research results, and technology assessments. Through normalized datasets spanning an entire enterprise or other large demographic, nursing informatics tools support improvement of healthcare by answering questions about patient outcomes and quality improvement on an enterprise scale, and by providing documentation for business process definition, business process engineering, and strategic planning. Nursing informatics tools provide a way for advanced practice nurses to examine their practice and the effect of their actions on patient outcomes. Analysis of patient outcomes may lead to initiatives for quality improvement. Supported by nursing informatics tools, successful advance practice nurses leverage their quality improvement initiatives against the enterprise strategic plan to gain leadership support and resources.
Words as cultivators of others minds
Schilhab, Theresa S. S.
2015-01-01
The embodied–grounded view of cognition and language holds that sensorimotor experiences in the form of ‘re-enactments’ or ‘simulations’ are significant to the individual’s development of concepts and competent language use. However, a typical objection to the explanatory force of this view is that, in everyday life, we engage in linguistic exchanges about much more than might be directly accessible to our senses. For instance, when knowledge-sharing occurs as part of deep conversations between a teacher and student, language is the salient tool by which to obtain understanding, through the unfolding of explanations. Here, the acquisition of knowledge is realized through language, and the constitution of knowledge seems entirely linguistic. In this paper, based on a review of selected studies within contemporary embodied cognitive science, I propose that such linguistic exchanges, though occurring independently of direct experience, are in fact disguised forms of embodied cognition, leading to the reconciliation of the opposing views. I suggest that, in conversation, interlocutors use Words as Cultivators (WAC) of other minds as a direct result of their embodied–grounded origin, rendering WAC a radical interpretation of the Words as social Tools (WAT) proposal. The WAC hypothesis endorses the view of language as dynamic, continuously integrating with, and negotiating, cognitive processes in the individual. One such dynamic feature results from the ‘linguification process’, a term by which I refer to the socially produced mapping of a word to its referent which, mediated by the interlocutor, turns words into cultivators of others minds. In support of the linguification process hypothesis and WAC, I review relevant embodied–grounded research, and selected studies of instructed fear conditioning and guided imagery. PMID:26594187
The efficiency of geophysical adjoint codes generated by automatic differentiation tools
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Köhl, A.; Stammer, D.
2016-02-01
The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
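The abstract contrasts source-transformation and operator-overloading AD tools; as a self-contained illustration of the operator-overloading idea, here is a tiny forward-mode AD class built on dual numbers, checked against a finite difference. This is a pedagogical sketch, not how Open_AD, Tapenade, NAGWare, or TAF work internally (and adjoint codes use reverse mode rather than the forward mode shown here).

    import math

    class Dual:
        """Dual number a + b*eps with eps**2 = 0; b carries the derivative."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.val * o.dot + self.dot * o.val)
        __rmul__ = __mul__

    def sin(x):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

    f = lambda x: sin(x * x) + 3 * x        # f(x) = sin(x^2) + 3x
    x0, h = 1.3, 1e-6
    exact = f(Dual(x0, 1.0)).dot            # forward-mode derivative
    fd = (f(Dual(x0 + h)).val - f(Dual(x0)).val) / h
    print(exact, fd)                        # the two values agree to ~1e-6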
Cognitive Tools for Language Pedagogy.
ERIC Educational Resources Information Center
Schoelles, Michael; Hamburger, Henry
1996-01-01
Discusses the integration of Fluent 2, a two-medium immersive conversational language learning environment, into the pedagogical environment. The article presents a strategy to provide teachers and other designers of language lessons with tools that will enable them to produce lessons they consider appropriate. (seven references) (Author/CK)
Biomedical information retrieval across languages.
Daumke, Philipp; Markó, Kornél; Poprat, Michael; Schulz, Stefan; Klar, Rüdiger
2007-06-01
This work presents a new dictionary-based approach to biomedical cross-language information retrieval (CLIR) that addresses many of the general and domain-specific challenges in current CLIR research. Our method is based on a multilingual lexicon that was generated partly manually and partly automatically, and currently covers six European languages. It contains morphologically meaningful word fragments, termed subwords. Using subwords instead of entire words significantly reduces the number of lexical entries necessary to sufficiently cover a specific language and domain. Mediation between queries and documents is based on these subwords as well as on lists of word-n-grams that are generated from large monolingual corpora and constitute possible translation units. The translations are then sent to a standard Internet search engine. This process makes our approach an effective tool for searching the biomedical content of the World Wide Web in different languages. We evaluate this approach using the OHSUMED corpus, a large medical document collection, within a cross-language retrieval setting.
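A minimal sketch of the subword idea, assuming a toy multilingual lexicon: decompose words into morphologically meaningful fragments by greedy longest match and map each fragment to a language-independent identifier, so queries in different languages meet in the same subword space. The lexicon entries and concept codes are invented, not the paper's actual resources.

    # Toy subword lexicon: surface fragment -> interlingual concept code.
    LEXICON = {
        "gastr": "C:STOMACH", "hepat": "C:LIVER", "itis": "C:INFLAMMATION",
        "magen": "C:STOMACH", "entzuend": "C:INFLAMMATION", "o": None,
    }

    def to_subwords(word):
        """Greedy longest-match decomposition into lexicon fragments."""
        codes, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):
                code = LEXICON.get(word[i:j], "MISS")
                if code != "MISS":
                    if code: codes.append(code)
                    i = j
                    break
            else:
                i += 1  # skip characters not covered by the lexicon
        return codes

    print(to_subwords("gastritis"))         # ['C:STOMACH', 'C:INFLAMMATION']
    print(to_subwords("magenentzuendung"))  # German: the same concept codes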
ERIC Educational Resources Information Center
Aydin, Belgin; Unver, Meral Melek; Alan, Bülent; Saglam, Sercan
2017-01-01
This paper explains the process of designing a curriculum based on the Taba Model and the Global Scale of English (GSE) in an intensive language education program. The Taba Model emphasizing the involvement of the teachers and the learners in the curriculum development process was combined with the GSE, a psychometric tool measuring language…
Leveraging workflow control patterns in the domain of clinical practice guidelines.
Kaiser, Katharina; Marcos, Mar
2016-02-10
Clinical practice guidelines (CPGs) include recommendations describing appropriate care for the management of patients with a specific clinical condition. A number of representation languages have been developed to support executable CPGs, with associated authoring/editing tools. Even with tool assistance, authoring of CPG models is a labor-intensive task. We aim at facilitating the early stages of CPG modeling task. In this context, we propose to support the authoring of CPG models based on a set of suitable procedural patterns described in an implementation-independent notation that can be then semi-automatically transformed into one of the alternative executable CPG languages. We have started with the workflow control patterns which have been identified in the fields of workflow systems and business process management. We have analyzed the suitability of these patterns by means of a qualitative analysis of CPG texts. Following our analysis we have implemented a selection of workflow patterns in the Asbru and PROforma CPG languages. As implementation-independent notation for the description of patterns we have chosen BPMN 2.0. Finally, we have developed XSLT transformations to convert the BPMN 2.0 version of the patterns into the Asbru and PROforma languages. We showed that although a significant number of workflow control patterns are suitable to describe CPG procedural knowledge, not all of them are applicable in the context of CPGs due to their focus on single-patient care. Moreover, CPGs may require additional patterns not included in the set of workflow control patterns. We also showed that nearly all the CPG-suitable patterns can be conveniently implemented in the Asbru and PROforma languages. Finally, we demonstrated that individual patterns can be semi-automatically transformed from a process specification in BPMN 2.0 to executable implementations in these languages. We propose a pattern and transformation-based approach for the development of CPG models. Such an approach can form the basis of a valid framework for the authoring of CPG models. The identification of adequate patterns and the implementation of transformations to convert patterns from a process specification into different executable implementations are the first necessary steps for our approach.
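The authors report XSLT transformations from BPMN 2.0 pattern descriptions to Asbru and PROforma; as a rough sketch of how such a transformation is applied in Python, the lxml library can run an XSLT stylesheet over a BPMN document. The file names here are hypothetical, and the authors' actual stylesheets are not reproduced.

    from lxml import etree

    # Hypothetical inputs: a BPMN 2.0 pattern and an XSLT mapping it to Asbru.
    bpmn = etree.parse("sequence_pattern.bpmn")
    transform = etree.XSLT(etree.parse("bpmn2asbru.xsl"))

    asbru = transform(bpmn)   # apply the stylesheet
    print(str(asbru)[:300])   # serialized Asbru fragment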
Language promotion for educational purposes: The example of Tanzania
NASA Astrophysics Data System (ADS)
Rubagumya, Casmir M.
1991-03-01
Kiswahili is one of the most widely used languages in East and Central Africa. In Tanzania, where it is the national language, attempts have been made to develop it so that it can be used as an efficient tool of communication in all sectors of the society, including education. This paper shows that although Kiswahili has successfully been promoted as the medium of primary and adult education, at secondary and tertiary levels of education, its position is still precarious. The notion that English and Kiswahili are in complementary distribution is rejected. It is argued that the two languages are in conflict, and that those who are in a better socio-political/economic position have more control of, and better access to, English. In such a situation the right question to ask is not in which domains English is used, but why it is used in such domains and who uses it. The paper further argues that the present sociolinguistic environment makes the use of English as a viable medium unsustainable. For this reason, insistence on the use of English adversely affects the learning process. It is suggested that if Kiswahili became the medium of education at secondary school level and English was taught well as a foreign language, this would help to promote both languages without jeopardising the learning process.
An overview of the CellML API and its implementation.
Miller, Andrew K; Marsh, Justin; Reeve, Adam; Garny, Alan; Britten, Randall; Halstead, Matt; Cooper, Jonathan; Nickerson, David P; Nielsen, Poul F
2010-04-08
Background: CellML is an XML-based language for representing mathematical models in a machine-independent form which is suitable for their exchange between different authors and for archival in a model repository. Allowing for the exchange and archival of models in a computer-readable form is a key strategic goal in bioinformatics, because of the associated improvements in scientific record accuracy, the faster iterative process of scientific development, and the ability to combine models into large integrative models. However, for CellML models to be useful, tools which can process them correctly are needed. Due to some of the more complex features present in CellML models, such as imports, developing code ab initio to correctly process models can be an onerous task. For this reason, there is a clear and pressing need for an application programming interface (API), and a good implementation of that API, upon which tools can base their support for CellML. Results: We developed an API which allows the information in CellML models to be retrieved and/or modified. We also developed a series of optional extension APIs, for tasks such as simplifying the handling of connections between variables, dealing with physical units, validating models, and translating models into different procedural languages. We have also provided a Free/Open Source implementation of this application programming interface, optimised to achieve good performance. Conclusions: Tools have been developed using the API which are mature enough for widespread use. The API has the potential to accelerate the development of additional tools capable of processing CellML, and ultimately lead to an increased level of sharing of mathematical model descriptions. PMID:20377909
Language deficits in Pre-Symptomatic Huntington's Disease: Evidence from Hungarian
Németh, Dezső; Dye, Cristina D.; Sefcsik, Tamás; Janacsek, Karolina; Turi, Zsolt; Londe, Zsuzsa; Klivényi, Péter; Kincses, Tamás Zs.; Szabó, Nikoletta; Vécsei, László; Ullman, Michael T.
2012-01-01
A limited number of studies have investigated language in Huntington's disease (HD). These have generally reported abnormalities in rule-governed (grammatical) aspects of language, in both syntax and morphology. Several studies of verbal inflectional morphology in English and French have reported evidence of over-active rule processing, such as over-suffixation errors (e.g., walkeded) and over-regularizations (e.g., digged). Here we extend the investigation to noun inflection in Hungarian, a Finno-Ugric agglutinative language with complex morphology, and to genetically proven pre-symptomatic Huntington's disease (pre-HD). Although individuals with pre-HD have no clinical, motor or cognitive symptoms, the underlying pathology may already have begun, and thus sensitive behavioral measures might reveal already-present impairments. Indeed, in a Hungarian morphology production task, pre-HD patients made both over-suffixation and over-regularization errors. The findings suggest the generality of over-active rule processing in both HD and pre-HD, across languages from different families with different morphological systems, and for both verbal and noun inflection. Because the neuropathology in pre-HD appears to be largely restricted to the caudate nucleus and related structures, the findings further implicate these structures in language, and in rule-processing in particular. Finally, the need for effective treatments in HD, which will likely depend in part on the ability to sensitively measure early changes in the disease, suggests the possibility that inflectional morphology, and perhaps other language measures, may provide useful diagnostic, tracking, and therapeutic tools for assessing and treating early degeneration in pre-HD and HD. PMID:22538085
Kidwatching: A Vygotskyan Approach to Children's Language In the "Star Wars" Age.
ERIC Educational Resources Information Center
Monroe, Suzanne S.
A Vygotskyan review of children's language examines language samples of a 7-year-old boy at home, at a birthday party, and at play in a sandbox. The language samples indicate common patterns, including his use of tools and symbol together in play. A common thread in the samples is his involvement with high tech tools of futuristic toys. Vygotsky…
ERIC Educational Resources Information Center
Santana-Paixao, Raquel C.
2017-01-01
Oral testing administration plays a significant role in foreign language programs aiming to foster the development of students' speaking abilities. With the development of language teaching software, the use of computer based recording tools are becoming increasingly used in language courses as an alternative to traditional face-to-face oral…
ERIC Educational Resources Information Center
Sharp, Kathryn M; Gathercole, Virginia C. Mueller
2013-01-01
In recent years, there has been growing recognition of a need for a general, non-language-specific assessment tool that could be used to evaluate general speech and language abilities in children, especially to assist in identifying atypical development in bilingual children who speak a language unfamiliar to the assessor. It has been suggested…
Indexing Anatomical Phrases in Neuro-Radiology Reports to the UMLS 2005AA
Bashyam, Vijayaraghavan; Taira, Ricky K.
2005-01-01
This work describes a methodology to index anatomical phrases to the 2005AA release of the Unified Medical Language System (UMLS). A phrase chunking tool based on Natural Language Processing (NLP) was developed to identify semantically coherent phrases within medical reports. Using this phrase chunker, a set of 2,551 unique anatomical phrases was extracted from brain radiology reports. These phrases were mapped to the 2005AA release of the UMLS using a vector space model. Precision for the task of indexing unique phrases was 0.87. PMID:16778995
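A minimal sketch of the vector-space mapping step, assuming a toy set of UMLS-like concept names: represent candidate phrases and concept strings as TF-IDF vectors and pick the concept with the highest cosine similarity. The concept strings and identifiers are invented; real UMLS indexing must handle far more lexical variation.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    concepts = {
        "C0228174": "frontal lobe",
        "C0024109": "lung",
        "C0018670": "head",
    }
    phrases = ["left frontal lobes", "the lungs"]

    # Character n-grams tolerate plural and inflectional variation.
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
    C = vec.fit_transform(concepts.values())
    P = vec.transform(phrases)

    best = cosine_similarity(P, C).argmax(axis=1)
    for phrase, idx in zip(phrases, best):
        print(phrase, "->", list(concepts)[idx])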
Cleft audit protocol for speech (CAPS-A): a comprehensive training package for speech analysis.
Sell, D; John, A; Harding-Bell, A; Sweeney, T; Hegarty, F; Freeman, J
2009-01-01
The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been paid to this issue. To design, execute, and evaluate a training programme for speech and language therapists on the systematic and reliable use of the Cleft Audit Protocol for Speech-Augmented (CAPS-A), addressing issues of standardized speech samples, data acquisition, recording, playback, and listening guidelines. Thirty-six specialist speech and language therapists undertook the training programme over four days. This consisted of two days' training on the CAPS-A tool followed by a third day, making independent ratings and transcriptions on ten new cases which had been previously recorded during routine audit data collection. This task was repeated on day 4, a minimum of one month later. Ratings were made using the CAPS-A record form with the CAPS-A definition table. An analysis was made of the speech and language therapists' CAPS-A ratings at occasion 1 and occasion 2 and the intra- and inter-rater reliability calculated. Trained therapists showed consistency in individual judgements on specific sections of the tool. Intraclass correlation coefficients were calculated for each section with good agreement on eight of 13 sections. There were only fair levels of agreement on anterior oral cleft speech characteristics, non-cleft errors/immaturities and voice. This was explained, at least in part, by their low prevalence which affects the calculation of the intraclass correlation coefficient statistic. Speech and language therapists benefited from training on the CAPS-A, focusing on specific aspects of speech using definitions of parameters and scalar points, in order to apply the tool systematically and reliably. Ratings are enhanced by ensuring a high degree of attention to the nature of the data, standardizing the speech sample, data acquisition, the listening process together with the use of high-quality recording and playback equipment. In addition, a method is proposed for maintaining listening skills following training as part of an individual's continuing education.
Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language
NASA Astrophysics Data System (ADS)
Heaphy, R. T.; Burke, M. P.; Love, J. T.
2015-12-01
Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limited ability to integrate with modern software and hardware and to leverage parallel computing, which has left voids in optimization, pre-, and post-processing tools. Advances in technology and our scientific understanding of environmental processes that have occurred over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages, such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times when using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, is currently underway, and preliminary results will be presented.
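A minimal sketch in the spirit of the conversion, assuming a toy linear-reservoir runoff loop: Numba's just-in-time compiler removes the interpreter overhead of the time-stepping loop, and h5py stores the result in HDF5. The hydrology here is an illustrative stand-in, not HSPF's actual PERLND/IMPLND routines.

    import numpy as np
    import h5py
    from numba import njit

    @njit
    def linear_reservoir(precip, k):
        """Toy storage-discharge loop; compiled to machine code by Numba."""
        storage, outflow = 0.0, np.empty(precip.size)
        for t in range(precip.size):
            storage += precip[t]
            outflow[t] = k * storage
            storage -= outflow[t]
        return outflow

    precip = np.random.default_rng(2).exponential(1.0, 10_000)
    q = linear_reservoir(precip, 0.05)

    # HDF5 replaces HSPF's legacy binary time-series storage.
    with h5py.File("results.h5", "w") as f:
        f.create_dataset("TIMESERIES/outflow", data=q)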
Language at Three Timescales: The Role of Real-Time Processes in Language Development and Evolution.
McMurray, Bob
2016-04-01
Evolutionary developmental systems (evo-devo) theory stresses that selection pressures operate on entire developmental systems rather than just genes. This study extends this approach to language evolution, arguing that selection pressure may operate on two quasi-independent timescales. First, children clearly must acquire language successfully (as acknowledged in traditional evo-devo accounts) and evolution must equip them with the tools to do so. Second, while this is developing, they must also communicate with others in the moment using partially developed knowledge. These pressures may require different solutions, and their combination may underlie the evolution of complex mechanisms for language development and processing. I present two case studies to illustrate how the demands of both real-time communication and language acquisition may be subtly different (and interact). The first case study examines infant-directed speech (IDS). A recent view is that IDS underwent cultural adaptation to the statistical learning mechanisms that infants use to acquire the speech categories of their language. However, recent data suggest it may not have evolved to enhance development, but rather to serve a more real-time communicative function. The second case study examines the argument for seemingly specialized mechanisms for learning word meanings (e.g., fast-mapping). Both behavioral and computational work suggest that learning may be much slower and served by general-purpose mechanisms like associative learning. Fast-mapping, then, may be a real-time process meant to serve immediate communication, not learning, by augmenting incomplete vocabulary knowledge with constraints from the current context. Together, these studies suggest that evolutionary accounts should consider selection pressure arising from both real-time communicative demands and from the need for accurate language development. Copyright © 2016 Cognitive Science Society, Inc.
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1994-01-01
The Simple Tool for Automated Reasoning program (STAR) is an interactive, interpreted programming language for the development and operation of artificial intelligence (AI) application systems. STAR provides an environment for integrating traditional AI symbolic processing with functions and data structures defined in compiled languages such as C, FORTRAN and PASCAL. This type of integration occurs in a number of AI applications including interpretation of numerical sensor data, construction of intelligent user interfaces to existing compiled software packages, and coupling AI techniques with numerical simulation techniques and control systems software. The STAR language was created as part of an AI project for the evaluation of imaging spectrometer data at NASA's Jet Propulsion Laboratory. Programming in STAR is similar to other symbolic processing languages such as LISP and CLIP. STAR includes seven primitive data types and associated operations for the manipulation of these structures. A semantic network is used to organize data in STAR, with capabilities for inheritance of values and generation of side effects. The AI knowledge base of STAR can be a simple repository of records or it can be a highly interdependent association of implicit and explicit components. The symbolic processing environment of STAR may be extended by linking the interpreter with functions defined in conventional compiled languages. These external routines interact with STAR through function calls in either direction, and through the exchange of references to data structures. The hybrid knowledge base may thus be accessed and processed in general by either side of the application. STAR is initially used to link externally compiled routines and data structures. It is then invoked to interpret the STAR rules and symbolic structures. In a typical interactive session, the user enters an expression to be evaluated, STAR parses the input, evaluates the expression, performs any file input/output required, and displays the results. The STAR interpreter is written in the C language for interactive execution. It has been implemented on a VAX 11/780 computer operating under VMS, and the UNIX version has been implemented on a Sun Microsystems 2/170 workstation. STAR has a memory requirement of approximately 200K of 8 bit bytes, excluding externally compiled functions and application-dependent symbolic definitions. This program was developed in 1985.
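The read-parse-evaluate-display cycle described here is a classic interpreter loop. As an illustration only (a toy prefix-notation calculator, not STAR's actual syntax, data types, or C linkage):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(tokens):
    # Recursive evaluation of a prefix expression such as "* 2 + 3 4".
    tok = next(tokens)
    if tok in OPS:
        return OPS[tok](evaluate(tokens), evaluate(tokens))
    return float(tok)

while True:
    line = input("star> ")                   # read
    if line.strip() in ("exit", "quit"):
        break
    try:
        print(evaluate(iter(line.split())))  # parse, evaluate, display
    except (StopIteration, ValueError):
        print("parse error")
```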
CLIPS/Ada: An Ada-based tool for building expert systems
NASA Technical Reports Server (NTRS)
White, W. A.
1990-01-01
CLIPS/Ada is a production system language and a development environment. It is functionally equivalent to the CLIPS tool. CLIPS/Ada was developed in order to provide a means of incorporating expert system technology into projects where the use of the Ada language had been mandated. A secondary purpose was to glean information about the Ada language and its compilers, specifically whether or not the language and compilers were mature enough to support AI applications. The CLIPS/Ada tool is coded entirely in Ada and is designed to be used by Ada systems that require expert reasoning.
Toledo, Cíntia Matsuda; Cunha, Andre; Scarton, Carolina; Aluísio, Sandra
2014-01-01
Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two Natural Language Processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually, besides description time; type of scene described - simple or complex; presentation order - which type of picture was described first; and age). In this study, the descriptions by 144 of the subjects studied in Toledo [18], a study which included 200 healthy Brazilians of both genders, were used. A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods.
Danahy Ebert, Kerry; Scott, Cheryl M
2014-10-01
Both narrative language samples and norm-referenced language tests can be important components of language assessment for school-age children. The present study explored the relationship between these 2 tools within a group of children referred for language assessment. The study is a retrospective analysis of clinical records from 73 school-age children. Participants had completed an oral narrative language sample and at least one norm-referenced language test. Correlations between microstructural language sample measures and norm-referenced test scores were compared for younger (6- to 8-year-old) and older (9- to 12-year-old) children. Contingency tables were constructed to compare the 2 types of tools, at 2 different cutpoints, in terms of which children were identified as having a language disorder. Correlations between narrative language sample measures and norm-referenced tests were stronger for the younger group than the older group. Within the younger group, the level of language assessed by each measure contributed to associations among measures. Contingency analyses revealed moderate overlap in the children identified by each tool, with agreement affected by the cutpoint used. Narrative language samples may complement norm-referenced tests well, but age combined with narrative task can be expected to influence the nature of the relationship.
AUTOMATED GIS WATERSHED ANALYSIS TOOLS FOR RUSLE/SEDMOD SOIL EROSION AND SEDIMENTATION MODELING
A comprehensive procedure for computing soil erosion and sediment delivery metrics has been developed using a suite of automated Arc Macro Language (AML) scripts and a pair of processing-intensive ANSI C++ executable programs operating on an ESRI ArcGIS 8.x Workstation platform...
Metrical Phonology: German Sound System.
ERIC Educational Resources Information Center
Tice, Bradley S.
Metrical phonology, a linguistic process of phonological stress assessment and diagrammatic simplification of sentence and word stress, is discussed as it is found in the English and German languages. The objective is to promote use of metrical phonology as a tool for enhancing instruction in stress patterns in words and sentences, particularly in…
Translation: Elements of a Craft.
ERIC Educational Resources Information Center
Heiderson, Mazin A.
An overview of the skills, techniques, tools, and compensation of language translators and interpreters is offered. It begins with a definition of translation and a brief history of translation in the western world. Basic principles of translation dating back to Roman writers are also outlined. A five-step process in producing a good translation…
EFL Instructors' Perceptions of Usefulness and Ease of Use of the LMS Manaba
ERIC Educational Resources Information Center
Toland, Sean; White, Jeremy; Mills, Daniel; Bolliger, Doris U.
2014-01-01
Learning Management Systems (LMSs) have become important tools in higher education language instruction, which can facilitate both student learning and the administration of courses. The decision regarding which LMS a particular university adopts is a complicated process where the needs and opinions of several stakeholders, including…
Encouraging Learners to Create Language-Learning Materials
ERIC Educational Resources Information Center
Moiseenko, Veronika
2015-01-01
Student-produced materials are a powerful tool for promoting learner autonomy. They challenge the traditional paradigm of education because the very concept of learner-produced materials is based on trust in the student-centered learning process; when developing materials, learners do not rely on the teacher to make every decision. In this…
Tools for language: patterned iconicity in sign language nouns and verbs.
Padden, Carol; Hwang, So-One; Lepic, Ryan; Seegers, Sharon
2015-01-01
When naming certain hand-held, man-made tools, American Sign Language (ASL) signers exhibit either of two iconic strategies: a handling strategy, where the hands show holding or grasping an imagined object in action, or an instrument strategy, where the hands represent the shape or a dimension of the object in a typical action. The same strategies are also observed in the gestures of hearing nonsigners identifying pictures of the same set of tools. In this paper, we compare spontaneously created gestures from hearing nonsigning participants to commonly used lexical signs in ASL. Signers and gesturers were asked to respond to pictures of tools and to video vignettes of actions involving the same tools. Nonsigning gesturers overwhelmingly prefer the handling strategy for both the Picture and Video conditions. Nevertheless, they use more instrument forms when identifying tools in pictures, and more handling forms when identifying actions with tools. We found that ASL signers generally favor the instrument strategy when naming tools, but when describing tools being used by an actor, they are significantly more likely to use more handling forms. The finding that both gesturers and signers are more likely to alternate strategies when the stimuli are pictures or video suggests a common cognitive basis for differentiating objects from actions. Furthermore, the presence of a systematic handling/instrument iconic pattern in a sign language demonstrates that a conventionalized sign language exploits the distinction for grammatical purpose, to distinguish nouns and verbs related to tool use. Copyright © 2014 Cognitive Science Society, Inc.
SLIPTA e-Tool improves laboratory audit process in Vietnam and Cambodia
Nguyen, Thuong T.; McKinney, Barbara; Pierson, Antoine; Luong, Khue N.; Hoang, Quynh T.; Meharwal, Sandeep; Carvalho, Humberto M.; Nguyen, Cuong Q.; Nguyen, Kim T.
2014-01-01
Background: The Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) checklist is used worldwide to drive quality improvement in laboratories in developing countries and to assess the effectiveness of interventions such as the Strengthening Laboratory Management Toward Accreditation (SLMTA) programme. However, the paper-based format of the checklist makes administration cumbersome and limits timely analysis and communication of results. Development of the e-Tool: In early 2012, the SLMTA team in Vietnam developed an electronic SLIPTA checklist tool. The e-Tool was pilot tested in Vietnam in mid-2012 and revised. It was used during SLMTA implementation in Vietnam and Cambodia in 2012 and 2013 and further revised based on auditors' feedback about usability. Outcomes: The SLIPTA e-Tool enabled rapid turn-around of audit results, reduced workload and language barriers, and facilitated analysis of national results. Benefits of the e-Tool will be magnified with in-country scale-up of laboratory quality improvement efforts and potential expansion to other countries. PMID:29043190
Banna, Jinan C; Vera Becerra, Luz E; Kaiser, Lucia L; Townsend, Marilyn S
2010-01-01
Development of outcome measures relevant to health nutrition behaviors requires a rigorous process of testing and revision. Whereas researchers often report performance of quantitative data collection to assess questionnaire validity and reliability, qualitative testing procedures are often overlooked. This report outlines a procedure for assessing face validity of a Spanish-language dietary assessment tool. Reviewing the literature produced no rigorously validated Spanish-language food behavior assessment tools for the US Department of Agriculture's food assistance and education programs. In response to this need, this study evaluated the face validity of a Spanish-language food behavior checklist adapted from a 16-item English version of a food behavior checklist shown to be valid and reliable for limited-resource English speakers. The English version was translated using rigorous methods involving initial translation by one party and creation of five possible versions. Photos were modified based on client input and new photos were taken as necessary. A sample of low-income, Spanish-speaking women completed cognitive interviews (n=20). Spanish translation experts (n=7) fluent in both languages and familiar with both cultures made minor modifications but essentially approved client preferences. The resulting checklist generated a readability score of 93, indicating low reading difficulty. The Spanish-language checklist has adequate face validity in the target population and is ready for further validation using convergent measures. At the conclusion of testing, this instrument may be used to evaluate nutrition education interventions in California. These qualitative procedures provide a framework for designing evaluation tools for low-literate audiences participating in the US Department of Agriculture food assistance and education programs. Copyright 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
Village Voices, Global Visions: Digital Video as a Transformative Foreign Language Learning Tool
ERIC Educational Resources Information Center
Goulah, Jason
2007-01-01
This instrumental case study examines how adolescent high-intermediate Japanese language learners enrolled in a one-month credited abroad program used video as a mediational tool for (1) learning foreign language, content, and technology skills, (2) cultivating critical multiliteracies and transformative learning regarding geopolitics and the…
Developing an Indigenous Proficiency Scale
ERIC Educational Resources Information Center
Kahakalau, Ku
2017-01-01
With an increased interest in the revitalization of Indigenous languages and cultural practices worldwide, there is also an increased need to develop tools to support Indigenous language learners and instructors. The purpose of this article is to present such a tool, called ANA 'OLELO, designed specifically to assess Hawaiian language proficiency.…
Public Domain Generic Tools: An Overview.
ERIC Educational Resources Information Center
Erjavec, Tomaz
This paper presents an introduction to language engineering software, especially for computerized language and text corpora. The focus of the paper is on small and relatively independent pieces of software designed for specific, often low-level language analysis tasks, and on tools in the public domain. Discussion begins with the application of…
Specification, Design, and Analysis of Advanced HUMS Architectures
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
2004-01-01
During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using a scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas: (a) to improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) to improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) to evaluate the software architecture. 2) We have defined a new architectural language called HADL, or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using currently available XML parsers; thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) selection of solutions from a large space of designs; (b) synthesis of designs. The automation process is not an absolute Artificial Intelligence (AI) approach, though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solve the problems rather than creating a new design in the literal sense. Since searching is adopted as the main technique, the challenges involved are: (a) to minimize the effort in searching the database, where a very large number of possibilities exist; (b) to develop representations that can conveniently depict design knowledge that has evolved over many years; (c) to capture the information required to aid the automation process.
Conversion of the agent-oriented domain-specific language ALAS into JavaScript
NASA Astrophysics Data System (ADS)
Sredojević, Dejan; Vidaković, Milan; Okanović, Dušan; Mitrović, Dejan; Ivanović, Mirjana
2016-06-01
This paper describes the generation of JavaScript code from code written in ALAS, an agent-oriented domain-specific language for writing software agents that execute within the XJAF middleware. Since the agents can be executed on various platforms, they must be converted into a language of the target platform. We also utilize existing tools and technologies to make the whole conversion process as simple, fast, and efficient as possible. We use the Xtext framework, which is compatible with Java, to implement the ALAS infrastructure: the editor and the code generator. Since Xtext supports Java, generation of Java code from ALAS code is straightforward. To generate JavaScript code that will be executed within the target JavaScript XJAF implementation, the Google Web Toolkit (GWT) is used.
Hund-Georgiadis, Margret; Lex, Ulrike; Friederici, Angela D; von Cramon, D Yves
2002-07-01
Language lateralization was assessed by two independent functional techniques, fMRI and a dichotic listening test (DLT), in an attempt to establish a reliable and non-invasive protocol of dominance determination. This should particularly address the high intraindividual variability of language lateralization and allow decision-making in individual cases. Functional MRI of word classification tasks showed robust language lateralization in 17 right-handers and 17 left-handers in terms of activation in the inferior frontal gyrus. The DLT was introduced as a complementary tool to MR mapping for language dominance assessment, providing information on perceptual language processing located in superior temporal cortices. The overall agreement of lateralization assessment between the two techniques was 97.1%. Conflicting results were found in one subject, and diverging indices in ten further subjects. Increasing age, non-familial sinistrality, and a non-dominant writing hand were identified as the main factors explaining the observed mismatch between the two techniques. This finding stresses the concept of an intrahemispheric distribution of language function that is obviously associated with certain behavioral characteristics.
A review of cultural adaptations of screening tools for autism spectrum disorders.
Soto, Sandra; Linas, Keri; Jacobstein, Diane; Biel, Matthew; Migdal, Talia; Anthony, Bruno J
2015-08-01
Screening children to determine risk for Autism Spectrum Disorders has become more common, although some question the advisability of such a strategy. The purpose of this systematic review is to identify autism screening tools that have been adapted for use in cultures different from that in which they were developed, evaluate the cultural adaptation process, report on the psychometric properties of the adapted instruments, and describe the implications for further research and clinical practice. A total of 21 articles met criteria for inclusion, reporting on the cultural adaptation of autism screening in 19 countries and in 10 languages. The cultural adaptation process was not always clearly outlined and often did not include the recommended guidelines. Cultural/linguistic modifications to the translated tools tended to increase with the rigor of the adaptation process. Differences between the psychometric properties of the original and adapted versions were common, indicating the need to obtain normative data on populations to increase the utility of the translated tool. © The Author(s) 2014.
ERIC Educational Resources Information Center
Fredriksson, Christine
2015-01-01
Synchronous written chat and instant messaging are tools which have been used and explored in online language learning settings for at least two decades. Research literature has shown that such tools give second language (L2) learners opportunities for language learning, e.g., interaction in real time with peers and native speakers, the…
ERIC Educational Resources Information Center
Spieker, Matthew H.
2017-01-01
The purpose of this study was to compare the use of figurative language between master and novice instrumental music teachers and to investigate their attitudes toward figurative language as a teaching tool. Figurative language is defined as any creative verbal instruction intended to teach a concept. Sixteen (N = 16) secondary school,…
Ivanova, Maria V.; Hallowell, Brooke
2013-01-01
Background: There are a limited number of aphasia language tests in the majority of the world's commonly spoken languages. Furthermore, few aphasia tests in languages other than English have been standardized and normed, and few have supportive psychometric data pertaining to reliability and validity. The lack of standardized assessment tools across many of the world's languages poses serious challenges to clinical practice and research in aphasia. Aims: The current review addresses this lack of assessment tools by providing conceptual and statistical guidance for the development of aphasia assessment tools and the establishment of their psychometric properties. Main Contribution: A list of aphasia tests in the 20 most widely spoken languages is included. The pitfalls of translating an existing test into a new language versus creating a new test are outlined. Factors to consider in determining test content are discussed. Further, a description of test items corresponding to different language functions is provided, with special emphasis on implementing important controls in test design. Next, a broad review of principal psychometric properties relevant to aphasia tests is presented, with specific statistical guidance for establishing the psychometric properties of standardized assessment tools. Conclusions: This article may be used to help guide future work on developing, standardizing and validating aphasia language tests. The considerations discussed are also applicable to the development of standardized tests of other cognitive functions. PMID:23976813
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a large amount of computation in a few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor are also developed.
Open Source Clinical NLP - More than Any Single System.
Masanz, James; Pakhomov, Serguei V; Xu, Hua; Wu, Stephen T; Chute, Christopher G; Liu, Hongfang
2014-01-01
The number of Natural Language Processing (NLP) tools and systems for processing clinical free-text has grown as interest and processing capability have surged. Unfortunately any two systems typically cannot simply interoperate, even when both are built upon a framework designed to facilitate the creation of pluggable components. We present two ongoing activities promoting open source clinical NLP. The Open Health Natural Language Processing (OHNLP) Consortium was originally founded to foster a collaborative community around clinical NLP, releasing UIMA-based open source software. OHNLP's mission currently includes maintaining a catalog of clinical NLP software and providing interfaces to simplify the interaction of NLP systems. Meanwhile, Apache cTAKES aims to integrate best-of-breed annotators, providing a world-class NLP system for accessing clinical information within free-text. These two activities are complementary. OHNLP promotes open source clinical NLP activities in the research community and Apache cTAKES bridges research to the health information technology (HIT) practice.
2013-09-01
processes used in space system acquisitions, simply implementing a data exchange specification would not fundamentally improve how information is... and manage the configuration of all critical program models, processes, and tools used throughout the DoD. Second, mandate a data exchange
Speech, stone tool-making and the evolution of language.
Cataldo, Dana Michelle; Migliano, Andrea Bamberg; Vinicius, Lucio
2018-01-01
The 'technological hypothesis' proposes that gestural language evolved in early hominins to enable the cultural transmission of stone tool-making skills, with speech appearing later in response to the complex lithic industries of more recent hominins. However, no flintknapping study has assessed the efficiency of speech alone (unassisted by gesture) as a tool-making transmission aid. Here we show that subjects instructed by speech alone underperform in stone tool-making experiments in comparison to subjects instructed through either gesture alone or 'full language' (gesture plus speech), and also report lower satisfaction with their received instruction. The results provide evidence that gesture was likely to be selected over speech as a teaching aid in the earliest hominin tool-makers; that speech could not have replaced gesturing as a tool-making teaching aid in later hominins, possibly explaining the functional retention of gesturing in the full language of modern humans; and that speech may have evolved for reasons unrelated to tool-making. We conclude that speech is unlikely to have evolved as tool-making teaching aid superior to gesture, as claimed by the technological hypothesis, and therefore alternative views should be considered. For example, gestural language may have evolved to enable tool-making in earlier hominins, while speech may have later emerged as a response to increased trade and more complex inter- and intra-group interactions in Middle Pleistocene ancestors of Neanderthals and Homo sapiens; or gesture and speech may have evolved in parallel rather than in sequence.
Accuracy of a Screening Tool for Early Identification of Language Impairment
ERIC Educational Resources Information Center
Uilenburg, Noëlle; Wiefferink, Karin; Verkerk, Paul; van Denderen, Margot; van Schie, Carla; Oudesluys-Murphy, Ann-Marie
2018-01-01
Purpose: A screening tool called the "VTO Language Screening Instrument" (VTO-LSI) was developed to enable more uniform and earlier detection of language impairment. This report, consisting of 2 retrospective studies, focuses on the effects of using the VTO-LSI compared to regular detection procedures. Method: Study 1 retrospectively…
Language at a Distance: Sharpening a Communication Tool in the Online Classroom
ERIC Educational Resources Information Center
Hannan, Annika
2009-01-01
Both immensely powerful and entirely fickle, language in online instruction is a double-edged sword. A potent intermediary between instructor and students, and among students themselves, language is a key tool in online learning. It carries and cultivates information. It builds knowledge and self-awareness. It brings learners together in a…
Podcasting: An Effective Tool for Honing Language Students' Pronunciation?
ERIC Educational Resources Information Center
Ducate, Lara; Lomicka, Lara
2009-01-01
This paper reports on an investigation of podcasting as a tool for honing pronunciation skills in intermediate language learning. We examined the effects of using podcasts to improve pronunciation in second language learning and how students' attitudes changed toward pronunciation over the semester. A total of 22 students in intermediate German…
Integrating Computer-Assisted Translation Tools into Language Learning
ERIC Educational Resources Information Center
Fernández-Parra, María
2016-01-01
Although Computer-Assisted Translation (CAT) tools play an important role in the curriculum in many university translator training programmes, they are seldom used in the context of learning a language, as a good command of a language is needed before starting to translate. Since many institutions often have translator-training programmes as well…
A Format for Phylogenetic Placements
Matsen, Frederick A.; Hoffman, Noah G.; Gallagher, Aaron; Stamatakis, Alexandros
2012-01-01
We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement. PMID:22383988
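Because the format is plain JSON, post-processing reduces to ordinary data handling in any modern language. The sketch below parses a minimal placement document in Python; the record is fabricated for illustration, and the exact field names and tree syntax should be taken from the published specification:

```python
import json

doc = json.loads("""
{
  "version": 3,
  "tree": "((A:0.1,B:0.2):0.05,C:0.3);",
  "fields": ["edge_num", "likelihood", "like_weight_ratio"],
  "placements": [
    {"p": [[0, -1234.56, 0.87], [2, -1240.10, 0.13]], "n": ["read_001"]}
  ]
}
""")

# Report each read's best-supported edge by likelihood weight ratio.
fields = doc["fields"]
lwr = fields.index("like_weight_ratio")
for placement in doc["placements"]:
    best = max(placement["p"], key=lambda p: p[lwr])
    for name in placement["n"]:
        print(name, "-> edge", best[fields.index("edge_num")])
```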
Proposal for constructing an advanced software tool for planetary atmospheric modeling
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Sims, Michael H.; Podolak, Esther; Mckay, Christopher P.; Thompson, David E.
1990-01-01
Scientific model building can be a time intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing and using models. The proposed tool will include an interactive intelligent graphical interface and a high level, domain specific, modeling language. As a testbed for this research, we propose development of a software prototype in the domain of planetary atmospheric modeling.
Query2Question: Translating Visualization Interaction into Natural Language.
Nafari, Maryam; Weaver, Chris
2015-06-01
Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.
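As a rough illustration of this kind of interaction-to-question translation, the sketch below maps logged interaction events to English questions with string templates; the event types, template text, and structure are invented and do not reflect Q2Q's actual design:

```python
# Hypothetical event vocabulary and templates for turning an interaction
# log into natural-language questions.
TEMPLATES = {
    "filter": "Which records satisfy {predicate}?",
    "zoom":   "What does the region around {target} look like in detail?",
    "sort":   "How do the records rank by {field}?",
}

def to_question(event):
    return TEMPLATES[event["type"]].format(**event["args"])

log = [
    {"type": "filter", "args": {"predicate": "year > 2000"}},
    {"type": "sort",   "args": {"field": "population"}},
]
for event in log:
    print(to_question(event))
```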
Iriki, Atsushi; Taoka, Miki
2012-01-12
Hominin evolution has involved a continuous process of addition of new kinds of cognitive capacity, including those relating to manufacture and use of tools and to the establishment of linguistic faculties. The dramatic expansion of the brain that accompanied additions of new functional areas would have supported such continuous evolution. Extended brain functions would have driven rapid and drastic changes in the hominin ecological niche, which in turn demanded further brain resources to adapt to it. In this way, humans have constructed a novel niche in each of the ecological, cognitive and neural domains, whose interactions accelerated their individual evolution through a process of triadic niche construction. Human higher cognitive activity can therefore be viewed holistically as one component in a terrestrial ecosystem. The brain's functional characteristics seem to play a key role in this triadic interaction. We advance a speculative argument about the origins of its neurobiological mechanisms, as an extension (with wider scope) of the evolutionary principles of adaptive function in the animal nervous system. The brain mechanisms that subserve tool use may bridge the gap between gesture and language--the site of such integration seems to be the parietal and extending opercular cortices.
Stone tools, language and the brain in human evolution.
Stout, Dietrich; Chaminade, Thierry
2012-01-12
Long-standing speculations and more recent hypotheses propose a variety of possible evolutionary connections between language, gesture and tool use. These arguments have received important new support from neuroscientific research on praxis, observational action understanding and vocal language demonstrating substantial functional/anatomical overlap between these behaviours. However, valid reasons for scepticism remain as well as substantial differences in detail between alternative evolutionary hypotheses. Here, we review the current status of alternative 'gestural' and 'technological' hypotheses of language origins, drawing on current evidence of the neural bases of speech and tool use generally, and on recent studies of the neural correlates of Palaeolithic technology specifically.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Tan, J; Kavanaugh, J
Purpose: Radiotherapy (RT) contours delineated either manually or semiautomatically require verification before clinical usage. Manual evaluation is very time consuming. A new integrated software tool using supervised pattern contour recognition was thus developed to facilitate this process. Methods: The contouring tool was developed using an object-oriented programming language, C#, and application programming interfaces, e.g., the Visualization Toolkit (VTK). The C# language served as the tool design basis. The Accord.Net scientific computing libraries were utilized for the required statistical data processing and pattern recognition, while VTK was used to build and render 3-D mesh models from critical RT structures in real time and with 360° visualization. Principal component analysis (PCA) was used for system self-updating of geometry variations of normal structures based on physician-approved RT contours as a training dataset. The in-house design of the supervised PCA-based contour recognition method was used for automatically evaluating contour normality/abnormality. The function for reporting the contour evaluation results was implemented by using C# and Windows Form Designer. Results: The software input was RT simulation images and RT structures from commercial clinical treatment planning systems. Several abilities were demonstrated: automatic assessment of RT contours, file loading/saving of various modality medical images and RT contours, and generation/visualization of 3-D images and anatomical models. Moreover, it supported the 360° rendering of the RT structures in a multi-slice view, which allows physicians to visually check and edit abnormally contoured structures. Conclusion: This new software integrates the supervised learning framework with image processing and graphical visualization modules for RT contour verification. This tool has great potential for facilitating treatment planning with the assistance of an automatic contour evaluation module in avoiding unnecessary manual verification for physicians/dosimetrists. In addition, its nature as a compact and stand-alone tool allows for future extensibility to include additional functions for physicians' clinical needs.
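The supervised PCA step described above can be sketched as fitting principal components to physician-approved contours and flagging new contours whose reconstruction error is unusually large. The following Python sketch uses synthetic feature vectors and an invented threshold rule; it is not the in-house C#/Accord.NET implementation:

```python
import numpy as np

# Each row stands in for a contour flattened to a fixed-length vector
# (e.g., resampled point coordinates). Data here are synthetic.
rng = np.random.default_rng(0)
train = rng.normal(size=(50, 20))

# PCA by hand: center the data, then keep the top singular vectors.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:5]

def reconstruction_error(x):
    # Distance from x to its projection onto the learned PCA subspace.
    coeffs = (x - mean) @ components.T
    return np.linalg.norm(x - (mean + coeffs @ components))

# Threshold from the training distribution; larger errors are flagged.
errors = np.array([reconstruction_error(x) for x in train])
threshold = errors.mean() + 3 * errors.std()

new_contour = rng.normal(size=20) * 3   # deliberately out-of-family example
print("abnormal" if reconstruction_error(new_contour) > threshold else "normal")
```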
Using XML Configuration-Driven Development to Create a Customizable Ground Data System
NASA Technical Reports Server (NTRS)
Nash, Brent; DeMore, Martha
2009-01-01
The Mission Data Processing and Control Subsystem (MPCS) is being developed as a multi-mission Ground Data System with the Mars Science Laboratory (MSL) as the first fully supported mission. MPCS is a fully featured, Java-based Ground Data System (GDS) for telecommand and telemetry processing based on Configuration-Driven Development (CDD). The eXtensible Markup Language (XML) is the ideal language for CDD because it is easily readable and editable by all levels of users and is also backed by a World Wide Web Consortium (W3C) standard and numerous powerful processing tools that make it uniquely flexible. The CDD approach adopted by MPCS minimizes changes to compiled code by using XML to create a series of configuration files that provide both coarse- and fine-grained control over all aspects of GDS operation.
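A minimal sketch of configuration-driven development in this sense: behavior is dispatched from an XML file rather than from compiled code. The element and attribute names below are invented for illustration and do not reflect MPCS's actual schemas:

```python
import xml.etree.ElementTree as ET

config_xml = """
<gds mission="demo">
  <telemetry port="5401" format="ccsds"/>
  <channels>
    <channel id="THERM-01" units="degC" alarm_high="45.0"/>
    <channel id="VOLT-02"  units="V"    alarm_high="32.0"/>
  </channels>
</gds>
"""

root = ET.fromstring(config_xml)
# Coarse-grained control: top-level operating parameters.
print("mission:", root.get("mission"),
      "| telemetry port:", root.find("telemetry").get("port"))
# Fine-grained control: per-channel behavior driven entirely by the XML.
for ch in root.find("channels"):
    print(f"{ch.get('id')}: alarm above {ch.get('alarm_high')} {ch.get('units')}")
```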
UMLS content views appropriate for NLP processing of the biomedical literature vs. clinical text.
Demner-Fushman, Dina; Mork, James G; Shooshan, Sonya E; Aronson, Alan R
2010-08-01
Identification of medical terms in free text is a first step in such Natural Language Processing (NLP) tasks as automatic indexing of biomedical literature and extraction of patients' problem lists from the text of clinical notes. Many tools developed to perform these tasks use biomedical knowledge encoded in the Unified Medical Language System (UMLS) Metathesaurus. We continue our exploration of automatic approaches to the creation of subsets (UMLS content views) which can support NLP processing of either the biomedical literature or clinical text. We found that suppression of highly ambiguous terms in the conservative AutoFilter content view can partially replace manual filtering for literature applications, and suppression of two-character mappings in the same content view achieves 89.5% precision at 78.6% recall for clinical applications. Published by Elsevier Inc.
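The two suppression rules described can be pictured as simple filters over candidate term-to-concept mappings; the sketch below uses invented data and thresholds, not the actual AutoFilter content view:

```python
# Candidate mappings: (text, concept ID, number of competing senses).
# All values are fabricated for illustration.
candidates = [
    ("MS", "C0026269", 7),
    ("cold", "C0009443", 5),
    ("heart attack", "C0027051", 1),
    ("myocardial infarction", "C0027051", 1),
]

MAX_AMBIGUITY = 4   # suppress highly ambiguous terms
MIN_LENGTH = 3      # suppress two-character mappings

content_view = [(text, cui) for text, cui, senses in candidates
                if senses <= MAX_AMBIGUITY and len(text) >= MIN_LENGTH]
print(content_view)   # keeps only the two unambiguous multi-word terms
```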
ATS displays: A reasoning visualization tool for expert systems
NASA Technical Reports Server (NTRS)
Selig, William John; Johannes, James D.
1990-01-01
Reasoning visualization is a useful tool that can help users better understand the inherently non-sequential logic of an expert system. While this is desirable in most all expert system applications, it is especially so for such critical systems as those destined for space-based operations. A hierarchical view of the expert system reasoning process and some characteristics of these various levels is presented. Also presented are Abstract Time Slice (ATS) displays, a tool to visualize the plethora of interrelated information available at the host inferencing language level of reasoning. The usefulness of this tool is illustrated with some examples from a prototype potable water expert system for possible use aboard Space Station Freedom.
NASA Astrophysics Data System (ADS)
Sato, Yuko
The purpose of this study was to investigate the effects of culture and language on Japanese aerospace engineers' information-seeking processes using both quantitative and qualitative approaches. The Japanese sample consisted of 162 members of the Japan Society for Aeronautical and Space Sciences (JSASS). U.S. aerospace engineers served as a reference point, consisting of 213 members of the American Institute of Aeronautics and Astronautics (AIAA). The survey method was utilized in gathering data using self-administered mail questionnaires in order to explore the following eight areas: (1) the content and use of information resources; (2) production and use of information products; (3) methods of accessing information service providers; (4) foreign language skills; (5) studying/researching/collaborating abroad as a tool in expanding information resources; (6) scientific and technical societies as networking tools; (7) alumni associations (school/class reunions) as networking tools; and (8) social, corporate, civic and health/fitness clubs as networking tools. Nine cultural factors, expressed as statements about Japanese society, were examined: (1) information is neither autonomous, objective, nor independent of the subject of cognition; (2) information and knowledge are not readily accessible to the public; (3) emphasis on groups is reinforced in a hierarchical society; (4) social networks thrive as information-sharing vehicles; (5) high context is a predominant form of communication, in which most of the information is already in the person while very little is in the coded, transmitted part of the message; (6) obligations based on mutual trust, rather than contractual agreements, dictate social behaviors; (7) a surface message is what is presented, while a bottom-line message is the true feeling privately held; (8) various religious beliefs uphold a work ethic based on harmony; and (9) ideas from outside are readily assimilated into Japanese society. The results showed that culture and language affect Japanese aerospace engineers' information-seeking processes. Awareness and knowledge of such effects will lead to improved global information services in aerospace engineering by incorporating various information-resource-providing organizations.
Uses of Digital Tools and Literacies in the English Language Arts Classroom
ERIC Educational Resources Information Center
Beach, Richard
2012-01-01
This article reviews research on English language arts teachers' use of digital tools in the classroom to remediate print literacies. Specifically, this review focuses on the affordances of digital tools to foster uses of digital literacies of informational/accessibility, collaboration knowledge construction, multimodal communication, gaming…
It Is Time to Rethink Central Auditory Processing Disorder Protocols for School-Aged Children.
DeBonis, David A
2015-06-01
The purpose of this article is to review the literature that pertains to ongoing concerns regarding the central auditory processing construct among school-aged children and to assess whether the degree of uncertainty surrounding central auditory processing disorder (CAPD) warrants a change in current protocols. Methodology on this topic included a review of relevant and recent literature through electronic search tools (e.g., ComDisDome, PsycINFO, Medline, and Cochrane databases); published texts; as well as published articles from the Journal of the American Academy of Audiology; the American Journal of Audiology; the Journal of Speech, Language, and Hearing Research; and Language, Speech, and Hearing Services in Schools. This review revealed strong support for the following: (a) Current testing of CAPD is highly influenced by nonauditory factors, including memory, attention, language, and executive function; (b) the lack of agreement regarding the performance criteria for diagnosis is concerning; (c) the contribution of auditory processing abilities to language, reading, and academic and listening abilities, as assessed by current measures, is not significant; and (d) the effectiveness of auditory interventions for improving communication abilities has not been established. Routine use of CAPD test protocols cannot be supported, and strong consideration should be given to redirecting focus on assessing overall listening abilities. Also, intervention needs to be contextualized and functional. A suggested protocol is provided for consideration. All of these issues warrant ongoing research.
Exploring Listeners' Real-Time Reactions to Regional Accents
ERIC Educational Resources Information Center
Watson, Kevin; Clark, Lynn
2015-01-01
Evaluative reactions to language stimuli are presumably dynamic events, constantly changing through time as the signal unfolds, yet the tools we usually use to capture these reactions provide us with only a snapshot of this process by recording reactions at a single point in time. This paper outlines and evaluates a new methodology which employs…
"Immunity-to-Change Language Technology": An Educational Tool for Pastoral Leadership Education
ERIC Educational Resources Information Center
Ste-Marie, Lorraine
2008-01-01
One of the primary aims of pastoral leadership education is to offer reflective processes that enable learners to surface, critique, and construct different epistemological conceptions of reality leading to more effective pastoral practice. In many pastoral leadership education programs, this type of intentional reflection usually takes place in a…
Writing as Learning: A Content-Based Approach.
ERIC Educational Resources Information Center
Rothstein, Evelyn; Lauber, Gerald
Based on the understanding that writing should not be confined to the language arts classroom, this book provides over 200 examples of how 12 different strategies can be used in kindergarten through high school classrooms. The writing strategies in the book demonstrate how writing can also be employed as a powerful tool for processing new…
ERIC Educational Resources Information Center
Correia, Secundino; Medeiros, Paula; Mendes, Mafalda; Silva, Margarida
2013-01-01
We are engaged in an innovation process for the development of a new generation of tools and resources for education and training throughout life, available on any platform, at any time and place, and in any language. The project TOPQX intends to congregate a set of theoretical and empirical resources that form a scientific base from which it will be…
ERIC Educational Resources Information Center
Witt, Autumn Song
2010-01-01
This dissertation follows an oral language assessment tool from initial design and implementation to validity analysis. The specialized variables of this study are the population: international teaching assistants and the purpose: spoken assessment as a hiring prerequisite. However, the process can easily be applied to other populations and…
ERIC Educational Resources Information Center
Hardin, Belinda J.; Scott-Little, Catherine; Mereoiu, Mariana
2013-01-01
With the increasing number of preschool-age children of Latino heritage entering U.S. schools comes a growing need to accurately determine children's individual needs and identify potential disabilities, beginning with the screening process. Unfortunately, teachers face many challenges when screening English language learners. Often, parents have…
An Evolving Ecosystem for Natural Language Processing in Department of Veterans Affairs.
Garvin, Jennifer H; Kalsy, Megha; Brandt, Cynthia; Luther, Stephen L; Divita, Guy; Coronado, Gregory; Redd, Doug; Christensen, Carrie; Hill, Brent; Kelly, Natalie; Treitler, Qing Zeng
2017-02-01
In an ideal clinical Natural Language Processing (NLP) ecosystem, researchers and developers would be able to collaborate with others, undertake validation of NLP systems, components, and related resources, and disseminate them. We captured requirements and formative evaluation data from the Veterans Affairs (VA) Clinical NLP Ecosystem stakeholders using semi-structured interviews and meeting discussions. We developed a coding rubric to code interviews. We assessed inter-coder reliability using percent agreement and the kappa statistic. We undertook 15 interviews and held two workshop discussions. The main areas of requirements related to design and functionality, resources, and information. Stakeholders also confirmed the vision of the second generation of the Ecosystem; recommendations included adding mechanisms to better understand terms, measuring collaboration to demonstrate value, and providing datasets/tools to navigate spelling errors in consumer language, among others. Stakeholders also recommended the capability to communicate with developers working on the next version of the VA electronic health record (VistA Evolution), a mechanism to automatically monitor downloads of tools, and an automatic summary of the downloads for Ecosystem contributors and funders. After three rounds of coding and discussion, we determined the percent agreement of two coders to be 97.2% and the kappa to be 0.7851. The vision of the VA Clinical NLP Ecosystem met stakeholder needs. Interviews and discussion provided key requirements that inform the design of the VA Clinical NLP Ecosystem.
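For reference, the two reliability figures quoted above come from standard formulas: percent agreement is the fraction of items coded identically, and Cohen's kappa corrects that figure for chance agreement. A minimal sketch with invented coding labels (not the study's data):

```python
from collections import Counter

def percent_agreement(a, b):
    # Fraction of items on which the two coders assigned the same code.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    # kappa = (p_o - p_e) / (1 - p_e), where p_e is chance agreement
    # estimated from each coder's marginal label frequencies.
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

coder1 = ["design", "resources", "info", "design", "design", "resources"]
coder2 = ["design", "resources", "info", "design", "resources", "resources"]
print(percent_agreement(coder1, coder2))  # 0.833...
print(cohens_kappa(coder1, coder2))       # ~0.739
```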
Benchmarking natural-language parsers for biological applications using dependency graphs.
Clegg, Andrew B; Shepherd, Adrian J
2007-01-25
Interest is growing in the application of syntactic parsers to natural language processing problems in biology, but assessing their performance is difficult because differences in linguistic convention can falsely appear to be errors. We present a method for evaluating their accuracy using an intermediate representation based on dependency graphs, in which the semantic relationships important in most information extraction tasks are closer to the surface. We also demonstrate how this method can be easily tailored to various application-driven criteria. Using the GENIA corpus as a gold standard, we tested four open-source parsers which have been used in bioinformatics projects. We first present overall performance measures, and test the two leading tools, the Charniak-Lease and Bikel parsers, on subtasks tailored to reflect the requirements of a system for extracting gene expression relationships. These two tools clearly outperform the other parsers in the evaluation, and achieve accuracy levels comparable to or exceeding native dependency parsers on similar tasks in previous biological evaluations. Evaluating using dependency graphs allows parsers to be tested easily on criteria chosen according to the semantics of particular biological applications, drawing attention to important mistakes and soaking up many insignificant differences that would otherwise be reported as errors. Generating high-accuracy dependency graphs from the output of phrase-structure parsers also provides access to the more detailed syntax trees that are used in several natural-language processing techniques.
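A minimal Python sketch of the dependency-based scoring idea, assuming each graph is reduced to head attachments per token; this is an illustration of the technique, not the authors' evaluation code.

    # Score a parser's dependency graph against a gold graph (illustrative).
    # A graph maps each token index to its head index; 0 is the artificial root.

    def attachment_score(gold: dict, predicted: dict) -> float:
        """Fraction of tokens whose predicted head matches the gold head."""
        correct = sum(1 for tok, head in gold.items() if predicted.get(tok) == head)
        return correct / len(gold)

    # "IL-2 activates NF-kB": gold heads vs. a parser that mis-attaches NF-kB.
    gold = {1: 2, 2: 0, 3: 2}
    pred = {1: 2, 2: 0, 3: 1}
    print(attachment_score(gold, pred))  # 0.666...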
Teachers' Opinions about the Use of Body Language
ERIC Educational Resources Information Center
Benzer, Ahmet
2012-01-01
Effective communication occurs with non-verbal and verbal tools. In this study, body language as a non-verbal communication tool is examined, and teachers' opinions about the use and importance of body language in education are surveyed. Eight open-ended questions were asked of 100 teachers. As a result, it is shown that teachers…
ERIC Educational Resources Information Center
Sherman, Tracy; Shulman, Brian B.
1999-01-01
This study examined test characteristics of the Pediatric Language Acquisition Screening Tool for Early Referral-Revised (PLASTER-R), a set of developmental questionnaires for children 3 to 60 months of age. The PLASTER-R was moderately to highly successful in identifying children within normal limits for language development. Test-retest…
Enhancing the Learning and Retention of Biblical Languages for Adult Students
ERIC Educational Resources Information Center
Morse, MaryKate
2004-01-01
Finding ways to reduce students' anxiety and maximize the value of learning Greek and Hebrew is a continual challenge for biblical language teachers. Some language teachers use technology tools such as web sites or CDs with audio lessons to improve the experience. Though these tools are helpful, this paper explores the value gained from…
UNIX as an environment for producing numerical software
NASA Technical Reports Server (NTRS)
Schryer, N. L.
1978-01-01
The UNIX operating system supports a number of software tools: a mathematical equation-setting language, a phototypesetting language, a FORTRAN preprocessor language, a text editor, and a command interpreter. The design, implementation, documentation, and maintenance of a portable FORTRAN test of the floating-point arithmetic unit of a computer is used to illustrate these tools at work.
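In the same spirit as the portable floating-point test described above, a few lines suffice to probe a property of the arithmetic unit at run time; this Python sketch estimates the machine epsilon and is offered only as an analogy to the FORTRAN test, not a reconstruction of it.

    # Probe the machine epsilon of the floating-point unit at run time.
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    print(eps)  # 2.220446049250313e-16 on IEEE-754 double-precision hardware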
Peer Observation: A Professional Learning Tool for English Language Teachers in an EFL Institute
ERIC Educational Resources Information Center
Ahmed, Ejaz; Nordin, Zaimuariffudin Shukri; Shah, Sayyed Rashid; Channa, Mansoor Ahmed
2018-01-01
The key aim of this study is to explore the perceptions of English as foreign language (EFL) teachers about peer observation as a tool for professional development that is implemented in an English Language Institute of a Saudi Arabian university. This paper reviews literature on peer observation to develop a conceptual and theoretical…
ERIC Educational Resources Information Center
Guardado, Martin
2013-01-01
This article investigates the linguistic tools employed by Hispanic Canadian families in their language socialization efforts of fostering sustained heritage language (HL) use. The article is based on data collected during a 1½-year ethnography, and focuses on the metapragmatic devices used in daily interactions. Utilizing analytic tools from the…
ERIC Educational Resources Information Center
Estapa, Anne; Pinnow, Rachel J.; Chval, Kathryn B.
2016-01-01
This two-year study investigated how an innovative video tool enhanced novice-teacher noticing abilities and instructional practice in relation to teaching mathematics to English language learners in third grade classrooms. Specifically, teachers viewed videos of their mathematics lessons that were filmed by Latino English language learners who…
In pursuit of rigour and accountability in participatory design
Frauenberger, Christopher; Good, Judith; Fitzpatrick, Geraldine; Iversen, Ole Sejer
2015-01-01
The field of Participatory Design (PD) has greatly diversified and we see a broad spectrum of approaches and methodologies emerging. However, to foster its role in designing future interactive technologies, a discussion about accountability and rigour across this spectrum is needed. Rejecting the traditional, positivistic framework, we take inspiration from related fields such as Design Research and Action Research to develop interpretations of these concepts that are rooted in PD's own belief system. We argue that unlike in other fields, accountability and rigour are nuanced concepts that are delivered through debate, critique and reflection. A key prerequisite for having such debates is the availability of a language that allows designers, researchers and practitioners to construct solid arguments about the appropriateness of their stances, choices and judgements. To this end, we propose a “tool-to-think-with” that provides such a language by guiding designers, researchers and practitioners through a process of systematic reflection and critical analysis. The tool proposes four lenses to critically reflect on the nature of a PD effort: epistemology, values, stakeholders and outcomes. In a subsequent step, the coherence between the revealed features is analysed and shows whether they pull the project in the same direction or work against each other. Regardless of the flavour of PD, we argue that this coherence of features indicates the level of internal rigour of PD work and that the process of reflection and analysis provides the language to argue for it. We envision our tool to be useful at all stages of PD work: in the planning phase, as part of a reflective practice during the work, and as a means to construct knowledge and advance the field after the fact. We ground our theoretical discussions in a specific PD experience, the ECHOES project, to motivate the tool and to illustrate its workings. PMID:26109833
Automated Generation of Technical Documentation and Provenance for Reproducible Research
NASA Astrophysics Data System (ADS)
Jolly, B.; Medyckyj-Scott, D.; Spiekermann, R.; Ausseil, A. G.
2017-12-01
Data provenance and detailed technical documentation are essential components of high-quality reproducible research; however, they are often only partially addressed during a research project. Recording and maintaining this information during the course of a project can be a difficult task to get right, as it is a time-consuming and often boring process for the researchers involved. As a result, provenance records and technical documentation provided alongside research results can be incomplete or may not be completely consistent with the actual processes followed. While providing access to the data and code used by the original researchers goes some way toward enabling reproducibility, this does not count as, or replace, data provenance. Additionally, this can be a poor substitute for good technical documentation and is often more difficult for a third party to understand - particularly if they do not understand the programming language(s) used. We present and discuss a tool built from the ground up for the production of well-documented and reproducible spatial datasets that are created by applying a series of classification rules to a number of input layers. The internal model of the classification rules required by the tool to process the input data is exploited to also produce technical documentation and provenance records with minimal additional user input. Available provenance records that accompany input datasets are incorporated into those that describe the current process. As a result, each time a new iteration of the analysis is performed the documentation and provenance records are re-generated to provide an accurate description of the exact process followed. The generic nature of this tool, and the lessons learned during its creation, have wider application to other fields where the production of derivative datasets must be done in an open, defensible, and reproducible way.
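The core idea, that one structured rule model drives both the classification and the regenerated documentation, can be sketched in a few lines of Python. This is an illustration under assumed rule and field names, not the authors' tool.

    # Illustrative sketch (not the authors' tool): because each classification
    # rule is a structured object, one model can both classify input data and
    # regenerate documentation and provenance on every run. Names are invented.
    import datetime

    RULES = [
        {"name": "steep", "test": lambda cell: cell["slope"] > 15,
         "doc": "slope greater than 15 degrees"},
        {"name": "flat", "test": lambda cell: cell["slope"] <= 15,
         "doc": "slope of 15 degrees or less"},
    ]

    def classify(cell):
        return next(r["name"] for r in RULES if r["test"](cell))

    def provenance_record():
        """Rebuilt on each run, so it always matches the process followed."""
        lines = ["Run: " + datetime.datetime.now().isoformat()]
        lines += ["- class '%s': %s" % (r["name"], r["doc"]) for r in RULES]
        return "\n".join(lines)

    print(classify({"slope": 22}))   # steep
    print(provenance_record())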
Natural language processing to ascertain two key variables from operative reports in ophthalmology.
Liu, Liyan; Shorstein, Neal H; Amsden, Laura B; Herrinton, Lisa J
2017-04-01
Antibiotic prophylaxis is critical to ophthalmology and other surgical specialties. We performed natural language processing (NLP) of 743 838 operative notes recorded for 315 246 surgeries to ascertain two variables needed to study the comparative effectiveness of antibiotic prophylaxis in cataract surgery. The first key variable was an exposure variable, intracameral antibiotic injection. The second was an intraoperative complication, posterior capsular rupture (PCR), which functioned as a potential confounder. To help other researchers use NLP in their settings, we describe our NLP protocol and lessons learned. For each of the two variables, we used SAS Text Miner and other SAS text-processing modules with a training set of 10 000 (1.3%) operative notes to develop a lexicon. The lexica identified misspellings, abbreviations, and negations, and linked words into concepts (e.g. "antibiotic" linked with "injection"). We confirmed the NLP tools by iteratively obtaining random samples of 2000 (0.3%) notes, with replacement. The NLP tools identified approximately 60 000 intracameral antibiotic injections and 3500 cases of PCR. The positive and negative predictive values for intracameral antibiotic injection exceeded 99%. For the intraoperative complication, they exceeded 94%. NLP was a valid and feasible method for obtaining critical variables needed for a research study of surgical safety. These NLP tools were intended for use in the study sample. Use with external datasets or future datasets in our own setting would require further testing. Copyright © 2017 John Wiley & Sons, Ltd.
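A toy Python illustration of the lexicon mechanics described above: expanding abbreviations, linking words into a concept, and checking for negation. This is plain Python, not the SAS Text Miner pipeline, and the vocabulary entries are invented.

    # Toy concept-and-negation check (illustrative only).
    import re

    ABBREVIATIONS = {"abx": "antibiotic", "inj": "injection"}
    NEGATION = re.compile(r"\b(no|not|without|denies)\b", re.IGNORECASE)

    def mentions_antibiotic_injection(note: str) -> bool:
        words = [ABBREVIATIONS.get(w, w) for w in re.findall(r"[a-z]+", note.lower())]
        text = " ".join(words)
        return "antibiotic" in words and "injection" in words and not NEGATION.search(text)

    print(mentions_antibiotic_injection("Intracameral abx inj at close."))   # True
    print(mentions_antibiotic_injection("No antibiotic injection given."))   # False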
Schrader, Ulrich; Tackenberg, Peter; Widmer, Rudolf; Portenier, Lucien; König, Peter
2007-01-01
To ease and speed up the translation of the ICNP version 1 into the German language, a web service was developed to support the collaborative work of all Austrian, Swiss, and German translators and subsequently of the evaluators of the resultant translation. The web service supports a modified Delphi technique. Since the web service is multilingual by design, it can facilitate the translation of the ICNP into other languages as well. The process chosen can be adopted by other projects involved in translating terminologies.
An introduction to scripting in Ruby for biologists.
Aerts, Jan; Law, Andy
2009-07-16
The Ruby programming language has a lot to offer to any scientist with electronic data to process. Not only is the initial learning curve very shallow, but its reflection and meta-programming capabilities allow for the rapid creation of relatively complex applications while still keeping the code short and readable. This paper provides a gentle introduction to this scripting language for researchers without formal informatics training such as many wet-lab scientists. We hope this will provide such researchers an idea of how powerful a tool Ruby can be for their data management tasks and encourage them to learn more about it.
DiCanio, Christian; Nam, Hosung; Whalen, Douglas H; Bunnell, H Timothy; Amith, Jonathan D; García, Rey Castillo
2013-09-01
While efforts to document endangered languages have steadily increased, the phonetic analysis of endangered language data remains a challenge. The transcription of large documentation corpora is, by itself, a tremendous feat. Yet, the process of segmentation remains a bottleneck for research with data of this kind. This paper examines whether a speech processing tool, forced alignment, can facilitate the segmentation task for small data sets, even when the target language differs from the training language. The authors also examined whether a phone set with contextualization outperforms a more general one. The accuracy of two forced aligners trained on English (hmalign and p2fa) was assessed using corpus data from Yoloxóchitl Mixtec. Overall, agreement performance was relatively good, with accuracy at 70.9% within 30 ms for hmalign and 65.7% within 30 ms for p2fa. Segmental and tonal categories influenced accuracy as well. For instance, additional stop allophones in hmalign's phone set aided alignment accuracy. Agreement differences between aligners also corresponded closely with the types of data on which the aligners were trained. Overall, using existing alignment systems was found to have potential for making phonetic analysis of small corpora more efficient, with more allophonic phone sets providing better agreement than general ones.
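The agreement measure used above, in miniature: the share of predicted boundaries falling within a tolerance of the hand labels. A minimal Python sketch; times are in seconds and the data are invented.

    # Boundary agreement within a tolerance (e.g. 30 ms), illustrative data.

    def boundary_agreement(gold, predicted, tol=0.030):
        hits = sum(1 for g, p in zip(gold, predicted) if abs(g - p) <= tol)
        return hits / len(gold)

    gold = [0.120, 0.310, 0.550, 0.800]
    pred = [0.128, 0.290, 0.610, 0.805]
    print(f"{boundary_agreement(gold, pred):.1%}")  # 75.0%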
Toledo, Cíntia Matsuda; Cunha, Andre; Scarton, Carolina; Aluísio, Sandra
2014-01-01
Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. Objective: The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. Methods: The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two Natural Language Processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually, besides description time; type of scene described – simple or complex; presentation order – which type of picture was described first; and age). In this study, the descriptions by 144 of the subjects studied in Toledo18 were used, which included 200 healthy Brazilians of both genders. Results and Conclusion: A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods. PMID:29213908
Using PHP/MySQL to Manage Potential Mass Impacts
NASA Technical Reports Server (NTRS)
Hager, Benjamin I.
2010-01-01
This paper presents a new application using commercially available software to manage mass properties for spaceflight vehicles. PHP/MySQL (PHP: Hypertext Preprocessor and My Structured Query Language) are a web scripting language and a database language commonly used in concert with each other. They open up new opportunities to develop cutting edge mass properties tools, and in particular, tools for the management of potential mass impacts (threats and opportunities). The paper begins by providing an overview of the functions and capabilities of PHP/MySQL. The focus of this paper is on how PHP/MySQL are being used to develop an advanced "web accessible" database system for identifying and managing mass impacts on NASA's Ares I Upper Stage program, managed by the Marshall Space Flight Center. To fully describe this application, examples of the data, search functions, and views are provided to promote not only the function but also the security, ease of use, simplicity, and eye-appeal of this new application. This paper concludes with an overview of the other potential mass properties applications and tools that could be developed using PHP/MySQL. The premise behind this paper is that PHP/MySQL are software tools that are easy to use and readily available for the development of cutting edge mass properties applications. These tools are capable of providing "real-time" searching and status of an active database, automated report generation, and other capabilities to streamline and enhance mass properties management applications. By using PHP/MySQL, proven existing methods for managing mass properties can be adapted to present-day information technology to accelerate mass properties data gathering, analysis, and reporting, allowing mass property management to keep pace with today's fast-paced design and development processes.
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
SpeakApps 2: Speaking Practice in a Foreign Language through ICT Tools
ERIC Educational Resources Information Center
Appel, Christine; Nic Giolla Mhichíl, Mairéad; Jager, Sake; Prizel-Kania, Adriana
2014-01-01
SpeakApps 2 is a project with support of the Lifelong Learning Programme, Accompanying Measures. It follows up on the work and results reached during the KA2 project "SpeakApps: Oral production and interaction in a foreign language through ICT tools". The overarching aim of SpeakApps 2 is to further enhance Europeans' language learning…
Pedagogical Models of Concordance Use: Correlations between Concordance User Preferences
ERIC Educational Resources Information Center
Ballance, Oliver James
2017-01-01
One of the most promising avenues of research in computer-assisted language learning is the potential for language learners to make use of language corpora. However, using a corpus requires use of a corpus tool as an interface, typically a concordancer. How such a tool can be made most accessible to learners is an important issue. Specifically,…
The Use of Language Learning Apps as a Didactic Tool for EFL Vocabulary Building
ERIC Educational Resources Information Center
Guaqueta, Cesar A.; Castro-Garces, Angela Yicely
2018-01-01
This study explores the use of language learning apps as a didactic tool for vocabulary building in an English as a Foreign Language (EFL) context. It was developed through a mixed-methods approach, with a concurrent design in order to collect, analyze and validate qualitative and quantitative data. Although there was controversy on the use of…
A novel way of integrating rule-based knowledge into a web ontology language framework.
Gamberger, Dragan; Krstaçić, Goran; Jović, Alan
2013-01-01
Web ontology language (OWL), used in combination with the Protégé visual interface, is a modern standard for development and maintenance of ontologies and a powerful tool for knowledge presentation. In this work, we describe a novel possibility to use OWL also for the conceptualization of knowledge presented by a set of rules. In this approach, rules are represented as a hierarchy of actionable classes with necessary and sufficient conditions defined by the description logic formalism. The advantages are that the set of rules is no longer unordered, the concepts defined in descriptive ontologies can be used directly in the bodies of rules, and Protégé presents an intuitive tool for editing the rule set. Standard ontology reasoning processes are not applicable in this framework, but experiments conducted on the rule sets have demonstrated that the reasoning problems can be successfully solved.
Extending BPM Environments of Your Choice with Performance Related Decision Support
NASA Astrophysics Data System (ADS)
Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter
What-if simulations have been identified as one solution for business performance related decision support. Such support is especially useful in cases where it can be automatically generated out of Business Process Management (BPM) Environments from the existing business process models and performance parameters monitored from the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations or a combination of such solutions into already existing BPM environments. The approach abstracts from process modelling techniques, which enables automatic decision support spanning processes across numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
One approach for evaluating the Distributed Computing Design System (DCDS)
NASA Technical Reports Server (NTRS)
Ellis, J. T.
1985-01-01
The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.
NSTX-U Advances in Real-Time C++11 on Linux
NASA Astrophysics Data System (ADS)
Erickson, Keith G.
2015-08-01
Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic deadline is a failure) of 200 microseconds.
Towards programming languages for genetic engineering of living cells
Pedersen, Michael; Phillips, Andrew
2009-01-01
Synthetic biology aims at producing novel biological systems to carry out some desired and well-defined functions. An ultimate dream is to design these systems at a high level of abstraction using engineering-based tools and programming languages, press a button, and have the design translated to DNA sequences that can be synthesized and put to work in living cells. We introduce such a programming language, which allows logical interactions between potentially undetermined proteins and genes to be expressed in a modular manner. Programs can be translated by a compiler into sequences of standard biological parts, a process that relies on logic programming and prototype databases that contain known biological parts and protein interactions. Programs can also be translated to reactions, allowing simulations to be carried out. While current limitations on available data prevent full use of the language in practical applications, the language can be used to develop formal models of synthetic systems, which are otherwise often presented by informal notations. The language can also serve as a concrete proposal on which future language designs can be discussed, and can help to guide the emerging standard of biological parts which so far has focused on biological, rather than logical, properties of parts. PMID:19369220
NASA Technical Reports Server (NTRS)
Milligan, James R.; Dutton, James E.
1993-01-01
In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.
Silicon compilation: From the circuit to the system
NASA Astrophysics Data System (ADS)
Obrien, Keven
The methodology used for the compilation of silicon from a behavioral level to a system level is presented. The aim was to link the heretofore unrelated areas of high level synthesis and system level design. This link will play an important role in the development of future design automation tools as it will allow hardware/software co-designs to be synthesized. A design methodology is presented that allows, through the use of an intermediate representation (SOLAR), a System level Design Language (SDL) to be combined with a Hardware Description Language (VHDL). Two main steps are required in order to transform this specification into a synthesizable one. Firstly, a system level synthesis step including partitioning and communication synthesis is required in order to split the model into a set of interconnected subsystems, each of which will be processed by a high level synthesis tool. For this latter step, AMICAL is used, which allows powerful scheduling techniques that accept very abstract descriptions of control flow dominated circuits as input and generate interconnected RTL blocks that may feed existing logic-level synthesis tools.
NASA Astrophysics Data System (ADS)
Watanabe, W. M.; Candido, A.; Amâncio, M. A.; De Oliveira, M.; Pardo, T. A. S.; Fortes, R. P. M.; Aluísio, S. M.
2010-12-01
This paper presents an approach for assisting low-literacy readers in accessing Web online information. The "Educational FACILITA" tool is a Web content adaptation tool that provides innovative features and follows more intuitive interaction models regarding accessibility concerns. In particular, we propose an interaction model and a Web application that explore the natural language processing tasks of lexical elaboration and named entity labeling for improving Web accessibility. We report on the results obtained from a pilot study on usability analysis carried out with low-literacy users. The preliminary results show that "Educational FACILITA" improves the comprehension of text elements, although the assistance mechanisms might also confuse users when word sense ambiguity is introduced by gathering, for a complex word, a list of synonyms with multiple meanings. This finding points to a future solution in which the correct sense of a complex word in a sentence is identified, addressing this pervasive characteristic of natural languages. The pilot study also identified that experienced computer users find the tool more useful than novice computer users do.
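The ambiguity problem the pilot study surfaced is easy to reproduce: gathering synonyms without sense disambiguation mixes meanings. The Python sketch below uses the English WordNet via NLTK (requires pip install nltk and a one-time nltk.download('wordnet')); Educational FACILITA itself targets Portuguese text, so this is an analogy, not the tool's code.

    # Why unfiltered synonym lists confuse readers (illustrative).
    from nltk.corpus import wordnet as wn

    def naive_synonyms(word):
        names = set()
        for synset in wn.synsets(word):        # one synset per word sense
            for lemma in synset.lemmas():
                names.add(lemma.name().replace("_", " "))
        return sorted(names)

    # 'bank' mixes river-bank and financial senses in a single list.
    print(naive_synonyms("bank"))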
The Eugene language for synthetic biology.
Bilitchenko, Lesia; Liu, Adam; Densmore, Douglas
2011-01-01
Synthetic biological systems are currently created by an ad hoc, iterative process of design, simulation, and assembly. These systems would greatly benefit from the introduction of a more formalized and rigorous specification of the desired system components as well as constraints on their composition. In order to do so, the creation of robust and efficient design flows and tools is imperative. We present a human readable language (Eugene) which allows for both the specification of synthetic biological designs based on biological parts as well as providing a very expressive constraint system to drive the creation of composite devices from collection of parts. This chapter provides an overview of the language primitives as well as instructions on installation and use of Eugene v0.03b. Copyright © 2011 Elsevier Inc. All rights reserved.
ADASS Web Database XML Project
NASA Astrophysics Data System (ADS)
Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.
In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
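The payoff of the XML migration can be shown in miniature: once records are well-formed XML instead of ad hoc HTML, field extraction is a few robust lines. The element names in this Python sketch are invented for illustration and are not the project's actual schema.

    # Parse a small XML record set with the standard library.
    import xml.etree.ElementTree as ET

    doc = """<conference year="2000">
      <paper><title>Sample ADASS Paper</title><author>Doe, J.</author></paper>
      <paper><title>Another Paper</title><author>Roe, R.</author></paper>
    </conference>"""

    root = ET.fromstring(doc)
    for paper in root.iter("paper"):
        print(paper.findtext("title"), "|", paper.findtext("author"))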
ERIC Educational Resources Information Center
Bale, Jeff
2016-01-01
This article addresses language rights as a legitimate political tool for language policy scholarship and activism. The article begins by engaging several critiques of language rights. It analyzes Ruiz's language-as-right orientation to language policy, and then reviews recent scholarship challenging language rights from poststructural and…
Collaborative Writing among Second Language Learners in Academic Web-Based Projects
ERIC Educational Resources Information Center
Kessler, Greg; Bikowski, Dawn; Boggs, Jordan
2012-01-01
This study investigates Web-based, project oriented, many-to-many collaborative writing for academic purposes. Thirty-eight Fulbright scholars in an orientation program at a large Midwestern university used a Web-based word processing tool to collaboratively plan and report on a research project. The purpose of this study is to explore and…
Technology Use as Transformative Pedagogy: Using Video Editing Technology to Learn about Teaching
ERIC Educational Resources Information Center
Macy, Michelle
2011-01-01
Within the paradigm of Sociocultural Theory, and using Activity Theory as a data-gathering and management tool, this microgenetic case study examined the processes--the growth, change, and development--engaged in by student-teachers in a foreign language education program as they worked together to complete an activity. The activity involved…
ERIC Educational Resources Information Center
Crossley, Scott A.; Roscoe, Rod; McNamara, Danielle S.
2014-01-01
This study identifies multiple profiles of successful essays via a cluster analysis approach using linguistic features reported by a variety of natural language processing tools. The findings from the study indicate that there are four profiles of successful writers for the samples analyzed. These four profiles are linguistically distinct from one…
Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study
ERIC Educational Resources Information Center
Gromik, Nicolas A.
2012-01-01
This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…
ERIC Educational Resources Information Center
Hella, Pertti; Niemi, Jussi; Hintikka, Jukka; Otsa, Lidia; Tirkkonen, Jani-Matti; Koponen, Hannu
2013-01-01
Background: Disorganized speech, manifested as derailment, tangentiality, incoherence and loss of goal, occurs commonly in schizophrenia. Studies of language processing have demonstrated that semantic activation in schizophrenia is often disordered and, moreover, the ability to use contextual cues is impaired. Aims: To reconstruct the origins and…
ERIC Educational Resources Information Center
Jalilian, Sahar; Rahmatian, Rouhollah; Safa, Parivash; Letafati, Roya
2016-01-01
Simultaneous bilingual education of a child is a dynamic process. Construction of linguistic competences undeniably depends on the conditions of the linguistic environment of the child. This education in a monolingual family requires parenting tactics that increase the frequency of use of the minority language, during which,…
Screen Capture Technology: A Digital Window into Students' Writing Processes
ERIC Educational Resources Information Center
Seror, Jeremie
2013-01-01
Technological innovations and the prevalence of the computer as a means of producing and engaging with texts have dramatically transformed how literacy is defined and developed in modern society. This rise in digital writing practices has led to a growing number of tools and methods that can be used to explore second language (L2) writing…
Using Text Sets to Facilitate Critical Thinking in Sixth Graders
ERIC Educational Resources Information Center
Scales, Roya Q.; Tracy, Kelly N.
2017-01-01
This case study examines features and processes of a sixth grade teacher (Jane) utilizing text sets as a tool for facilitating critical thinking. Jane's strong vision and student-centered beliefs informed her use of various texts to teach language arts as she worked to address demands of the Common Core State Standards. Text sets promoted multiple…
Tucker Signing as a Phonics Instruction Tool to Develop Phonemic Awareness in Children
ERIC Educational Resources Information Center
Valbuena, Amanda Carolina
2014-01-01
To develop reading acquisition in an effective way, it is necessary to take into account three goals during the process: automatic word recognition (the development of phonemic awareness), reading comprehension, and a desire for reading. This article focuses on promoting phonemic awareness in English as a second language through a program called…
Harnessing QbD, Programming Languages, and Automation for Reproducible Biology.
Sadowski, Michael I; Grant, Chris; Fell, Tim S
2016-03-01
Building robust manufacturing processes from biological components is a task that is highly complex and requires sophisticated tools to describe processes, inputs, and measurements and to administer the management of knowledge, data, and materials. We argue that for bioengineering to fully access biological potential, it will require application of statistically designed experiments to derive detailed empirical models of underlying systems. This requires execution of large-scale structured experimentation for which laboratory automation is necessary. This in turn requires development of expressive, high-level languages that allow reusability of protocols, characterization of their reliability, and a change in focus from implementation details to functional properties. We review recent developments in these areas and identify what we believe is an exciting trend that promises to revolutionize biotechnology. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
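As a sketch of the statistically designed experimentation the authors call for, the Python snippet below enumerates a small full-factorial design of the kind lab automation could execute; the factor names and levels are invented.

    # Enumerate a 2 x 2 x 2 full-factorial design (illustrative factors).
    from itertools import product

    factors = {
        "temperature_C": [30, 37],
        "inducer_mM": [0.1, 1.0],
        "strain": ["A", "B"],
    }

    runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
    for i, run in enumerate(runs, 1):   # 8 runs in total
        print(i, run)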
Oller, Stephen D
2005-01-01
The pragmatic mapping process and its variants have proven effective in second language learning and teaching. The goal of this paper is to show that the same process applies in teaching and intervention with disordered populations. A secondary goal, ultimately more important, is to give clinicians, teachers, and other educators a tool-kit, or a framework, from which they can evaluate and implement interventions. What is offered is an introduction to a general theory of signs and some examples of how it can be applied in treating communication disorders. (1) Readers will be able to relate the three theoretical consistency requirements to language teaching and intervention. (2) Readers will be introduced to a general theory of signs that provides a basis for evaluating and implementing interventions.
MedEx/J: A One-Scan Simple and Fast NLP Tool for Japanese Clinical Texts.
Aramaki, Eiji; Yano, Ken; Wakamiya, Shoko
2017-01-01
Because of recent replacement of physical documents with electronic medical records (EMR), the importance of information processing in the medical field has increased. In light of this trend, we have been developing MedEx/J, which retrieves important Japanese language information from medical reports. MedEx/J executes two tasks simultaneously: (1) term extraction, and (2) positive and negative event classification. We designate this approach as a one-scan approach, providing simplicity of systems and reasonable accuracy. MedEx/J performance on the two tasks is described herein: (1) term extraction (F
Russo, Paola; Piazza, Miriam; Leonardi, Giorgio; Roncoroni, Layla; Russo, Carlo; Spadaro, Salvatore; Quaglini, Silvana
2012-01-01
Blood transfusion is a complex activity subject to a high risk of potentially fatal errors. The development and application of computer-based systems could help reduce the error rate, playing a fundamental role in the improvement of the quality of care. This poster presents an eLearning tool, currently under development, that formalizes the guidelines of the transfusion process. This system, implemented in YAWL (Yet Another Workflow Language), will be used to train the personnel in order to improve the efficiency of care and to reduce errors.
The Role of Computers in Research and Development at Langley Research Center
NASA Technical Reports Server (NTRS)
Wieseman, Carol D. (Compiler)
1994-01-01
This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.
Boerma, Tessel; Chiat, Shula; Leseman, Paul; Timmermeister, Mona; Wijnen, Frank; Blom, Elma
2015-12-01
This study evaluated a newly developed quasi-universal nonword repetition task (Q-U NWRT) as a diagnostic tool for bilingual children with language impairment (LI) who have Dutch as a 2nd language. The Q-U NWRT was designed to be minimally influenced by knowledge of 1 specific language in contrast to a language-specific NWRT with which it was compared. One hundred twenty monolingual and bilingual children with and without LI participated (30 per group). A mixed-design analysis of variance was used to investigate the effects of LI and bilingualism on the NWRTs. Receiver operating characteristic analyses were conducted to evaluate the instruments' diagnostic value. Large negative effects of LI were found on both NWRTs, whereas negative effects of bilingualism only occurred on the language-specific NWRT. Both instruments had high clinical accuracy in the monolingual group, but only the Q-U NWRT had high clinical accuracy in the bilingual group. This study indicates that the Q-U NWRT is a promising diagnostic tool to help identify LI in bilingual children learning Dutch as a 2nd language. The instrument was clinically accurate in both a monolingual and bilingual group of children and seems better able to disentangle LI from language disadvantage than more language-specific measures.
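The clinical-accuracy analysis reported above can be sketched with an ROC AUC for a repetition-error score against known impairment status. The scores and labels below are invented for illustration; scikit-learn provides the metric.

    # ROC AUC for a screening score (illustrative data, not the study's).
    from sklearn.metrics import roc_auc_score

    labels = [1, 1, 1, 0, 0, 0, 0, 1]           # 1 = language impairment
    errors = [14, 11, 9, 3, 5, 2, 4, 12]        # nonword repetition errors

    print(roc_auc_score(labels, errors))        # near 1.0 = high clinical accuracy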
Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.
Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda
2015-08-31
The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for applying low-cost systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (e.g., incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding, and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate mappings of concepts. We used automated methods to detect almost half of MetaMap's 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this patient-generated health text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess the constantly evolving NLP tools and source vocabularies to process patient-generated text.
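A toy Python version of one automated check in the spirit described above: flag a candidate boundary failure when a mapped term extends to a longer phrase found in a reference dictionary. The dictionary, tokens, and spans are invented; this is not the study's detection code.

    # Flag candidate boundary failures via dictionary-based matching.
    DICTIONARY = {"breast cancer", "hot flash"}

    def boundary_failures(tokens, spans):
        """spans: (start, end) token ranges the NLP tool mapped to concepts."""
        failures = []
        for start, end in spans:
            mapped = " ".join(tokens[start:end])
            wider = " ".join(tokens[max(start - 1, 0):end])  # extend one token left
            if wider != mapped and wider in DICTIONARY:
                failures.append((mapped, wider))
        return failures

    tokens = "diagnosed with breast cancer last year".split()
    print(boundary_failures(tokens, [(3, 4)]))  # [('cancer', 'breast cancer')]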
A python tool for the implementation of domain-specific languages
NASA Astrophysics Data System (ADS)
Dejanović, Igor; Vaderna, Renata; Milosavljević, Gordana; Simić, Miloš; Vuković, Željko
2017-07-01
In this paper we describe textX, a meta-language and a tool for building Domain-Specific Languages. It is implemented in Python using the Arpeggio PEG (Parsing Expression Grammar) parser library. From a single language description (grammar), textX will build a parser and a meta-model (a.k.a. abstract syntax) of the language. The parser is used to parse textual representations of models conforming to the meta-model. As a result of parsing, a Python object graph will be automatically created. The structure of the object graph will conform to the meta-model defined by the grammar. This approach frees a developer from the need to manually analyse a parse tree and transform it to another suitable representation. The textX library is independent of any integrated development environment and can be easily integrated in any Python project. The textX tool works as a grammar interpreter. The parser is configured at run-time using the grammar. The textX tool is a free and open-source project available at GitHub.
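A minimal usage sketch following textX's documented pattern (pip install textx): one grammar string yields a meta-model whose parser turns text into a Python object graph. The toy point DSL below is invented for illustration.

    # Define a tiny DSL and parse a model with textX.
    from textx import metamodel_from_str

    GRAMMAR = """
    Model: points+=Point;
    Point: 'point' name=ID 'at' x=INT ',' y=INT;
    """

    mm = metamodel_from_str(GRAMMAR)
    model = mm.model_from_str("""
    point A at 1, 2
    point B at 3, 4
    """)

    for p in model.points:          # attribute names come from the grammar
        print(p.name, p.x, p.y)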
ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations
NASA Astrophysics Data System (ADS)
Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai
2017-07-01
The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
VHP - An environment for the remote visualization of heuristic processes
NASA Technical Reports Server (NTRS)
Crawford, Stuart L.; Leiner, Barry M.
1991-01-01
A software system called VHP is introduced which permits the visualization of heuristic algorithms on both resident and remote hardware platforms. VHP is based on the DCF tool for interprocess communication and is applicable to remote algorithms running on different types of hardware and written in languages other than that of VHP. The VHP system is of particular interest for systems in which the visualization of remote processes is required, such as robotics for telescience applications.
ERIC Educational Resources Information Center
Cajar-Bravo, Aristides
2010-01-01
This study is an action research project that analyzed the ways in which ESL students improve their language learning processes by using a media literacy video and Civics Education for social skills as teaching tools; these were presented to two groups of 12 students who were attending an ESL/Civics Education Intermediate-Advanced class in an ABE…
Demner-Fushman, D; Elhadad, N
2016-11-10
This paper reviews work over the past two years in Natural Language Processing (NLP) applied to clinical and consumer-generated texts. We included any application or methodological publication that leverages text to facilitate healthcare and address the health-related needs of consumers and populations. Many important developments in clinical text processing, both foundational and task-oriented, were addressed in community-wide evaluations and discussed in corresponding special issues that are referenced in this review. These focused issues and in-depth reviews of several other active research areas, such as pharmacovigilance and summarization, allowed us to discuss in greater depth disease modeling and predictive analytics using clinical texts, and text analysis in social media for healthcare quality assessment, trends towards online interventions based on rapid analysis of health-related posts, and consumer health question answering, among other issues. Our analysis shows that although clinical NLP continues to advance towards practical applications and more NLP methods are used in large-scale live health information applications, more needs to be done to make NLP use in clinical applications a routine widespread reality. Progress in clinical NLP is mirrored by developments in social media text analysis: the research is moving from capturing trends to addressing individual health-related posts, thus showing potential to become a tool for precision medicine and a valuable addition to the standard healthcare quality evaluation tools.
Collaborating with Youth to Inform and Develop Tools for Psychotropic Decision Making
Murphy, Andrea; Gardner, David; Kutcher, Stan; Davidson, Simon; Manion, Ian
2010-01-01
Introduction: Youth-oriented and informed resources designed to support psychopharmacotherapeutic decision-making are essentially unavailable. This article outlines the approach taken to design such resources, the product that resulted from the approach taken, and the lessons learned from the process. Methods: A project team with psychopharmacology expertise was assembled. The project team reviewed best practices regarding medication educational materials and related tools to support decisions. Collaboration with key stakeholders who were thought of as primary end-users and target groups occurred. A graphic designer and a plain language consultant were also retained. Results: Through an iterative and collaborative process over approximately 6 months, Med Ed and Med Ed Passport were developed. Literature and input from key stakeholders, in particular youth, were instrumental to the development of the tools and materials within Med Ed. A training program utilizing a train-the-trainer model was developed to facilitate the implementation of Med Ed in Ontario, which is currently ongoing. Conclusion: An evidence-informed process that includes youth and key stakeholder engagement is required for developing tools to support psychopharmacotherapeutic decision-making. The development process fostered an environment of reciprocity between the project team and key stakeholders. PMID:21037916
Toward a molecular programming language for algorithmic self-assembly
NASA Astrophysics Data System (ADS)
Patitz, Matthew John
Self-assembly is the process whereby relatively simple components autonomously combine to form more complex objects. Nature exhibits self-assembly to form everything from microscopic crystals to living cells to galaxies. With a desire to both form increasingly sophisticated products and to understand the basic components of living systems, scientists have developed and studied artificial self-assembling systems. One such framework is the Tile Assembly Model introduced by Erik Winfree in 1998. In this model, simple two-dimensional square 'tiles' are designed so that they self-assemble into desired shapes. The work in this thesis consists of a series of results which build toward the future goal of designing an abstracted, high-level programming language for designing the molecular components of self-assembling systems which can perform powerful computations and form into intricate structures. The first two sets of results demonstrate self-assembling systems which perform infinite series of computations that characterize computably enumerable and decidable languages, and exhibit tools for algorithmically generating the necessary sets of tiles. In the next chapter, methods for generating tile sets which self-assemble into complicated shapes, namely a class of discrete self-similar fractal structures, are presented. Next, a software package for graphically designing tile sets, simulating their self-assembly, and debugging designed systems is discussed. Finally, a high-level programming language which abstracts much of the complexity and tedium of designing such systems, while preventing many of the common errors, is presented. The summation of this body of work presents a broad coverage of the spectrum of desired outputs from artificial self-assembling systems and a progression in the sophistication of tools used to design them. By creating a broader and deeper set of modular tools for designing self-assembling systems, we hope to increase the complexity which is attainable. These tools provide a solid foundation for future work in both the Tile Assembly Model and explorations into more advanced models.
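To make the model concrete, here is a minimal Python sketch (our illustration, not code from the thesis) of the abstract Tile Assembly Model's attachment rule: a square tile with labeled glues may join a growing assembly only where the total strength of its matching glues meets the temperature threshold.

```python
# Toy abstract Tile Assembly Model attachment check; tile names and glue
# labels are invented for illustration.
from collections import namedtuple

Tile = namedtuple("Tile", "name north east south west")  # glue = (label, strength) or None
TEMPERATURE = 2

def binding_strength(assembly, pos, tile):
    """Sum the strengths of glues matching tiles already placed around pos."""
    x, y = pos
    # (neighbor position, my side field index, neighbor's facing side field index)
    sides = [((x, y + 1), 1, 3), ((x + 1, y), 2, 4),
             ((x, y - 1), 3, 1), ((x - 1, y), 4, 2)]
    total = 0
    for npos, mine, theirs in sides:
        if npos in assembly:
            g1, g2 = tile[mine], assembly[npos][theirs]
            if g1 and g2 and g1 == g2:  # glues match on label and strength
                total += g1[1]
    return total

seed = Tile("seed", ("a", 2), None, None, None)
grow = Tile("grow", ("a", 2), None, ("a", 2), None)

assembly = {(0, 0): seed}
if binding_strength(assembly, (0, 1), grow) >= TEMPERATURE:
    assembly[(0, 1)] = grow  # attaches: the strength-2 glue meets tau = 2
```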
Valdez, Joshua; Rueschman, Michael; Kim, Matthew; Redline, Susan; Sahoo, Satya S
2016-10-01
Extraction of structured information from biomedical literature is a complex and challenging problem due to the complexity of the biomedical domain and the lack of appropriate natural language processing (NLP) techniques. High quality domain ontologies model both data and metadata information at a fine level of granularity, which can be effectively used to accurately extract structured information from biomedical text. Extraction of provenance metadata, which describes the history or source of information, from published articles is an important task to support scientific reproducibility. Reproducibility of results reported by previous research studies is a foundational component of scientific advancement. This is highlighted by the recent initiative by the US National Institutes of Health called "Principles of Rigor and Reproducibility". In this paper, we describe an effective approach to extract provenance metadata from published biomedical research literature using an ontology-enabled NLP platform as part of the Provenance for Clinical and Healthcare Research (ProvCaRe) project. The ProvCaRe-NLP tool extends the clinical Text Analysis and Knowledge Extraction System (cTAKES) platform using both provenance and biomedical domain ontologies. We demonstrate the effectiveness of the ProvCaRe-NLP tool using a corpus of 20 peer-reviewed publications. The results of our evaluation demonstrate that the ProvCaRe-NLP tool has significantly higher recall in extracting provenance metadata as compared to existing NLP pipelines such as MetaMap.
Structural and functional neural correlates of music perception.
Limb, Charles J
2006-04-01
This review article highlights state-of-the-art functional neuroimaging studies and demonstrates the novel use of music as a tool for the study of human auditory brain structure and function. Music is a unique auditory stimulus with properties that make it a compelling tool with which to study both human behavior and, more specifically, the neural elements involved in the processing of sound. Functional neuroimaging techniques represent a modern and powerful method of investigation into neural structure and functional correlates in the living organism. These methods have demonstrated a close relationship between the neural processing of music and language, both syntactically and semantically. Greater neural activity and increased volume of gray matter in Heschl's gyrus have been associated with musical aptitude. Activation of Broca's area, a region traditionally considered to subserve language, is important in interpreting whether a note is on or off key. The planum temporale shows asymmetries that are associated with the phenomenon of perfect pitch. Functional imaging studies have also demonstrated activation of primitive emotional centers such as the ventral striatum, midbrain, amygdala, orbitofrontal cortex, and ventral medial prefrontal cortex in listeners of moving musical passages. In addition, studies of melody and rhythm perception have elucidated mechanisms of hemispheric specialization. These studies show the power of music and functional neuroimaging to provide singularly useful tools for the study of brain structure and function.
ERIC Educational Resources Information Center
Rettig, Heike, Ed.
This proceedings contains papers from the first European seminar of the Trans-European Language Resources Infrastructure (TELRI), including: "Cooperation with Central and Eastern Europe in Language Engineering" (Poul Andersen); "Language Technology and Language Resources in China" (Feng Zhiwei); "Public Domain Generic Tools:…
Language Schemes--A Useful Policy Tool for Language Planning?
ERIC Educational Resources Information Center
Ó Flatharta, Peadar
2015-01-01
The Irish language is recognised in Bunreacht na hÉireann [The Constitution of Ireland] as the national and first official language, and provisions to support the language are to be found in c.120 specific enactments in Irish legislation. In 2007, the Irish language was designated as an official working language of the European Union. In 2003, the…
Legacy model integration for enhancing hydrologic interdisciplinary research
NASA Astrophysics Data System (ADS)
Dozier, A.; Arabi, M.; David, O.
2013-12-01
Many challenges are introduced to interdisciplinary research in and around the hydrologic science community due to advances in computing technology and modeling capabilities in different programming languages, across different platforms and frameworks, by researchers in a variety of fields with a variety of experience in computer programming. Many new hydrologic models as well as optimization, parameter estimation, and uncertainty characterization techniques are developed in scripting languages such as Matlab, R, Python, or in newer languages such as Java and the .Net languages, whereas many legacy models have been written in FORTRAN and C, which complicates inter-model communication for two-way feedback. However, most hydrologic researchers and industry personnel have little knowledge of the computing technologies that are available to address the model integration process. Therefore, the goal of this study is to address these new challenges by utilizing a novel approach based on a publish-subscribe-type system to enhance modeling capabilities of legacy socio-economic, hydrologic, and ecologic software. Enhancements include massive parallelization of executions and access to legacy model variables at any point during the simulation process by another program without having to compile all the models together into an inseparable 'super-model'. Thus, this study provides two-way feedback mechanisms between multiple different process models that can be written in various programming languages and can run on different machines and operating systems. Additionally, a level of abstraction is given to the model integration process that allows researchers and other technical personnel to perform more detailed and interactive modeling, visualization, optimization, calibration, and uncertainty analysis without requiring a deep understanding of inter-process communication. To be compatible, a program must be written in a programming language with bindings to a common implementation of the message passing interface (MPI), which includes FORTRAN, C, Java, the .NET languages, Python, R, Matlab, and many others. The system is tested on a longstanding legacy hydrologic model, the Soil and Water Assessment Tool (SWAT), to observe and enhance speed-up capabilities for various optimization, parameter estimation, and model uncertainty characterization techniques, which is particularly important for computationally intensive hydrologic simulations. Initial results indicate that the legacy extension system significantly decreases developer time, computation time, and the cost of purchasing commercial parallel processing licenses, while enhancing interdisciplinary research by providing detailed two-way feedback mechanisms between various process models with minimal changes to legacy code.
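Under the MPI requirement the abstract states, a minimal mpi4py sketch of such two-way feedback between two coupled model processes might look like the following (our illustration, not the study's actual publish-subscribe framework; the "flow" and "demand" variables are invented placeholders). Run with, e.g., `mpiexec -n 2 python coupled.py`.

```python
# Two toy model processes exchanging state each timestep over MPI,
# instead of being compiled together into one "super-model".
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

state = {"flow": 1.0} if rank == 0 else {"demand": 0.5}

for step in range(3):
    if rank == 0:  # e.g., a hydrologic model
        comm.send(state["flow"], dest=1, tag=step)
        state["flow"] -= comm.recv(source=1, tag=step)  # feedback from rank 1
    elif rank == 1:  # e.g., a socio-economic model
        flow = comm.recv(source=0, tag=step)
        comm.send(min(state["demand"], flow), dest=0, tag=step)
```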
SuML: A Survey Markup Language for Generalized Survey Encoding
Barclay, MW; Lober, WB; Karras, BT
2002-01-01
There is a need in clinical and research settings for a sophisticated, generalized, web-based survey tool that supports complex logic, separation of content and presentation, and computable guidelines. There are many commercial and open source survey packages available that provide simple logic; few provide sophistication beyond “goto” statements; none support the use of guidelines. These tools are driven by databases, static web pages, and structured documents using markup languages such as eXtensible Markup Language (XML). We propose a generalized, guideline-aware language and an implementation architecture using open source standards.
A Single-Display Groupware Collaborative Language Laboratory
ERIC Educational Resources Information Center
Calderón, Juan Felipe; Nussbaum, Miguel; Carmach, Ignacio; Díaz, Juan Jaime; Villalta, Marco
2016-01-01
Language learning tools have evolved to take into consideration new teaching models of collaboration and communication. While second language acquisition tasks have been taken online, the traditional language laboratory has remained unchanged. By continuing to follow its original configuration based on individual work, the language laboratory…
Depression in Aboriginal men in central Australia: adaptation of the Patient Health Questionnaire 9
2013-01-01
Background: While Indigenous Australians are believed to be at a high risk of psychological illness, few screening instruments have been designed to accurately measure this burden. Rather than simply transposing western labels of symptoms, this paper describes the process by which a screening tool for depression was specifically adapted for use across multiple Indigenous Australian communities. Method: Potential depression screening instruments were identified and interrogated according to a set of pre-defined criteria. A structured process was then developed which relied on the expertise of five focus groups comprising members from primary Indigenous language groups in central Australia. First, focus group participants were asked to review and select a screening measure for adaptation. Bi-lingual experts then translated and back-translated the language within the selected measure. Focus group participants re-visited the difficult items, explored their meaning and identified potential ways to achieve equivalence of meaning. Results: All five focus groups independently selected the Patient Health Questionnaire 9; several key conceptual differences were exposed, largely related to the construction of hopelessness. Together with translated versions of the instrument for each of the five languages, a single, simplified English version for use across heterogeneous settings was negotiated. Importantly, the ‘code’ and specific conceptually equivalent words that could be used for other Indigenous language groups were also developed. Conclusions: The extensive process of adaptation used in this study has demonstrated that within the context of Indigenous Australian communities, across multiple language groups, where English is often a third or fourth language, conceptual and linguistic equivalence of psychological constructs can be negotiated. A validation study is now required to assess the adapted instrument’s potential for measuring the burden of disease across all Indigenous Australian populations. PMID:24139186
Comprehension of Spacecraft Telemetry Using Hierarchical Specifications of Behavior
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Joshi, Rajeev
2014-01-01
A key challenge in operating remote spacecraft is that ground operators must rely on the limited visibility available through spacecraft telemetry in order to assess spacecraft health and operational status. We describe a tool for processing spacecraft telemetry that allows ground operators to impose structure on received telemetry in order to achieve a better comprehension of system state. A key element of our approach is the design of a domain-specific language that allows operators to express models of expected system behavior using partial specifications. The language allows behavior specifications with data fields, similar to other recent runtime verification systems. What is notable about our approach is the ability to develop hierarchical specifications of behavior. The language is implemented as an internal DSL in the Scala programming language that synthesizes rules from patterns of specification behavior. The rules are automatically applied to received telemetry and the inferred behaviors are available to ground operators using a visualization interface that makes it easier to understand and track spacecraft state. We describe initial results from applying our tool to telemetry received from the Curiosity rover currently roving the surface of Mars, where the visualizations are being used to trend subsystem behaviors, in order to identify potential problems before they happen. However, the technology is completely general and can be applied to any system that generates telemetry such as event logs.
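The paper's specification language is an internal Scala DSL; purely as an analogue (our sketch, with invented event and command names), the Python below shows the kind of rule such behavior specifications reduce to: track dispatched commands and flag any that are never completed.

```python
# Toy telemetry rule, an analogue of a "dispatch is eventually completed"
# behavior specification; event structure here is hypothetical.
def check_dispatch_complete(events):
    pending = set()
    for ev in events:
        if ev["name"] == "DISPATCH":
            pending.add(ev["cmd"])
        elif ev["name"] == "COMPLETE":
            pending.discard(ev["cmd"])
    return pending  # commands dispatched but never completed

log = [{"name": "DISPATCH", "cmd": "TAKE_IMAGE"},
       {"name": "COMPLETE", "cmd": "TAKE_IMAGE"},
       {"name": "DISPATCH", "cmd": "DRIVE"}]
print(check_dispatch_complete(log))  # {'DRIVE'} would be flagged for operators
```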
Sakji, Saoussen; Gicquel, Quentin; Pereira, Suzanne; Kergourlay, Ivan; Proux, Denys; Darmoni, Stéfan; Metzger, Marie-Hélène
2010-01-01
Surveillance of healthcare-associated infections is essential to prevention. A new collaborative project, namely ALADIN, was launched in January 2009 and aims to develop an automated detection tool based on natural language processing of medical documents. The objective of this study was to evaluate the annotation of natural language medical reports of healthcare-associated infections. An MS Access application (NosIndex) was developed to interface ECMT's XML answers with the manual annotation work. ECMT's performance was evaluated by an infection control practitioner (ICP). Precision was evaluated for the 2 modules, and recall only for the default module. The exclusion rate was defined as the ratio between medical terms not found by ECMT and the total number of terms evaluated. The medical discharge summaries were randomly selected from 4 medical wards. Of the 247 medical terms evaluated, ECMT proposed 428 and 3,721 codes, respectively, for the default and expansion modules. The precision was higher with the default module (P1=0.62) than with the expansion (P2=0.47). The performance of ECMT as a support tool for medical annotation was satisfactory.
iBIOMES Lite: Summarizing Biomolecular Simulation Data in Limited Settings
2015-01-01
As the amount of data generated by biomolecular simulations dramatically increases, new tools need to be developed to help manage this data at the individual investigator or small research group level. In this paper, we introduce iBIOMES Lite, a lightweight tool for biomolecular simulation data indexing and summarization. The main goal of iBIOMES Lite is to provide a simple interface to summarize computational experiments in a setting where the user might have limited privileges and limited access to IT resources. A command-line interface allows the user to summarize, publish, and search local simulation data sets. Published data sets are accessible via static hypertext markup language (HTML) pages that summarize the simulation protocols and also display data analysis graphically. The publication process is customized via extensible markup language (XML) descriptors while the HTML summary template is customized through extensible stylesheet language (XSL). iBIOMES Lite was tested on different platforms and at several national computing centers using various data sets generated through classical and quantum molecular dynamics, quantum chemistry, and QM/MM. The associated parsers currently support AMBER, GROMACS, Gaussian, and NWChem data set publication. The code is available at https://github.com/jcvthibault/ibiomes. PMID:24830957
ChemicalTagger: A tool for semantic text-mining in chemistry.
Hawizy, Lezan; Jessop, David M; Adams, Nico; Murray-Rust, Peter
2011-05-16
The primary method for scientific communication is in the form of published scientific articles and theses which use natural language combined with domain-specific terminology. As such, they contain free-flowing unstructured text. Given the usefulness of data extraction from unstructured literature, we aim to show how this can be achieved for the discipline of chemistry. The highly formulaic style of writing most chemists adopt makes their contributions well suited to high-throughput Natural Language Processing (NLP) approaches. We have developed the ChemicalTagger parser as a medium-depth, phrase-based semantic NLP tool for the language of chemical experiments. Tagging is based on a modular architecture and uses a combination of OSCAR, domain-specific regex and English taggers to identify parts-of-speech. The ANTLR grammar is used to structure this into tree-based phrases. Using a metric that allows for overlapping annotations, we achieved machine-annotator agreements of 88.9% for phrase recognition and 91.9% for phrase-type identification (Action names). It is thus possible to parse chemical experimental text using rule-based techniques in conjunction with a formal grammar parser. ChemicalTagger has been deployed for over 10,000 patents and has identified solvents from their linguistic context with >99.5% precision.
English for Business: Student Responses to Language Learning through Social Networking Tools
ERIC Educational Resources Information Center
García Laborda, Jesús; Litzler, Mary Frances
2017-01-01
This action research based case study addresses the situation of a first year class of Business English students at Universidad de Alcalá and their attitudes towards using Web 2.0 tools and social media for language learning. During the semester, the students were asked to collaborate in the creation and use of some tools such as blogs, video…
Motor-Iconicity of Sign Language Does Not Alter the Neural Systems Underlying Tool and Action Naming
ERIC Educational Resources Information Center
Emmorey, Karen; Grabowski, Thomas; McCullough, Stephen; Damasio, Hannah; Ponto, Laurie; Hichwa, Richard; Bellugi, Ursula
2004-01-01
Positron emission tomography was used to investigate whether the motor-iconic basis of certain forms in American Sign Language (ASL) partially alters the neural systems engaged during lexical retrieval. Most ASL nouns denoting tools and ASL verbs referring to tool-based actions are produced with a handshape representing the human hand holding a…
Torres, Samantha; de la Riva, Erika E; Tom, Laura S; Clayman, Marla L; Taylor, Chirisse; Dong, Xinqi; Simon, Melissa A
2015-12-01
Despite increasing need to boost the recruitment of underrepresented populations into cancer trials and biobanking research, few tools exist for facilitating dialogue between researchers and potential research participants during the recruitment process. In this paper, we describe the initial processes of a user-centered design cycle to develop a standardized research communication tool prototype for enhancing research literacy among individuals from underrepresented populations considering enrollment in cancer research and biobanking studies. We present qualitative feedback and recommendations on the prototype's design and content from potential end users: five clinical trial recruiters and ten potential research participants recruited from an academic medical center. Participants were given the prototype (a set of laminated cards) and were asked to provide feedback about the tool's content, design elements, and word choices during semi-structured, in-person interviews. Results suggest that the prototype was well received by recruiters and patients alike. They favored the simplicity, lay language, and layout of the cards. They also noted areas for improvement, leading to card refinements that included the following: addressing additional topic areas, clarifying research processes, increasing the number of diverse images, and using alternative word choices. Our process for refining user interfaces and iterating content in early phases of design may inform future efforts to develop tools for use in clinical research or biobanking studies to increase research literacy.
Expert consensus on best evaluative practices in community-based rehabilitation.
Grandisson, Marie; Thibeault, Rachel; Hébert, Michèle; Cameron, Debra
2016-01-01
The objective of this study was to generate expert consensus on best evaluative practices for community-based rehabilitation (CBR). This consensus includes key features of the evaluation process and methods, and discussion of whether a shared framework should be used to report findings and, if so, which framework should play this role. A Delphi study with two predefined rounds was conducted. Experts in CBR from a wide range of geographical areas and disciplinary backgrounds were recruited to complete the questionnaires. Both quantitative and qualitative analyses were performed to generate the recommendations for best practices in CBR evaluation. A panel of 42 experts reached consensus on 13 recommendations for best evaluative practices in CBR. In regard to the critical qualities of sound CBR evaluation processes, panellists emphasized that these processes should be inclusive, participatory, empowering and respectful of local cultures and languages. The group agreed that evaluators should consider the use of mixed methods and participatory tools, and should combine indicators from a universal list of CBR indicators with locally generated ones. The group also agreed that a common framework should guide CBR evaluations, and that this framework should be a flexible combination between the CBR Matrix and the CBR Principles. An expert panel reached consensus on key features of best evaluative practices in CBR. Knowledge transfer initiatives are now required to develop guidelines, tools and training opportunities to facilitate CBR program evaluations. CBR evaluation processes should strive to be inclusive, participatory, empowering and respectful of local cultures and languages. CBR evaluators should strongly consider using mixed methods, participatory tools, a combination of indicators generated with the local community and with others from a bank of CBR indicators. CBR evaluations should be situated within a shared, but flexible, framework. This shared framework could combine the CBR Matrix and the CBR Principles.
Handling or being the concept: An fMRI study on metonymy representations in coverbal gestures.
Joue, Gina; Boven, Linda; Willmes, Klaus; Evola, Vito; Demenescu, Liliana R; Hassemer, Julius; Mittelberg, Irene; Mathiak, Klaus; Schneider, Frank; Habel, Ute
2018-01-31
In "Two heads are better than one," "head" stands for people and focuses the message on the intelligence of people. This is an example of figurative language through metonymy, where substituting a whole entity by one of its parts focuses attention on a specific aspect of the entity. Whereas metaphors, another figurative language device, are substitutions based on similarity, metonymy involves substitutions based on associations. Both are figures of speech but are also expressed in coverbal gestures during multimodal communication. The closest neuropsychological studies of metonymy in gestures have been nonlinguistic tool-use, illustrated by the classic apraxic problem of body-part-as-object (BPO, equivalent to an internal metonymy representation of the tool) vs. pantomimed action (external metonymy representation of the absent object/tool). Combining these research domains with concepts in cognitive linguistic research on gestures, we conducted an fMRI study to investigate metonymy resolution in coverbal gestures. Given the greater difficulty in developmental and apraxia studies, perhaps explained by the more complex semantic inferencing involved for external metonymy than for internal metonymy representations, we hypothesized that external metonymy resolution requires greater processing demands and that the neural resources supporting metonymy resolution would modulate regions involved in semantic processing. We found that there are indeed greater activations for external than for internal metonymy resolution in the temporoparietal junction (TPJ). This area is posterior to the lateral temporal regions recruited by metaphor processing. Effective connectivity analysis confirmed our hypothesis that metonymy resolution modulates areas implicated in semantic processing. We interpret our results in an interdisciplinary view of what metonymy in action can reveal about abstract cognition. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rofes, A; Spena, G; Miozzo, A; Fontanella, M M; Miceli, G
2015-12-01
Multidisciplinary efforts are being made to provide surgical teams with sensitive and specific tasks for language mapping in awake surgery. Researchers and clinicians have elaborated different tasks over time. A fair amount of work has been directed to study the neurofunctional correlates of some of these tasks, and there is recent interest in their standardization. However, little discussion exists on the advantages and disadvantages that each task poses from the perspective of the cognitive neuroscience of language. Such an approach may be a relevant step to assess task validity, to avoid using tasks that tap onto similar processes, and to provide patients with a surgical treatment that ensures maximal tumor resection while avoiding postoperative language deficits. An understanding of the language components that each task entails may also be relevant to improve the current assessments and the ways in which tasks are administered, and to disentangle neurofunctional questions. We reviewed 17 language mapping tasks that have been used in awake surgery. Overt production tasks have been a preferred choice over comprehension tasks. Tasks tapping lexico-semantic processes, particularly object-naming, maintain their role as gold standards. Automated speech tasks are used to detect speech errors and to set the amplitude of the stimulator. Comprehension tasks, reading and writing tasks, and tasks that assess grammatical aspects of language may be regularly administered in the near future. We provide examples of a three-task approach we are administering to patients with prefrontal lesions. We believe that future advances in this area are contingent upon reviewing gold standards and introducing new assessment tools.
Open Source Clinical NLP – More than Any Single System
Masanz, James; Pakhomov, Serguei V.; Xu, Hua; Wu, Stephen T.; Chute, Christopher G.; Liu, Hongfang
2014-01-01
The number of Natural Language Processing (NLP) tools and systems for processing clinical free-text has grown as interest and processing capability have surged. Unfortunately, any two systems typically cannot simply interoperate, even when both are built upon a framework designed to facilitate the creation of pluggable components. We present two ongoing activities promoting open source clinical NLP. The Open Health Natural Language Processing (OHNLP) Consortium was originally founded to foster a collaborative community around clinical NLP, releasing UIMA-based open source software. OHNLP’s mission currently includes maintaining a catalog of clinical NLP software and providing interfaces to simplify the interaction of NLP systems. Meanwhile, Apache cTAKES aims to integrate best-of-breed annotators, providing a world-class NLP system for accessing clinical information within free-text. These two activities are complementary. OHNLP promotes open source clinical NLP activities in the research community and Apache cTAKES bridges research to the health information technology (HIT) practice. PMID:25954581
Liaw, Siaw-Teng; Deveny, Elizabeth; Morrison, Iain; Lewis, Bryn
2006-09-01
Using a factorial vignette survey and modeling methodology, we developed clinical and information models - incorporating evidence base, key concepts, relevant terms, decision-making and workflow needed to practice safely and effectively - to guide the development of an integrated rule-based knowledge module to support prescribing decisions in asthma. We identified workflows, decision-making factors, factor use, and clinician information requirements. The Unified Modeling Language (UML) and public domain software and knowledge engineering tools (e.g. Protégé) were used, with the Australian GP Data Model as the starting point for expressing information needs. A Web Services service-oriented architecture approach was adopted within which to express functional needs, and clinical processes and workflows were expressed in the Business Process Execution Language (BPEL). This formal analysis and modeling methodology to define and capture the process and logic of prescribing best practice in a reference implementation is fundamental to tackling deficiencies in prescribing decision support software.
Towards health care process description framework: an XML DTD design.
Staccini, P.; Joubert, M.; Quaranta, J. F.; Aymard, S.; Fieschi, D.; Fieschi, M.
2001-01-01
The development of health care and hospital information systems has to meet users' needs as well as requirements such as the tracking of all care activities and the support of quality improvement. The use of process-oriented analysis is of value in providing analysts with: (i) a systematic description of activities; (ii) the elicitation of the useful data to perform and record care tasks; (iii) the selection of relevant decision-making support. But paper-based tools are not a very suitable way to manage and share the documentation produced during this step. The purpose of this work is to propose a method to implement the results of process analysis according to XML techniques (eXtensible Markup Language). It is based on the IDEF0 activity modeling language (Integration DEfinition for Function modeling). A hierarchical description of a process and its components has been defined through a flat XML file with a grammar of proper metadata tags. Perspectives of this method are discussed. PMID:11825265
NASA Astrophysics Data System (ADS)
Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.
2017-10-01
A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and since 2016 with the High-resolution Fast Imager (HiFI). These data are processed in standardized procedures with the aim of providing science-ready data for the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools", based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated as well as a webpage with an overview of the observations and their statistics. All the processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.
ERIC Educational Resources Information Center
Bachman, Lyle F.
1989-01-01
Applied linguistics and psychometrics have influenced language testing, providing additional tools for investigating factors affecting language test performance and assuring measurement reliability. An examination is presented of language testing, including the theoretical issues involved, the methodological advances, language test development,…
Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data.
Barre, Arnaud; Armand, Stéphane
2014-04-01
The C3D file format is widely used in the biomechanical field by companies and laboratories to store motion capture system data. However, few software packages can visualize and modify the integrality of the data in a C3D file. Our objective was to develop an open-source and multi-platform framework to read, write, modify and visualize data from any motion analysis system using standard (C3D) and proprietary file formats (used by many companies producing motion capture systems). The Biomechanical ToolKit (BTK) was developed to provide cost-effective and efficient tools for the biomechanical community to easily deal with motion analysis data. A large panel of operations is available to read, modify and process data through a C++ API, bindings for high-level languages (Matlab, Octave, and Python), and a standalone application (Mokka). All these tools are open-source and cross-platform and run on all major operating systems (Windows, Linux, MacOS X). Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
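As a short sketch of the workflow through BTK's documented Python bindings (the file name and marker label below are hypothetical):

```python
# Read a C3D acquisition with BTK's Python bindings; "gait.c3d" and the
# marker label "LASI" are placeholder examples.
import btk

reader = btk.btkAcquisitionFileReader()
reader.SetFilename("gait.c3d")
reader.Update()
acq = reader.GetOutput()

print(acq.GetPointFrequency())   # marker sampling rate in Hz
marker = acq.GetPoint("LASI")    # one labeled marker trajectory
print(marker.GetValues().shape)  # (n_frames, 3) numpy array of coordinates
```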
NASA Astrophysics Data System (ADS)
Doerr, Martin; Freitas, Fred; Guizzardi, Giancarlo; Han, Hyoil
Ontology is a cross-disciplinary field concerned with the study of concepts and theories that can be used for representing shared conceptualizations of specific domains. Ontological Engineering is a discipline in computer and information science concerned with the development of techniques, methods, languages and tools for the systematic construction of concrete artifacts capturing these representations, i.e., models (e.g., domain ontologies) and metamodels (e.g., upper-level ontologies). In recent years, there has been a growing interest in the application of formal ontology and ontological engineering to solve modeling problems in diverse areas in computer science such as software and data engineering, knowledge representation, natural language processing, information science, among many others.
Speed up of XML parsers with PHP language implementation
NASA Astrophysics Data System (ADS)
Georgiev, Bozhidar; Georgieva, Adriana
2012-11-01
In this paper, the authors introduce PHP5's XML implementation and show how to read, parse, and write a short and uncomplicated XML file using SimpleXML in a PHP environment. The possibilities for combining the PHP5 language with the XML standard are described, and the details of the parsing process with SimpleXML are clarified. A practical PHP-XML-MySQL project demonstrates the advantages of XML implementation in PHP modules. This approach allows a comparatively simple search of hierarchical XML data by means of PHP software tools. The proposed project includes a database, which can be extended with new data and new XML parsing functions.
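The paper's examples are in PHP; for cross-reference only, here is a rough Python analogue of the same read-parse-modify-write cycle using the standard library (the document content and file name are invented):

```python
# Python stand-in for the SimpleXML workflow described above.
import xml.etree.ElementTree as ET

xml_text = "<books><book id='1'><title>PHP and XML</title></book></books>"
root = ET.fromstring(xml_text)        # parse the document
for book in root.iter("book"):        # walk the hierarchy
    print(book.get("id"), book.findtext("title"))

ET.SubElement(root, "book", id="2")   # modify the tree in place
ET.ElementTree(root).write("books.xml")  # write it back out
```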
An introduction to scripting in Ruby for biologists
Aerts, Jan; Law, Andy
2009-01-01
The Ruby programming language has a lot to offer to any scientist with electronic data to process. Not only is the initial learning curve very shallow, but its reflection and meta-programming capabilities allow for the rapid creation of relatively complex applications while still keeping the code short and readable. This paper provides a gentle introduction to this scripting language for researchers without formal informatics training such as many wet-lab scientists. We hope this will provide such researchers an idea of how powerful a tool Ruby can be for their data management tasks and encourage them to learn more about it. PMID:19607723
Topaz, Maxim; Lai, Kenneth; Dowding, Dawn; Lei, Victor J; Zisberg, Anna; Bowles, Kathryn H; Zhou, Li
2016-12-01
Electronic health records are being increasingly used by nurses, with up to 80% of the health data recorded as free text. However, only a few studies have developed nursing-relevant tools that help busy clinicians to identify information they need at the point of care. This study developed and validated one of the first automated natural language processing applications to extract wound information (wound type, pressure ulcer stage, wound size, anatomic location, and wound treatment) from free-text clinical notes. First, two human annotators manually reviewed a purposeful training sample (n=360) and a random test sample (n=1100) of clinical notes (including 50% discharge summaries and 50% outpatient notes), identified wound cases, and created a gold standard dataset. We then trained and tested our natural language processing system (known as MTERMS) to process the wound information. Finally, we assessed our automated approach by comparing system-generated findings against the gold standard. We also compared the prevalence of wound cases identified from free-text data with coded diagnoses in the structured data. The testing dataset included 101 notes (9.2%) with wound information. The overall system performance was good (F-measure = 92.7%, where the F-measure combines precision and recall into a single accuracy score), with best results for wound treatment (F-measure = 95.7%) and poorest results for wound size (F-measure = 81.9%). Only 46.5% of wound notes had a structured code for a wound diagnosis. The natural language processing system achieved good performance on a subset of randomly selected discharge summaries and outpatient notes. In more than half of the wound notes there were no coded wound diagnoses, which highlights the significance of using natural language processing to enrich clinical decision making. Our future steps will include expansion of the application's information coverage to other relevant wound factors and validation of the model with external data. Copyright © 2016 Elsevier Ltd. All rights reserved.
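As a reading aid (our addition, assuming the standard balanced F1 conventional in NLP evaluation), the F-measure is the harmonic mean of precision and recall:

```latex
F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
```

so the 92.7% overall figure summarizes both false positives and false negatives in a single number.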
A Pythonic Approach for Computational Geosciences and Geo-Data Processing
NASA Astrophysics Data System (ADS)
Morra, G.; Yuen, D. A.; Lee, S. M.
2016-12-01
Computational methods and data analysis play a constantly increasing role in the Earth sciences; however, students and professionals need to climb a steep learning curve before reaching a level that allows them to run effective models. Furthermore, the recent arrival of powerful new machine learning tools such as Torch and TensorFlow has opened new possibilities but also created a new realm of complications related to the completely different technology employed. We present here a series of examples entirely written in Python, a language that combines the simplicity of Matlab with the power and speed of compiled languages such as C, and apply them to a wide range of geological processes such as porous media flow, multiphase fluid dynamics, creeping flow, and many-fault interaction. We also explore ways in which machine learning can be employed in combination with numerical modelling, from immediately interpreting a large number of modeling results to optimizing a set of modeling parameters to obtain a desired optimal simulation. We show that by using Python, undergraduate and graduate students can learn advanced numerical technologies with a minimum of dedicated effort, which in turn encourages them to develop more numerical tools and quickly progress in their computational abilities. We also show how Python allows combining modeling with machine learning like LEGO pieces, thereby simplifying the transition towards a new kind of scientific geo-modelling. The conclusion is that Python is an ideal tool for creating an infrastructure for the geosciences that allows users to quickly develop tools, reuse techniques, and encourage collaborative efforts to interpret and integrate geo-data in profound new ways.
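In the spirit of the examples described (our own minimal sketch, not one from the abstract), an explicit finite-difference solver for 1D diffusion, a simple stand-in for porous media flow, fits in a few lines of plain numpy; the grid size and coefficients are arbitrary.

```python
# 1D diffusion with an explicit central-difference scheme.
import numpy as np

nx, dx, dt, kappa = 101, 1.0, 0.2, 1.0  # arbitrary grid and coefficients
p = np.zeros(nx)
p[nx // 2] = 1.0                        # initial pressure pulse

for _ in range(500):
    # second-order central difference on interior points (stable: kappa*dt/dx**2 <= 0.5)
    p[1:-1] += kappa * dt / dx**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])

print(p.max())                          # the pulse has spread and flattened
```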
Multi-Spacecraft Analysis with Generic Visualization Tools
NASA Astrophysics Data System (ADS)
Mukherjee, J.; Vela, L.; Gonzalez, C.; Jeffers, S.
2010-12-01
To handle the needs of scientists today and in the future, software tools are going to have to take better advantage of the currently available hardware. Specifically, computing power, memory, and disk space have become cheaper, while bandwidth has become more expensive due to the explosion of online applications. To overcome these limitations, we have enhanced our Southwest Data Display and Analysis System (SDDAS) to take better advantage of the hardware by utilizing threads and data caching. Furthermore, the system was enhanced to support a framework for adding data formats and data visualization methods without costly rewrites. Visualization tools can speed analysis of many common scientific tasks and we will present a suite of tools that encompass the entire process of retrieving data from multiple data stores to common visualizations of the data. The goals for the end user are ease of use and interactivity with the data and the resulting plots. The data can be simultaneously plotted in a variety of formats and/or time and spatial resolutions. The software will allow one to slice and separate data to achieve other visualizations. Furthermore, one can interact with the data using the GUI or through an embedded language based on the Lua scripting language. The data presented will be primarily from the Cluster and Mars Express missions; however, the tools are data type agnostic and can be used for virtually any type of data.
An Integrated Tool for System Analysis of Sample Return Vehicles
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Maddock, Robert W.; Winski, Richard G.
2012-01-01
The next important step in space exploration is the return of sample materials from extraterrestrial locations to Earth for analysis. Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle. The analysis and design of entry vehicles is multidisciplinary in nature, requiring the application of mass sizing, flight mechanics, aerodynamics, aerothermodynamics, thermal analysis, structural analysis, and impact analysis tools. Integration of a multidisciplinary problem is a challenging task; the execution process and data transfer among disciplines should be automated and consistent. This paper describes an integrated analysis tool for the design and sizing of an Earth entry vehicle. The current tool includes the following disciplines: mass sizing, flight mechanics, aerodynamics, aerothermodynamics, and impact analysis tools. Python and Java languages are used for integration. Results are presented and compared with the results from previous studies.
Ensembles of NLP Tools for Data Element Extraction from Clinical Notes
Kuo, Tsung-Ting; Rao, Pallavi; Maehara, Cleo; Doan, Son; Chaparro, Juan D.; Day, Michele E.; Farcas, Claudiu; Ohno-Machado, Lucila; Hsu, Chun-Nan
2016-01-01
Natural Language Processing (NLP) is essential for concept extraction from narrative text in electronic health records (EHR). To extract numerous and diverse concepts, such as data elements (i.e., important concepts related to a certain medical condition), a plausible solution is to combine various NLP tools into an ensemble to improve extraction performance. However, it is unclear to what extent ensembles of popular NLP tools improve the extraction of numerous and diverse concepts. Therefore, we built an NLP ensemble pipeline to synergize the strength of popular NLP tools using seven ensemble methods, and to quantify the improvement in performance achieved by ensembles in the extraction of data elements for three very different cohorts. Evaluation results show that the pipeline can improve the performance of NLP tools, but there is high variability depending on the cohort. PMID:28269947
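As a hedged illustration of the ensemble idea (our toy sketch, not the paper's pipeline; the tool names and extracted concepts are invented placeholders), one simple ensemble method is majority voting over the concepts each tool extracts from the same note:

```python
# Majority-vote ensemble over concept sets produced by several NLP tools.
from collections import Counter

def majority_vote(tool_outputs, threshold=2):
    """Keep a concept if at least `threshold` tools extracted it."""
    votes = Counter(span for output in tool_outputs for span in set(output))
    return {span for span, n in votes.items() if n >= threshold}

tools = [{"diabetes", "metformin"},         # e.g., output of one tool
         {"diabetes"},                      # e.g., output of a second tool
         {"diabetes", "metformin", "HTN"}]  # e.g., output of a third tool
print(majority_vote(tools))                 # -> {'diabetes', 'metformin'}
```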
Construction of an advanced software tool for planetary atmospheric modeling
NASA Technical Reports Server (NTRS)
Friedland, Peter; Keller, Richard M.; Mckay, Christopher P.; Sims, Michael H.; Thompson, David E.
1993-01-01
Scientific model-building can be a time intensive and painstaking process, often involving the development of large complex computer programs. Despite the effort involved, scientific models cannot be distributed easily and shared with other scientists. In general, implemented scientific models are complicated, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing, using and sharing models. The proposed tool will include an interactive intelligent graphical interface and a high-level domain-specific modeling language. As a testbed for this research, we propose to develop a software prototype in the domain of planetary atmospheric modeling.
Topouzkhanian, Sylvia; Mijiyawa, Moustafa
2013-02-01
In West Africa, as in Majority World countries, people with a communication disability are generally cut off from the normal development process. A long-term involvement of two partners (Orthophonistes du Monde and Handicap International) allowed the implementation in 2003 of the first speech-language pathology qualifying course in West Africa, within the Ecole Nationale des Auxiliaires Medicaux (ENAM, National School for Medical Auxiliaries) in Lome, Togo. It is a 3-year basic training (after the baccalaureate) in the only academic training centre for medical assistants in Togo. This department has a regional purpose and aims at training French-speaking African students. French speech-language pathology lecturers had to adapt their courses to the local realities they discovered in Togo. It was important to introduce and develop knowledge and skills within the students' system of reference. African speech-language pathologists have to face many challenges: creating an African speech and language therapy, introducing language disorders and their possible cure by means other than traditional therapies, and adapting all the evaluation tests and tools for speech-language pathology to each country, each culture, and each language. Creating an African speech-language pathology profession (according to its own standards) with a real influence in West Africa opens great opportunities for schooling and social and occupational integration of people with communication disabilities.
Promoting consistent use of the communication function classification system (CFCS).
Cunningham, Barbara Jane; Rosenbaum, Peter; Hidecker, Mary Jo Cooley
2016-01-01
We developed a Knowledge Translation (KT) intervention to standardize the way speech-language pathologists working in Ontario, Canada's Preschool Speech and Language Program (PSLP) used the Communication Function Classification System (CFCS). This tool was being used as part of a provincial program evaluation, and standardizing its use was critical for establishing reliability and validity within the provincial dataset. Two theoretical foundations - Diffusion of Innovations and the Communication Persuasion Matrix - were used to develop and disseminate the intervention to standardize use of the CFCS among a cohort of speech-language pathologists. A descriptive pre-test/post-test study was used to evaluate the intervention. Fifty-two participants completed an electronic pre-test survey, reviewed intervention materials online, and then immediately completed an electronic post-test survey. The intervention improved clinicians' understanding of how the CFCS should be used, their intentions to use the tool in the standardized way, and their abilities to make correct classifications using the tool. Findings from this work will be shared with representatives of the Ontario PSLP, and the intervention may be disseminated to all speech-language pathologists working in the program. This study can be used as a model for developing and disseminating KT interventions for clinicians in paediatric rehabilitation. The Communication Function Classification System (CFCS) is a new tool that allows speech-language pathologists to classify children's skills into five meaningful levels of function. There is uncertainty and inconsistent practice in the field about the methods for using this tool. This study combined two theoretical frameworks to develop an intervention to standardize use of the CFCS among a cohort of speech-language pathologists. The intervention effectively increased clinicians' understanding of the methods for using the CFCS, their ability to make correct classifications, and their intention to use the tool in the standardized way in the future.
The Memory Stack: New Technologies Harness Talking for Writing.
ERIC Educational Resources Information Center
Gannon, Maureen T.
In this paper, an elementary school teacher describes her experiences with the Memory Stack--a HyperCard based tool that can accommodate a voice recording, a graphic image, and a written text on the same card--which she designed to help her second and third grade students integrate their oral language fluency into the process of learning how to…
ERIC Educational Resources Information Center
Van Beuningen, Catherine
2010-01-01
The role of (written) corrective feedback (CF) in the process of acquiring a second language (L2) has been an issue of considerable controversy among theorists and researchers alike. Although CF is a widely applied pedagogical tool and its use finds support in SLA theory, practical and theoretical objections to its usefulness have been raised…
ERIC Educational Resources Information Center
Al-Imamy, Samer; Alizadeh, Javanshir; Nour, Mohamed A.
2006-01-01
One of the major issues related to teaching an introductory programming course is the excessive amount of time spent on the language's syntax, which leaves little time for developing skills in program design and solution creativity. The wide variation in the students' backgrounds, coupled with the traditional classroom (one size-fits-all) teaching…
Nouns Referring to Tools and Natural Objects Differentially Modulate the Motor System
ERIC Educational Resources Information Center
Gough, Patricia M.; Riggio, Lucia; Chersi, Fabian; Sato, Marc; Fogassi, Leonardo; Buccino, Giovanni
2012-01-01
While increasing evidence points to a critical role for the motor system in language processing, the focus of previous work has been on the linguistic category of verbs. Here we tested whether nouns are effective in modulating the motor system and further whether different kinds of nouns--those referring to artifacts or natural items, and items…
ERIC Educational Resources Information Center
PACER Center, 2004
2004-01-01
Communication is accomplished in many ways--through gestures, body language, writing, and speaking. Most people communicate verbally, without giving much thought to the process, but others may struggle to effectively communicate with others. The ability to express oneself affects behavior, learning, and sociability. When children are unable to…
Reading in English and in Chinese: Case Study of Retrospective Miscue Analysis with Two Adult ELLs
ERIC Educational Resources Information Center
Wang, Yang; Gilles, Carol J.
2017-01-01
Retrospective Miscue Analysis (RMA) has proved to be a useful instructional tool in language arts classrooms and for English learners from various cultures. However, it has not been used with native Mandarin-speaking English learners. This qualitative case study explored the reading process of two adult Mandarin-speaking ELs through RMA. They read…
ERIC Educational Resources Information Center
Perry, Christina; Albrecht, Julie; Litchfield, Ruth; Meysenburg, Rebecca L.; Er, Ida NgYin; Lum, Adeline; Beattie, Sam; Larvick, Carol; Schwarz, Carol; Temple, Jan; Meimann, Elizabeth
2012-01-01
Printed materials have been used extensively as an educational tool to increase food safety awareness. Few educational materials have been designed to target families with young children for food safety education. This article reports the use of the formative evaluation process to develop a brochure designed to enhance awareness about food safety…
ERIC Educational Resources Information Center
Márquez, Manuel; Chaves, Beatriz
2016-01-01
The application of a methodology based on S.C. Dik's Functional Grammar linguistic principles, addressed to the teaching of Latin to secondary students, has resulted in a quantitative improvement in students' knowledge acquisition. To do so, we have used a self-learning tool, an ad hoc dictionary, whose use in…
Simulink/PARS Integration Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vacaliuc, B.; Nakhaee, N.
2013-12-18
The state of the art for signal processor hardware has far out-paced the development tools for placing applications on that hardware. In addition, signal processors are available in a variety of architectures, each uniquely capable of handling specific types of signal processing efficiently. With these processors becoming smaller and demanding less power, it has become possible to group multiple processors, a heterogeneous set of processors, into single systems. Different portions of the desired problem set can be assigned to different processor types as appropriate. As software development tools do not keep pace with these processors, especially when multiple processors of different types are used, a method is needed to enable software code portability among multiple processors and multiple types of processors along with their respective software environments. Sundance DSP, Inc. has developed a software toolkit called “PARS”, whose objective is to provide a framework that uses suites of tools provided by different vendors, along with modeling tools and a real time operating system, to build an application that spans different processor types. The software language used to express the behavior of the system is a very high level modeling language, “Simulink”, a MathWorks product. ORNL has used this toolkit to effectively implement several deliverables. This CRADA describes this collaboration between ORNL and Sundance DSP, Inc.
Automated Classification of Phonological Errors in Aphasic Language
Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.
1984-01-01
Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represents a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, provides a prototype simulation tool for neurolinguistic research, and forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.
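For readers unfamiliar with the approach, a rough illustration of heuristically-guided search over elementary error operators follows; the operator set, costs, and example strings are invented for this sketch and are not the authors' rule set. The search explains an observed utterance as the cheapest sequence of substitutions, deletions, and insertions applied to a target phoneme string:

```python
# Illustrative sketch only: best-first search over elementary phonemic
# error operators (substitution, deletion, insertion).
import heapq

def classify_errors(target, observed, subs_cost=1, indel_cost=1):
    """Return the cheapest operator sequence turning the target
    phoneme string into the observed utterance."""
    heap = [(0, 0, 0, [])]          # (cost, target index, observed index, ops)
    seen = set()
    while heap:
        cost, i, j, ops = heapq.heappop(heap)
        if (i, j) in seen:
            continue
        seen.add((i, j))
        if i == len(target) and j == len(observed):
            return cost, ops
        if i < len(target) and j < len(observed):
            if target[i] == observed[j]:
                heapq.heappush(heap, (cost, i + 1, j + 1, ops))
            else:
                heapq.heappush(heap, (cost + subs_cost, i + 1, j + 1,
                                      ops + [f"sub {target[i]}->{observed[j]}"]))
        if i < len(target):          # deletion of a target phoneme
            heapq.heappush(heap, (cost + indel_cost, i + 1, j,
                                  ops + [f"del {target[i]}"]))
        if j < len(observed):        # insertion of an extra phoneme
            heapq.heappush(heap, (cost + indel_cost, i, j + 1,
                                  ops + [f"ins {observed[j]}"]))

print(classify_errors("kat", "tat"))   # -> (1, ['sub k->t'])
```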
Neural-Network-Development Program
NASA Technical Reports Server (NTRS)
Phillips, Todd A.
1993-01-01
NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.
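For readers unfamiliar with the back-propagation method NETS uses, here is a minimal sketch of the idea in Python with NumPy; it is not derived from the NETS C source, and the architecture, learning rate, and XOR task are illustrative assumptions:

```python
# Minimal back-propagation sketch: one hidden layer of sigmoid units
# trained on XOR (an assumed toy task, not a NETS example).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output weights
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sig(X @ W1 + b1)                    # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)      # hidden-layer delta
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)   # gradient steps
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(0)

print(out.round(2))   # approximates XOR after training
```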
Scrutinizing UML Activity Diagrams
NASA Astrophysics Data System (ADS)
Al-Fedaghi, Sabah
Building an information system involves two processes: conceptual modeling of the “real world domain” and designing the software system. Object-oriented methods and languages (e.g., UML) are typically used for describing the software system. For the system analysis process that produces the conceptual description, object-oriented techniques or semantics extensions are utilized. Specifically, UML activity diagrams are the “flow charts” of object-oriented conceptualization tools. This chapter proposes an alternative to UML activity diagrams through the development of a conceptual modeling methodology based on the notion of flow.
The Relationship between Artificial and Second Language Learning
ERIC Educational Resources Information Center
Ettlinger, Marc; Morgan-Short, Kara; Faretta-Stutenberg, Mandy; Wong, Patrick C. M.
2016-01-01
Artificial language learning (ALL) experiments have become an important tool in exploring principles of language and language learning. A persistent question in all of this work, however, is whether ALL engages the linguistic system and whether ALL studies are ecologically valid assessments of natural language ability. In the present study, we…
Impacts of the Use of "Support Tools" on a Distance Language Learning Course
ERIC Educational Resources Information Center
Pradier, Vincent; Andronova, Olga
2014-01-01
This study is a temporary assessment of the impacts of the use of support tools as part of a distance language training course for non-specialist students implemented at the Diderot Paris VII University in 2011/2012. The study mainly focuses on the uses of the support tool "Sounds Right APP" and details the results of the analysis of…
Preferences of Turkish Language Teachers for the Assessment-Evaluation Tools and Methods
ERIC Educational Resources Information Center
Guney, Nail
2013-01-01
The aim of this study is to determine the rate of teachers' use of assessment and evaluation tools given in 2005 curriculum of Turkish language teaching. To this end; we presented a list of assessment and evaluation tools on the basis of random sampling to 216 teachers of Turkish who work in Ordu, Samsun, Ankara, Trabzon and Istanbul provinces.…
Ontology-Based Information Extraction for Business Intelligence
NASA Astrophysics Data System (ADS)
Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina
Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.
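As a hedged illustration of the ontology-based extraction and merging idea (the ontology entries, matching strategy, and merging policy below are invented and far simpler than the MUSING pipeline):

```python
# Toy gazetteer-based extractor: map text mentions to ontology classes,
# then merge extractions across documents.
ontology = {                      # assumed toy ontology, not MUSING's
    "acme corp": "Company",
    "john smith": "Person",
    "france": "Country",
}

def extract(text):
    """Return (mention, class) pairs found in the text."""
    lowered = text.lower()
    return [(m, c) for m, c in ontology.items() if m in lowered]

def merge(records):
    """Aggregate extractions from multiple sources, deduplicating."""
    merged = {}
    for rec in records:
        for mention, cls in rec:
            merged.setdefault(cls, set()).add(mention)
    return merged

docs = ["Acme Corp expands in France.", "John Smith leads Acme Corp."]
print(merge(extract(d) for d in docs))
# -> {'Company': {'acme corp'}, 'Country': {'france'}, 'Person': {'john smith'}}
```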
Artificial intelligence support for scientific model-building
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1992-01-01
Scientific model-building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientific development team to understand. We believe that artificial intelligence techniques can facilitate both the model-building and model-sharing process. In this paper, we overview our effort to build a scientific modeling software tool that aids the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high-level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities.
NASA Technical Reports Server (NTRS)
Johnson, Sally C.; Boerschlein, David P.
1995-01-01
Semi-Markov models can be used to analyze the reliability of virtually any fault-tolerant system. However, the process of delineating all the states and transitions in a complex system model can be devastatingly tedious and error prone. The Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST) computer program allows the user to describe the semi-Markov model in a high-level language. Instead of listing the individual model states, the user specifies the rules governing the behavior of the system, and these are used to generate the model automatically. A few statements in the abstract language can describe a very large, complex model. Because no assumptions are made about the system being modeled, ASSIST can be used to generate models describing the behavior of any system. The ASSIST program and its input language are described and illustrated by examples.
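To make the rule-driven state-space generation concrete, here is a hypothetical miniature in Python: a triplex processor system whose states and transitions are generated automatically from a single failure rule rather than listed by hand. The component count and failure rate are assumptions for illustration, and the ASSIST input language itself is not shown:

```python
# Generate a Markov state space from a behavior rule instead of
# enumerating states by hand (a sketch of the ASSIST idea only).
from itertools import count
from collections import deque

N_PROCS = 3          # state = number of working processors
FAIL_RATE = 1e-4     # assumed per-hour failure rate, for illustration

def transitions(working):
    """Rule: any working processor may fail; below 2, the system is dead."""
    if working >= 2:
        yield (working - 1, working * FAIL_RATE)

states, edges = {N_PROCS: 0}, []
ids = count(1)
queue = deque([N_PROCS])
while queue:                      # breadth-first state-space generation
    s = queue.popleft()
    for nxt, rate in transitions(s):
        if nxt not in states:
            states[nxt] = next(ids)
            queue.append(nxt)
        edges.append((s, nxt, rate))

print(states)   # {3: 0, 2: 1, 1: 2}
print(edges)    # [(3, 2, 0.0003), (2, 1, 0.0002)]
```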
ERIC Educational Resources Information Center
Ann Arbor Public Schools, MI.
Designed as a tool for foreign language teachers attempting to keep updated on the constantly proliferating number of printed materials being produced for use in elementary and secondary school language classes, this catalog attempts to bring together in one collection a representative selection of foreign language texts, readers, workbooks, and…
Unifying Model-Based and Reactive Programming within a Model-Based Executive
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)
1999-01-01
Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.
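A simplified sketch of the hidden-state Markov belief update such an executive performs is shown below; the two-state component model and all probabilities are invented for illustration and do not reproduce RMPL or PHCA semantics:

```python
# One step of Bayesian belief-state update over a hidden Markov model:
# predict with the transition model, then condition on the observation.
T = {('ok', 'ok'): 0.99, ('ok', 'failed'): 0.01,      # assumed transition model
     ('failed', 'ok'): 0.0, ('failed', 'failed'): 1.0}
O = {('ok', 'nominal'): 0.95, ('ok', 'fault'): 0.05,  # assumed observation model
     ('failed', 'nominal'): 0.10, ('failed', 'fault'): 0.90}

def belief_update(belief, obs):
    predicted = {s: sum(belief[p] * T[(p, s)] for p in belief) for s in belief}
    unnorm = {s: predicted[s] * O[(s, obs)] for s in predicted}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

b = {'ok': 0.999, 'failed': 0.001}
for obs in ['nominal', 'fault', 'fault']:
    b = belief_update(b, obs)
    print(obs, {s: round(p, 4) for s, p in b.items()})
```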
Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems.
Zerrouki, Taha; Balla, Amar
2017-04-01
Arabic diacritics are often omitted from Arabic script. This is a handicap for new learners reading Arabic, for text-to-speech conversion systems, and for the reading and semantic analysis of Arabic texts. Automatic diacritization systems are the best solution to this issue, but such automation needs resources, namely diacritized texts, to train and evaluate the systems. In this paper, we describe our corpus of Arabic diacritized texts, called Tashkeela. It can be used as a linguistic resource for natural language processing tasks such as automatic diacritization, disambiguation, and feature and data extraction. The corpus is freely available; it contains 75 million fully vocalized words drawn mainly from 97 books of classical and modern Arabic. The corpus was collected from manually vocalized texts through a web crawling process.
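A corpus like this is typically turned into training pairs by stripping the diacritics to produce the model input while keeping the vocalized text as the target. A minimal sketch, assuming the standard Arabic tashkeel code-point range U+064B-U+0652:

```python
# Derive (undiacritized, diacritized) training pairs from vocalized text.
import re

TASHKEEL = re.compile('[\u064B-\u0652]')   # fathatan..sukun

def strip_diacritics(text):
    """Remove Arabic short-vowel and related marks."""
    return TASHKEEL.sub('', text)

vocalized = "بِسْمِ اللَّهِ"          # example fully vocalized phrase
pair = (strip_diacritics(vocalized), vocalized)
print(pair)   # (input, target) pair for training a diacritization model
```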
The ideomotor recycling theory for tool use, language, and foresight.
Badets, Arnaud; Osiurak, François
2017-02-01
The present theoretical framework highlights a common action-perception mechanism for tool use, spoken language, and foresight capacity. On the one hand, it has been suggested that human language and the capacity to envision the future (i.e. foresight) have, from an evolutionary viewpoint, developed mutually along with the pressure of tool use. This co-evolution has afforded humans an evident survival advantage in the animal kingdom because language can help to refine the representation of future scenarios, which in turn can help to encourage or discourage engagement in appropriate and efficient behaviours. On the other hand, recent assumptions regarding the evolution of the brain have capitalized on the concept of "neuronal recycling". In the domain of cognitive neuroscience, neuronal recycling means that during evolution, some neuronal areas and cognitive functions have been recycled to manage new environmental and social constraints. In the present article, we propose that the co-evolution of tool use, language, and foresight represents a suitable example of such functional recycling throughout a well-defined common action-perception mechanism, i.e. the ideomotor mechanism. This ideomotor account is discussed in light of different future ontogenetic and phylogenetic perspectives.
NSTX-U Advances in Real-Time C++11 on Linux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Keith G.
2015-08-14
Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic deadline is a failure) of 200 microseconds.
Language and Tools for Networkers
ERIC Educational Resources Information Center
Wielinga, Eelke; Vrolijk, Maarten
2009-01-01
The network society has a major impact on knowledge systems, and in agricultural and rural development. It has changed relationships between actors such as farmers, extension workers, researchers, policy-makers, businessmen and consumers. These changes require different language, concepts and tools compared to the time that it was thought that…
Sign Language Legislation as a Tool for Sustainability
ERIC Educational Resources Information Center
Pabsch, Annika
2017-01-01
This article explores three models of sustainability (environmental, economic, and social) and identifies characteristics of a sustainable community necessary to sustain the Deaf community as a whole. It is argued that sign language legislation is a valuable tool for achieving sustainability for the generations to come.
Timeliner: Automating Procedures on the ISS
NASA Technical Reports Server (NTRS)
Brown, Robert; Braunstein, E.; Brunet, Rick; Grace, R.; Vu, T.; Zimpfer, Doug; Dwyer, William K.; Robinson, Emily
2002-01-01
Timeliner has been developed as a tool to automate procedural tasks. These tasks may be sequential tasks that would typically be performed by a human operator, or precisely ordered sequencing tasks that allow autonomous execution of a control process. The Timeliner system includes elements for compiling and executing sequences that are defined in the Timeliner language. The Timeliner language was specifically designed to allow easy definition of scripts that provide sequencing and control of complex systems. The execution environment provides real-time monitoring and control based on the commands and conditions defined in the Timeliner language. The Timeliner sequence control may be preprogrammed, compiled from Timeliner "scripts," or it may consist of real-time, interactive inputs from system operators. In general, the Timeliner system lowers the workload for mission or process control operations. In a mission environment, scripts can be used to automate spacecraft operations including autonomous or interactive vehicle control, performance of preflight and post-flight subsystem checkouts, or handling of failure detection and recovery. Timeliner may also be used for mission payload operations, such as stepping through pre-defined procedures of a scientific experiment.
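Timeliner's own script syntax is not reproduced here, but the core pattern it automates - wait for a condition, then issue a command, in order - can be sketched in Python as follows (the function names, shared-state dictionary, and polling scheme are illustrative assumptions):

```python
# A tiny Python analogue of a sequencing engine: each command fires once
# its condition on the shared state becomes true.
import time

def run_sequence(steps, state, poll=0.01, timeout=1.0):
    for condition, command in steps:
        deadline = time.monotonic() + timeout
        while not condition(state):              # real-time monitoring
            if time.monotonic() > deadline:
                raise TimeoutError("condition never became true")
            time.sleep(poll)
        command(state)                           # ordered command issue

state = {"power": True, "heater": False}
steps = [
    (lambda s: s["power"], lambda s: s.update(heater=True)),   # power on -> heat
    (lambda s: s["heater"], lambda s: print("checkout complete")),
]
run_sequence(steps, state)
```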
The crustal dynamics intelligent user interface anthology
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Campbell, William J.; Roelofs, Larry H.; Wattawa, Scott L.
1987-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose of such a service is to support the large number of potential scientific and engineering users that have need of space and land-related research and technical data, but have little or no experience in query languages or understanding of the information content or architecture of the databases of interest. This document presents the design concepts, development approach and evaluation of the performance of a prototype IUI system for the Crustal Dynamics Project Database, which was developed using a microcomputer-based expert system tool (M.1), the natural language query processor THEMIS, and the graphics software system GSS. The IUI design is based on a multiple view representation of a database from both the user and database perspective, with intelligent processes to translate between the views.
Language Flowering, Language Empowering for Young Children.
ERIC Educational Resources Information Center
Honig, Alice Sterling
Based upon the view that parents, home visitors, and teachers in early childhood settings need tools for empowering young children to develop language, this paper examines what adults need to know to guide young children's language development and presents 20 suggestions for enhancing language growth. The paper maintains that adults need to know…
Adults Learning Languages: A CILT Guide to Good Practice
ERIC Educational Resources Information Center
Harnisch, Henriette, Ed.; Swanton, Pauline, Ed.
2004-01-01
"Adults Learning Languages" is aimed at those responsible for teaching languages across AE, FE and HE. In the much-changed world of post-19 languages, new funding and inspection regimes with revised needs for quality assurance are challenging practitioners to adapt and review approaches. This book offers teachers of languages to adults tools to…
ERIC Educational Resources Information Center
Ryder, Nuala; Leinonen, Eeva; Schulz, Joerg
2008-01-01
Background: Pragmatic language impairment in children with specific language impairment has proved difficult to assess, and the nature of their abilities to comprehend pragmatic meaning has not been fully investigated. Aims: To develop both a cognitive approach to pragmatic language assessment based on Relevance Theory and an assessment tool for…
MaLT - Combined Motor and Language Therapy Tool for Brain Injury Patients Using Kinect.
Wairagkar, Maitreyee; McCrindle, Rachel; Robson, Holly; Meteyard, Lotte; Sperrin, Malcom; Smith, Andy; Pugh, Moyra
2017-03-23
The functional connectivity and structural proximity of elements of the language and motor systems result in frequent co-morbidity post brain injury. Although rehabilitation services are becoming increasingly multidisciplinary and "integrated", treatment for language and motor functions often occurs in isolation. Thus, behavioural therapies which promote neural reorganisation do not reflect the high intersystem connectivity of the neurologically intact brain. As such, there is a pressing need for rehabilitation tools which better reflect and target the impaired cognitive networks. The objective of this research is to develop a combined high dosage therapy tool for language and motor rehabilitation. The rehabilitation therapy tool developed, MaLT (Motor and Language Therapy), comprises a suite of computer games targeting both language and motor therapy that use the Kinect sensor as an interaction device. The games developed are intended for use in the home environment over prolonged periods of time. In order to track patients' engagement with the games and their rehabilitation progress, the game records patient performance data for the therapist to interrogate. MaLT incorporates Kinect-based games, a database of objects and language parameters, and a reporting tool for therapists. Games have been developed that target four major language therapy tasks involving single word comprehension, initial phoneme identification, rhyme identification and a naming task. These tasks have 8 levels each increasing in difficulty. A database of 750 objects is used to programmatically generate appropriate questions for the game, providing both targeted therapy and unique gameplay every time. The design of the games has been informed by therapists and by discussions with a Public Patient Involvement (PPI) group. Pilot MaLT trials have been conducted with three stroke survivors for the duration of 6 to 8 weeks. Patients' performance is monitored through MaLT's reporting facility presented as graphs plotted from patient game data. Performance indicators include reaction time, accuracy, number of incorrect responses and hand use. The resultant games have also been tested by the PPI with a positive response and further suggestions for future modifications made. MaLT provides a tool that innovatively combines motor and language therapy for high dosage rehabilitation in the home. It has demonstrated that motion sensor technology can be successfully combined with a language therapy task to target both upper limb and linguistic impairment in patients following brain injury. The initial studies on stroke survivors have demonstrated that the combined therapy approach is viable and the outputs of this study will inform planned larger scale future trials.
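As a hypothetical sketch of how questions can be generated programmatically from an object database for the initial-phoneme task (the object records, field names, and question format below are invented, not MaLT's actual schema):

```python
# Generate an initial-phoneme multiple-choice question from an object
# database, giving targeted therapy and varied gameplay.
import random

objects = [                      # assumed toy records, not MaLT's 750 objects
    {"name": "ball",  "phoneme": "b"},
    {"name": "cat",   "phoneme": "k"},
    {"name": "sun",   "phoneme": "s"},
    {"name": "table", "phoneme": "t"},
]

def initial_phoneme_question(db, n_choices=3):
    """Pick a target object and distractors with different initial phonemes."""
    target = random.choice(db)
    distractors = random.sample(
        [o for o in db if o["phoneme"] != target["phoneme"]], n_choices - 1)
    choices = distractors + [target]
    random.shuffle(choices)
    return {"prompt": f"Which picture starts with /{target['phoneme']}/?",
            "choices": [o["name"] for o in choices],
            "answer": target["name"]}

print(initial_phoneme_question(objects))
```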
Linguistic Theory and Actual Language.
ERIC Educational Resources Information Center
Segerdahl, Par
1995-01-01
Examines Noam Chomsky's (1957) discussion of "grammaticalness" and the role of linguistics in the "correct" way of speaking and writing. It is argued that the concern of linguistics with the tools of grammar has resulted in confusion, with the tools becoming mixed up with the actual language, thereby becoming the central…
McIntyre, Laureen J; Hellsten, Laurie-Ann M; Bidonde, Julia; Boden, Catherine; Doi, Carolyn
2017-04-04
The majority of a child's language development occurs in the first 5 years of life when brain development is most rapid. There are significant long-term benefits to supporting all children's language and literacy development such as maximizing their developmental potential (i.e., cognitive, linguistic, social-emotional), when children are experiencing a critical period of development (i.e., early childhood to 9 years of age). A variety of people play a significant role in supporting children's language development, including parents, guardians, family members, educators, and/or speech-language pathologists. Speech-language pathologists and educators are the professionals who predominantly support children's language development in order for them to become effective communicators and lay the foundation for later developing literacy skills (i.e., reading and writing skills). Therefore, these professionals need formal and informal assessments that provide them information on a child's understanding and/or use of the increasingly complex aspects of language in order to identify and support the receptive and expressive language learning needs of diverse children during their early learning experiences (i.e., aged 1.5 to 9 years). However, evidence on what methods and tools are being used is lacking. The authors will carry out a scoping review of the literature to identify studies and map the receptive and expressive English language assessment methods and tools that have been published and used since 1980. Arksey and O'Malley's (2005) six-stage approach to conducting a scoping review was drawn upon to design the protocol for this investigation: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consultation. This information will help these professionals identify and select appropriate assessment methods or tools that can be used to support development and/or identify areas of delay or difficulty and plan, implement, and monitor the progress of interventions supporting the development of receptive and expressive language skills in individuals with diverse language needs (e.g., typically developing children, children with language delays and disorders, children learning English as a second or additional language, Indigenous children who may be speaking dialects of English). Researchers plan to evaluate the effectiveness of the assessment methods or tools identified in the scoping review as an extension of this study.
Ebert, Kerry Danahy
2017-01-01
Parent report is commonly used to assess language and attention in children for research and clinical purposes. It is therefore important to understand the convergent validity of parent-report tools in comparison to direct assessments of language and attention. In particular, cultural and linguistic background may influence this convergence. In this study a group of six- to eight-year old children (N = 110) completed direct assessments of language and attention and their parents reported on the same areas. Convergence between assessment types was explored using correlations. Possible influences of ethnicity (Hispanic or non-Hispanic) and of parent report language (English or Spanish) were explored using hierarchical linear regression. Correlations between parent report and direct child assessments were significant for both language and attention, suggesting convergence between assessment types. Ethnicity and parent report language did not moderate the relationships between direct child assessments and parent report tools for either attention or language. PMID:28683131
Fields, Chris
2011-03-01
Structure-mapping inferences are generally regarded as dependent upon relational concepts that are understood and expressible in language by subjects capable of analogical reasoning. However, tool-improvisation inferences are executed by members of a variety of non-human primate and other species. Tool improvisation requires correctly inferring the motion and force-transfer affordances of an object; hence tool improvisation requires structure mapping driven by relational properties. Observational and experimental evidence can be interpreted to indicate that structure-mapping analogies in tool improvisation are implemented by multi-step manipulation of event files by binding and action-planning mechanisms that act in a language-independent manner. A functional model of language-independent event-file manipulations that implement structure mapping in the tool-improvisation domain is developed. This model provides a mechanism by which motion and force representations commonly employed in tool-improvisation structure mappings may be sufficiently reinforced to be available to inwardly directed attention and hence conceptualization. Predictions and potential experimental tests of this model are outlined.
Voice-enabled Knowledge Engine using Flood Ontology and Natural Language Processing
NASA Astrophysics Data System (ADS)
Sermet, M. Y.; Demir, I.; Krajewski, W. F.
2015-12-01
The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts, flood-related data, information and interactive visualizations for communities in Iowa. The IFIS is designed for use by the general public, often people with no domain knowledge and limited general science background. To improve effective communication with such an audience, we have introduced a voice-enabled knowledge engine on flood related issues in IFIS. Instead of navigating within many features and interfaces of the information system and web-based sources, the system provides dynamic computations based on a collection of built-in data, analysis, and methods. The IFIS Knowledge Engine connects to real-time stream gauges, in-house data sources, and analysis and visualization tools to answer natural language questions. Our goal is the systematization of data and modeling results on flood related issues in Iowa, and to provide an interface for definitive answers to factual queries. The goal of the knowledge engine is to make all flood related knowledge in Iowa easily accessible to everyone, and to support voice-enabled natural language input. We aim to integrate and curate all flood related data, implement analytical and visualization tools, and make it possible to compute answers from questions. The IFIS explicitly implements analytical methods and models, as algorithms, and curates all flood related data and resources so that all these resources are computable. The IFIS Knowledge Engine computes the answer by deriving it from its computational knowledge base. The knowledge engine processes the statement, accesses the data warehouse, runs complex database queries on the server-side and returns outputs in various formats. This presentation provides an overview of the IFIS Knowledge Engine, its unique information interface and functionality as an educational tool, and discusses future plans for providing knowledge on flood related issues and resources. The IFIS Knowledge Engine provides an alternative access method to the comprehensive set of tools and data resources available in IFIS. The current implementation of the system accepts free-form input and offers voice recognition capabilities within browser and mobile applications.
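A toy sketch of the routing step - mapping a free-form question to a data query - is given below; the intent names, keywords, and handlers are invented for illustration and do not reflect the IFIS implementation:

```python
# Keyword-based question routing: score each intent by keyword overlap
# and dispatch to the matching handler.
import re

INTENTS = {   # hypothetical intents, not IFIS's
    "stage":    {"keywords": {"stage", "level", "height"},
                 "handler": lambda city: f"querying stream gauge near {city}"},
    "forecast": {"keywords": {"forecast", "tomorrow", "predict"},
                 "handler": lambda city: f"running flood forecast for {city}"},
}

def answer(question, city="Iowa City"):
    words = set(re.findall(r"[a-z]+", question.lower()))
    best = max(INTENTS.values(),
               key=lambda intent: len(intent["keywords"] & words))
    return best["handler"](city)

print(answer("What is the river stage right now?"))
print(answer("Will it flood tomorrow?"))
```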
CLIPS: The C language integrated production system
NASA Technical Reports Server (NTRS)
Riley, Gary
1994-01-01
Expert systems are computer programs which emulate human expertise in well defined problem domains. The potential payoff from expert systems is high: valuable expertise can be captured and preserved, repetitive and/or mundane tasks requiring human expertise can be automated, and uniformity can be applied in decision making processes. The C Language Integrated Production System (CLIPS) is an expert system building tool, developed at the Johnson Space Center, which provides a complete environment for the development and delivery of rule and/or object based expert systems. CLIPS was specifically designed to provide a low cost option for developing and deploying expert system applications across a wide range of hardware platforms. The commercial potential of CLIPS is vast. Currently, CLIPS is being used by over 5,000 individuals throughout the public and private sector. Because the CLIPS source code is readily available, numerous groups have used CLIPS as the basis for their own expert system tools. To date, three commercially available tools have been derived from CLIPS. In general, the development of CLIPS has helped to improve the ability to deliver expert system technology throughout the public and private sectors for a wide range of applications and diverse computing environments.
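CLIPS rules are written in CLIPS's own syntax, which is not reproduced here; the following Python toy merely illustrates the forward-chaining, rule-based inference style that production systems like CLIPS provide (facts and rules are invented for the example):

```python
# Toy forward-chaining production system: fire every rule whose
# conditions are satisfied until no new facts are derived.
rules = [
    ({"engine_hot", "coolant_low"}, "add_coolant"),
    ({"add_coolant"}, "recheck_temperature"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)        # rule fires, asserting a new fact
                changed = True
    return facts

print(forward_chain({"engine_hot", "coolant_low"}, rules))
```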
Hierarchical programming for data storage and visualization
Donovan, John M.; Smith, Peter E.
2001-01-01
Graphics software is an essential tool for interpreting, analyzing, and presenting data from multidimensional hydrodynamic models used in estuarine and coastal ocean studies. The post-processing of time-varying three-dimensional model output presents unique requirements for data visualization because of the large volume of data that can be generated and the multitude of time scales that must be examined. Such data can relate to estuarine or coastal ocean environments and come from numerical models or field instruments. One useful software tool for the display, editing, visualization, and printing of graphical data is the Gr application, written by the first author for use in the U.S. Geological Survey San Francisco Bay Program. The Gr application has been made available to the public via the Internet since the year 2000. The Gr application is written in the Java (Sun Microsystems, Nov. 29, 2001) programming language and uses the Extensible Markup Language standard for hierarchical data storage. Gr presents a hierarchy of objects to the user that can be edited using a common interface. Java's object-oriented capabilities allow Gr to treat data, graphics, and tools equally and to save them all to a single XML file.
Elhadad, N.
2016-01-01
Objectives: This paper reviews work over the past two years in Natural Language Processing (NLP) applied to clinical and consumer-generated texts. Methods: We included any application or methodological publication that leverages text to facilitate healthcare and address the health-related needs of consumers and populations. Results: Many important developments in clinical text processing, both foundational and task-oriented, were addressed in community-wide evaluations and discussed in corresponding special issues that are referenced in this review. These focused issues and in-depth reviews of several other active research areas, such as pharmacovigilance and summarization, allowed us to discuss in greater depth disease modeling and predictive analytics using clinical texts, text analysis in social media for healthcare quality assessment, trends towards online interventions based on rapid analysis of health-related posts, and consumer health question answering, among other issues. Conclusions: Our analysis shows that although clinical NLP continues to advance towards practical applications and more NLP methods are used in large-scale live health information applications, more needs to be done to make NLP use in clinical applications a routine widespread reality. Progress in clinical NLP is mirrored by developments in social media text analysis: the research is moving from capturing trends to addressing individual health-related posts, thus showing potential to become a tool for precision medicine and a valuable addition to the standard healthcare quality evaluation tools. PMID:27830255
Tools for Knowledge Analysis, Synthesis, and Sharing
NASA Astrophysics Data System (ADS)
Medland, Michael B.
2007-04-01
Change and complexity are creating a need for increasing levels of literacy in science and technology. Presently, we are beginning to provide students with clear contexts in which to learn, including clearly written text, visual displays and maps, and more effective instruction. We are also beginning to give students tools that promote their own literacy by helping them to interact with the learning context. These tools include peer-group skills as well as strategies to analyze text and to indicate comprehension by way of text summaries and concept maps. Even with these tools, more appears to be needed. Disparate backgrounds and languages interfere with the comprehension and the sharing of knowledge. To meet this need, two new tools are proposed. The first tool fractures language ontologically, giving all learners who use it a language to talk about what has, and what has not, been uttered in text or talk about the world. The second fractures language epistemologically, giving those involved in working with text or on the world around them a way to talk about what they have done and what remains to be done. Together, these tools operate as a two-tiered representation of knowledge. This representation promotes both an individual meta-cognitive and a social meta-cognitive approach to what is known and to what is not known, both ontologically and epistemologically. Two hypotheses guide the presentation: If the tools are taught during early childhood, children will be prepared to master science and technology content. If the tools are used by both students and those who design and deliver instruction, the learning of such content will be accelerated.
ERIC Educational Resources Information Center
Gimeno, Ana; Seiz, Rafael; de Siqueira, Jose Macario; Martinez, Antonio
2010-01-01
The future professional world of today's students is becoming a life-long learning process where they have to adapt to a changing market and an environment full of new opportunities and challenges. Thus, the development of a number of personal and professional skills, in addition to technical content and knowledge, is a crucial part of their…
Visual Purple, the Next Generation Crisis Management Decision Training Tool
2001-09-01
talents of professional Hollywood screenwriters during the scripting and writing process of the simulations. Additionally, cinematic techniques learned ... cultural, and language experts for research development. Additionally, GTA provides country-specific support in script writing and cinematic resources as ... The result is an entirely new dimension of realism that traditional exercises often fail to capture. The scenario requires the participant to make the
ERIC Educational Resources Information Center
Peace Corps, Washington, DC. Information Collection and Exchange Div.
A French-language version of a training manual that presents guidelines for planning and conducting a project design and management (PDM) workshop to teach Peace Corps volunteers to involve local community members in the process of using participatory analysis tools and planning and implementing projects meeting local desires and needs. The first…
Medical document anonymization with a semantic lexicon.
Ruch, P.; Baud, R. H.; Rassinoux, A. M.; Bouillon, P.; Robert, G.
2000-01-01
We present an original system for locating and removing personally-identifying information in patient records. In this experiment, anonymization is seen as a particular case of knowledge extraction. We use natural language processing tools provided by the MEDTAG framework: a semantic lexicon specialized in medicine, and a toolkit for word-sense and morpho-syntactic tagging. The system finds 98-99% of all personally-identifying information. PMID:11079980
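A much simpler sketch of the lexicon-plus-pattern approach follows; the lexicon entries, patterns, and replacement tags are assumptions for illustration, and the MEDTAG system's word-sense and morpho-syntactic tagging is far richer:

```python
# Toy scrubber: replace dates and IDs via patterns, and lexicon-listed
# surnames via token lookup (tokens matched in the lexicon are replaced
# wholesale, so trailing punctuation is dropped in this sketch).
import re

NAME_LEXICON = {"smith", "dupont"}          # assumed lexicon entries
PATTERNS = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "<DATE>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<ID>"),
]

def anonymize(text):
    for pat, tag in PATTERNS:
        text = pat.sub(tag, text)
    tokens = [("<NAME>" if t.lower().strip(".,") in NAME_LEXICON else t)
              for t in text.split()]
    return " ".join(tokens)

record = "Patient Smith, born 02/11/1964, SSN 123-45-6789, admitted."
print(anonymize(record))
# -> Patient <NAME> born <DATE>, SSN <ID>, admitted.
```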
ERIC Educational Resources Information Center
Reece Armour, Ashley
2017-01-01
The purpose of this phenomenological case study is to explore the reading attitudes and decision-making skills of college freshmen enrolled in remedial language arts courses. The theoretical framework guiding this study is qualitative phenomenology explained by Baxter and Jack (2008). This specific type of research "provides tools for…
NASA Astrophysics Data System (ADS)
Lukyanov, A. A.; Grigoriev, S. N.; Bobrovskij, I. N.; Melnikov, P. A.; Bobrovskij, N. M.
2017-05-01
As new technology becomes more complex and its reliability requirements increase, the laboriousness of control operations in industrial quality control systems grows significantly. Quality management control is important because it promotes the correct use of production conditions and ensures that the relevant requirements are met. Digital image processing makes it possible to reach a new technological level of production (a new technological way). Automated interpretation of information, the most complicated step, is the basis for decision-making in the management of production processes. Surface analysis of tools used in processing with metalworking fluids (MWF) is more complicated still. The authors suggest a new algorithm for optical inspection of the wear of the cylindrical tool for burnishing, which is used in surface plastic deformation without MWF. The main advantage of the proposed algorithm is the possibility of automatic recognition of images of the burnishing tool with the subsequent allocation of its boundaries, location of the working surface, and automatic identification of defects and the wear area. Software that implements the algorithm was developed by the authors in the Matlab programming environment, but it can be implemented using other programming languages.
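The authors' Matlab implementation is not shown; as a rough Python analogue of the threshold-and-measure step (the synthetic image, threshold value, and use of SciPy's connected-component labelling are all assumptions for this sketch):

```python
# Threshold a grayscale tool image and measure candidate wear regions.
import numpy as np
from scipy import ndimage

img = np.full((64, 64), 200, dtype=np.uint8)   # synthetic bright tool surface
img[30:40, 10:50] = 60                          # darker band standing in for wear

mask = img < 128                                # simple global threshold
labels, n = ndimage.label(mask)                 # connected-component labelling
areas = ndimage.sum(mask, labels, index=range(1, n + 1))

print(f"{n} candidate wear region(s), areas in pixels: {areas}")
```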
An efficient framework for Java data processing systems in HPC environments
NASA Astrophysics Data System (ADS)
Fries, Aidan; Castañeda, Javier; Isasi, Yago; Taboada, Guillermo L.; Portell de Mora, Jordi; Sirvent, Raül
2011-11-01
Java is a commonly used programming language, although its use in High Performance Computing (HPC) remains relatively low. One of the reasons is a lack of libraries offering specific HPC functions to Java applications. In this paper we present a Java-based framework, called DpcbTools, designed to provide a set of functions that fill this gap. It includes a set of efficient data communication functions based on message-passing, thus providing, when a low latency network such as Myrinet is available, higher throughputs and lower latencies than standard solutions used by Java. DpcbTools also includes routines for the launching, monitoring and management of Java applications on several computing nodes by making use of JMX to communicate with remote Java VMs. The Gaia Data Processing and Analysis Consortium (DPAC) is a real case where scientific data from the ESA Gaia astrometric satellite will be entirely processed using Java. In this paper we describe the main elements of DPAC and its usage of the DpcbTools framework. We also assess the usefulness and performance of DpcbTools through its performance evaluation and the analysis of its impact on some DPAC systems deployed in the MareNostrum supercomputer (Barcelona Supercomputing Center).
Davies, Jane; Bukulatjpi, Sarah; Sharma, Suresh; Caldwell, Luci; Johnston, Vanessa; Davis, Joshua Saul
2015-06-10
Hepatitis B is endemic in Indigenous communities in Northern Australia; however, there is a lack of culturally appropriate educational tools. Health care workers and educators in this setting have voiced a desire for visual, interactive tools in local languages. Mobile phones are increasingly used and available in remote Indigenous communities. In this context, we identified the need for a tablet-based health education app about hepatitis B, developed in partnership with an Australian remote Indigenous community. To develop a culturally appropriate bilingual app about hepatitis B for Indigenous Australians in Arnhem Land using a participatory action research (PAR) framework. This project was a partnership between the Menzies School of Health Research, Miwatj Aboriginal Health Corporation, Royal Darwin Hospital Liver Clinic, and Dreamedia Darwin. We have previously published a qualitative study that identified major knowledge gaps about hepatitis B in this community, and suggested that a tablet-based app would be an appropriate and popular tool to improve this knowledge. The process of developing the app was based on PAR principles, particularly ongoing consultation, evaluation, and discussion with the community throughout each iterative cycle. Stages included development of the storyboard, the translation process (forward translation and backtranslation), prelaunch community review, launch and initial community evaluation, and finally, wider launch and evaluation at a viral hepatitis conference. We produced an app called "Hep B Story" for use with iPad, iPhone, Android tablets, and mobile phones or personal computers. The app is culturally appropriate, audiovisual, interactive, and users can choose either English or Yolŋu Matha (the most common language in East Arnhem Land) as their preferred language. The initial evaluation demonstrated a statistically significant improvement in Hep B-related knowledge for 2 of 3 questions (P=.01 and .02, respectively) and overwhelmingly positive opinion regarding acceptability and ease of use (median rating of 5, on a 5-point Likert-type scale when users were asked if they would recommend the app to others). We describe the process of development of a bilingual hepatitis B-specific app for Indigenous Australians, using a PAR framework. The approach was found to be successful with positive evaluations.
Facilitating hydrological data analysis workflows in R: the RHydro package
NASA Astrophysics Data System (ADS)
Buytaert, Wouter; Moulds, Simon; Skoien, Jon; Pebesma, Edzer; Reusser, Dominik
2015-04-01
The advent of new technologies such as web-services and big data analytics holds great promise for hydrological data analysis and simulation. Driven by the need for better water management tools, it allows for the construction of much more complex workflows, that integrate more and potentially more heterogeneous data sources with longer tool chains of algorithms and models. With the scientific challenge of designing the most adequate processing workflow comes the technical challenge of implementing the workflow with a minimal risk for errors. A wide variety of new workbench technologies and other data handling systems are being developed. At the same time, the functionality of available data processing languages such as R and Python is increasing at an accelerating pace. Because of the large diversity of scientific questions and simulation needs in hydrology, it is unlikely that one single optimal method for constructing hydrological data analysis workflows will emerge. Nevertheless, languages such as R and Python are quickly gaining popularity because they combine a wide array of functionality with high flexibility and versatility. The object-oriented nature of high-level data processing languages makes them particularly suited for the handling of complex and potentially large datasets. In this paper, we explore how handling and processing of hydrological data in R can be facilitated further by designing and implementing a set of relevant classes and methods in the experimental R package RHydro. We build upon existing efforts such as the sp and raster packages for spatial data and the spacetime package for spatiotemporal data to define classes for hydrological data (HydroST). In order to handle simulation data from hydrological models conveniently, a HM class is defined. Relevant methods are implemented to allow for an optimal integration of the HM class with existing model fitting and simulation functionality in R. Lastly, we discuss some of the design challenges of the RHydro package, including integration with big data technologies, web technologies, and emerging data models in hydrology.
NASA Technical Reports Server (NTRS)
Hayden, Jeffrey L.; Jeffries, Alan
2012-01-01
The JPSS Ground System is a flexible system of systems responsible for telemetry, tracking & command (TT&C), data acquisition, routing and data processing services for a varied fleet of satellites to support weather prediction, modeling and climate modeling. To assist in this engineering effort, architecture modeling tools are being employed to translate the former NPOESS baseline to the new JPSS baseline. The paper will focus on the methodology for the system engineering process and the use of these architecture modeling tools within that process. The Department of Defense Architecture Framework version 2.0 (DoDAF 2.0) viewpoints and views that are being used to describe the JPSS GS architecture are discussed. The Unified Profile for DoDAF and MODAF (UPDM) and the Systems Modeling Language (SysML), as provided by extensions to the MagicDraw UML modeling tool, are used to develop the diagrams and tables that make up the architecture model. The model development process and structure are discussed, examples are shown, and details of handling the complexities of a large System of Systems (SoS), such as the JPSS GS, with an equally complex modeling tool, are described.
User-friendly solutions for microarray quality control and pre-processing on ArrayAnalysis.org
Eijssen, Lars M. T.; Jaillard, Magali; Adriaens, Michiel E.; Gaj, Stan; de Groot, Philip J.; Müller, Michael; Evelo, Chris T.
2013-01-01
Quality control (QC) is crucial for any scientific method producing data. Applying adequate QC introduces new challenges in the genomics field where large amounts of data are produced with complex technologies. For DNA microarrays, specific algorithms for QC and pre-processing including normalization have been developed by the scientific community, especially for expression chips of the Affymetrix platform. Many of these have been implemented in the statistical scripting language R and are available from the Bioconductor repository. However, application is hampered by lack of integrative tools that can be used by users of any experience level. To fill this gap, we developed a freely available tool for QC and pre-processing of Affymetrix gene expression results, extending, integrating and harmonizing functionality of Bioconductor packages. The tool can be easily accessed through a wizard-like web portal at http://www.arrayanalysis.org or downloaded for local use in R. The portal provides extensive documentation, including user guides, interpretation help with real output illustrations and detailed technical documentation. It assists newcomers to the field in performing state-of-the-art QC and pre-processing while offering data analysts an integral open-source package. Providing the scientific community with this easily accessible tool will allow improving data quality and reuse and adoption of standards. PMID:23620278
ERIC Educational Resources Information Center
Herazo Rivera, Jose David; Sagre Barboza, Anamaría
2016-01-01
Sociocultural theory argues that an individual's mental, social, and material activity is mediated by cultural tools. One such tool is the language or discourse teachers use during whole class interaction in the second language classroom. The purpose of this study was to examine how a Colombian second language teacher mediated her ninth-grade…
Erasmus, D; Schutte, L; van der Merwe, M; Geertsema, S
2013-12-01
To investigate whether privately practising speech-language therapists in South Africa are fulfilling their role of identification, assessment and intervention for adolescents with written-language and reading difficulties. Further needs concerning training with regard to this population group were also determined. A survey study was conducted, using a self-administered questionnaire. Twenty-two currently practising speech-language therapists who are registered members of the South African Speech-Language-Hearing Association (SASLHA) participated in the study. The respondents indicated that they are aware of their role regarding adolescents with written-language difficulties. However, they feel that South-African speech-language therapists are not fulfilling this role. Existing assessment tools and interventions for written-language difficulties are described as inadequate, and culturally and age inappropriate. Yet, the majority of the respondents feel that they are adequately equipped to work with adolescents with written-language difficulties, based on their own experience, self-study and secondary training. The respondents feel that training regarding effective collaboration with teachers is necessary to establish specific roles, and to promote speech-language therapy for adolescents among teachers. Further research is needed in developing appropriate assessment and intervention tools as well as improvement of training at an undergraduate level.
Interrogating Your Wisdom of Practice to Improve Classroom Practices
ERIC Educational Resources Information Center
Chappell, Philip
2017-01-01
This article presents a heuristic for language teachers to articulate and explore their fundamental theories of and philosophical stances towards language, language learning, and language teaching. It includes tools with which teachers can interrogate those theories, weighing them up against their actual classroom practices. Through presenting…
AAC Language Activity Monitoring: Entering the New Millennium.
ERIC Educational Resources Information Center
Hill, Katya; Romich, Barry
This report describes how augmentative and alternative communication (AAC) automated language activity monitoring can provide clinicians with the tools they need to collect and analyze language samples from the natural environment of children with disabilities for clinical intervention and outcomes measurements. The Language Activity Monitor (LAM)…
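As a hint of the kind of summary measure such automated monitoring supports, the Python sketch below computes communication rate from a timestamped language sample; the log format and the numbers are invented, not the actual LAM format.

```python
from datetime import datetime

# Hypothetical timestamped language sample: (clock time, utterance).
log = [
    ("10:15:02", "I want juice"),
    ("10:15:40", "more please"),
    ("10:16:25", "all done"),
]

def words_per_minute(sample):
    """Total words divided by minutes elapsed from first to last entry."""
    times = [datetime.strptime(t, "%H:%M:%S") for t, _ in sample]
    words = sum(len(utterance.split()) for _, utterance in sample)
    minutes = (times[-1] - times[0]).total_seconds() / 60
    return words / minutes if minutes else float("nan")

print(f"{words_per_minute(log):.1f} words per minute")
```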
ERIC Educational Resources Information Center
Roy, Debopriyo
2014-01-01
Besides focusing on grammar, writing skills, and web-based language learning, researchers in "CALL" and second language acquisition have also argued for the importance of promoting higher-order thinking skills in ESL (English as a Second Language) and EFL (English as a Foreign Language) classrooms. There is solid evidence supporting the…
Empirical Learner Language and the Levels of the "Common European Framework of Reference"
ERIC Educational Resources Information Center
Wisniewski, Katrin
2017-01-01
The "Common European Framework of Reference" (CEFR) is the most widespread reference tool for linking language tests, curricula, and national educational standards to levels of foreign language proficiency in Europe. In spite of this, little is known about how the CEFR levels (A1-C2) relate to empirical learner language(s). This article…
Language Management × 3: A Theory, a Sub-Concept, and a Business Strategy Tool
ERIC Educational Resources Information Center
Sanden, Guro Refsum
2016-01-01
The term "language management" has become a widely used expression in the sociolinguistic literature. Originally introduced by Jernudd and Neustupný in 1987, as a novel continuation of the language planning tradition stemming from the 1960/70s, language management along these lines has developed into the Language Management Theory (LMT).…
The Role of a Language Scale for Infant and Preschool Assessment
ERIC Educational Resources Information Center
Zimmerman, Irla Lee; Castilleja, Nancy Flores
2005-01-01
The PLS-4 (Preschool Language Scale, 4th edition) is a psychometrically sound instrument constructed to assess language skills in children from birth to 6 years 11 months. It is a useful diagnostic and research tool that can be used to identify current comprehension and expressive language skills and can measure changes in language skills over…
A UML model for the description of different brain-computer interface systems.
Quitadamo, Lucia Rita; Abbafati, Manuel; Saggio, Giovanni; Marciani, Maria Grazia; Cardarilli, Gian Carlo; Bianchi, Luigi
2008-01-01
BCI research lacks both a universal descriptive language shared among labs and a single standard model for describing BCI systems. This creates serious problems in comparing the performance of different BCI processes and in unifying tools and resources. In view of this, we implemented a Unified Modeling Language (UML) model capable of describing virtually any BCI protocol and demonstrated that it can be successfully applied to the most common ones, such as P300, mu-rhythms, SCP, SSVEP and fMRI. Finally, we illustrated the advantages of using a standard terminology for BCIs and showed how the same basic structure can be adopted for the implementation of new systems.
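The model itself is expressed in UML; the Python classes below are only a hypothetical rendering of how one shared skeleton can describe several protocols, with stage names chosen for illustration rather than taken from the paper.

```python
from abc import ABC, abstractmethod

class BCIProtocol(ABC):
    """Shared processing skeleton that concrete protocols specialize."""

    def run_trial(self, raw_signal):
        return self.classify(self.extract_features(self.preprocess(raw_signal)))

    @abstractmethod
    def preprocess(self, raw_signal): ...

    @abstractmethod
    def extract_features(self, signal): ...

    @abstractmethod
    def classify(self, features): ...

class P300Speller(BCIProtocol):
    def preprocess(self, raw_signal):
        return raw_signal   # e.g., band-pass filter, epoch around stimuli
    def extract_features(self, signal):
        return signal       # e.g., averaged post-stimulus ERP amplitudes
    def classify(self, features):
        return "selected_symbol"

class SSVEPSpeller(BCIProtocol):
    def preprocess(self, raw_signal):
        return raw_signal   # e.g., notch-filter power-line noise
    def extract_features(self, signal):
        return signal       # e.g., spectral power at flicker frequencies
    def classify(self, features):
        return "attended_target"
```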
NASA Technical Reports Server (NTRS)
Gryphon, Coranth D.; Miller, Mark D.
1991-01-01
PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.
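The fact-sharing idea can be sketched in a few lines of Python; this toy runs two rule engines sequentially and hands derived facts from one to the other, whereas PCLIPS distributes real CLIPS systems across processors. All rule and fact names are invented.

```python
class Engine:
    """A minimal forward-chaining engine that can import peers' facts."""

    def __init__(self, rules):
        self.rules, self.facts = rules, set()

    def assert_facts(self, facts):
        self.facts |= set(facts)

    def run(self):
        """Fire rules until quiescence; return the newly derived facts."""
        derived, changed = set(), True
        while changed:
            changed = False
            for needed, conclusion in self.rules:
                if needed <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    derived.add(conclusion)
                    changed = True
        return derived

diagnoser = Engine([({"fever", "cough"}, "flu_suspected")])
advisor = Engine([({"flu_suspected"}, "recommend_rest")])

diagnoser.assert_facts({"fever", "cough"})
advisor.assert_facts(diagnoser.run())  # share derived facts with a peer
print(advisor.run())                   # {'recommend_rest'}
```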
Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim
2005-01-01
With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept builds on current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 in the Extensible Markup Language (XML) that facilitates integration into current information systems or coding software while accommodating different languages and versions. In this context, XML offers a complete framework of related technologies and standard processing tools that help in developing interoperable applications.
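The abstract does not reproduce the schema, so the fragment below is an invented structure meant only to show how a hierarchical, multilingual classification can be processed with standard XML tools.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment; the actual ICD-10 representation differs.
ICD_XML = """
<classification system="ICD-10">
  <category code="J00-J06">
    <label lang="en">Acute upper respiratory infections</label>
    <label lang="de">Akute Infektionen der oberen Atemwege</label>
    <category code="J00">
      <label lang="en">Acute nasopharyngitis [common cold]</label>
      <label lang="de">Akute Rhinopharyngitis [Erkältungsschnupfen]</label>
    </category>
  </category>
</classification>
"""

def labels(root, lang):
    """Walk the hierarchy, yielding (code, label) pairs for one language."""
    for category in root.iter("category"):
        label = category.find(f"label[@lang='{lang}']")
        if label is not None:
            yield category.get("code"), label.text

root = ET.fromstring(ICD_XML)
for code, text in labels(root, "en"):
    print(code, text)
```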
CRIE: An automated analyzer for Chinese texts.
Sung, Yao-Ting; Chang, Tao-Hsing; Lin, Wei-Chun; Hsieh, Kuan-Sheng; Chang, Kuo-En
2016-12-01
Textual analysis has been applied to various fields, such as discourse analysis, corpus studies, text leveling, and automated essay evaluation. Several tools have been developed for analyzing texts written in alphabetic languages such as English and Spanish. However, currently there is no tool available for analyzing Chinese-language texts. This article introduces a tool for the automated analysis of simplified and traditional Chinese texts, called the Chinese Readability Index Explorer (CRIE). Composed of four subsystems and incorporating 82 multilevel linguistic features, CRIE is able to conduct the major tasks of segmentation, syntactic parsing, and feature extraction. Furthermore, the integration of linguistic features with machine learning models enables CRIE to provide leveling and diagnostic information for texts in language arts, texts for learning Chinese as a foreign language, and texts with domain knowledge. The usage and validation of the functions provided by CRIE are also introduced.
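CRIE's own pipeline couples 82 multilevel features with trained machine learning models; the scikit-learn sketch below is a deliberately tiny illustration of that feature-plus-classifier design, with crude surface features and an invented four-text corpus.

```python
from sklearn.linear_model import LogisticRegression

def surface_features(text):
    """A few crude surface features of a Chinese text."""
    sentences = [s for s in text.replace("！", "。").replace("？", "。").split("。") if s]
    chars = [c for c in text if c.strip() and c not in "。！？，"]
    return [
        len(chars),                            # text length in characters
        len(sentences),                        # sentence count
        len(chars) / max(len(sentences), 1),   # mean sentence length
        len(set(chars)) / max(len(chars), 1),  # character type-token ratio
    ]

# Invented miniature corpus: (text, difficulty level).
corpus = [
    ("我有一隻貓。牠很小。", 0),
    ("小鳥在樹上唱歌。天氣很好。", 0),
    ("經濟全球化深刻影響了區域發展的結構與模式。", 1),
    ("該理論框架整合了語言學與認知科學的研究成果。", 1),
]
X = [surface_features(text) for text, _ in corpus]
y = [level for _, level in corpus]

model = LogisticRegression().fit(X, y)
print(model.predict([surface_features("貓在睡覺。")]))  # expect level 0
```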
Can mathematics explain the evolution of human language?
Witzany, Guenther
2011-09-01
Investigation into the sequence structure of the genetic code by means of an informatic approach is a real success story. The features of human language are also the object of investigation within the realm of formal language theories. They focus on the common rules of a universal grammar that lies behind all languages and determines the generation of syntactic structures. This universal grammar is a depiction of material reality, i.e., the hidden logical order of things and its relations determined by natural laws. Therefore mathematics is viewed not only as an appropriate tool to investigate human language and genetic code structures through computer science-based formal language theory but is itself a depiction of material reality. This confusion between language as a scientific tool to describe observations/experiences within cognitively constructed models and formal language as a direct depiction of material reality occurs not only in current approaches but was the central focus of the philosophy of science debate in the twentieth century, with rather unexpected results. This article recalls these results and their implications for more recent mathematical approaches that also attempt to explain the evolution of human language.
Foreign Language Translation of Chemical Nomenclature by Computer
2009-01-01
Chemical compound names remain the primary method for conveying molecular structures between chemists and researchers. In research articles, patents, chemical catalogues, government legislation, and textbooks, the use of IUPAC and traditional compound names is universal, despite efforts to introduce more machine-friendly representations such as identifiers and line notations. Fortunately, advances in computing power now allow chemical names to be parsed and generated (read and written) with almost the same ease as conventional connection tables. A significant complication, however, is that although the vast majority of chemistry uses English nomenclature, a significant fraction is in other languages. This complicates the task of filing and analyzing chemical patents, purchasing from compound vendors, and text mining research articles or Web pages. We describe some issues with manipulating chemical names in various languages, including British, American, German, Japanese, Chinese, Spanish, Swedish, Polish, and Hungarian, and describe the current state-of-the-art in software tools to simplify the process. PMID:19239237
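As a toy illustration of why multilingual nomenclature is tractable for machines, the sketch below maps simple alkane names from a few of the listed languages onto SMILES strings; real software handles full IUPAC grammars, and these tables cover only four stems.

```python
# Stem -> carbon count; Spanish and Swedish drop the 'h', so both
# spellings of the first two stems appear.
STEMS = {"meth": 1, "met": 1, "eth": 2, "et": 2, "prop": 3, "but": 4}

# Language-specific alkane suffixes.
SUFFIXES = {"en": "ane", "de": "an", "es": "ano", "sv": "an", "pl": "an"}

def alkane_to_smiles(name, lang="en"):
    """Parse e.g. 'propane' (en) or 'Propan' (de) into SMILES 'CCC'."""
    lowered = name.lower()
    suffix = SUFFIXES[lang]
    if not lowered.endswith(suffix):
        raise ValueError(f"not a recognized alkane name: {name}")
    stem = lowered[: -len(suffix)]
    if stem not in STEMS:
        raise ValueError(f"unknown stem: {stem}")
    return "C" * STEMS[stem]

print(alkane_to_smiles("butane"))         # CCCC
print(alkane_to_smiles("Ethan", "de"))    # CC
print(alkane_to_smiles("metano", "es"))   # C
```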
Siu, Elaine; Man, David W K
2006-09-01
Children with Specific Language Impairment present with delayed language development but do not have a history of hearing impairment, mental deficiency, or associated social or behavioral problems. Non-word repetition has been suggested as an index of the capacity of phonological working memory. There is a paucity of such studies among Hong Kong Chinese children. This preliminary study aimed to examine the relationship between phonological working memory and Specific Language Impairment, through the processes of non-word repetition and sentence comprehension, in children with Specific Language Impairment and pre-school children with normal language development. Both groups of children were screened with a standardized language test. A list of Cantonese (the commonest dialect used in Hong Kong) multisyllabic nonsense utterances and a set of 18 sentences were developed for this study. t-tests and Pearson correlations were used to study the relationship between non-word repetition, working memory and Specific Language Impairment. Twenty-three pre-school children with Specific Language Impairment (mean age = 68.30 months; SD = 6.90) and another 23 pre-school children (mean age = 67.30 months; SD = 6.16) participated in the study. A significant difference in performance was found between the Specific Language Impairment group and the normal language group on both the multisyllabic nonsense utterance repetition task and the sentence comprehension task. A length effect was noted in the Specific Language Impairment group, which is consistent with findings in the literature. In addition, correlations were observed between the number of nonsense utterances repeated and the number of elements comprehended. Cantonese multisyllabic nonsense utterances may be worth developing further as a screening tool for the early detection of children with Specific Language Impairment.
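The statistical comparisons named here (independent-samples t-tests and Pearson correlations) are easy to illustrate with scipy; the scores below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented non-word repetition scores for two groups of 23 children.
sli = rng.normal(loc=10, scale=3, size=23)      # Specific Language Impairment
typical = rng.normal(loc=15, scale=3, size=23)  # normal language development

t, p = stats.ttest_ind(sli, typical)
print(f"t = {t:.2f}, p = {p:.4f}")

# Invented sentence-comprehension scores for the same SLI children.
comprehension = 0.8 * sli + rng.normal(scale=2, size=23)
r, p = stats.pearsonr(sli, comprehension)
print(f"r = {r:.2f}, p = {p:.4f}")
```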
Fryer, Caroline; Mackintosh, Shylie; Stanley, Mandy; Crichton, Jonathan
2012-01-01
This paper reports a methodological review of language-appropriate practice in qualitative research when language groups are not determined prior to participant recruitment. When older people from multiple language groups participate in research using in-depth interviews, additional challenges are posed for the trustworthiness of findings, which raises the question of how such challenges are addressed. The Cumulative Index to Nursing and Allied Health Literature, Scopus, Embase, Web of Science, Ageline, PsycINFO, Sociological Abstracts, Google Scholar and Allied and Complementary Medicine databases were systematically searched for the period 1840 to September 2009, using the combined search terms 'ethnic', 'cultural', 'aged', 'health' and 'qualitative'. Studies were independently appraised by two authors using a quality appraisal tool developed for the review, based on a protocol from the McMaster University Occupational Therapy Evidence-Based Practice Research Group. Nine studies were included. Consideration of language diversity within the research process was poor in all studies. The role of language assistants was largely absent from study methods. Only one study reported using participants' preferred languages for informed consent. More examples are needed of how to conduct rigorous in-depth interviews with older people from multiple language groups when languages are not determined before recruitment. This will require both researchers and funding bodies to recognize the importance to contemporary healthcare of including linguistically diverse people in participant samples.
Lodge, Amy C; Kuhn, Wendy; Earley, Juli; Stevens Manser, Stacey
2018-06-01
The Recovery Self-Assessment (RSA) is a reliable and valid tool used to measure recovery-oriented services. Recent studies, however, suggest that the length and reading level of the RSA make its routine use in service settings difficult. Recognizing the importance of including people with lived experience of a mental health challenge in research processes, and the need to enhance the utility of tools that measure recovery-oriented services, this paper describes an innovative multistep process in which researchers and peer provider consultants revised the provider version of the RSA to create a new instrument, the Recovery-Oriented Services Assessment (ROSA). The authors conducted an exploratory factor analysis (EFA) with principal axis factoring extraction and direct oblimin rotation to evaluate the underlying structure of the provider RSA, using data from mental health employees (n = 323). To triangulate the findings of the EFA, quantitative and qualitative data were collected from peer provider consultants (n = 9) on the importance and language of RSA items. EFA results indicated that a 1-factor solution provided the best fit and explained 48% of the total variance. Consultants triangulated the EFA results and recommended the addition of 2 items along with language revisions. These results were used to develop the ROSA, a 15-item instrument measuring recovery-oriented services with accessible language. Two versions of the ROSA were developed: a staff version and a people-in-services version. The ROSA may provide organizations with a more accessible way to measure the extent to which their services are recovery oriented.
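An analysis along these lines can be sketched with the third-party factor_analyzer package; everything below is synthetic, and the item count and loading pattern are invented to mirror the reported 1-factor result.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

rng = np.random.default_rng(0)

# Synthetic stand-in for 323 staff responding to a 30-item survey,
# with one latent factor driving every item.
latent = rng.normal(size=(323, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, 30))
items = latent @ loadings + rng.normal(scale=0.7, size=(323, 30))

# Principal axis factoring; with a single factor there is nothing to
# rotate, so the oblimin step is omitted in this reduced sketch.
fa = FactorAnalyzer(n_factors=1, method="principal", rotation=None)
fa.fit(items)

proportion = fa.get_factor_variance()[1][0]  # proportion of variance
print(f"1-factor solution explains {proportion:.0%} of the variance")
```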
Evaluation of PHI Hunter in Natural Language Processing Research.
Redd, Andrew; Pickard, Steve; Meystre, Stephane; Scehnet, Jeffrey; Bolton, Dan; Heavirland, Julia; Weaver, Allison Lynn; Hope, Carol; Garvin, Jennifer Hornung
2015-01-01
We introduce and evaluate a new, easily accessible tool using a common statistical analysis and business analytics software suite, SAS, which can be programmed to remove specific protected health information (PHI) from a text document. Removal of PHI is important because the quantity of text documents used for research with natural language processing (NLP) is increasing. When using existing data for research, an investigator must remove all PHI not needed for the research to comply with human subjects' right to privacy. This process is similar, but not identical, to de-identification of a given set of documents. PHI Hunter removes PHI from free-form text. It is a set of rules to identify and remove patterns in text. PHI Hunter was applied to 473 Department of Veterans Affairs (VA) text documents randomly drawn from a research corpus stored as unstructured text in VA files. PHI Hunter performed well with PHI in the form of identification numbers such as Social Security numbers, phone numbers, and medical record numbers. The most commonly missed PHI items were names and locations. Incorrect removal of information occurred with text that looked like identification numbers. PHI Hunter fills a niche role that is related to but not equal to the role of de-identification tools. It gives research staff a tool to reasonably increase patient privacy. It performs well for highly sensitive PHI categories that are rarely used in research, but still shows possible areas for improvement. More development for patterns of text and linked demographic tables from electronic health records (EHRs) would improve the program so that more precise identifiable information can be removed. PHI Hunter is an accessible tool that can flexibly remove PHI not needed for research. If it can be tailored to the specific data set via linked demographic tables, its performance will improve in each new document set. PMID:26807078
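PHI Hunter itself is implemented in SAS; the Python sketch below only illustrates the rule-based pattern-removal approach it takes, with a few invented patterns and a fabricated clinical note.

```python
import re

# Illustrative patterns only; a production tool needs many more rules.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def scrub(text):
    """Replace every matched PHI pattern with a category placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt (MRN: 1234567) called from 801-555-0199 re: SSN 123-45-6789."
print(scrub(note))
# Pt ([MRN]) called from [PHONE] re: SSN [SSN].
```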
ChemicalTagger: A tool for semantic text-mining in chemistry
2011-01-01
Background The primary method of scientific communication is the published scientific article or thesis, which uses natural language combined with domain-specific terminology. As such, these documents contain free-flowing unstructured text. Given the usefulness of data extraction from unstructured literature, we aim to show how this can be achieved for the discipline of chemistry. The highly formulaic style of writing most chemists adopt makes their contributions well suited to high-throughput Natural Language Processing (NLP) approaches. Results We have developed the ChemicalTagger parser as a medium-depth, phrase-based semantic NLP tool for the language of chemical experiments. Tagging is based on a modular architecture and uses a combination of OSCAR, domain-specific regexes and English taggers to identify parts of speech. An ANTLR grammar is used to structure this into tree-based phrases. Using a metric that allows for overlapping annotations, we achieved machine-annotator agreements of 88.9% for phrase recognition and 91.9% for phrase-type identification (Action names). Conclusions It is possible to parse chemical experimental text using rule-based techniques in conjunction with a formal grammar parser. ChemicalTagger has been deployed for over 10,000 patents and has identified solvents from their linguistic context with >99.5% precision. PMID:21575201
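ChemicalTagger layers OSCAR, regex taggers, and an ANTLR grammar; the fragment below is a far shallower toy that tags action, quantity, and chemical spans with hand-written regexes, all invented for the illustration.

```python
import re

TOKEN_PATTERNS = [
    ("ACTION", r"\b(?:added|add|stirred|stir|heated|heat|washed|wash|dissolved|dissolve)\b"),
    ("QUANTITY", r"\d+(?:\.\d+)?\s*(?:mL|mg|mmol|g|°C)"),
    ("CHEMICAL", r"\b(?:ethanol|water|sodium chloride|acetone)\b"),
]

def tag(sentence):
    """Return (span, label) pairs for every pattern match, in text order."""
    spans = []
    for label, pattern in TOKEN_PATTERNS:
        for match in re.finditer(pattern, sentence, re.IGNORECASE):
            spans.append((match.start(), match.group(), label))
    return [(text, label) for _, text, label in sorted(spans)]

sentence = "The mixture was stirred at 60 °C and washed with 20 mL of ethanol."
print(tag(sentence))
# [('stirred', 'ACTION'), ('60 °C', 'QUANTITY'), ('washed', 'ACTION'),
#  ('20 mL', 'QUANTITY'), ('ethanol', 'CHEMICAL')]
```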
White matter structure changes as adults learn a second language.
Schlegel, Alexander A; Rudelson, Justin J; Tse, Peter U
2012-08-01
Traditional models hold that the plastic reorganization of brain structures occurs mainly during childhood and adolescence, leaving adults with limited means to learn new knowledge and skills. Research within the last decade has begun to overturn this belief, documenting changes in the brain's gray and white matter as healthy adults learn simple motor and cognitive skills [Lövdén, M., Bodammer, N. C., Kühn, S., Kaufmann, J., Schütze, H., Tempelmann, C., et al. Experience-dependent plasticity of white-matter microstructure extends into old age. Neuropsychologia, 48, 3878-3883, 2010; Taubert, M., Draganski, B., Anwander, A., Müller, K., Horstmann, A., Villringer, A., et al. Dynamic properties of human brain structure: Learning-related changes in cortical areas and associated fiber connections. The Journal of Neuroscience, 30, 11670-11677, 2010; Scholz, J., Klein, M. C., Behrens, T. E. J., & Johansen-Berg, H. Training induces changes in white-matter architecture. Nature Neuroscience, 12, 1370-1371, 2009; Draganski, B., Gaser, C., Busch, V., Schuirer, G., Bogdahn, U., & May, A. Changes in grey matter induced by training. Nature, 427, 311-312, 2004]. Although the significance of these changes is not fully understood, they reveal a brain that remains plastic well beyond early developmental periods. Here we investigate the role of adult structural plasticity in the complex, long-term learning process of foreign language acquisition. We collected monthly diffusion tensor imaging scans of 11 English speakers who took a 9-month intensive course in written and spoken Modern Standard Chinese as well as from 16 control participants who did not study a language. We show that white matter reorganizes progressively across multiple sites as adults study a new language. Language learners exhibited progressive changes in white matter tracts associated with traditional left hemisphere language areas and their right hemisphere analogs. Surprisingly, the most significant changes occurred in frontal lobe tracts crossing the genu of the corpus callosum-a region not generally included in current neural models of language processing. These results indicate that plasticity of white matter plays an important role in adult language learning and additionally demonstrate the potential of longitudinal diffusion tensor imaging as a new tool to yield insights into cognitive processes.
Iranian EFL Teachers' Perceptions of the Difficulties of Implementing CALL
ERIC Educational Resources Information Center
Hedayati, Hora; Marandi, S. Susan
2014-01-01
Despite the spread of reliable technological tools and the availability of computers in Iranian universities, as well as the mounting evidence of the effectiveness of blended learning, many Iranian language teachers are still reluctant to incorporate such tools in their English as a foreign language (EFL) classes. This study inspected the status…
Codeswitching as a Tool in Teaching Italian in Malta
ERIC Educational Resources Information Center
Gauci, Hertian; Camilleri Grima, Antoinette
2013-01-01
This article addresses the issue of teacher codeswitching in the teaching of Italian in Malta. The analysis of teacher codeswitching shows that the learners' first language (L1), Maltese, is used as a pedagogical tool to enhance language learning. Teachers frequently resort to Maltese to provide more learner-friendly explanations of grammatical…
ERIC Educational Resources Information Center
Mills, Kathy A.; Chandra, Vinesh; Park, Ji Yong
2013-01-01
This paper demonstrates, following Vygotsky, that language and tool use has a critical role in the collaborative problem-solving behaviour of school-age children. It reports original ethnographic classroom research examining the convergence of speech and practical activity in children's collaborative problem solving with robotics programming…
Comparing Six Video Chat Tools: A Critical Evaluation by Language Teachers
ERIC Educational Resources Information Center
Eroz-Tuga, Betil; Sadler, Randall
2009-01-01
This article presents a critical comparison of the usefulness and practicality of six CMC video chat tools (CUworld, ICQ, MSN Messenger, Paltalk, Skype, and Yahoo Messenger) from the perspective of language teaching professionals. This comparison is based on the results of a semester-long project between graduate students at an American university…
Enhancing Beginners' Second Language Learning through an Informal Online Environment
ERIC Educational Resources Information Center
Chakowa, Jessica
2018-01-01
Web 2.0 tools are used increasingly to support second language learning, but there have been limited studies involving beginner learners, multiple technologies, and informal settings. This current study addresses this gap and investigates the factors affecting students' interest in a nongraded online learning environment and what kinds of tools,…
Mobile Learning: A Powerful Tool for Ubiquitous Language Learning
ERIC Educational Resources Information Center
Gomes, Nelson; Lopes, Sérgio; Araújo, Sílvia
2016-01-01
Mobile devices (smartphones, tablets, e-readers, etc.) have come to be used as tools for mobile learning. Several studies support the integration of such technological devices with learning, particularly with language learning. In this paper, we wish to present an Android app designed for the teaching and learning of Portuguese as a foreign…
Class Model Development Using Business Rules
NASA Astrophysics Data System (ADS)
Skersys, Tomas; Gudas, Saulius
New developments in the area of computer-aided system engineering (CASE) greatly improve processes of the information systems development life cycle (ISDLC). Much effort is put into quality improvement, but IS development projects still suffer from the poor quality of models produced during the system analysis and design cycles. To some degree, the quality of models developed using CASE tools can be assured through various automated model comparison and syntax-checking procedures. It is also reasonable to check these models against business domain knowledge, but the domain knowledge stored in the repository of a CASE tool (the enterprise model) is insufficient (Gudas et al. 2004). Involving business domain experts in these processes is complicated because non-IT people often find it difficult to understand models developed by IT professionals using some specific modeling language.
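One elementary form of the domain-knowledge check described above is comparing the concept names in a class model against the enterprise model's vocabulary; the sketch below does this with plain set operations, and every name in it is invented.

```python
# Concepts captured in the CASE tool repository (enterprise model).
enterprise_model = {"Customer", "Order", "Invoice", "Product", "Shipment"}

# Classes appearing in an analyst's class diagram.
class_model = {"Customer", "Order", "Invoice", "Discount"}

unknown = class_model - enterprise_model    # classes without domain backing
uncovered = enterprise_model - class_model  # domain concepts not yet modeled

print("Classes with no enterprise-model counterpart:", unknown)
print("Domain concepts missing from the class model:", uncovered)
```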