Visual Programming: A Programming Tool for Increasing Mathematics Achievement
ERIC Educational Resources Information Center
Swanier, Cheryl A.; Seals, Cheryl D.; Billionniere, Elodie V.
2009-01-01
This paper aims to address the need to increase student achievement in mathematics using a visual programming language such as Scratch. This visual programming language facilitates creating an environment where students in K-12 education can develop mathematical simulations while learning a visual programming language at the same time.…
NASA Astrophysics Data System (ADS)
Smith, Bryan J.
Current research suggests that many students do not know how to program very well at the conclusion of their introductory programming course. We believe that one reason novices have such difficulty learning to program is that engineering novices often learn through a lecture format: someone with programming knowledge lectures, the novices attempt to absorb the content, and then reproduce it during exams. By appealing primarily to programming novices who prefer to understand visually, we investigate whether novices understand programming better when computer science concepts are presented using a visual programming language rather than a text-based programming language. This method builds upon previous research suggesting that most engineering students are visual learners, and we propose that using a flow-based visual programming language will address some of the topics that are most important and most difficult for programming novices. We use an existing flow-model tool, RAPTOR, to test this method, and report the program-understanding results obtained with this approach.
The Next Generation of Ground Operations Command and Control; Scripting in C Sharp and Visual Basic
NASA Technical Reports Server (NTRS)
Ritter, George; Pedoto, Ramon
2010-01-01
This slide presentation reviews the use of scripting languages in ground operations command and control. It describes the use of scripting languages in a historical context and the advantages and disadvantages of scripts. It then describes the Enhanced and Redesigned Scripting (ERS) language, which was designed to combine the graphical and IDE richness of a compiled programming language with the utility of a scripting language. ERS uses the Microsoft Visual Studio programming environment and offers custom controls that enable an ERS developer to extend the Visual Basic and C sharp language interface with the Payload Operations Integration Center (POIC) telemetry and command system.
ERIC Educational Resources Information Center
Simpkins, N. K.
2014-01-01
This article reports an investigation into undergraduate student experiences and views of a visual or "blocks" based programming language and its environment. An additional and central aspect of this enquiry is to substantiate the perceived degree of transferability of programming skills learnt within the visual environment to a typical…
ERIC Educational Resources Information Center
Wielgosz, Meg; Molyneux, Paul
2015-01-01
Students learning English as an additional language (EAL) in Australian schools frequently struggle with the cultural and linguistic demands of the classroom while concurrently grappling with issues of identity and belonging. This article reports on an investigation of the role primary school visual arts programs, distinct programs with a…
Exploring the Engagement Effects of Visual Programming Language for Data Structure Courses
ERIC Educational Resources Information Center
Chang, Chih-Kai; Yang, Ya-Fei; Tsai, Yu-Tzu
2017-01-01
Previous research indicates that understanding the state of learning motivation enables researchers to understand students' learning processes in depth. Studies have shown that visual programming languages use graphical code, enabling learners to learn effectively, improving learning effectiveness, increasing the fun of learning, and offering various other…
Synchronization in Scratch: A Case Study with Education Science Students
ERIC Educational Resources Information Center
Nikolos, Dimitris; Komis, Vassilis
2015-01-01
The Scratch programming language is an introductory programming language for students. It is also a visual concurrent programming language, where multiple threads are executed simultaneously. Synchronization in concurrent languages is a complex task for novices to understand. Our research is focused on strategies and methods applied by novice…
Visual Basic Applications to Physics Teaching
ERIC Educational Resources Information Center
Chitu, Catalin; Inpuscatu, Razvan Constantin; Viziru, Marilena
2011-01-01
Derived from the BASIC language, VB (Visual Basic) is a programming language focused on the video interface component. With graphics and functional components implemented, the programmer is able to assemble and use these components to achieve the desired application in a relatively short time. VB is a useful tool in physics teaching by creating…
ERIC Educational Resources Information Center
Conroy, Paula
1999-01-01
Describes Total Physical Response, a teaching technique that teachers of English-as-a-Second-Language (ESL) use to instruct students who are in the process of learning a second language and modifications that can be made to the program to teach students with visual impairments. (CR)
Program Supports Scientific Visualization
NASA Technical Reports Server (NTRS)
Keith, Stephan
1994-01-01
Primary purpose of General Visualization System (GVS) computer program is to support scientific visualization of data generated by panel-method computer program PMARC_12 (inventory number ARC-13362) on Silicon Graphics Iris workstation. Enables user to view PMARC geometries and wakes as wire frames or as light shaded objects. GVS is written in C language.
Declarative language design for interactive visualization.
Heer, Jeffrey; Bostock, Michael
2010-01-01
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
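To make the separation of specification from execution concrete, here is a minimal Python sketch of the declarative idea the abstract describes; it is not Protovis's actual API, and the spec layout and property names are illustrative assumptions.

```python
# A minimal sketch (not Protovis's API) of declarative visualization:
# the visualization is *specified* as data-plus-property-functions,
# and a separate runtime decides how to execute the specification.

data = [1.0, 1.2, 1.7, 1.5, 0.7, 0.3]

# Declarative specification: each mark maps a datum and its index to visual properties.
spec = {
    "mark": "bar",
    "properties": {
        "x":      lambda d, i: i * 20,
        "height": lambda d, i: d * 80,
        "fill":   lambda d, i: "steelblue",
    },
}

def render(spec, data):
    """A stand-in runtime: evaluates the spec against the data.
    A real system could instead optimize, animate, or retarget the same spec."""
    for i, d in enumerate(data):
        props = {name: fn(d, i) for name, fn in spec["properties"].items()}
        print(spec["mark"], props)

render(spec, data)
```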
DiSalvo, Betsy
2014-01-01
To determine appropriate computer science curricula, educators sought to better understand the different affordances of teaching with a visual programming language (Alice) or a text-based language (Jython). Although students often preferred one language, that language wasn't necessarily the one from which they learned the most.
Visual Basic Programming Impact on Cognitive Style of College Students: Need for Prerequisites
ERIC Educational Resources Information Center
White, Garry L.
2012-01-01
This research investigated the impact learning a visual programming language, Visual Basic, has on hemispheric cognitive style, as measured by the Hemispheric Mode Indicator (HMI). The question to be answered is: will a computer programming course help students improve their cognitive abilities in order to perform well? The cognitive styles for…
Using Visual Basic to Teach Programming for Geographers.
ERIC Educational Resources Information Center
Slocum, Terry A.; Yoder, Stephen C.
1996-01-01
Outlines reasons why computer programming should be taught to geographers. These include experience using macro (scripting) languages and sophisticated visualization software, and developing a deeper understanding of general hardware and software capabilities. Discusses the distinct advantages and few disadvantages of the programming language…
NASA Astrophysics Data System (ADS)
Lim, Chen Kim; Tan, Kian Lam; Yusran, Hazwanni; Suppramaniam, Vicknesh
2017-10-01
Visual language, or visual representation, has been used in the past few years to express knowledge graphically. One important graphical element is the fractal, and the L-System is a mathematics-based grammatical model for modelling cell development and plant topology. From the plant model, L-Systems can be interpreted as music sound and score. In this paper, LSound, a Visual Language Programming (VLP) framework, has been developed to model plants as music sound and to generate music scores, and vice versa. The objectives of this research are threefold: (i) to expand the grammar dictionary of L-Systems music based on visual programming, (ii) to design and produce a user-friendly, icon-based visual language framework for L-Systems musical score generation that helps beginning learners in the musical field, and (iii) to generate music scores from plant models and vice versa using the L-Systems method. This research follows a four-phase methodology in which the plant is first modelled, the music is then interpreted, the sound is output through MIDI, and finally the score is generated. LSound is compared technically to other existing applications in terms of its capability to model the plant, render the music, and generate the sound. LSound was found to be a flexible framework in which the plant can easily be altered through arrow-based programming and the music score can be altered through music symbols and notes. This work encourages non-experts to understand L-Systems and music hand-in-hand.
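As background for the grammar dictionary mentioned above, the following Python sketch shows how an L-System rewrites its axiom and how a symbol-to-note mapping can turn the result into a rough score; the rewrite rule and note mapping are illustrative assumptions, not LSound's actual grammar.

```python
# A minimal sketch of the L-System idea behind plant-to-music generation.
# The rule and the symbol-to-pitch mapping below are hypothetical.

def lsystem(axiom, rules, iterations):
    """Rewrite every symbol of the axiom in parallel for a number of iterations."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic plant-like rule: F = draw forward, +/- = turn, [ ] = branch.
rules = {"F": "F[+F]F[-F]F"}
plant = lsystem("F", rules, 2)

# Hypothetical musical interpretation: each symbol becomes a MIDI pitch offset.
note_map = {"F": 0, "+": 2, "-": -2, "[": 4, "]": -4}
base_pitch = 60  # middle C
score = [base_pitch + note_map[ch] for ch in plant if ch in note_map]

print(plant)
print(score)
```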
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Shaykhian, Gholam Ali
2011-01-01
MatLab(TradeMark) (MATrix LABoratory) is a numerical computation and simulation tool used by thousands of scientists and engineers in many countries. MatLab does purely numerical calculations and can be used as a glorified calculator or an interpreted programming language; its real strength is in matrix manipulations. Computer algebra functionality is achieved within the MatLab environment using the "symbolic" toolbox. This feature is similar to computer algebra programs, such as Maple or Mathematica, that calculate with mathematical equations using symbolic operations. MatLab in its interpreted programming language form (command interface) is similar to well-known programming languages such as C/C++ and supports data structures and cell arrays to define classes in object-oriented programming. As such, MatLab is equipped with most of the essential constructs of a higher-level programming language. MatLab is packaged with an editor and debugging functionality useful for analyzing large MatLab programs and finding errors. We believe there are many ways to approach real-world problems; prescribed methods to ensure that the resulting solutions are incorporated into the design and analysis of data processing and visualization can benefit engineers and scientists in gaining wider insight into the actual implementation of their respective experiments. This presentation will focus on the data processing and visualization aspects of engineering and scientific applications. Specifically, it will discuss methods and techniques to perform intermediate-level data processing covering engineering and scientific problems. MatLab programming techniques, including reading various data file formats to produce customized publication-quality graphics, importing engineering and/or scientific data, organizing data in tabular format, exporting data to be used by other software programs such as Microsoft Excel, data presentation, and visualization, will be discussed.
A String Search Marketing Application Using Visual Programming
ERIC Educational Resources Information Center
Chin, Jerry M.; Chin, Mary H.; Van Landuyt, Cathryn
2013-01-01
This paper demonstrates the use of programming software that provides the student programmer with visual cues for constructing the code for a programming assignment. This method does not disregard or minimize the syntax or required logical constructs. The student can concentrate more on the logic and less on the language itself.
The Next Generation of Ground Operations Command and Control; Scripting in C# and Visual Basic
NASA Technical Reports Server (NTRS)
Ritter, George; Pedoto, Ramon
2010-01-01
Scripting languages have become a common method for implementing command and control solutions in space ground operations. The Systems Test and Operations Language (STOL), the Huntsville Operations Support Center (HOSC) Scripting Language Processor (SLP), and the Spacecraft Control Language (SCL) offer script-commands that wrap tedious operations tasks into single calls. Since script-commands are interpreted, they also offer a certain amount of hands-on control that is highly valued in space ground operations. Although compiled programs seem to be unsuited for interactive user control and are more complex to develop, Marshall Space Flight Center (MSFC) has developed a product called Enhanced and Redesigned Scripting (ERS) that makes use of the graphical and logical richness of a programming language while offering the hands-on ease of control of a scripting language. ERS is currently used by the International Space Station (ISS) Payload Operations Integration Center (POIC) Cadre team members. ERS integrates spacecraft command mnemonics, telemetry measurements, and command and telemetry control procedures into a standard programming language, while making use of Microsoft's Visual Studio for developing Visual Basic (VB) or C# ground operations procedures. ERS also allows for script-style user control during procedure execution using a robust graphical user input and output feature. The availability of VB and C# programmers, and the richness of the languages and their development environment, has allowed ERS to lower our "script" development time and maintenance costs at the Marshall POIC.
Dynamic Learning Objects to Teach Java Programming Language
ERIC Educational Resources Information Center
Narasimhamurthy, Uma; Al Shawkani, Khuloud
2010-01-01
This article describes a model for teaching Java Programming Language through Dynamic Learning Objects. The design of the learning objects was based on effective learning design principles to help students learn the complex topic of Java Programming. Visualization was also used to facilitate the learning of the concepts. (Contains 1 figure and 2…
Gersten, J W; Foppe, K B; Gersten, R; Maxwell, S; Mirrett, P; Gipson, M; Houston, H; Grueter, B
1975-03-01
A program for children with learning disabilities associated with perceptual deficits was designed that included elements of gross and fine motor coordination, visual and somatosensory perceptual training, dance, art, music and language. The effectiveness of nonprofessional "perceptual-aides," who were trained in this program, was evaluated. Twenty-eight children with learning disabilities associated with perceptual deficits were treated by occupational, physical, recreational and language therapists; and 27 similarly involved children were treated by two aides, under supervision, after training by therapists. Treatment in both groups was for four hours weekly over a four to seven month period. There was significant improvement in motor skills, visual and somatosensory perception, language and educational skills in the two programs. Although there was no significant difference between the two groups, there was a slight advantage to the aide program. The cost of the aide program was 10 percent higher than the therapist program during the first year, but 22 percent lower than the therapist program during the second year.
Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John
2016-01-01
Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.
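Diderot's own notation is not reproduced here; the NumPy sketch below only illustrates the underlying idea the abstract describes, namely treating sampled data as a continuous scalar field that can be probed for values and gradients at arbitrary positions.

```python
# A plain NumPy illustration (not Diderot code) of reconstructing a continuous
# scalar field from a sampled grid and probing its value and gradient.
import numpy as np

field = np.random.rand(64, 64)  # stand-in for image data

def probe(field, y, x):
    """Bilinear reconstruction of the field at a non-integer position."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    f = field[y0:y0 + 2, x0:x0 + 2]
    return (f[0, 0] * (1 - dy) * (1 - dx) + f[0, 1] * (1 - dy) * dx +
            f[1, 0] * dy * (1 - dx) + f[1, 1] * dy * dx)

def gradient(field, y, x, h=0.5):
    """Central-difference gradient of the reconstructed field."""
    return np.array([(probe(field, y + h, x) - probe(field, y - h, x)) / (2 * h),
                     (probe(field, y, x + h) - probe(field, y, x - h)) / (2 * h)])

print(probe(field, 10.3, 20.7), gradient(field, 10.3, 20.7))
```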
1999-01-01
the system using widely available Microsoft Visual and Access Basic programming language. For SCE, SWAMI was upgraded to automatically update...into pseudo-code and pass it on to contractors to program, usually using a complex programming language like FORTRAN. Army operations research...easier to use than programming languages like FORTRAN or C, there was still very little expertise in HTML among the instructors and controllers who were
ERIC Educational Resources Information Center
Weiss, Charles J.
2017-01-01
The Scientific Computing for Chemists course taught at Wabash College teaches chemistry students to use the Python programming language, Jupyter notebooks, and a number of common Python scientific libraries to process, analyze, and visualize data. Assuming no prior programming experience, the course introduces students to basic programming and…
The Film and the ESL Program: To View or Not to View.
ERIC Educational Resources Information Center
Bordwell, Constance
1969-01-01
This paper discusses the acquisition of second-language skills through the use of visual aids. In teaching English as a second language, pictures, slides, and film loops are usually presented with appropriate, and more or less predictable, spoken or written English. These visual images, however, may arouse interests beyond the answers provided by…
ERIC Educational Resources Information Center
Hocking, Elton
This condensed article on the language laboratory describes educational and financial possibilities and limitations, often citing the foreign language program at Purdue University as an example. The author discusses: (1) costs and amortization, (2) preventive maintenance, (3) laboratory design, (4) the multichannel recorder, and (5) visuals. Other…
Programming Education with a Blocks-Based Visual Language for Mobile Application Development
ERIC Educational Resources Information Center
Mihci, Can; Ozdener, Nesrin
2014-01-01
The aim of this study is to assess the impact upon academic success of the use of a reference block-based visual programming tool, namely the MIT App Inventor for Android, as an educational instrument for teaching object-oriented GUI-application development (CS2) concepts to students; who have previously completed a fundamental programming course…
Automated Verification of Design Patterns with LePUS3
NASA Technical Reports Server (NTRS)
Nicholson, Jonathan; Gasparis, Epameinondas; Eden, Ammon H.; Kazman, Rick
2009-01-01
Specification and [visual] modelling languages are expected to combine strong abstraction mechanisms with rigour, scalability, and parsimony. LePUS3 is a visual, object-oriented design description language axiomatized in a decidable subset of the first-order predicate logic. We demonstrate how LePUS3 is used to formally specify a structural design pattern and prove (verify) whether any Java(TM) 1.4 program satisfies that specification. We also show how LePUS3 specifications (charts) are composed and how they are verified fully automatically in the Two-Tier Programming Toolkit.
An Interactive Decision Support System for Scheduling Fighter Pilot Training
2002-03-26
Deitel, H.M. and Deitel, P.J. C: How to Program, 2nd ed., Prentice Hall, 1994. 8. Deitel, H.M. and Deitel, P.J. How to Program Java...Visual Basic programming language, the Excel tool is modified in several ways. Scheduling dispatch rules are implemented to automatically generate...
A Prospective Curriculum Using Visual Literacy.
ERIC Educational Resources Information Center
Hortin, John A.
This report describes the uses of visual literacy programs in the schools and outlines four categories for incorporating training in visual thinking into school curriculums as part of the back to basics movement in education. The report recommends that curriculum writers include materials pertaining to: (1) reading visual language and…
Scenario-Based Programming, Usability-Oriented Perception
ERIC Educational Resources Information Center
Alexandron, Giora; Armoni, Michal; Gordon, Michal; Harel, David
2014-01-01
In this article, we discuss the possible connection between the programming language and the paradigm behind it, and programmers' tendency to adopt an external or internal perspective of the system they develop. Based on a qualitative analysis, we found that when working with the visual, interobject language of live sequence charts (LSC),…
CPP-TRS(C): On using visual cognitive symbols to enhance communication effectiveness
NASA Technical Reports Server (NTRS)
Tonfoni, Graziella
1994-01-01
Communicative Positioning Program/Text Representation Systems (CPP-TRS) is a visual language based on a system of 12 canvasses, 10 signals, and 14 symbols. CPP-TRS rests on the fact that every communication action is the result of a set of cognitive processes, and the whole system is built on the concept that communication can be enhanced by visually perceiving text. With a simple syntax, CPP-TRS is capable of representing meaning and intention, as well as communication functions, visually. These are precisely the invisible aspects of natural language that are most relevant to grasping the global meaning of a text. CPP-TRS reinforces natural language in human-machine interaction systems. It complements natural language by adding certain important elements that are not represented by natural language itself. These include the communication intention and function of the text expressed by the sender, as well as the role the reader is supposed to play. The communication intention and function of a text and the reader's role are invisible in natural language because neither specific words nor punctuation conveys them sufficiently and unambiguously; they are therefore non-transparent.
A New Framework for Software Visualization: A Multi-Layer Approach
2006-09-01
primary target is an exploration of the current state of the area so that we can discover the challenges and propose solutions for them. The study ...Small define both areas of study to collectively be a part of Software Visualization. Visual Programming as 'Visual Programming' (VP) refers to...founded taxonomy, with the proper characteristics, can further investigation in any field of study. A common language or terminology and the existence of
The Effective Audio-Visual Program in Foreign Language and Literature Studies.
ERIC Educational Resources Information Center
Lawton, Ben
Foreign language teachers should exploit the American affinity for television and movies by using foreign language feature films and shorts in the classroom. Social and political history and literary trends illustrated in the films may be discussed and absorbed along with the language. The author teaches such a course in the Department of Italian…
Processing sequence annotation data using the Lua programming language.
Ueno, Yutaka; Arita, Masanori; Kumagai, Toshitaka; Asai, Kiyoshi
2003-01-01
The data processing language in a graphical software tool that manages sequence annotation data from genome databases should provide flexible functions for the tasks in molecular biology research. Among currently available languages we adopted the Lua programming language. It fulfills our requirements to perform computational tasks for sequence map layouts, i.e. the handling of data containers, symbolic reference to data, and a simple programming syntax. Upon importing a foreign file, the original data are first decomposed in the Lua language while maintaining the original data schema. The converted data are parsed by the Lua interpreter and the contents are stored in our data warehouse. Then, portions of annotations are selected and arranged into our catalog format to be depicted on the sequence map. Our sequence visualization program was successfully implemented, embedding the Lua language for processing of annotation data and layout script. The program is available at http://staff.aist.go.jp/yutaka.ueno/guppy/.
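The authors embed Lua; the following Python sketch only illustrates the general pattern described above, in which imported records are rewritten as host-language literals, parsed by the interpreter itself, and then selected into a catalog for display. The field names and record contents are illustrative assumptions.

```python
# Illustration (in Python, not the authors' Lua code) of decomposing foreign
# annotation data into host-language literals, parsing them with the
# interpreter, and selecting a subset for the sequence map.
import ast

# Imagine a converter emitted this text from a GenBank-style feature table.
decomposed = """
[
  {"type": "gene", "start": 120, "end": 980, "name": "abcA"},
  {"type": "CDS",  "start": 150, "end": 950, "name": "abcA"},
]
"""

records = ast.literal_eval(decomposed)          # the interpreter does the parsing
catalog = [(r["name"], r["type"], r["start"], r["end"])
           for r in records if r["type"] == "gene"]  # select what the map shows
print(catalog)
```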
Assessment of short-term memory in Arabic speaking children with specific language impairment.
Kaddah, F A; Shoeib, R M; Mahmoud, H E
2010-12-15
Children with Specific Language Impairment (SLI) may have some kind of memory disorder that could increase their linguistic impairment. This study assessed short-term memory skills in Arabic-speaking children with either Expressive Language Impairment (ELI) or Receptive/Expressive Language Impairment (R/ELI) in comparison to controls, in order to estimate the nature and extent of any specific deficits in these children that could explain the different prognostic results of language intervention. Eighteen children were included in each group. Receptive, expressive, and total language quotients were calculated using the Arabic language test. Auditory and visual short-term memory were assessed using the Arabic version of the Illinois Test of Psycholinguistic Abilities. Both SLI groups showed significantly lower linguistic abilities and poorer auditory and visual short-term memory in comparison to normal children. The R/ELI group performed worse than the ELI group on all measured parameters. A strong association was found between most tasks of auditory and visual short-term memory and linguistic abilities. The results of this study highlight a specific degree of deficit in auditory and visual short-term memory in both SLI groups. These deficits were more prominent in the R/ELI group. Moreover, the strong association between the different auditory and visual short-term memory tasks and language abilities in children with SLI must be taken into account when planning an intervention program for these children.
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Shaykhian, Gholam Ali
2011-01-01
MatLab(R) (MATrix LABoratory) is a numerical computation and simulation tool used by thousands of scientists and engineers in many countries. MatLab does purely numerical calculations and can be used as a glorified calculator or an interpreted programming language; its real strength is in matrix manipulations. Computer algebra functionality is achieved within the MatLab environment using the "symbolic" toolbox. This feature is similar to computer algebra programs, such as Maple or Mathematica, that calculate with mathematical equations using symbolic operations. MatLab in its interpreted programming language form (command interface) is similar to well-known programming languages such as C/C++ and supports data structures and cell arrays to define classes in object-oriented programming. As such, MatLab is equipped with most of the essential constructs of a higher-level programming language. MatLab is packaged with an editor and debugging functionality useful for analyzing large MatLab programs and finding errors. We believe there are many ways to approach real-world problems; prescribed methods to ensure that the resulting solutions are incorporated into the design and analysis of data processing and visualization can benefit engineers and scientists in gaining wider insight into the actual implementation of their respective experiments. This presentation will focus on the data processing and visualization aspects of engineering and scientific applications. Specifically, it will discuss methods and techniques to perform intermediate-level data processing covering engineering and scientific problems. MatLab programming techniques, including reading various data file formats to produce customized publication-quality graphics, importing engineering and/or scientific data, organizing data in tabular format, exporting data to be used by other software programs such as Microsoft Excel, data presentation, and visualization, will be discussed. The presentation will emphasize creating practical scripts (programs) that extend the basic features of MatLab. Topics include: (1) matrix and vector analysis and manipulations, (2) mathematical functions, (3) symbolic calculations and functions, (4) import/export of data files, (5) program logic and flow control, (6) writing functions and passing parameters, and (7) test application programs.
Enhancing Problem-Solving Capabilities Using Object-Oriented Programming Language
ERIC Educational Resources Information Center
Unuakhalu, Mike F.
2009-01-01
This study integrated object-oriented programming instruction with transfer training activities in everyday tasks, which might provide a mechanism that can be used for efficient problem solving. Specifically, a group taught Visual BASIC embedded with everyday tasks was compared to another group exposed to Visual BASIC instruction only. Subjects were 40…
A visual programming environment for the Navier-Stokes computer
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David
1988-01-01
The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.
1994-01-01
The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic.
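A small sketch of the ordering idea, written in Python as an independent illustration rather than VIS-AD code: data objects are compared by the precision with which they approximate a mathematical value, with a missing-data indicator as the least element of the lattice.

```python
# Illustration of a precision ordering on data objects, where values are
# approximated by intervals and MISSING sits below everything. This is an
# assumption-laden sketch of the lattice idea, not the VIS-AD implementation.

MISSING = None

def approximates(a, b):
    """The <= relation: a approximates its value no more precisely than b.
    Intervals are (lo, hi) pairs; MISSING is below everything."""
    if a is MISSING:
        return True
    if b is MISSING:
        return False
    return a[0] <= b[0] and b[1] <= a[1]  # a contains b, so a is less precise

print(approximates(MISSING, (0.0, 1.0)))     # True: missing data is least precise
print(approximates((0.0, 1.0), (0.2, 0.4)))  # True: a narrower interval is more precise
```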
ERIC Educational Resources Information Center
Wang, Yuping; Chen, Nian-Shing; Levy, Mike
2010-01-01
This article discusses the learning process undertaken by language teachers in a cyber face-to-face teacher training program. Eight tertiary Chinese language teachers attended a 12-week training program conducted in an online synchronous learning environment characterized by multimedia-based, oral and visual interaction. The term "cyber…
ERIC Educational Resources Information Center
Mihci, Can; Ozdener Donmez, Nesrin
2017-01-01
The purpose of this research is to investigate the short and long-term effects of using GUI-oriented visual Blocks-Based Programming languages (BBL) as a 2nd tier tool when teaching programming to prospective K12 ICT teachers. In a mixed-method approach, the effect on academic success as well as the impact on professional opinions and preferences…
An Interpreted Language and System for the Visualization of Unstructured Meshes
NASA Technical Reports Server (NTRS)
Moran, Patrick J.; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
We present an interpreted language and system supporting the visualization of unstructured meshes and the manipulation of shapes defined in terms of mesh subsets. The language features primitives inspired by geometric modeling, mathematical morphology, and algebraic topology. The adaptation of the topology ideas to an interpreted environment, along with support for programming constructs such as user function definition, provides a flexible system for analyzing a mesh and for calculating with shapes defined in terms of the mesh. We present results demonstrating some of the capabilities of the language, based on an implementation called the Shape Calculator, for tetrahedral meshes in R^3.
Towards a visual modeling approach to designing microelectromechanical system transducers
NASA Astrophysics Data System (ADS)
Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim
1999-12-01
In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is the hardware description language extension for analog and mixed-signal systems).
Language Coordinators Resource Kit. Section Ten: Picture Bank.
ERIC Educational Resources Information Center
Peace Corps, Washington, DC. Information Collection and Exchange Div.
The guide is one section of a resource kit designed to assist Peace Corps language instruction coordinators in countries around the world in understanding the principles underlying second language learning and teaching and in organizing instructional programs. This section contains a collection of pictures that can be used as visual aids in…
Dynamic visualization techniques for high consequence software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-02-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification. The prototype tool is described along with the requirements constraint language after a brief literature review is presented. Examples of how the tool can be used are also presented. In conclusion, the most significant advantage of this tool is to provide a first step in evaluating specification completeness, and to provide a more productive method for program comprehension and debugging. The expected payoff is increased software surety confidence, increased program comprehension, and reduced development and debugging time.
Visual Teaching Strategies for Children with Autism.
ERIC Educational Resources Information Center
Tissot, Catherine; Evans, Roy
2003-01-01
Describes the types of children with autism that would benefit from visual teaching strategies. Discusses the benefits and disadvantages of some of the more well-known programs that use visual teaching strategies, including movement-based systems relying on sign language, and materials-based systems such as Treatment and Education of Autistic and…
Effects of Using Alice and Scratch in an Introductory Programming Course for Corrective Instruction
ERIC Educational Resources Information Center
Chang, Chih-Kai
2014-01-01
Scratch, a visual programming language, was used in many studies in computer science education. Most of them reported positive results by integrating Scratch into K-12 computer courses. However, the object-oriented concept, one of the important computational thinking skills, is not represented well in Scratch. Alice, another visual programming…
ERIC Educational Resources Information Center
Imani, Sahar Sadat Afshar
2013-01-01
Modular EFL Educational Program has managed to offer specialized language education in two specific fields: Audio-visual Materials Translation and Translation of Deeds and Documents. However, no explicit empirical studies can be traced concerning internal and external validity measures, or the extent of compatibility of both courses with the…
The GLOBE Visualization Project: Using WWW in the Classroom.
ERIC Educational Resources Information Center
de La Beaujardiere, J-F; And Others
1997-01-01
Describes a World Wide Web-based, user-friendly, language-independent graphical user interface providing access to visualizations created for GLOBE (Global Learning and Observations to Benefit the Environment), a multinational program of education and science. (DDR)
ERIC Educational Resources Information Center
Csomay, Eniko; Petrovic, Marija
2012-01-01
Vocabulary is an essential element of every second/foreign language teaching and learning program. While the goal of language teaching programs is to focus on explicit vocabulary teaching to promote learning, "materials which provide visual and aural input such as movies may be conducive to incidental vocabulary learning." (Webb and Rodgers, 2009,…
Architectural Visualization of C/C++ Source Code for Program Comprehension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panas, T; Epperly, T W; Quinlan, D
2006-09-01
Structural and behavioral visualization of large-scale legacy systems to aid program comprehension is still a major challenge. The challenge is even greater when applications are implemented in flexible and expressive languages such as C and C++. In this paper, we consider visualization of static and dynamic aspects of large-scale scientific C/C++ applications. For our investigation, we reuse and integrate specialized analysis and visualization tools. Furthermore, we present a novel layout algorithm that permits a compressive architectural view of a large-scale software system. Our layout is unique in that it allows traditional program visualizations, i.e., graph structures, to be seen in relation to the application's file structure.
Software attribute visualization for high integrity software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-03-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
Divide and Recombine for Large Complex Data
2017-12-01
Empirical Methods in Natural Language Processing, October 2014...low-latency data processing systems. Declarative Languages for Interactive Visualization: The Reactive Vega Stack. Another thread of XDATA research...for array processing operations embedded in the R programming language. Vector virtual machines work well for long vectors. One of the most
ERIC Educational Resources Information Center
Hew, Soon-Hin; Ohki, Mitsuru
2004-01-01
This study examines the effectiveness of imagery and electronic visual feedback in facilitating students' acquisition of Japanese pronunciation skills. The independent variables, animated graphic annotation (AGA) and immediate visual feedback (IVF) were integrated into a Japanese computer-assisted language learning (JCALL) program focused on the…
Horowitz-Kraus, Tzipi; DiFrancesco, Mark; Kay, Benjamin; Wang, Yingying; Holland, Scott K.
2015-01-01
The Reading Acceleration Program, a computerized reading-training program, increases activation in neural circuits related to reading. We examined the effect of the training on the functional connectivity between independent components related to visual processing, executive functions, attention, memory, and language during rest after the training. Children 8–12 years old with reading difficulties and typical readers participated in the study. Behavioral testing and functional magnetic resonance imaging were performed before and after the training. Imaging data were analyzed using an independent component analysis approach. After training, both reading groups showed increased single-word contextual reading and reading comprehension scores. Greater positive correlations between the visual-processing component and the executive functions, attention, memory, or language components were found after training in children with reading difficulties. Training-related increases in connectivity between the visual and attention components and between the visual and executive function components were positively correlated with increased word reading and reading comprehension, respectively. Our findings suggest that the effect of the Reading Acceleration Program on basic cognitive domains can be detected even in the absence of an ongoing reading task. PMID:26199874
Horowitz-Kraus, Tzipi; DiFrancesco, Mark; Kay, Benjamin; Wang, Yingying; Holland, Scott K
2015-01-01
The Reading Acceleration Program, a computerized reading-training program, increases activation in neural circuits related to reading. We examined the effect of the training on the functional connectivity between independent components related to visual processing, executive functions, attention, memory, and language during rest after the training. Children 8-12 years old with reading difficulties and typical readers participated in the study. Behavioral testing and functional magnetic resonance imaging were performed before and after the training. Imaging data were analyzed using an independent component analysis approach. After training, both reading groups showed increased single-word contextual reading and reading comprehension scores. Greater positive correlations between the visual-processing component and the executive functions, attention, memory, or language components were found after training in children with reading difficulties. Training-related increases in connectivity between the visual and attention components and between the visual and executive function components were positively correlated with increased word reading and reading comprehension, respectively. Our findings suggest that the effect of the Reading Acceleration Program on basic cognitive domains can be detected even in the absence of an ongoing reading task.
So Wide a Web, So Little Time.
ERIC Educational Resources Information Center
McConville, David; And Others
1996-01-01
Discusses new trends in the World Wide Web. Highlights include multimedia; digitized audio-visual files; compression technology; telephony; virtual reality modeling language (VRML); open architecture; and advantages of Java, an object-oriented programming language, including platform independence, distributed development, and pay-per-use software.…
An adaptive structure data acquisition system using a graphical-based programming language
NASA Technical Reports Server (NTRS)
Baroth, Edmund C.; Clark, Douglas J.; Losey, Robert W.
1992-01-01
An example of the implementation of data fusion using a PC and a graphical programming language is discussed. A schematic of the data acquisition system and the user interface panel for an adaptive structure test are presented. The computer programs (a series of icons 'wired' together) are also discussed. The paper shows how using graphical-based programming software to control a data acquisition system can simplify data analysis, promote multidisciplinary interaction, and give users a more visual key to understanding their data.
Learning with a Missing Sense: What Can We Learn from the Interaction of a Deaf Child with a Turtle?
ERIC Educational Resources Information Center
Miller, Paul
2009-01-01
This case study reports on the progress of Navon, a 13-year-old boy with prelingual deafness, over a 3-month period following exposure to Logo, a computer programming language that visualizes specific programming commands by means of a virtual drawing tool called the Turtle. Despite an almost complete lack of skills in spoken and sign language,…
Telemetry Monitoring and Display Using LabVIEW
NASA Technical Reports Server (NTRS)
Wells, George; Baroth, Edmund C.
1993-01-01
The Measurement Technology Center of the Instrumentation Section configures automated data acquisition systems to meet the diverse needs of JPL's experimental research community. These systems are based on personal computers or workstations (Apple, IBM/Compatible, Hewlett-Packard, and Sun Microsystems) and often include integrated data analysis, visualization and experiment control functions in addition to data acquisition capabilities. These integrated systems may include sensors, signal conditioning, data acquisition interface cards, software, and a user interface. Graphical programming is used to simplify configuration of such systems. Employment of a graphical programming language is the most important factor in enabling the implementation of data acquisition, analysis, display and visualization systems at low cost. Other important factors are the use of commercial software packages and off-the-shelf data acquisition hardware where possible. Understanding the experimenter's needs is also critical. An interactive approach to user interface construction and training of operators is also important. One application was created as a result of a competitive effort between a graphical programming language team and a text-based C language programming team to verify the advantages of using a graphical programming language approach. With approximately eight weeks of funding over a period of three months, the text-based programming team accomplished about 10% of the basic requirements, while the Macintosh/LabVIEW team accomplished about 150%, having gone beyond the original requirements to simulate a telemetry stream and provide utility programs. This application verified that using graphical programming can significantly reduce software development time. As a result of this initial effort, additional follow-on work was awarded to the graphical programming team.
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
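Vivaldi's actual syntax is not shown in the abstract and is not reproduced here; the plain NumPy sketch below only suggests the kind of per-ray volume kernel such a domain-specific language expresses (a maximum intensity projection), which the Vivaldi runtime would then distribute across GPU nodes.

```python
# NumPy illustration (not Vivaldi code) of a simple volume-processing kernel.
import numpy as np

volume = np.random.rand(32, 32, 32)  # stand-in for microscope volume data

def mip(volume, axis=0):
    """Maximum intensity projection: for each ray along `axis`,
    keep the brightest sample encountered."""
    return volume.max(axis=axis)

image = mip(volume)
print(image.shape)  # (32, 32) projected image
```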
Analogy Mapping Development for Learning Programming
NASA Astrophysics Data System (ADS)
Sukamto, R. A.; Prabawa, H. W.; Kurniawati, S.
2017-02-01
Programming skill is an important skill for computer science students, yet nowadays many computer science students in Indonesia lack programming skills and information technology knowledge. This is at odds with the implementation of the ASEAN Economic Community (AEC) at the end of 2015, which demands qualified workers. This study supports the building of programming skills by mapping program code to visual analogies as learning media. The developed media was based on state machine and compiler principles and was implemented for the C programming language. The states of every basic programming condition were successfully determined as analogy visualizations.
ERIC Educational Resources Information Center
LORGE, SARAH W.
To investigate the effects of the language laboratory on foreign language learning, the Bureau of Audio-Visual Instruction of New York City conducted experiments in 1st-, 2nd-, and 3rd-year high school classes. The first experiment, which compared conventionally taught classes with groups having some laboratory teaching, showed that groups with…
Simulation environment and graphical visualization environment: a COPD use-case.
Huertas-Migueláñez, Mercedes; Mora, Daniel; Cano, Isaac; Maier, Dieter; Gomez-Cabrero, David; Lluch-Ariet, Magí; Miralles, Felip
2014-11-28
Today, many different tools have been developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping and supporting bio-researchers and medical students to understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, which is a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. It has been shown that the simulation environment presented here allows the user to research and study the internal mechanisms of human physiology through the use of models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios.
Programming by Choice: Urban Youth Learning Programming with Scratch
ERIC Educational Resources Information Center
Maloney, John; Peppler, Kylie; Kafai, Yasmin B.; Resnick, Mitchel; Rusk, Natalie
2008-01-01
This paper describes Scratch, a visual, block-based programming language designed to facilitate media manipulation for novice programmers. We report on the Scratch programming experiences of urban youth ages 8-18 at a Computer Clubhouse--an after school center--over an 18-month period. Our analyses of 536 Scratch projects collected during this…
Pedagogy and Processes for a Computer Programming Outreach Workshop--The Bridge to College Model
ERIC Educational Resources Information Center
Tangney, Brendan; Oldham, Elizabeth; Conneely, Claire; Barrett, Stephen; Lawlor, John
2010-01-01
This paper describes a model for computer programming outreach workshops aimed at second-level students (ages 15-16). Participants engage in a series of programming activities based on the Scratch visual programming language, and a very strong group-based pedagogy is followed. Participants are not required to have any prior programming experience.…
NASA Astrophysics Data System (ADS)
Sarsimbayeva, S. M.; Kospanova, K. K.
2015-11-01
The article discusses issues associated with porting object-oriented Windows applications from the C++ programming language to the .NET platform using the C# programming language. C++ has long been considered the best language for software development, but the subtle mistakes the language invites can lead to memory leaks and other errors. The .NET platform and C#, made by Microsoft, address these issues. Industry still depends heavily on applications developed in C++, but the newer language, with its stability and portability to .NET, brings many advantages. An example is presented using applications that imitate the work of queuing systems: the authors ported an application simulating seaport operations from C++ to the .NET platform using C# within Visual Studio.
The Scratch Programming Language and Environment
ERIC Educational Resources Information Center
Maloney, John; Resnick, Mitchel; Rusk, Natalie; Silverman, Brian; Eastmond, Evelyn
2010-01-01
Scratch is a visual programming environment that allows users (primarily ages 8 to 16) to learn computer programming while working on personally meaningful projects such as animated stories and games. A key design goal of Scratch is to support self-directed learning through tinkering and collaboration with peers. This article explores how the…
ERIC Educational Resources Information Center
Valett, Robert E.
Research findings on auditory sequencing and auditory blending and fusion, auditory-visual integration, and language patterns are presented in support of the Linguistic Auditory Memory Patterns (LAMP) program. LAMP consists of 100 developmental lessons for young students with learning disabilities or language problems. The lessons are included in…
NASA Astrophysics Data System (ADS)
Godbole, Saurabh
Traditionally, textual tools have been utilized to teach basic programming languages and paradigms. Research has shown that students tend to be visual learners. Using flowcharts, students can quickly understand the logic of their programs and visualize the flow of commands in the algorithm. Moreover, applying programming to physical systems through the use of a microcontroller to facilitate this type of learning can spark an interest in students to advance their programming knowledge to create novel applications. This study examined if freshmen college students' attitudes towards programming changed after completing a graphical programming lesson. Various attributes about students' attitudes were examined including confidence, interest, stereotypes, and their belief in the usefulness of acquiring programming skills. The study found that there were no statistically significant differences in attitudes either immediately following the session or after a period of four weeks.
Barrès, Victor; Lee, Jinyong
2014-01-01
How does the language system coordinate with our visual system to yield flexible integration of linguistic, perceptual, and world-knowledge information when we communicate about the world we perceive? Schema theory is a computational framework that allows the simulation of perceptuo-motor coordination programs on the basis of known brain operating principles such as cooperative computation and distributed processing. We present first its application to a model of language production, SemRep/TCG, which combines a semantic representation of visual scenes (SemRep) with Template Construction Grammar (TCG) as a means to generate verbal descriptions of a scene from its associated SemRep graph. SemRep/TCG combines the neurocomputational framework of schema theory with the representational format of construction grammar in a model linking eye-tracking data to visual scene descriptions. We then offer a conceptual extension of TCG to include language comprehension and address data on the role of both world knowledge and grammatical semantics in the comprehension performances of agrammatic aphasic patients. This extension introduces a distinction between heavy and light semantics. The TCG model of language comprehension offers a computational framework to quantitatively analyze the distributed dynamics of language processes, focusing on the interactions between grammatical, world knowledge, and visual information. In particular, it reveals interesting implications for the understanding of the various patterns of comprehension performances of agrammatic aphasics measured using sentence-picture matching tasks. This new step in the life cycle of the model serves as a basis for exploring the specific challenges that neurolinguistic computational modeling poses to the neuroinformatics community.
Assessing the Effectiveness of Multimedia in Language Learning Software.
ERIC Educational Resources Information Center
Chun, Dorothy M.; Plass, Jan L.
In this paper, the effectiveness of a "CyberBuch," a multimedia program for reading authentic German texts, is assessed in three areas. First, based on user evaluation of the visual interface design, the usability of the program is assessed with particular regard to user reaction to the multimedia components of the program. Second,…
Visual Teaching Model for Introducing Programming Languages
ERIC Educational Resources Information Center
Shehane, Ronald; Sherman, Steven
2014-01-01
This study examines detailed usage of online training videos that were designed to address specific course problems that were encountered in an online computer programming course. The study presents the specifics of a programming course where training videos were used to provide students with a quick start path to learning a new programming…
Rocking Your Writing Program: Integration of Visual Art, Language Arts, & Science
ERIC Educational Resources Information Center
Poldberg, Monique M.,; Trainin, Guy; Andrzejczak, Nancy
2013-01-01
This paper explores the integration of art, literacy and science in a second grade classroom, showing how an integrative approach has a positive and lasting influence on student achievement in art, literacy, and science. Ways in which art, science, language arts, and cognition intersect are reviewed. Sample artifacts are presented along with their…
VPython: Writing Real-time 3D Physics Programs
NASA Astrophysics Data System (ADS)
Chabay, Ruth
2001-06-01
VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
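As a rough illustration of the programming style the abstract describes, the following sketch uses the present-day vpython package (a descendant of the Visual module; the package name and API are assumptions here, not code from the article) to animate a sphere with a purely computational loop while the graphics thread renders the scene:

    # A minimal sketch, assuming the modern "vpython" package (pip install vpython),
    # a descendant of the Visual module described above; not code from the article.
    from vpython import sphere, vector, color, rate

    # Create a 3D object; the rendering thread draws it automatically.
    ball = sphere(pos=vector(-5, 0, 0), radius=0.5, color=color.red)
    velocity = vector(1, 0, 0)   # arbitrary units per second
    dt = 0.01

    # Purely computational loop: update positions, let the graphics thread render.
    while ball.pos.x < 5:
        rate(100)                # limit to ~100 iterations per second
        ball.pos = ball.pos + velocity * dt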
Applying a visual language for image processing as a graphical teaching tool in medical imaging
NASA Astrophysics Data System (ADS)
Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for imaging processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
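The two clinician-facing operations named above, window width/level adjustment and unsharp masking, can be expressed in a few lines of Python. This is an illustrative sketch with NumPy and SciPy, not the VIVA/NeXTstep implementation described in the paper, and the parameter values are arbitrary:

    # Illustrative Python versions of the two operations named above; a sketch only.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def window_level(img, level, width):
        """Map intensities in [level - width/2, level + width/2] to [0, 1], clipping outside."""
        lo, hi = level - width / 2.0, level + width / 2.0
        return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

    def unsharp_mask(img, sigma=2.0, amount=1.0):
        """Sharpen by adding back the difference between the image and a blurred copy."""
        blurred = gaussian_filter(img.astype(float), sigma)
        return img + amount * (img - blurred)

    # Example dataflow: enhance, then window a 12-bit CT-like image to display range.
    img = np.random.randint(0, 4096, size=(256, 256))   # stand-in for real pixel data
    display = window_level(unsharp_mask(img, sigma=1.5, amount=0.7), level=1024, width=2048)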
Learning with a missing sense: what can we learn from the interaction of a deaf child with a turtle?
Miller, Paul
2009-01-01
This case study reports on the progress of Navon, a 13-year-old boy with prelingual deafness, over a 3-month period following exposure to Logo, a computer programming language that visualizes specific programming commands by means of a virtual drawing tool called the Turtle. Despite an almost complete lack of skills in spoken and sign language, Navon made impressive progress in his programming skills, including acquisition of a notable active written vocabulary, which he learned to apply in a purposeful, rule-based manner. His achievements are discussed with reference to commonly held assumptions about the relationship between language and thought, in general, and the prerequisite of proper spoken language skills for the acquisition of reading and writing, in particular. Highlighted are the central principles responsible for Navon's unexpected cognitive and linguistic development, including the way it affected his social relations with peers and teachers.
Graphical programming interface: A development environment for MRI methods.
Zwart, Nicholas R; Pipe, James G
2015-11-01
This article introduces a multiplatform, Python-based development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment, making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the workflow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. The article shows several examples of algorithms developed in graphical programming interface, including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.
Advocating for the Visual Arts in the Era of No Child Left Behind
ERIC Educational Resources Information Center
Daniel, Christine
2010-01-01
Research has shown that a solid visual arts program provided to students throughout the K-12 years increases academic achievement, increases self-confidence and self-concept and provides opportunities for students to tap all their intelligences. However, recent budget cuts and the high stake testing on Mathematics and English Language arts at all…
ERIC Educational Resources Information Center
CARROLL, JOHN B.
Research was undertaken to determine whether spoken and written foreign language skills could be taught by programed self-instruction using the most practical and well-designed audiovisual techniques available. The presentation device, or teaching machine, was designed and constructed to serve the special requirements of programed self-instruction…
Language-Specific Attention Treatment for Aphasia: Description and Preliminary Findings.
Peach, Richard K; Nathan, Meghana R; Beck, Katherine M
2017-02-01
The need for a specific, language-based treatment approach to aphasic impairments associated with attentional deficits is well documented. We describe language-specific attention treatment, a specific skill-based approach for aphasia that exploits increasingly complex linguistic tasks that focus attention. The program consists of eight tasks, some with multiple phases, to assess and treat lexical and sentence processing. Validation results demonstrate that these tasks load on six attentional domains: (1) executive attention; (2) attentional switching; (3) visual selective attention/processing speed; (4) sustained attention; (5) auditory-verbal working memory; and (6) auditory processing speed. The program demonstrates excellent inter- and intrarater reliability and adequate test-retest reliability. Two of four people with aphasia exposed to this program demonstrated good language recovery whereas three of the four participants showed improvements in auditory-verbal working memory. The results provide support for this treatment program in patients with aphasia having no greater than a moderate degree of attentional impairment.
Closed-Caption Television and Adult Students of English as a Second Language.
ERIC Educational Resources Information Center
Smith, Jennifer J.
The use of closed-caption television (CCTV) to help teach English as a Second Language (ESL) to adults was studied with a group of adult students in the Arlington, Virginia, Education and Employment Program. Although CCTV is designed for the hearing impaired, its combination of written with spoken English in the visual context of television makes…
Utilizing Oral-Motor Feedback in Auditory Conceptualization.
ERIC Educational Resources Information Center
Howard, Marilyn
The Auditory Discrimination in Depth (ADD) program, an oral-motor approach to beginning reading instruction, trains first grade children in auditory skills by a process in which language and oral-motor feedback are used to integrate auditory properties with visual properties. This emphasis of the ADD program makes the child's perceptual…
Simulation environment and graphical visualization environment: a COPD use-case
2014-01-01
Background: Today, many different tools are developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. Results: In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. The simulation environment has been validated with the integration of three models: two deterministic, i.e., based on linear and differential equations, and one probabilistic, i.e., based on probability theory. These models were selected based on the disease under study in this project, chronic obstructive pulmonary disease. Conclusion: The simulation environment presented here allows the user to research and study the internal mechanisms of human physiology through models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios. PMID:25471327
Hierarchical programming for data storage and visualization
Donovan, John M.; Smith, Peter E.; ,
2001-01-01
Graphics software is an essential tool for interpreting, analyzing, and presenting data from multidimensional hydrodynamic models used in estuarine and coastal ocean studies. The post-processing of time-varying three-dimensional model output presents unique requirements for data visualization because of the large volume of data that can be generated and the multitude of time scales that must be examined. Such data can relate to estuarine or coastal ocean environments and come from numerical models or field instruments. One useful software tool for the display, editing, visualization, and printing of graphical data is the Gr application, written by the first author for use in U.S. Geological Survey San Francisco Bay Program. The Gr application has been made available to the public via the Internet since the year 2000. The Gr application is written in the Java (Sun Microsystems, Nov. 29, 2001) programming language and uses the Extensible Markup Language standard for hierarchical data storage. Gr presents a hierarchy of objects to the user that can be edited using a common interface. Java's object-oriented capabilities allow Gr to treat data, graphics, and tools equally and to save them all to a single XML file.
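The following is a minimal Python sketch of the kind of hierarchical, single-file XML storage the abstract describes; the element and attribute names are illustrative assumptions, not Gr's actual schema:

    # Hedged sketch of hierarchical data storage in one XML file, in the spirit of Gr;
    # element and attribute names below are invented for illustration.
    import xml.etree.ElementTree as ET

    root = ET.Element("session")
    dataset = ET.SubElement(root, "dataset", name="sf_bay_stage")
    series = ET.SubElement(dataset, "timeseries", station="Carquinez", units="m")
    for t, value in [(0.0, 1.02), (0.25, 1.10), (0.5, 1.18)]:
        ET.SubElement(series, "sample", time=str(t), value=str(value))

    graph = ET.SubElement(root, "graph", title="Stage at Carquinez")
    ET.SubElement(graph, "axis", which="x", label="time (days)")
    ET.SubElement(graph, "axis", which="y", label="stage (m)")

    # Data, graphics, and tool settings all live in the same hierarchy and file.
    ET.ElementTree(root).write("session.xml", encoding="utf-8", xml_declaration=True)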
Low-cost USB interface for operant research using Arduino and Visual Basic.
Escobar, Rogelio; Pérez-Herrera, Carlos A
2015-03-01
This note describes the design of a low-cost interface using Arduino microcontroller boards and Visual Basic programming for operant conditioning research. The board executes one program in Arduino programming language that polls the state of the inputs and generates outputs in an operant chamber. This program communicates through a USB port with another program written in Visual Basic 2010 Express Edition running on a laptop, desktop, netbook computer, or even a tablet equipped with Windows operating system. The Visual Basic program controls schedules of reinforcement and records real-time data. A single Arduino board can be used to control a total of 52 inputs/output lines, and multiple Arduino boards can be used to control multiple operant chambers. An external power supply and a series of micro relays are required to control 28-V DC devices commonly used in operant chambers. Instructions for downloading and using the programs to generate simple and concurrent schedules of reinforcement are provided. Testing suggests that the interface is reliable, accurate, and could serve as an inexpensive alternative to commercial equipment. © Society for the Experimental Analysis of Behavior.
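The published host program is written in Visual Basic; as a loose analogue of the same poll-and-log pattern, here is a Python/pyserial sketch (a substitution for illustration only; the port name, baud rate, and one-word message protocol are assumptions, not the published design):

    # Hedged Python/pyserial analogue of the Visual Basic host program described above.
    import time
    import serial  # pyserial

    ser = serial.Serial("COM3", 9600, timeout=0.1)   # hypothetical port and baud rate
    start = time.time()

    with open("session_log.csv", "w") as log:
        log.write("time_s,event\n")
        while time.time() - start < 3600:            # one-hour session
            line = ser.readline().decode().strip()   # e.g. "LEVER" sent by the Arduino sketch
            if line == "LEVER":
                log.write(f"{time.time() - start:.3f},lever_press\n")
                ser.write(b"R\n")                    # ask the board to operate the feeder relay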
ERIC Educational Resources Information Center
ANDERSON, MERLIN
A 1965-66 controlled experiment at the fifth and sixth grade levels was conducted in selected small schools in southern Nevada to determine if successful beginning instruction in a foreign language (Spanish) can be achieved by non-specialist teachers with the use of audio-lingual-visual materials. Instructional materials used were "La Familia…
CytoscapeRPC: a plugin to create, modify and query Cytoscape networks from scripting languages.
Bot, Jan J; Reinders, Marcel J T
2011-09-01
CytoscapeRPC is a plugin for Cytoscape which allows users to create, query and modify Cytoscape networks from any programming language which supports XML-RPC. This enables them to access Cytoscape functionality and visualize their data interactively without leaving the programming environment with which they are familiar. Install through the Cytoscape plugin manager or visit the web page: http://wiki.nbic.nl/index.php/CytoscapeRPC for the user tutorial and download. j.j.bot@tudelft.nl.
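A hedged sketch of driving such a plugin from Python follows; xmlrpc.client is standard-library Python, but the server address and the Cytoscape.* method names are placeholders, assumptions for illustration rather than the plugin's documented API (see the tutorial page above for the real calls):

    # Sketch of talking to an XML-RPC server from Python; method names are hypothetical.
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("http://localhost:9000")  # assumed default address

    # Hypothetical calls: create a small network and attach an attribute to a node.
    net_id = server.Cytoscape.createNetwork("demo")              # placeholder method name
    server.Cytoscape.createNode(net_id, "geneA")                 # placeholder method name
    server.Cytoscape.createNode(net_id, "geneB")
    server.Cytoscape.createEdge(net_id, "geneA", "geneB", "interacts")
    server.Cytoscape.addNodeAttribute(net_id, "geneA", "expression", 2.3)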
An Overview of R in Health Decision Sciences.
Jalal, Hawre; Pechlivanoglou, Petros; Krijkamp, Eline; Alarid-Escudero, Fernando; Enns, Eva; Hunink, M G Myriam
2017-10-01
As the complexity of health decision science applications increases, high-level programming languages are increasingly adopted for statistical analyses and numerical computations. These programming languages facilitate sophisticated modeling, model documentation, and analysis reproducibility. Among the high-level programming languages, the statistical programming framework R is gaining increased recognition. R is freely available, cross-platform compatible, and open source. A large community of users who have generated an extensive collection of well-documented packages and functions supports it. These functions facilitate applications of health decision science methodology as well as the visualization and communication of results. Although R's popularity is increasing among health decision scientists, methodological extensions of R in the field of decision analysis remain isolated. The purpose of this article is to provide an overview of existing R functionality that is applicable to the various stages of decision analysis, including model design, input parameter estimation, and analysis of model outputs.
pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2014-01-01
This work presents pWeb, a new language and compiler for parallelizing client-side, compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled unprecedented applications on the web. Compared with native applications, however, the low performance of the web browser remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulation, and image processing. The proposed language is built upon web workers for multithreaded programming in HTML5. It provides the fundamental functionality of parallel programming languages as well as the fork/join parallel model, which web workers do not support. The compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
Optimal Facility Location Tool for Logistics Battle Command (LBC)
2015-08-01
Appendix B. VBA Code; Appendix C. Story… "should city planners have located emergency service facilities so that all households (the demand) had equal access to coverage?" The critical… programming language called Visual Basic for Applications (VBA). CPLEX is a commercial solver for linear, integer, and mixed integer linear programming problems
A Conversion Tool for Mathematical Expressions in Web XML Files.
ERIC Educational Resources Information Center
Ohtake, Nobuyuki; Kanahori, Toshihiro
2003-01-01
This article discusses the conversion of mathematical equations into Extensible Markup Language (XML) on the World Wide Web for individuals with visual impairments. A program is described that converts the presentation markup style to the content markup style in MathML to allow browsers to render mathematical expressions without other programs.…
Thomas-Vaslin, Véronique; Six, Adrien; Ganascia, Jean-Gabriel; Bersini, Hugues
2013-01-01
Dynamic modeling of lymphocyte behavior has primarily been based on population-based differential equations or on cellular agents moving in space and interacting with each other. The final steps of this modeling effort are expressed in code written in a programming language. Because these steps are not standardized, communication and sharing between experimentalists, theoreticians, and programmers remain poor. The adoption of a diagrammatic visual computer language should, however, greatly help immunologists communicate better, identify similarities between models more easily, and facilitate the reuse and extension of existing software models. Since immunologists often conceptualize the dynamical evolution of immune systems in terms of "state-transitions" of biological objects, we promote the use of unified modeling language (UML) state-transition diagrams. To demonstrate the feasibility of this approach, we present a UML refactoring of two published models of thymocyte differentiation. Originally built with different modeling strategies, a mathematical ordinary differential equation-based model and a cellular automata model, the two models are now in the same visual formalism and can be compared. PMID:24101919
Spacecraft Guidance, Navigation, and Control Visualization Tool
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
G-View is a 3D visualization tool for supporting spacecraft guidance, navigation, and control (GN&C) simulations relevant to small-body exploration and sampling. The tool is developed in MATLAB using Virtual Reality Toolbox and provides users with the ability to visualize the behavior of their simulations, regardless of which programming language (or machine) is used to generate simulation results. The only requirement is that multi-body simulation data is generated and placed in the proper format before applying G-View.
A description of the verbal behavior of students during two reading instruction methods
Daly, Patricia M.
1987-01-01
The responses of students during two reading methods, the language experience approach and two Mastery Learning programs, were analyzed using verbal operants. A description of student responding was generated for these methods. The purpose of the study was to answer the questions: What are the major controlling variables determining student reading behavior during the language experience approach and two Mastery Learning programs, and how do these controlling variables change across story reading sessions and across stories in the first method? Student responses by verbal operant were compared for both reading methods. Findings indicated higher frequencies of textual operants occurred in responses during the Mastery Learning programs. A greater reliance on intraverbal control was evident in responses during the language experience approach. It is suggested that students who can generate strong intraverbal responses and who may have visual discrimination problems during early reading instruction may benefit from use of the language experience approach at this stage. PMID:22477535
MatLab Script and Functional Programming
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali
2007-01-01
MatLab Script and Functional Programming: MatLab is one of the most widely used very high level programming languages for scientific and engineering computations. It is very user-friendly and needs practically no formal programming knowledge. Presented here are MatLab programming aspects and not just the MatLab commands for scientists and engineers who do not have formal programming training and also have no significant time to spare for learning programming to solve their real world problems. Specifically provided are programs for visualization. The MatLab seminar covers the functional and script programming aspect of MatLab language. Specific expectations are: a) Recognize MatLab commands, script and function. b) Create, and run a MatLab function. c) Read, recognize, and describe MatLab syntax. d) Recognize decisions, loops and matrix operators. e) Evaluate scope among multiple files, and multiple functions within a file. f) Declare, define and use scalar variables, vectors and matrices.
ERIC Educational Resources Information Center
Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June
2003-01-01
Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…
Real-time lexical comprehension in young children learning American Sign Language.
MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne
2018-04-16
When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.
A novel visual hardware behavioral language
NASA Technical Reports Server (NTRS)
Li, Xueqin; Cheng, H. D.
1992-01-01
Most hardware behavioral languages use only text to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who prefer the schematic approach. The proposed visual hardware behavioral language can graphically express design information using visual parallel models (blocks), visual sequential models (processes), and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed language not only specifies hardware concurrent and sequential functionality, but also visually exposes parallelism, sequentiality, and disjointness (mutually exclusive operations) for hardware designers, allowing them to capture design ideas easily and explicitly.
Automatic visualization of 3D geometry contained in online databases
NASA Astrophysics Data System (ADS)
Zhang, Jie; John, Nigel W.
2003-04-01
In this paper, the application of the Virtual Reality Modeling Language (VRML) for efficient database visualization is analyzed. With the help of Java programming, three examples of automatic visualization from a database containing 3-D geometry are given. The first example creates basic geometries. The second creates cylinders with defined start and end points. The third processes data from an old copper mine complex in Cheshire, United Kingdom. Interactive 3-D visualization of all geometric data in an online database is achieved with JSP technology.
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
Visible Languages for Program Visualization
1986-02-01
The report covers the presentation of program metadata, the spatial composition of comments, the typography of punctuation, and typographic encodings in the graphic design of C source code and comments; worked examples show program punctuation set in 10-point regular Helvetica type.
Manchester visual query language
NASA Astrophysics Data System (ADS)
Oakley, John P.; Davis, Darryl N.; Shann, Richard T.
1993-04-01
We report a database language for visual retrieval which allows queries on image feature information which has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data which has actually been obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule. The output is a ranked list of high scoring bins. The query could be directed towards one particular image or an entire image database, in the latter case the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands which are used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.
TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY
Somogyi, Endre; Hagar, Amit; Glazier, James A.
2017-01-01
Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: (1) dynamic sets of objects participate simultaneously in multiple processes; (2) processes may be either continuous or discrete, and their activity may be conditional; (3) objects and processes form complex, heterogeneous relationships and structures; (4) objects and processes may be hierarchically composed; (5) processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models. PMID:29282379
TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY.
Somogyi, Endre; Hagar, Amit; Glazier, James A
2016-12-01
Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: (1) dynamic sets of objects participate simultaneously in multiple processes; (2) processes may be either continuous or discrete, and their activity may be conditional; (3) objects and processes form complex, heterogeneous relationships and structures; (4) objects and processes may be hierarchically composed; (5) processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models.
Prototyping Visual Database Interface by Object-Oriented Language
1988-06-01
approach is to use object-oriented programming. Object-oriented languages are characterized by three criteria [Ref. 4:p. 1.2.1]: - encapsulation of...made it a sub-class of our DMWindow.Cls, which is discussed later in this chapter. This extension to the application had to be integrated with our... abnormal behaviors similar to Korth's discussion of pitfalls in relational database design. Even extensions like GEM [Ref. 8] that are powerful and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, B.
1997-07-01
Computer technology has improved tremendously in recent years, with larger storage capacity, more memory, and more computational power. Visual computing, with high-performance graphical interfaces and desktop computational power, has changed the way engineers accomplish everyday tasks, development work, and safety-study analyses. The emergence of parallel computing will permit simulation over larger domains. In addition, new development methods, languages, and tools have appeared in the last several years.
NASA Technical Reports Server (NTRS)
Watson, A. B.; Solomon, J. A.
1997-01-01
Psychophysica is a set of software tools for psychophysical research. Functions are provided for calibrated visual displays, for fitting and plotting of psychometric functions, and for the QUEST adaptive staircase procedure. The functions are written in the Mathematica programming language.
Speakers of Different Languages Process the Visual World Differently
Chabal, Sarah; Marian, Viorica
2015-01-01
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct linguistic input, showing that language is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. PMID:26030171
ERIC Educational Resources Information Center
Fields, Deborah; Vasudevan, Veena; Kafai, Yasmin B.
2015-01-01
We highlight ways to support interest-driven creation of digital media in Scratch, a visual-based programming language and community, within a high school programming workshop. We describe a collaborative approach, the programmers' collective, that builds on social models found in do-it-yourself and open source communities, but with scaffolding…
Speakers of different languages process the visual world differently.
Chabal, Sarah; Marian, Viorica
2015-06-01
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. (c) 2015 APA, all rights reserved).
Visual exploration and analysis of human-robot interaction rules
NASA Astrophysics Data System (ADS)
Zhang, Hui; Boyles, Michael J.
2013-01-01
We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.
Visual Guidebooks: Documenting a Personal Thinking Language
ERIC Educational Resources Information Center
Shambaugh, Neal; Beacham, Cindy
2017-01-01
A personal thinking language consists of verbal and visual means to transform ideas to action in social and work settings. This verbal and visual interaction of images and language is influenced by one's personal history, cultural expectations and professional practices. The article first compares a personal thinking language to other languages…
Understanding Language, Hearing Status, and Visual-Spatial Skills
Marschark, Marc; Spencer, Linda J.; Durkin, Andreana; Borgna, Georgianna; Convertino, Carol; Machmer, Elizabeth; Kronenberger, William G.; Trani, Alexandra
2015-01-01
It is frequently assumed that deaf individuals have superior visual-spatial abilities relative to hearing peers and thus, in educational settings, they are often considered visual learners. There is some empirical evidence to support the former assumption, although it is inconsistent, and apparently none to support the latter. Three experiments examined visual-spatial and related cognitive abilities among deaf individuals who varied in their preferred language modality and use of cochlear implants (CIs) and hearing individuals who varied in their sign language skills. Sign language and spoken language assessments accompanied tasks involving visual-spatial processing, working memory, nonverbal logical reasoning, and executive function. Results were consistent with other recent studies indicating no generalized visual-spatial advantage for deaf individuals and suggested that their performance in that domain may be linked to the strength of their preferred language skills regardless of modality. Hearing individuals performed more strongly than deaf individuals on several visual-spatial and self-reported executive functioning measures, regardless of sign language skills or use of CIs. Findings are inconsistent with assumptions that deaf individuals are visual learners or are superior to hearing individuals across a broad range of visual-spatial tasks. Further, performance of deaf and hearing individuals on the same visual-spatial tasks was associated with differing cognitive abilities, suggesting that different cognitive processes may be involved in visual-spatial processing in these groups. PMID:26141071
Simulation with Python on transverse modes of the symmetric confocal resonator
NASA Astrophysics Data System (ADS)
Wang, Qing Hua; Qi, Jing; Ji, Yun Jing; Song, Yang; Li, Zhenhua
2017-08-01
Python is a popular open-source programming language that can be used to simulate various optical phenomena. We have developed a suite of programs to support the teaching of a course on laser principles. The complicated transverse modes of the symmetric confocal resonator can be visualized on personal computers, which helps students understand the pattern distribution of the laser resonator.
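A minimal sketch in the same spirit (not the authors' program) renders Hermite-Gaussian TEM_mn intensity patterns, the transverse mode family of a symmetric confocal resonator, using NumPy, SciPy, and Matplotlib:

    # Sketch: visualize Hermite-Gaussian TEM_mn transverse mode intensities.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.special import eval_hermite

    def tem_intensity(m, n, w=1.0, extent=3.0, npts=400):
        """Normalized intensity of the TEM_mn Hermite-Gaussian mode with spot size w."""
        x = np.linspace(-extent, extent, npts)
        X, Y = np.meshgrid(x, x)
        u = eval_hermite(m, np.sqrt(2) * X / w) * np.exp(-X**2 / w**2)
        v = eval_hermite(n, np.sqrt(2) * Y / w) * np.exp(-Y**2 / w**2)
        I = (u * v) ** 2
        return I / I.max()

    fig, axes = plt.subplots(2, 2, figsize=(6, 6))
    for ax, (m, n) in zip(axes.flat, [(0, 0), (1, 0), (1, 1), (2, 3)]):
        ax.imshow(tem_intensity(m, n), cmap="inferno")
        ax.set_title(f"TEM$_{{{m}{n}}}$")
        ax.axis("off")
    plt.tight_layout()
    plt.show()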
NASA Astrophysics Data System (ADS)
Lahti, Paul M.; Motyka, Eric J.; Lancashire, Robert J.
2000-05-01
A straightforward procedure is described to combine computation of molecular vibrational modes using commonly available molecular modeling programs with visualization of the modes using advanced features of the MDL Information Systems Inc. Chime World Wide Web browser plug-in. Minor editing of experimental spectra that are stored in the JCAMP-DX format allows linkage of IR spectral frequency ranges to Chime molecular display windows. The spectra and animation files can be combined by Hypertext Markup Language programming to allow interactive linkage between experimental spectra and computationally generated vibrational displays. Both the spectra and the molecular displays can be interactively manipulated to allow the user maximum control of the objects being viewed. This procedure should be very valuable not only for aiding students through visual linkage of spectra and various vibrational animations, but also by assisting them in learning the advantages and limitations of computational chemistry by comparison to experiment.
Visual Immersion for Cultural Understanding and Multimodal Literacy
ERIC Educational Resources Information Center
Smilan, Cathy
2017-01-01
When considering inclusive art curriculum that accommodates all learners, including English language learners, two distinct yet inseparable issues come to mind. The first is that English language learner students can use visual language and visual literacy skills inherent in visual arts curriculum to scaffold learning in and through the arts.…
Estimating aquifer transmissivity from specific capacity using MATLAB.
McLin, Stephen G
2005-01-01
Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
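The abstract does not give the program's equations; as an illustrative sketch of the general approach (an assumption, not the article's MATLAB code), the widely used Cooper-Jacob relation can be iterated to recover transmissivity from specific capacity, ignoring the partial-penetration and well-efficiency corrections the program applies:

    # Hedged sketch: estimate transmissivity T from specific capacity Q/s by fixed-point
    # iteration on the Cooper-Jacob relation s = (Q / (4*pi*T)) * ln(2.25*T*t / (r**2 * S)).
    # Parameter values are purely illustrative.
    import math

    def transmissivity_from_specific_capacity(Q, s, t, r, S, tol=1e-8, max_iter=100):
        """Q: pumping rate (m3/d), s: drawdown (m), t: time (d), r: well radius (m), S: storativity."""
        T = Q / (4.0 * math.pi * s)            # initial guess
        for _ in range(max_iter):
            T_new = (Q / (4.0 * math.pi * s)) * math.log(2.25 * T * t / (r**2 * S))
            if abs(T_new - T) < tol:
                return T_new
            T = T_new
        return T

    print(transmissivity_from_specific_capacity(Q=500.0, s=3.0, t=1.0, r=0.15, S=1e-4))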
ERIC Educational Resources Information Center
Gunnarsson, Petur
1992-01-01
Examines the resilience of small languages in the face of larger ones. Highlights include the concept of one dominant language, such as Esperanto; the threat of television to small visual-language societies; the power of visual media; man's relationship to language; and the resilience of language. (LRW)
Language-guided visual processing affects reasoning: the role of referential and spatial anchoring.
Dumitru, Magda L; Joergensen, Gitte H; Cruickshank, Alice G; Altmann, Gerry T M
2013-06-01
Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously-oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. Degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process. Copyright © 2013 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Zimmerman, Ruth E.
Operations at the Boston Center for Blind Children's day preschool for visually-impaired, multihandicapped children (3- to 8-years-old) are described. The following stages of evaluation and planning are identified: development of a treatment plan based on performance in the developmental areas of self-help, language, motor skills, socialization,…
Understanding Language, Hearing Status, and Visual-Spatial Skills.
Marschark, Marc; Spencer, Linda J; Durkin, Andreana; Borgna, Georgianna; Convertino, Carol; Machmer, Elizabeth; Kronenberger, William G; Trani, Alexandra
2015-10-01
It is frequently assumed that deaf individuals have superior visual-spatial abilities relative to hearing peers and thus, in educational settings, they are often considered visual learners. There is some empirical evidence to support the former assumption, although it is inconsistent, and apparently none to support the latter. Three experiments examined visual-spatial and related cognitive abilities among deaf individuals who varied in their preferred language modality and use of cochlear implants (CIs) and hearing individuals who varied in their sign language skills. Sign language and spoken language assessments accompanied tasks involving visual-spatial processing, working memory, nonverbal logical reasoning, and executive function. Results were consistent with other recent studies indicating no generalized visual-spatial advantage for deaf individuals and suggested that their performance in that domain may be linked to the strength of their preferred language skills regardless of modality. Hearing individuals performed more strongly than deaf individuals on several visual-spatial and self-reported executive functioning measures, regardless of sign language skills or use of CIs. Findings are inconsistent with assumptions that deaf individuals are visual learners or are superior to hearing individuals across a broad range of visual-spatial tasks. Further, performance of deaf and hearing individuals on the same visual-spatial tasks was associated with differing cognitive abilities, suggesting that different cognitive processes may be involved in visual-spatial processing in these groups. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Improve Problem Solving Skills through Adapting Programming Tools
NASA Technical Reports Server (NTRS)
Shaykhian, Linda H.; Shaykhian, Gholam Ali
2007-01-01
There are numerous ways for engineers and students to become better problem-solvers. The use of command line and visual programming tools can help to model a problem and formulate a solution through visualization. The analysis of problem attributes and constraints provide insight into the scope and complexity of the problem. The visualization aspect of the problem-solving approach tends to make students and engineers more systematic in their thought process and help them catch errors before proceeding too far in the wrong direction. The problem-solver identifies and defines important terms, variables, rules, and procedures required for solving a problem. Every step required to construct the problem solution can be defined in program commands that produce intermediate output. This paper advocates improved problem solving skills through using a programming tool. MatLab created by MathWorks, is an interactive numerical computing environment and programming language. It is a matrix-based system that easily lends itself to matrix manipulation, and plotting of functions and data. MatLab can be used as an interactive command line or a sequence of commands that can be saved in a file as a script or named functions. Prior programming experience is not required to use MatLab commands. The GNU Octave, part of the GNU project, a free computer program for performing numerical computations, is comparable to MatLab. MatLab visual and command programming are presented here.
Bilingual Control: Sequential Memory in Language Switching
ERIC Educational Resources Information Center
Declerck, Mathieu; Philipp, Andrea M.; Koch, Iring
2013-01-01
To investigate bilingual language control, prior language switching studies presented visual objects, which had to be named in different languages, typically indicated by a visual cue. The present study examined language switching of predictable responses by introducing a novel sequence-based language switching paradigm. In 4 experiments,…
ERIC Educational Resources Information Center
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
2015-01-01
Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…
Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang
2015-01-01
Previous studies have shown brain reorganization after early deprivation of auditory sensory input. However, changes in grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within the auditory and visual systems, and that connectivity between the language and visual systems declined. Notably, significantly increased connectivity was found between the auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of auditory input in prelingually deaf adolescents, especially between the auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within the language and visual systems in prelingually deaf adolescents.
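As an illustrative sketch of sparse inverse covariance estimation over regional grey matter volumes, the graphical lasso in scikit-learn can be used; this mirrors the general SICE approach rather than the authors' pipeline, and the data below are random stand-ins:

    # Sketch of SICE with the graphical lasso; random data, illustrative only.
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    n_subjects, n_rois = 16, 14
    X = np.random.randn(n_subjects, n_rois)      # stand-in for grey matter volumes per ROI

    model = GraphicalLasso(alpha=0.2).fit(X)     # alpha controls sparsity of the estimate
    precision = model.precision_                 # nonzero off-diagonal entries = retained connections
    connected = np.abs(precision) > 1e-6
    np.fill_diagonal(connected, False)
    print(f"{connected.sum() // 2} connections retained among {n_rois} regions")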
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Steven Adriel
The following discussion contains a high-level description of the methods used to implement software for data processing. It describes the directory structures and file handling required to use Excel's Visual Basic for Applications programming language, and how to identify shot, test, and capture types so that data are processed appropriately. It also describes how to interface with the software.
Toward Global Communication Networks: How Television is Forging New Thinking Patterns.
ERIC Educational Resources Information Center
Adams, Dennis M.; Fuchs, Mary
1986-01-01
Recent alliances between communication providers and computer manufacturers will lead to new technological combinations that deliver visually based ideas and information to a worldwide audience. Urges those in charge of future video programs to consider their effects on children's language skills, thinking patterns, and intellectual…
"Visual" Cortex Responds to Spoken Language in Blind Children.
Bedny, Marina; Richardson, Hilary; Saxe, Rebecca
2015-08-19
Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. Copyright © 2015 the authors 0270-6474/15/3511674-08$15.00/0.
ERIC Educational Resources Information Center
Hunter, Zoe R.; Brysbaert, Marc
2008-01-01
Traditional neuropsychology employs visual half-field (VHF) experiments to assess cerebral language dominance. This approach is based on the assumption that left cerebral dominance for language leads to faster and more accurate recognition of words in the right visual half-field (RVF) than in the left visual half-field (LVF) during tachistoscopic…
Opposite brain laterality in analogous auditory and visual tests.
Oltedal, Leif; Hugdahl, Kenneth
2017-11-01
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
Web-based three-dimensional geo-referenced visualization
NASA Astrophysics Data System (ADS)
Lin, Hui; Gong, Jianhua; Wang, Freeman
1999-12-01
This paper addresses several approaches to implementing web-based, three-dimensional (3-D), geo-referenced visualization. The discussion focuses on the relationship between multi-dimensional data sets and applications, as well as on thick/thin client and heavy/light server structures. Two models of data sets are addressed in this paper. One is the use of traditional 3-D data formats such as 3-D Studio Max, Open Inventor 2.0, Vis5D and OBJ. The other is modelled with a web-based language such as VRML. Traditional languages such as C and C++, as well as web-based programming tools such as Java, Java3D and ActiveX, can be used for developing applications. The strengths and weaknesses of each approach are elaborated. Four practical solutions for developing web-based, real-time, interactive and explorative visualization applications are presented: VRML with Java, Java with Java3D, VRML with ActiveX, and Java wrapper classes (Java and C/C++).
MatLab Programming for Engineers Having No Formal Programming Knowledge
NASA Technical Reports Server (NTRS)
Shaykhian, Linda H.; Shaykhian, Gholam Ali
2007-01-01
MatLab is one of the most widely used very-high-level programming languages for scientific and engineering computations. It is very user-friendly and requires practically no formal programming knowledge. Presented here are MatLab programming aspects, not just MatLab commands, for scientists and engineers who have no formal programming training and no significant time to spare for learning programming to solve their real-world problems. Specifically, programs for visualization are provided. Also stated are the current limitations of MatLab, which could be addressed by MathWorks Inc. in a future version to make MatLab more versatile.
An amodal shared resource model of language-mediated visual attention
Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk
2013-01-01
Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967
Software Aids In Graphical Depiction Of Flow Data
NASA Technical Reports Server (NTRS)
Stegeman, J. D.
1995-01-01
The Interactive Data Display System (IDDS) computer program is a graphical-display program designed to assist in the visualization of three-dimensional flow in turbomachinery. Grid and simulation data files in PLOT3D format are required as input. IDDS is able to unwrap the volumetric data cone associated with a centrifugal compressor and display the results in easy-to-understand two- or three-dimensional plots. IDDS provides the majority of the visualization and analysis capability for the Integrated Computational Fluid Dynamics and Experiment (ICE) system. It can be invoked from any subsystem or used as a stand-alone package of display software. It generates contour, vector, shaded, x-y, and carpet plots. The program is written in the C language. The input file format used by IDDS is that of PLOT3D (COSMIC item ARC-12782).
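For orientation, the sketch below reads a simple PLOT3D grid file of the kind IDDS consumes. It assumes a single-block, formatted (ASCII) 3-D grid without IBLANK data; real PLOT3D files are often multi-block and/or binary, so this is an illustrative simplification only.

```python
# Sketch: read a single-block, formatted PLOT3D grid file (assumed layout:
# "imax jmax kmax" followed by all X, then all Y, then all Z values).
import numpy as np

def read_plot3d_grid(path):
    with open(path) as f:
        values = np.array(f.read().split(), dtype=float)
    imax, jmax, kmax = values[:3].astype(int)
    npts = imax * jmax * kmax
    coords = values[3:3 + 3 * npts]
    # PLOT3D stores points with i varying fastest, so reshape to (k, j, i).
    x, y, z = (coords[i * npts:(i + 1) * npts].reshape((kmax, jmax, imax))
               for i in range(3))
    return x, y, z
```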
Language identification from visual-only speech signals
Ronquest, Rebecca E.; Levi, Susannah V.; Pisoni, David B.
2010-01-01
Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification. PMID:20675804
Allen, Thomas E; Letteri, Amy; Choi, Song Hoa; Dang, Daqian
2014-01-01
Brief review is provided of recent research on the impact of early visual language exposure on a variety of developmental outcomes, including literacy, cognition, and social adjustment. This body of work points to the great importance of giving young deaf children early exposure to a visual language as a critical precursor to the acquisition of literacy. Four analyses of data from the Visual Language and Visual Learning (VL2) Early Education Longitudinal Study are summarized. Each confirms findings from previously published laboratory findings and points to the positive effects of early sign language on, respectively, letter knowledge, social adaptability, sustained visual attention, and cognitive-behavioral milestones necessary for academic success. The article concludes with a consideration of the qualitative similarity hypothesis and a finding that the hypothesis is valid, but only if it can be presented as being modality independent.
Almeida, Diogo; Poeppel, David; Corina, David
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
Applications of Java and Vector Graphics to Astrophysical Visualization
NASA Astrophysics Data System (ADS)
Edirisinghe, D.; Budiardja, R.; Chae, K.; Edirisinghe, G.; Lingerfelt, E.; Guidry, M.
2002-12-01
We describe a series of projects utilizing the portability of Java programming coupled with the compact nature of vector graphics (SVG and SWF formats) for setup and control of calculations, local and collaborative visualization, and interactive 2D and 3D animation presentations in astrophysics. Through a set of examples, we demonstrate how such an approach can allow efficient and user-friendly control of calculations in compiled languages such as Fortran 90 or C++ through portable graphical interfaces written in Java, and how the output of such calculations can be packaged in vector-based animation having interactive controls and extremely high visual quality, but very low bandwidth requirements.
Computer programming for generating visual stimuli.
Bukhari, Farhan; Kurylo, Daniel D
2008-02-01
Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
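As an illustration of the trial structure described (display a stimulus, then record the response and reaction time), the sketch below uses the PsychoPy package as one possible implementation. PsychoPy is an assumption here, not the authors' program, and the stimulus and key list are placeholders.

```python
# Sketch of a single trial: show a text stimulus, wait for a keypress, log RT.
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color='grey', units='pix')
clock = core.Clock()

message = visual.TextStim(win, text='Press the left or right arrow key')
message.draw()
win.flip()                      # stimulus onset

clock.reset()                   # start timing at onset
keys = event.waitKeys(keyList=['left', 'right', 'escape'], timeStamped=clock)
response, reaction_time = keys[0]
print(response, reaction_time)  # a contingency algorithm could branch on these

win.close()
core.quit()
```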
USDA-ARS?s Scientific Manuscript database
Background: A review of the literature produced no rigorously tested and validated Spanish-language physical activity survey or evaluation tools for use by USDA’s food assistance and education programs. The purpose of the current study was to develop and evaluate the face validity of a visually enha...
Exploring the Synergies between the Object Oriented Paradigm and Mathematics: A Java Led Approach
ERIC Educational Resources Information Center
Conrad, Marc; French, Tim
2004-01-01
While the object oriented paradigm and its instantiation within programming languages such as Java has become a ubiquitous part of both the commercial and educational landscapes, its usage as a visualization technique within mathematics undergraduate programmes of study has perhaps been somewhat underestimated. By regarding the object oriented…
Biomolecules in the Computer: Jmol to the Rescue
ERIC Educational Resources Information Center
Herraez, Angel
2006-01-01
Jmol is free, open source software for interactive molecular visualization. Since it is written in the Java[TM] programming language, it is compatible with all major operating systems and, in the applet form, with most modern web browsers. This article summarizes Jmol development and features that make it a valid and promising replacement for…
The New Explorers teacher's guide: The new language of science
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-09-01
The Chicago Science Explorers Program is designed to make students aware of the many career options available to them that involve science. The program also hopes to encourage students to consider a career in science by providing interesting classroom experiences, information on various careers generated from the videotape, and a class field trip. In the videotape The New Language of Science, Dr. Larry Smarr of the University of Illinois illustrates how supercomputers can create visualizations of such complex scientific concepts and events as black holes in space, microbursts, smog, drug interactions in the body, earthquakes, and tornadoes. It also illustrates how math and science are integrated and emphasizes the need for students to take as much advanced mathematics as is offered at the junior high and high school level. Another underlying concept of the videotape is teamwork. Often students think of science as an isolated career, and this videotape clearly demonstrates that no one scientist would have enough knowledge to create a visualization alone. This report is the teacher's guide for the video.
Dye, Matthew W G; Seymour, Jenessa L; Hauser, Peter C
2016-04-01
Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.
Impact of language on development of auditory-visual speech perception.
Sekiyama, Kaoru; Burnham, Denis
2008-03-01
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.
Experimental evaluation of sensorimotor patterning used with mentally retarded children.
Neman, R; Roos, P; McCann, R M; Menolascino, F J; Heal, L W
1975-01-01
In the present study, a sensorimotor "patterning" program used with 66 institutionalized, mentally retarded children and adolescents was evaluated. The subjects were randomly assigned to one of three groups: (a) Experimental 1 group, which received a program of mobility exercises including patterning, creeping, and crawling; visual-motor training; and sensory stimulation exercises; (b) Experimental 2 group, which received a program of physical activity, personal attention, and the same sensory stimulation program given to the first group; or (c) Passive Control group, which provided baseline measures but which received no additional programming as part of the study. Experimental 1 group subjects improved more than subjects in the other groups in visual perception, program-related measures of mobility, and language ability. Intellectual functioning did not appear to be enhanced by the procedures, at least during the active phase of the project. The results were discussed with reference to other researchers who have failed to support the patterning approach, and some reasons were suggested for the differences between the present and past investigations.
Using Visual Literacy to Teach Science Academic Language: Experiences from Three Preservice Teachers
ERIC Educational Resources Information Center
Kelly-Jackson, Charlease; Delacruz, Stacy
2014-01-01
This original pedagogical study captured three preservice teachers' experiences using visual literacy strategies as an approach to teaching English language learners (ELLs) science academic language. The following research questions guided this study: (1) What are the experiences of preservice teachers' use of visual literacy to teach science…
Kover, Sara T.; McCary, Lindsay M.; Ingram, Alexandra M.; Hatton, Deborah D.; Roberts, Jane E.
2017-01-01
Fragile X syndrome (FXS) is associated with significant language and communication delays, as well as problems with attention. This study investigated early language abilities in infants and toddlers with FXS (n = 13) and considered visual attention as a predictor of those skills. We found that language abilities increased over the study period of 9 to 24 months with moderate correlations among language assessments. In comparison to typically developing infants (n = 11), language skills were delayed beyond chronological age- and developmental level-expectations. Aspects of early visual attention predicted later language ability. Atypical visual attention is an important aspect of the FXS phenotype with implications for early language development, particularly in the domain of vocabulary. PMID:25715182
Fortea-Sevilla, M Sol; Escandell-Bermúdez, M Olga; Castro-Sánchez, José Juan; Martos-Pérez, Juan
2015-02-25
The latest research findings show the importance of early intervention in children with autism spectrum disorder (ASD) in all areas of development, including language. The use of augmentative and alternative communication systems (AACS) favors linguistic and communicative development. The aim was to show the effectiveness of AACS for developing oral language in non-verbal toddlers diagnosed with ASD. The participants were thirty children (25 males and 5 females) diagnosed with ASD between 18 and 30 months of age using the ADOS and ADI-R instruments; none of them displayed oral language at the time of assessment. An intervention program in the area of language was designed based on the use of total communication by the therapist and training the child in the Picture Exchange Communication System (PECS). One year later, because oral language had developed, its formal aspects were assessed with the PLON-R. All the children had developed oral language to some extent over the one-year period. Early intervention and the use of AACS with visual props favor the development of oral language in children with ASD in the first years of life.
Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Campbell, Nicci; Luxon, Linda M
2010-08-01
This article reviews the evidence for computer-based auditory training (CBAT) in children with language, reading, and related learning difficulties, and evaluates the extent to which it can benefit children with auditory processing disorder (APD). Searches were confined to studies published between 2000 and 2008, and studies were rated according to the level-of-evidence hierarchy proposed by the American Speech-Language-Hearing Association (ASHA) in 2004. We identified 16 studies of two commercially available CBAT programs (13 studies of Fast ForWord (FFW) and 3 studies of Earobics) and five further outcome studies of other non-speech and simple speech sounds training, available for children with language, learning, and reading difficulties. The results suggest that, apart from phonological awareness skills, the FFW and Earobics programs seem to have little effect on the language, spelling, and reading skills of children. Non-speech and simple speech sounds training may be effective in improving children's reading skills, but only if it is delivered by an audio-visual method. There is some initial evidence to suggest that CBAT may be of benefit for children with APD. Further research is necessary, however, to substantiate these preliminary findings.
Visualization of International Solar-Terrestrial Physics Program (ISTP) data
NASA Technical Reports Server (NTRS)
Kessel, Ramona L.; Candey, Robert M.; Hsieh, Syau-Yun W.; Kayser, Susan
1995-01-01
The International Solar-Terrestrial Physics Program (ISTP) is a multispacecraft, multinational program whose objective is to promote further understanding of the Earth's complex plasma environment. Extensive data sharing and data analysis will be needed to ensure the success of the overall ISTP program. For this reason, there has been a special emphasis on data standards throughout ISTP. One of the key tools will be the common data format (CDF), developed, maintained, and evolved at the National Space Science Data Center (NSSDC), with the set of ISTP implementation guidelines specially designed for space physics data sets by the Space Physics Data Facility (associated with the NSSDC). The ISTP guidelines were developed to facilitate searching, plotting, merging, and subsetting of data sets. We focus here on the plotting application. A prototype software package was developed to plot key parameter (KP) data from the ISTP program at the Science Planning and Operations Facility (SPOF). The ISTP Key Parameter Visualization Tool is based on the Interactive Data Language (IDL) and is keyed to the ISTP guidelines, reading data stored in CDF. With the combination of CDF, the ISTP guidelines, and the visualization software, we can look forward to easier and more effective data sharing and use among ISTP scientists.
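The original Key Parameter Visualization Tool is IDL-based, but the same ISTP-guideline CDF files can be read from Python today. The sketch below uses the cdflib package as an assumed modern stand-in; the file name and variable names are hypothetical placeholders, not actual ISTP products.

```python
# Sketch: read an ISTP-style key-parameter CDF and plot one variable.
# cdflib and the specific file/variable names are assumptions for illustration.
import cdflib
import matplotlib.pyplot as plt

cdf = cdflib.CDF("wi_k0_mfi_19950101_v01.cdf")   # hypothetical KP file
epoch = cdf.varget("Epoch")                       # time tags
field = cdf.varget("BGSEc")                       # hypothetical vector field variable

plt.plot(epoch, field)
plt.xlabel("Epoch (CDF epoch values)")
plt.ylabel("Key parameter value")
plt.show()
```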
ERIC Educational Resources Information Center
Shafiro, Valeriy; Kharkhurin, Anatoliy V.
2009-01-01
Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…
Koch, Jane; Salamonson, Yenna; Rolley, John X; Davidson, Patricia M
2011-08-01
The growth of accelerated graduate entry nursing programs has challenged traditional approaches to teaching and learning. To date, limited research has been undertaken on the role of learning preferences, language proficiency and academic performance in accelerated programs. Sixty-two first-year accelerated graduate entry nursing students, in a single cohort at a university in the western region of Sydney, Australia, were surveyed to assess their learning preferences using the Visual, Aural, Read/write and Kinaesthetic (VARK) learning preference questionnaire, together with sociodemographic data, English language acculturation and perceived academic control. Six months after course commencement, the participants' grade point average (GPA) was studied as a measure of academic performance. A 93% response rate was achieved. The majority of students (62%) reported a preference for multiple approaches to learning, with the kinaesthetic sensory mode a significant (p=0.009) predictor of academic performance. Students who spoke only English at home had higher mean scores on two of the four VARK sensory modalities, visual and kinaesthetic, than those who spoke a language other than English at home. Further research is warranted to investigate the reasons why the kinaesthetic sensory mode is a predictor of academic performance and to what extent the mean scores on the four learning preferences change with improved English language proficiency. Copyright © 2010 Elsevier Ltd. All rights reserved.
How does visual language affect crossmodal plasticity and cochlear implant success?
Lyness, C.R.; Woll, B.; Campbell, R.; Cardin, V.
2013-01-01
Cochlear implants (CI) are the most successful intervention for ameliorating hearing loss in severely or profoundly deaf children. Despite this, educational performance in children with CI continues to lag behind that of their hearing peers. From animal models and human neuroimaging studies, it has been proposed that the integrative functions of auditory cortex are compromised by crossmodal plasticity. This has been argued to result partly from the use of a visual language. Here we argue that ‘cochlear implant sensitive periods’ comprise both auditory and language sensitive periods, and thus cannot be fully described with animal models. Despite prevailing assumptions, there is no evidence to link the use of a visual language to poorer CI outcome. Crossmodal reorganisation of auditory cortex occurs regardless of compensatory strategies, such as sign language, used by the deaf person. In contrast, language deprivation during early sensitive periods has been repeatedly linked to poor language outcomes. Language sensitive periods have largely been ignored when considering variation in CI outcome, leading to ill-founded recommendations concerning visual language in CI habilitation. PMID:23999083
Visual Fast Mapping in School-Aged Children with Specific Language Impairment
ERIC Educational Resources Information Center
Alt, Mary
2013-01-01
Purpose: To determine whether children with specific language impairment (SLI) demonstrate impaired visual fast mapping skills compared with unimpaired peers and to test components of visual working memory that may contribute to a visual working memory deficit. Methods: Fifty children (25 SLI) played 2 computer-based visual fast mapping games…
Visual Literacy and Visual Thinking.
ERIC Educational Resources Information Center
Hortin, John A.
It is proposed that visual literacy be defined as the ability to understand (read) and use (write) images and to think and learn in terms of images. This definition includes three basic principles: (1) visuals are a language and thus analogous to verbal language; (2) a visually literate person should be able to understand (read) images and use…
Implied motion language can influence visual spatial memory.
Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick
2017-07-01
How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.
A Visual Editor in Java for View
NASA Technical Reports Server (NTRS)
Stansifer, Ryan
2000-01-01
In this project we continued the development of a visual editor in the Java programming language to create screens on which to display real-time data. The data comes from the numerous systems monitoring the operation of the space shuttle while on the ground and in space, and from the many tests of subsystems. The data can be displayed on any computer platform running a Java-enabled World Wide Web (WWW) browser and connected to the Internet. Previously a special-purpose program had been written to display data on emulations of character-based display screens used for many years at NASA. The goal now is to display bit-mapped screens created by a visual editor. We report here on the visual editor that creates the display screens. This project continues the work we had done previously. Previously we had followed the design of the 'beanbox,' a prototype visual editor created by Sun Microsystems. We abandoned this approach and implemented a prototype using a more direct approach. In addition, our prototype is based on newly released Java 2 graphical user interface (GUI) libraries. The result has been a visually more appealing appearance and a more robust application.
Visual cortex entrains to sign language.
Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel
2017-06-13
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ~1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
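One simple way to quantify visual change over time in a video, sketched below, is a frame-to-frame difference signal. This is an assumed stand-in for the authors' metric, not their actual definition, and the frame array is a placeholder for decoded grayscale video frames.

```python
# Sketch: a 1-D "visual change" signal from successive grayscale video frames.
import numpy as np

def visual_change(frames):
    """frames: array of shape (n_frames, height, width) of grayscale values.
    Returns the mean absolute change between each pair of successive frames."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# The resulting signal, sampled at the video frame rate, could then be
# compared with EEG using spectral coherence.
```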
Learning through Play: Portraits, Photoshop, and Visual Literacy Practices
ERIC Educational Resources Information Center
Honeyford, Michelle A.; Boyd, Karen
2015-01-01
Play has a significant role in language and literacy learning. However, even when valued in schools, opportunities for play are limited beyond early childhood education. This study of an after-school program for adolescents looks closely at several forms of play that students engaged in to produce self-portraits. The study suggests that play and…
Users Guide to VSMOKE-GIS for Workstations
Mary F. Harms; Leonidas G. Lavdas
1997-01-01
VSMOKE-GIS was developed to help prescribed burners in the national forests of the Southeastern United States visualize smoke dispersion and to plan prescribed burns. Developed for use on workstations, this decision-support system consists of a graphical user interface, written in Arc/Info Arc Macro Language, and is linked to a FORTRAN computer program. VSMOKE-GIS...
Righi, Giulia; Tenenbaum, Elena J; McCormick, Carolyn; Blossom, Megan; Amso, Dima; Sheinkopf, Stephen J
2018-04-01
Autism Spectrum Disorder (ASD) is often accompanied by deficits in speech and language processing. Speech processing relies heavily on the integration of auditory and visual information, and it has been suggested that the ability to detect correspondence between auditory and visual signals helps to lay the foundation for successful language development. The goal of the present study was to examine whether young children with ASD show reduced sensitivity to temporal asynchronies in a speech processing task when compared to typically developing controls, and to examine how this sensitivity might relate to language proficiency. Using automated eye tracking methods, we found that children with ASD failed to demonstrate sensitivity to asynchronies of 0.3s, 0.6s, or 1.0s between a video of a woman speaking and the corresponding audio track. In contrast, typically developing children who were language-matched to the ASD group, were sensitive to both 0.6s and 1.0s asynchronies. We also demonstrated that individual differences in sensitivity to audiovisual asynchronies and individual differences in orientation to relevant facial features were both correlated with scores on a standardized measure of language abilities. Results are discussed in the context of attention to visual language and audio-visual processing as potential precursors to language impairment in ASD. Autism Res 2018, 11: 645-653. © 2018 International Society for Autism Research, Wiley Periodicals, Inc. Speech processing relies heavily on the integration of auditory and visual information, and it has been suggested that the ability to detect correspondence between auditory and visual signals helps to lay the foundation for successful language development. The goal of the present study was to explore whether children with ASD process audio-visual synchrony in ways comparable to their typically developing peers, and the relationship between preference for synchrony and language ability. Results showed that there are differences in attention to audiovisual synchrony between typically developing children and children with ASD. Preference for synchrony was related to the language abilities of children across groups. © 2018 International Society for Autism Research, Wiley Periodicals, Inc.
How a Visual Language of Abstract Shapes Facilitates Cultural and International Border Crossings
ERIC Educational Resources Information Center
Conroy, Arthur Thomas, III
2016-01-01
This article describes a visual language comprised of abstract shapes that has been shown to be effective in communicating prior knowledge between and within members of a small team or group. The visual language includes a set of geometric shapes and rules that guide the construction of the abstract diagrams that are the external representation of…
Construction and validation of logMAR visual acuity charts in seven Indian languages.
Negiloni, Kalpa; Mazumdar, Deepmala; Neog, Aditya; Das, Biman; Medhi, Jnanankar; Choudhury, Mitalee; George, Ronnie Jacob; Ramani, Krishna Kumar
2018-05-01
The evaluation of visual impairment requires the measurement of visual acuity with a validated, standard logMAR visual acuity chart. We aimed to construct and validate new logMAR visual acuity charts in Indian languages (Hindi, Bengali, Telugu, Urdu, Kannada, Malayalam, and Assamese). The commonly used font in each language was chosen as the reference and designed to fit the 5 × 5 grid (Adobe Photoshop). For each language, ten letters (ranging from easiest to most difficult) around the median legibility score calculated from the results of a legibility experiment, and differing by 10%, were selected. The charts were constructed according to the standard recommendations. The repeatability of the charts was tested, and the charts were also compared with a standard English Early Treatment Diabetic Retinopathy Study (ETDRS) logMAR chart for validation. A total of 14 rows (1.0 to -0.3 logMAR) with five letters in each line were designed, with the range of row legibility between 4.7 and 5.3 for all the language charts. Each chart showed good repeatability, and a maximum difference of four letters was noted. The median difference in visual acuity was 0.16 logMAR for the Urdu and Assamese charts compared with the ETDRS English chart. The Hindi and Malayalam charts had a median difference of 0.12 logMAR, and a median difference of 0.14 logMAR was noted for the Telugu, Kannada, and Bengali charts. The newly developed Indian-language visual acuity charts are designed according to the standard recommendations and will help to assess visual impairment in speakers of these languages across the country.
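The geometry behind such a chart is worth making explicit: a letter on the 0.0 logMAR row subtends 5 arcmin at the test distance, and each row changes size by a factor of 10^0.1. The small sketch below computes the implied letter heights; the 4 m test distance is an assumption for illustration, not necessarily the distance used in the study.

```python
# Sketch: physical letter height on a logMAR chart at a given test distance.
import math

def letter_height_mm(logmar, distance_m=4.0):
    arcmin = 5 * 10 ** logmar                         # visual angle subtended by the letter
    return distance_m * 1000 * math.tan(math.radians(arcmin / 60))

for row in [1.0, 0.5, 0.0, -0.3]:
    print(f"{row:+.1f} logMAR -> {letter_height_mm(row):.1f} mm letter height")
# e.g. 0.0 logMAR at 4 m comes out to roughly 5.8 mm.
```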
Drager, Kathryn; Light, Janice; Caron, Jessica Gosnell
2017-01-01
Purpose: Augmentative and alternative communication (AAC) promotes communicative participation and language development for young children with complex communication needs. However, the motor, linguistic, and cognitive demands of many AAC technologies restrict young children's operational use of and influence over these technologies. The purpose of the current study is to better understand young children's participation in programming vocabulary “just in time” on an AAC application with minimized demands. Method: A descriptive study was implemented to highlight the participation of 10 typically developing toddlers (M age: 16 months, range: 10–22 months) in just-in-time vocabulary programming in an AAC app with visual scene displays. Results: All 10 toddlers participated in some capacity in adding new visual scene displays and vocabulary to the app just in time. Differences in participation across steps were observed, suggesting variation in the developmental demands of controls involved in vocabulary programming. Conclusions: Results from the current study provide clinical insights toward involving young children in AAC programming just in time and steps that may allow for more independent participation or require more scaffolding. Technology designed to minimize motor, cognitive, and linguistic demands may allow children to participate in programming devices at a younger age. PMID:28586825
Holyfield, Christine; Drager, Kathryn; Light, Janice; Caron, Jessica Gosnell
2017-08-15
Augmentative and alternative communication (AAC) promotes communicative participation and language development for young children with complex communication needs. However, the motor, linguistic, and cognitive demands of many AAC technologies restrict young children's operational use of and influence over these technologies. The purpose of the current study is to better understand young children's participation in programming vocabulary "just in time" on an AAC application with minimized demands. A descriptive study was implemented to highlight the participation of 10 typically developing toddlers (M age: 16 months, range: 10-22 months) in just-in-time vocabulary programming in an AAC app with visual scene displays. All 10 toddlers participated in some capacity in adding new visual scene displays and vocabulary to the app just in time. Differences in participation across steps were observed, suggesting variation in the developmental demands of controls involved in vocabulary programming. Results from the current study provide clinical insights toward involving young children in AAC programming just in time and steps that may allow for more independent participation or require more scaffolding. Technology designed to minimize motor, cognitive, and linguistic demands may allow children to participate in programming devices at a younger age.
Relation of Infant Vision to Early Cognitive and Language Status.
ERIC Educational Resources Information Center
Duckman, Robert; Tulloch, Deborah
Relationships between infant visual skills and the development of object permanence and expressive language skills were examined with 31 infants in three groups: visually typical, visually atypical, and Down Syndrome. Measures used to evaluate visual status were: forced preferential looking, optokinetic nystagmus, and behavioral. Object permanence…
A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories
NASA Astrophysics Data System (ADS)
Brown, Christa L.
National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.
Sensitivity to visual prosodic cues in signers and nonsigners.
Brentari, Diane; González, Carolina; Seidl, Amanda; Wilbur, Ronnie
2011-03-01
Three studies are presented in this paper that address how nonsigners perceive the visual prosodic cues in a sign language. In Study 1, adult American nonsigners and users of American Sign Language (ASL) were compared on their sensitivity to the visual cues in ASL Intonational Phrases. In Study 2, hearing, nonsigning American infants were tested using the same stimuli used in Study 1 to see whether maturity, exposure to gesture, or exposure to sign language is necessary to demonstrate this type of sensitivity. Study 3 addresses nonsigners' and signers' strategies for segmenting Prosodic Words in a sign language. Adult participants from six language groups (3 spoken languages and 3 sign languages) were tested. The results of these three studies indicate that nonsigners have a high degree of sensitivity to sign language prosodic cues at the Intonational Phrase level and the Prosodic Word level; these are attributed to modality or 'channel' effects of the visual signal. There are also some differences between signers' and nonsigners' sensitivity; these differences are attributed to language experience or language-particular constraints. This work is useful in understanding the gestural competence of nonsigners and the ways in which this type of competence may contribute to the grammaticalization of these properties in a sign language.
Visual Sonority Modulates Infants' Attraction to Sign Language
ERIC Educational Resources Information Center
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain
2018-01-01
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
Design of an off-axis visual display based on a free-form projection screen to realize stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Yuanming; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong
2017-10-01
A free-form projection screen is designed for an off-axis visual display, which shows great potential in applications such as flight training by providing both accommodation and convergence cues for pilots. A method based on a point cloud is proposed for the design of the free-form surface, and the design of the point cloud is controlled by a program written in a macro language. In the visual display based on the free-form projection screen, when the error of the screen along the Z-axis is 1 mm, the error of the visual distance at each field is less than 1%. The resolution of the design over the full field is better than 1′, which meets the resolution requirement of the human eye.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmus, Jonathan J.; Collis, Scott M.
The Python ARM Radar Toolkit is a package for reading, visualizing, correcting and analysing data from weather radars. Development began to meet the needs of the Atmospheric Radiation Measurement Climate Research Facility and has since expanded to provide a general-purpose framework for working with data from weather radars in the Python programming language. The toolkit is built on top of libraries in the Scientific Python ecosystem including NumPy, SciPy, and matplotlib, and makes use of Cython for interfacing with existing radar libraries written in C and to speed up computationally demanding algorithms. The source code for the toolkit is available on GitHub and is distributed under a BSD license.
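A minimal usage sketch consistent with this description follows. The calls are taken from the publicly documented Py-ART API rather than from the abstract itself, and the input file name is a placeholder.

```python
# Sketch: read a radar volume with Py-ART and plot the lowest-sweep reflectivity.
import pyart
import matplotlib.pyplot as plt

radar = pyart.io.read("example_radar_volume.nc")   # placeholder file name
display = pyart.graph.RadarDisplay(radar)

fig = plt.figure(figsize=(6, 5))
display.plot("reflectivity", sweep=0)              # PPI of the lowest sweep
plt.show()
```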
Khomtchouk, Bohdan B; Van Booven, Derek J; Wahlestedt, Claes
2014-01-01
The graphical visualization of gene expression data using heatmaps has become an integral component of modern-day medical research. Heatmaps are used extensively to plot quantitative differences in gene expression levels, such as those measured with RNAseq and microarray experiments, to provide qualitative large-scale views of the transcriptomic landscape. Creating high-quality heatmaps is a computationally intensive task, often requiring considerable programming experience, particularly for customizing features to the specific dataset at hand. The software is developed with the R programming language, the C++ programming language, and the OpenGL application programming interface (API) to produce industry-grade, high-performance graphics. We created a graphical user interface (GUI) software package called HeatmapGenerator for Windows OS and Mac OS X as an intuitive, user-friendly alternative for researchers with minimal prior coding experience, allowing them to create publication-quality heatmaps using R graphics without sacrificing their desired level of customization. The simplicity of HeatmapGenerator is that it only requires the user to upload a preformatted input file and download the publicly available R software language, among a few other operating-system-specific requirements. Advanced features such as color, text labels, scaling, legend construction, and even database storage can be easily customized with no prior programming knowledge. We provide an intuitive and user-friendly software package, HeatmapGenerator, to create high-quality, customizable heatmaps generated using the high-resolution color graphics capabilities of R. The software is available for Microsoft Windows and Apple Mac OS X. HeatmapGenerator is released under the GNU General Public License and publicly available at: http://sourceforge.net/projects/heatmapgenerator/. The Mac OS X direct download is available at: http://sourceforge.net/projects/heatmapgenerator/files/HeatmapGenerator_MAC_OSX.tar.gz/download. The Windows OS direct download is available at: http://sourceforge.net/projects/heatmapgenerator/files/HeatmapGenerator_WINDOWS.zip/download.
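For readers who only need a sense of the kind of plot being discussed, the sketch below draws a basic expression heatmap with matplotlib. This is not HeatmapGenerator (which wraps R graphics); it is a loosely analogous Python example with made-up data.

```python
# Sketch: a minimal gene-expression heatmap with matplotlib, using random data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
expression = rng.normal(size=(20, 6))            # 20 genes x 6 samples (placeholder values)
genes = [f"gene_{i}" for i in range(20)]
samples = [f"sample_{j}" for j in range(6)]

fig, ax = plt.subplots(figsize=(4, 6))
im = ax.imshow(expression, aspect="auto", cmap="viridis")
ax.set_xticks(range(len(samples)))
ax.set_xticklabels(samples, rotation=45, ha="right")
ax.set_yticks(range(len(genes)))
ax.set_yticklabels(genes, fontsize=6)
fig.colorbar(im, ax=ax, label="expression (arbitrary units)")
fig.tight_layout()
plt.show()
```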
Understanding and representing natural language meaning
NASA Astrophysics Data System (ADS)
Waltz, D. L.; Maran, L. R.; Dorfman, M. H.; Dinitz, R.; Farwell, D.
1982-12-01
During this contract period the authors have: (1) continued investigation of events and actions by means of representation schemes called 'event shape diagrams'; (2) written a parsing program which selects appropriate word and sentence meanings by a parallel process known as activation and inhibition; (3) begun investigation of the point of a story or event by modeling the motivations and emotional behaviors of story characters; (4) started work on combining and translating two machine-readable dictionaries into a lexicon and knowledge base which will form an integral part of our natural language understanding programs; (5) made substantial progress toward a general model for the representation of cognitive relations by comparing English scene and event descriptions with similar descriptions in other languages; (6) constructed a general model for the representation of tense and aspect of verbs; (7) made progress toward the design of an integrated robotics system which accepts English requests, and uses visual and tactile inputs in making decisions and learning new tasks.
ERIC Educational Resources Information Center
Allen, Thomas E.; Letteri, Amy; Choi, Song Hoa; Dang, Daqian
2014-01-01
A brief review is provided of recent research on the impact of early visual language exposure on a variety of developmental outcomes, including literacy, cognition, and social adjustment. This body of work points to the great importance of giving young deaf children early exposure to a visual language as a critical precursor to the acquisition of…
Perniss, Pamela; Özyürek, Asli; Morgan, Gary
2015-01-01
For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. Copyright © 2015 Cognitive Science Society, Inc.
Anatomical Substrates of Visual and Auditory Miniature Second-language Learning
Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.
2007-01-01
Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similar to first languages even when learned after the "critical period." The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186
Kristensen, Hanne; Oerbeck, Beate
2006-01-01
Our main aim in this study was to explore the association between selective mutism (SM) and aspects of nonverbal cognition such as visual memory span and visual memory. Auditory-verbal memory span was also examined. The etiology of SM is unclear, and it probably represents a heterogeneous condition. SM is associated with language impairment, but nonspecific neurodevelopmental factors, including motor problems, are also reported in SM without language impairment. Furthermore, SM is described in Asperger's syndrome. Studies on nonverbal cognition in SM thus merit further investigation. Neuropsychological tests were administered to a clinical sample of 32 children and adolescents with SM (ages 6-17 years, 14 boys and 18 girls) and 62 nonreferred controls matched for age, gender, and socioeconomic status. We used independent t-tests to compare groups with regard to auditory-verbal memory span, visual memory span, and visual memory (Benton Visual Retention Test), and employed linear regression analysis to study the impact of SM on visual memory, controlling for IQ and measures of language and motor function. The SM group differed from controls on auditory-verbal memory span but not on visual memory span. Controlled for IQ, language, and motor function, the SM group did not differ from controls on visual memory. Motor function was the strongest predictor of visual memory performance. SM does not appear to be associated with deficits in visual memory span or visual memory. The reduced auditory-verbal memory span supports the association between SM and language impairment. More comprehensive neuropsychological studies are needed.
Visual Organizers as Scaffolds in Teaching English as a Foreign Language
ERIC Educational Resources Information Center
Chang, Yu-Liang
2006-01-01
This thesis deals with using visual organizers as scaffolds in teaching English as a foreign language (EFL). Based on the findings of scientific research, the review of literature explicates the effectiveness and fruitfulness of employing visual organizers in EFL instruction. It includes the following five components. First, visual organizers…
Hayashi, Yutaka; Kinoshita, Masashi; Nakada, Mitsutoshi; Hamada, Jun-ichiro
2012-11-01
Disturbance of the arcuate fasciculus in the dominant hemisphere is thought to be associated with language-processing disorders, including conduction aphasia. Although the arcuate fasciculus can be visualized in vivo with diffusion tensor imaging (DTI) tractography, its involvement in functional processes associated with language has not been shown dynamically using DTI tractography. In the present study, to clarify the participation of the arcuate fasciculus in language functions, postoperative changes in the arcuate fasciculus detected by DTI tractography were evaluated chronologically in relation to postoperative changes in language function after brain tumor surgery. Preoperative and postoperative arcuate fasciculus area and language function were examined in 7 right-handed patients with a brain tumor in the left hemisphere located in proximity to part of the arcuate fasciculus. The arcuate fasciculus was depicted, and its area was calculated using DTI tractography. Language functions were measured using the Western Aphasia Battery (WAB). After tumor resection, visualization of the arcuate fasciculus was increased in 5 of the 7 patients, and the total WAB score improved in 6 of the 7 patients. The relative ratio of postoperative visualized area of the arcuate fasciculus to preoperative visualized area of the arcuate fasciculus was increased in association with an improvement in postoperative language function (p = 0.0039). The role of the left arcuate fasciculus in language functions can be evaluated chronologically in vivo by DTI tractography after brain tumor surgery. Because increased postoperative visualization of the fasciculus was significantly associated with postoperative improvement in language functions, the arcuate fasciculus may play an important role in language function, as previously thought. In addition, postoperative changes in the arcuate fasciculus detected by DTI tractography could represent a predictive factor for postoperative language-dependent functional outcomes in patients with brain tumor.
Thomas-Antérion, C; Truche, A; Sciéssère, K; Guyot, E; Hibert, O; Paris, N
2005-01-01
We studied 23 vascular or traumatic head injury subjects, five years after their injury. Neuropsychological testing included language tests, memory performance, frontal lobe tests and standard tests of intelligence (IQ). Behavior was evaluated with the Neuropsychiatric Inventory (NPI). Using a visual analogue scale, subjects performed a self-evaluation of their memory, language, attention, physical and thymic (mood) complaints. Neuropsychological assessment was heterogeneous but seemed to show severe impairment. Mean NPI score was 31.4: 91 percent of patients showed depression or anxiety and 78 percent of them showed irritability. Mean memory and thymic complaints were scored 6 on the visual analogue scale. Thymic complaint was not correlated with neuropsychological tests but with physical complaints. Thymic complaint was correlated with NPI score. Language complaint was correlated with VIQ, attentional complaint was correlated with PIQ, and memory complaint with memory tests. In a second part, we studied 21 patients again 6 months later and 14 patients 1 year later. Mean complaints were scored over 5 after 6 months and over 4 after 1 year. With neuropsychological remediation and social activities, memory complaints improved significantly after 6 months and attentional and thymic complaints after 1 year. The use of visual analogue scales appears to be feasible: patients were able to evaluate their difficulties. This could be useful for developing remediation programs and evaluating outcome.
The implement of Talmud property allocation algorithm based on graphic point-segment way
NASA Astrophysics Data System (ADS)
Cen, Haifeng
2017-04-01
Guided by the theory of the Talmud allocation scheme, the paper analyzes the algorithm's implementation process from the perspective of a graphic point-segment representation and designs a point-segment Talmud property allocation algorithm. The core of the allocation algorithm is then implemented in the Java language, and an Android application provides the visual interface.
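As a rough illustration of the scheme this entry builds on, the following Python sketch implements the classical Talmud (contested-garment) rule; it is not the paper's Java/Android code, and the bisection tolerance and example figures are purely illustrative.

def _cea(half_claims, amount, tol=1e-9):
    """Constrained equal awards on half-claims: find lambda such that
    sum(min(h, lambda)) == amount, by bisection."""
    lo, hi = 0.0, max(half_claims)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(min(h, mid) for h in half_claims) < amount:
            lo = mid
        else:
            hi = mid
    return [min(h, hi) for h in half_claims]

def talmud_allocation(estate, claims):
    """Divide 'estate' among creditors with 'claims' using the Talmud rule."""
    half = [c / 2 for c in claims]
    total = sum(claims)
    if estate <= total / 2:
        # Scarce estate: constrained equal awards on half-claims.
        return _cea(half, estate)
    # Abundant estate: losses (rather than awards) are equalized on half-claims.
    losses = _cea(half, total - estate)
    return [c - l for c, l in zip(claims, losses)]

# Classic Talmud example: an estate of 200 against claims of 100, 200, and 300
# should yield 50, 75, and 75.
print(talmud_allocation(200, [100, 200, 300]))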
ERIC Educational Resources Information Center
Sabet, Masoud Khalili; Shalmani, Hamed Babaie
2010-01-01
The present study sought to explore the effects of Multimedia Computer-Assisted Language Learning (MCALL) programs drawing on two different text modalities on the vocabulary retention of Iranian EFL learners. The two groups under study received treatment on vocabulary items under two multimedia conditions: The first group received treatment on the…
ERIC Educational Resources Information Center
Rosenblum, L. Penny; Smith, Derrick
2012-01-01
Introduction: This study gathered data on methods and materials that are used to teach the Nemeth braille code, computer braille, foreign-language braille, and music braille in 26 university programs in the United States and Canada that prepare teachers of students with visual impairments. Information about instruction in the abacus and the…
The Development of a Token Reinforcement System for a Specific Lesson. Technical Report #11.
ERIC Educational Resources Information Center
Au, Kathryn
This paper presents a brief description of a token reinforcement system developed for a kindergarten language class in the Kamehameha Early Education Program (KEEP). Visual reinforcers (colored plastic tabs) were placed next to the names of individual children (each time they made a correct response) on a large chart in the front of the room. Five…
Visual Compositions and Language Development.
ERIC Educational Resources Information Center
Sinatra, Richard
1981-01-01
Presents an approach for improving verbal development by using organized slide shows to produce visual/verbal interaction in the classroom. Suggests that the strength of visual involvement is that it provides a procedure for language discovery while achieving cooperation between right- and left-brain processing. (Author/BK)
A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality
NASA Astrophysics Data System (ADS)
Wang, Manyi; Liu, Chaoshun; Gao, Wei
2014-10-01
An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. After analyzing the architecture of WebGIS and Java Web, we first designed the overall system architecture, then specified the software and hardware environment and determined the system's main function modules. The visual system was built with the DIV + CSS layout method combined with JSP, JavaScript, and other languages in the Java programming environment. Moreover, the Struts, Spring, and Hibernate (SSH) frameworks were integrated into the system for easy maintenance and expansion. To provide mapping and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data and to ensure data security. In addition, the response data from the Web server are resampled to enable rapid visualization in the browser. The experimental results indicate that the system responds quickly to users' requests and efficiently returns accurate processing results.
SIMPSON: A General Simulation Program for Solid-State NMR Spectroscopy
NASA Astrophysics Data System (ADS)
Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.
2000-12-01
A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate a NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.
SIMPSON: A general simulation program for solid-state NMR spectroscopy
NASA Astrophysics Data System (ADS)
Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.
2011-12-01
A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate a NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.
Developing Matlab scripts for image analysis and quality assessment
NASA Astrophysics Data System (ADS)
Vaiopoulos, A. D.
2011-11-01
Image processing is a very helpful tool in many fields of modern science that involve digital image examination and interpretation. Processed images, however, often need to be correlated with the original image in order to ensure that the resulting image fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy, and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language that automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
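As a point of reference for the indices mentioned above, the Python/NumPy sketch below computes two of them, the correlation coefficient and histogram entropy, for a pair of images; it is not the author's Matlab scripts, and the synthetic images and bin count are illustrative assumptions.

import numpy as np

def correlation_coefficient(original, processed):
    # Pearson correlation between the two images, flattened to 1-D.
    a = original.astype(float).ravel()
    b = processed.astype(float).ravel()
    return np.corrcoef(a, b)[0, 1]

def entropy(image, bins=256):
    # Shannon entropy (in bits) of the image histogram.
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Synthetic stand-ins for an original band and a processed (e.g. fused) band.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64))
processed = np.clip(original + rng.normal(0, 10, size=(64, 64)), 0, 255)
print(correlation_coefficient(original, processed), entropy(original))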
Pedrami, Farnoush; Asenso, Pamela; Devi, Sachin
2016-08-25
Objective. To identify trends in pharmacy education during the last two decades using text mining. Methods. Articles published in the American Journal of Pharmaceutical Education (AJPE) in the past two decades were compiled in a database. Custom text analytics software was written in the Visual Basic programming language using the Visual Basic for Applications (VBA) editor of Excel 2007. The frequency of words appearing in article titles was calculated using the custom VBA software. Data were analyzed to identify the emerging trends in pharmacy education. Results. Three educational trends emerged: active learning, interprofessional, and cultural competency. Conclusion. The text analytics program successfully identified trends in article topics and may be a useful compass to predict the future course of pharmacy education.
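The following short Python sketch captures the gist of such a title word-frequency count; it is not the authors' VBA code, and the stop-word list and example titles are invented for the illustration.

from collections import Counter
import re

STOP_WORDS = {"the", "of", "a", "an", "and", "in", "for", "to", "on"}

def title_word_frequencies(titles):
    """Count content words across a list of article titles."""
    counts = Counter()
    for title in titles:
        words = re.findall(r"[a-z']+", title.lower())
        counts.update(w for w in words if w not in STOP_WORDS)
    return counts

titles = [
    "Active Learning in a Pharmacotherapy Course",
    "Cultural Competency Training for Pharmacy Students",
    "An Interprofessional Education Experience",
]
print(title_word_frequencies(titles).most_common(5))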
Learning abstract visual concepts via probabilistic program induction in a Language of Thought.
Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T
2017-11-01
The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one variant based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to people's hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization. Copyright © 2017 Elsevier B.V. All rights reserved.
van Weert, Julia C M; van Noort, Guda; Bol, Nadine; van Dijk, Liset; Tates, Kiek; Jansen, Jesse
2011-09-01
This study was designed to investigate the effects of visual cues and language complexity on satisfaction and information recall using a personalised website for lung cancer patients. In addition, age effects were investigated. An experiment using a 2 (complex vs. non-complex language)×3 (text only vs. photograph vs. drawing) factorial design was conducted. In total, 200 respondents without cancer were exposed to one of the six conditions. Respondents were more satisfied with the comprehensibility of both websites when they were presented with a visual cue. A significant interaction effect was found between language complexity and photograph use such that satisfaction with comprehensibility improved when a photograph was added to the complex language condition. Next, an interaction effect was found between age and satisfaction, which indicates that adding a visual cue is more important for older adults than younger adults. Finally, respondents who were exposed to a website with less complex language showed higher recall scores. The use of visual cues enhances satisfaction with the information presented on the website, and the use of non-complex language improves recall. The results of the current study can be used to improve computer-based information systems for patients. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Hsiao, Janet H.; Lam, Sze Man
2013-01-01
Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…
Visualizing Syllables: Real-Time Computerized Feedback within a Speech-Language Intervention
ERIC Educational Resources Information Center
DeThorne, Laura; Aparicio Betancourt, Mariana; Karahalios, Karrie; Halle, Jim; Bogue, Ellen
2015-01-01
Computerized technologies now offer unprecedented opportunities to provide real-time visual feedback to facilitate children's speech-language development. We employed a mixed-method design to examine the effectiveness of two speech-language interventions aimed at facilitating children's multisyllabic productions: one incorporated a novel…
Generalizing the extensibility of a dynamic geometry software
NASA Astrophysics Data System (ADS)
Herceg, Đorđe; Radaković, Davorka; Herceg, Dejana
2012-09-01
Plug-and-play visual components in Dynamic Geometry Software (DGS) enable the development of visually attractive, rich, and highly interactive dynamic drawings. We are developing SLGeometry, a DGS that contains a custom programming language, a computer algebra system (CAS engine), and a graphics subsystem. The basic extensibility framework of SLGeometry supports dynamic addition of new functions from attribute-annotated classes that perform runtime metadata registration in code. We present a general plug-in framework for dynamically importing arbitrary Silverlight user interface (UI) controls into SLGeometry at runtime. The CAS engine maintains a metadata store that describes each imported visual component and enables two-way communication between the expressions stored in the engine and the UI controls on the screen.
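As a loose analogue of the registration mechanism described above, the Python sketch below records a plug-in function together with its metadata via a decorator; the actual framework is C#/Silverlight with attribute-annotated classes, and every name here is invented for the illustration.

# Registry mapping function names to metadata the engine can query at runtime.
FUNCTION_REGISTRY = {}

def register_function(name, arity):
    """Record metadata about a plug-in component so an engine can look it up."""
    def decorator(cls):
        FUNCTION_REGISTRY[name] = {"arity": arity, "implementation": cls}
        return cls
    return decorator

@register_function(name="Slider", arity=3)
class SliderControl:
    def evaluate(self, minimum, maximum, value):
        # Two-way link: the engine would read the control's current value here.
        return max(minimum, min(maximum, value))

print(FUNCTION_REGISTRY["Slider"]["arity"])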
A visual interface for generic message translation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blattner, M.M.; Kou, L.T.; Carlson, J.W.
1988-06-21
This paper is concerned with the translation of data structures we call messages. Messages are an example of a type of data structure encountered in generic data translation. Our objective is to provide a system that the nonprogrammer can use to specify the nature of translations from one type to another. For this reason we selected a visual interface that uses interaction techniques that do not require a knowledge of programming or command languages. The translator must accomplish two tasks: create a mapping between fields in different message types that specifies which fields have similar semantic content, and reformat or translate data specifications within those fields. The translations are accomplished with appropriate, but different, visual metaphors. 14 refs., 4 figs.
HBNG: Graph theory based visualization of hydrogen bond networks in protein structures.
Tiwari, Abhishek; Tiwari, Vivek
2007-07-09
HBNG is a graph-theory-based tool for visualization of hydrogen bond networks in 2D. Digraphs generated by HBNG facilitate visualization of cooperativity and anticooperativity chains and rings in protein structures. HBNG takes hydrogen bond list files (output from HBAT, HBEXPLORE, HBPLUS, and STRIDE) as input, generates a DOT language script, and constructs digraphs using the freeware AT&T Graphviz tool. HBNG is useful for enumerating favorable topologies of hydrogen bond networks in protein structures and for determining the effect of cooperativity and anticooperativity on protein stability and folding. HBNG can be applied to protein structure comparison and to the identification of secondary structural regions in protein structures. The program is available from the authors for non-commercial purposes.
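The core step described above, turning a hydrogen bond list into a DOT digraph that Graphviz can render, can be sketched in a few lines of Python; this is not the HBNG program itself, and the residue labels below are hypothetical.

def hbonds_to_dot(hbonds, graph_name="hbond_network"):
    """Emit a DOT digraph with one edge per donor -> acceptor hydrogen bond."""
    lines = [f"digraph {graph_name} {{"]
    for donor, acceptor in hbonds:
        lines.append(f'    "{donor}" -> "{acceptor}";')
    lines.append("}")
    return "\n".join(lines)

# A real input would come from HBPLUS/HBAT-style output files.
bonds = [("SER65:OG", "GLU222:OE1"), ("THR203:OG1", "HIS148:ND1")]
print(hbonds_to_dot(bonds))
# The printed script can then be rendered with, e.g., dot -Tpng network.dot -o network.png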
NASA Astrophysics Data System (ADS)
Ye, Z.; Xiang, H.
2014-04-01
The paper discusses the basic principles and problem solutions encountered during the design and implementation of a mobile GIS system; based on the research results, we developed the iOS-based General Provincial Situation Visualization System of Shandong Province. The system is developed in the Objective-C programming language and uses the ArcGIS Runtime SDK for iOS as the development tool to call the "World-map Shandong" services. The system is currently available for download in the App Store and has been chosen as a typical application case of the ESRI China ArcGIS API for iOS.
Guigas, Bruno
2017-09-01
SpecPad is a new device-independent software program for the visualization and processing of one-dimensional and two-dimensional nuclear magnetic resonance (NMR) time domain (FID) and frequency domain (spectrum) data. It is the result of a project to investigate whether the novel programming language Dart, in combination with HTML5 Web technology, forms a suitable basis for writing NMR data evaluation software that runs on modern computing devices such as Android, iOS, and Windows tablets as well as on Windows, Linux, and Mac OS X desktop PCs and notebooks. Another topic of interest is whether this technique also effectively supports the required sophisticated graphical and computational algorithms. SpecPad is device-independent because Dart's compiled executable code is JavaScript and can, therefore, be run by the browsers of PCs and tablets. Because of HTML5 browser cache technology, SpecPad may be operated off-line. Network access is only required during data import or export, e.g. via a Cloud service, or for software updates. A professional and easy-to-use graphical user interface consistent across all hardware platforms supports touch screen features on mobile devices for zooming and panning and for NMR-related interactive operations such as phasing, integration, peak picking, or atom assignment. Copyright © 2017 John Wiley & Sons, Ltd.
Python for large-scale electrophysiology.
Spacek, Martin; Blanche, Tim; Swindale, Nicholas
2008-01-01
Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.
Audio-visual temporal perception in children with restored hearing.
Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David
2017-05-01
It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.
Applying Pragmatics Principles for Interaction with Visual Analytics.
Hoque, Enamul; Setlur, Vidya; Tory, Melanie; Dykeman, Isaac
2018-01-01
Interactive visual data analysis is most productive when users can focus on answering the questions they have about their data, rather than focusing on how to operate the interface to the analysis tool. One viable approach to engaging users in interactive conversations with their data is a natural language interface to visualizations. These interfaces have the potential to be both more expressive and more accessible than other interaction paradigms. We explore how principles from language pragmatics can be applied to the flow of visual analytical conversations, using natural language as an input modality. We evaluate the effectiveness of pragmatics support in our system Evizeon, and present design considerations for conversation interfaces to visual analytics tools.
ERIC Educational Resources Information Center
Shepherd, Terry R.
The author, a university professor, describes his experiences in teaching language to his autistic-like son who also has visual impairments. "Experience Language," an adaptation of Language Experience Approach (LEA) is described, and its contributions to the child's reading, writing, and talking are noted. Suggestions are made on the importance of…
ERIC Educational Resources Information Center
Tadic, Valerija; Pring, Linda; Dale, Naomi
2013-01-01
Background: Lack of sight compromises insight into other people's mental states. Little is known about the role of maternal language in assisting the development of mental state language in children with visual impairment (VI). Aims: To investigate mental state language strategies of mothers of school-aged children with VI and to compare…
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1985-01-01
A collection of presentation visuals associated with the companion report entitled KARL: A Knowledge-Assisted Retrieval Language is presented. Information is given on data retrieval, natural language database front ends, generic design objectives, processing capabilities, and the query processing cycle.
ERIC Educational Resources Information Center
Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; He, Qinghua; Zhang, Mingxia; Xue, Feng; Chen, Chuansheng; Dong, Qi
2013-01-01
The laterality difference in the occipitotemporal region between Chinese (bilaterality) and alphabetic languages (left laterality) has been attributed to their difference in visual appearance. However, these languages also differ in orthographic transparency. To disentangle the effect of orthographic transparency from visual appearance, we trained…
NASA Astrophysics Data System (ADS)
Wyatt, R.
2014-01-01
There is a visual language present in all images and this article explores the meaning of these languages, their importance, and what it means for the visualisation of science. Do we, as science communicators, confuse and confound our audiences by assuming the visual vernacular of the scientist or isolate our scientific audience by ignoring it?
Structured Natural-Language Descriptions for Semantic Content Retrieval of Visual Materials.
ERIC Educational Resources Information Center
Tam, A. M.; Leung, C. H. C.
2001-01-01
Proposes a structure for natural language descriptions of the semantic content of visual materials that requires descriptions to be (modified) keywords, phrases, or simple sentences, with components that are grammatical relations common to many languages. This structure makes it easy to implement a collection's descriptions as a relational…
Willems, Roel M; Clevis, Krien; Hagoort, Peter
2011-09-01
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
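To make the antithetic-variates idea concrete, here is a hedged Python sketch of the technique on a toy expectation; the published diabetes model is far more involved and written in VBA/C++, so the outcome function and sample sizes below are purely illustrative.

import numpy as np

def mc_plain(f, n, rng):
    # Ordinary Monte Carlo estimate of E[f(Z)], Z ~ N(0, 1).
    z = rng.standard_normal(n)
    return f(z).mean()

def mc_antithetic(f, n, rng):
    # Same number of function evaluations, but draws are paired as (z, -z).
    z = rng.standard_normal(n // 2)
    return (0.5 * (f(z) + f(-z))).mean()

f = lambda z: np.exp(0.1 * z)          # stand-in for a QALY-type outcome
rng = np.random.default_rng(42)
plain = [mc_plain(f, 1000, rng) for _ in range(200)]
anti = [mc_antithetic(f, 1000, rng) for _ in range(200)]
# The antithetic estimator typically shows a markedly smaller variance.
print(np.var(plain), np.var(anti))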
Data acquisition and real-time control using spreadsheets: interfacing Excel with external hardware.
Aliane, Nourdine
2010-07-01
Spreadsheets have become a popular computational tool and a powerful platform for performing engineering calculations. Moreover, spreadsheets include a macro language, which permits the inclusion of standard computer code in worksheets and thereby enables developers to greatly extend spreadsheets' capabilities by designing specific add-ins. This paper describes how to use Excel spreadsheets in conjunction with the Visual Basic for Applications (VBA) programming language to perform data acquisition and real-time control. The paper then presents two Excel applications with interactive user interfaces developed for laboratory demonstrations and experiments in an introductory course in control. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Roul, Sushanta Kumar
2014-01-01
Preschool may not be a place where formal education is imparted but yes, it definitely is a place where children have their first taste of independence. Preschool education is the provision of education for children before the commencement of statutory education usually between the ages of 2 and 5. Thus the purposes of the study were: to study the…
Language Proficiency Modulates the Recruitment of Non-Classical Language Areas in Bilinguals
Leonard, Matthew K.; Torres, Christina; Travis, Katherine E.; Brown, Timothy T.; Hagler, Donald J.; Dale, Anders M.; Elman, Jeffrey L.; Halgren, Eric
2011-01-01
Bilingualism provides a unique opportunity for understanding the relative roles of proficiency and order of acquisition in determining how the brain represents language. In a previous study, we combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine the spatiotemporal dynamics of word processing in a group of Spanish-English bilinguals who were more proficient in their native language. We found that from the earliest stages of lexical processing, words in the second language evoke greater activity in bilateral posterior visual regions, while activity to the native language is largely confined to classical left hemisphere fronto-temporal areas. In the present study, we sought to examine whether these effects relate to language proficiency or order of language acquisition by testing Spanish-English bilingual subjects who had become dominant in their second language. Additionally, we wanted to determine whether activity in bilateral visual regions was related to the presentation of written words in our previous study, so we presented subjects with both written and auditory words. We found greater activity for the less proficient native language in bilateral posterior visual regions for both the visual and auditory modalities, which started during the earliest word encoding stages and continued through lexico-semantic processing. In classical left fronto-temporal regions, the two languages evoked similar activity. Therefore, it is the lack of proficiency rather than secondary acquisition order that determines the recruitment of non-classical areas for word processing. PMID:21455315
Impact of Language on Development of Auditory-Visual Speech Perception
ERIC Educational Resources Information Center
Sekiyama, Kaoru; Burnham, Denis
2008-01-01
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…
VISUAL AIDS HANDBOOK FOR FOREIGN LANGUAGE TEACHERS.
ERIC Educational Resources Information Center
GARIBALDI, VIRGINIA; STRASHEIM, LORRAINE A.
TEACHERS ARE SHOWN HOW TO CONSTRUCT AND USE THEIR OWN VISUAL AIDS FOR ILLUSTRATING USEFUL BUT DIFFICULT EXPRESSIONS COMMON TO ALL LANGUAGES. SUCH SPECIFIC AIDS AS PROPS, REALIA, FLASHCARDS, CHARTS, FLANNEL AND MAGNETIC BOARDS, POCKET CHARTS, PUPPETS, DRILL CUING DEVICES, AND CULTURALLY ORIENTED VISUAL AIDS ARE DESCRIBED. LISTS OF PROFESSIONAL…
Slipped Lips: Onset Asynchrony Detection of Auditory-Visual Language in Autism
ERIC Educational Resources Information Center
Grossman, Ruth B.; Schneps, Matthew H.; Tager-Flusberg, Helen
2009-01-01
Background: It has frequently been suggested that individuals with autism spectrum disorder (ASD) have deficits in auditory-visual (AV) sensory integration. Studies of language integration have mostly used non-word syllables presented in congruent and incongruent AV combinations and demonstrated reduced influence of visual speech in individuals…
Cognitive Task Analysis of the Battalion Level Visualization Process
2007-10-01
The elements of the visualization space are identified using commonly understood doctrinal language and mnemonic devices. ... Eleven skill areas were identified as potential focal points for future training development. The findings were used to design and develop exemplar...
ERIC Educational Resources Information Center
Stauffer, Linda K.
2010-01-01
Given the visual-gestural nature of ASL it is reasonable to assume that visualization abilities may be one predictor of aptitude for learning ASL. This study tested a hypothesis that visualization abilities are a foundational aptitude for learning a signed language and that measurements of these skills will increase as students progress from…
Are Deaf Students Visual Learners?
Marschark, Marc; Morrison, Carolyn; Lukomski, Jennifer; Borgna, Georgianna; Convertino, Carol
2013-01-01
It is frequently assumed that by virtue of their hearing losses, deaf students are visual learners. Deaf individuals have some visual-spatial advantages relative to hearing individuals, but most have been linked to use of sign language rather than to auditory deprivation. How such cognitive differences might affect academic performance has been investigated only rarely. This study examined relations among deaf college students' language and visual-spatial abilities, mathematics problem solving, and hearing thresholds. Results extended some previous findings and clarified others. Contrary to what might be expected, hearing students exhibited visual-spatial skills equal to or better than those of deaf students. Scores on a Spatial Relations task were associated with better mathematics problem solving. Relations among the several variables, however, suggested that deaf students are no more likely to be visual learners than hearing students and that their visual-spatial skill may be related more to their hearing than to sign language skills. PMID:23750095
Kukona, Anuenue; Tabor, Whitney
2011-01-01
The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355
Symbolic Play Connects to Language through Visual Object Recognition
ERIC Educational Resources Information Center
Smith, Linda B.; Jones, Susan S.
2011-01-01
Object substitutions in play (e.g. using a box as a car) are strongly linked to language learning and their absence is a diagnostic marker of language delay. Classic accounts posit a symbolic function that underlies both words and object substitutions. Here we show that object substitutions depend on developmental changes in visual object…
ERIC Educational Resources Information Center
Altvater-Mackensen, Nicole; Grossmann, Tobias
2015-01-01
Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…
ERIC Educational Resources Information Center
Yuan, Yifeng; Shen, Huizhong
2016-01-01
This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…
Spatial Language, Visual Attention, and Perceptual Simulation
ERIC Educational Resources Information Center
Coventry, Kenny R.; Lynott, Dermot; Cangelosi, Angelo; Monrouxe, Lynn; Joyce, Dan; Richardson, Daniel C.
2010-01-01
Spatial language descriptions, such as "The bottle is over the glass", direct the attention of the hearer to particular aspects of the visual world. This paper asks how they do so, and what brain mechanisms underlie this process. In two experiments employing behavioural and eye tracking methodologies we examined the effects of spatial language on…
ERIC Educational Resources Information Center
Moore, Vanessa; McConachie, Helen
This study investigated variables that might be associated with outcome differences in language development of 10 children (ages 10-20 months) with blindness or severe visual impairments, attending a developmental vision clinic in southern England. Subjects' early patterns of expressive language development were examined and related to observed…
Painting in Tongues: Faith-Based Languages of Formalist Art
ERIC Educational Resources Information Center
Moore, Kevin Z.
2007-01-01
Cephalopods have a visual language that may be considered artful; they flash color-forms on their hides to communicate intents and emotions. Formalist inspired artists and their critic-expositors describe abstract pattern-painting as though it were a visual language equal to that of the cephalopod. In this article, the author argues that although…
ERIC Educational Resources Information Center
Hirschfeld, Gerrit; Zwitserlood, Pienie; Dobel, Christian
2011-01-01
We investigated whether and when information conveyed by spoken language impacts on the processing of visually presented objects. In contrast to traditional views, grounded-cognition posits direct links between language comprehension and perceptual processing. We used a magnetoencephalographic cross-modal priming paradigm to disentangle these…
The Cooperate Assistive Teamwork Environment for Software Description Languages.
Groenda, Henning; Seifermann, Stephan; Müller, Karin; Jaworek, Gerhard
2015-01-01
Versatile description languages such as the Unified Modeling Language (UML) are commonly used in software engineering across different application domains in theory and practice. They often use graphical notations and leverage visual memory for expressing complex relations. Those notations are hard to access for people with visual impairment and impede their smooth inclusion in an engineering team. Existing approaches provide textual notations but require manual synchronization between the notations. This paper presents requirements for an accessible and language-aware team work environment as well as our plan for the assistive implementation of Cooperate. An industrial software engineering team consisting of people with and without visual impairment will evaluate the implementation.
Comprehension of Spacecraft Telemetry Using Hierarchical Specifications of Behavior
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Joshi, Rajeev
2014-01-01
A key challenge in operating remote spacecraft is that ground operators must rely on the limited visibility available through spacecraft telemetry in order to assess spacecraft health and operational status. We describe a tool for processing spacecraft telemetry that allows ground operators to impose structure on received telemetry in order to achieve a better comprehension of system state. A key element of our approach is the design of a domain-specific language that allows operators to express models of expected system behavior using partial specifications. The language allows behavior specifications with data fields, similar to other recent runtime verification systems. What is notable about our approach is the ability to develop hierarchical specifications of behavior. The language is implemented as an internal DSL in the Scala programming language that synthesizes rules from patterns of specification behavior. The rules are automatically applied to received telemetry and the inferred behaviors are available to ground operators using a visualization interface that makes it easier to understand and track spacecraft state. We describe initial results from applying our tool to telemetry received from the Curiosity rover currently roving the surface of Mars, where the visualizations are being used to trend subsystem behaviors, in order to identify potential problems before they happen. However, the technology is completely general and can be applied to any system that generates telemetry such as event logs.
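As a toy illustration of the kind of behavior rule such a specification language can express, the Python sketch below flags command dispatches that are not completed within a timeout; the actual tool is an internal Scala DSL, and the event names and fields here are invented.

class DispatchCompleteRule:
    """Flag any command dispatch that is not completed within 'timeout' seconds."""
    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.pending = {}          # command name -> dispatch time

    def on_event(self, event):
        kind, name, t = event["kind"], event["name"], event["time"]
        if kind == "DISPATCH":
            self.pending[name] = t
        elif kind == "COMPLETE" and name in self.pending:
            if t - self.pending.pop(name) > self.timeout:
                return f"warning: {name} completed late at t={t}"
        return None

rule = DispatchCompleteRule(timeout=10.0)
telemetry = [
    {"kind": "DISPATCH", "name": "TAKE_IMAGE", "time": 0.0},
    {"kind": "COMPLETE", "name": "TAKE_IMAGE", "time": 25.0},
]
for event in telemetry:
    message = rule.on_event(event)
    if message:
        print(message)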
Placement from community-based mental retardation programs: how well do clients do?
Schalock, R L; Harper, R S
1978-11-01
Mentally retarded clients (N = 131) placed during a 2-year period from either an independent living or competitive employment training program were evaluated as to placement success. Thirteen percent returned to the training program. Successful independent living placement was related to intelligence and demonstrated skills in symbolic operations, personal maintenance, clothing care and use, socially appropriate behavior, and functional academics. Successful employment was related to sensorimotor, visual-auditory processing, language, and symbolic-operations skills. Major reasons for returning from a job to the competitive employment training program included inappropriate behavior or need for more training; returning from community living placement was related to money management, apartment cleanliness, social behavior, and meal preparation.
"Visual" Cortex of Congenitally Blind Adults Responds to Syntactic Movement.
Lane, Connor; Kanjlia, Shipra; Omaki, Akira; Bedny, Marina
2015-09-16
Human cortex is comprised of specialized networks that support functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, "visual" cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language-syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, "visual" cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development. Human cortex is made up of specialized regions that perform different functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity show that cortical areas can change function from one sensory modality to another. Here we demonstrate that input during development can alter cortical function even more dramatically. In blindness a subset of "visual" areas becomes specialized for language processing. Crucially, we find that the same "visual" areas respond to a highly specialized and uniquely human aspect of language-syntactic movement. These data suggest that human cortex has broad functional capacity during development, and input plays a major role in determining functional specialization. Copyright © 2015 the authors 0270-6474/15/3512859-10$15.00/0.
From Petascale to Exascale: Eight Focus Areas of R&D Challenges for HPC Simulation Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springmeyer, R; Still, C; Schulz, M
2011-03-17
Programming models bridge the gap between the underlying hardware architecture and the supporting layers of software available to applications. Programming models are different from both programming languages and application programming interfaces (APIs). Specifically, a programming model is an abstraction of the underlying computer system that allows for the expression of both algorithms and data structures. In comparison, languages and APIs provide implementations of these abstractions and allow the algorithms and data structures to be put into practice - a programming model exists independently of the choice of both the programming language and the supporting APIs. Programming models are typically focused on achieving increased developer productivity, performance, and portability to other system designs. The rapidly changing nature of processor architectures and the complexity of designing an exascale platform provide significant challenges for these goals. Several other factors are likely to impact the design of future programming models. In particular, the representation and management of increasing levels of parallelism, concurrency and memory hierarchies, combined with the ability to maintain a progressive level of interoperability with today's applications are of significant concern. Overall the design of a programming model is inherently tied not only to the underlying hardware architecture, but also to the requirements of applications and libraries including data analysis, visualization, and uncertainty quantification. Furthermore, the successful implementation of a programming model is dependent on exposed features of the runtime software layers and features of the operating system. Successful use of a programming model also requires effective presentation to the software developer within the context of traditional and new software development tools. Consideration must also be given to the impact of programming models on both languages and the associated compiler infrastructure. Exascale programming models must reflect several, often competing, design goals. These design goals include desirable features such as abstraction and separation of concerns. However, some aspects are unique to large-scale computing. For example, interoperability and composability with existing implementations will prove critical. In particular, performance is the essential underlying goal for large-scale systems. A key evaluation metric for exascale models will be the extent to which they support these goals rather than merely enable them.
Selective transfer of visual working memory training on Chinese character learning.
Opitz, Bertram; Schneiders, Julia A; Krick, Christoph M; Mecklinger, Axel
2014-01-01
Previous research has shown a systematic relationship between phonological working memory capacity and second language proficiency for alphabetic languages. However, little is known about the impact of working memory processes on second language learning in a non-alphabetic language such as Mandarin Chinese. Due to the greater complexity of the Chinese writing system we expect that visual working memory rather than phonological working memory exerts a unique influence on learning Chinese characters. This issue was explored in the present experiment by comparing visual working memory training with an active (auditory working memory training) control condition and a passive, no training control condition. Training induced modulations in language-related brain networks were additionally examined using functional magnetic resonance imaging in a pretest-training-posttest design. As revealed by pre- to posttest comparisons and analyses of individual differences in working memory training gains, visual working memory training led to positive transfer effects on visual Chinese vocabulary learning compared to both control conditions. In addition, we found sustained activation after visual working memory training in the (predominantly visual) left infero-temporal cortex that was associated with behavioral transfer. In the control conditions, activation either increased (active control condition) or decreased (passive control condition) without reliable behavioral transfer effects. This suggests that visual working memory training leads to more efficient processing and more refined responses in brain regions involved in visual processing. Furthermore, visual working memory training boosted additional activation in the precuneus, presumably reflecting mental image generation of the learned characters. We, therefore, suggest that the conjoint activity of the mid-fusiform gyrus and the precuneus after visual working memory training reflects an interaction of working memory and imagery processes with complex visual stimuli that fosters the coherent synthesis of a percept from a complex visual input in service of enhanced Chinese character learning. © 2013 Published by Elsevier Ltd.
SIMPSON: a general simulation program for solid-state NMR spectroscopy.
Bak, M; Rasmussen, J T; Nielsen, N C
2000-12-01
A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate a NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments. Copyright 2000 Academic Press.
Integrating mechanisms of visual guidance in naturalistic language production.
Coco, Moreno I; Keller, Frank
2015-05-01
Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance (perceptual, conceptual, and structural) interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
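One of the measures named above, the entropy of an attentional landscape, can be sketched as follows in Python; the authors' exact landscape construction is not reproduced here, so the grid size and the simple histogram-based formulation are illustrative assumptions.

import numpy as np

def attention_entropy(fixations, grid=(16, 16)):
    """Shannon entropy (bits) of fixation positions binned on a grid; fixations
    are (x, y) pairs in normalized [0, 1) screen coordinates."""
    counts, _, _ = np.histogram2d(
        [x for x, _ in fixations], [y for _, y in fixations],
        bins=grid, range=[[0, 1], [0, 1]],
    )
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))    # higher values = attention spread more widely

focused = [(0.5 + 0.01 * i, 0.5) for i in range(10)]
scattered = [(i / 10, (i * 3 % 10) / 10) for i in range(10)]
print(attention_entropy(focused), attention_entropy(scattered))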
Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter
2017-01-01
Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations.
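The idea of composing stimuli from basic components and auto-generating the GPU shader code can be sketched in a few lines of Python; the component names and the generated GLSL below are invented for illustration and are not GEARS' actual component library or code generator:

# Each stimulus component contributes a GLSL expression for intensity at (x, y, t).
COMPONENTS = {
    "fullfield": "1.0",
    "sine_grating": "0.5 + 0.5 * sin(6.2832 * (10.0 * x - 2.0 * t))",
    "gaussian_spot": "exp(-((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5)) / 0.01)",
}

def build_fragment_shader(parts):
    """Multiply the selected component expressions into one fragment shader."""
    expr = " * ".join(COMPONENTS[p] for p in parts)
    return (
        "uniform float t;\n"
        "varying vec2 uv;\n"
        "void main() {\n"
        "    float x = uv.x, y = uv.y;\n"
        "    float i = " + expr + ";\n"
        "    gl_FragColor = vec4(i, i, i, 1.0);\n"
        "}\n"
    )

print(build_fragment_shader(["sine_grating", "gaussian_spot"]))

The point is only that a high-level Python description of a stimulus can be translated mechanically into shader code that runs on the GPU, which is the design the abstract describes.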
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Tan, J; Kavanaugh, J
Purpose: Radiotherapy (RT) contours delineated either manually or semiautomatically require verification before clinical usage. Manual evaluation is very time consuming. A new integrated software tool using supervised pattern contour recognition was thus developed to facilitate this process. Methods: The contouring tool was developed using the object-oriented programming language C# and application programming interfaces, e.g. the visualization toolkit (VTK). The C# language served as the tool design basis. The Accord.Net scientific computing libraries were utilized for the required statistical data processing and pattern recognition, while the VTK was used to build and render 3-D mesh models from critical RT structures in real time and 360° visualization. Principal component analysis (PCA) was used for system self-updating of geometry variations of normal structures based on physician-approved RT contours as a training dataset. The in-house designed supervised PCA-based contour recognition method was used for automatically evaluating contour normality/abnormality. The function for reporting the contour evaluation results was implemented using C# and Windows Form Designer. Results: The software input was RT simulation images and RT structures from commercial clinical treatment planning systems. Several abilities were demonstrated: automatic assessment of RT contours, file loading/saving of various modality medical images and RT contours, and generation/visualization of 3-D images and anatomical models. Moreover, it supported the 360° rendering of the RT structures in a multi-slice view, which allows physicians to visually check and edit abnormally contoured structures. Conclusion: This new software integrates the supervised learning framework with image processing and graphical visualization modules for RT contour verification. This tool has great potential for facilitating treatment planning with the assistance of an automatic contour evaluation module in avoiding unnecessary manual verification for physicians/dosimetrists. In addition, its nature as a compact and stand-alone tool allows for future extensibility to include additional functions for physicians' clinical needs.
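A minimal sketch of the supervised PCA idea (in Python with scikit-learn rather than the C#/Accord.Net stack the tool uses): fit PCA on physician-approved contours encoded as fixed-length feature vectors, then flag a new contour whose reconstruction error exceeds a threshold learned from the training set. The feature encoding and the simple three-sigma threshold rule are assumptions made for illustration:

import numpy as np
from sklearn.decomposition import PCA

def fit_contour_model(train_vectors, n_components=5):
    """train_vectors: (n_contours, n_features) array of approved contours."""
    pca = PCA(n_components=n_components).fit(train_vectors)
    recon = pca.inverse_transform(pca.transform(train_vectors))
    errors = np.linalg.norm(train_vectors - recon, axis=1)
    threshold = errors.mean() + 3 * errors.std()   # simple normality cutoff
    return pca, threshold

def is_abnormal(pca, threshold, contour_vector):
    """Flag a contour whose PCA reconstruction error exceeds the learned cutoff."""
    recon = pca.inverse_transform(pca.transform(contour_vector[None, :]))[0]
    return np.linalg.norm(contour_vector - recon) > threshold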
NASA Astrophysics Data System (ADS)
Benakli, Nadia; Kostadinov, Boyan; Satyanarayana, Ashwin; Singh, Satyanand
2017-04-01
The goal of this paper is to promote computational thinking among mathematics, engineering, science and technology students, through hands-on computer experiments. These activities have the potential to empower students to learn, create and invent with technology, and they engage computational thinking through simulations, visualizations and data analysis. We present nine computer experiments and suggest a few more, with applications to calculus, probability and data analysis. We use the free (open-source) statistical programming language R. Our goal is to give a taste of what R offers rather than to present a comprehensive tutorial on the R language. In our experience, these kinds of interactive computer activities can be easily integrated into a smart classroom. Furthermore, these activities do tend to keep students motivated and actively engaged in the process of learning, problem solving and developing a better intuition for understanding complex mathematical concepts.
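The paper's experiments are written in R; as one example of the kind of hands-on activity described, here is an analogous Monte Carlo estimate of pi written in Python rather than R (sample sizes are arbitrary):

import numpy as np

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(size=n_samples), rng.uniform(size=n_samples)
    inside = (x * x + y * y) <= 1.0    # fraction landing inside the quarter circle
    return 4.0 * inside.mean()

for n in (10**2, 10**4, 10**6):
    print(n, estimate_pi(n))

Watching the estimate converge as the sample size grows is exactly the sort of simulation-plus-visualization exercise the abstract advocates.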
Toward an Understanding of Language Symptomatology of Visually-Impaired Children.
ERIC Educational Resources Information Center
Prizant, Barry M.
The paper examines theoretical issues regarding the symptomatology of echolalia in the language of visually impaired children. Literature on echolalia is reviewed from a variety of perspectives and clinical work and research with visual impairment and with autism is discussed. Problems of definition are cited, and explanations for occurrence of…
Picturing German: Teaching Language and Literature through Visual Art
ERIC Educational Resources Information Center
Knapp, Thyra E.
2012-01-01
This article examines the importance of visual culture with regard to its pedagogical applications in the German language classroom. I begin by outlining the benefits and concerns associated with making visual art a part of the curriculum. Next, practical ideas are presented for using paintings in beginning, intermediate, and advanced courses.…
Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities
ERIC Educational Resources Information Center
Norrix, Linda W.; Plante, Elena; Vance, Rebecca
2006-01-01
Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…
Pijnacker, Judith; Vervloed, Mathijs P J; Steenbergen, Bert
2012-11-01
Children with congenital visual impairment have been reported to be delayed in theory of mind development. So far, research focused on first-order theory of mind, and included mainly blind children, whereas the majority of visually impaired children is not totally blind. The present study set out to explore whether children with a broader range of congenital visual impairments have a delay in more advanced theory of mind understanding, in particular second-order theory of mind (i.e. awareness that other people have beliefs about beliefs) and non-literal language (e.g. irony or figure of speech). Twenty-four children with congenital visual impairment and 24 typically developing sighted children aged between 6 and 13 were included. All children were presented with a series of stories involving understanding of theory of mind and non-literal language. When compared with sighted children of similar age and verbal intelligence, performance of children with congenital visual impairment on advanced theory of mind and non-literal stories was alike. The ability to understand the motivations behind non-literal language was associated with age, verbal intelligence and theory of mind skills, but was not associated with visual ability.
Discussion on the 3D visualizing of 1:200 000 geological map
NASA Astrophysics Data System (ADS)
Wang, Xiaopeng
2018-01-01
Using United States National Aeronautics and Space Administration Shuttle Radar Topography Mission (SRTM) terrain data as the digital elevation model (DEM), overlaying the scanned 1:200 000 scale geological map, and programming with Microsoft Direct 3D in the C# language, the author realized a three-dimensional visualization of the standard division geological map. Users can inspect the regional geology from an arbitrary angle, rotate and roam the view, and examine the comprehensive stratigraphic column, map sections, and legends at any moment. This provides an intuitive analysis tool for geological practitioners to carry out structural analysis with the aid of landforms, lay out field exploration routes, etc.
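A rough Python/matplotlib analogue of the approach (the paper itself uses C# and Direct 3D): treat the SRTM elevations as a regular grid and drape the scanned map over it as per-vertex colours. The file names and the assumption that the map has already been resampled to the DEM grid are placeholders:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs: an SRTM-derived elevation grid and the scanned geological
# map resampled to the same grid shape as an RGB array (real file I/O omitted).
dem = np.load("dem_grid.npy")                  # shape (rows, cols), placeholder file
map_rgb = np.load("geomap_rgb.npy") / 255.0    # shape (rows, cols, 3), placeholder file

rows, cols = dem.shape
x, y = np.meshgrid(np.arange(cols), np.arange(rows))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Drape the scanned map over the relief by passing it as face colours
ax.plot_surface(x, y, dem, rstride=1, cstride=1,
                facecolors=map_rgb, linewidth=0, shade=False)
plt.show()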
Legacy model integration for enhancing hydrologic interdisciplinary research
NASA Astrophysics Data System (ADS)
Dozier, A.; Arabi, M.; David, O.
2013-12-01
Many challenges are introduced to interdisciplinary research in and around the hydrologic science community due to advances in computing technology and modeling capabilities in different programming languages, across different platforms and frameworks by researchers in a variety of fields with a variety of experience in computer programming. Many new hydrologic models as well as optimization, parameter estimation, and uncertainty characterization techniques are developed in scripting languages such as Matlab, R, Python, or in newer languages such as Java and the .Net languages, whereas many legacy models have been written in FORTRAN and C, which complicates inter-model communication for two-way feedbacks. However, most hydrologic researchers and industry personnel have little knowledge of the computing technologies that are available to address the model integration process. Therefore, the goal of this study is to address these new challenges by utilizing a novel approach based on a publish-subscribe-type system to enhance modeling capabilities of legacy socio-economic, hydrologic, and ecologic software. Enhancements include massive parallelization of executions and access to legacy model variables at any point during the simulation process by another program without having to compile all the models together into an inseparable 'super-model'. Thus, this study provides two-way feedback mechanisms between multiple different process models that can be written in various programming languages and can run on different machines and operating systems. Additionally, a level of abstraction is given to the model integration process that allows researchers and other technical personnel to perform more detailed and interactive modeling, visualization, optimization, calibration, and uncertainty analysis without requiring deep understanding of inter-process communication. To be compatible, a program must be written in a programming language with bindings to a common implementation of the message passing interface (MPI), which includes FORTRAN, C, Java, the .NET languages, Python, R, Matlab, and many others. The system is tested on a longstanding legacy hydrologic model, the Soil and Water Assessment Tool (SWAT), to observe and enhance speed-up capabilities for various optimization, parameter estimation, and model uncertainty characterization techniques, which is particularly important for computationally intensive hydrologic simulations. Initial results indicate that the legacy extension system significantly decreases developer time, computation time, and the cost of purchasing commercial parallel processing licenses, while enhancing interdisciplinary research by providing detailed two-way feedback mechanisms between various process models with minimal changes to legacy code.
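The coupling layer described above is built on MPI; a bare-bones illustration of the two-way feedback idea with mpi4py is sketched below. The variable names, ranks, placeholder computations, and time loop are invented and far simpler than the actual publish-subscribe framework:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # run with: mpiexec -n 2 python couple.py (hypothetical file name)

for step in range(10):
    if rank == 0:           # e.g. a hydrologic model publishing streamflow
        streamflow = 100.0 - step              # placeholder computation
        comm.send(streamflow, dest=1, tag=step)
        demand = comm.recv(source=1, tag=step)
    else:                   # e.g. a socio-economic model publishing water demand
        streamflow = comm.recv(source=0, tag=step)
        demand = 0.3 * streamflow              # placeholder computation
        comm.send(demand, dest=0, tag=step)

Each model keeps its own code base and language; only the exchanged variables cross the process boundary at every step, which is the two-way feedback the abstract emphasizes.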
Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.
Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O
2008-11-11
Dejerine and Benson and Geschwind postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. Recently functional imaging studies provide evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading. The exact location and function of these areas remain a matter of debate. To confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words with intact letter by letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA) where stimulation resulted in global language dysfunction in visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with basal temporal language area. A portion of visual language area was exclusively involved in lexical processing while the other part of this region processed both lexical and nonlexical symbols.
Python for Large-Scale Electrophysiology
Spacek, Martin; Blanche, Tim; Swindale, Nicholas
2008-01-01
Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (“dimstim”); one for electrophysiological waveform visualization and spike sorting (“spyke”); and one for spike train and stimulus analysis (“neuropy”). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience. PMID:19198646
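None of the three packages is reproduced here, but as a flavour of the kind of processing involved, a minimal threshold-crossing spike detector in Python/NumPy might look like the following; the robust noise estimate, threshold factor, and refractory period are common heuristics chosen for illustration, not the spyke implementation:

import numpy as np

def detect_spikes(trace, fs, k=4.5, refractory_ms=1.0):
    """Return sample indices of negative threshold crossings in one channel."""
    noise = np.median(np.abs(trace)) / 0.6745     # robust noise estimate
    threshold = -k * noise
    crossings = np.flatnonzero((trace[1:] < threshold) &
                               (trace[:-1] >= threshold)) + 1
    # enforce a refractory period so each spike is counted once
    min_gap = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -min_gap
    for idx in crossings:
        if idx - last >= min_gap:
            spikes.append(idx)
            last = idx
    return np.array(spikes)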
Clevis, Krien; Hagoort, Peter
2011-01-01
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information. PMID:20530540
Beyond Reading: Developing Visual Literacy in French.
ERIC Educational Resources Information Center
Sacco, Steven J.; Marckel, Beverly G.
Reading can and should be a more widely used foreign language skill, and visual literacy in a foreign language goes beyond comprehension of basal reading materials. Authentic, real-life reading need not wait for foreign language mastery, but can begin at an early level if materials geared to the students' prior knowledge and interest are chosen.…
ERIC Educational Resources Information Center
Williams, Joshua T.; Newman, Sharlene D.
2016-01-01
The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately…
Teaching Turkish as a Foreign Language: Extrapolating from Experimental Psychology
ERIC Educational Resources Information Center
Erdener, Dogu
2017-01-01
Speech perception is beyond the auditory domain and a multimodal process, specifically, an auditory-visual one--we process lip and face movements during speech. In this paper, the findings in cross-language studies of auditory-visual speech perception in the past two decades are interpreted to the applied domain of second language (L2)…
Visual tool for estimating the fractal dimension of images
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.
2009-10-01
This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. Following the attempt to separate the real information from "noise", we also considered the family of all band-pass filters with the same band-width (specified as a parameter). The fractal dimension can thus be represented as a function of the pixel color code. The program was used for the study of painting cracks, as an additional tool which can help the critic to decide if an artistic work is original or not. Program summary: Program title: Fractal Analysis v01. Catalogue identifier: AEEG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 29 690. No. of bytes in distributed program, including test data, etc.: 4 967 319. Distribution format: tar.gz. Programming language: MS Visual Basic 6.0. Computer: PC. Operating system: MS Windows 98 or later. RAM: 30M. Classification: 14. Nature of problem: Estimating the fractal dimension of images. Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User-friendly graphical interface. Restrictions: Although various file types can be used, the application was mainly conceived for the 8-bit grayscale Windows bitmap file format. Running time: In a first approximation, the algorithm is linear.
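A compact Python version of the box-counting estimate underlying the program (without the band-pass filtering or the VB6 interface) illustrates the method; it assumes a binary image array in which nonzero pixels mark the structure of interest and at least one occupied box at every scale:

import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal (box-counting) dimension of a 2-D binary image."""
    counts = []
    for s in sizes:
        # trim so the image tiles exactly into s x s boxes
        h, w = (binary_img.shape[0] // s) * s, (binary_img.shape[1] // s) * s
        boxes = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())   # occupied boxes at this scale
    # dimension = -slope of log(count) versus log(box size)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope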
ERIC Educational Resources Information Center
Baker, Richard Allen, Jr.
2011-01-01
The purpose of this study was to examine the policy implications allowing administrators to exempt a student from required arts instruction if the student obtained unsatisfactory scores on the high-stake state mandated tests in English and mathematics. This study examined English language arts and math test scores for 37,222 eighth grade students…
ERIC Educational Resources Information Center
Masciantonio, Rudolph; And Others
This curriculum guide, developed for use in a sixth-grade FLES (foreign language in elementary school) program, embraces a visual-audiolingual approach to the teaching of Latin while providing a source of materials for the teaching of the culture of ancient Rome. The course is organized around eight major units on: (1) Jupiter and His Siblings,…
NASA Astrophysics Data System (ADS)
Oberhauser, Nils; Nurisso, Alessandra; Carrupt, Pierre-Alain
2014-05-01
The molecular lipophilicity potential (MLP) is a well-established method to calculate and visualize lipophilicity on molecules. We are here introducing a new computational tool named MLP Tools, written in the programming language Python, and conceived as a free plugin for the popular open source molecular viewer PyMOL. The plugin is divided into several sub-programs which allow the visualization of the MLP on molecular surfaces, as well as in three-dimensional space in order to analyze lipophilic properties of binding pockets. The sub-program Log MLP also implements the virtual log P which allows the prediction of the octanol/water partition coefficients on multiple three-dimensional conformations of the same molecule. An implementation on the recently introduced MLP GOLD procedure, improving the GOLD docking performance in hydrophobic pockets, is also part of the plugin. In this article, all functions of the MLP Tools will be described through a few chosen examples.
ERIC Educational Resources Information Center
Argyropoulos, Vassilios; Sideridis, Georgios D.; Botsas, George; Padeliadu, Susana
2012-01-01
The purpose of the present study was to assess self-regulation of students with visual impairments across two academic subjects, language and math. The participants were 46 Greek students with visual impairments who completed self-regulation measures across the subject matters of language and math. Initially, the factorial validity of the scale…
A Reggio-Inspired Music Atelier: Opening the Door between Visual Arts and Music
ERIC Educational Resources Information Center
Hanna, Wendell
2014-01-01
The Reggio Emilia approach is based on the idea that every child has at least, "one hundred languages" available for expressing perspectives of the world, and one of those languages is music. While all of the arts (visual, music, dance, drama) are considered equally important in Reggio schools, the visual arts have been particularly…
Sensitivity to Visual Prosodic Cues in Signers and Nonsigners
ERIC Educational Resources Information Center
Brentari, Diane; Gonzalez, Carolina; Seidl, Amanda; Wilbur, Ronnie
2011-01-01
Three studies are presented in this paper that address how nonsigners perceive the visual prosodic cues in a sign language. In Study 1, adult American nonsigners and users of American Sign Language (ASL) were compared on their sensitivity to the visual cues in ASL Intonational Phrases. In Study 2, hearing, nonsigning American infants were tested…
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
2014-01-01
Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of linguistic input and for deaf individuals is highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf individuals who were late-learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native-signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received. PMID:25528091
A Unique Testing System for Audio Visual Foreign Language Laboratory.
ERIC Educational Resources Information Center
Stama, Spelios T.
1980-01-01
Described is the design of a low maintenance, foreign language laboratory at Ithaca College, New York, that provides visual and audio instruction, flexibility for testing, and greater student involvement in the lessons. (Author/CS)
Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I
2018-01-01
Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.
ERIC Educational Resources Information Center
Mishra, Ramesh Kumar; Singh, Niharika
2014-01-01
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…
ERIC Educational Resources Information Center
Cohen, Elena
Recognizing that creativity facilitates children's learning and development, the Head Start Program Performance Standards require Head Start programs to include opportunities for creative self-expression. This guide with accompanying videotape, both in English- and Spanish- language versions, encourages and assists adults to support children's…
ERIC Educational Resources Information Center
Wu, Shiyu; Ma, Zheng
2017-01-01
Previous research has indicated that, in viewing a visual word, the activated phonological representation in turn activates its homophone, causing semantic interference. Using this mechanism of phonological mediation, this study investigated native-language phonological interference in visual recognition of Chinese two-character compounds by early…
Huettig, Falk; Altmann, Gerry T M
2005-05-01
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107; ]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c), both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
NASA Technical Reports Server (NTRS)
Chouinard, Caroline; Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steven
2005-01-01
The Grid Visualization Tool (GVT) is a computer program for displaying the path of a mobile robotic explorer (rover) on a terrain map. The GVT reads a map-data file in either portable graymap (PGM) or portable pixmap (PPM) format, representing a gray-scale or color map image, respectively. The GVT also accepts input from path-planning and activity-planning software. From these inputs, the GVT generates a map overlaid with one or more rover path(s), waypoints, locations of targets to be explored, and/or target-status information (indicating success or failure in exploring each target). The display can also indicate different types of paths or path segments, such as the path actually traveled versus a planned path or the path traveled to the present position versus planned future movement along a path. The program provides for updating of the display in real time to facilitate visualization of progress. The size of the display and the map scale can be changed as desired by the user. The GVT was written in the C++ language using the Open Graphics Library (OpenGL) software. It has been compiled for both Sun Solaris and Linux operating systems.
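The GVT itself is written in C++ with OpenGL; the data flow it describes (read a PGM/PPM map, overlay planned and traveled paths plus waypoints) can be mimicked in a few lines of Python purely for illustration, with placeholder file names and coordinates:

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

terrain = np.asarray(Image.open("site_map.pgm"))   # Pillow reads both PGM and PPM

planned = np.array([[10, 10], [40, 30], [80, 55], [120, 60]])   # (x, y) waypoints
traveled = np.array([[10, 10], [38, 33], [75, 58]])

plt.imshow(terrain, cmap="gray")
plt.plot(planned[:, 0], planned[:, 1], "--", label="planned path")
plt.plot(traveled[:, 0], traveled[:, 1], "-", label="traveled path")
plt.scatter(planned[:, 0], planned[:, 1], marker="x", label="waypoints")
plt.legend()
plt.show()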
ERIC Educational Resources Information Center
Kohl, Virginia; Dressler, Becky; Hoback, John
As a co-author of the GEAR-UP (Gaining Early Awareness and Readiness for Undergraduate Programs) grant proposal to the Department of Education in 1999, the primary author (Kohl) of this paper is in her third year of working at Franklin Middle School, which largely serves at-risk minority students through the University of South Florida (USF),…
ERIC Educational Resources Information Center
Cole, Rachel L.; Pickering, Susan J.
2010-01-01
This study investigated the encoding strategies employed by Chinese and English language users when recalling sequences of pictured objects. The working memory performance of native English participants (n = 14) and Chinese speakers of English as a second language (Chinese ESL; n = 14) was compared using serial recall of visually-presented…
uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications
Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.
2015-01-01
In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987
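The core design idea, one plugin code base dispatching to host-specific backends, can be sketched generically in Python; the class and method names below are invented for illustration and are not uPy's actual API:

class HostAdapter:
    """Minimal interface every host backend must provide."""
    def add_sphere(self, name, radius, position):
        raise NotImplementedError

class DummyHost(HostAdapter):
    """Stand-in backend used when no 3-D host is available (e.g. for tests)."""
    def __init__(self):
        self.scene = []
    def add_sphere(self, name, radius, position):
        self.scene.append((name, radius, position))

def get_host():
    # A real helper would detect Blender, Maya, Cinema4D, etc.; here we
    # always fall back to the dummy backend.
    return DummyHost()

def build_atom_model(host, atoms):
    """Plugin code written once against the abstract interface."""
    for name, radius, pos in atoms:
        host.add_sphere(name, radius, pos)

host = get_host()
build_atom_model(host, [("C1", 0.7, (0, 0, 0)), ("O1", 0.6, (1.2, 0, 0))])
print(host.scene)

The plugin logic (build_atom_model) never touches a host API directly, which is what lets a single script run unchanged in every supported host.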
Kim, Kyung Hwan; Kim, Ja Hyun
2006-02-20
The aim of this study was to compare spatiotemporal cortical activation patterns during the visual perception of Korean, English, and Chinese words. The comparison of these three languages offers an opportunity to study the effect of written forms on cortical processing of visually presented words, because of partial similarity/difference among words of these languages, and the familiarity of native Koreans with these three languages at the word level. Single-character words and pictograms were excluded from the stimuli in order to activate neuronal circuitries that are involved only in word perception. Since a variety of cerebral processes are sequentially evoked during visual word perception, a high-temporal resolution is required and thus we utilized event-related potential (ERP) obtained from high-density electroencephalograms. The differences and similarities observed from statistical analyses of ERP amplitudes, the correlation between ERP amplitudes and response times, and the patterns of current source density, appear to be in line with demands of visual and semantic analysis resulting from the characteristics of each language, and the expected task difficulties for native Korean subjects.
Teaching for Different Learning Styles.
ERIC Educational Resources Information Center
Cropper, Carolyn
1994-01-01
This study examined learning styles in 137 high ability fourth-grade students. All students were administered two learning styles inventories. Characteristics of students with the following learning styles are summarized: auditory language, visual language, auditory numerical, visual numerical, tactile concrete, individual learning, group…
A WebGL Tool for Visualizing the Topology of the Sun's Coronal Magnetic Field
NASA Astrophysics Data System (ADS)
Duffy, A.; Cheung, C.; DeRosa, M. L.
2012-12-01
We present a web-based, topology-viewing tool that allows users to visualize the geometry and topology of the Sun's 3D coronal magnetic field in an interactive manner. The tool is implemented using open-source, mature, modern web technologies including WebGL, jQuery, HTML 5, and CSS 3, which are compatible with nearly all modern web browsers. As opposed to the traditional method of visualization, which involves the downloading and setup of various software packages (proprietary and otherwise), the tool presents a clean interface that allows the user to easily load and manipulate the model, while also offering great power to choose which topological features are displayed. The tool accepts data encoded in the JSON open format, which has libraries available for nearly every major programming language, making it simple to generate the data.
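On the data side, producing JSON for such a viewer is straightforward in any language; a hypothetical Python exporter for a set of field lines might look like this (the key names are invented for illustration, not the tool's actual schema):

import json

# Each field line is a list of (x, y, z) points in solar radii (hypothetical data)
field_lines = [
    {"polarity": "closed", "points": [[1.0, 0.0, 0.0], [1.2, 0.1, 0.3], [1.0, 0.2, 0.6]]},
    {"polarity": "open", "points": [[1.0, 0.5, 0.0], [1.8, 0.6, 0.2], [2.5, 0.7, 0.4]]},
]

with open("field_lines.json", "w") as fh:
    json.dump({"model": "coronal-field", "lines": field_lines}, fh, indent=2)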
A Sign Language Screen Reader for Deaf
NASA Astrophysics Data System (ADS)
El Ghoul, Oussama; Jemni, Mohamed
Screen reader technology first appeared to allow blind people and people with reading difficulties to use computers and to access digital information. Until now, this technology has been exploited mainly to help the blind community. During our work with deaf people, we noticed that a screen reader can facilitate the manipulation of computers and the reading of textual information. In this paper, we propose a novel screen reader dedicated to deaf users. The output of the reader is a visual translation of the text into sign language. The screen reader is composed of two essential modules: the first is designed to capture the activities of users (mouse and keyboard events); for this purpose, we adopted the Microsoft MSAA application programming interfaces. The second module, which in classical screen readers is a text-to-speech (TTS) engine, is replaced by a novel text-to-sign (TTSign) engine. This module converts text into sign language animation based on avatar technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, E. Wes; Frank, Randy; Fulcomer, Sam
Scientific visualization is the transformation of abstract information into images, and it plays an integral role in the scientific process by facilitating insight into observed or simulated phenomena. Visualization as a discipline spans many research areas from computer science, cognitive psychology and even art. Yet the most successful visualization applications are created when close synergistic interactions with domain scientists are part of the algorithmic design and implementation process, leading to visual representations with clear scientific meaning. Visualization is used to explore, to debug, to gain understanding, and as an analysis tool. Visualization is literally everywhere: images are present in this report, on television, on the web, in books and magazines; the common theme is the ability to present information visually that is rapidly assimilated by human observers, and transformed into understanding or insight. As an indispensable part of a modern science laboratory, visualization is akin to the biologist's microscope or the electrical engineer's oscilloscope. Whereas the microscope is limited to small specimens or use of optics to focus light, the power of scientific visualization is virtually limitless: visualization provides the means to examine data that can be at galactic or atomic scales, or at any size in between. Unlike the traditional scientific tools for visual inspection, visualization offers the means to "see the unseeable." Trends in demographics or changes in levels of atmospheric CO2 as a function of greenhouse gas emissions are familiar examples of such unseeable phenomena. Over time, visualization techniques evolve in response to scientific need. Each scientific discipline has its "own language," verbal and visual, used for communication. The visual language for depicting electrical circuits is very different from the visual language for depicting theoretical molecules or trends in the stock market. There is no "one visualization tool" that can serve as a panacea for all science disciplines. Instead, visualization researchers work hand in hand with domain scientists as part of the scientific research process to define, create, adapt and refine software that "speaks the visual language" of each scientific domain.
Visualized kinematics code for two-body nuclear reactions
NASA Astrophysics Data System (ADS)
Lee, E. J.; Chae, K. Y.
2016-05-01
The one or few nucleon transfer reaction has been a great tool for investigating the single-particle properties of a nucleus. Both stable and exotic beams are utilized to study transfer reactions in normal and inverse kinematics, respectively. Because many energy levels of the heavy recoil from the two-body nuclear reaction can be populated by using a single beam energy, identifying each populated state, which is not often trivial owing to high level-density of the nucleus, is essential. For identification of the energy levels, a visualized kinematics code called VISKIN has been developed by utilizing the Java programming language. The development procedure, usage, and application of the VISKIN is reported.
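VISKIN itself is written in Java; the underlying two-body relation it visualizes can be sketched non-relativistically in Python. The function below returns the ejectile laboratory kinetic energy for a given angle, beam energy, and Q-value using the standard non-relativistic kinematics formula (masses in amu, energies in MeV); only the '+' kinematic branch is returned and relativistic corrections are ignored, so this is an illustrative approximation rather than the code's actual method:

import numpy as np

def ejectile_energy(m_a, m_A, m_b, m_B, E_a, q_value, theta_deg):
    """Non-relativistic lab kinetic energy of ejectile b in A(a, b)B at angle theta."""
    theta = np.radians(theta_deg)
    r = np.sqrt(m_a * m_b * E_a) * np.cos(theta)
    s = (m_a * m_b * E_a * np.cos(theta) ** 2
         + (m_B + m_b) * (m_B * q_value + (m_B - m_a) * E_a))
    if s < 0:
        raise ValueError("kinematically forbidden at this angle/energy")
    sqrt_E_b = (r + np.sqrt(s)) / (m_B + m_b)   # '+' branch only
    return sqrt_E_b ** 2

# Example: a 7Li beam on a deuterium target, d(7Li, p)8Li, with approximate
# masses (amu), Q-value (MeV), and an arbitrary 10 MeV beam energy
print(ejectile_energy(m_a=7.016, m_A=2.014, m_b=1.008, m_B=8.022,
                      E_a=10.0, q_value=-0.192, theta_deg=30.0))

Scanning theta over the detector coverage and repeating for each excited state of the heavy recoil (via the Q-value) gives the energy-versus-angle curves used to identify populated levels.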
ERIC Educational Resources Information Center
Emmorey, Karen; Gertsberg, Nelly; Korpics, Franco; Wright, Charles E.
2009-01-01
Speakers monitor their speech output by listening to their own voice. However, signers do not look directly at their hands and cannot see their own face. We investigated the importance of a visual perceptual loop for sign language monitoring by examining whether changes in visual input alter sign production. Deaf signers produced American Sign…
ERIC Educational Resources Information Center
Brooks, Kevin
2009-01-01
This article provides an analysis of Marshall McLuhan and Quentin Fiore's "The Medium Is the Massage," a visual-verbal text that is generally acknowledged as innovative but seldom taken seriously or read carefully. The analysis draws on the visual language vocabulary developed by Scott McCloud in "Understanding Comics" and argues that the field of…
Flexible Environmental Modeling with Python and Open - GIS
NASA Astrophysics Data System (ADS)
Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann
2015-04-01
Numerical modeling now represents a prominent task in environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code reviewing and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models, associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command-line programs. However, there is a need for a flexible graphical user interface allowing efficient processing of the geospatial data that accompanies any environmental study. Here, we present the advantages of using the free and open-source Qgis platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once the input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
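A skeletal Python example of the scripted model-call pattern described above; the model executable name, parameter-file format, run directory, and output parsing are all placeholders and assume the executable and directory already exist:

import subprocess
import numpy as np

def run_model(recharge, conductivity, workdir="run"):
    """Write a parameter file, call an external model, and read one output value."""
    with open(f"{workdir}/params.txt", "w") as fh:
        fh.write(f"recharge {recharge}\nconductivity {conductivity}\n")
    subprocess.run(["./groundwater_model", f"{workdir}/params.txt"], check=True)
    with open(f"{workdir}/head_at_well.txt") as fh:
        return float(fh.read())

# Simple sensitivity sweep; in practice thousands of runs may be dispatched in parallel
for recharge in np.linspace(100, 400, 4):
    for conductivity in (1e-5, 1e-4, 1e-3):
        print(recharge, conductivity, run_model(recharge, conductivity))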
Blue-green color categorization in Mandarin-English speakers.
Wuerger, Sophie; Xiao, Kaida; Mylonas, Dimitris; Huang, Qingmei; Karatzas, Dimosthenis; Hird, Emily; Paramei, Galina
2012-02-01
Observers are faster to detect a target among a set of distracters if the targets and distracters come from different color categories. This cross-boundary advantage seems to be limited to the right visual field, which is consistent with the dominance of the left hemisphere for language processing [Gilbert et al., Proc. Natl. Acad. Sci. USA 103, 489 (2006)]. Here we study whether a similar visual field advantage is found in the color identification task in speakers of Mandarin, a language that uses a logographic system. Forty late Mandarin-English bilinguals performed a blue-green color categorization task, in a blocked design, in their first language (L1: Mandarin) or second language (L2: English). Eleven color singletons ranging from blue to green were presented for 160 ms, randomly in the left visual field (LVF) or right visual field (RVF). Color boundary and reaction times (RTs) at the color boundary were estimated in L1 and L2, for both visual fields. We found that the color boundary did not differ between the languages; RTs at the color boundary, however, were on average more than 100 ms shorter in the English compared to the Mandarin sessions, but only when the stimuli were presented in the RVF. The finding may be explained by the script nature of the two languages: Mandarin logographic characters are analyzed visuospatially in the right hemisphere, which conceivably facilitates identification of color presented to the LVF. © 2012 Optical Society of America
SBOL Visual: A Graphical Language for Genetic Designs.
Quinn, Jacqueline Y; Cox, Robert Sidney; Adler, Aaron; Beal, Jacob; Bhatia, Swapnil; Cai, Yizhi; Chen, Joanna; Clancy, Kevin; Galdzicki, Michal; Hillson, Nathan J; Le Novère, Nicolas; Maheshwari, Akshay J; McLaughlin, James Alastair; Myers, Chris J; P, Umesh; Pocock, Matthew; Rodriguez, Cesar; Soldatova, Larisa; Stan, Guy-Bart V; Swainston, Neil; Wipat, Anil; Sauro, Herbert M
2015-12-01
Synthetic Biology Open Language (SBOL) Visual is a graphical standard for genetic engineering. It consists of symbols representing DNA subsequences, including regulatory elements and DNA assembly features. These symbols can be used to draw illustrations for communication and instruction, and as image assets for computer-aided design. SBOL Visual is a community standard, freely available for personal, academic, and commercial use (Creative Commons CC0 license). We provide prototypical symbol images that have been used in scientific publications and software tools. We encourage users to use and modify them freely, and to join the SBOL Visual community: http://www.sbolstandard.org/visual.
Lu, Aitao; Wang, Lu; Guo, Yuyang; Zeng, Jiahong; Zheng, Dongping; Wang, Xiaolu; Shao, Yulan; Wang, Ruiming
2017-09-01
The current study investigated the mechanism of language switching in unbalanced visual unimodal bilinguals as well as balanced and unbalanced bimodal bilinguals during a picture naming task. All three groups exhibited significant switch costs across two languages, with symmetrical switch cost in balanced bimodal bilinguals and asymmetrical switch cost in unbalanced unimodal bilinguals and bimodal bilinguals. Moreover, the relative proficiency of the two languages but not their absolute proficiency had an effect on language switch cost. For the bimodal bilinguals the language switch cost also arose from modality switching. These findings suggest that the language switch cost might originate from multiple sources from both outside (e.g., modality switching) and inside (e.g., the relative proficiency of the two languages) the linguistic lexicon.
Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.
Jung, Sang-Kyu; McDonald, Karen
2011-08-16
Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.
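One of the simplest optimization strategies the abstract alludes to, replacing each codon with the most frequent synonymous codon of the expression host, can be sketched in Python (the software's own scripting modules use VBScript/JScript); the truncated genetic-code and codon-usage tables below are purely illustrative:

# Fragment of a codon-usage table: synonymous codons with relative frequencies
# for a hypothetical host organism (values are illustrative only).
CODON_USAGE = {
    "L": {"CTG": 0.49, "CTC": 0.20, "TTA": 0.03, "CTT": 0.10, "TTG": 0.11, "CTA": 0.07},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "F": {"TTT": 0.57, "TTC": 0.43},
}
GENETIC_CODE = {"CTG": "L", "CTC": "L", "TTA": "L", "CTT": "L", "TTG": "L",
                "CTA": "L", "AAA": "K", "AAG": "K", "TTT": "F", "TTC": "F"}

def optimize(dna):
    """Replace every codon with the most frequent synonymous codon."""
    out = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        aa = GENETIC_CODE[dna[i:i + 3]]
        out.append(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get))
    return "".join(out)

print(optimize("TTAAAGTTT"))   # -> CTGAAATTT

The real software layers many more constraints (mRNA secondary structure, restriction sites, user-defined algorithms) on top of this basic substitution idea.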
Research issues of geometry-based visual languages and some solutions
NASA Astrophysics Data System (ADS)
Green, Thorn G.
This dissertation addresses the problem of how to design visual language systems that are based upon Geometric Algebra and provide a visual coupling of algebraic expressions and geometric depictions. This coupling of algebraic expressions and geometric depictions provides a new means for expressing both mathematical and geometric relationships present in mathematics, physics, and Computer-Aided Geometric Design (CAGD). Another significant feature of such a system is that the result of changing a parameter (by dragging the mouse) can be seen immediately in the depiction(s) of all expressions that use that parameter. This greatly aids the cognition of the relationships between variables. Systems for representing such a coupling of algebra and geometry have characteristics of both visual language systems and systems for scientific visualization. Instead of using a parsing or dataflow paradigm for the visual language representation, the systems represent equations as manipulable constrained diagrams for their visualization. This requires that the design of such a system include (but not be limited to): a means for parsing equations entered by the user; a scheme for producing a visual representation of these equations; techniques for maintaining the coupling between the expressions entered and the diagrams displayed; algorithms for maintaining the consistency of the diagrams; and indexing capabilities efficient enough to allow diagrams to be created and manipulated in a short enough period of time. The author proposes solutions for how such a design can be realized.
Visual-auditory integration during speech imitation in autism.
Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
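The "mathematical models based on integration and non-integration" referred to here are presumably in the tradition of Massaro's fuzzy logical model of perception (FLMP); a minimal Python version of the multiplicative integration rule for two response alternatives is shown below, with invented truth values for the example:

def flmp_probability(a, v):
    """Probability of one response alternative given auditory support a and visual
    support v, each a truth value in [0, 1], under multiplicative integration."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

# Auditory evidence weakly favours the alternative, visual evidence strongly favours it
print(flmp_probability(a=0.6, v=0.9))   # integrated support, about 0.93

Under the non-integration alternative, the response would instead be driven by one modality at a time; comparing the fit of the two rules to observed identification rates is the model comparison the abstract describes.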
Don’t Assume Deaf Students are Visual Learners
Marschark, Marc; Paivio, Allan; Spencer, Linda J.; Durkin, Andreana; Borgna, Georgianna; Convertino, Carol; Machmer, Elizabeth
2016-01-01
In the education of deaf learners, from primary school to postsecondary settings, it frequently is suggested that deaf students are visual learners. That assumption appears to be based on the visual nature of signed languages—used by some but not all deaf individuals—and the fact that with greater hearing losses, deaf students will rely relatively more on vision than audition. However, the questions of whether individuals with hearing loss are more likely to be visual learners than verbal learners or more likely than hearing peers to be visual learners have not been empirically explored. Several recent studies, in fact, have indicated that hearing learners typically perform as well or better than deaf learners on a variety of visual-spatial tasks. The present study used two standardized instruments to examine learning styles among college deaf students who primarily rely on sign language or spoken language and their hearing peers. The visual-verbal dimension was of particular interest. Consistent with recent indirect findings, results indicated that deaf students are no more likely than hearing students to be visual learners and are no stronger in their visual skills and habits than their verbal skills and habits, nor are deaf students’ visual orientations associated with sign language skills. The results clearly have specific implications for the educating of deaf learners. PMID:28344430
Gender differences in identifying emotions from auditory and visual stimuli.
Waaramaa, Teija
2017-12-01
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.
Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective
Pyers, Jennie E.; Perniss, Pamela; Emmorey, Karen
2015-01-01
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality. PMID:26981027
Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective.
Pyers, Jennie E; Perniss, Pamela; Emmorey, Karen
2015-06-01
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.
Strasser, T; Peters, T; Jagle, H; Zrenner, E; Wilke, R
2010-01-01
Electrophysiology of vision - especially the electroretinogram (ERG) - provides a non-invasive means of functional testing of the visual system. The ERG is a combined electrical response generated by neural and non-neuronal cells in the retina in response to light stimulation. This response can be recorded and used for the diagnosis of numerous disorders. For both clinical practice and clinical trials it is important to process these signals accurately and quickly and to provide the results as structured, consistent reports. Therefore, we developed a freely available, open-source framework in Java (http://www.eye.uni-tuebingen.de/project/idsI4sigproc). The framework is designed for easy integration with existing applications. By leveraging well-established software patterns like pipes-and-filters and fluent interfaces, and by designing the application programming interfaces (API) as an integrated domain-specific language (DSL), the framework provides a smooth learning curve. Additionally, it already contains several processing methods and visualization features and can be extended easily by implementing the provided interfaces. In this way, not only can new processing methods be added, but the framework can also be adapted for other areas of signal processing. This article describes in detail the structure and implementation of the framework and demonstrates its application through the software package used in clinical practice and clinical trials at the University Eye Hospital Tuebingen, one of the largest departments in the field of visual electrophysiology in Europe.
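The abstract names the framework's design patterns but not its API; the following minimal Python sketch (class and method names invented here, not the actual Java API of the Tuebingen framework) only illustrates the pipes-and-filters style with a fluent interface that it refers to, where each filter returns the pipeline object so processing steps read like a small domain-specific language.

```python
# Minimal sketch of a pipes-and-filters pipeline with a fluent interface, the
# two patterns the abstract names. Class and method names are invented for
# illustration; this is not the Java API of the Tuebingen framework.
class SignalPipeline:
    """Chains simple filters over a sampled signal (a list of floats)."""

    def __init__(self, samples):
        self.samples = list(samples)

    def detrend(self):
        # Remove the mean so the trace is centred on zero.
        mean = sum(self.samples) / len(self.samples)
        self.samples = [s - mean for s in self.samples]
        return self            # returning self is what makes the interface fluent

    def moving_average(self, width=5):
        # Very simple low-pass filter over a trailing window.
        out = []
        for i in range(len(self.samples)):
            window = self.samples[max(0, i - width + 1): i + 1]
            out.append(sum(window) / len(window))
        self.samples = out
        return self

    def peak_to_peak(self):
        return max(self.samples) - min(self.samples)


# Usage: the chained filters read like a small domain-specific language.
erg_trace = [0.1, 0.4, 1.2, 0.9, 0.3, -0.2, 0.0, 0.1]
amplitude = SignalPipeline(erg_trace).detrend().moving_average(width=3).peak_to_peak()
print(f"peak-to-peak amplitude estimate: {amplitude:.2f}")
```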
ERIC Educational Resources Information Center
Declerck, Mathieu; Koch, Iring; Philipp, Andrea M.
2015-01-01
The current study systematically examined the influence of sequential predictability of languages and concepts on language switching. To this end, 2 language switching paradigms were combined. To measure language switching with a random sequence of languages and/or concepts, we used a language switching paradigm that implements visual cues and…
2013-03-01
operation. 2.1.2 Canine model The canine experiment (n = 1) was performed as a validation of the correlation of visible reflectance imaging measurements... with actual blood oxygenation. The canine laparotomy, as part of an animal protocol approved by the Institutional Animal Care and... All data analysis was performed using algorithms and software written in-house using the programming languages Matlab and IDL/ENVI (ITT Visual
2002-09-01
Basic for Applications (VBA) 6.0 as macros may not be supported in future versions of Access. Access 2000 offers Internet-related features for... security features from Microsoft's SQL Server. [1] 3. System Requirements Access 2000 is a resource-intensive application as are all Office 2000... 1] • Modules – Functions and procedures written in the Visual Basic for Applications (VBA) programming language. The capabilities of modules
Campbell, T A; Wright, J C; Huston, A C
1987-06-01
An experiment was designed to assess the effects of formal production features and content difficulty on children's processing of televised messages about nutrition. Messages with identical content (the same script and visual shot sequence) were made in two forms: child program forms (animated film, second-person address, and character voice narration with sprightly music) and adult program forms (live photography, third-person address, and adult male narration with sedate background music). For each form, messages were made at three levels of content difficulty. Easier versions were longer, more redundant, and used simpler language; difficult versions presented information more quickly with less redundancy and more abstract language. Regardless of form or difficulty level, each set of bits presented the same basic information. Kindergarten children (N = 120) were assigned to view three different bits of the same form type and difficulty embedded in a miniprogram. Visual attention to child forms was significantly greater than to adult forms; free and cued recall scores were also higher for child than for adult forms. Although all recall and recognition scores were best for easy versions and worst for difficult versions, attention showed only minor variation as a function of content difficulty. Results are interpreted to indicate that formal production features, independently of content, influence the effort and level of processing that children use to understand televised educational messages.
Bozić-Vrbancić, Senka; Vrbancić, Mario; Orlić, Olga
2008-12-01
Questions of diversity and multiculturalism are at the heart of many discussions on European supranational identity within contemporary anthropology, sociology, cultural studies, linguistics and so on. Since we are living in a period marked by the economic and political changes which emerged after European unification, a call for a new analysis of heterogeneity, cultural difference and issues of belonging is not surprising. This call has been fuelled by the European Union's concern with "culture" as one of the main driving forces for constructing "European identity". While the official European policy describes European culture as common to all Europeans, Europe is also seen as representing "unity in diversity". By analysing contemporary European MEDIA policies and programs, this article attempts to contribute to a small but growing body of work that explores what role "language" and "visual images" play in the process of constructing European culture and supranational European identity. More specifically, the article explores the complex articulation of language and culture in order to analyse the supranational imaginary of European identity as it is expressed through the simple slogan "Europe: unity in diversity". We initially grounded our interest in the politics of identity within the European Union in the theoretical frameworks of "power and knowledge" and "identity and subjectivity". We consider contemporary debates in social sciences and humanities over the concepts of "language", "culture" and "identity" as inseparable from each other (Ahmed 2000; Brah 1996, 2000; Butler 1993, Derrida 1981; Gilroy 2004; Laclau 1990). Cultural and postcolonial studies theorists (e.g. Brah 1996; Bhabha 1994; Hall 1992, 1996, among others) argue that concepts of "culture" and "identity" signify a historically variable nexus of social meanings. That is to say, "culture" and "identity" are discursive articulations. According to this view, "culture" and "identity" are not separate fields from economic, social and political issues; on the contrary, "culture" and "identity" are constructed through social, economic and political relations. Issues of "language" and "images" are central to both of them. By questioning the role that "language" and "visual images" play in the construction of European identity and culture, we are considering "language" as well as "visual images" not just as representations, but also as forms of social action. In addition to that, inspired by discourse theory (Laclau 1985, 1994, 2007) and psychoanalysis (Zizek 1989, 1993, 1994; Stavrakakis 1999, 2005, 2007) we explore the libidinal dimension of identification processes. We focus on the European MEDIA Programme in order to analyse how different languages and images are being used to create a sense of "European unity in diversity". Along with Stavrakakis we argue that due to the lack of libidinal investment into discourses of Europeanness, Europe is failing to create a strong supranational identity. However, we also show that there have been recent attempts by European policy makers to fill this gap through various projects which focus entirely on emotions, which appears to open up new possibilities of identification with Europe.
Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam
2013-09-01
Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months old and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments which were validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language significantly developed in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. Therefore, it is not essential to limit access to the visual modality and to rely solely on the auditory modality when instructing hearing, language, and speech in children with cochlear implants who are exposed to spoken language both at home and at school when communicating with their parents and educators prior to and after implantation. The trial has been registered at IRCT.ir, number IRCT201109267637N1. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Detection of Brain Reorganization in Pediatric Multiple Sclerosis Using Functional MRI
2015-10-01
accomplish this, we apply comparative assessments of fMRI mappings of language, memory, and motor function, and performance on clinical neurocognitive... community at a target rate of 13 volunteers per quarter period; acquire fMRI data for language, memory, and visual-motor functions (months 3-12). c... consensus fMRI activation maps for language, memory, and visual-motor tasks (months 8-12). f) Subtask 1f. Prepare publication to disseminate our
2005-05-01
visual concordance rates varied with the patient's language and level of health literacy. Fifty percent of patients achieved verbal concordance and... and having inadequate health literacy were associated with a lower odds ratio for verbal concordance compared to being an English speaker and having... adequate health literacy. Neither language nor health literacy was associated with visual discordance. The authors conclude that clinician-patient
People can understand descriptions of motion without activating visual motion brain regions
Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina
2013-01-01
What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592
Universality in eye movements and reading: A trilingual investigation.
Liversedge, Simon P; Drieghe, Denis; Li, Xin; Yan, Guoli; Bai, Xuejun; Hyönä, Jukka
2016-02-01
Universality in language has been a core issue in the fields of linguistics and psycholinguistics for many years (e.g., Chomsky, 1965). Recently, Frost (2012) has argued that establishing universals of process is critical to the development of meaningful, theoretically motivated, cross-linguistic models of reading. In contrast, other researchers argue that there is no such thing as universals of reading (e.g., Coltheart & Crain, 2012). Reading is a complex, visually mediated psychological process, and eye movements are the behavioural means by which we encode the visual information required for linguistic processing. To investigate universality of representation and process across languages we examined eye movement behaviour during reading of very comparable stimuli in three languages, Chinese, English and Finnish. These languages differ in numerous respects (character based vs. alphabetic, visual density, informational density, word spacing, orthographic depth, agglutination, etc.). We used linear mixed modelling techniques to identify variables that captured common variance across languages. Despite fundamental visual and linguistic differences in the orthographies, statistical models of reading behaviour were strikingly similar in a number of respects, and thus, we argue that their composition might reflect universality of representation and process in reading. Copyright © 2015 Elsevier B.V. All rights reserved.
Visualization and Interaction in Research, Teaching, and Scientific Communication
NASA Astrophysics Data System (ADS)
Ammon, C. J.
2017-12-01
Modern computing provides many tools for exploring observations, numerical calculations, and theoretical relationships. The number of options is, in fact, almost overwhelming. But the choices provide those with modest programming skills opportunities to create unique views of scientific information and to develop deeper insights into their data, their computations, and the underlying theoretical data-model relationships. I present simple examples of using animation and human-computer interaction to explore scientific data and scientific-analysis approaches. I illustrate how a little programming ability can free scientists from the constraints of existing tools and can foster a deeper appreciation of data and models. I present examples from a suite of programming languages ranging from C to JavaScript, including the Wolfram Language. JavaScript is valuable for sharing tools and insight (hopefully) with others because it is integrated into one of the most powerful communication tools in human history, the web browser. Although too much of that power is often spent on distracting advertisements, the underlying computation and graphics engines are efficient, flexible, and almost universally available on desktop and mobile computing platforms. Many are working to fulfill the browser's potential to become the most effective tool for interactive study. Open-source frameworks for visualizing everything from algorithms to data are available, but advance rapidly. One strategy for dealing with swiftly changing tools is to adopt common, open data formats that are easily adapted (often by framework or tool developers). I illustrate the use of animation and interaction in research and teaching with examples from earthquake seismology.
GCLAS: a graphical constituent loading analysis system
McKallip, T.E.; Koltun, G.F.; Gray, J.R.; Glysson, G.D.
2001-01-01
The U.S. Geological Survey has developed a program called GCLAS (Graphical Constituent Loading Analysis System) to aid in the computation of daily constituent loads transported in stream flow. Because most water-quality data are collected relatively infrequently, computation of daily constituent loads is moderately to highly dependent on human interpretation of the relation between stream hydraulics and constituent transport. GCLAS provides a visual environment for evaluating the relation between hydraulic and other covariate time series and the constituent chemograph. GCLAS replaces the computer program Sedcalc, which is the most recent USGS-sanctioned tool for constructing sediment chemographs and computing suspended-sediment loads. Written in a portable language, GCLAS has an interactive graphical interface that permits easy entry of estimated values and provides new tools to aid in making those estimates. The use of a portable language for program development imparts a degree of computer platform independence that was difficult to obtain in the past, making implementation more straightforward within the USGS's diverse computing environment. Some of the improvements introduced in GCLAS include (1) the ability to directly handle periods of zero or reverse flow, (2) the ability to analyze and apply coefficient adjustments to concentrations as a function of time, streamflow, or both, (3) the ability to compute discharges of constituents other than suspended sediment, (4) the ability to easily view data related to the chemograph at different levels of detail, and (5) the ability to readily display covariate time series data to provide enhanced visual cues for drawing the constituent chemograph.
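The abstract does not spell out GCLAS's internal algorithms; as a rough illustration of the computation such tools automate, the Python sketch below (with made-up data) pairs an interpreted concentration chemograph with the streamflow record and averages the instantaneous loads over a day, using the conventional units factor for mg/L and ft³/s to tons/day.

```python
# Illustrative sketch (made-up data) of the computation such tools automate:
# pair an interpreted concentration chemograph with the streamflow record and
# average the instantaneous loads over the day.
CONVERSION = 0.0027   # (mg/L)*(ft3/s) -> tons/day, the conventional factor

def daily_load(concentrations_mg_per_l, discharges_cfs):
    """Mean daily constituent load in tons/day from aligned unit-value series."""
    if len(concentrations_mg_per_l) != len(discharges_cfs):
        raise ValueError("concentration and discharge series must align")
    instantaneous = [CONVERSION * c * q
                     for c, q in zip(concentrations_mg_per_l, discharges_cfs)]
    return sum(instantaneous) / len(instantaneous)

# Example: a small storm peak in the afternoon (illustrative values).
conc = [20, 25, 40, 120, 300, 180, 90, 40]       # mg/L, interpreted chemograph
flow = [150, 160, 200, 450, 900, 700, 400, 250]  # ft3/s, streamflow record
print(f"suspended-sediment load: {daily_load(conc, flow):.1f} tons/day")
```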
Changes in intrinsic local connectivity after reading intervention in children with autism.
Maximo, Jose O; Murdaugh, Donna L; O'Kelley, Sarah; Kana, Rajesh K
2017-12-01
Most of the existing behavioral and cognitive intervention programs in autism spectrum disorders (ASD) have not been tested at the neurobiological level, thus falling short of finding quantifiable neurobiological changes underlying behavioral improvement. The current study takes a translational neuroimaging approach to test the impact of a structured visual imagery-based reading intervention on improving reading comprehension and assessing its underlying local neural circuitry. Behavioral and resting state functional MRI (rs-fMRI) data were collected from children with ASD who were randomly assigned to an Experimental group (ASD-EXP; n=14) and a Wait-list control group (ASD-WLC; n=14). Participants went through an established reading intervention training program (Visualizing and Verbalizing for language comprehension and thinking or V/V; 4-h per day, 10-weeks, 200h of face-to-face instruction). Local functional connectivity was examined using a connection density approach from graph theory focusing on brain areas considered part of the Reading Network. The main results are as follows: (I) the ASD-EXP group showed significant improvement, compared to the ASD-WLC group, in their reading comprehension ability evidenced from change in comprehension scores; (II) the ASD-EXP group showed increased local brain connectivity in Reading Network regions compared to the ASD-WLC group post-intervention; (III) intervention-related changes in local brain connectivity were observed in the ASD-EXP from pre to post-intervention; and (IV) improvement in language comprehension significantly predicted changes in local connectivity. The findings of this study provide novel insights into brain plasticity in children with developmental disorders using targeted intervention programs. Published by Elsevier Inc.
Jongman, Suzanne R; Roelofs, Ardi; Scheper, Annette R; Meyer, Antje S
2017-05-01
Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, which is the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired with respect to processing stimuli in the auditory perceptual modality, conflicting evidence exists concerning the visual modality. To address the outstanding issue whether the impairment in sustained attention is limited to the auditory domain, or if it is domain-general. Furthermore, to test whether children's sustained attention ability relates to their word-production skills. Groups of 7-9 year olds with SLI (N = 28) and typically developing (TD) children (N = 22) performed a picture-naming task and two sustained attention tasks, namely auditory and visual continuous performance tasks (CPTs). Children with SLI performed worse than TD children on picture naming and on both the auditory and visual CPTs. Moreover, performance on both the CPTs correlated with picture-naming latencies across developmental groups. These results provide evidence for a deficit in both auditory and visual sustained attention in children with SLI. Moreover, the study indicates there is a relationship between domain-general sustained attention and picture-naming performance in both TD and language-impaired children. Future studies should establish whether this relationship is causal. If attention influences language, training of sustained attention may improve language production in children from both developmental groups. © 2016 Royal College of Speech and Language Therapists.
Strickland, Brent; Geraci, Carlo; Chemla, Emmanuel; Schlenker, Philippe; Kelepir, Meltem; Pfau, Roland
2015-05-12
According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., "decide," "sell," "die") encode a logical endpoint, whereas atelic verbs (e.g., "think," "negotiate," "run") do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1-5, nonsigning English speakers accurately distinguished between telic (e.g., "decide") and atelic (e.g., "think") signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7-10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible "mapping biases" between telicity and visual form.
Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals
ERIC Educational Resources Information Center
Sekerina, Irina A.; Sauermann, Antje
2015-01-01
It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…
A Neurobehavioral Model of Flexible Spatial Language Behaviors
ERIC Educational Resources Information Center
Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schoner, Gregor
2012-01-01
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-word visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that…
Audio Visual Technology and the Teaching of Foreign Languages.
ERIC Educational Resources Information Center
Halbig, Michael C.
Skills in comprehending the spoken language source are becoming increasingly important due to the audio-visual orientation of our culture. It would seem natural, therefore, to adjust the learning goals and environment accordingly. The video-cassette machine is an ideal means for creating this learning environment and developing the listening…
Next generation simulation tools: the Systems Biology Workbench and BioSPICE integration.
Sauro, Herbert M; Hucka, Michael; Finney, Andrew; Wellock, Cameron; Bolouri, Hamid; Doyle, John; Kitano, Hiroaki
2003-01-01
Researchers in quantitative systems biology make use of a large number of different software packages for modelling, analysis, visualization, and general data manipulation. In this paper, we describe the Systems Biology Workbench (SBW), a software framework that allows heterogeneous application components--written in diverse programming languages and running on different platforms--to communicate and use each other's capabilities via a fast binary encoded-message system. Our goal was to create a simple, high-performance, open-source software infrastructure which is easy to implement and understand. SBW enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe in this paper the SBW architecture, a selection of current modules, including Jarnac, JDesigner, and SBWMeta-tool, and the close integration of SBW into BioSPICE, which enables both frameworks to share tools and complement and strengthen each other's capabilities.
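The concrete SBW protocol and APIs are not given in the abstract; the toy, in-process Python sketch below (all names hypothetical) only illustrates the broker pattern it describes, in which modules register named services and call one another through the broker without knowing each other's implementation language or location.

```python
# Toy, in-process illustration of the broker pattern the abstract describes:
# modules register named services with a broker and invoke each other's
# services through it. This is not the real SBW protocol or API, which uses a
# binary-encoded message system over a network connection.
class Broker:
    def __init__(self):
        self._services = {}

    def register(self, module, service, handler):
        self._services[(module, service)] = handler

    def call(self, module, service, *args):
        return self._services[(module, service)](*args)

broker = Broker()

# A "simulator" module written by one group...
broker.register("simulator", "steady_state",
                lambda rates: {name: 1.0 / rate for name, rate in rates.items()})

# ...and another module calling it through the broker, without knowing how or
# where the simulator is implemented.
result = broker.call("simulator", "steady_state", {"S1": 2.0, "S2": 4.0})
print(result)   # {'S1': 0.5, 'S2': 0.25}
```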
Epi Info - present and future.
Su, Y; Yoon, S S
2003-01-01
Epi Info is a suite of public domain computer programs for public health professionals developed by the Centers for Disease Control and Prevention (CDC). Epi Info is used for rapid questionnaire design, data entry and validation, data analysis including mapping and graphing, and creation of reports. Epi Info was originally created in 1985 using Turbo Pascal. In 1998, the last version of Epi Info for DOS, version 6, was released. Epi Info for DOS is currently supported by CDC but is no longer updated. The current version, Epi Info 2002, is Windows-based software developed using Microsoft Visual Basic. Approximately 300,000 downloads of Epi Info software occurred in 2002 from approximately 130 countries. These numbers make Epi Info probably one of the most widely distributed and used public domain programs in the world. The DOS version of Epi Info was translated into 13 languages, and efforts are underway to translate the Windows version into other major languages. Versions already exist for Spanish, French, Portuguese, Chinese, Japanese, and Arabic.
ERIC Educational Resources Information Center
King, Paul; King, Eva
This language-through-literature program is designed to be used as a native language program (language arts/reading readiness), as a second language program, or as a combined native and second language program in early childhood education. Sequentially developed over the year and within each unit, the program is subdivided into 14 units of about…
SBOL Visual: A Graphical Language for Genetic Designs
Quinn, Jacqueline Y.; Cox, Robert Sidney; Adler, Aaron; ...
2015-12-03
Synthetic Biology Open Language (SBOL) Visual is a graphical standard for genetic engineering. We report that it consists of symbols representing DNA subsequences, including regulatory elements and DNA assembly features. These symbols can be used to draw illustrations for communication and instruction, and as image assets for computer-aided design. SBOL Visual is a community standard, freely available for personal, academic, and commercial use (Creative Commons CC0 license). We provide prototypical symbol images that have been used in scientific publications and software tools. We encourage users to use and modify them freely, and to join the SBOL Visual community: http://www.sbolstandard.org/visual.
SBOL Visual: A Graphical Language for Genetic Designs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Jacqueline Y.; Cox, Robert Sidney; Adler, Aaron
Synthetic Biology Open Language (SBOL) Visual is a graphical standard for genetic engineering. We report that it consists of symbols representing DNA subsequences, including regulatory elements and DNA assembly features. These symbols can be used to draw illustrations for communication and instruction, and as image assets for computer-aided design. SBOL Visual is a community standard, freely available for personal, academic, and commercial use (Creative Commons CC0 license). We provide prototypical symbol images that have been used in scientific publications and software tools. We encourage users to use and modify them freely, and to join the SBOL Visual community: http://www.sbolstandard.org/visual.
SBOL Visual: A Graphical Language for Genetic Designs
Adler, Aaron; Beal, Jacob; Bhatia, Swapnil; Cai, Yizhi; Chen, Joanna; Clancy, Kevin; Galdzicki, Michal; Hillson, Nathan J.; Le Novère, Nicolas; Maheshwari, Akshay J.; McLaughlin, James Alastair; Myers, Chris J.; P, Umesh; Pocock, Matthew; Rodriguez, Cesar; Soldatova, Larisa; Stan, Guy-Bart V.; Swainston, Neil; Wipat, Anil; Sauro, Herbert M.
2015-01-01
Synthetic Biology Open Language (SBOL) Visual is a graphical standard for genetic engineering. It consists of symbols representing DNA subsequences, including regulatory elements and DNA assembly features. These symbols can be used to draw illustrations for communication and instruction, and as image assets for computer-aided design. SBOL Visual is a community standard, freely available for personal, academic, and commercial use (Creative Commons CC0 license). We provide prototypical symbol images that have been used in scientific publications and software tools. We encourage users to use and modify them freely, and to join the SBOL Visual community: http://www.sbolstandard.org/visual. PMID:26633141
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded into a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
Signs of Change: Contemporary Attitudes to Australian Sign Language
ERIC Educational Resources Information Center
Slegers, Claudia
2010-01-01
This study explores contemporary attitudes to Australian Sign Language (Auslan). Since at least the 1960s, sign languages have been accepted by linguists as natural languages with all of the key ingredients common to spoken languages. However, these visual-spatial languages have historically been subject to ignorance and myth in Australia and…
ERIC Educational Resources Information Center
Linn, Mary S.; Naranjo, Tessie; Nicholas, Sheilah; Slaughter, Inee; Yamamoto, Akira; Zepeda, Ofelia
The Indigenous Language Institute (ILI) collaborates with indigenous language communities to combat language decline. ILI facilitates community-based language programs, increases public awareness of language endangerment, and disseminates information on language preservation and successful language revitalization programs. In response to numerous…
Visual speech segmentation: using facial cues to locate word boundaries in continuous speech
Mitchel, Aaron D.; Weiss, Daniel J.
2014-01-01
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577
Simulation of Planetary Formation using Python
NASA Astrophysics Data System (ADS)
Bufkin, James; Bixler, David
2015-03-01
A program to simulate planetary formation was developed in the Python programming language. The program consists of randomly placed and massed bodies surrounding a central massive object in order to approximate a protoplanetary disk. The orbits of these bodies are time-stepped, with accelerations, velocities and new positions calculated in each step. Bodies are allowed to merge if their disks intersect. Numerous parameters (orbital distance, masses, number of particles, etc.) were varied in order to optimize the program. The program uses an iterative difference equation approach to solve the equations of motion using a kinematic model. Conservation of energy and angular momentum are not specifically enforced, but conservation of momentum is enforced during the merging of bodies. The initial program was created in Visual Python (VPython), but the current intention is to allow for higher particle counts and faster processing by utilizing PyOpenCL and PyOpenGL. Current results and progress will be reported.
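Based only on the description above, a minimal Python sketch of this kind of scheme might look as follows (all parameter values and structure are illustrative, not the authors' code): pairwise gravitational accelerations drive a simple kinematic update, and bodies that approach within a merge radius are combined while conserving momentum.

```python
# Minimal sketch of the kind of time-stepped scheme described above: pairwise
# gravitational accelerations drive a kinematic update, and bodies that come
# within a merge radius are combined while conserving momentum. All parameter
# values here are illustrative, not those of the authors' program.
import math, random

G, DT, MERGE_RADIUS = 1.0, 0.01, 0.05   # simulation units

def make_disk(n, central_mass=1000.0):
    bodies = [{"m": central_mass, "x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0}]
    for _ in range(n):
        r, theta = random.uniform(1.0, 5.0), random.uniform(0.0, 2.0 * math.pi)
        v = math.sqrt(G * central_mass / r)          # roughly circular orbit
        bodies.append({"m": random.uniform(0.01, 0.1),
                       "x": r * math.cos(theta), "y": r * math.sin(theta),
                       "vx": -v * math.sin(theta), "vy": v * math.cos(theta)})
    return bodies

def step(bodies):
    for b in bodies:                                  # accelerations from gravity
        ax = ay = 0.0
        for o in bodies:
            if o is b:
                continue
            dx, dy = o["x"] - b["x"], o["y"] - b["y"]
            d = math.hypot(dx, dy) + 1e-9
            a = G * o["m"] / (d * d)
            ax, ay = ax + a * dx / d, ay + a * dy / d
        b["vx"] += ax * DT
        b["vy"] += ay * DT
    for b in bodies:                                  # kinematic position update
        b["x"] += b["vx"] * DT
        b["y"] += b["vy"] * DT

def merge(bodies):
    merged, used = [], set()
    for i, b in enumerate(bodies):
        if i in used:
            continue
        for j in range(i + 1, len(bodies)):
            o = bodies[j]
            if j in used or math.hypot(o["x"] - b["x"], o["y"] - b["y"]) >= MERGE_RADIUS:
                continue
            m = b["m"] + o["m"]                       # momentum-conserving merge
            b = {"m": m,
                 "x": (b["x"] * b["m"] + o["x"] * o["m"]) / m,
                 "y": (b["y"] * b["m"] + o["y"] * o["m"]) / m,
                 "vx": (b["vx"] * b["m"] + o["vx"] * o["m"]) / m,
                 "vy": (b["vy"] * b["m"] + o["vy"] * o["m"]) / m}
            used.add(j)
        merged.append(b)
    return merged

bodies = make_disk(60)
for _ in range(300):
    step(bodies)
    bodies = merge(bodies)
print(f"{len(bodies)} bodies remain after accretion")
```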
Foreign Language Day--A Living Language Experience.
ERIC Educational Resources Information Center
Wood, Paul W.
St. Bonaventure University holds a Language Day each spring, hosting some 3,900 area junior high and high school students. The buildings and facilities of the university campus are used, and activities include language competitions (exhibits, interpretative readings, language productions, audio-visual presentations and essays); a fiesta; foreign…
Are preservice teachers prepared to teach struggling readers?
Washburn, Erin K; Joshi, R Malatesha; Binks Cantrell, Emily
2011-06-01
Reading disabilities such as dyslexia, a specific learning disability that affects an individual's ability to process written language, are estimated to affect 15-20% of the general population. Consequently, elementary school teachers encounter students who struggle with inaccurate or slow reading, poor spelling, poor writing, and other language processing difficulties. However, recent evidence may suggest that teacher preparation programs are not providing preservice teachers with information about basic language constructs and other components related to scientifically based reading instruction. As a consequence preservice teachers have not exhibited explicit knowledge of such concepts in previous studies. Few studies have sought to assess preservice teachers' knowledge about dyslexia in conjunction with knowledge of basic language concepts. The purpose of the present study was to examine elementary school preservice teachers' knowledge of basic language constructs and their perceptions and knowledge about dyslexia. Findings from the present study suggest that preservice teachers, on average, are able to display implicit skills related to certain basic language constructs (i.e., syllable counting), but fail to demonstrate explicit knowledge of others (i.e., phonics principles). Also, preservice teachers seem to hold the common misconception that dyslexia is a visual perception deficit rather than a problem with phonological processing. Implications for future research as well as teacher preparation are discussed.
An error-resistant linguistic protocol for air traffic control
NASA Technical Reports Server (NTRS)
Cushing, Steven
1989-01-01
The research results described here are intended to enhance the effectiveness of the DATALINK interface that is scheduled by the Federal Aviation Administration (FAA) to be deployed during the 1990s to improve the safety of various aspects of aviation. While voice has a natural appeal as the form of communication that people find most convenient, both among humans and between humans and machines, the complexity and flexibility of natural language are problematic because of the confusions and misunderstandings that can arise as a result of ambiguity, unclear reference, intonation peculiarities, implicit inference, and presupposition. The DATALINK interface will avoid many of these problems by replacing voice with vision and speech with written instructions. This report describes results achieved to date on an on-going research effort to refine the protocol of the DATALINK system so as to avoid many of the linguistic problems that still remain in the visual mode. In particular, a working prototype DATALINK simulator system has been developed consisting of an unambiguous, context-free grammar and parser, based on the current air-traffic-control language and incorporated into a visual display involving simulated touch-screen buttons and three levels of menu screens. The system is written in the C programming language and runs on the Macintosh II computer. After reviewing work already done on the project, new tasks for further development are described.
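The report's actual grammar and C implementation are not reproduced here; the Python sketch below uses an invented, much smaller grammar fragment only to illustrate the general idea of an unambiguous, context-free clearance grammar parsed deterministically by recursive descent.

```python
# Toy recursive-descent parser for an invented fragment of clearance
# phraseology, illustrating the idea of an unambiguous, context-free grammar;
# the actual DATALINK grammar is far larger and is written in C.
#
#   clearance -> callsign command
#   command   -> "climb" "to" altitude | "turn" direction "heading" number
#   altitude  -> "flight" "level" number
def parse_clearance(tokens):
    callsign, rest = tokens[0], tokens[1:]
    return {"callsign": callsign, "command": parse_command(rest)}

def parse_command(tokens):
    if tokens[:2] == ["climb", "to"]:
        return {"action": "climb", "altitude": parse_altitude(tokens[2:])}
    if tokens[0] == "turn" and tokens[2] == "heading":
        return {"action": "turn", "direction": tokens[1], "heading": int(tokens[3])}
    raise SyntaxError(f"unrecognized command: {' '.join(tokens)}")

def parse_altitude(tokens):
    if tokens[:2] == ["flight", "level"]:
        return {"flight_level": int(tokens[2])}
    raise SyntaxError(f"unrecognized altitude phrase: {' '.join(tokens)}")

print(parse_clearance("UA123 climb to flight level 330".split()))
print(parse_clearance("DL456 turn left heading 270".split()))
```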
PONS, FERRAN; ANDREU, LLORENC.; SANZ-TORRENT, MONICA; BUIL-LEGAZ, LUCIA; LEWKOWICZ, DAVID J.
2014-01-01
Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648
Query2Question: Translating Visualization Interaction into Natural Language.
Nafari, Maryam; Weaver, Chris
2015-06-01
Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.
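The Q2Q architecture itself is not detailed in the abstract; the toy Python sketch below (event fields and question templates invented for illustration) shows only the core idea of translating logged interaction events into natural-language questions via templates, whereas the real system uses richer NLG techniques and domain knowledge.

```python
# Toy illustration of translating logged visualization interactions into
# natural-language questions. Event fields and templates are invented; the
# real Q2Q system uses richer NLG techniques and domain knowledge.
TEMPLATES = {
    "filter": "Which {items} have {attribute} {operator} {value}?",
    "sort":   "How do the {items} rank by {attribute}?",
    "brush":  "What characterizes the {items} selected in the {view} view?",
}

def event_to_question(event):
    template = TEMPLATES.get(event["type"])
    if template is None:
        return f"(no question template for interaction '{event['type']}')"
    return template.format(**event["params"])

interaction_log = [
    {"type": "filter", "params": {"items": "counties", "attribute": "population",
                                  "operator": "greater than", "value": "100,000"}},
    {"type": "sort", "params": {"items": "counties", "attribute": "median income"}},
]
for entry in interaction_log:
    print(event_to_question(entry))
```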
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J
2013-06-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.
NaviCell Web Service for network-based data visualization.
Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P A; Barillot, Emmanuel; Zinovyev, Andrei
2015-07-01
Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of 'omics' data which implements several data visual representation methods and utilities for combining them together. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of the molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
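The abstract notes that server-mode tasks reduce to standard HTTP calls with bindings for Python and R; the hedged Python sketch below uses only the standard library with a placeholder URL, command name, and payload fields (not the documented NaviCell API) to show what such automation typically looks like.

```python
# Sketch of driving a map server through plain HTTP, as the server mode allows.
# The URL, command name, and payload fields below are placeholders invented for
# illustration, not the documented NaviCell API; the point is only that such
# automation reduces to ordinary REST calls from any language.
import json
import urllib.request

BASE_URL = "https://example.org/navicell-like/api"   # placeholder server

def post_command(session_id, command, data):
    payload = json.dumps({"session": session_id, "command": command,
                          "data": data}).encode("utf-8")
    request = urllib.request.Request(
        BASE_URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# e.g. send expression values to be rendered as a heatmap on a pathway map
expression = {"TP53": 2.1, "MYC": -1.4, "EGFR": 0.7}
print(post_command("demo-session", "import_datatable", expression))
```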
Modeling and visualizing borehole information on virtual globes using KML
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing
2014-01-01
Advances in virtual globes and Keyhole Markup Language (KML) are providing the Earth scientists with the universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, the level-of-detail based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualize, integrate and disseminate borehole information on the Internet. The method we have developed has potential use in societal service of geological information.
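As a hedged illustration of the simplest conversion step such a tool performs, the Python sketch below (record fields assumed for illustration; the real Borehole2KML also builds contact and strata models at multiple levels of detail) writes borehole locations as KML point placemarks.

```python
# Simplified sketch of the coarsest conversion step such a tool performs:
# writing rows from a borehole table as KML point placemarks at the drilling
# locations. The record fields are assumed for illustration; the real tool
# also builds contact and strata models at multiple levels of detail.
def borehole_to_placemark(record):
    return (
        "  <Placemark>\n"
        f"    <name>{record['id']}</name>\n"
        f"    <description>depth {record['depth_m']} m</description>\n"
        "    <Point>\n"
        f"      <coordinates>{record['lon']},{record['lat']},0</coordinates>\n"
        "    </Point>\n"
        "  </Placemark>\n"
    )

def boreholes_to_kml(records):
    body = "".join(borehole_to_placemark(r) for r in records)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
            f"{body}</Document>\n</kml>\n")

boreholes = [
    {"id": "BH-001", "lon": 121.47, "lat": 31.23, "depth_m": 55.0},
    {"id": "BH-002", "lon": 121.51, "lat": 31.25, "depth_m": 62.5},
]
with open("boreholes.kml", "w", encoding="utf-8") as f:
    f.write(boreholes_to_kml(boreholes))
```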
NaviCell Web Service for network-based data visualization
Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P. A.; Barillot, Emmanuel; Zinovyev, Andrei
2015-01-01
Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of ‘omics’ data which implements several data visual representation methods and utilities for combining them together. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of the molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. PMID:25958393
NASA Technical Reports Server (NTRS)
1995-01-01
The Interactive Data Language (IDL), developed by Research Systems, Inc., is a tool for scientists to investigate their data without having to write a custom program for each study. IDL is based on the Mariners Mars spectral Editor (MMED) developed for studies from NASA's Mars spacecraft flights. The company has also developed Environment for Visualizing Images (ENVI), an image processing system written in IDL for easily analyzing remotely sensed data. The Visible Human CD, another Research Systems product, is the first complete digital reference of photographic images for exploring human anatomy.
1993-04-01
the use of thus seems more natural. It eliminates the parameter a symbolic manipulation program. Their robustness is questionable. variance and... and learning (UMd/GMU), IU and reasoning (ISI/USC), IU and natural language (SUNY Buffalo), and IU and neural nets (new BAA; contracts to be awarded... visual navigation is defined as different natures. Among these are theoretical questions, the process of motion control based on an analysis of im
Freeware for reporting radiation dosimetry following the administration of radiopharmaceuticals.
Gómez Perales, Jesús Luis; García Mendoza, Antonio
2015-09-01
This work describes the development of a software application for reporting patient radiation dosimetry following radiopharmaceutical administration. The resulting report may be included within the patient's medical records. The application was developed in the Visual Basic programming language. The dosimetric calculations are based on the values given by the International Commission on Radiological Protection (ICRP). The software is available in both Spanish and English and can be downloaded at no cost from www.radiopharmacy.net. Copyright © 2015 Elsevier Ltd. All rights reserved.
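The application's internals are not described beyond its reliance on ICRP values; a minimal Python sketch of the underlying calculation, effective dose equal to administered activity multiplied by the ICRP effective-dose coefficient, is shown below with illustrative coefficient values (consult the ICRP publications for authoritative numbers).

```python
# Minimal sketch of the calculation behind such reports: effective dose equals
# administered activity multiplied by the ICRP effective-dose coefficient for
# the radiopharmaceutical. The coefficients below are assumed example values
# for adults; consult the ICRP publications for authoritative numbers.
DOSE_COEFFICIENT_MSV_PER_MBQ = {
    "F-18 FDG": 0.019,      # illustrative adult value
    "Tc-99m MDP": 0.0057,   # illustrative adult value
}

def effective_dose_msv(radiopharmaceutical, administered_activity_mbq):
    coefficient = DOSE_COEFFICIENT_MSV_PER_MBQ[radiopharmaceutical]
    return administered_activity_mbq * coefficient

activity_mbq = 300.0
dose = effective_dose_msv("F-18 FDG", activity_mbq)
print(f"F-18 FDG, {activity_mbq:.0f} MBq -> effective dose ~ {dose:.1f} mSv")
```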
Wu, Yiping; Liu, Shu-Guang
2012-01-01
The R programming language-Soil and Water Assessment Tool-Flexible Modeling Environment (R-SWAT-FME) (Wu and Liu, 2012) is a comprehensive modeling framework that adopts an R package, Flexible Modeling Environment (FME) (Soetaert and Petzoldt, 2010), for the Soil and Water Assessment Tool (SWAT) model (Arnold and others, 1998; Neitsch and others, 2005). This framework provides the functionalities of parameter identifiability, model calibration, and sensitivity and uncertainty analysis with instant visualization. This user's guide shows how to apply this framework to a customized SWAT project.
2012-09-01
... Language (GraphML). MPGrapher compiles well-formed XML files that conform to the yEd GraphML schema. These files will be opened and analyzed using ...
MONO FOR CROSS-PLATFORM CONTROL SYSTEM ENVIRONMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Hiroshi; Timossi, Chris
2006-10-19
Mono is an independent implementation of the .NET Framework by Novell that runs on multiple operating systems (including Windows, Linux and Macintosh) and allows any .NET compatible application to run unmodified. For instance, Mono can run programs with graphical user interfaces (GUI) developed with the C# language on Windows with Visual Studio (a full port of WinForm for Mono is in progress). We present the results of tests we performed to evaluate the portability of our control-system .NET applications from MS Windows to Linux.
ERIC Educational Resources Information Center
Rossetto, Marietta; Chiera-Macchia, Antonella
2011-01-01
This study investigated the use of comics (Cary, 2004) in a guided writing experience in secondary school Italian language learning. The main focus of the peer group interaction task included the exploration of visual sequencing and visual integration (Bailey, O'Grady-Jones, & McGown, 1995) using image and text to create a comic strip narrative in…
Framing Indigenous Languages as Secondary to Matrix Languages
ERIC Educational Resources Information Center
Meek, Barbra A.; Messing, Jacqueline
2007-01-01
Reversing language shift has proven to be difficult for many reasons. Although much of the literature has focused on educational practices, little research has attended to the visual presentation of language used in educational texts aimed at reversing shift. In this article, we compare language materials developed for two different language…
Visual Sequence Learning in Infancy: Domain-General and Domain-Specific Associations with Language
ERIC Educational Resources Information Center
Shafto, Carissa L.; Conway, Christopher M.; Field, Suzanne L.; Houston, Derek M.
2012-01-01
Research suggests that nonlinguistic sequence learning abilities are an important contributor to language development (Conway, Bauernschmidt, Huang, & Pisoni, 2010). The current study investigated visual sequence learning (VSL) as a possible predictor of vocabulary development in infants. Fifty-eight 8.5-month-old infants were presented with a…
ERIC Educational Resources Information Center
Duyck, Wouter; Van Assche, Eva; Drieghe, Denis; Hartsuiker, Robert J.
2007-01-01
Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment,…
Autorretratos en la Clase de Espanol (Self-Portraits in Spanish Classroom)
ERIC Educational Resources Information Center
Palos, Jose M.
2007-01-01
Foreign language teachers know the benefits of bringing art into their classrooms. Works of art are cultural artifacts that convey rich cultural perspectives. Integrating art into the foreign language curriculum facilitates visual learning, providing valuable opportunities to help the students develop visual thinking. Art can also be an effective…
ERIC Educational Resources Information Center
Tadic, Valerie; Pring, Linda; Dale, Naomi
2010-01-01
Background: Development of children with congenital visual impairment (VI) has been associated with vulnerable socio-communicative outcomes often bearing striking similarities to those of sighted children with autism. To date, very little is known about language and social communication in children with VI of normal intelligence. Methods: We…
Conversations about Visual Arts: Facilitating Oral Language
ERIC Educational Resources Information Center
Chang, Ni; Cress, Susan
2014-01-01
Visual arts, such as drawings, are attractive to most young children. Marks left on paper by young children contain meaning. Although it is known that children's oral language could be enhanced through communication with adults, rarely is there a series of dialogues between adults and young children about their drawings. Often heard instead…
Learning to Look for Language: Development of Joint Attention in Young Deaf Children
ERIC Educational Resources Information Center
Lieberman, Amy M.; Hatrak, Marla; Mayberry, Rachel I.
2014-01-01
Joint attention between hearing children and their caregivers is typically achieved when the adult provides spoken, auditory linguistic input that relates to the child's current visual focus of attention. Deaf children interacting through sign language must learn to continually switch visual attention between people and objects in order to achieve…
Visual Learning: A Learner Centered Approach to Enhance English Language Teaching
ERIC Educational Resources Information Center
Philominraj, Andrew; Jeyabalan, David; Vidal-Silva, Christian
2017-01-01
This article presents an empirical study carried out among the students of higher secondary schools to find out how English language learning occurs naturally in an environment where learners are encouraged by an appropriate method such as visual learning. The primary data was collected from 504 students with different pretested questionnaires. A…
Eye Movements Reveal the Dynamic Simulation of Speed in Language
ERIC Educational Resources Information Center
Speed, Laura J.; Vigliocco, Gabriella
2014-01-01
This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., "The lion ambled/dashed to the balloon"). Results showed that looking time to relevant objects in the visual scene was affected…
Teaching Coin Discrimination to Children with Visual Impairments
ERIC Educational Resources Information Center
Hanney, Nicole M.; Tiger, Jeffrey H.
2012-01-01
We taught 2 children with visual impairments to select a coin from an array using tactile cues after hearing its name and then to select a coin after hearing its value. Following the acquisition of these listener (receptive language) skills, we then observed the emergence of speaker (expressive language) skills without direct instruction.…
Visual and Auditory Input in Second-Language Speech Processing
ERIC Educational Resources Information Center
Hardison, Debra M.
2010-01-01
The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…
Mapping language to visual referents: Does the degree of image realism matter?
Saryazdi, Raheleh; Chambers, Craig G
2018-01-01
Studies of real-time spoken language comprehension have shown that listeners rapidly map unfolding speech to available referents in the immediate visual environment. This has been explored using various kinds of 2-dimensional (2D) stimuli, with convenience or availability typically motivating the choice of a particular image type. However, work in other areas has suggested that certain cognitive processes are sensitive to the level of realism in 2D representations. The present study examined the process of mapping language to depictions of objects that are more or less realistic, namely photographs versus clipart images. A custom stimulus set was first created by generating clipart images directly from photographs of real objects. Two visual world experiments were then conducted, varying whether referent identification was driven by noun or verb information. A modest benefit for clipart stimuli was observed during real-time processing, but only for noun-driven mappings. The results are discussed in terms of their implications for studies of visually situated language processing. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Altvater-Mackensen, Nicole; Grossmann, Tobias
2015-01-01
Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential looking paradigm, 44 German 6-month-olds' ability to detect mismatches between concurrently presented auditory and visual native vowels was tested. Outcomes were related to mothers' speech style and interactive behavior assessed during free play with their infant, and to infant-specific factors assessed through a questionnaire. Results show that mothers' and infants' social behavior modulated infants' preference for matching audiovisual speech. Moreover, infants' audiovisual speech perception correlated with later vocabulary size, suggesting a lasting effect on language development. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.
Willis, Suzi; Goldbart, Juliet; Stansfield, Jois
2014-07-01
To compare verbal short-term memory and visual working memory abilities of six children with congenital hearing impairment identified as having significant language learning difficulties with normative data from typically hearing children using standardized memory assessments. Six children with hearing loss aged 8-15 years were assessed on measures of verbal short-term memory (non-word and word recall) and visual working memory annually over a two-year period. All children had cognitive abilities within normal limits and used spoken language as the primary mode of communication. The language assessment scores at the beginning of the study revealed that all six participants exhibited delays of two years or more on standardized assessments of receptive and expressive vocabulary and spoken language. The children with hearing impairment scored significantly higher on the non-word recall task than on the "real" word recall task. They also exhibited significantly higher scores on visual working memory than those of the age-matched sample from the standardized memory assessment. Each of the six participants in this study displayed the same pattern of strengths and weaknesses in verbal short-term memory and visual working memory despite their very different chronological ages. The children's poor ability to recall single-syllable words relative to non-words is a clinical indicator of their difficulties in verbal short-term memory. However, the children with hearing impairment do not display generalized processing difficulties and indeed demonstrate strengths in visual working memory. The poor ability to recall words, in combination with difficulties with early word learning, may be an indicator of children with hearing impairment who will struggle to develop spoken language equal to that of their normally hearing peers. This early identification has the potential to allow for target-specific intervention that may remediate their difficulties. Copyright © 2014. Published by Elsevier Ireland Ltd.
Dynamic spatial organization of the occipito-temporal word form area for second language processing.
Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li
2017-08-01
Despite the left occipito-temporal region having shown consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks to visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of the visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than L1 processing in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, proficiency in L2 in Chinese-English is associated with assimilation to the native language mechanisms, whereas L2 in English-Chinese is associated with accommodation to second language mechanisms. Copyright © 2017. Published by Elsevier Ltd.
Optimal linguistic expression in negotiations depends on visual appearance.
Sakamoto, Maki; Kwon, Jinhwan; Tamada, Hikaru; Hirahara, Yumi
2018-01-01
We investigate the influence of the visual appearance of a negotiator on persuasiveness within the context of negotiations. Psychological experiments were conducted to quantitatively analyze the relationship between visual appearance and the use of language. Male and female participants were shown three female and male photographs, respectively. They were asked to report how they felt about each photograph using a seven-point semantic differential (SD) scale for six affective factors (positive impression, extraversion, intelligence, conscientiousness, emotional stability, and agreeableness). Participants then answered how they felt about each negotiation scenario (they were presented with pictures and a situation combined with negotiation sentences) using a seven-point SD scale for seven affective factors (positive impression, extraversion, intelligence, conscientiousness, emotional stability, agreeableness, and degree of persuasion). Two experiments were conducted using different participant groups depending on the negotiation situations. Photographs with good or bad appearances were found to show high or low degrees of persuasion, respectively. A multiple regression equation was obtained, indicating the importance of the three language factors (euphemistic, honorific, and sympathy expressions) to impressions made during negotiation. The result shows that there are optimal negotiation sentences based on various negotiation factors, such as visual appearance and use of language. For example, persons with good appearance might worsen their impression during negotiations by using certain language, although their initial impression was positive, and persons with bad appearance could effectively improve their impressions in negotiations through their use of language, although the final impressions of their negotiation counterpart might still be more negative than those for persons with good appearance. In contrast, the impressions made by persons of normal appearance were not easily affected by their use of language. The results of the present study have significant implications for future studies of effective negotiation strategies considering visual appearance as well as gender.
Field: a new meta-authoring platform for data-intensive scientific visualization
NASA Astrophysics Data System (ADS)
Downie, M.; Ameres, E.; Fox, P. A.; Goebel, J.; Graves, A.; Hendler, J.
2012-12-01
This presentation will demonstrate a new platform for data-intensive scientific visualization, called Field, that rethinks the problem of visual data exploration. Several new opportunities for scientific visualization present themselves at this moment, and we believe that, taken together, they may catalyze a transformation of the practice of science and begin to seed a technical culture within science that fuses data analysis, programming, and myriad visual strategies. The principal challenges now lie at the integrative level, for many fundamental technical components of our field are well understood and widely available. File formats from CSV through HDF all have broad library support; low-level high-performance graphics APIs (OpenGL) are in a period of stable growth; and a dizzying ecosystem of analysis and machine learning libraries abounds. The hardware of computer graphics offers unprecedented computing power within commodity components, and programming languages and platforms are coalescing around a core set of umbrella runtimes. Each of these trends is set to continue: computer graphics hardware is developing at a super-Moore's-law rate, and trends in publication and dissemination point only towards increasing access to code and data. The critical opportunity for scientific visualization is, we maintain, not in developing a new statistical library, nor a new tool centered on a particular technique, but rather a new visual, "live" programming environment that is promiscuous in its scope. We can identify the necessary methodological practices and traditions not in science or engineering but in the "live-coding" practices prevalent in the fields of digital art and design. We can define this practice as an approach to programming that is live, iterative, integrative, speculative and exploratory. "Live" because it is exclusively practiced in real time (often during performance); "iterative" because intermediate programs and their visual results are constantly being made and remade en route; "speculative" because these programs and images result from a mode of inquiry into image-making not unlike that of hypothesis formation and testing; "integrative" because this style draws deeply upon the libraries of algorithms and materials available online today; and "exploratory" because the results of these speculations are inherently open to the data and to outcomes unforeseen at the outset. To this end our development environment, Field, comprises a minimal core and a powerful plug-in system that can be extended from within the environment itself. By providing a hybrid text editor that can incorporate text-based programming alongside graphical user-interface elements, its flexible and extensible interface provides space as necessary for notation, visualization, interface construction, and introspection. In addition, it provides an advanced GPU-accelerated graphics system ideal for large-scale data visualization. Since Field was created in the context of widely divergent interdisciplinary projects, its aim is to give its users not only the ability to work rapidly, but to shape their Field environment extensively and flexibly for their own demands.
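The abstract's notion of "live, iterative" programming can be pictured with a small generic sketch, written here in plain Python rather than Field's actual API (the snippet variable, the rendering loop, and the timings are illustrative assumptions only): a fragment of user-editable code is re-evaluated on every pass, so changing the fragment changes the picture while the loop keeps running.

    import math, time

    # Generic illustration of a live, iterative rendering loop; not Field's API.
    snippet = "y = [math.sin(x / 4.0 + t) for x in range(60)]"  # editable fragment

    def render(y, rows=12):
        # Crude character plot: one row per amplitude level, top to bottom.
        for r in range(rows, -1, -1):
            level = -1.0 + 2.0 * r / rows
            print("".join("*" if v >= level else " " for v in y))

    for t in range(3):                    # stand-in for an endless live loop
        scope = {"math": math, "t": t}
        exec(snippet, scope)              # re-evaluate the current fragment
        render(scope["y"])
        time.sleep(0.1)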
A Visual Interface for Querying Heterogeneous Phylogenetic Databases.
Jamil, Hasan M
2017-01-01
Despite the recent growth in the number of phylogenetic databases, access to this wealth of resources remains largely driven by tool- or form-based interfaces. It is our thesis that the flexibility afforded by declarative query languages may offer the opportunity to access these repositories in a better way, and to use such a language to pose truly powerful queries in unprecedented ways. In this paper, we propose a substantially enhanced closed visual query language, called PhyQL, that can be used to query phylogenetic databases represented in a canonical form. The canonical representation presented helps capture most phylogenetic tree formats in a convenient way, and is used as the storage model for our PhyloBase database, for which PhyQL serves as the query language. We have implemented a visual interface for end users to pose PhyQL queries using visual icons and drag-and-drop operations defined over them. Once a query is posed, the interface translates the visual query into a Datalog query for execution over the canonical database. Responses are returned as hyperlinks to phylogenies that can be viewed in several formats using the tree viewers supported by PhyloBase. Results cached in the PhyQL buffer allow secondary querying on the computed results, making it a truly powerful querying architecture.
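To make the idea of querying a canonical tree representation concrete, here is a minimal sketch in Python; the relation name and node labels are hypothetical, and PhyQL itself would express this query visually and translate it to Datalog rather than run Python. A phylogeny is stored as parent-child facts and an ancestor query is answered by the usual transitive-closure recursion.

    # Hypothetical canonical form: a phylogeny stored as (parent, child) facts.
    edges = {("root", "A"), ("root", "B"), ("A", "A1"), ("A", "A2"), ("B", "B1")}

    def ancestors(node):
        # Evaluates the Datalog-style rules
        #   ancestor(X, Y) :- edge(X, Y).
        #   ancestor(X, Y) :- edge(X, Z), ancestor(Z, Y).
        found = set()
        frontier = {parent for parent, child in edges if child == node}
        while frontier:
            found |= frontier
            frontier = {p for p, c in edges for f in frontier if c == f} - found
        return found

    print(ancestors("A2"))   # {'A', 'root'}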
Characteristics of Chinese-English bilingual dyslexia in right occipito-temporal lesion.
Ting, Simon Kang Seng; Chia, Pei Shi; Chan, Yiong Huak; Kwek, Kevin Jun Hong; Tan, Wilnard; Hameed, Shahul; Tan, Eng-King
2017-11-01
Current literature suggests that right hemisphere lesions produce predominant spatial-related dyslexic error in English speakers. However, little is known regarding such lesions in Chinese speakers. In this paper, we describe the dyslexic characteristics of a Chinese-English bilingual patient with a right posterior cortical lesion. He was found to have profound spatial-related errors during his English word reading, in both real and non-words. During Chinese word reading, there was significantly less error compared to English, probably due to the ideographic nature of the Chinese language. He was also found to commit phonological-like visual errors in English, characterized by error responses that were visually similar to the actual word. There was no significant difference in visual errors during English word reading compared with Chinese. In general, our patient's performance in both languages appears to be consistent with the current literature on right posterior hemisphere lesions. Additionally, his performance also likely suggests that the right posterior cortical region participates in the visual analysis of orthographical word representation, both in ideographical and alphabetic languages, at least from a bilingual perspective. Future studies should further examine the role of the right posterior region in initial visual analysis of both languages. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cross-species 3D virtual reality toolbox for visual and cognitive experiments.
Doucet, Guillaume; Gulli, Roberto A; Martinez-Trujillo, Julio C
2016-06-15
Although simplified visual stimuli, such as dots or gratings presented on homogeneous backgrounds, provide strict control over the stimulus parameters during visual experiments, they fail to approximate visual stimulation in natural conditions. Adoption of virtual reality (VR) in neuroscience research has been proposed to circumvent this problem, by combining strict control of experimental variables and behavioral monitoring within complex and realistic environments. We have created a VR toolbox that maximizes experimental flexibility while minimizing implementation costs. A free VR engine (Unreal 3) has been customized to interface with any control software via text commands, allowing seamless introduction into pre-existing laboratory data acquisition frameworks. Furthermore, control functions are provided for the two most common programming languages used in visual neuroscience: Matlab and Python. The toolbox offers the millisecond time resolution necessary for electrophysiological recordings and is flexible enough to support cross-species usage across a wide range of paradigms. Unlike previously proposed VR solutions whose implementation is complex and time-consuming, our toolbox requires minimal customization or technical expertise to interface with pre-existing data acquisition frameworks as it relies on already familiar programming environments. Moreover, as it is compatible with a variety of display and input devices, identical VR testing paradigms can be used across species, from rodents to humans. This toolbox facilitates the addition of VR capabilities to any laboratory without perturbing pre-existing data acquisition frameworks, or requiring any major hardware changes. Copyright © 2016. All rights reserved.
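The abstract describes driving the VR engine from any control software "via text commands"; the sketch below shows what such a text-command round trip might look like from Python. The host, port, command names, and reply format are assumptions for illustration, not the toolbox's documented protocol.

    import socket

    HOST, PORT = "127.0.0.1", 9000   # assumed address of the VR engine listener

    with socket.create_connection((HOST, PORT), timeout=1.0) as sock:
        for command in ("LOAD_ENVIRONMENT arena_01",
                        "PLACE_OBJECT banana 1.5 0.0 2.0",
                        "START_TRIAL 42"):
            sock.sendall((command + "\n").encode("ascii"))   # send one text command
            reply = sock.recv(1024).decode("ascii").strip()  # read the engine's reply
            print(command, "->", reply)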
Effects of Hearing Status and Sign Language Use on Working Memory
Sarchet, Thomastine; Trani, Alexandra
2016-01-01
Deaf individuals have been found to score lower than hearing individuals across a variety of memory tasks involving both verbal and nonverbal stimuli, particularly those requiring retention of serial order. Deaf individuals who are native signers, meanwhile, have been found to score higher on visual-spatial memory tasks than on verbal-sequential tasks and higher on some visual-spatial tasks than hearing nonsigners. However, hearing status and preferred language modality (signed or spoken) frequently are confounded in such studies. That situation is resolved in the present study by including deaf students who use spoken language and sign language interpreting students (hearing signers) as well as deaf signers and hearing nonsigners. Three complex memory span tasks revealed overall advantages for hearing signers and nonsigners over both deaf signers and deaf nonsigners on 2 tasks involving memory for verbal stimuli (letters). There were no differences among the groups on the task involving visual-spatial stimuli. The results are consistent with and extend recent findings concerning the effects of hearing status and language on memory and are discussed in terms of language modality, hearing status, and cognitive abilities among deaf and hearing individuals. PMID:26755684
Pinky Extension as a Phonestheme in Mongolian Sign Language
ERIC Educational Resources Information Center
Healy, Christina
2011-01-01
Mongolian Sign Language (MSL) is a visual-gestural language that developed from multiple languages interacting as a result of both geographic proximity and political relations and of the natural development of a communication system by deaf community members. Similar to the phonological systems of other signed languages, MSL combines handshapes,…
Code of Federal Regulations, 2010 CFR
2010-04-01
25 CFR 39.132 (Indian School Equalization Formula, Language Development Programs) - Can a school integrate Language Development programs into its regular instructional program? A school may offer Language Development programs to students as part of its...
Code of Federal Regulations, 2011 CFR
2011-04-01
25 CFR 39.132 (Indian School Equalization Formula, Language Development Programs) - Can a school integrate Language Development programs into its regular instructional program? A school may offer Language Development programs to students as part of its...
Communication training in mute autistic adolescents using the written word.
LaVigna, G W
1977-06-01
The expressive and receptive use of three written words was taught to three mute autistic adolescents using a procedure based on Terrace's errorless discrimination model and Premack's language training with chimps. Expressive language was measured by the subject's selection of the appropriate word card from among the available alternatives when the corresponding object was presented. Receptive language was measured by the subject's selection of the appropriate object from among the available alternatives when the corresponding word card was presented. The sequence of the presentations and the order of placement of the available alternatives were randomized. The three subjects required 979, 1,791, and 1,644 trials, respectively, to master both the expressive and receptive use of the three words. The correct response rates for the three subjects over the entire training program were 92, 92, and 90%, respectively. It was concluded that, as concrete visual symbols, written words may provide a viable communication system for the mute autistic. The implications for treatment are discussed and suggestions for future research are made.
Event Processing in the Visual World: Projected Motion Paths during Spoken Sentence Comprehension
ERIC Educational Resources Information Center
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-01-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the…
Language Networks in Anophthalmia: Maintained Hierarchy of Processing in "Visual" Cortex
ERIC Educational Resources Information Center
Watkins, Kate E.; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M.; Smith, Stephen M.; Ragge, Nicola; Bridge, Holly
2012-01-01
Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an…
ERIC Educational Resources Information Center
Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.
2013-01-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…
Language and Play in Students with Multiple Disabilities and Visual Impairments or Deaf-Blindness
ERIC Educational Resources Information Center
Pizzo, Lianna; Bruce, Susan M.
2010-01-01
This article investigates the relationships between play and language development in students with multiple disabilities and visual impairments or deaf-blindness. The findings indicate that students with higher levels of communication demonstrate more advanced play skills and that the use of play-based assessment and exposure to symbolic play are…
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Liu, I-Hsiung
1985-01-01
This Working Paper Series entry represents a collection of presentation visuals associated with the companion report entitled Natural Language Query System Design for Interactive Information Storage and Retrieval Systems, USL/DBMS NASA/RECON Working Paper Series report number DBMS.NASA/RECON-17.
ERIC Educational Resources Information Center
Cousin, Emilie; Perrone, Marcela; Baciu, Monica
2009-01-01
This behavioral study aimed at assessing the effect of two variables on the degree of hemispheric specialization for language. One of them was the "grapho-phonemic translation (transformation)" (letter-sound mapping) and the other was the participants' "gender". The experiment was conducted with healthy volunteers. A divided visual field procedure…
Neural Correlates of Morphological Decomposition in a Morphologically Rich Language: An fMRI Study
ERIC Educational Resources Information Center
Lehtonen, Minna; Vorobyev, Victor A.; Hugdahl, Kenneth; Tuokkola, Terhi; Laine, Matti
2006-01-01
By employing visual lexical decision and functional MRI, we studied the neural correlates of morphological decomposition in a highly inflected language (Finnish) where most inflected noun forms elicit a consistent processing cost during word recognition. This behavioral effect could reflect suffix stripping at the visual word form level and/or…
Efficiency of Lexical Access in Children with Autism Spectrum Disorders: Does Modality Matter?
ERIC Educational Resources Information Center
Harper-Hill, Keely; Copland, David; Arnott, Wendy
2014-01-01
The provision of visual support to individuals with an autism spectrum disorder (ASD) is widely recommended. We explored one mechanism underlying the use of visual supports: efficiency of language processing. Two groups of children, one with and one without an ASD, participated. The groups had comparable oral and written language skills and…
Visual Speech Perception in Children with Language Learning Impairments
ERIC Educational Resources Information Center
Knowland, Victoria C. P.; Evans, Sam; Snell, Caroline; Rosen, Stuart
2016-01-01
Purpose: The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face. Method: In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with…
The Effect of Visual Variability on the Learning of Academic Concepts
ERIC Educational Resources Information Center
Bourgoyne, Ashley; Alt, Mary
2017-01-01
Purpose: The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Method: Students with NL (n = 11) and LLD (n = 11) participated in a computer-based…
Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language
ERIC Educational Resources Information Center
Norman, Tal; Degani, Tamar; Peleg, Orna
2017-01-01
The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…
PsyToolkit: a software package for programming psychological experiments using Linux.
Stoet, Gijsbert
2010-11-01
PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the GNU Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light-emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor.
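As a rough picture of what Example 1's stimulus-response compatibility experiment involves, the sketch below implements one trial in plain Python; it is not written in PsyToolkit's own scripting language, and the stimulus layout, response keys, and timing method are assumptions for illustration.

    import random, time

    def run_trial():
        # A Simon-style compatibility trial: the stimulus side may or may not
        # match the side of the required response.
        side = random.choice(["left", "right"])        # where the arrow appears
        direction = random.choice(["left", "right"])   # which response is correct
        print(f"Arrow on the {side}, pointing {direction}. Type l or r: ", end="")
        start = time.perf_counter()
        response = input().strip().lower()
        rt_ms = (time.perf_counter() - start) * 1000.0
        return {
            "compatible": side == direction,
            "correct": response.startswith(direction[0]),
            "rt_ms": rt_ms,
        }

    print(run_trial())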
25 CFR 39.136 - What is the WSU for Language Development programs?
Code of Federal Regulations, 2011 CFR
2011-04-01
Indian School Equalization Formula, Language Development Programs, § 39.136: Language Development programs are funded at 0.13 WSUs per student.
Östling, Robert; Börstell, Carl; Courtaux, Servane
2018-01-01
We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions by non-signers. The implications of our results are both theoretical and methodological. With regard to theoretical implications, we corroborate previous research by demonstrating and quantifying, using a much larger material than previously available, the iconic nature of languages in the visual modality. As for the methodological implications, we show how automatic methods are, in fact, useful for performing large-scale analysis of sign language data, to a high level of accuracy, as indicated by our manual error analysis. PMID:29867684
MATLAB implementation of a dynamic clamp with bandwidth >125 kHz capable of generating INa at 37°C
Clausen, Chris; Valiunas, Virginijus; Brink, Peter R.; Cohen, Ira S.
2012-01-01
We describe the construction of a dynamic clamp with bandwidth >125 kHz that utilizes a high-performance, yet low-cost, standard home/office PC interfaced with a high-speed (16-bit) data acquisition module. High bandwidth is achieved by exploiting recently available software advances (code-generation technology, optimized real-time kernel). Dynamic-clamp programs are constructed using Simulink, a visual programming language. Blocks for computation of membrane currents are written in the high-level MATLAB language; no programming in C is required. The instrument can be used in single- or dual-cell configurations, with the capability to modify programs while experiments are in progress. We describe an algorithm for computing the fast transient Na+ current (INa) in real time, and test its accuracy and stability using rate constants appropriate for 37°C. We then construct a program capable of supplying three currents to a cell preparation: INa, the hyperpolarizing-activated inward pacemaker current (If), and an inward-rectifier K+ current (IK1). The program corrects for the IR drop due to electrode current flow, and also records all voltages and currents. We tested this program on dual patch-clamped HEK293 cells where the dynamic clamp controls a current-clamp amplifier and a voltage-clamp amplifier controls membrane potential, and on current-clamped HEK293 cells where the dynamic clamp produces spontaneous pacing behavior exhibiting Na+ spikes in otherwise passive cells. PMID:23224681
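For readers unfamiliar with what a membrane-current block computes, the sketch below advances a Hodgkin-Huxley-style fast Na+ current by one time step in Python. The rate constants are the classic squid-axon textbook values, not the 37°C constants developed in the paper, and the conductance and reversal potential are illustrative defaults.

    import math

    def ina_step(V, m, h, dt, g_na=120.0, e_na=50.0):
        # V in mV, dt in ms; returns the Na+ current and updated gate states.
        x = (V + 40.0) / 10.0
        # alpha_m has a removable singularity at V = -40 mV; take the limit there.
        alpha_m = 1.0 if abs(x) < 1e-7 else x / (1.0 - math.exp(-x))
        beta_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
        alpha_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
        beta_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
        # Forward-Euler update of the activation (m) and inactivation (h) gates.
        m += dt * (alpha_m * (1.0 - m) - beta_m * m)
        h += dt * (alpha_h * (1.0 - h) - beta_h * h)
        i_na = g_na * m ** 3 * h * (V - e_na)   # current to inject into the cell
        return i_na, m, h

    print(ina_step(V=-55.0, m=0.05, h=0.6, dt=0.01))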
ERIC Educational Resources Information Center
Teschner, Richard V., Ed.
This collection of papers includes: "Foreign Language Testing Today: Issues in Language Program Direction" (Frank Nuessel); "Assessing the Problems of Assessment" (M. Peter Hagiwara); "Testing in Foreign Language Programs and Testing Programs in Foreign Language Departments: Reflections and Recommendations" (Elizabeth…
25 CFR 39.131 - What is a Language Development Program?
Code of Federal Regulations, 2014 CFR
2014-04-01
Indian School Equalization Formula, Language Development Programs, § 39.131: A Language Development program is one that serves students who either: (a...
25 CFR 39.131 - What is a Language Development Program?
Code of Federal Regulations, 2013 CFR
2013-04-01
Indian School Equalization Formula, Language Development Programs, § 39.131: A Language Development program is one that serves students who either: (a...
25 CFR 39.131 - What is a Language Development Program?
Code of Federal Regulations, 2012 CFR
2012-04-01
Indian School Equalization Formula, Language Development Programs, § 39.131: A Language Development program is one that serves students who either: (a...
ERIC Educational Resources Information Center
Heilenman, L. Kathy, Ed.
This collection of papers is divided into two parts. After "Introduction" (L. Kathy Heilenman), Part 1, "Research and Language Program Directors: The Relationship," includes "Research Domains and Language Program Direction" (Bill VanPatten); "Language Program Direction and the Modernist Agenda" (Celeste…
Learning of grammar-like visual sequences by adults with and without language-learning disabilities.
Aguilar, Jessica M; Plante, Elena
2014-08-01
Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed or deviated from the grammar. In Study 2, a 2nd sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.
Lévy-like diffusion in eye movements during spoken-language comprehension.
Stephen, Damian G; Mirman, Daniel; Magnuson, James S; Dixon, James A
2009-05-01
This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been convention to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
Situated sentence processing: the coordinated interplay account and a neurobehavioral model.
Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R
2010-03-01
Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world. Copyright 2009 Elsevier Inc. All rights reserved.
Reading and Language Learning: Crosslinguistic Constraints on Second Language Reading Development
ERIC Educational Resources Information Center
Koda, Keiko
2007-01-01
The ultimate goal of reading is to construct text meaning based on visually encoded information. Essentially, it entails converting print into language and then to the message intended by the author. It is hardly accidental, therefore, that, in all languages, reading builds on oral language competence and that learning to read uniformly requires…
ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.
Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y
2008-08-12
New systems biology studies require researchers to understand how interplay among myriad biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems, performing bi-directional data processing-to-visualization with declarative querying capabilities, is needed. We developed ProteoLens as a Java-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Languages (DDL) and Data Manipulation Languages (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions, etc.; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: a human disease association network, a drug-target interaction network and a protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discoveries in future network and systems biology studies.
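The core idea of a "network data association rule", mapping node IDs to data attributes that then drive visual annotation, can be sketched in a few lines of Python, with SQLite standing in for the Oracle/PostgreSQL back ends the tool actually targets; the table names, gene symbols, and color rule below are purely illustrative assumptions.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE edges (src TEXT, dst TEXT);
        CREATE TABLE expression (node_id TEXT, level REAL);
        INSERT INTO edges VALUES ('TP53', 'MDM2'), ('TP53', 'BAX');
        INSERT INTO expression VALUES ('TP53', 2.4), ('MDM2', 0.7), ('BAX', 1.1);
    """)

    # Association rule (phase 1): node_id -> expression level, fetched with DML.
    levels = dict(conn.execute("SELECT node_id, level FROM expression"))

    # Apply the rule (phase 2): turn the associated value into a visual property.
    def node_color(node_id):
        return "red" if levels.get(node_id, 0.0) > 1.0 else "blue"

    for src, dst in conn.execute("SELECT src, dst FROM edges"):
        print(f"{src} ({node_color(src)}) -> {dst} ({node_color(dst)})")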
CAD system for automatic analysis of CT perfusion maps
NASA Astrophysics Data System (ADS)
Hachaj, T.; Ogiela, M. R.
2011-03-01
In this article, the authors present novel algorithms developed for a computer-assisted diagnosis (CAD) system for the analysis of dynamic brain perfusion computed tomography (CT) maps: cerebral blood flow (CBF) and cerebral blood volume (CBV). These methods perform both quantitative analysis [detection, measurement, and description, with a brain anatomy atlas (AA), of potential asymmetries/lesions] and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation of visualized symptoms (decision about the type of lesion, ischemic or hemorrhagic, and whether the brain tissue is at risk of infarction or not) is done by so-called cognitive inference processes allowing for reasoning about the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET Framework installed.
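A toy version of the quantitative step, flagging left-right perfusion asymmetries on a CBF map, might look like the NumPy sketch below; the midline alignment, the relative-difference measure, and the 20% threshold are assumptions for illustration, not the authors' algorithm.

    import numpy as np

    def asymmetry_mask(cbf, threshold=0.2):
        # Compare each voxel with its mirror across the (assumed) vertical midline.
        mirrored = cbf[:, ::-1]
        mean = (cbf + mirrored) / 2.0 + 1e-6          # avoid division by zero
        rel_diff = np.abs(cbf - mirrored) / mean      # relative asymmetry
        return rel_diff > threshold                   # candidate lesion voxels

    rng = np.random.default_rng(0)
    cbf = rng.uniform(20.0, 60.0, size=(128, 128))    # synthetic CBF map (ml/100g/min)
    cbf[40:60, 20:40] *= 0.4                          # simulate a hypoperfused region
    print(int(asymmetry_mask(cbf).sum()), "asymmetric voxels flagged")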
Enabling Data Fusion via a Common Data Model and Programming Interface
NASA Astrophysics Data System (ADS)
Lindholm, D. M.; Wilson, A.
2011-12-01
Much progress has been made in scientific data interoperability, especially in the areas of metadata and discovery. However, while a data user may have improved techniques for finding data, there is often a large chasm to span when it comes to acquiring the desired subsets of various datasets and integrating them into a data processing environment. Some tools such as OPeNDAP servers and the Unidata Common Data Model (CDM) have introduced improved abstractions for accessing data via a common interface, but they alone do not go far enough to enable fusion of data from multidisciplinary sources. Although data from various scientific disciplines may represent semantically similar concepts (e.g. time series), the user may face widely varying structural representations of the data (e.g. row versus column oriented), not to mention radically different storage formats. It is not enough to convert data to a common format. The key to fusing scientific data is to represent each dataset with consistent sampling. This can best be done by using a data model that expresses the functional relationship that each dataset represents. The domain of those functions determines how the data can be combined. The Visualization for Algorithm Development (VisAD) Java API has provided a sophisticated data model for representing the functional nature of scientific datasets for well over a decade. Because VisAD is largely designed for its visualization capabilities, the data model can be cumbersome to use for numerical computation, especially for those not comfortable with Java. Although both VisAD and the implementation of the CDM are written in Java, neither defines a pure Java interface that others could implement and program to, further limiting potential for interoperability. In this talk, we will present a solution for data integration based on a simple discipline-agnostic scientific data model and programming interface that enables a dataset to be defined in terms of three variable types: Scalar (a), Tuple (a,b), and Function (a -> b). These basic building blocks can be combined and nested to represent any arbitrarily complex dataset. For example, a time series of surface temperature and pressure could be represented as: time -> ((lon,lat) -> (T,P)). Our data model is expressed in UML and can be implemented in numerous programming languages. We will demonstrate an implementation of our data model and interface using the Scala programming language. Given its functional programming constructs, sophisticated type system, and other language features, Scala enables us to construct complex data structures that can be manipulated using natural mathematical expressions while taking advantage of the language's ability to operate on collections in parallel. This API will be applied to the problem of assimilating various measurements of the solar spectrum and other proxies from multiple sources to construct a composite Lyman-alpha irradiance dataset.
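The three building blocks of the data model, Scalar (a), Tuple (a, b), and Function (a -> b), can be nested to express the example dataset time -> ((lon, lat) -> (T, P)). The Python sketch below is only an assumption-based illustration of that nesting, not the Scala implementation the talk demonstrates.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Scalar:
        name: str
        value: float

    @dataclass(frozen=True)
    class TupleVar:
        elements: tuple          # e.g. (Scalar("T", ...), Scalar("P", ...))

    @dataclass
    class Function:
        samples: dict            # maps each domain sample to a range value

    # One time step: a spatial field (lon, lat) -> (T, P).
    field_t0 = Function(samples={
        (-105.0, 40.0): TupleVar((Scalar("T", 288.2), Scalar("P", 1013.0))),
        (-104.0, 41.0): TupleVar((Scalar("T", 287.5), Scalar("P", 1009.5))),
    })

    # The full dataset: time -> ((lon, lat) -> (T, P)).
    dataset = Function(samples={0.0: field_t0})
    print(dataset.samples[0.0].samples[(-105.0, 40.0)])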
Visual statistical learning is related to natural language ability in adults: An ERP study.
Daltrozzo, Jerome; Emerson, Samantha N; Deocampo, Joanne; Singh, Sonia; Freggens, Marjorie; Branum-Martin, Lee; Conway, Christopher M
2017-03-01
Statistical learning (SL) is believed to enable language acquisition by allowing individuals to learn regularities within linguistic input. However, neural evidence supporting a direct relationship between SL and language ability is scarce. We investigated whether there are associations between event-related potential (ERP) correlates of SL and language abilities while controlling for the general level of selective attention. Seventeen adults completed tests of visual SL, receptive vocabulary, grammatical ability, and sentence completion. Response times and ERPs showed that SL is related to receptive vocabulary and grammatical ability. ERPs indicated that the relationship between SL and grammatical ability was independent of attention while the association between SL and receptive vocabulary depended on attention. The implications of these dissociative relationships in terms of underlying mechanisms of SL and language are discussed. These results further elucidate the cognitive nature of the links between SL mechanisms and language abilities. Copyright © 2017 Elsevier Inc. All rights reserved.
Auditory Technology and Its Impact on Bilingual Deaf Education
ERIC Educational Resources Information Center
Mertes, Jennifer
2015-01-01
Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…
Teaching Reading through Language. TECHNIQUES.
ERIC Educational Resources Information Center
Jones, Edward V.
1986-01-01
Because reading is first and foremost a language comprehension process focusing on the visual form of spoken language, such teaching strategies as language experience and assisted reading have much to offer beginning readers. These techniques have been slow to become accepted by many adult literacy instructors; however, the two strategies,…
Audience Effects in American Sign Language Interpretation
ERIC Educational Resources Information Center
Weisenberg, Julia
2009-01-01
There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…
Lexical Processing in Spanish Sign Language (LSE)
ERIC Educational Resources Information Center
Carreiras, Manuel; Gutierrez-Sigut, Eva; Baquero, Silvia; Corina, David
2008-01-01
Lexical access is concerned with how the spoken or visual input of language is projected onto the mental representations of lexical forms. To date, most theories of lexical access have been based almost exclusively on studies of spoken languages and/or orthographic representations of spoken languages. Relatively few studies have examined how…
ERIC Educational Resources Information Center
Altmann, Gerry T. M.; Kamide, Yuki
2009-01-01
Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either "The woman will put the glass…
ERIC Educational Resources Information Center
Vogel, Susan A.
1990-01-01
Among conclusions of the review of the literature are that learning-disabled (LD) females have lower IQ's and more severe academic achievement deficits in some aspects of reading and math, but are somewhat better in visual-motor abilities, spelling, and written language mechanics than LD males. (Author/DB)
ERIC Educational Resources Information Center
Bedny, Marina; Pascual-Leone, Alvaro; Dravida, Swethasri; Saxe, Rebecca
2012-01-01
Recent evidence suggests that blindness enables visual circuits to contribute to language processing. We examined whether this dramatic functional plasticity has a sensitive period. BOLD fMRI signal was measured in congenitally blind, late blind (blindness onset 9-years-old or later) and sighted participants while they performed a sentence…
ERIC Educational Resources Information Center
Erdener, Dogu
2016-01-01
Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…
ERIC Educational Resources Information Center
Maun, Ian
2006-01-01
This paper examines visual and affective factors involved in the reading of foreign language texts. It draws on the results of a pilot study among students at the post-compulsory school stage studying French in England. Through a detailed analysis of students' reactions to texts, it demonstrates that the use of "authentic" documents under…
ERIC Educational Resources Information Center
Heimann, Mikael; Strid, Karin; Smith, Lars; Tjus, Tomas; Ulvund, Stein Erik; Meltzoff, Andrew N.
2006-01-01
The relationship between recall memory, visual recognition memory, social communication, and the emergence of language skills was measured in a longitudinal study. Thirty typically developing Swedish children were tested at 6, 9 and 14 months. The result showed that, in combination, visual recognition memory at 6 months, deferred imitation at 9…
ERIC Educational Resources Information Center
JENSON, PAUL G.; WESTERMEIER, FRANZ X.
A research project using the oscilloscope to determine visual feedback in the teaching of foreign language pronunciation was terminated because of technical difficulties that could not be resolved with the equipment available. Failure is attributed to such factors as (1) the speech sound waves sound the same though their wave shapes differ, (2)…
Effect of a synesthete's photisms on name recall.
Mills, Carol Bergfeld; Innis, Joanne; Westendorf, Taryn; Owsianiecki, Lauren; McDonald, Angela
2006-02-01
A multilingual, colored-letter synesthete professor (MLS), 9 nonsynesthete multilingual professors and 4 nonsynesthete art professors learned 30 names of individuals (first and last name pairs) in three trials. They recalled the names after each trial and six months later, as well as performed cued recall trials initially and after six months. As hypothesized, MLS recalled significantly more names than control groups on all free recall tests (except after the first trial) and on cued recall tests. In addition, MLS gave qualitatively different reasons for remembering names than any individual control participant. MLS gave mostly color reasons for remembering the names, whereas nonsynesthetes gave reasons based on familiarity or language or art knowledge. Results on standardized memory tests showed that MLS had average performance on non-language visual memory tests (the Benton Visual Retention Test-Revised--BVRT-R, and the Rey-Osterrieth Complex Figure Test--CFT), but had superior memory performance on a verbal test consisting of lists of nouns (Rey Auditory-Verbal Learning Test--RAVLT). MLS's synesthesia seems to aid memory for visually or auditorily presented language stimuli (names and nouns), but not for non-language visual stimuli (simple and complex figures).
IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics
2016-01-01
Background: We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective: To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods: The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results: Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions: IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304
IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.
Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita
2016-10-11
We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.
A survey of functional programming language principles
NASA Technical Reports Server (NTRS)
Holloway, C. M.
1986-01-01
Research in the area of functional programming languages has intensified in the 8 years since John Backus' Turing Award Lecture on the topic was published. The purpose of this paper is to present a survey of the ideas of functional programming languages. The paper assumes the reader is comfortable with mathematics and has knowledge of the basic principles of traditional programming languages, but does not assume any prior knowledge of the ideas of functional languages. A simple functional language is defined and used to illustrate the basic ideas. Topics discussed include the reasons for developing functional languages, methods of expressing concurrency, the algebra of functional programming languages, program transformation techniques, and implementations of functional languages. Existing functional languages are also mentioned. The paper concludes with the author's opinions as to the future of functional languages. An annotated bibliography on the subject is also included.
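As a rough illustration of the functional style this survey covers (the paper defines its own simple functional language, which is not reproduced here), the following Python sketch expresses computation through function composition and a fold rather than statement sequencing and mutation; the function names and examples are invented for illustration only.

```python
from functools import reduce

# Function composition: build new functions from existing ones
# rather than sequencing statements that mutate state.
def compose(f, g):
    return lambda x: f(g(x))

square = lambda x: x * x
increment = lambda x: x + 1
square_then_increment = compose(increment, square)

# A fold (reduce) expresses iteration as a higher-order function:
# "sum of squares" is a transformation pipeline, not an explicit loop.
def sum_of_squares(xs):
    return reduce(lambda acc, x: acc + square(x), xs, 0)

assert square_then_increment(3) == 10
assert sum_of_squares([1, 2, 3]) == 14
```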
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false May schools operate a language development program... Formula Language Development Programs § 39.137 May schools operate a language development program without a specific appropriation from Congress? Yes, a school may operate a language development program...
34 CFR 658.1 - What is the Undergraduate International Studies and Foreign Language Program?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Foreign Language Program? 658.1 Section 658.1 Education Regulations of the Offices of the Department of... STUDIES AND FOREIGN LANGUAGE PROGRAM General § 658.1 What is the Undergraduate International Studies and Foreign Language Program? The Undergraduate International Studies and Foreign Language Program is designed...
34 CFR 669.1 - What is the Language Resource Centers Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false What is the Language Resource Centers Program? 669.1... POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION LANGUAGE RESOURCE CENTERS PROGRAM General § 669.1 What is the Language Resource Centers Program? The Language Resource Centers Program makes awards, through grants or...
34 CFR 658.1 - What is the Undergraduate International Studies and Foreign Language Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Foreign Language Program? 658.1 Section 658.1 Education Regulations of the Offices of the Department of... STUDIES AND FOREIGN LANGUAGE PROGRAM General § 658.1 What is the Undergraduate International Studies and Foreign Language Program? The Undergraduate International Studies and Foreign Language Program is designed...
34 CFR 669.1 - What is the Language Resource Centers Program?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 3 2011-07-01 2011-07-01 false What is the Language Resource Centers Program? 669.1... POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION LANGUAGE RESOURCE CENTERS PROGRAM General § 669.1 What is the Language Resource Centers Program? The Language Resource Centers Program makes awards, through grants or...
NASA Astrophysics Data System (ADS)
Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila
2017-08-01
In previous work on Euler's spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Applications (VBA) programming was used; however, no graphical user interface was developed to capture user input. This weakness may leave users confused about the input and output, since both are displayed in the same worksheet. In addition, the existing Euler's spreadsheet calculator is not interactive, as there is no prompt message if a parameter is entered incorrectly, and there are no user instructions to guide the input of the derivative function. Hence, in this paper, we address these limitations by developing a user-friendly and interactive graphical user interface. This improvement aims to capture users' input with instructions and interactive error prompts implemented in VBA. This Euler's graphical user interface spreadsheet calculator does not act as a black box, because users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-learning and life-long learning in implementing the numerical scheme in a spreadsheet and later in any programming language.
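The numerical scheme behind the spreadsheet calculator is the forward Euler method. A minimal Python sketch of that scheme is given below for context; the example ODE, step size, and function names are illustrative choices, not taken from the paper or its VBA code.

```python
def euler(f, t0, y0, h, n):
    """Advance y' = f(t, y) from (t0, y0) with n forward-Euler steps of size h."""
    t, y = t0, y0
    history = [(t, y)]
    for _ in range(n):
        y = y + h * f(t, y)   # y_{k+1} = y_k + h * f(t_k, y_k)
        t = t + h
        history.append((t, y))
    return history

# Example: y' = -2y with y(0) = 1 (chosen for illustration; not from the paper).
steps = euler(lambda t, y: -2.0 * y, t0=0.0, y0=1.0, h=0.1, n=10)
print(steps[-1])  # approximate value of y at t = 1.0
```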
Fujiwara, Keizo; Naito, Yasushi; Senda, Michio; Mori, Toshiko; Manabe, Tomoko; Shinohara, Shogo; Kikuchi, Masahiro; Hori, Shin-Ya; Tona, Yosuke; Yamazaki, Hiroshi
2008-04-01
The use of fluorodeoxyglucose positron emission tomography (FDG-PET) with a visual language task provided objective information on the development and plasticity of cortical language networks. This approach could help individuals involved in the habilitation and education of prelingually deafened children to decide upon the appropriate mode of communication. To investigate the cortical processing of the visual component of language and the effect of deafness upon this activity. Six prelingually deafened children participated in this study. The subjects were numbered 1-6 in the order of their spoken communication skills. In the time period between an intravenous injection of 370 MBq 18F-FDG and PET scanning of the brain, each subject was instructed to watch a video of the face of a speaking person. The cortical radioactivity of each deaf child was compared with that of a group of normal-hearing adults using a t test in a basic SPM2 model. The widest bilaterally activated cortical area was detected in subject 1, who was the worst user of spoken language. By contrast, there was no significant difference between subject 6, who was the best user of spoken language with a hearing aid, and the normal-hearing group.
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
2014-01-01
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H215O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language. PMID:24904497
Using Films in Vocabulary Teaching of Turkish as a Foreign Language
ERIC Educational Resources Information Center
Iscan, Adem
2017-01-01
The use and utility of auditory and visual tools in language teaching is a common practice. Films constitute one of the tools. It has been found that using films in language teaching is also effective in the development of vocabulary of foreign language learners. The literature review reveals that while films are used in foreign language teaching…
Corina, David P; Knapp, Heather Patterson
2008-12-01
In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.
Flight program language requirements. Volume 2: Requirements and evaluations
NASA Technical Reports Server (NTRS)
1972-01-01
The efforts and results are summarized for a study to establish requirements for a flight programming language for future onboard computer applications. Several different languages were available as potential candidates for future NASA flight programming efforts. The study centered around an evaluation of the four most pertinent existing aerospace languages. Evaluation criteria were established, and selected kernels from the current Saturn 5 and Skylab flight programs were used as benchmark problems for sample coding. An independent review of the language specifications incorporated anticipated future programming requirements into the evaluation. A set of detailed language requirements was synthesized from these activities. The details of program language requirements and of the language evaluations are described.
Wu, Chung-Hsien; Chiu, Yu-Hsien; Guo, Chi-Shiang
2004-12-01
This paper proposes a novel approach to the generation of Chinese sentences from ill-formed Taiwanese Sign Language (TSL) for people with hearing impairments. First, a sign icon-based virtual keyboard is constructed to provide a visualized interface to retrieve sign icons from a sign database. A proposed language model (LM), based on a predictive sentence template (PST) tree, integrates a statistical variable n-gram LM and linguistic constraints to deal with the translation problem from ill-formed sign sequences to grammatical written sentences. The PST tree trained by a corpus collected from the deaf schools was used to model the correspondence between signed and written Chinese. In addition, a set of phrase formation rules, based on trigger pair category, was derived for sentence pattern expansion. These approaches improved the efficiency of text generation and the accuracy of word prediction and, therefore, improved the input rate. For the assessment of practical communication aids, a reading-comprehension training program with ten profoundly deaf students was undertaken in a deaf school in Tainan, Taiwan. Evaluation results show that the literacy aptitude test and subjective satisfactory level are significantly improved.
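The language model described above combines a statistical n-gram component with linguistic constraints in a PST tree. As a generic illustration of only the n-gram ingredient (not the PST-tree model itself), the Python sketch below trains an add-one-smoothed bigram model and scores a candidate sentence; the toy corpus and function names are invented for illustration.

```python
from collections import defaultdict
import math

def train_bigram_lm(corpus):
    """Estimate add-one-smoothed bigram log-probabilities from tokenized sentences."""
    unigram = defaultdict(int)
    bigram = defaultdict(int)
    vocab = set()
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]
        vocab.update(tokens)
        for w1, w2 in zip(tokens, tokens[1:]):
            unigram[w1] += 1
            bigram[(w1, w2)] += 1
    V = len(vocab)

    def log_prob(sentence):
        tokens = ["<s>"] + sentence + ["</s>"]
        return sum(
            math.log((bigram[(w1, w2)] + 1) / (unigram[w1] + V))
            for w1, w2 in zip(tokens, tokens[1:])
        )
    return log_prob

# Toy corpus standing in for the written-sentence training data described above.
lm = train_bigram_lm([["i", "read", "books"], ["i", "read"]])
print(lm(["i", "read", "books"]))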
Artificial grammar learning meets formal language theory: an overview
Fitch, W. Tecumseh; Friederici, Angela D.
2012-01-01
Formal language theory (FLT), part of the broader mathematical theory of computation, provides a systematic terminology and set of conventions for describing rules and the structures they generate, along with a rich body of discoveries and theorems concerning generative rule systems. Despite its name, FLT is not limited to human language, but is equally applicable to computer programs, music, visual patterns, animal vocalizations, RNA structure and even dance. In the last decade, this theory has been profitably used to frame hypotheses and to design brain imaging and animal-learning experiments, mostly using the ‘artificial grammar-learning’ paradigm. We offer a brief, non-technical introduction to FLT and then a more detailed analysis of empirical research based on this theory. We suggest that progress has been hampered by a pervasive conflation of distinct issues, including hierarchy, dependency, complexity and recursion. We offer clarifications of several relevant hypotheses and the experimental designs necessary to test them. We finally review the recent brain imaging literature, using formal languages, identifying areas of convergence and outstanding debates. We conclude that FLT has much to offer scientists who are interested in rigorous empirical investigations of human cognition from a neuroscientific and comparative perspective. PMID:22688631
Alt, Mary; Arizmendi, Genesis D; Beal, Carole R
2014-07-01
The present study examined the relationship between mathematics and language to better understand the nature of the deficit and the academic implications associated with specific language impairment (SLI) and academic implications for English language learners (ELLs). School-age children (N = 61; 20 SLI, 20 ELL, 21 native monolingual English [NE]) were assessed using a norm-referenced mathematics instrument and 3 experimental computer-based mathematics games that varied in language demands. Group means were compared with analyses of variance. The ELL group was less accurate than the NE group only when tasks were language heavy. In contrast, the group with SLI was less accurate than the groups with NE and ELLs on language-heavy tasks and some language-light tasks. Specifically, the group with SLI was less accurate on tasks that involved comparing numerical symbols and using visual working memory for patterns. However, there were no group differences between children with SLI and peers without SLI on language-light mathematics tasks that involved visual working memory for numerical symbols. Mathematical difficulties of children who are ELLs appear to be related to the language demands of mathematics tasks. In contrast, children with SLI appear to have difficulty with mathematics tasks because of linguistic as well as nonlinguistic processing constraints.
Microsoft C#.NET program and electromagnetic depth sounding for large loop source
NASA Astrophysics Data System (ADS)
Prabhakar Rao, K.; Ashok Babu, G.
2009-07-01
A program, in the C# (C Sharp) language with Microsoft.NET Framework, is developed to compute the normalized vertical magnetic field of a horizontal rectangular loop source placed on the surface of an n-layered earth. The field can be calculated either inside or outside the loop. Five C# classes with member functions in each class are designed to compute the kernel, Hankel transform integral, coefficients for cubic spline interpolation between computed values and the normalized vertical magnetic field. The program computes the vertical magnetic field in the frequency domain using the integral expressions evaluated by a combination of straightforward numerical integration and the digital filter technique. The code utilizes different object-oriented programming (OOP) features. It finally computes the amplitude and phase of the normalized vertical magnetic field. The computed results are presented for geometric and parametric soundings. The code is developed in Microsoft .NET Visual Studio 2003 and uses various system class libraries.
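One step the abstract names is cubic spline interpolation between field values computed at discrete frequencies. The sketch below shows that step alone in Python with SciPy, on made-up sample values; it is not a translation of the C# classes, and the frequencies and amplitudes are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Field amplitudes computed at a coarse set of frequencies (values are illustrative,
# not output of the C# program described above).
freqs = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])   # Hz
field = np.array([0.98, 0.95, 0.88, 0.72, 0.51, 0.33])   # normalized |Hz|

# Cubic-spline interpolation between the computed values, so the sounding curve
# can be evaluated densely without recomputing the kernel at every frequency.
spline = CubicSpline(np.log10(freqs), field)
dense_freqs = np.logspace(0.0, np.log10(300.0), 50)
dense_field = spline(np.log10(dense_freqs))
print(dense_field[:5])
```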
ERIC Educational Resources Information Center
Campbell, Wenonah N.; Skarakis-Doyle, Elizabeth
2011-01-01
This preliminary study explored peer conflict resolution knowledge in children with and without language impairment (LI). Specifically, it evaluated the utility of a visual analogue scale (VAS) for measuring nuances in such knowledge. Children aged 9-12 years, 26 with typically developing language (TLD) and 6 with LI, completed a training protocol…
ERIC Educational Resources Information Center
Miolo, Giuliana; Chapman, Robins S.; Sindberg, Heidi A.
2005-01-01
The authors evaluated the roles of auditory-verbal short-term memory, visual short-term memory, and group membership in predicting language comprehension, as measured by an experimental sentence comprehension task (SCT) and the Test for Auditory Comprehension of Language--Third Edition (TACL-3; E. Carrow-Woolfolk, 1999) in 38 participants: 19 with…
ERIC Educational Resources Information Center
Yang, Chi Cheung Ruby
2016-01-01
The present study examines how gender is represented in the visuals (or illustrations) of two English Language textbook series used in most primary schools in Hong Kong. Instead of conducting frequency counts of the occurrence of male and female characters in illustrations, or the spheres of activities they engaged in as in many previous textbook…
Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf
ERIC Educational Resources Information Center
Stokoe, William C., Jr.
2005-01-01
It is approaching a half century since Bill Stokoe published his revolutionary monograph, "Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf." It is rare for a work of innovative scholarship to spark a social as well as an intellectual revolution, but that is just what Stokoe's 1960 paper did. And it is…
2017-10-01
networks of the brain responsible for visual processing, mood regulation, motor coordination, sensory processing, and language command, but increased connectivity in… For each subject, the rsFMRI voxel time-series were temporally shifted to account for differences in slice acquisition times…
ERIC Educational Resources Information Center
Atherton, Mark
1993-01-01
The medieval writer, the nun Hildegard von Bingen, learned Latin without any formal instruction in it. Her case is described as an example of language acquisition by hearing it read, sung, and expounded and by visualizing it as though it were written down in a kind of phonetic script. (21 references) (Author/LB)
ERIC Educational Resources Information Center
Cunnings, Ian; Fotiadou, Georgia; Tsimpli, Ianthi
2017-01-01
In a visual world paradigm study, we manipulated gender congruence between a subject pronoun and two antecedents to investigate whether second language (L2) learners with a null subject first language (L1) acquire and process overt subject pronouns in a nonnull subject L2 in a nativelike way. We also investigated whether L2 speakers revise an…
ERIC Educational Resources Information Center
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
2016-01-01
In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and…
Comic Books: A Learning Tool for Meaningful Acquisition of Written Sign Language
ERIC Educational Resources Information Center
Guimarães, Cayley; Oliveira Machado, Milton César; Fernandes, Sueli F.
2018-01-01
Deaf people use Sign Language (SL) for intellectual development, communication and other human activities that are mediated by language--such as the expression of complex and abstract thoughts and feelings; and for literature, culture and knowledge. Brazilian Sign Language (Libras) is a complete linguistic system of a visual-spatial nature,…
Cognitive Process in Second Language Reading: Transfer of L1 Reading Skills and Strategies.
ERIC Educational Resources Information Center
Koda, Keiko
1988-01-01
Experiments with skilled readers (N=83) from four native-language orthographic backgrounds examined the effects of: (1) blocked visual or auditory information on lexical decision-making; and (2) heterographic homophones on reading comprehension. Native and second language transfer does occur in second language reading, and orthographic structure…
Visualization Analytics for Second Language Vocabulary Learning in Virtual Worlds
ERIC Educational Resources Information Center
Hsiao, Indy Y. T.; Lan, Yu-Ju; Kao, Chia-Ling; Li, Ping
2017-01-01
Language learning occurring in authentic contexts has been shown to be more effective. Virtual worlds provide simulated contexts that have the necessary elements of authentic contexts for language learning, and as a result, many studies have adopted virtual worlds as a useful platform for language learning. However, few studies so far have…
ERIC Educational Resources Information Center
Ebbels, Susan H.; Maric, Nataša; Murphy, Aoife; Turner, Gail
2014-01-01
Background: Little evidence exists for the effectiveness of therapy for children with receptive language difficulties, particularly those whose difficulties are severe and persistent. Aims: To establish the effectiveness of explicit speech and language therapy with visual support for secondary school-aged children with language impairments…
Cross-Language Priming of Word Meaning during Second Language Sentence Comprehension
ERIC Educational Resources Information Center
Yuan, Yanli; Woltz, Dan; Zheng, Robert
2010-01-01
The experiment investigated the benefit to second language (L2) sentence comprehension of priming word meanings with brief visual exposure to first language (L1) translation equivalents. Native English speakers learning Mandarin evaluated the validity of aurally presented Mandarin sentences. For selected words in half of the sentences there was…
Sixteen-month-olds can use language to update their expectations about the visual world.
Ganea, Patricia A; Fitch, Allison; Harris, Paul L; Kaldy, Zsuzsa
2016-11-01
The capacity to use language to form new representations and to revise existing knowledge is a crucial aspect of human cognition. Here we examined whether infants can use language to adjust their representation of a recently encoded scene. Using an eye-tracking paradigm, we asked whether 16-month-old infants (N=26; mean age=16;0 [months;days], range=14;15-17;15) can use language about an occluded event to inform their expectation about what the world will look like when the occluder is removed. We compared looking time to outcome scenes that matched the language input with looking time to those that did not. Infants looked significantly longer at the event outcome when the outcome did not match the language input, suggesting that they generated an expectation of the outcome based on that input alone. This effect was unrelated to infants' vocabulary size. Thus, using language to adjust expectations about the visual world is present at an early developmental stage even when language skills are rudimentary. Copyright © 2016 Elsevier Inc. All rights reserved.
How does visual thinking work in the mind of a person with autism? A personal account.
Grandin, Temple
2009-05-27
My mind is similar to an Internet search engine that searches for photographs. I use language to narrate the photo-realistic pictures that pop up in my imagination. When I design equipment for the cattle industry, I can test run it in my imagination similar to a virtual reality computer program. All my thinking is associative and not linear. To form concepts, I sort pictures into categories similar to computer files. To form the concept of orange, I see many different orange objects, such as oranges, pumpkins, orange juice and marmalade. I have observed that there are three different specialized autistic/Asperger cognitive types. They are: (i) visual thinkers such as I who are often poor at algebra, (ii) pattern thinkers such as Daniel Tammet who excel in math and music but may have problems with reading or writing composition, and (iii) verbal specialists who are good at talking and writing but they lack visual skills.
How does visual thinking work in the mind of a person with autism? A personal account
Grandin, Temple
2009-01-01
My mind is similar to an Internet search engine that searches for photographs. I use language to narrate the photo-realistic pictures that pop up in my imagination. When I design equipment for the cattle industry, I can test run it in my imagination similar to a virtual reality computer program. All my thinking is associative and not linear. To form concepts, I sort pictures into categories similar to computer files. To form the concept of orange, I see many different orange objects, such as oranges, pumpkins, orange juice and marmalade. I have observed that there are three different specialized autistic/Asperger cognitive types. They are: (i) visual thinkers such as I who are often poor at algebra, (ii) pattern thinkers such as Daniel Tammet who excel in math and music but may have problems with reading or writing composition, and (iii) verbal specialists who are good at talking and writing but they lack visual skills. PMID:19528028
Iconic Factors and Language Word Order
ERIC Educational Resources Information Center
Moeser, Shannon Dawn
1975-01-01
College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)
Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark
2015-10-01
Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach using an articulatory target presented in real-time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for a L2 vowel estimated from productions of vowels that overlap in both L1 and L2. Training of Japanese learners for the American English vowel /æ/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is shown to be an effective method for facilitating L2 pronunciation learning.
A new version of Visual tool for estimating the fractal dimension of images
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.
2010-04-01
This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points, stored in bitmap files. The application was extended for working also with comma separated values files and three-dimensional images.
New version program summary
Program title: Fractal Analysis v02
Catalogue identifier: AEEG_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 9999
No. of bytes in distributed program, including test data, etc.: 4 366 783
Distribution format: tar.gz
Programming language: MS Visual Basic 6.0
Computer: PC
Operating system: MS Windows 98 or later
RAM: 30 M
Classification: 14
Catalogue identifier of previous version: AEEG_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
Does the new version supersede the previous version?: Yes
Nature of problem: Estimating the fractal dimension of 2D and 3D images.
Solution method: Optimized implementation of the box-counting algorithm.
Reasons for new version: The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma separated values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); less resources consumed and improved performance (only the information of interest, the "black points", are stored); higher resolution (the points coordinates are loaded into Visual Basic double variables [2]); possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added in order to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files).
Additional comments: User friendly graphical interface; easy deployment mechanism.
Running time: In the first approximation, the algorithm is linear.
References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
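The solution method named in the program summary is the box-counting algorithm. A minimal, unoptimized Python sketch of 2D box counting is given below for orientation; it is not the optimized Visual Basic implementation, and the random test set and box sizes are illustrative choices.

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the fractal dimension of a 2D point set by counting occupied boxes."""
    points = np.asarray(points, dtype=float)
    counts = []
    for eps in box_sizes:
        # Assign each point to a box of side eps and count distinct occupied boxes.
        boxes = np.floor(points / eps)
        counts.append(len({tuple(b) for b in boxes}))
    # Slope of log N(eps) versus log(1/eps) approximates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Illustration on uniformly scattered points (dimension close to 2); this is not the
# 3D Sierpinski gasket test case mentioned in the program summary.
rng = np.random.default_rng(0)
pts = rng.random((20000, 2))
print(box_counting_dimension(pts, box_sizes=[0.2, 0.1, 0.05, 0.025]))
```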
Gong, Tao; Lam, Yau W.; Shuai, Lan
2016-01-01
Psychological experiments have revealed that in normal visual perception of humans, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether such perceptual saliency hierarchy (color > shape > texture) influences the learning of orders regulating adjectives of involved visual features in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for both the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such constraint, as well as other factors, could collectively affect the structural diversity in languages. PMID:28066281
Hendrix, Philipp; Senger, Sebastian; Griessenauer, Christoph J; Simgen, Andreas; Linsler, Stefan; Oertel, Joachim
2018-01-01
To report a technique for endoscopic cystoventriculostomy guided by preoperative navigated transcranial magnetic stimulation (nTMS) and tractography in a patient with a large speech eloquent arachnoid cyst. A 74-year-old woman presented with a seizure and subsequent persistent anomic aphasia from a progressive left-sided parietal arachnoid cyst. An endoscopic cystoventriculostomy and endoscope-assisted ventricle catheter placement were performed. Surgery was guided by preoperative nTMS and tractography to avoid eloquent language, motor, and visual pathways. Preoperative nTMS motor and language mapping were used to guide tractography of motor and language white matter tracts. The ideal locations of entry point and cystoventriculostomy as well as trajectory for stent-placement were determined preoperatively with a pseudo-3-dimensional model visualizing eloquent language, motor, and visual cortical and subcortical information. The early postoperative course was uneventful. At her 3-month follow-up visit, her language impairments had completely recovered. Additionally, magnetic resonance imaging demonstrated complete collapse of the arachnoid cyst. The combination of nTMS and tractography supports the identification of a safe trajectory for cystoventriculostomy in eloquent arachnoid cysts. Copyright © 2017 Elsevier Inc. All rights reserved.
Gong, Tao; Lam, Yau W; Shuai, Lan
2016-01-01
Psychological experiments have revealed that in normal visual perception of humans, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether such perceptual saliency hierarchy (color > shape > texture) influences the learning of orders regulating adjectives of involved visual features in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for both the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such constraint, as well as other factors, could collectively affect the structural diversity in languages.
LEGO Mindstorms NXT for elderly and visually impaired people in need: A platform.
Al-Halhouli, Ala'aldeen; Qitouqa, Hala; Malkosh, Nancy; Shubbak, Alaa; Al-Gharabli, Samer; Hamad, Eyad
2016-07-27
This paper presents the employment of LEGO Mindstorms NXT robotics as the core component of a low-cost multidisciplinary platform for assisting elderly and visually impaired people. The LEGO Mindstorms system offers a plug-and-play programmable robotics toolkit, incorporating construction guides, microcontrollers and sensors, all connected via a comprehensive programming language. It facilitates, without special training and at low cost, the use of such a device for interpersonal communication and for handling multiple tasks required by elderly and visually impaired people in need. The research project provides a model for larger-scale implementation, tackling the issue of creating additional functions to assist people in need. The new functions were built and programmed using MATLAB through a user-friendly Graphical User Interface (GUI). The power consumption problem was resolved along with the integration of a WiFi connection, and incorporating a GPS application on smartphones enhanced the guiding and tracking functions. We believe the system can be developed and expanded to encompass a range of applications beyond the initial design schematics, easing the conduct of a limited number of pre-described protocols. However, the beneficiaries of the proposed research would be limited to elderly people who require assistance within their household, with the assistive robot facilitating a low-cost solution for a highly demanding health circumstance.
SU-E-J-114: Web-Browser Medical Physics Applications Using HTML5 and Javascript.
Bakhtiari, M
2012-06-01
Since 2010, there has been a great deal of attention given to HTML5. Application developers and browser makers fully embrace and support the web of the future. Consumers have started to embrace HTML5, especially as more users understand the benefits and potential that HTML5 can mean for the future. Modern browsers such as Firefox, Google Chrome, and Safari offer better and more robust support for HTML5, CSS3, and JavaScript. The idea is to introduce HTML5 to the medical physics community for open source software development. The benefit of using HTML5 is the development of portable software systems. The HTML5, CSS, and JavaScript languages were used to develop several applications for Quality Assurance in radiation therapy. The canvas element of HTML5 was used for handling and displaying the images, and JavaScript was used to manipulate the data. Sample applications were developed to: 1. analyze the flatness and symmetry of the radiotherapy fields in a web browser, 2. analyze the Dynalog files from Varian machines, 3. visualize the animated Dynamic MLC files, 4. run simulations via Monte Carlo, and 5. perform interactive image manipulation. The programs showed great performance and speed in uploading the data and displaying the results. The flatness and symmetry program and Dynalog file analyzer ran in a fraction of a second. One reason behind this performance is the use of the JavaScript language, which is a lower-level programming language in comparison to most scientific programming packages such as Matlab. The second reason is that JavaScript runs locally on client-side computers, not on the web servers. HTML5 and JavaScript can be used to develop useful applications that can be run online or offline in different modern web browsers. The programming platform can also be one of the modern web browsers, which are mostly open source (such as Firefox). © 2012 American Association of Physicists in Medicine.
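The first sample application analyzes beam-profile flatness and symmetry. The abstract does not state the formulas its web application uses, so the Python sketch below adopts one common convention purely as an assumption (variation over the central 80% of the profile for flatness, largest left-right point difference relative to the central-axis value for symmetry); the profile data and function names are synthetic and illustrative.

```python
import numpy as np

def flatness_symmetry(positions, dose):
    """Flatness and symmetry of a beam profile under one common convention
    (assumed here, not taken from the abstract's web application)."""
    positions = np.asarray(positions, dtype=float)
    dose = np.asarray(dose, dtype=float)
    half_width = 0.8 * positions.max()          # assumes profile centered at 0
    central = np.abs(positions) <= half_width   # central 80% of the field
    d = dose[central]
    flatness = 100.0 * (d.max() - d.min()) / (d.max() + d.min())
    # Symmetry: largest left/right point difference relative to the central-axis dose.
    left = dose[central & (positions < 0)]
    right = dose[central & (positions > 0)][::-1]
    n = min(len(left), len(right))
    cax = dose[np.argmin(np.abs(positions))]
    symmetry = 100.0 * np.max(np.abs(left[-n:] - right[-n:])) / cax
    return flatness, symmetry

x = np.linspace(-10.0, 10.0, 201)               # cm, illustrative positions
profile = 1.0 - 0.002 * x**2 + 0.001 * x        # slightly tilted synthetic profile
print(flatness_symmetry(x, profile))
```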
Lavaur, Jean-Marc; Bairstow, Dominique
2011-12-01
This research aimed at studying the role of subtitling in film comprehension. It focused on the languages in which the subtitles are written and on the participants' fluency levels in the languages presented in the film. In a preliminary part of the study, the most salient visual and dialogue elements of a short sequence of an English film were extracted by the means of a free recall task after showing two versions of the film (first a silent, then a dubbed-into-French version) to native French speakers. This visual and dialogue information was used in the setting of a questionnaire concerning the understanding of the film presented in the main part of the study, in which other French native speakers with beginner, intermediate, or advanced fluency levels in English were shown one of three versions of the film used in the preliminary part. Respectively, these versions had no subtitles or they included either English or French subtitles. The results indicate a global interaction between all three factors in this study: For the beginners, visual processing dropped from the version without subtitles to that with English subtitles, and even more so if French subtitles were provided, whereas the effect of film version on dialogue comprehension was the reverse. The advanced participants achieved higher comprehension for both types of information with the version without subtitles, and dialogue information processing was always better than visual information processing. The intermediate group similarly processed dialogues in a better way than visual information, but was not affected by film version. These results imply that, depending on the viewers' fluency levels, the language of subtitles can have different effects on movie information processing.
ERIC Educational Resources Information Center
Usborne, Esther; Peck, Josephine; Smith, Donna-Lee; Taylor, Donald M.
2011-01-01
Aboriginal communities across Canada are implementing Aboriginal language programs in their schools. In the present research, we explore the impact of learning through an Aboriginal language on students' English and Aboriginal language skills by contrasting a Mi'kmaq language immersion program with a Mi'kmaq as a second language program. The…
Learning an Embodied Visual Language: Four Imitation Strategies Available to Sign Learners
Shield, Aaron; Meier, Richard P.
2018-01-01
The parts of the body that are used to produce and perceive signed languages (the hands, face, and visual system) differ from those used to produce and perceive spoken languages (the vocal tract and auditory system). In this paper we address two factors that have important consequences for sign language acquisition. First, there are three types of lexical signs: one-handed, two-handed symmetrical, and two-handed asymmetrical. Natural variation in hand dominance in the population leads to varied input to children learning sign. Children must learn that signs are not specified for the right or left hand but for dominant and non-dominant. Second, we posit that children have at least four imitation strategies available for imitating signs: anatomical (Activate the same muscles as the sign model), which could lead learners to inappropriately use their non-dominant hand; mirroring (Produce a mirror image of the modeled sign), which could lead learners to produce lateral movement reversal errors or to use the non-dominant hand; visual matching (Reproduce what you see from your perspective), which could lead learners to produce inward–outward movement and palm orientation reversals; and reversing (Reproduce what the sign model would see from his/her perspective). This last strategy is the only one that always yields correct phonological forms in signed languages. To test our hypotheses, we turn to evidence from typical and atypical hearing and deaf children as well as from typical adults; the data come from studies of both sign acquisition and gesture imitation. Specifically, we posit that all children initially use a visual matching strategy but typical children switch to a mirroring strategy sometime in the second year of life; typical adults tend to use a mirroring strategy in learning signs and imitating gestures. By contrast, children and adults with autism spectrum disorder (ASD) appear to use the visual matching strategy well into childhood or even adulthood. Finally, we present evidence that sign language exposure changes how adults imitate gestures, switching from a mirroring strategy to the correct reversal strategy. These four strategies for imitation do not exist in speech and as such constitute a unique problem for research in language acquisition. PMID:29899716
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
Several common higher-level programming languages are described. FORTRAN, ALGOL, COBOL, PL/1, and LISP 1.5 are summarized and compared. FORTRAN is the most widely used scientific programming language. ALGOL is a more powerful language for scientific programming. COBOL is used for most commercial programming applications. LISP 1.5 is primarily a list-processing language. PL/1 attempts to combine the desirable features of FORTRAN, ALGOL, and COBOL into a single language.
Zachau, Swantje; Korpilahti, Pirjo; Hämäläinen, Jarmo A; Ervast, Leena; Heinänen, Kaisu; Suominen, Kalervo; Lehtihalmes, Matti; Leppänen, Paavo H T
2014-07-01
We explored semantic integration mechanisms in native and non-native hearing users of sign language and non-signing controls. Event-related brain potentials (ERPs) were recorded while participants performed a semantic decision task for priming lexeme pairs. Pairs were presented either within speech or across speech and sign language. Target-related ERP responses were subjected to principal component analyses (PCA), and the neurocognitive basis of semantic integration processes was assessed by analyzing the N400 and the late positive complex (LPC) components in response to spoken (auditory) and signed (visual) antonymic and unrelated targets. Semantically-related effects triggered across modalities would indicate a similar tight interconnection between the signers' two languages like that described for spoken language bilinguals. Remarkable structural similarity of the N400 and LPC components with varying group differences between the spoken and signed targets were found. The LPC was the dominant response. The controls' LPC differed from the LPC of the two signing groups. It was reduced to the auditory unrelated targets and was less frontal for all the visual targets. The visual LPC was more broadly distributed in native than non-native signers and was left-lateralized for the unrelated targets in the native hearing signers only. Semantic priming effects were found for the auditory N400 in all groups, but only native hearing signers revealed a clear N400 effect to the visual targets. Surprisingly, the non-native signers revealed no semantically-related processing effect to the visual targets reflected in the N400 or the LPC; instead they appeared to rely more on visual post-lexical analyzing stages than native signers. We conclude that native and non-native signers employed different processing strategies to integrate signed and spoken semantic content. It appeared that the signers' semantic processing system was affected by group-specific factors like language background and/or usage. Copyright © 2014 Elsevier Ltd. All rights reserved.
Neo: an object model for handling electrophysiology data in multiple formats
Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L.; Rodgers, Chris C.; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P.
2014-01-01
Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named “Neo,” suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology. PMID:24600386
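As a small usage sketch of the container hierarchy the abstract describes (a Block holding Segments, which in turn hold AnalogSignal and SpikeTrain objects), the Python snippet below builds such a structure in memory. Exact constructor signatures can vary between Neo and quantities releases, and the names and values here are illustrative rather than taken from the paper.

```python
import numpy as np
import quantities as pq
from neo.core import Block, Segment, AnalogSignal, SpikeTrain

# Container hierarchy: a Block holds Segments; each Segment holds the data
# objects recorded in one time window.
block = Block(name="example recording")
segment = Segment(name="trial 1")
block.segments.append(segment)

# An analog signal (e.g. one intracellular channel) with physical units and sampling rate.
signal = AnalogSignal(np.random.randn(1000, 1) * pq.mV, sampling_rate=10 * pq.kHz)
segment.analogsignals.append(signal)

# Spike times from an extracellular recording.
spikes = SpikeTrain([0.01, 0.04, 0.09] * pq.s, t_stop=0.1 * pq.s)
segment.spiketrains.append(spikes)

print(block.segments[0].analogsignals[0].sampling_rate)
```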
Neo: an object model for handling electrophysiology data in multiple formats.
Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L; Rodgers, Chris C; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P
2014-01-01
Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named "Neo," suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology.
ERIC Educational Resources Information Center
Schaller-Schwaner, Iris
2015-01-01
This article originated in a creative attempt to engage audiences visually, on a poster, with ideas about language(s), teaching and learning which have been informing language education at university language centres. It was originally locally grounded and devised to take soundings with colleagues and with participants at the CercleS 2014…
ERIC Educational Resources Information Center
ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.
This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 27 titles deal with a variety of topics, including the following: facilitation of language development in disadvantaged preschool children; auditory-visual discrimination skills, language performance, and development of manual…
Specvis: Free and open-source software for visual field examination.
Dzwiniel, Piotr; Gola, Mateusz; Wójcik-Gryciuk, Anna; Waleszczyk, Wioletta J
2017-01-01
Visual field impairment affects more than 100 million people globally. However, because of the lack of access to appropriate ophthalmic healthcare in underdeveloped regions, owing to the associated costs and expertise required, this number may be an underestimate. Improved access to affordable diagnostic software designed for visual field examination could slow the progression of diseases such as glaucoma by allowing early diagnosis and intervention. To meet this requirement, we have developed Specvis (http://www.specvis.pl/), a free and open-source application written in the Java programming language that can run on any personal computer. Specvis was tested on patients with glaucoma, retinitis pigmentosa, and stroke, and the results were compared with those obtained using the Medmont M700 Automated Static Perimeter. The application was also tested for inter-test intrapersonal variability. The results from both validation studies indicated low inter-test intrapersonal variability and suitable reliability for a fast and simple assessment of visual field impairment. Specvis easily identifies visual field areas of zero sensitivity and allows evaluation of sensitivity levels throughout the visual field. Thus, Specvis is a new, reliable application that can be successfully used for visual field examination and can fill the gap between confrontation and perimetry tests. The main advantages of Specvis over existing methods are its availability (free), affordability (runs on any personal computer), and reliability (comparable to high-cost solutions).
Visually defining and querying consistent multi-granular clinical temporal abstractions.
Combi, Carlo; Oliboni, Barbara
2012-02-01
The main goal of this work is to propose a framework for the visual specification and querying of consistent multi-granular clinical temporal abstractions. We focus on the issue of querying patient clinical information by visually defining and composing temporal abstractions, i.e., high-level patterns derived from time-stamped raw data. In particular, we focus on the visual specification of consistent temporal abstractions with different granularities and on the visual composition of different temporal abstractions for querying clinical databases. Temporal abstractions on clinical data provide a concise and high-level description of temporal raw data, and a suitable way to support decision making. Granularities define partitions on the timeline and allow one to represent time and, thus, temporal clinical information at different levels of detail, according to the requirements of the represented clinical domain. The visual representation of temporal information has been studied for several years in clinical domains. Proposed visualization techniques must be easy and quick to understand, and could benefit from visual metaphors that do not lead to ambiguous interpretations. Recently, physical metaphors such as strips, springs, weights, and wires have been proposed and evaluated with clinical users for the specification of temporal clinical abstractions. Visual approaches to Boolean queries have been explored in recent years, confirming that visual support for the specification of complex Boolean queries is both an important and a difficult research topic. We propose and describe a visual language for the definition of temporal abstractions based on a set of intuitive metaphors (striped wall, plastered wall, brick wall), allowing the clinician to use different granularities. A new algorithm, underlying the visual language, allows the physician to specify only consistent abstractions, i.e., abstractions not containing contradictory conditions on the component abstractions. Moreover, we propose a visual query language where different temporal abstractions can be composed to build complex queries: temporal abstractions are visually connected through the usual logical connectives AND, OR, and NOT. The proposed visual language allows one to define temporal abstractions simply by using intuitive metaphors, and to specify temporal intervals related to abstractions by using different temporal granularities. The physician can interact with the designed and implemented tool through point-and-click selections, and can visually compose queries involving several temporal abstractions. The evaluation of the proposed granularity-related metaphors consisted of two parts: (i) solving 30 interpretation exercises by choosing the correct interpretation of a given screenshot representing a possible scenario, and (ii) solving a complex exercise by visually specifying through the interface a scenario described only in natural language. The exercises were completed by 13 subjects. The percentages of correct answers to the interpretation exercises differed slightly across the metaphors (54.4%, striped wall; 73.3%, plastered wall; 61%, brick wall; 61%, no wall), but post hoc statistical analysis of the means confirmed that the differences were not statistically significant. The results of the user-satisfaction questionnaire for the granularity-related metaphors confirmed that there was no clear preference for any of them.
The evaluation of the proposed logical notation consisted of two parts: (i) solving five interpretation exercises, each consisting of a screenshot representing a possible scenario and three possible interpretations, of which only one was correct, and (ii) solving five exercises by visually defining through the interface a scenario described only in natural language. The exercises had increasing difficulty. The evaluation involved a total of 31 subjects. The results of this evaluation phase confirmed the soundness of the proposed solution, even in comparison with a well-known proposal based on a tabular query form (the only significant difference being that our proposal requires more time for the training phase: 21 min versus 14 min). In this work we have considered the issue of visually composing and querying temporal clinical patient data. In this context we have proposed a visual framework for the specification of consistent temporal abstractions with different granularities and for the visual composition of different temporal abstractions to build (possibly) complex queries on clinical databases. A new algorithm has been proposed to check the consistency of the specified granular abstractions. From the evaluation of the proposed metaphors and interfaces, and from the comparison of the visual query language with a well-known visual method for Boolean queries, the soundness of the overall system has been confirmed; moreover, pros and cons and possible improvements emerged from the comparison of the different visual metaphors and solutions. Copyright © 2011 Elsevier B.V. All rights reserved.
XAFS Data Interchange: A single spectrum XAFS data file format
NASA Astrophysics Data System (ADS)
Ravel, B.; Newville, M.
2016-05-01
We propose a standard data format for the interchange of XAFS data. The XAFS Data Interchange (XDI) standard is meant to encapsulate a single spectrum of XAFS along with relevant metadata. XDI is a text-based format with a simple syntax which clearly delineates metadata from the data table in a way that is easily interpreted both by a computer and by a human. The metadata header is inspired by the format of an electronic mail header, representing metadata names and values as an associative array. The data table is represented as columns of numbers. This format can be imported as is into most existing XAFS data analysis, spreadsheet, or data visualization programs. Along with a specification and a dictionary of metadata types, we provide an application programming interface written in C and bindings for dynamic programming languages.
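Purely as an illustration of the header-plus-table layout described above, the following hypothetical Python sketch reads an XDI-like file into a metadata dictionary and a list of numeric rows. The '#' comment convention and the example call are assumptions; the authoritative grammar and metadata dictionary are those of the XDI specification itself.

```python
# Simplified, hypothetical sketch of reading an XDI-like file: an email-style
# header of "name: value" pairs followed by whitespace-separated numeric columns.
# The real XDI grammar and metadata dictionary are defined in the specification;
# the '#' comment convention and the file name used here are assumptions.
def read_xdi_like(path):
    metadata, rows = {}, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            if line.startswith("#"):
                body = line.lstrip("#").strip()
                if ":" in body:
                    key, value = body.split(":", 1)
                    metadata[key.strip()] = value.strip()
            else:
                rows.append([float(x) for x in line.split()])
    return metadata, rows

# Example usage: meta, data = read_xdi_like("spectrum.xdi")
```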
Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan
2014-12-01
The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
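A rough sketch of what a minimal PyGaze script might look like is given below; it is not taken from the paper. Class and method names (Display, Screen, EyeTracker, draw_fixation, sample, and so on) follow the PyGaze documentation as remembered and should be verified against the installed version; tracker type and display settings are normally configured in a separate constants file.

```python
# Rough sketch of a PyGaze session (not from the paper). Class and method names
# follow the PyGaze documentation but should be verified against the installed
# version; tracker type, screen size, etc. are normally set in a constants file.
from pygaze.display import Display
from pygaze.screen import Screen
from pygaze.eyetracker import EyeTracker

disp = Display()                       # opens the experiment window
scr = Screen()
scr.draw_fixation(fixtype='cross')     # draw a central fixation cross

tracker = EyeTracker(disp)             # connects to the configured eyetracker
tracker.calibrate()

tracker.start_recording()
disp.fill(scr)
disp.show()
gaze_x, gaze_y = tracker.sample()      # most recent gaze position in pixels
tracker.stop_recording()

tracker.close()
disp.close()
```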
Optimizing Visually-Assisted Listening Comprehension
ERIC Educational Resources Information Center
Kashani, Ahmad Sabouri; Sajjadi, Samad; Sohrabi, Mohammad Reza; Younespour, Shima
2011-01-01
The fact that visual aids such as pictures or graphs can lead to greater comprehension by language learners has been well established. Nonetheless, the order of presenting visuals to listeners is left unattended. This study examined listening comprehension from a strategy of introducing visual information, either prior to or during an audio…
Visualizing the Verbal and Verbalizing the Visual.
ERIC Educational Resources Information Center
Braden, Roberts A.
This paper explores relationships of visual images to verbal elements, beginning with a discussion of visible language as represented by words printed on the page. The visual flexibility inherent in typography is discussed in terms of the appearance of the letters and the denotative and connotative meanings represented by type, typographical…
Computer aided fixture design - A case based approach
NASA Astrophysics Data System (ADS)
Tanji, Shekhar; Raiker, Saiesh; Mathew, Arun Tom
2017-11-01
Automated fixture design plays an important role in process planning and in the integration of CAD and CAM. An automated fixture setup design system is developed in which, once the fixturing surfaces and points are described, modular fixture components are automatically selected to generate fixture units and are placed into position while satisfying the assembly conditions. In the past, various knowledge-based systems have been developed to implement CAFD in practice. In this paper, to obtain an acceptable automated machining fixture design, a case-based reasoning method with a newly developed retrieval system is proposed. The Visual Basic (VB) programming language is integrated with the SolidWorks API (Application Programming Interface) module to improve the retrieval procedure and reduce computational time. These properties are incorporated in a numerical simulation to determine the best fit for practical use.
ERIC Educational Resources Information Center
Harbusch, Karin; Hausdörfer, Annette
2016-01-01
COMPASS is an e-learning system that can visualize grammar errors during sentence production in German as a first or second language. Via drag-and-drop dialogues, it allows users to freely select word forms from a lexicon and to combine them into phrases and sentences. The system's core component is a natural-language generator that, for every new…
ERIC Educational Resources Information Center
Giordano, Gerard
Neurological data indicate that the universal aptitude for functional language is biologically based, species specific, and developmental. The universality of functional oral speech is indisputable. Everyone, however, does not exhibit similar expertise in processing oral and visual language. Many people can speak two languages functionally but…
ERIC Educational Resources Information Center
Mounty, Judith L.; Pucci, Concetta T.; Harmon, Kristen C.
2014-01-01
A primary tenet underlying American Sign Language/English bilingual education for deaf students is that early access to a visual language, developed in conjunction with language planning principles, provides a foundation for literacy in English. The goal of this study is to obtain an emic perspective on bilingual deaf readers transitioning from…
Knowledge of a Second Language Influences Auditory Word Recognition in the Native Language
ERIC Educational Resources Information Center
Lagrou, Evelyne; Hartsuiker, Robert J.; Duyck, Wouter
2011-01-01
Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether…
The Film in Language Teaching Association (FILTA): A Multilingual Community of Practice
ERIC Educational Resources Information Center
Herrero, Carmen
2016-01-01
This article presents the Film in Language Teaching Association (FILTA) project, a community of practice (CoP) whose main goals are first to engage language teachers in practical uses of film and audio-visual media in the second language classroom; second, to value the artistic features of cinema; and third, to encourage a dialogue between…
An Infinite Game in a Finite Setting: Visualizing Foreign Language Teaching and Learning in America.
ERIC Educational Resources Information Center
Mantero, Miguel
According to contemporary thought and foundational research, this paper presents various elements of the foreign language teaching profession and language learning environment in the United States as either product-driven or process-based. It is argued that a process-based approach to language teaching and learning benefits not only second…
Teaching Film with Blinders On: The Importance of Knowing the Language.
ERIC Educational Resources Information Center
Blakely, Richard
1984-01-01
Suggests the use of foreign films as a teaching aid for foreign language study. However, a thorough knowledge of the film's oral language (or languages) and culture is essential as a first step toward a clear understanding of the film's visual aesthetic. A dependence on subtitles or dubbing is discouraged, due to frequent errors and…
ERIC Educational Resources Information Center
Renish, Angela J.
2016-01-01
Nineteen students whose first language is not English (English Language Learners, ELL) participated in an action research study that focused on the marriage of an art education curriculum and literacy practice. The study introduced students to the consistent use of language in art education as a means to discuss, inform, explain, and demonstrate…
Shedding Light on Words and Sentences: Near-Infrared Spectroscopy in Language Research
ERIC Educational Resources Information Center
Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell; Obrig, Hellmuth
2012-01-01
Investigating the neuronal network underlying language processing may contribute to a better understanding of how the brain masters this complex cognitive function with surprising ease and how language is acquired at a fast pace in infancy. Modern neuroimaging methods permit visualization of the development and the function of the language network. The…
ERIC Educational Resources Information Center
Chambers, Craig G.; Cooke, Hilary
2009-01-01
A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., "Marie va decrire la poule" [Marie will…
Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.
Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly
2012-05-01
Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. Instead, the patterns of activity during task and the functional connectivity at rest are consistent with the known hierarchy of processing in these areas normally seen for vision. The differences in cortical organization between bilateral anophthalmia and other forms of congenital blindness are considered to be due to the total absence of stimulation in 'visual' cortex by light or retinal activity in the former condition, and suggests development of subcortical auditory input to the geniculo-striate pathway.
Visualization of Real-Time Data
NASA Technical Reports Server (NTRS)
Stansifer, Ryan; Engrand, Peter
1996-01-01
In this project we explored various approaches to presenting computer users with real-time data from the numerous systems monitored on the space shuttle. We examined the approach that several projects at the Kennedy Space Center (KSC) used to accomplish this. We undertook to build a prototype system to demonstrate that the Internet and the Java programming language could be used to present the real-time data conveniently. Several Java programs were developed that presented real-time data in different forms, including one that emulated the display screens of the PC GOAL system, which is familiar to many at KSC. Also, we developed several communications programs to supply the data continuously. Furthermore, a framework was created using the World Wide Web (WWW) to organize the collection and presentation of the real-time data. We believe our demonstration project shows the great flexibility of the approach. We had no particular use of the data in mind; instead, we wanted the most general and least complex framework possible. People who wish to view data need only know how to use a WWW browser and the address (the URL). People wanting to build WWW documents containing real-time data need only know the values of a few parameters; they do not need to program in Java or any other language. These are stunning advantages over more monolithic systems.
Flight program language requirements. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1972-01-01
The activities and results of a study for the definition of flight program language requirements are described. A set of detailed requirements is presented for a language capable of supporting onboard application programming for the Marshall Space Flight Center's anticipated future activities in the decade of 1975-85. These requirements are based, in part, on the evaluation of existing flight programming language designs to determine their applicability to anticipated flight programming activities. The coding of benchmark problems in the selected programming languages is discussed. These benchmarks are in the form of program kernels selected from existing flight programs. This approach was taken to ensure that the results of the study would reflect state-of-the-art language capabilities, as well as to determine whether an existing language design should be selected for adaptation.
Barton, Andrea; Sevcik, Rose A; Romski, Mary Ann
2006-03-01
The process of language acquisition requires an individual to organize the world through a system of symbols and referents. For children with severe intellectual disabilities and language delays, the ability to link a symbol to its referent may be a difficult task. In addition to the intervention strategy, issues such as the visual complexity and iconicity of a symbol arise when deciding what to select as a medium to teach language. This study explored the ability of four pre-school age children with developmental and language delays to acquire the meanings of Blissymbols and lexigrams using an observational experiential language intervention. In production, all four of the participants demonstrated symbol-referent relationships, while in comprehension, three of the four participants demonstrated at least emerging symbol-referent relationships. Although the number of symbols learned across participants varied, there were no differences between the learning of arbitrary and comparatively iconic symbols. The participants' comprehension skills appeared to influence their performance.
25 CFR 39.130 - Can ISEF funds be used for Language Development Programs?
Code of Federal Regulations, 2010 CFR
2010-04-01
… INDIAN SCHOOL EQUALIZATION PROGRAM, Indian School Equalization Formula, Language Development Programs. § 39.130 Can ISEF funds be used for Language Development Programs? Yes, schools can use ISEF funds to…
25 CFR 39.130 - Can ISEF funds be used for Language Development Programs?
Code of Federal Regulations, 2011 CFR
2011-04-01
… INDIAN SCHOOL EQUALIZATION PROGRAM, Indian School Equalization Formula, Language Development Programs. § 39.130 Can ISEF funds be used for Language Development Programs? Yes, schools can use ISEF funds to…
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali
2007-01-01
C++ Programming Language: The C++ seminar covers the fundamentals of the C++ programming language. The fundamentals are grouped into three parts, each of which includes both concepts and programming examples intended for hands-on practice. The first part covers the functional aspects of the C++ programming language, with emphasis on function parameters and efficient memory utilization. The second part covers the essential framework of the language: its object-oriented aspects. Information necessary to evaluate various features of object-oriented programming, including encapsulation, polymorphism, and inheritance, will be discussed. The last part of the seminar covers templates and generic programming. Examples include both user-defined and standard templates.
ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations
NASA Astrophysics Data System (ADS)
Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai
2017-07-01
The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Applications (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
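ExcelAutomat itself is written in VBA inside the spreadsheet; as a language-neutral illustration of the same workflow idea (walking a folder of output files, extracting one value per file, and compiling the results into a table), here is a hypothetical Python sketch. The 'calcs/*.log' pattern and the 'SCF Done:' marker are illustrative assumptions, not part of the tool.

```python
# Hypothetical sketch, not part of ExcelAutomat: batch-parse quantum chemistry
# output files and compile one extracted value per file into a CSV table.
# The "calcs/*.log" pattern and the "SCF Done:" marker are illustrative assumptions.
import csv
import glob

rows = []
for path in sorted(glob.glob("calcs/*.log")):
    energy = None
    with open(path, errors="ignore") as fh:
        for line in fh:
            if "SCF Done:" in line:              # keep the last occurrence
                energy = line.split("=")[1].split()[0]
    rows.append([path, energy])

with open("summary.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "final_scf_energy"])
    writer.writerows(rows)
```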
A Home-Language Free Adult Pre-Vocational Audio-Visual Course in English-as-a-Second Language.
ERIC Educational Resources Information Center
Smith, Philip D., Jr.
A pre-vocational English-as-a-second language course for adults was developed for the non-native speaker based upon the following assumptions: the teacher does not have to speak the language of the student; students in a class do not have to speak each others' language; the teacher need not be professionally trained in the field of teaching ESL;…
ERIC Educational Resources Information Center
Liddicoat, Anthony J.; Curnow, Timothy Jowan; Scarino, Angela
2016-01-01
This paper examines the development of the First Language Maintenance and Development (FLMD) program in South Australia. This program is the main language policy activity that specifically focuses on language maintenance in government primary schools and has existed since 1986. During this time, the program has evolved largely as the result of ad…
2002-01-01
wrappers to other widely used languages, namely Tcl/Tk, Java, and Python. VTK is very powerful and covers polygonal models and image processing classes and … as follows: Large Data Visualization and Rendering; Information Visualization for Beginners; Rendering and Visualization in Parallel Environments
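For context on the Python wrapper mentioned in the fragment above, the classic VTK pipeline (source, mapper, actor, renderer, render window) can be sketched as follows; this is a generic minimal example, assuming the vtk package is installed, and is not tied to any particular course material.

```python
# Minimal VTK pipeline through the Python wrapper: source -> mapper -> actor ->
# renderer -> render window. Assumes the vtk package is installed.
import vtk

sphere = vtk.vtkSphereSource()                 # generate polygonal sphere data
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(sphere.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()                             # start the interactive loop
```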
Reading Visual Representations
ERIC Educational Resources Information Center
Rubenstein, Rheta N.; Thompson, Denisse R.
2012-01-01
Mathematics is rich in visual representations. Such visual representations are the means by which mathematical patterns "are recorded and analyzed." With respect to "vocabulary" and "symbols," numerous educators have focused on issues inherent in the language of mathematics that influence students' success with mathematics communication.…
Snowden, Lonnie R; McClellan, Sean R
2013-09-01
We investigated the extent to which implementing language assistance programming through contracting with community-based organizations improved the accessibility of mental health care under Medi-Cal (California's Medicaid program) for Spanish-speaking persons with limited English proficiency, and whether it reduced language-based treatment access disparities. Using a time series nonequivalent control group design, we studied county-level penetration of language assistance programming over 10 years (1997-2006) for Spanish-speaking persons with limited English proficiency covered under Medi-Cal. We used linear regression with county fixed effects to control for ongoing trends and other influences. When county mental health plans contracted with community-based organizations, those implementing language assistance programming increased penetration rates of Spanish-language mental health services under Medi-Cal more than other plans (0.28 percentage points, a 25% increase on average; P < .05). However, the increase was insufficient to significantly reduce language-related disparities. Mental health treatment programs operated by community-based organizations may have moderately improved access after implementing required language assistance programming, but the programming did not reduce entrenched disparities in the accessibility of mental health services.
Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo
2016-01-01
The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH group. This outcome pattern sustained when comparisons were restricted to subgroups of AHI and CHI adults, matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences, presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.
Adaptive Modeling Language and Its Derivatives
NASA Technical Reports Server (NTRS)
Chemaly, Adel
2006-01-01
Adaptive Modeling Language (AML) is the underlying language of an object-oriented, multidisciplinary, knowledge-based engineering framework. AML offers an advanced modeling paradigm with an open architecture, enabling the automation of the entire product development cycle, integrating product configuration, design, analysis, visualization, production planning, inspection, and cost estimation.
Python-Based Applications for Hydrogeological Modeling
NASA Astrophysics Data System (ADS)
Khambhammettu, P.
2013-12-01
Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells, using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The Python wrapper invokes the underlying FORTRAN layer to compute transient groundwater elevations and processes this information to create time-series and 2D plots.
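As a small illustration of the kind of scripting described, the sketch below evaluates the Theis solution for transient drawdown using scipy's exponential integral; the parameter values are arbitrary illustrative choices, not values from the study, and this is not the TYPECURVEGRID-Py code itself.

```python
# Illustrative sketch (values are arbitrary): transient drawdown at a distance r
# from a pumping well using the Theis solution, s = Q/(4*pi*T) * W(u),
# with u = r^2 * S / (4*T*t) and W(u) the exponential integral E1.
import numpy as np
from scipy.special import exp1

Q = 500.0     # pumping rate, m^3/day
T = 250.0     # transmissivity, m^2/day
S = 1e-4      # storativity, dimensionless
r = 100.0     # distance from the pumping well, m

t = np.logspace(-2, 2, 50)          # time since pumping started, days
u = r**2 * S / (4.0 * T * t)
drawdown = Q / (4.0 * np.pi * T) * exp1(u)

for ti, si in zip(t[::10], drawdown[::10]):
    print(f"t = {ti:8.3f} d   s = {si:6.3f} m")
```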
ERIC Educational Resources Information Center
Feldman, David
1975-01-01
Stresses the importance of language laboratories and other technical devices used in foreign language teaching, particularly in programed language instruction. Illustrates, by means of taxonomies, the various stages a foreign language learning program should follow. (Text is in Spanish.) (DS)
Does visual impairment lead to additional disability in adults with intellectual disabilities?
Evenhuis, H M; Sjoukes, L; Koot, H M; Kooijman, A C
2009-01-01
This study addresses the question of the extent to which visual impairment leads to additional disability in adults with intellectual disabilities (ID). In a multi-centre cross-sectional study of 269 adults with mild to profound ID, social and behavioural functioning was assessed with observer-based questionnaires prior to expert assessment of visual function. With linear regression analysis, the percentage of variance explained by levels of visual function was calculated for the total population and per ID level. A total of 107/269 participants were visually impaired or blind (WHO criteria). On top of the decrease associated with ID, visual impairment significantly decreased daily living skills, communication and language, and recognition/communication. Visual impairment did not cause more self-absorbed and withdrawn behaviour or anxiety. Peculiar looking habits correlated with visual impairment and not with ID. In the groups with moderate and severe ID this effect seemed stronger than in the group with profound ID. Although ID alone impairs daily functioning, visual impairment diminishes daily functioning even further. Timely detection and treatment or rehabilitation of visual impairment may positively influence daily functioning, language development, initiative and persistence, social skills, and communication skills, and may reduce insecure movement.
Scalable and portable visualization of large atomistic datasets
NASA Astrophysics Data System (ADS)
Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2004-10-01
A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms that have a high probability of being visible. Finally, a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms.
Program summary:
Title of program: Atomsviewer
Catalogue identifier: ADUM
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUM
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card
Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5
Programming languages used: C++, C and OpenGL
Memory required to execute with typical data: 1 gigabyte of RAM
High-speed storage required: 60 gigabytes
No. of lines in the distributed program including test data, etc.: 550 241
No. of bytes in the distributed program including test data, etc.: 6 258 245
Number of bits in a word: Arbitrary
Number of processors used: 1
Has the code been vectorized or parallelized: No
Distribution format: tar gzip file
Nature of physical problem: Scientific visualization of atomic systems
Method of solution: Rendering of atoms using computer graphics techniques, culling algorithms for data minimization, and levels of detail for minimal rendering
Restrictions on the complexity of the problem: None
Typical running time: The program is interactive in its execution
Unusual features of the program: None
References: The conceptual foundation and subsequent implementation of the algorithms are found in [A. Sharma, A. Nakano, R.K. Kalia, P. Vashishta, S. Kodiyalam, P. Miller, W. Zhao, X.L. Liu, T.J. Campbell, A. Haas, Presence: Teleoperators and Virtual Environments 12 (1) (2003)].
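Atomsviewer's culling is implemented in C++ and OpenGL; purely to illustrate the hierarchical view-frustum-culling idea summarized above, the simplified Python sketch below tests octree nodes (approximated by bounding spheres) against frustum planes and descends only into nodes that are at least partly visible. It is a teaching sketch under those assumptions, not the Atomsviewer algorithm.

```python
# Simplified sketch of hierarchical view-frustum culling on an octree, in the
# spirit of the algorithm described above (the real code is C++/OpenGL).
# A node is kept if its bounding sphere is not completely outside any frustum
# plane; interior nodes that pass the test are recursed into.
import numpy as np

class OctreeNode:
    def __init__(self, center, half_size, atoms=None, children=()):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(half_size) * np.sqrt(3.0)  # bounding sphere of the cube
        self.atoms = atoms or []        # atom positions stored at leaf nodes
        self.children = list(children)

def sphere_outside_plane(center, radius, plane):
    normal, d = plane                   # plane: dot(normal, x) + d >= 0 is "inside"
    return np.dot(normal, center) + d < -radius

def cull(node, frustum_planes, visible):
    # Discard the whole subtree if the bounding sphere is outside any plane.
    for plane in frustum_planes:
        if sphere_outside_plane(node.center, node.radius, plane):
            return
    if node.children:
        for child in node.children:
            cull(child, frustum_planes, visible)
    else:
        visible.extend(node.atoms)

# Toy usage: a single leaf node and a frustum reduced to one plane (x >= 0).
root = OctreeNode(center=(1.0, 0.0, 0.0), half_size=0.5,
                  atoms=[(1.0, 0.0, 0.0), (1.2, 0.1, -0.1)])
planes = [(np.array([1.0, 0.0, 0.0]), 0.0)]
seen = []
cull(root, planes, seen)
print(seen)
```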
ERIC Educational Resources Information Center
Kover, Sara T.; McCary, Lindsay M.; Ingram, Alexandra M.; Hatton, Deborah D.; Roberts, Jane E.
2015-01-01
Fragile X syndrome (FXS) is associated with significant language and communication delays, as well as problems with attention. This study investigated early language abilities in infants and toddlers with FXS (n = 13) and considered visual attention as a predictor of those skills. We found that language abilities increased over the study period of…
Establishing a Framework for the Study of English.
ERIC Educational Resources Information Center
Woolf, Leonard; Velder, Milton
1970-01-01
This 2-week unit uses visuals of a recent space walk as the basis for an introductory study of the nature of language. The aims of the unit are (1) to show pupils how language operates, (2) to stimulate awareness of the importance of basic language skills, (3) to develop an increased desire in using language effectively, and (4) to acquaint…