Sex differences on a computerized mental rotation task disappear with computer familiarization.
Roberts, J E; Bell, M A
2000-12-01
The area of cognitive research that has produced the most consistent sex differences is spatial ability. In particular, men consistently perform better on mental rotation tasks than do women. This study examined the effects of familiarization with a computer on performance of a computerized two-dimensional mental rotation task. Two groups of college students (N=44) performed the rotation task, with one group performing a color-matching task that allowed them to be familiarized with the computer prior to the rotation task. Among the participants who only performed the rotation task, the 11 men performed better than the 11 women. Among the participants who performed the computer familiarization task before the rotation task, however, there were no sex differences on the mental rotation task between the 10 men and 12 women. These data indicate that sex differences on this two-dimensional task may reflect familiarization with the computer, not the mental rotation component of the task. Further research with larger samples and increased range of task difficulty is encouraged.
Computer task performance by subjects with Duchenne muscular dystrophy.
Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira
2016-01-01
Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.
Performing a global barrier operation in a parallel computer
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2014-12-09
Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
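A minimal sketch of the described barrier logic, simulated with threads in Python (the patent targets distributed compute nodes; the thread-based simulation, the Node class, and the counts here are illustrative assumptions, not the patented implementation):

```python
import threading

NUM_NODES, TASKS_PER_NODE = 4, 3
global_barrier = threading.Barrier(NUM_NODES)   # one party per master task

class Node:
    """One compute node: its tasks share a single local barrier."""
    def __init__(self, n_tasks):
        self.n_tasks = n_tasks
        self.joined = 0                          # non-master tasks at the local barrier
        self.released = False
        self.cond = threading.Condition()

def run_task(node, is_master):
    if not is_master:
        with node.cond:                          # non-master: join the local barrier
            node.joined += 1
            node.cond.notify_all()
            while not node.released:
                node.cond.wait()
    else:
        with node.cond:                          # master: wait until all others joined locally...
            while node.joined < node.n_tasks - 1:
                node.cond.wait()
        global_barrier.wait()                    # ...then join the global barrier...
        with node.cond:                          # ...and finally release the local barrier
            node.released = True
            node.cond.notify_all()

threads = []
for _ in range(NUM_NODES):
    node = Node(TASKS_PER_NODE)
    threads += [threading.Thread(target=run_task, args=(node, t == 0))
                for t in range(TASKS_PER_NODE)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print("all nodes passed the global barrier")
```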
Multi-Attribute Task Battery - Applications in pilot workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Arnegard, Ruth J.; Comstock, J. R., Jr.
1991-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
The multi-attribute task battery for human operator workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Arnegard, Ruth J.
1992-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
Task allocation in a distributed computing system
NASA Technical Reports Server (NTRS)
Seward, Walter D.
1987-01-01
A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
Method and system for benchmarking computers
Gustafson, John L.
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
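A sketch of the fixed-interval, scalable-workload idea the patent describes (the integration task, the doubling schedule, and the 5-second interval are illustrative assumptions, not the patented implementation):

```python
import time

def benchmark(interval_s=5.0):
    """Refine a midpoint-rule estimate of an integral for a fixed wall-clock
    interval; the rating is how far through the scalable task set we get."""
    deadline = time.perf_counter() + interval_s
    n = 1
    while time.perf_counter() < deadline:
        h = 1.0 / n
        # One task from the scalable set: integrate x*x on [0, 1] at resolution n.
        estimate = sum(((i + 0.5) * h) ** 2 for i in range(n)) * h
        n *= 2                     # each task doubles the degree of resolution
    return n // 2, estimate        # resolution reached and the answer it produced

resolution, estimate = benchmark()
print(f"rating: reached n={resolution}, integral ≈ {estimate:.6f} (exact 1/3)")
```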
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting the computational needs of large tasks. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used to address this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult, since the methods were developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan, and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min, and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
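As a concrete example, a sketch of one of the six heuristics, Min-min (the expected-time-to-compute matrix values and sizes are illustrative; cost, degree of imbalance, and throughput would be computed analogously from the resulting schedule):

```python
def min_min(etc):
    """Min-min scheduling: etc[t][m] = expected execution time of task t on
    machine m. Repeatedly assign the task whose earliest completion time is
    smallest, to the machine achieving it."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines           # machine-available times
    unassigned = set(range(n_tasks))
    schedule = {}
    while unassigned:
        # Over all (task, machine) pairs, find the minimum completion time.
        t, m, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unassigned for m in range(n_machines)),
            key=lambda x: x[2],
        )
        schedule[t] = m
        ready[m] = finish
        unassigned.remove(t)
    return schedule, max(ready)          # assignment and makespan

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17], [12, 13, 10]]
schedule, makespan = min_min(etc)
print(schedule, makespan)
```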
Study to design and develop remote manipulator system. [computer simulation of human performance
NASA Technical Reports Server (NTRS)
Hill, J. W.; Mcgovern, D. E.; Sword, A. J.
1974-01-01
Modeling of human performance in remote manipulation tasks is reported by automated procedures using computers to analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and in some cases the trajectory and velocity of the hand itself. In this way the operator's strategies with different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Some progress is described in obtaining a set of standard tasks and difficulty measures for evaluating manipulator performance.
Psychology of computer use: XXXII. Computer screen-savers as distractors.
Volk, F A; Halcomb, C G
1994-12-01
The differences in performance of 16 male and 16 female undergraduates on three cognitive tasks were investigated in the presence of visual distractors (computer-generated dynamic graphic images). These tasks included skilled and unskilled proofreading and listening comprehension. The visually demanding task of proofreading (skilled and unskilled) showed no significant decreases in performance in the distractor conditions. Results showed significant decrements, however, in performance on listening comprehension in at least one of the distractor conditions.
Modeling Cognitive Strategies during Complex Task Performing Process
ERIC Educational Resources Information Center
Mazman, Sacide Guzin; Altun, Arif
2012-01-01
The purpose of this study is to examine individuals' computer-based complex task performing processes and strategies in order to determine the reasons for failure, using cognitive task analysis and cued retrospective think-aloud with eye movement data. The study group was five senior students from Computer Education and Instructional Technologies…
Taib, Mohd Firdaus Mohd; Bahn, Sangwoo; Yun, Myung Hwan
2016-06-27
The popularity of mobile computing products is well known. Thus, it is crucial to evaluate their contribution to musculoskeletal disorders during computer usage in both comfortable and stressful environments. This study explores the effect on muscle activity of different computer products used with different tasks designed to induce psychosocial stress. Fourteen male subjects performed computer tasks: sixteen combinations of four different computer products with four different tasks used to induce stress. Electromyography for four muscles in the forearm, shoulder, and neck regions and task performance were recorded. The increment in trapezius muscle activity depended on the task used to induce the stress, with higher levels of stress producing greater increments. However, this relationship was not found in the other three muscles. In addition, compared with desktop and laptop use, the lowest activity for all muscles was obtained during the use of a tablet or smart phone. The best net performance was obtained in a comfortable environment. However, during stressful conditions, the best performance can be obtained using the device that a user is most comfortable with or has the most experience with. Different computer products and different levels of stress play a big role in muscle activity during computer work. Both of these factors must be taken into account in order to reduce the occurrence of musculoskeletal disorders or problems.
Job Management and Task Bundling
NASA Astrophysics Data System (ADS)
Berkowitz, Evan; Jansen, Gustav R.; McElvain, Kenneth; Walker-Loud, André
2018-03-01
High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, that can intelligently backfill to consume idle time without substantial changes to users' current workflows or executables.
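METAQ and mpi_jm are real tools, but their internals are not described here; the following is our own toy illustration of the bundling idea — greedily packing small tasks into one machine-size job (all task shapes and the partition size are made up):

```python
def bundle(tasks, partition_nodes):
    """Greedily pack tasks (node-count, minutes) into bundles that fill a
    large partition; each bundle becomes one batch job whose wall time is
    the longest task it carries."""
    bundles = []
    for nodes, minutes in sorted(tasks, reverse=True):   # biggest first
        for b in bundles:
            if b["nodes"] + nodes <= partition_nodes:    # runs alongside the others
                b["nodes"] += nodes
                b["wall"] = max(b["wall"], minutes)
                b["tasks"].append((nodes, minutes))
                break
        else:
            bundles.append({"nodes": nodes, "wall": minutes,
                            "tasks": [(nodes, minutes)]})
    return bundles

tasks = [(32, 60), (16, 30), (16, 45), (8, 120), (64, 90), (8, 20)]
for i, b in enumerate(bundle(tasks, partition_nodes=128)):
    print(f"job {i}: {b['nodes']} nodes, {b['wall']} min, tasks={b['tasks']}")
```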
ERIC Educational Resources Information Center
Paisley, William; Butler, Matilda
This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…
Distributed computation of graphics primitives on a transputer network
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1988-01-01
A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformations and scan-conversion tasks that would normally be assigned to a single transputer based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. A change to more computation per node for every communication (perform more complex tasks on each node) may cause the desired increase in throughput.
Belger, A; Banich, M T
1998-07-01
Because interaction of the cerebral hemispheres has been found to aid task performance under demanding conditions, the present study examined how this effect is moderated by computational complexity, the degree of lateralization for a task, and individual differences in asymmetric hemispheric activation (AHA). Computational complexity was manipulated across tasks either by increasing the number of inputs to be processed or by increasing the number of steps to a decision. Comparison of within- and across-hemisphere trials indicated that the size of the between-hemisphere advantage increased as a function of task complexity, except for a highly lateralized rhyme decision task that can only be performed by the left hemisphere. Measures of individual differences in AHA revealed that when task demands and an individual's AHA both load on the same hemisphere, the ability to divide the processing between the hemispheres is limited. Thus, interhemispheric division of processing improves performance at higher levels of computational complexity only when the required operations can be divided between the hemispheres.
Evaluating the Efficacy of the Cloud for Cluster Computation
NASA Technical Reports Server (NTRS)
Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom
2012-01-01
Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29 instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
Brain-computer interface control along instructed paths
NASA Astrophysics Data System (ADS)
Sadtler, P. T.; Ryu, S. I.; Tyler-Kabara, E. C.; Yu, B. M.; Batista, A. P.
2015-02-01
Objective. Brain-computer interfaces (BCIs) are being developed to assist paralyzed people and amputees by translating neural activity into movements of a computer cursor or prosthetic limb. Here we introduce a novel BCI task paradigm, intended to help accelerate improvements to BCI systems. Through this task, we can push the performance limits of BCI systems, we can quantify more accurately how well a BCI system captures the user’s intent, and we can increase the richness of the BCI movement repertoire. Approach. We have implemented an instructed path task, wherein the user must drive a cursor along a visible path. The instructed path task provides a versatile framework to increase the difficulty of the task and thereby push the limits of performance. Relative to traditional point-to-point tasks, the instructed path task allows more thorough analysis of decoding performance and greater richness of movement kinematics. Main results. We demonstrate that monkeys are able to perform the instructed path task in a closed-loop BCI setting. We further investigate how the performance under BCI control compares to native arm control, whether users can decrease their movement variability in the face of a more demanding task, and how the kinematic richness is enhanced in this task. Significance. The use of the instructed path task has the potential to accelerate the development of BCI systems and their clinical translation.
Computer-Mediated Training Tools to Enhance Joint Task Force Cognitive Leadership Skills
2007-04-01
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to maintain both global and local search ability. Performance tests were carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, achieving optimal scheduling of cloud computing tasks.
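A compact sketch of differential evolution applied to task-to-VM assignment (the makespan fitness, the linearly decaying mutation factor standing in for the paper's dynamic mutation strategy, and all constants are illustrative assumptions):

```python
import random

def makespan(assign, exec_time):
    """assign[t] = VM index for task t; exec_time[t][v] = runtime of t on VM v."""
    loads = [0.0] * len(exec_time[0])
    for t, v in enumerate(assign):
        loads[v] += exec_time[t][v]
    return max(loads)

def de_schedule(exec_time, pop_size=30, gens=200, cr=0.9):
    n_tasks, n_vms = len(exec_time), len(exec_time[0])

    def decode(vec):                       # real-valued genome -> VM assignment
        return [min(int(g), n_vms - 1) for g in vec]

    def fit(vec):
        return makespan(decode(vec), exec_time)

    pop = [[random.uniform(0, n_vms) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for gen in range(gens):
        f = 0.9 - 0.5 * gen / gens         # dynamic mutation factor: explore early, exploit late
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[k] + f * (b[k] - c[k]) if random.random() < cr else pop[i][k]
                     for k in range(n_tasks)]
            trial = [min(max(g, 0.0), n_vms - 1e-9) for g in trial]
            if fit(trial) < fit(pop[i]):   # greedy selection
                pop[i] = trial
    best = min(pop, key=fit)
    return decode(best), fit(best)

exec_time = [[random.uniform(5, 20) for _ in range(4)] for _ in range(12)]
print(de_schedule(exec_time))
```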
ERIC Educational Resources Information Center
Tanaka, Ayumi; Takehara, Takuma; Yamauchi, Hirotsugu
2006-01-01
The aims of the study were to test the linkages between achievement goals and task performance, as mediated by state anxiety arousal. Performance expectancy was also examined as an antecedent of achievement goals. A presentation task in a computer practice class was used as the achievement task. Fifty-three undergraduates (37 females and 16 males) were…
Control-display mapping in brain-computer interfaces.
Thurlings, Marieke E; van Erp, Jan B F; Brouwer, Anne-Marie; Blankertz, Benjamin; Werkhoven, Peter
2012-01-01
Event-related potential (ERP) based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. When using a tactile ERP-BCI for navigation, mapping is required between navigation directions on a visual display and unambiguously corresponding tactile stimuli (tactors) from a tactile control device: control-display mapping (CDM). We investigated the effect of congruent (both display and control horizontal or both vertical) and incongruent (vertical display, horizontal control) CDMs on task performance, the ERP and potential BCI performance. Ten participants attended to a target (determined via CDM), in a stream of sequentially vibrating tactors. We show that congruent CDM yields best task performance, enhanced the P300 and results in increased estimated BCI performance. This suggests a reduced availability of attentional resources when operating an ERP-BCI with incongruent CDM. Additionally, we found an enhanced N2 for incongruent CDM, which indicates a conflict between visual display and tactile control orientations. Incongruency in control-display mapping reduces task performance. In this study, brain responses, task and system performance are related to (in)congruent mapping of command options and the corresponding stimuli in a brain-computer interface (BCI). Directional congruency reduces task errors, increases available attentional resources, improves BCI performance and thus facilitates human-computer interaction.
Virtual reality computer simulation.
Grantcharov, T P; Rosenberg, J; Pahle, E; Funch-Jensen, P
2001-03-01
Objective assessment of psychomotor skills should be an essential component of a modern surgical training program. There are computer systems that can be used for this purpose, but their wide application is not yet generally accepted. The aim of this study was to validate the role of virtual reality computer simulation as a method for evaluating surgical laparoscopic skills. The study included 14 surgical residents. On day 1, they performed two runs of all six tasks on the Minimally Invasive Surgical Trainer, Virtual Reality (MIST VR). On day 2, they performed a laparoscopic cholecystectomy on living pigs; afterward, they were tested again on the MIST VR. A group of experienced surgeons evaluated the trainees' performance on the animal operation, giving scores for total performance error and economy of motion. During the tasks on the MIST VR, errors and noneconomy of movements for the left and right hand were also recorded. There were significant correlations between error scores in vivo and three of the six in vitro tasks (p < 0.05). In vivo economy scores correlated significantly with non-economy right-hand scores for five of the six tasks and with non-economy left-hand scores for one of the six tasks (p < 0.05). In this study, laparoscopic performance in the animal model correlated significantly with performance on the computer simulator. Thus, the computer model seems to be a promising objective method for the assessment of laparoscopic psychomotor skills.
An opportunity cost model of subjective effort and task performance
Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus
2013-01-01
Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775
Straus, S G; McGrath, J E
1994-02-01
The authors investigated the hypothesis that as group tasks pose greater requirements for member interdependence, communication media that transmit more social context cues will foster group performance and satisfaction. Seventy-two 3-person groups of undergraduate students worked in either computer-mediated or face-to-face meetings on 3 tasks with increasing levels of interdependence: an idea-generation task, an intellective task, and a judgment task. Results showed few differences between computer-mediated and face-to-face groups in the quality of the work completed but large differences in productivity favoring face-to-face groups. Analysis of productivity and of members' reactions supported the predicted interaction of tasks and media, with greater discrepancies between media conditions for tasks requiring higher levels of coordination. Results are discussed in terms of the implications of using computer-mediated communications systems for group work.
Strategy generalization across orientation tasks: testing a computational cognitive model.
Gunzelmann, Glenn
2008-07-08
Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human performance was measured on an orientation task requiring participants to identify the location of a target either on a map (find-on-map) or within an egocentric view of a space (find-in-scene). A general strategy instantiated in a computational cognitive model of the find-on-map task, based on the results from Gunzelmann and Anderson (2006), was adapted to perform both tasks and used to generate performance predictions for a new study. The qualitative fit of the model to the human data supports the view that participants were able to tailor a general strategy to the requirements of particular spatial tasks. The quantitative differences between the predictions of the model and the performance of human participants in the new experiment expose individual differences in sample populations. The model provides a means of accounting for those differences and a framework for understanding how human spatial abilities are applied to naturalistic spatial tasks that involve reasoning with maps.
ERIC Educational Resources Information Center
Meyer, David E.; Kieras, David E.
Perceptual-motor and cognitive processes whereby people perform multiple concurrent tasks have been studied through an overlapping-tasks procedure in which two successive choice-reaction tasks are performed with a variable interval (stimulus onset asynchrony, or SOA) between the beginning of the first and second tasks. The increase in subjects'…
MAT - MULTI-ATTRIBUTE TASK BATTERY FOR HUMAN OPERATOR WORKLOAD AND STRATEGIC BEHAVIOR RESEARCH
NASA Technical Reports Server (NTRS)
Comstock, J. R.
1994-01-01
MAT, a Multi-Attribute Task battery, gives the researcher the capability of performing multi-task workload and performance experiments. The battery provides a benchmark set of tasks for use in a wide range of laboratory studies of operator performance and workload. MAT incorporates tasks analogous to activities that aircraft crew members perform in flight, while providing a high degree of experiment control, performance data on each subtask, and freedom to use non-pilot test subjects. The MAT battery primary display is composed of four separate task windows which are as follows: a monitoring task window which includes gauges and warning lights, a tracking task window for the demands of manual control, a communication task window to simulate air traffic control communications, and a resource management task window which permits maintaining target levels on a fuel management task. In addition, a scheduling task window gives the researcher information about future task demands. The battery also provides the option of manual or automated control of tasks. The task generates performance data for each subtask. The task battery may be paused and onscreen workload rating scales presented to the subject. The MAT battery was designed to use a serially linked second computer to generate the voice messages for the Communications task. The MATREMX program and support files, which are included in the MAT package, were designed to work with the Heath Voice Card (Model HV-2000, available through the Heath Company, Benton Harbor, Michigan 49022); however, the MATREMX program and support files may easily be modified to work with other voice synthesizer or digitizer cards. The MAT battery task computer may also be used independent of the voice computer if no computer synthesized voice messages are desired or if some other method of presenting auditory messages is devised. MAT is written in QuickBasic and assembly language for IBM PC series and compatible computers running MS-DOS. The code in MAT is written for Microsoft QuickBasic 4.5 and Microsoft Macro Assembler 5.1. This package requires a joystick and EGA or VGA color graphics. An 80286, 386, or 486 processor machine is highly recommended. The standard distribution medium for MAT is a 5.25 inch 360K MS-DOS format diskette. The files are compressed using the PKZIP file compression utility. PKUNZIP is included on the distribution diskette. MAT was developed in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS, Microsoft QuickBasic, and Microsoft Macro Assembler are registered trademarks of Microsoft Corporation. PKZIP and PKUNZIP are registered trademarks of PKWare, Inc.
NASA Technical Reports Server (NTRS)
1982-01-01
A summary of tasks performed on an integrated command, control, communication, and computation system design study is given. The Tracking and Data Relay Satellite System command and control system study, an automated real-time operations study, and image processing work are discussed.
NASA Astrophysics Data System (ADS)
Zhou, Weimin; Anastasio, Mark A.
2018-03-01
It has been advocated that task-based measures of image quality (IQ) should be employed to evaluate and optimize imaging systems. Task-based measures of IQ quantify the performance of an observer on a medically relevant task. The Bayesian Ideal Observer (IO), which employs complete statistical information of the object and noise, achieves the upper limit of the performance for a binary signal classification task. However, computing the IO performance is generally analytically intractable and can be computationally burdensome when Markov-chain Monte Carlo (MCMC) techniques are employed. In this paper, supervised learning with convolutional neural networks (CNNs) is employed to approximate the IO test statistics for a signal-known-exactly and background-known-exactly (SKE/BKE) binary detection task. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are compared to those produced by the analytically computed IO. The advantages of the proposed supervised learning approach for approximating the IO are demonstrated.
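A minimal sketch of the supervised-learning setup on a toy SKE/BKE task, a known Gaussian signal in white noise (the architecture and training details are illustrative, not the paper's; for this noise model the IO reduces to a matched filter, so the sketch mainly shows the pipeline):

```python
import torch
import torch.nn as nn

# Toy SKE/BKE data: 32x32 white-noise backgrounds, half with a known signal added.
N, S = 2048, 32
yy, xx = torch.meshgrid(torch.arange(S), torch.arange(S), indexing="ij")
signal = 0.8 * torch.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 18.0)
labels = torch.randint(0, 2, (N,)).float()
images = torch.randn(N, 1, S, S) + labels.view(-1, 1, 1, 1) * signal

net = nn.Sequential(                      # small CNN producing a scalar test statistic
    nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
    nn.Conv2d(8, 8, 5, padding=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * S * S, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()          # sigmoid of the statistic ~ P(signal | image)

for epoch in range(5):
    for i in range(0, N, 128):
        x, y = images[i:i + 128], labels[i:i + 128]
        opt.zero_grad()
        loss = loss_fn(net(x).squeeze(1), y)
        loss.backward()
        opt.step()

# The trained network's output is the learned test statistic; sweeping a
# threshold over it traces out an ROC curve whose AUC can be compared to the IO.
```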
Image Processing and Computer Aided Diagnosis in Computed Tomography of the Breast
2007-03-01
TERMS breast imaging, breast CT, scatter compensation, denoising, CAD , Cone-beam CT 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF...clinical projection images. The CAD tool based on signal known exactly (SKE) scenario is under development. Task 6: Test and compare the...performances of the CAD developed in Task 5 applied to processed projection data from Task 1 with the CAD performance on the projection data without Bayesian
Women and Computers: Effects of Stereotype Threat on Attribution of Failure
ERIC Educational Resources Information Center
Koch, Sabine C.; Muller, Stephanie M.; Sieverding, Monika
2008-01-01
This study investigated whether stereotype threat can influence women's attributions of failure in a computer task. Male and female college-age students (n = 86, 16-21 years old) from Germany were asked to work on a computer task and were told beforehand that in this task, either (a) men usually perform better than women do (negative threat…
Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael
2015-04-08
The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through the genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
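A simplified sketch of the approach: evolve a rank-to-node mapping that keeps heavily communicating ranks close on the interconnect (the 1-D nearest-neighbor communication pattern, the mutation-only evolution, and the scalar node coordinates are illustrative simplifications of the paper's genetic algorithm):

```python
import random

def comm_cost(perm, node_coord):
    """Total hop distance, assuming each rank talks mostly to rank+1
    (a 1-D communication topology, as in a domain-decomposed stencil code)."""
    return sum(abs(node_coord[perm[r]] - node_coord[perm[r + 1]])
               for r in range(len(perm) - 1))

def reorder(node_coord, pop_size=40, gens=500):
    n = len(node_coord)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: comm_cost(p, node_coord))
        survivors = pop[:pop_size // 2]
        children = []
        for p in survivors:                 # mutate survivors with a random swap
            child = p[:]
            i, j = random.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: comm_cost(p, node_coord))

# The scheduler gave us a non-contiguous allocation: these are the node positions.
node_coord = random.sample(range(200), 16)
best = reorder(node_coord)
print("cost:", comm_cost(best, node_coord), "mapping:", best)
```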
NASA Technical Reports Server (NTRS)
Smith, M. E.; Gevins, A.; Brown, H.; Karnik, A.; Du, R.
2001-01-01
Electroencephalographic (EEG) recordings were made while 16 participants performed versions of a personal-computer-based flight simulation task of low, moderate, or high difficulty. As task difficulty increased, frontal midline theta EEG activity increased and alpha band activity decreased. A participant-specific function that combined multiple EEG features to create a single load index was derived from a sample of each participant's data and then applied to new test data from that participant. Index values were computed for every 4 s of task data. Across participants, mean task load index values increased systematically with increasing task difficulty and differed significantly between the different task versions. Actual or potential applications of this research include the use of multivariate EEG-based methods to monitor task loading during naturalistic computer-based work.
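A sketch of the kind of per-epoch feature computation involved, on synthetic data (the 4 s epochs follow the abstract; the band edges and the simple theta/alpha log-ratio are illustrative stand-ins for the participant-specific multivariate index the study derived):

```python
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate (Hz), synthetic signal
eeg = np.random.randn(60 * fs)             # one minute of fake frontal-midline EEG

def band_power(freqs, psd, lo, hi):
    band = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[band], freqs[band])

epoch_len = 4 * fs                         # the study computed an index per 4 s
for i in range(0, len(eeg) - epoch_len + 1, epoch_len):
    freqs, psd = welch(eeg[i:i + epoch_len], fs=fs, nperseg=fs)
    theta = band_power(freqs, psd, 4, 8)   # frontal midline theta rises with load
    alpha = band_power(freqs, psd, 8, 13)  # alpha falls with load
    index = np.log(theta / alpha)          # one scalar load estimate per epoch
    print(f"epoch {i // epoch_len}: load index {index:+.2f}")
```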
ERIC Educational Resources Information Center
Rozell, E. J.; Gardner, W. L., III
1999-01-01
A model of the intrapersonal processes impacting computer-related performance was tested using data from 75 manufacturing employees in a computer training course. Gender, computer experience, and attributional style were predictive of computer attitudes, which were in turn related to computer efficacy, task-specific performance expectations, and…
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
NASA Astrophysics Data System (ADS)
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
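A toy sketch of the split into local and global computation tasks, with the global step realized by average consensus on a ring graph (the topology, consensus weights, and iteration counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 8, 4
# Each node holds a local sample covariance; the target is their average.
locals_ = [(lambda a: a @ a.T / 50)(rng.normal(size=(dim, 50)))
           for _ in range(n_nodes)]
C = sum(locals_) / n_nodes

# Ring-graph consensus weights (doubly stochastic => rows converge to the average).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 0.25

x = rng.normal(size=dim)
for _ in range(100):                       # power iterations
    y = np.stack([Ci @ x for Ci in locals_])   # local computation tasks
    for _ in range(60):                        # global task via average consensus
        y = W @ y
    x = y[0] / np.linalg.norm(y[0])        # every node now holds ~ C @ x, normalized

est = x @ C @ x                            # Rayleigh quotient ~ top eigenvalue
print("estimate:", est, " true:", np.linalg.eigvalsh(C)[-1])
```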
Parallel processing using an optical delay-based reservoir computer
NASA Astrophysics Data System (ADS)
Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy
2016-04-01
Delay systems subject to delayed optical feedback have recently shown great potential in solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, the reservoir computing systems based on delay dynamics discussed in the literature are designed by coupling many stand-alone components, which leads to bulky, non-monolithic systems lacking long-term stability. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers (SRLs), semiconductor lasers whose cavity consists of a ring-shaped waveguide. SRLs are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions, between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks, even with input data signals of a different nature, can be computed simultaneously using a single photonic nonlinear node, relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time series prediction and classification for nonlinear channel equalization. We take advantage of the different directional modes to process individual tasks: each directional mode processes one task, mitigating possible crosstalk between the tasks. Our results indicate that prediction/classification errors comparable to state-of-the-art performance can be obtained even in the presence of noise, despite the two tasks being computed simultaneously. We also find that good performance is obtained for both tasks over a broad range of parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015].
Automated Instructional Monitors for Complex Operational Tasks. Final Report.
ERIC Educational Resources Information Center
Feurzeig, Wallace
A computer-based instructional system is described which incorporates diagnosis of students' difficulties in acquiring complex concepts and skills. A computer automatically generated a simulated display. It then monitored and analyzed a student's work in the performance of assigned training tasks. Two major tasks were studied. The first,…
Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task
NASA Astrophysics Data System (ADS)
Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.
2014-12-01
Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.
SRM Internal Flow Test and Computational Fluid Dynamic Analysis. Volume 1; Major Task Summaries
NASA Technical Reports Server (NTRS)
Whitesides, R. Harold; Dill, Richard A.; Purinton, David C.
1995-01-01
During the four-year period of performance of NASA contract NAS8-39095, ERC performed a wide variety of tasks to support the design and continued development of new and existing solid rocket motors and the resolution of operational problems associated with existing solid rocket motors at NASA MSFC. This report summarizes the support provided to NASA MSFC during the contractual period of performance. The report is divided into three main sections. The first section presents summaries of the major tasks performed. These tasks are grouped into three major categories: full-scale motor analysis, subscale motor analysis, and cold flow analysis. The second section includes summaries describing the computational fluid dynamics (CFD) tasks performed. The third section, the appendices of the report, presents detailed descriptions of the analysis efforts as well as published papers, memoranda, and final reports associated with specific tasks. These appendices are referenced in the summaries. The subsection numbers of the three sections correspond to the same topics for direct cross-referencing.
Attention in a multi-task environment
NASA Technical Reports Server (NTRS)
Andre, Anthony D.; Heers, Susan T.
1993-01-01
Two experiments used a low-fidelity multi-task simulation to investigate the effects of cue specificity on task preparation and performance. Subjects performed a continuous compensatory tracking task and were periodically prompted to perform one of several concurrent secondary tasks. The results provide strong evidence that subjects enacted a strategy of actively diverting resources toward secondary task preparation only when they had specific information about an upcoming task to be performed. However, this strategy was affected little by the type of task cued (Experiment 1) or by its difficulty level (Experiment 2). Overall, subjects seemed aware of both the costs (degraded primary-task tracking) and benefits (improved secondary-task performance) of cue information. Implications of the present results for computational human performance/workload models are discussed.
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1977-01-01
Models, measures, and techniques were developed for evaluating the effectiveness of aircraft computing systems. The concept of effectiveness involves aspects of system performance, reliability, and worth. Specifically, a model hierarchy was developed in detail at the mission, functional task, and computational task levels. An appropriate class of stochastic models was investigated to serve as bottom-level models in the hierarchical scheme. A unified measure of effectiveness called 'performability' was defined and formulated.
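In rough terms, performability can be written as the probability that the system's accomplishment level lands in a given set; a hedged sketch of the definition, with notation ours rather than the report's:

$\mathrm{perf}_S(B) = \Pr[\, Y_S \in B \,]$

where $Y_S$ is the (random) accomplishment level attained by system $S$ over its utilization period and $B$ ranges over sets of accomplishment levels, so that performance, reliability, and worth are unified in the distribution of $Y_S$.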
Emergent Leadership and Team Effectiveness on a Team Resource Allocation Task
1987-10-01
equivalent training and experience on this task, but they had different levels of experience with computers and video games. This differential experience… typed: that is, it is sex-typed to the extent that males spend more time on related instruments like computers and video games. However, the sex… perform better or worse than less talkative teams? Did teams with much computer and/or video game experience perform better than inexperienced teams
Refueling Strategies for a Team of Cooperating AUVs
2011-01-01
manager, and thus the constraint a centrally managed underwater network imposes on the mission. Task management utilizing Robust Decentralized Task… the computational complexity. A bid-based approach to task management has also been studied as a possible means of decentralization of group task… currently performing another task. In [18], ground robots perform distributed task allocation using the ASyMTRy-D algorithm, which is based on CNP
ERIC Educational Resources Information Center
Sins, Patrick H. M.; van Joolingen, Wouter R.; Savelsbergh, Elwin R.; van Hout-Wolters, Bernadette
2008-01-01
Purpose of the present study was to test a conceptual model of relations among achievement goal orientation, self-efficacy, cognitive processing, and achievement of students working within a particular collaborative task context. The task involved a collaborative computer-based modeling task. In order to test the model, group measures of…
Strategy Generalization across Orientation Tasks: Testing a Computational Cognitive Model
ERIC Educational Resources Information Center
Gunzelmann, Glenn
2008-01-01
Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human…
Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks
Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong
2011-01-01
In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks has been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to the limitations of WSNs such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA) with a well-designed particle position code and fitness function is proposed. A mutation operator which can effectively improve the algorithm’s ability of global search and population diversity is also introduced in this algorithm. Finally, the simulation results show that the proposed solution can achieve significant better performance than other algorithms. PMID:22163971
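A compact sketch of a binary PSO with a mutation operator for choosing a sensor alliance (the capability/energy fitness and every constant are our illustrative stand-ins for the paper's DPSO-DA particle coding and fitness design):

```python
import math
import random

caps = [random.uniform(1, 5) for _ in range(20)]      # sensor computing capability
energy = [random.uniform(1, 5) for _ in range(20)]    # cost of enlisting a sensor
DEMAND = 25.0                                         # capability the task needs

def fitness(bits):
    cap = sum(c for c, b in zip(caps, bits) if b)
    cost = sum(e for e, b in zip(energy, bits) if b)
    return cost + 100.0 * max(0.0, DEMAND - cap)      # penalize unmet demand

n, swarm, iters = len(caps), 30, 200
X = [[random.randint(0, 1) for _ in range(n)] for _ in range(swarm)]
V = [[0.0] * n for _ in range(swarm)]
P = [x[:] for x in X]                                  # personal bests
G = min(P, key=fitness)[:]                             # global best

for _ in range(iters):
    for i in range(swarm):
        for d in range(n):
            V[i][d] = (0.7 * V[i][d]
                       + 1.5 * random.random() * (P[i][d] - X[i][d])
                       + 1.5 * random.random() * (G[d] - X[i][d]))
            # Discrete PSO: sigmoid of velocity gives the probability of a 1-bit.
            X[i][d] = 1 if random.random() < 1 / (1 + math.exp(-V[i][d])) else 0
        if random.random() < 0.05:                     # mutation keeps diversity
            d = random.randrange(n)
            X[i][d] ^= 1
        if fitness(X[i]) < fitness(P[i]):
            P[i] = X[i][:]
    G = min(P + [G], key=fitness)[:]

print("alliance:", [i for i, b in enumerate(G) if b], "fitness:", fitness(G))
```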
Study to design and develop remote manipulator system
NASA Technical Reports Server (NTRS)
Hill, J. W.; Sword, A. J.
1973-01-01
Human performance measurement techniques for remote manipulation tasks and remote sensing techniques for manipulators are described. For common manipulation tasks, performance is monitored by means of an on-line computer capable of measuring the joint angles of both master and slave arms as a function of time. The computer programs allow measurement of the operator's strategy and of physical quantities such as task time and power consumed. The results are printed out after a test run to compare different experimental conditions. For tracking tasks, we describe a method of displaying errors in three dimensions and measuring the end-effector position in three dimensions.
Psychological Issues in Online Adaptive Task Allocation
NASA Technical Reports Server (NTRS)
Morris, N. M.; Rouse, W. B.; Ward, S. L.; Frey, P. R.
1984-01-01
Adaptive aiding is an idea that offers potential for improvement over many current approaches to aiding in human-computer systems. The expected return of tailoring the system to fit the user could be in the form of improved system performance and/or increased user satisfaction. Issues such as the manner in which information is shared between human and computer, the appropriate division of labor between them, and the level of autonomy of the aid are explored. A simulated visual search task was developed. Subjects are required to identify targets in a moving display while performing a compensatory sub-critical tracking task. By manipulating characteristics of the situation such as imposed task-related workload and effort required to communicate with the computer, it is possible to create conditions in which interaction with the computer would be more or less desirable. The results of preliminary research using this experimental scenario are presented, and future directions for this research effort are discussed.
Nair, Sankaran N; Czaja, Sara J; Sharit, Joseph
2007-06-01
This article explores the role of age, cognitive abilities, prior experience, and knowledge in skill acquisition for a computer-based simulated customer service task. Fifty-two participants aged 50-80 performed the task over 4 consecutive days following training. They also completed a battery that assessed prior computer experience and cognitive abilities. The data indicated that overall quality and efficiency of performance improved with practice. The predictors of initial level of performance and rate of change in performance varied according to the performance parameter assessed. Age and fluid intelligence predicted initial level and rate of improvement in overall quality, whereas crystallized intelligence and age predicted initial e-mail processing time, and crystallized intelligence predicted rate of change in e-mail processing time over days. We discuss the implications of these findings for the design of intervention strategies.
A resource-sharing model based on a repeated game in fog computing.
Sun, Yan; Zhang, Nan
2017-03-01
With the rapid development of cloud computing techniques, the number of users is growing exponentially. It is difficult for traditional data centers to perform many tasks in real time because of limited resource bandwidth. The concept of fog computing has been proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than those of a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the service-level agreement (SLA) violation rate and accelerate the completion of tasks.
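As a rough illustration of why a repeated game can enforce sharing, here is a toy sketch assuming binary share/withhold actions and a grim-trigger punishment; the paper's crowd-funding algorithm and payoff structure are richer than this, and all constants are hypothetical.

```python
# Toy repeated-game incentive: withholding once forfeits all later rewards.
def payoff(share, reward=3.0, cost=1.0):
    return reward - cost if share else 0.0   # withholders earn nothing here

def repeated_game(rounds=10, defect_round=4):
    total, punished = 0.0, False
    for t in range(rounds):
        share = (t != defect_round) and not punished
        if not share:
            punished = True   # grim trigger: once caught, rewards stop
        total += payoff(share)
    return total

print(repeated_game())                  # defecting mid-game pays poorly
print(repeated_game(defect_round=-1))   # always sharing earns the most
```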
Task-specific image partitioning.
Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D
2013-02-01
Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used for performing high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework to produce a region-based image representation that will lead to higher task performance than that reached using any task-oblivious partitioning framework or the existing, albeit few, supervised partitioning frameworks. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function that define task-specific similarity/dissimilarity among superpixels are estimated based on a structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to a better generalization ability, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by the state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.
Characterizing and Mitigating Work Time Inflation in Task Parallel Programs
Olivier, Stephen L.; de Supinski, Bronis R.; Schulz, Martin; ...
2013-01-01
Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.
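Work time inflation is a simple quantity to compute once per-thread work timers are available. A minimal sketch, assuming hypothetical timing inputs that already exclude idle and scheduling time:

```python
# Work time inflation: extra work time across threads vs. the sequential run.
def work_time_inflation(thread_work_times, sequential_time):
    """Returns seconds of inflated work time; > 0 indicates inflation."""
    total_work = sum(thread_work_times)   # excludes idle/scheduling time
    return total_work - sequential_time

# Example: 8 threads each busy 1.4 s on work that takes 10 s sequentially.
print(work_time_inflation([1.4] * 8, 10.0))  # 1.2 s of inflated work time
```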
Andersen, Pia; Lindgaard, Anne-Mette; Prgomet, Mirela; Creswick, Nerida; Westbrook, Johanna I
2009-08-04
Selecting the right mix of stationary and mobile computing devices is a significant challenge for system planners and implementers. There is very limited research evidence upon which to base such decisions. We aimed to investigate the relationships between clinician role, clinical task, and selection of a computer hardware device in hospital wards. Twenty-seven nurses and eight doctors were observed for a total of 80 hours as they used a range of computing devices to access a computerized provider order entry system on two wards at a major Sydney teaching hospital. Observers used a checklist to record the clinical tasks completed, devices used, and location of the activities. Field notes were also documented during observations. Semi-structured interviews were conducted after observation sessions. Assessment was made of the physical attributes of three devices: stationary PCs, computers on wheels (COWs), and tablet PCs. Two types of COWs were available on the wards: generic COWs (laptops mounted on trolleys) and ergonomic COWs (an integrated computer and cart device). Heuristic evaluation of the user interfaces was also carried out. The majority (93.1%) of observed nursing tasks were conducted using generic COWs. Most nursing tasks were performed in patients' rooms (57%) or in the corridors (36%), with a small percentage at a patient's bedside (5%). Most nursing tasks related to the preparation and administration of drugs. Doctors on ward rounds conducted 57.3% of observed clinical tasks on generic COWs and 35.9% on tablet PCs. On rounds, 56% of doctors' tasks were performed in the corridors, 29% in patients' rooms, and 3% at the bedside. Doctors not on a ward round conducted 93.6% of tasks using stationary PCs, most often within the doctors' office. Nurses and doctors were observed performing workarounds, such as transcribing medication orders from the computer to paper. The choice of device was related to clinical role, nature of the clinical task, degree of mobility required, including where task completion occurs, and device design. Nurses' work, and clinical tasks performed by doctors during ward rounds, require highly mobile computer devices. Nurses and doctors on ward rounds showed a strong preference for generic COWs over all other devices. Tablet PCs were selected by doctors for only a small proportion of clinical tasks. Even when using mobile devices clinicians completed a very low proportion of observed tasks at the bedside. The design of the devices and ward space configurations place limitations on how and where devices are used and on the mobility of clinical work. In such circumstances, clinicians will initiate workarounds to compensate. In selecting hardware devices, consideration should be given to who will be using the devices, the nature of their work, and the physical layout of the ward.
Computer-Based Simulations for Maintenance Training: Current ARI Research. Technical Report 544.
ERIC Educational Resources Information Center
Knerr, Bruce W.; And Others
Three research efforts that used computer-based simulations for maintenance training were in progress when this report was written: Game-Based Learning, which investigated the use of computer-based games to train electronics diagnostic skills; Human Performance in Fault Diagnosis Tasks, which evaluated the use of context-free tasks to train…
A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.
Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao
2018-05-23
The diversity of IoT services and applications brings enormous challenges to improving the performance of scheduling multiple computer tasks in cross-layer cloud computing systems. Unfortunately, commonly employed frameworks fail to adapt to the new patterns of the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and of computer tasks. Then, we design the scheduling framework based on this analysis and present detailed models to illustrate the procedures for using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, algorithms are given based on the framework, and extensive experiments are presented to validate its effectiveness and superiority.
Gerth, Sabrina; Dolk, Thomas; Klassert, Annegret; Fliesser, Michael; Fischer, Martin H; Nottbusch, Guido; Festman, Julia
2016-08-01
Our study addresses the following research questions: Are there differences between handwriting movements on paper and on a tablet computer? Can experienced writers, such as most adults, adapt their graphomotor execution during writing to a rather unfamiliar surface, such as a tablet computer? We examined the handwriting performance of adults in three tasks of different complexity: (a) graphomotor abilities, (b) visuomotor abilities, and (c) handwriting. Each participant performed each task twice, once on paper and once on a tablet computer with a pen. We tested 25 participants by measuring their writing duration, in-air time, number of pen lifts, writing velocity, and number of inversions in velocity. The data were analyzed using linear mixed-effects modeling with repeated measures. Our results reveal differences between writing on paper and on a tablet computer which were partly task-dependent. Our findings also show that participants were able to adapt their graphomotor execution to the smoother surface of the tablet computer during the tasks.
Item Mass and Complexity and the Arithmetic Computation of Students with Learning Disabilities.
ERIC Educational Resources Information Center
Cawley, John F.; Shepard, Teri; Smith, Maureen; Parmar, Rene S.
1997-01-01
The performance of 76 students (ages 10 to 15) with learning disabilities on four tasks of arithmetic computation within each of the four basic operations was examined. Tasks varied in difficulty level and number of strokes needed to complete all items. Intercorrelations between task sets and operations were examined as was the use of…
ERIC Educational Resources Information Center
Hedayati, Mohsen; Foomani, Elham Mohammadi
2015-01-01
The study reported here explores whether English as a Foreign Language (EFL) learners' preferred ways of learning (i.e., learning styles) affect their task performance in computer-mediated communication (CMC). As Ellis (2010) points out, while the increasing use of different sorts of technology is witnessed in language learning contexts, it is…
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted on the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.
Xie, Zhiqiang; Shao, Xia; Xin, Yu
2016-01-01
To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns different priority values to every task node based on the scheduling order of the task node as affected by the constraint relations among task nodes, and the task node list is generated from these priority values. To address the scheduling order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing times of the task nodes on the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meet a high-quality performance objective.
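The core of such a strategy is ranking task nodes by the longest downstream computation-plus-communication path. A minimal sketch of that ranking, assuming a small DAG with known costs; the DDEP algorithm additionally recomputes these values dynamically as actual costs become known during scheduling.

```python
# Rank task nodes by the length of their longest downstream path
# (computation plus communication) and schedule the longest path first.
def longest_path(node, succ, comp, comm, memo=None):
    memo = {} if memo is None else memo
    if node in memo:
        return memo[node]
    best = 0.0
    for child in succ.get(node, []):
        best = max(best, comm[(node, child)]
                   + longest_path(child, succ, comp, comm, memo))
    memo[node] = comp[node] + best
    return memo[node]

succ = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
comp = {'A': 2, 'B': 3, 'C': 1, 'D': 2}
comm = {('A', 'B'): 1, ('A', 'C'): 2, ('B', 'D'): 1, ('C', 'D'): 1}
order = sorted(comp, key=lambda n: -longest_path(n, succ, comp, comm))
print(order)  # nodes on the longest (essential) path come first
```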
Flodgren, G; Heiden, M; Lyskov, E; Crenshaw, A G
2007-03-01
In the present study, we assessed the wrist kinetics (range of motion, mean position, velocity and mean power frequency in radial/ulnar deviation, flexion/extension, and pronation/supination) associated with performing a mouse-operated computerized task involving painting rectangles on a computer screen. Furthermore, we evaluated the effects of the painting task on subjective perception of fatigue and wrist position sense. The results showed that the painting task required constrained wrist movements, and repetitive movements of about the same magnitude as those performed in mouse-operated design tasks. In addition, the painting task induced a perception of muscle fatigue in the upper extremity (Borg CR scale: 3.5, p<0.001) and caused a reduction in the position sense accuracy of the wrist (error before: 4.6°, error after: 5.6°, p<0.05). This standardized painting task appears suitable for studying relevant risk factors, and therefore it offers a potential for investigating the pathophysiological mechanisms behind musculoskeletal disorders related to computer mouse use.
ERIC Educational Resources Information Center
Wilson, Kimberly; Narayan, Anupama
2016-01-01
This study investigates relationships between self-efficacy, self-regulated learning strategy use and academic performance. Participants were 96 undergraduate students working on projects with three subtasks (idea generation task, methodical task and data collection) in a blended learning environment. Task self-efficacy was measured with…
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
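The operational approach rests on directly measurable quantities and the operational laws. A minimal sketch of the basic bookkeeping, assuming observed completions and busy time; the paper's procedure additionally handles series-parallel task precedence via hierarchical decomposition.

```python
# Operational-analysis bookkeeping for one service center.
def operational_measures(completions, busy_time, observation_time):
    throughput = completions / observation_time   # X = C / T
    utilization = busy_time / observation_time    # U = B / T
    service_time = busy_time / completions        # S = B / C
    return throughput, utilization, service_time

X, U, S = operational_measures(completions=500, busy_time=40.0,
                               observation_time=60.0)
print(X, U, S)  # utilization law holds: U == X * S
```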
The Interdependence of Computers, Robots, and People.
ERIC Educational Resources Information Center
Ludden, Laverne; And Others
Computers and robots are becoming increasingly more advanced, with smaller and cheaper computers now doing jobs once reserved for huge multimillion-dollar computers and with robots performing feats such as painting cars and using television cameras to simulate vision as they perform factory tasks. Technicians expect computers to become even more…
Integration of active pauses and pattern of muscular activity during computer work.
St-Onge, Nancy; Samani, Afshin; Madeleine, Pascal
2017-09-01
Submaximal isometric muscle contractions have been reported to increase variability of muscle activation during computer work; however, other types of active contractions may be more beneficial. Our objective was to determine which type of active pause vs. rest is more efficient in changing muscle activity pattern during a computer task. Asymptomatic regular computer users performed a standardised 20-min computer task four times, integrating a different type of pause: sub-maximal isometric contraction, dynamic contraction, postural exercise and rest. Surface electromyographic (SEMG) activity was recorded bilaterally from five neck/shoulder muscles. Root-mean-square decreased with isometric pauses in the cervical paraspinals, upper trapezius and middle trapezius, whereas it increased with rest. Variability in the pattern of muscular activity was not affected by any type of pause. Overall, no detrimental effects on the level of SEMG during active pauses were found suggesting that they could be implemented without a cost on activation level or variability. Practitioner Summary: We aimed to determine which type of active pause vs. rest is best in changing muscle activity pattern during a computer task. Asymptomatic computer users performed a standardised computer task integrating different types of pauses. Muscle activation decreased with isometric pauses in neck/shoulder muscles, suggesting their implementation during computer work.
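Studies of this kind summarize SEMG by amplitude statistics such as the root mean square (RMS) and its variability across time windows. A minimal sketch, assuming a raw signal array; real pipelines also band-pass filter and normalize to a reference contraction before comparing conditions.

```python
# Basic SEMG amplitude summaries: RMS and relative variability over windows.
import numpy as np

def rms(emg):
    return float(np.sqrt(np.mean(np.square(emg))))

def amplitude_variability(window_rms_values):
    # Coefficient of variation of activation across time windows.
    return float(np.std(window_rms_values) / np.mean(window_rms_values))

emg = np.random.default_rng(0).normal(0.0, 0.05, 2000)   # synthetic signal
windows = emg.reshape(20, 100)                           # 20 time windows
print(rms(emg), amplitude_variability([rms(w) for w in windows]))
```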
Rosenthal, L E
1986-10-01
Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first conceived by Charles Babbage in the nineteenth century. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.
Inoue, Kazuya; Takeda, Yuji; Kimura, Motohiro
2017-02-01
In a task involving continuous action to achieve a goal, the sense of agency increases with an improvement in task performance that is induced by unnoticed computer assistance. This study investigated how explicit instruction about the existence of computer assistance affects the increase of sense of agency that accompanies performance improvement. Participants performed a continuous action task in which they controlled the direction of motion of a dot to a goal by pressing keys. When instructions indicated the absence of assistance, the sense of agency increased with performance improvement induced by computer assistance, replicating previous findings. Interestingly, this increase of sense of agency was also observed even when instructions indicated the presence of assistance. These results suggest that even when a plausible cause of performance improvement other than one's own action exists, the improvement can be misattributed to one's own control of action, resulting in an increased sense of agency.
Magnetic Tunnel Junction Mimics Stochastic Cortical Spiking Neurons
NASA Astrophysics Data System (ADS)
Sengupta, Abhronil; Panda, Priyadarshini; Wijesinghe, Parami; Kim, Yusung; Roy, Kaushik
2016-07-01
Brain-inspired computing architectures attempt to mimic the computations performed in the neurons and the synapses in the human brain in order to achieve its efficiency in learning and cognitive tasks. In this work, we demonstrate the mapping of the probabilistic spiking nature of pyramidal neurons in the cortex to the stochastic switching behavior of a Magnetic Tunnel Junction in the presence of thermal noise. We present results to illustrate the efficiency of neuromorphic systems based on such probabilistic neurons for pattern recognition tasks in the presence of lateral inhibition and homeostasis. Such stochastic MTJ neurons can also potentially provide a direct mapping to the probabilistic computing elements in Belief Networks for performing regenerative tasks.
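The mapping can be caricatured as a unit whose firing probability rises sigmoidally with input, rather than switching deterministically at a threshold. A toy sketch under that assumption; the actual MTJ switching statistics follow from device physics and are richer than a logistic curve.

```python
# Toy stochastic spiking unit: thermally-driven switching probability
# modeled as a sigmoid of the input current (all constants hypothetical).
import math, random

def spike_probability(current, threshold=1.0, noise_scale=0.2):
    return 1.0 / (1.0 + math.exp(-(current - threshold) / noise_scale))

def stochastic_neuron(current):
    return random.random() < spike_probability(current)

# Near threshold the unit fires probabilistically, unlike a hard comparator.
print(sum(stochastic_neuron(1.0) for _ in range(1000)) / 1000)  # ≈ 0.5
```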
Mallick, Zulquernain
2007-01-01
The last 20 years have seen tremendous growth in mobile computing and wireless communications and services. An experimental study was conducted to explore the effects of text/background color on a laptop computing system, along with variable environmental vibration, on operators' data entry task performance in moving automobiles. The operators' performance was measured in terms of the number of characters entered per minute without spaces (NCEPMWS) on a laptop computing system. The subjects were divided into 3 categories, namely, Novices, Intermediates and Experts. Findings suggest a re-evaluation of existing laptop designs taking ergonomics into consideration. It appears that proper selection of text/background color on the laptop, coupled with controlled vehicular speed, could result in a better quality of interaction between humans and laptops, and could also resolve the problem of poor data entry task performance.
Mental workload during brain-computer interface training.
Felton, Elizabeth A; Williams, Justin C; Vanderheiden, Gregg C; Radwin, Robert G
2012-01-01
It is not well understood how people perceive the difficulty of performing brain-computer interface (BCI) tasks, which specific aspects of mental workload contribute the most, and whether there is a difference in perceived workload between participants who are able-bodied and disabled. This study evaluated mental workload using the NASA Task Load Index (TLX), a multi-dimensional rating procedure with six subscales: Mental Demands, Physical Demands, Temporal Demands, Performance, Effort, and Frustration. Able-bodied and motor disabled participants completed the survey after performing EEG-based BCI Fitts' law target acquisition and phrase spelling tasks. The NASA-TLX scores were similar for able-bodied and disabled participants. For example, overall workload scores (range 0-100) for 1D horizontal tasks were 48.5 (SD = 17.7) and 46.6 (SD = 10.3), respectively. The TLX is an effective tool for comparing subjective workload between BCI tasks, participant groups (able-bodied and disabled), and control modalities, and the resulting data can inform the design of BCIs that will have greater usability.
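For reference, NASA-TLX scoring itself is straightforward. A minimal sketch, assuming six subscale ratings on a 0-100 scale and, for the weighted variant, weights obtained from the 15 pairwise comparisons; the raw-TLX variant simply averages the ratings.

```python
# NASA-TLX scoring: raw average and the pairwise-weighted variant.
def tlx_raw(ratings):
    return sum(ratings.values()) / len(ratings)

def tlx_weighted(ratings, weights):
    assert sum(weights.values()) == 15    # 15 pairwise comparisons in total
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

ratings = {'mental': 70, 'physical': 20, 'temporal': 55,
           'performance': 40, 'effort': 65, 'frustration': 50}
weights = {'mental': 5, 'physical': 0, 'temporal': 3,
           'performance': 2, 'effort': 4, 'frustration': 1}
print(tlx_raw(ratings), tlx_weighted(ratings, weights))
```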
Task-induced frequency modulation features for brain-computer interfacing.
Jayaram, Vinay; Hohmann, Matthias; Just, Jennifer; Schölkopf, Bernhard; Grosse-Wentrup, Moritz
2017-10-01
Task-induced amplitude modulation of neural oscillations is routinely used in brain-computer interfaces (BCIs) for decoding subjects' intents, and underlies some of the most robust and common methods in the field, such as common spatial patterns and Riemannian geometry. While there has been some interest in phase-related features for classification, both techniques usually presuppose that the frequencies of neural oscillations remain stable across various tasks. We investigate here whether features based on task-induced modulation of the frequency of neural oscillations enable decoding of subjects' intents with an accuracy comparable to task-induced amplitude modulation. We compare cross-validated classification accuracies using the amplitude and frequency modulated features, as well as a joint feature space, across subjects in various paradigms and pre-processing conditions. We show results with a motor imagery task, a cognitive task, and also preliminary results in patients with amyotrophic lateral sclerosis (ALS), as well as using common spatial patterns and Laplacian filtering. The frequency features alone do not significantly outperform traditional amplitude modulation features, and in some cases perform significantly worse. However, across both tasks and pre-processing in healthy subjects the joint space significantly outperforms either the frequency or amplitude features alone. The only exception is the ALS patients, for whom the dataset is of insufficient size to draw any statistically significant conclusions. Task-induced frequency modulation is robust and straightforward to compute, and increases performance when added to standard amplitude modulation features across paradigms. This allows more information to be extracted from the EEG signal cheaply and can be used throughout the field of BCIs.
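One common way to obtain amplitude- and frequency-modulation features from a band-passed channel is through the analytic signal. A minimal sketch under that assumption; the paper's full pipeline (spatial filtering, cross-validated classification) is more involved.

```python
# Amplitude (envelope) and instantaneous-frequency features via the
# analytic signal of a band-passed EEG channel.
import numpy as np
from scipy.signal import hilbert

def am_fm_features(x, fs):
    analytic = hilbert(x)
    amplitude = np.abs(analytic)                      # AM envelope
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # FM in Hz
    return amplitude.mean(), inst_freq.mean()

fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 11.0 * t)         # synthetic 11 Hz mu-band rhythm
print(am_fm_features(x, fs))             # mean frequency is close to 11 Hz
```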
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or days to run. This seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor, which may impact the feasibility of the parallelization. The allocation algorithm needs to carefully leverage the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically, 1) in order to get optimized solutions, a quadratic programming based modeling method is proposed; this algorithm performs well with a small number of computing tasks, but its efficiency decreases significantly as the subdomain number and computing node number increase. 2) To compensate for the performance decrease on large-scale tasks, a K-means clustering based algorithm is introduced; instead of aiming for optimized solutions, this method can get relatively good feasible solutions within acceptable time, but it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
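The K-means idea can be illustrated by clustering subdomain centroids so that geographically adjacent subdomains land on the same node. A minimal sketch, assuming subdomains are summarized by their centroid coordinates; the presented algorithm further weighs computing and communication costs when assigning clusters.

```python
# Group geographically adjacent subdomains onto compute nodes via K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
centroids = rng.uniform(0, 100, size=(64, 2))   # 64 subdomain centroids
n_nodes = 8
labels = KMeans(n_clusters=n_nodes, n_init=10,
                random_state=0).fit_predict(centroids)
# Subdomains sharing a label land on the same node, keeping neighbors
# together and reducing cross-node halo exchanges.
print(np.bincount(labels))   # inspect the per-node load balance
```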
Effects on Training Using Illumination in Virtual Environments
NASA Technical Reports Server (NTRS)
Maida, James C.; Novak, M. S. Jennifer; Mueller, Kristian
1999-01-01
Camera-based tasks are commonly performed during orbital operations, and orbital lighting conditions, such as high-contrast shadowing and glare, are a factor in performance. Computer-based training using virtual environments is a common tool used to make and keep crew members proficient. If computer-based training included some of these harsh lighting conditions, would the crew increase their proficiency? The project goal was to determine whether computer-based training increases proficiency if one trains for a camera-based task using computer-generated virtual environments with enhanced lighting conditions such as shadows and glare, rather than the color-shaded computer images normally used in simulators. Previous experiments were conducted using a two-degree-of-freedom docking system. Test subjects had to align a boresight camera using a hand controller with one axis of translation and one axis of rotation. Two sets of subjects were trained on two computer simulations using computer-generated virtual environments, one with simulated lighting and one without. Results revealed that when subjects were constrained by time and accuracy, those who trained with simulated lighting conditions performed significantly better than those who did not. To reinforce these results for speed and accuracy, the task complexity was increased.
Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 5: Study analysis report
NASA Technical Reports Server (NTRS)
1989-01-01
The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex (PTC) at the Marshall Space Flight Center (MSFC). The PTC will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be on board the Freedom Space Station. The further analysis performed on the SCS study as part of task 2 (Perform Studies and Parametric Analysis) of the SCS study contract is summarized. These analyses were performed to resolve open issues remaining after the completion of task 1 and the publishing of the SCS study issues report. The results of these studies provide inputs into SCS task 3 (develop and present SCS requirements) and SCS task 4 (develop SCS conceptual designs). The purpose of these studies is to resolve the issues into usable requirements given the best available information at the time of the study. A list of all the SCS study issues is given.
Desktop Computing Integration Project
NASA Technical Reports Server (NTRS)
Tureman, Robert L., Jr.
1992-01-01
The Desktop Computing Integration Project for the Human Resources Management Division (HRMD) of LaRC was designed to help division personnel use personal computing resources to perform job tasks. The three goals of the project were to involve HRMD personnel in desktop computing, to link mainframe data to desktop capabilities, and to estimate training needs for the division. The project resulted in increased usage of personal computers by Awards specialists, an increased awareness of LaRC resources that help perform tasks, and personal computer output that was used in the presentation of information to center personnel. In addition, the necessary skills for HRMD personal computer users were identified. The Awards Office was chosen for the project because of the consistency of their data requests and the desire of employees in that area to use the personal computer.
Visser, Bart; De Looze, Michiel; De Graaff, Matthijs; Van Dieën, Jaap
2004-02-05
The objective of the present study was to gain insight into the effects of precision demands and mental pressure on the load of the upper extremity. Two computer mouse tasks were used: an aiming and a tracking task. Upper extremity loading was operationalized as the myo-electric activity of the wrist flexor and extensor and of the trapezius descendens muscles and the applied grip- and click-forces on the computer mouse. Performance measures, reflecting the accuracy in both tasks and the clicking rate in the aiming task, indicated that the levels of the independent variables resulted in distinguishable levels of accuracy and work pace. Precision demands had a small effect on upper extremity loading with a significant increase in the EMG-amplitudes (21%) of the wrist flexors during the aiming tasks. Precision had large effects on performance. Mental pressure had substantial effects on EMG-amplitudes with an increase of 22% in the trapezius when tracking and increases of 41% in the trapezius and 45% and 140% in the wrist extensors and flexors, respectively, when aiming. During aiming, grip- and click-forces increased by 51% and 40% respectively. Mental pressure had small effects on accuracy but large effects on tempo during aiming. Precision demands and mental pressure in aiming and tracking tasks with a computer mouse were found to coincide with increased muscle activity in some upper extremity muscles and increased force exertion on the computer mouse. Mental pressure caused significant effects on these parameters more often than precision demands. Precision and mental pressure were found to have effects on performance, with precision effects being significant for all performance measures studied and mental pressure effects for some of them. The results of this study suggest that precision demands and mental pressure increase upper extremity load, with mental pressure effects being larger than precision effects. The possible role of precision demands as an indirect mental stressor in working conditions is discussed.
Planning and task management in Parkinson's disease: differential emphasis in dual-task performance.
Bialystok, Ellen; Craik, Fergus I M; Stefurak, Taresa
2008-03-01
Seventeen patients diagnosed with Parkinson's disease completed a complex computer-based task that involved planning and management while also performing an attention-demanding secondary task. The tasks were performed concurrently, but it was necessary to switch from one to the other. Performance was compared to a group of healthy age-matched control participants and a group of young participants. Parkinson's patients performed better than the age-matched controls on almost all measures and as well as the young controls in many cases. However, the Parkinson's patients achieved this by paying relatively less attention to the secondary task and focusing attention more on the primary task. Thus, Parkinson's patients can apparently improve their performance on some aspects of a multidimensional task by simplifying task demands. This benefit may occur as a consequence of their inflexible exaggerated attention to some aspects of a complex task to the relative neglect of other aspects.
Task Assignment Heuristics for Parallel and Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, Noe; Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
This paper proposes a task graph (TG) model to represent a single discrete step of multi-block overset grid computational fluid dynamics (CFD) applications. The TG model is then used not only to balance the computational workload across the overset grids but also to reduce inter-grid communication costs. We have developed a set of task assignment heuristics based on the constraints inherent in this class of CFD problems. Two basic assignments, the smallest task first (STF) and the largest task first (LTF), are first presented. They are then systematically refined to account for inter-grid communication costs. To predict the performance of the proposed task assignment heuristics, extensive performance evaluations are conducted on a synthetic TG with tasks defined in terms of the number of grid points in predetermined overlapping grids. A TG derived from a realistic problem with eight million grid points is also used as a test case.
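The two baseline heuristics amount to sorting tasks by size and greedily placing each on the least-loaded processor. A minimal sketch, assuming task sizes are grid-point counts; the refined heuristics in the paper also account for inter-grid communication.

```python
# STF/LTF baseline assignment: sort by size, place on least-loaded processor.
def assign(tasks, n_procs, largest_first=True):
    load = [0.0] * n_procs
    placement = {}
    for name, size in sorted(tasks.items(), key=lambda kv: kv[1],
                             reverse=largest_first):
        p = min(range(n_procs), key=load.__getitem__)  # least-loaded proc
        placement[name] = p
        load[p] += size
    return placement, load

tasks = {'g1': 8e5, 'g2': 3e5, 'g3': 5e5, 'g4': 2e5, 'g5': 6e5}
print(assign(tasks, 2, largest_first=True))   # LTF balances better than STF
```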
ERIC Educational Resources Information Center
Matrix Research Co., Alexandria, VA.
The handbook covers a comprehensive series of Job-Task Performance Tests for the Doppler Radar (AN/APN) and its Associated Computer (AN/ASN-35). The test series has been developed to measure job performance of the electronic technician. These tests encompass all phases of day-to-day preventative and corrective maintenance that technicians are…
Orhan, A Emin; Ma, Wei Ji
2017-07-26
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
The development of expertise using an intelligent computer-aided training system
NASA Technical Reports Server (NTRS)
Johnson, Debra Steele
1991-01-01
An initial examination was conducted of an Intelligent Tutoring System (ITS) developed for use in industry. The ITS, developed by NASA, simulated a satellite deployment task. More specifically, the PD (Payload Assist Module Deployment)/ICAT (Intelligent Computer Aided Training) System simulated a nominal Payload Assist Module (PAM) deployment. The development of expertise on this task was examined using three Flight Dynamics Officer (FDO) candidates who had no previous experience with this task. The results indicated that performance improved rapidly until Trial 5, followed by more gradual improvements through Trial 12. The performance dimensions measured included performance speed, actions completed, errors, help required, and display fields checked. Suggestions for further refining the software and for deciding when to expose trainees to more difficult task scenarios are discussed. Further, the results provide an initial demonstration of the effectiveness of the PD/ICAT system in training the nominal PAM deployment task and indicate the potential benefits of using ITSs for training other FDO tasks.
Utilizing gamma band to improve mental task based brain-computer interface design.
Palaniappan, Ramaswamy
2006-09-01
A common method for designing a brain-computer interface (BCI) is to use electroencephalogram (EEG) signals extracted during mental tasks. In these BCI designs, features from EEG such as power and asymmetry ratios from the delta, theta, alpha, and beta bands have been used in classifying different mental tasks. In this paper, the performance of the mental task based BCI design is improved by using spectral power and asymmetry ratios from the gamma (24-37 Hz) band in addition to the lower frequency bands. In the experimental study, EEG signals extracted during five mental tasks from four subjects were used. An Elman neural network (ENN) trained by the resilient backpropagation algorithm was used to classify the power and asymmetry ratios from EEG into different combinations of two mental tasks. The results indicated that (1) the classification performance and training time of the BCI design were improved through the use of additional gamma band features; (2) classification performances were nearly invariant to the number of ENN hidden units or the feature extraction method.
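Band power and asymmetry ratios are standard features and simple to compute. A minimal sketch, assuming two EEG channels and Welch power estimates; the paper feeds such ratios, including the 24-37 Hz gamma band, into an Elman network.

```python
# Band power via Welch's method, plus a two-channel asymmetry ratio.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=256)
    mask = (f >= lo) & (f <= hi)
    return float(np.trapz(pxx[mask], f[mask]))   # integrate PSD over band

def asymmetry_ratio(p_left, p_right):
    return (p_left - p_right) / (p_left + p_right)

fs = 256.0
rng = np.random.default_rng(2)
left, right = rng.normal(size=1024), rng.normal(size=1024)
g_l = band_power(left, fs, 24, 37)     # gamma-band power, left channel
g_r = band_power(right, fs, 24, 37)
print(g_l, g_r, asymmetry_ratio(g_l, g_r))
```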
Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)
2016-01-01
The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. The probability of successfully completing the task is made inversely proportional to the physiological difference value by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.
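The claimed mechanism reduces to a monotone mapping from the physiological difference value to a physical task attribute. A toy sketch, assuming a single target signal and a hypothetical "stability" attribute; the patent covers far more general signals and attributes.

```python
# Map physiological deviation from a target state to a task attribute
# that varies in inverse proportion to the deviation (toy illustration).
def physiological_difference(measured, target):
    return abs(measured - target)

def task_stability(measured, target, gain=1.0):
    # Larger deviation -> less stable task element; +1 bounds the output.
    return gain / (1.0 + physiological_difference(measured, target))

print(task_stability(measured=12.0, target=10.0))  # off-target: harder task
print(task_stability(measured=10.0, target=10.0))  # on-target: easiest task
```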
Mazur, Lukasz M; Mosaly, Prithima R; Moore, Carlton; Comitz, Elizabeth; Yu, Fei; Falchook, Aaron D; Eblan, Michael J; Hoyle, Lesley M; Tracton, Gregg; Chera, Bhishamjit S; Marks, Lawrence B
2016-11-01
To assess the relationship between (1) task demands and workload, (2) task demands and performance, and (3) workload and performance, all during physician-computer interactions in a simulated environment. Two experiments were performed in 2 different electronic medical record (EMR) environments: WebCIS (n = 12) and Epic (n = 17). Each participant was instructed to complete a set of prespecified tasks on 3 routine clinical EMR-based scenarios: urinary tract infection (UTI), pneumonia (PN), and heart failure (HF). Task demands were quantified using behavioral responses (click and time analysis). At the end of each scenario, subjective workload was measured using the NASA Task Load Index (NASA-TLX). Physiological workload was measured using pupillary dilation and electroencephalography (EEG) data collected throughout the scenarios. Performance was quantified based on the maximum severity of omission errors. Data analysis indicated that the PN and HF scenarios were significantly more demanding than the UTI scenario for participants using WebCIS (P < .01), and that the PN scenario was significantly more demanding than the UTI and HF scenarios for participants using Epic (P < .01). In both experiments, the regression analysis indicated a significant relationship only between task demands and performance (P < .01). Results suggest that task demands as experienced by participants are related to participants' performance. Future work may support the notion that task demands could be used as a quality metric that is likely representative of performance, and perhaps patient outcomes. The present study is a reasonable next step in a systematic assessment of how task demands and workload are related to performance in EMR-evolving environments.
Fedorowich, Larissa M; Côté, Julie N
2018-10-01
Standing is a popular alternative to traditionally seated computer work. However, no studies have described how standing impacts both upper body muscular and vascular outcomes during a computer typing task. Twenty healthy adults completed two 90-min simulated work sessions, seated or standing. Upper limb discomfort, electromyography (EMG) from eight upper body muscles, typing performance and neck/shoulder and forearm blood flow were collected. Results showed significantly less upper body discomfort and higher typing speed during standing. Lower trapezius EMG amplitude was higher during standing, but this postural difference decreased with time (interaction effect), and its variability was 68% higher during standing compared to sitting. There were no effects on blood flow. Results suggest that standing computer work may engage shoulder girdle stabilizers while reducing discomfort and improving performance. Studies are needed to identify how standing affects more complex computer tasks over longer work bouts in symptomatic workers.
Mautone, Jennifer A; DuPaul, George J; Jitendra, Asha K
2005-08-01
The present study examines the effects of computer-assisted instruction (CAI) on the mathematics performance and classroom behavior of three second- through fourth-grade students with ADHD. A controlled case study is used to evaluate the effects of the computer software on participants' mathematics performance and on-task behavior. Participants' mathematics achievement improved and their on-task behavior increased during the CAI sessions relative to independent seatwork conditions. In addition, students and teachers considered CAI to be an acceptable intervention for some students with ADHD who are having difficulty with mathematics. Implications of these results for practice and research are discussed.
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
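An analytical model of this kind typically splits the runtime into a compute term that divides across GPUs and a communication term that does not. A minimal sketch under that assumption, with all constants hypothetical; the paper's model is fitted to its specific dose-calculation workload.

```python
# Simple multi-GPU runtime model: compute divides across GPUs while
# transfers serialize on the interconnect (constants hypothetical).
def predicted_time(t_compute_single, data_bytes, n_gpus, gbps=1.0):
    t_comm = (data_bytes * 8) / (gbps * 1e9)        # serialized transfers
    return t_compute_single / n_gpus + t_comm * (n_gpus - 1)

for n in (1, 4, 14):
    print(n, predicted_time(t_compute_single=60.0, data_bytes=50e6, n_gpus=n))
# Speedup approaches n only while compute dominates the transfer term.
```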
NASA Astrophysics Data System (ADS)
Yi, Weibo; Qiu, Shuang; Wang, Kun; Qi, Hongzhi; Zhao, Xin; He, Feng; Zhou, Peng; Yang, Jiajia; Ming, Dong
2017-04-01
Objective. We proposed a novel simultaneous hybrid brain-computer interface (BCI) by incorporating electrical stimulation into a motor imagery (MI) based BCI system. The goal of this study was to enhance the overall performance of an MI-based BCI. In addition, the brain oscillatory pattern in the hybrid task was also investigated. Approach. 64-channel electroencephalographic (EEG) data were recorded during MI, selective attention (SA) and hybrid tasks in fourteen healthy subjects. In the hybrid task, subjects performed MI while electrical stimulation was applied simultaneously to the bilateral median nerves at the wrists. Main results. The hybrid task clearly presented additional steady-state somatosensory evoked potential (SSSEP) induced by electrical stimulation alongside MI-induced event-related desynchronization (ERD). By combining ERD and SSSEP features, the performance in the hybrid task was significantly better than in both MI and SA tasks, achieving a ~14% improvement in total relative to the MI task alone and reaching ~89% in mean classification accuracy. In contrast, no significant enhancement in performance was obtained when the ERD feature alone was utilized in the hybrid task. Within the hybrid task, performance using the combined features was significantly better than using the ERD or SSSEP feature alone. Significance. The results in this work validate the feasibility of our proposed approach to form a novel MI-SSSEP hybrid BCI outperforming a conventional MI-based BCI through combining MI with electrical stimulation.
A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam
In this paper, we introduce the Dynamic Load-balanced Tensor Contractions (DLTC) framework, a domain-specific library for efficient task parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of tasks, represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load balancing runtime. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.
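The dynamic load balancing idea can be illustrated with a shared pool of workers pulling independent contraction subtasks from one queue. A minimal shared-memory sketch, assuming NumPy matrix blocks stand in for the iterator subtasks; DLTC itself operates across distributed-memory nodes.

```python
# Workers dynamically pull independent contraction subtasks from one pool.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def contract_block(args):
    a, b = args
    return a @ b            # one small unit of a larger contraction

rng = np.random.default_rng(3)
subtasks = [(rng.normal(size=(64, 64)), rng.normal(size=(64, 64)))
            for _ in range(32)]
with ThreadPoolExecutor(max_workers=4) as pool:
    blocks = list(pool.map(contract_block, subtasks))  # workers pull tasks
print(len(blocks), blocks[0].shape)
```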
1994-03-10
determine capacity variations as shown in dual-task performance. In addition, neurobiological and psychological evidence shows that the nervous system is...do we perform complex tasks requiring two or more activities in a short period of time, and what determines the quality of performance? Psychological ...definition for these kinds of entities. Introduction of the computer metaphor in cognitive psychology, ascribing behavioral capacity limitations to a kind
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
Task-induced frequency modulation features for brain-computer interfacing
NASA Astrophysics Data System (ADS)
Jayaram, Vinay; Hohmann, Matthias; Just, Jennifer; Schölkopf, Bernhard; Grosse-Wentrup, Moritz
2017-10-01
Objective. Task-induced amplitude modulation of neural oscillations is routinely used in brain-computer interfaces (BCIs) for decoding subjects' intents, and underlies some of the most robust and common methods in the field, such as common spatial patterns and Riemannian geometry. While there has been some interest in phase-related features for classification, both techniques usually presuppose that the frequencies of neural oscillations remain stable across various tasks. We investigate here whether features based on task-induced modulation of the frequency of neural oscillations enable decoding of subjects' intents with an accuracy comparable to task-induced amplitude modulation. Approach. We compare cross-validated classification accuracies using the amplitude- and frequency-modulated features, as well as a joint feature space, across subjects in various paradigms and pre-processing conditions. We show results with a motor imagery task, a cognitive task, and preliminary results in patients with amyotrophic lateral sclerosis (ALS), using both common spatial patterns and Laplacian filtering. Main results. The frequency features alone do not significantly outperform traditional amplitude modulation features, and in some cases perform significantly worse. However, across both tasks and pre-processing conditions in healthy subjects, the joint space significantly outperforms either the frequency or amplitude features alone. The only exception is the ALS patients, for whom the dataset is of insufficient size to draw any statistically significant conclusions. Significance. Task-induced frequency modulation is robust and straightforward to compute, and increases performance when added to standard amplitude modulation features across paradigms. This allows more information to be extracted from the EEG signal cheaply and can be used throughout the field of BCIs.
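Both feature types can be read off the analytic signal of a band-passed trace; a hedged sketch (the band and filter order are assumptions):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def am_fm_features(x, fs, band=(8.0, 13.0)):
        # Returns (mean amplitude envelope, mean instantaneous frequency in Hz)
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        analytic = hilbert(filtfilt(b, a, x))
        amplitude = np.abs(analytic)                      # amplitude modulation
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)     # frequency modulation
        return amplitude.mean(), inst_freq.mean()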
NASA Astrophysics Data System (ADS)
Hwang, Han-Jeong; Lim, Jeong-Hwan; Kim, Do-Won; Im, Chang-Hwan
2014-07-01
A number of recent studies have demonstrated that near-infrared spectroscopy (NIRS) is a promising neuroimaging modality for brain-computer interfaces (BCIs). So far, most NIRS-based BCI studies have focused on enhancing the accuracy of the classification of different mental tasks. In the present study, we evaluated the performances of a variety of mental task combinations in order to determine the mental task pairs that are best suited for customized NIRS-based BCIs. To this end, we recorded event-related hemodynamic responses while seven participants performed eight different mental tasks. Classification accuracies were then estimated for all possible pairs of the eight mental tasks (C(8,2) = 28 combinations). Based on this analysis, mental task combinations with relatively high classification accuracies frequently included the following three mental tasks: "mental multiplication," "mental rotation," and "right-hand motor imagery." Specifically, mental task combinations consisting of two of these three mental tasks showed the highest mean classification accuracies. It is expected that our results will be a useful reference for reducing the time needed for preliminary tests when discovering individual-specific mental task combinations.
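The pairwise screening amounts to ranking all 28 combinations; schematically (classify_pair is a hypothetical per-subject cross-validation routine, and the task set beyond the three named above is assumed for illustration):

    from itertools import combinations

    tasks = ["mental multiplication", "mental rotation", "right-hand motor imagery",
             "mental singing", "mental counting", "word generation",
             "left-hand motor imagery", "rest"]      # assumed task set
    accuracy = {pair: classify_pair(*pair)           # hypothetical classifier call
                for pair in combinations(tasks, 2)}  # C(8,2) = 28 pairs
    best_pair = max(accuracy, key=accuracy.get)      # customized pair for this user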
Dennerlein, J T; Yang, M C
2001-01-01
Pointing devices, essential input tools for the graphical user interface (GUI) of desktop computers, require precise motor control and dexterity to use. Haptic force-feedback devices provide the human operator with tactile cues, adding the sense of touch to existing visual and auditory interfaces. However, the performance enhancements, comfort, and possible musculoskeletal loading of using a force-feedback device in an office environment are unknown. Hypothesizing that the time to perform a task and the self-reported pain and discomfort of the task improve with the addition of force feedback, 26 people ranging in age from 22 to 44 years performed a point-and-click task 540 times with and without an attractive force field surrounding the desired target. The point-and-click movements were approximately 25% faster with the addition of force feedback (paired t-tests, p < 0.001). Perceived user discomfort and pain, as measured through a questionnaire, were also smaller with the addition of force feedback (p < 0.001). However, this difference decreased as additional distracting force fields were added to the task environment, simulating a more realistic work situation. These results suggest that for a given task, use of a force-feedback device improves performance, and potentially reduces musculoskeletal loading during mouse use. Actual or potential applications of this research include human-computer interface design, specifically that of the pointing device extensively used for the graphical user interface.
Computer-enhanced laparoscopic training system (CELTS): bridging the gap.
Stylopoulos, N; Cotin, S; Maithel, S K; Ottensmeyer, M; Jackson, P G; Bardsley, R S; Neumann, P F; Rattner, D W; Dawson, S L
2004-05-01
There is a large and growing gap between the need for better surgical training methodologies and the systems currently available for such training. In an effort to bridge this gap and overcome the disadvantages of the training simulators now in use, we developed the Computer-Enhanced Laparoscopic Training System (CELTS). CELTS is a computer-based system capable of tracking the motion of laparoscopic instruments and providing feedback about performance in real time. CELTS consists of a mechanical interface, a customizable set of tasks, and an Internet-based software interface. The special cognitive and psychomotor skills a laparoscopic surgeon should master were explicitly defined and transformed into quantitative metrics based on kinematics analysis theory. A single global standardized and task-independent scoring system utilizing a z-score statistic was developed. Validation exercises were performed. The scoring system clearly revealed a gap between experts and trainees, irrespective of the task performed; none of the trainees obtained a score above the threshold that distinguishes the two groups. Moreover, CELTS provided educational feedback by identifying the key factors that contributed to the overall score. Among the defined metrics, depth perception, smoothness of motion, instrument orientation, and the outcome of the task are major indicators of performance and key parameters that distinguish experts from trainees. Time and path length alone, which are the most commonly used metrics in currently available systems, are not considered good indicators of performance. CELTS is a novel and standardized skills trainer that combines the advantages of computer simulation with the features of the traditional and popular training boxes. CELTS can easily be used with a wide array of tasks and ensures comparability across different training conditions. This report further shows that a set of appropriate and clinically relevant performance metrics can be defined and a standardized scoring system can be designed.
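A task-independent z-score composite of the kind described can be sketched as follows (the metric names and expert reference statistics are invented for illustration):

    import numpy as np

    def global_score(metrics, expert_mean, expert_std, higher_is_better):
        # Standardize each kinematic metric against an expert reference group,
        # flip cost-like metrics, and average into one task-independent score.
        z = (np.asarray(metrics) - expert_mean) / expert_std
        z *= np.where(higher_is_better, 1.0, -1.0)
        return z.mean()

    # e.g., metrics = [depth_perception, smoothness, orientation, task_outcome]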
Effective correlator for RadioAstron project
NASA Astrophysics Data System (ADS)
Sergeev, Sergey
This paper presents the implementation of a software FX correlator for Very Long Baseline Interferometry, adapted for the RadioAstron project. The software correlator is implemented on heterogeneous computing systems using graphics accelerators, and it is shown that graphics hardware is highly efficient for the interferometry task. The host processor of the heterogeneous computing system forms the data flow for the graphics accelerators, whose number corresponds to the number of frequency channels; for the RadioAstron project there are seven such channels. Each accelerator computes the correlation matrix over all baselines for a single frequency channel. The initial data are converted to floating-point format, corrected with the corresponding delay function, and the entire correlation matrix is computed simultaneously. Calculation of the correlation matrix is performed using a sliding Fourier transform. Because the problem maps well onto the architecture of graphics accelerators, a single Kepler-platform processor achieved performance on this task corresponding to that of a four-node Intel computing cluster. The task scales successfully not only to a large number of graphics accelerators, but also to a large number of nodes with multiple accelerators.
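In outline, one FX step for a single frequency channel looks like this (a simplified, assumption-laden sketch, not the RadioAstron code):

    import numpy as np

    def fx_correlate(streams, nfft=1024):
        # streams: (n_stations, n_samples), already delay-corrected floats.
        spectra = np.fft.rfft(streams[:, :nfft], axis=1)           # F step
        # X step: cross-spectra for all baselines at every frequency bin
        return np.einsum("if,jf->fij", spectra, np.conj(spectra))  # (f, i, j)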
Pal, Reshmi; Mendelson, John; Clavier, Odile; Baggott, Mathew J; Coyle, Jeremy; Galloway, Gantt P
2016-01-01
In methamphetamine (MA) users, drug-induced neurocognitive deficits may help to determine treatment, monitor adherence, and predict relapse. To measure these relationships, we developed an iPhone app (Neurophone) to compare lab and field performance of N-Back, Stop Signal, and Stroop tasks that are sensitive to MA-induced deficits. Twenty healthy controls and 16 MA-dependent participants performed the tasks in-lab using a validated computerized platform and the Neurophone before taking the latter home and performing the tasks twice daily for two weeks. N-Back task: there were no clear differences in performance between computer-based and phone-based in-lab tests, or between phone-based in-lab and phone-based in-field tests. Stop Signal task: differences in parameters prevented comparison of the computer-based and phone-based versions; there was a significant difference in phone performance between field and lab. Stroop task: response time measured by the speech recognition engine lacked the precision to yield quantifiable results. There was no learning effect over time. On average, each participant completed 84.3% of the in-field N-Back tasks and 90.4% of the in-field Stop Signal tasks (MA-dependent participants: 74.8% and 84.3%; healthy controls: 91.4% and 95.0%, respectively). Participants rated the Neurophone easy to use. Cognitive tasks performed in-field using the Neurophone have the potential to yield results comparable to those obtained in a laboratory setting. Tasks need to be modified for use, however, as the app's voice recognition system is not yet adequate for timed tests.
Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces
Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.
2013-01-01
A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can as well be used for motor imagery task discriminations, with performance (8–23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters used directly. An optimal MPC has significant implications for high-performance BCI applications.
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. Because scheduling of nonpreemptive tasks in the cloud computing environment is irreversible, each task has to be assigned to the most appropriate VM at its initial placement. Practically, arriving jobs consist of multiple interdependent tasks, and their independent tasks may execute on multiple VMs or on multiple cores of the same VM. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load-balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with existing methods.
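A greedy placement respecting VM capability and task length might look like this minimal sketch (the paper's algorithm is richer; the names and structure here are illustrative assumptions):

    def assign(tasks, vms):
        # tasks: [(task_id, length_mi)]; vms: [(vm_id, mips)]
        finish = {vm_id: 0.0 for vm_id, _ in vms}      # earliest free time per VM
        speed = {vm_id: mips for vm_id, mips in vms}
        plan = {}
        for task_id, length in sorted(tasks, key=lambda t: -t[1]):
            # nonpreemptive: commit to the VM with earliest completion time
            vm = min(finish, key=lambda v: finish[v] + length / speed[v])
            finish[vm] += length / speed[vm]
            plan[task_id] = vm
        return plan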
Liu, Y; Wickens, C D
1994-11-01
The evaluation of mental workload is becoming increasingly important in system design and analysis. The present study examined the structure and assessment of mental workload in performing decision and monitoring tasks by focusing on two mental workload measurements: subjective assessment and time estimation. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines displayed on a computer monitor. The subject was either in charge of the customer assignment (manual mode) or was monitoring an automated system performing the same task (automatic mode). In both cases, the subjects were required to detect the non-optimal assignments that they or the computer had made. Time pressure was manipulated by the experimenter to create fast and slow conditions. The results revealed a multi-dimensional structure of mental workload and a multi-step process of subjective workload assessment. The results also indicated that subjective workload was more influenced by the subject's participatory mode than by the factor of task speed. The time estimation intervals produced while performing the decision and monitoring tasks had significantly greater length and larger variability than those produced while either performing no other tasks or performing a well practised customer assignment task. This result seemed to indicate that time estimation was sensitive to the presence of perceptual/cognitive demands, but not to response related activities to which behavioural automaticity has developed.
No Fatigue Effect on Blink Rate
NASA Technical Reports Server (NTRS)
Kim, W.; Zangemeister, W.; Stark, L.
1984-01-01
Blink rate is reported to vary depending upon ongoing task performance; perceptual, attentional, and cognitive factors; and fatigue. Five levels of task difficulty were operationally defined, and task performance was measured as lines read aloud per minute. A single noninvasive infrared TV eyetracker was modified to measure blinking, and an on-line computer program identified and counted blinks while the subject performed the tasks. Blink rate decreased by 50% when either task performance increased (fast reading) or visual difficulty increased (blurred text); blink rate increased greatly during rest breaks. There was no change in blink rate during one-hour experiments even though subjects complained of severe fatigue.
Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G
2017-12-01
Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm, and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found HAL unaffected when the DC error was less than 5%; a DC error below 10% shifts HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
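The duty-cycle part of the computation is straightforward once a per-frame exertion series is available; a sketch (the binary exertion input is a hypothetical intermediate of such video analysis, not the authors' algorithm):

    import numpy as np

    def duty_cycle(exerting, fps):
        # exerting: 0/1 array with one entry per video frame
        exertion_time = exerting.sum() / fps       # seconds spent exerting
        cycle_time = exerting.size / fps           # total observed time
        return 100.0 * exertion_time / cycle_time  # DC, in percent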
Stocco, Andrea; Yamasaki, Brianna L; Prat, Chantel S
2018-04-01
This article describes the data analyzed in the paper "Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model" (Stocco et al., 2017) [1]. The data include behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompass individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data include the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.
Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing
1994-07-01
implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes interconnected by buses. 2.1 Run Time Partitioning The...nodes respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1...short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing
A wearable navigation display can improve attentiveness to the surgical field.
Stewart, James; Billinghurst, Mark
2016-06-01
Surgical navigation is typically shown on a computer display that is distant from the patient, making it difficult for the surgeon to watch the patient while performing a guided task. We investigate whether a light-weight, untracked, wearable display (such as Google Glass, which has the same size and weight as corrective glasses) can improve attentiveness to the surgical field in a simulated surgical task. Three displays were tested: a computer monitor; a peripheral display above the eye; and a through-the-lens display in front of the eye. Twelve subjects performed a task to position and orient a tracked tool on a plastic femur. Both wearable displays were tested on the dominant and non-dominant eyes of each subject. Attentiveness during the task was measured by the time taken to respond to randomly illuminated LEDs on the femur. Attentiveness was improved with the wearable displays at the cost of a decrease in accuracy. The through-the-lens display performed better than the peripheral display. The peripheral display performed better when on the dominant eye, while the through-the-lens display performed better when on the non-dominant eye. Attentiveness to the surgical field can be improved with the use of a light-weight, untracked, wearable display. A through-the-lens display performs better than a peripheral display, and both perform better than a computer monitor. Eye dominance should be considered when positioning the display.
Mission-based Scenario Research: Experimental Design And Analysis
2012-01-01
neurotechnologies called Brain-Computer Interaction Technologies. INTRODUCTION Imagine a system that can identify operator fatigue during a long-term...Brain-Computer Interaction Technologies (BCIT), a class of neurotechnologies that aim to improve task performance by incorporating measures of brain activity to optimize the interactions
An Interaction of Screen Colour and Lesson Task in CAL
ERIC Educational Resources Information Center
Clariana, Roy B.
2004-01-01
Colour is a common feature in computer-aided learning (CAL), though the instructional effects of screen colour are not well understood. This investigation considers the effects of different CAL study tasks with feedback on posttest performance and on posttest memory of the lesson colour scheme. Graduate students (n=68) completed a computer-based…
Nonoccurrence of Negotiation of Meaning in Task-Based Synchronous Computer-Mediated Communication
ERIC Educational Resources Information Center
Van Der Zwaard, Rose; Bannink, Anne
2016-01-01
This empirical study investigated the occurrence of meaning negotiation in an interactive synchronous computer-mediated second language (L2) environment. Sixteen dyads (N = 32) consisting of nonnative speakers (NNSs) and native speakers (NSs) of English performed 2 different tasks using videoconferencing and written chat. The data were coded and…
2015-01-27
placed on the user by the required tasks. Design areas of concern include seating, input and output device location and design, ambient...software, hardware, and workspace design for the test function of operability that influence operator performance in a computer-based system. Appendices provide sample design checklists and sample task checklists.
BASIC, Logo, and Pilot: A Comparison of Three Computer Languages.
ERIC Educational Resources Information Center
Maddux, Cleborne D.; Cummings, Rhoda E.
1985-01-01
Following a brief history of the Logo, BASIC, and Pilot programming languages, common educational programming tasks (input from keyboard, evaluation of keyboard input, and computation) are presented in each language to illustrate how each can be used to perform the same tasks and to demonstrate each language's strengths and weaknesses. (MBR)
1988-06-01
Computer Assisted Instruction; Artificial Intelligence...while he/she tries to perform given tasks. Means-ends analysis, a classic technique for solving search problems in Artificial Intelligence, has been used
A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.
Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary
2017-12-01
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves performance similar to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.
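A toy two-stage pipeline conveys the flavor of tasks connected by queues, with one stage's output overlapping the next stage's work (an illustration only, not the HTGS API):

    import threading, queue

    def make_task(fn, inq, outq):
        def run():
            while True:
                item = inq.get()
                if item is None:            # poison pill: shut down, propagate
                    if outq is not None:
                        outq.put(None)
                    return
                result = fn(item)
                if outq is not None:
                    outq.put(result)
        return threading.Thread(target=run)

    read_q, work_q = queue.Queue(), queue.Queue()
    stages = [make_task(str.lower, read_q, work_q),   # "read" stage
              make_task(print, work_q, None)]         # "compute" stage
    for s in stages:
        s.start()
    for tile in ["TILE_0", "TILE_1"]:
        read_q.put(tile)
    read_q.put(None)
    for s in stages:
        s.join()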
Has computational creativity successfully made it "Beyond the Fence" in musical theatre?
NASA Astrophysics Data System (ADS)
Jordanous, Anna
2017-10-01
A significant test for software is to task it with replicating human performance, as done recently with creative software and the commercial project Beyond the Fence (undertaken for a television documentary Computer Says Show). The remit of this project was to use computer software as much as possible to produce "the world's first computer-generated musical". Several creative systems were used to generate this musical, which was performed in London's West End in 2016. This paper considers the challenge of evaluating this project. Current computational creativity evaluation methods are ill-suited to evaluating projects that involve creative input from multiple systems and people. Following recent inspiration within computational creativity research from interaction design, here the DECIDE evaluation framework is applied to evaluate the Beyond the Fence project. Evaluation finds that the project was reasonably successful at achieving the task of using computational generation to produce a credible musical. Lessons have been learned for future computational creativity projects though, particularly for affording creative software more agency and enabling software to interact with other creative partners. Upon reflection, the DECIDE framework emerges as a useful evaluation "checklist" (if not a tangible operational methodology) for evaluating multiple creative systems participating in a creative task.
Determination of Tasks Required by Graduates of Manufacturing Engineering Technology Programs.
ERIC Educational Resources Information Center
Zirbel, Jay H.
1993-01-01
A Delphi panel of 14 experts identified 37 tasks performed by/qualities needed by manufacturing engineering technologists. Most important were work ethic, performance quality, communication skills, teamwork, computer applications, manufacturing basics, materials knowledge, troubleshooting, supervision, and global issues. (SK)
Operator performance and localized muscle fatigue in a simulated space vehicle control task
NASA Technical Reports Server (NTRS)
Lewis, J. L., Jr.
1979-01-01
Fourier transforms in a special purpose computer were utilized to obtain power spectral density functions from electromyograms of the biceps brachii, triceps brachii, brachioradialis, flexor carpi ulnaris, brachialis, and pronator teres in eight subjects performing isometric tracking tasks in two directions utilizing a prototype spacecraft rotational hand controller. Analysis of these spectra in general purpose computers aided in defining muscles involved in performing the task, and yielded a derived measure potentially useful in predicting task termination. The triceps was the only muscle to show significant differences in all possible tests for simple effects in both tasks and, overall, was the most consistently involved of the six muscles. The total power monitored for triceps, biceps, and brachialis dropped to minimal levels across all subjects earlier than for other muscles. However, smaller variances existed for the biceps, brachioradialis, brachialis, and flexor carpi ulnaris muscles and could provide longer predictive times due to smaller standard deviations for a greater population range.
Mental Workload during Brain-Computer Interface Training
Felton, Elizabeth A.; Williams, Justin C.; Vanderheiden, Gregg C.; Radwin, Robert G.
2012-01-01
It is not well understood how people perceive the difficulty of performing brain-computer interface (BCI) tasks, which specific aspects of mental workload contribute the most, and whether there is a difference in perceived workload between participants who are able-bodied and disabled. This study evaluated mental workload using the NASA Task Load Index (TLX), a multi-dimensional rating procedure with six subscales: Mental Demands, Physical Demands, Temporal Demands, Performance, Effort, and Frustration. Able-bodied and motor disabled participants completed the survey after performing EEG-based BCI Fitts' law target acquisition and phrase spelling tasks. The NASA-TLX scores were similar for able-bodied and disabled participants. For example, overall workload scores (range 0 – 100) for 1D horizontal tasks were 48.5 (SD = 17.7) and 46.6 (SD 10.3), respectively. The TLX can be used to inform the design of BCIs that will have greater usability by evaluating subjective workload between BCI tasks, participant groups, and control modalities.
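For reference, the unweighted ("raw TLX") aggregation is just the mean of the six subscale ratings; the numbers below are made up, not the study's data:

    subscales = {"mental": 60, "physical": 15, "temporal": 45,
                 "performance": 50, "effort": 55, "frustration": 40}
    overall_workload = sum(subscales.values()) / len(subscales)  # 0-100 scale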
Summary of synfuel characterization and combustion studies
NASA Technical Reports Server (NTRS)
Schultz, D. F.
1983-01-01
Combustion component research studies aimed at evolving environmentally acceptable approaches for burning coal-derived fuels for ground power applications were performed at the NASA Lewis Research Center under a program titled the "Critical Research and Support Technology Program" (CRT). The work was funded by the Department of Energy and was performed in four tasks. This report summarizes these tasks, which have all been previously reported; some previously unreported data from Task 4 are also presented. The first, Task 1, consisted of a literature survey aimed at determining the properties of synthetic fuels. This was followed by a computer modeling effort, Task 2, to predict the exhaust emissions resulting from burning coal liquids by various combustion techniques such as lean and rich-lean combustion. The computer predictions were then compared to the results of a flame tube rig, Task 3, in which the fuel properties were varied to simulate coal liquids. Two actual SRC 2 coal liquids were tested in this flame tube task.
Efficiency of the human observer detecting random signals in random backgrounds
Park, Subok; Clarkson, Eric; Kupinski, Matthew A.; Barrett, Harrison H.
2008-01-01
The efficiencies of the human observer and the channelized-Hotelling observer relative to the ideal observer for signal-detection tasks are discussed. Both signal-known-exactly (SKE) tasks and signal-known-statistically (SKS) tasks are considered. Signal location is uncertain for the SKS tasks, and lumpy backgrounds are used for background uncertainty in both cases. Markov chain Monte Carlo methods are employed to determine ideal-observer performance on the detection tasks. Psychophysical studies are conducted to compute human-observer performance on the same tasks. Efficiency is computed as the squared ratio of the detectabilities of the observer of interest to the ideal observer. Human efficiencies are approximately 2.1% and 24%, respectively, for the SKE and SKS tasks. The results imply that human observers are not affected as much as the ideal observer by signal-location uncertainty even though the ideal observer outperforms the human observer for both tasks. Three different simplified pinhole imaging systems are simulated, and the humans and the model observers rank the systems in the same order for both the SKE and the SKS tasks.
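The efficiency definition used above reduces to a one-liner (the values shown are placeholders, not the reported results):

    def efficiency(d_observer, d_ideal):
        # squared ratio of detectability indices
        return (d_observer / d_ideal) ** 2

    efficiency(1.0, 2.0)   # -> 0.25, i.e., 25% efficiency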
Survey of Intelligent Computer-Aided Training
NASA Technical Reports Server (NTRS)
Loftin, R. B.; Savely, Robert T.
1992-01-01
Intelligent Computer-Aided Training (ICAT) systems integrate artificial intelligence and simulation technologies to deliver training for complex, procedural tasks in a distributed, workstation-based environment. Such systems embody both the knowledge of how to perform a task and how to train someone to perform that task. This paper briefly reviews the antecedents of ICAT systems and describes the approach to their creation developed at the NASA Lyndon B. Johnson Space Center. In addition to the general ICAT architecture, specific ICAT applications that have been or are currently under development are discussed. ICAT systems can offer effective solutions to a number of training problems of interest to the aerospace community.
Applications of neural networks to landmark detection in 3-D surface data
NASA Astrophysics Data System (ADS)
Arndt, Craig M.
1992-09-01
The problem of identifying key landmarks in 3-dimensional surface data is of considerable interest in solving a number of difficult real-world tasks, including object recognition and image processing. The specific problem that we address in this research is to identify the specific landmarks (anatomical) in human surface data. This is a complex task, currently performed visually by an expert human operator. In order to replace these human operators and increase reliability of the data acquisition, we need to develop a computer algorithm which will utilize the interrelations between the 3-dimensional data to identify the landmarks of interest. The current presentation describes a method for designing, implementing, training, and testing a custom architecture neural network which will perform the landmark identification task. We discuss the performance of the net in relationship to human performance on the same task and how this net has been integrated with other AI and traditional programming methods to produce a powerful analysis tool for computer anthropometry.
Cognitive performance modeling based on general systems performance theory.
Kondraske, George V
2010-01-01
General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).
Parallel task processing of very large datasets
NASA Astrophysics Data System (ADS)
Romig, Phillip Richardson, III
This research concerns the use of distributed computer technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses are all increasing the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within a system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) provides reduction of overall computation time as a result of the task distribution even with the additional cost of data transfer and management, and (c) in the simulation mode accurately predicts the performance of the real execution environment.
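The estimate-then-simulate idea can be sketched with a simple worker-pool makespan model (the linear cost model and parameters are assumptions, not Tricky's estimator):

    import heapq

    def simulated_makespan(unit_bytes, n_workers, proc_bps, link_bps):
        free_at = [0.0] * n_workers             # next-free time per worker
        heapq.heapify(free_at)
        for b in unit_bytes:                    # one unit of work per data chunk
            est = b / proc_bps + b / link_bps   # estimated compute + transfer time
            t = heapq.heappop(free_at)          # earliest-available worker
            heapq.heappush(free_at, t + est)
        return max(free_at)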
Chong, Raymond K Y; Mills, Bradley; Dailey, Leanna; Lane, Elizabeth; Smith, Sarah; Lee, Kyoung-Hyun
2010-07-01
We tested the hypothesis that a computational overload results when two activities, one motor and the other cognitive, that draw on the same neural processing pathways are performed concurrently. Healthy young adult subjects carried out two seemingly distinct tasks of maintaining standing balance control under conditions of low (eyes closed), normal (eyes open) or high (eyes open, sway-referenced surround) visuospatial processing load while concurrently performing a cognitive task of either subtracting backwards by seven or generating words of the same first letter. A decrease in the performance of the balance control task and a decrement in the speed and accuracy of responses were noted during the subtraction but not the word generation task. The interference in the subtraction task was isolated to the first trial of the high but not normal or low visuospatial conditions. Balance control improvements with repeated exposures were observed only in the low visuospatial conditions while performance in the other conditions remained compromised. These results suggest that sensory organization for balance control appears to draw on similar visuospatial computational resources needed for the subtraction but not the word generation task. In accordance with the theory of modularity in human performance, the contrast in results between the subtraction and word generation tasks suggests that the neural overload is related to competition for similar visuospatial processes rather than limited attentional resources.
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens up new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.
Multi-tasking computer control of video related equipment
NASA Technical Reports Server (NTRS)
Molina, Rod; Gilbert, Bob
1989-01-01
The flexibility, cost-effectiveness, and widespread availability of personal computers now make it possible to completely integrate the previously separate elements of video post-production into a single device. Specifically, a personal computer, such as the Commodore-Amiga, can perform multiple and simultaneous tasks from an individual unit. Relatively low cost, minimal space requirements, and user-friendliness provide the most favorable environment for the many phases of video post-production. Computers are well known for their basic abilities to process numbers, text and graphics and to reliably perform repetitive and tedious functions efficiently. These capabilities can now apply as either additions or alternatives to existing video post-production methods. A present example of computer-based video post-production technology is the RGB CVC (Computer and Video Creations) WorkSystem. A wide variety of integrated functions are made possible with an Amiga computer existing at the heart of the system.
ERIC Educational Resources Information Center
Mechling, Linda C.; Swindle, Catherine O.
2013-01-01
This investigation examined the effects of video modeling on the fine and gross motor task performance by three students with a diagnosis of moderate intellectual disability (Group 1) and by three students with a diagnosis of autism spectrum disorder (Group 2). Using a multiple probe design across three sets of tasks, the study examined the…
F-16 Task Analysis Criterion-Referenced Objective and Objectives Hierarchy Report. Volume 4
1981-03-01
Initiation cues: Engine flameout. Systems presenting cues: Aircraft fuel, engine. STANDARD: Authority: TACR 60-2. Performance precision: TD in first 1/3 of...task: None. Initiation cues: On short final. Systems presenting cues: N/A. STANDARD: Authority: 60-2. Performance precision: +/- .5 AOA; TD zone 150-1000...precision: +/- .05 AOA; TD zone 150-1000. Computational accuracy: N/A. TASK NO.: 1.9.4. BEHAVIOR: Perform short field landing
Engineering and Computing Portal to Solve Environmental Problems
NASA Astrophysics Data System (ADS)
Gudov, A. M.; Zavozkin, S. Y.; Sotnikov, I. Y.
2018-01-01
This paper describes the architecture and services of the Engineering and Computing Portal, a complex solution that provides access to high-performance computing resources and enables users to carry out computational experiments, learn parallel technologies, and solve computing tasks, including technogenic safety ones.
DOT National Transportation Integrated Search
1996-03-01
As operators are required to spend more time monitoring computer controlled devices in future systems, it is critical to define the task and situational factors (i.e., fatigue) that may impact vigilance and performance. Aspects of the gaze system can...
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial, and efficient task scheduling algorithms are therefore required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
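The SA ingredient amounts to an acceptance rule applied while refining candidate schedules; a hedged sketch (the neighborhood move, cooling schedule, and parameters are assumptions, not the SASOS specifics):

    import math, random

    def sa_refine(schedule, makespan, neighbor, temp=1.0, cooling=0.95, steps=100):
        best = cur = schedule
        best_cost = cur_cost = makespan(schedule)
        for _ in range(steps):
            cand = neighbor(cur)             # e.g., move one task to another VM
            cost = makespan(cand)
            # accept improvements always, worse moves with Boltzmann probability
            if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
                cur, cur_cost = cand, cost
                if cost < best_cost:
                    best, best_cost = cand, cost
            temp *= cooling                  # cool toward pure hill climbing
        return best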
A resource management architecture based on complex network theory in cloud computing federation
NASA Astrophysics Data System (ADS)
Zhang, Zehua; Zhang, Xuejie
2011-10-01
Cloud Computing Federation is a main trend in Cloud Computing. Resource management has a significant effect on the design, realization, and efficiency of a Cloud Computing Federation. A Cloud Computing Federation has the typical characteristics of a Complex System; therefore, in this paper we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC), with a detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers for the evolution of the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance, and adaptive ability. The results of the model experiment confirmed the advantages of RMABC in resource discovery performance.
Davies, Daniel K; Stock, Steven E; Wehmeyer, Michael L
2002-10-01
Achieving greater independence for individuals with mental retardation depends upon the acquisition of several key skills, including time-management and scheduling skills. The ability to perform tasks according to a schedule is essential to domains like independent living and employment. The use of a portable schedule prompting system to increase independence and self-regulation in time-management for individuals with mental retardation was examined. Twelve people with mental retardation participated in a comparison of their use of the technology system to perform tasks on a schedule with use of a written schedule. Results demonstrated the utility of a Palmtop computer with schedule prompting software to increase independence in the performance of vocational and daily living tasks by individuals with mental retardation.
Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.
Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie
2010-07-01
Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces due to the lack of systematic consideration of human-centered computing issues. Such interfaces can be improved to provide easy-to-use, easy-to-learn, and error-resistant EHR systems to the users. To evaluate the usability of an EHR system and suggest areas of improvement in the user interface, the user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task as a Mental (Internal) or Physical (External) operator. This analysis was performed by two analysts independently, and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform a given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users. They show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, a user would spend about 22 min (independent of system response time) on data entry, of which 11 min are spent on the more effortful mental operators. The inter-rater reliability for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically identifies the following findings related to the performance of AHLTA: (1) a large average number of total steps to complete common tasks, (2) high average execution time, and (3) a large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort required for the tasks.
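A KLM estimate is the sum of standard operator times over the step sequence; a sketch using the commonly cited Card, Moran, and Newell operator values (the task sequence below is invented, not taken from the AHLTA analysis):

    KLM_SECONDS = {"K": 0.28,  # keystroke, average skilled typist
                   "P": 1.10,  # point with mouse
                   "H": 0.40,  # home hands between keyboard and mouse
                   "M": 1.35}  # mental preparation operator

    def klm_time(ops):
        # ops: e.g. "MHPKKKK" -> predicted seconds, excluding system response
        return sum(KLM_SECONDS[o] for o in ops)

    klm_time("MHPKKKKM")   # think, reach mouse, point, type four keys, think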
Perceived control in rhesus monkeys (Macaca mulatta) - Enhanced video-task performance
NASA Technical Reports Server (NTRS)
Washburn, David A.; Hopkins, William D.; Rumbaugh, Duane M.
1991-01-01
This investigation was designed to determine whether perceived control effects found in humans extend to rhesus monkeys (Macaca mulatta) tested in a video-task format, using a computer-generated menu program, SELECT. Choosing one of the options in SELECT resulted in presentation of five trials of a corresponding task and subsequent return to the menu. In Experiments 1-3, the animals exhibited stable, meaningful response patterns in this task (i.e., they made choices). In Experiment 4, performance on tasks that were selected by the animals significantly exceeded performance on identical tasks when assigned by the experimenter under comparable conditions (e.g., time of day, order, variety). The reliable and significant advantage for performance on selected tasks, typically found in humans, suggests that rhesus monkeys were able to perceive the availability of choices.
Identification of task demands and usability issues in police use of mobile computing terminals.
Zahabi, Maryam; Kaber, David
2018-01-01
Crash reports from various states in the U.S. have shown high numbers of emergency vehicle crashes, especially in law enforcement situations. This study identified the perceived importance and frequency of police mobile computing terminal (MCT) tasks, quantified the demands of different tasks using a cognitive performance modeling methodology, identified usability violations of current MCT interface designs, and formulated design recommendations for an enhanced interface. Results revealed that "access call notes", "plate number check" and "find location on map" are the most important and frequently performed tasks for officers. "Reading plate information" was also found to be the most visually and cognitively demanding task-method. Usability principles of "using simple and natural dialog" and "minimizing user memory load" were violated by the current MCT interface design. The enhanced design showed potential for reducing cognitive demands and task completion time. Findings should be further validated using a driving simulation study. Copyright © 2017 Elsevier Ltd. All rights reserved.
Performance monitoring for brain-computer-interface actions.
Schurger, Aaron; Gale, Steven; Gozel, Olivia; Blanke, Olaf
2017-02-01
When presented with a difficult perceptual decision, human observers are able to make metacognitive judgements of subjective certainty. Such judgements can be made independently of and prior to any overt response to a sensory stimulus, presumably via internal monitoring. Retrospective judgements about one's own task performance, on the other hand, require first that the subject perform a task and thus could potentially be made based on motor processes, proprioception, and other sensory feedback rather than internal monitoring. With this dichotomy in mind, we set out to study performance monitoring using a brain-computer interface (BCI), with which subjects could voluntarily perform an action - moving a cursor on a computer screen - without any movement of the body, and thus without somatosensory feedback. Real-time visual feedback was available to subjects during training but not during the experiment, where the true final position of the cursor was revealed only after the subject had estimated where s/he thought it had ended up after 6 s of BCI-based cursor control. During the first half of the experiment, subjects based their assessments primarily on the prior probability of the end position of the cursor on previous trials. However, during the second half of the experiment, subjects' judgements moved significantly closer to the true end position of the cursor and away from the prior. This suggests that subjects can monitor task performance when the task is performed without overt movement of the body. Copyright © 2016 Elsevier Inc. All rights reserved.
Multitasking a three-dimensional Navier-Stokes algorithm on the Cray-2
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
A three-dimensional computational aerodynamics algorithm has been multitasked for efficient parallel execution on the Cray-2. It provides a means for examining the multitasking performance of a complete CFD application code. An embedded zonal multigrid scheme is used to solve the Reynolds-averaged Navier-Stokes equations for an internal flow model problem. The explicit nature of each component of the method allows a spatial partitioning of the computational domain to achieve a well-balanced task load for MIMD computers with vector-processing capability. Experiments have been conducted with both two- and three-dimensional multitasked cases. The best speedup attained by an individual task group was 3.54 on four processors of the Cray-2, while the entire solver yielded a speedup of 2.67 on four processors for the three-dimensional case. The multiprocessing efficiency of various types of computational tasks is examined, performance on two Cray-2s with different memory access speeds is compared, and extrapolation to larger problems is discussed.
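The reported speedups can be sanity-checked with Amdahl's law. A small sketch, assuming the ideal-case model with no multitasking overhead:

```python
# Back-of-the-envelope check of the reported Cray-2 results using
# Amdahl's law: speedup S on N processors with parallel fraction p is
# S = 1 / ((1 - p) + p / N). Solving for p from a measured speedup:
def parallel_fraction(speedup, n_procs):
    return (1.0 / speedup - 1.0) / (1.0 / n_procs - 1.0)

for label, s in [("best task group", 3.54), ("entire solver", 2.67)]:
    p = parallel_fraction(s, 4)
    print(f"{label}: speedup {s} on 4 CPUs -> ~{p:.0%} of work parallelized")
```

The back-calculated parallel fractions (roughly 96% for the best task group versus 83% for the whole solver) illustrate why the complete application speeds up less than its best-performing component.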
The effects of syntactic complexity on the human-computer interaction
NASA Technical Reports Server (NTRS)
Chechile, R. A.; Fleischman, R. N.; Sadoski, D. M.
1986-01-01
Three divided-attention experiments were performed to evaluate the effectiveness of a syntactic analysis of the primary task of editing flight-route waypoint information. For all editing conditions, a formal syntactic expression was developed for the operator's interaction with the computer. In terms of this expression, four measures of syntactic complexity were examined. Increased syntactic complexity did increase the time needed to train operators, but once operators were trained, syntactic complexity did not influence divided-attention performance. However, the number of memory retrievals required of the operator significantly accounted for the variation in accuracy, workload, and task completion time found on the different editing tasks under attention-sharing conditions.
The employment of a spoken language computer applied to an air traffic control task.
NASA Technical Reports Server (NTRS)
Laveson, J. I.; Silver, C. A.
1972-01-01
Assessment of the merits of a limited spoken-language (56 words) computer in a simulated air traffic control (ATC) task. An airport zone approximately 60 miles in diameter, with a traffic flow simulation ranging from single-engine aircraft to commercial jets, provided the workload for the controllers. This research determined that, under the circumstances of the experiments carried out, the use of a spoken-language computer would not improve controller performance.
An ergonomic evaluation comparing desktop, notebook, and subnotebook computers.
Szeto, Grace P; Lee, Raymond
2002-04-01
To evaluate and compare the postures and movements of the cervical and upper thoracic spine, typing performance, and workstation ergonomic factors when using desktop, notebook, and subnotebook computers. Repeated-measures design. A motion analysis laboratory with an electromagnetic tracking device. A convenience sample of 21 university students between ages 20 and 24 years with no history of neck or shoulder discomfort. Each subject performed a standardized typing task using each of the 3 computers. Measurements during the typing task were taken at set intervals. The cervical and thoracic spine adopted a more flexed posture when using the smaller computers. There were significantly greater neck movements when using the desktop computer compared with the notebook and subnotebook computers. The viewing distances adopted by the subjects decreased as computer size decreased. Typing performance and subjective rating of difficulty in using the keyboards also differed significantly among the 3 types of computers. Computer users need to consider the posture of the spine and the potential risk of developing musculoskeletal discomfort when choosing computers. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation.
Buchler, Norbou G; Hoyer, William J; Cerella, John
2008-06-01
Task-switching performance was assessed in young and older adults as a function of the number of task sets to be actively maintained in memory (varied from 1 to 4) over the course of extended training (5 days). Each of the four tasks required the execution of a simple computational algorithm, which was instantaneously cued by the color of the two-digit stimulus. Tasks were presented in pure (task set size 1) and mixed blocks (task set sizes 2, 3, 4), and the task sequence was unpredictable. By considering task switching beyond two tasks, we found evidence for a cognitive control system that is not overwhelmed by task set size load manipulations. Extended training eliminated age effects in task-switching performance, even when the participants had to manage the execution of up to four tasks. The results are discussed in terms of current theories of cognitive control, including task set inertia and production system postulates.
Granata, C; Pino, M; Legouverneur, G; Vidal, J-S; Bidaud, P; Rigaud, A-S
2013-01-01
Socially assistive robotics for elderly care is a growing field. However, although robotics has the potential to support the elderly in daily tasks by offering specific services, the development of usable interfaces is still a challenge. Since several factors, such as age- or disease-related changes in perceptual or cognitive abilities and familiarity with computer technologies, influence technology use, they must be considered when designing interfaces for these users. This paper presents findings from usability testing of two different services provided by a socially assistive robot intended for elderly users with cognitive impairment: a grocery shopping list and an agenda application. The main goal of this study is to identify the usability problems of the robot interface for target end-users as well as to isolate the human factors that affect the use of the technology by the elderly. Socio-demographic characteristics and computer experience were examined as factors that could have an influence on task performance. A group of 11 elderly persons with Mild Cognitive Impairment and a group of 11 cognitively healthy elderly individuals took part in this study. Performance measures (task completion time and number of errors) were collected. Cognitive profile, age, and computer experience were found to impact task performance. Participants with cognitive impairment completed the tasks with more errors than the cognitively healthy elderly. Younger participants and those with previous computer experience were faster at completing the tasks, confirming previous findings in the literature. The overall results suggested that the interfaces and contents of the services assessed were usable by older adults with cognitive impairment. However, some usability problems were identified and should be addressed to better meet the needs and capacities of target end-users.
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved minimum d′ by at least 17.8%, but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
NASA Technical Reports Server (NTRS)
Meyer, Donald; Uchenik, Igor
2007-01-01
The PPC750 Performance Monitor (Perfmon) is a computer program that helps the user to assess the performance characteristics of application programs running under the Wind River VxWorks real-time operating system on a PPC750 computer. Perfmon generates a user-friendly interface and collects performance data by use of performance registers provided by the PPC750 architecture. It processes and presents run-time statistics on a per-task basis over a repeating time interval (typically, several seconds or minutes) specified by the user. When the Perfmon software module is loaded with the user's software modules, it is available for use through Perfmon commands, without any modification of the user's code and at negligible performance penalty. Per-task run-time performance data made available by Perfmon include percentage time, number of instructions executed per unit time, dispatch ratio, stack high-water mark, and level-1 instruction and data cache miss rates. The performance data are written to a file specified by the user or to the serial port of the computer.
On the tip of the tongue: learning typing and pointing with an intra-oral computer interface.
Caltenco, Héctor A; Breidegard, Björn; Struijk, Lotte N S Andreasen
2014-07-01
To evaluate typing and pointing performance and improvement over time of four able-bodied participants using an intra-oral tongue-computer interface for computer control. A physically disabled individual may lack the ability to efficiently control standard computer input devices. There have been several efforts to produce and evaluate interfaces that provide individuals with physical disabilities the possibility to control personal computers. Training with the intra-oral tongue-computer interface was performed by playing games over 18 sessions. Skill improvement was measured through typing and pointing exercises at the end of each training session. Typing throughput improved from averages of 2.36 to 5.43 correct words per minute. Pointing throughput improved from averages of 0.47 to 0.85 bits/s. Target tracking performance, measured as relative time on target, improved from averages of 36% to 47%. Path following throughput improved from averages of 0.31 to 0.83 bits/s and decreased to 0.53 bits/s with more difficult tasks. Learning curves support the notion that the tongue can rapidly learn novel motor tasks. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, which makes the tongue a feasible input organ for computer control. Intra-oral computer interfaces could provide individuals with severe upper-limb mobility impairments the opportunity to control computers and automatic equipment. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, but does not cause fatigue easily and might be invisible to other people, which is highly prioritized by assistive device users. Combination of visual and auditory feedback is vital for a good performance of an intra-oral computer interface and helps to reduce involuntary or erroneous activations.
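The throughputs above are reported in bits/s, which suggests a Fitts-style information metric. A minimal sketch, assuming the Shannon formulation of the index of difficulty (the abstract does not state which formulation the authors used, so the numbers are illustrative only):

```python
import math

# Pointing throughput in bits/s, assuming the Shannon formulation of
# Fitts' index of difficulty: ID = log2(A / W + 1), throughput = ID / MT.
def fitts_throughput(amplitude, width, movement_time):
    index_of_difficulty = math.log2(amplitude / width + 1)  # bits
    return index_of_difficulty / movement_time              # bits/s

# Hypothetical target: 200 px away, 40 px wide, reached in 3.1 s
print(f"{fitts_throughput(200, 40, 3.1):.2f} bits/s")
```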
Software designs of image processing tasks with incremental refinement of computation.
Anastasia, Davide; Andreopoulos, Yiannis
2010-08-01
Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution, and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
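A toy version of the bitplane idea follows, assuming a linear operator (a 3x3 box sum) so that per-bitplane partial results add up exactly; the authors' released designs cover transforms, convolution, and block matching, which this sketch does not reproduce.

```python
import numpy as np

# Bitplane-based incremental computation: a linear operation (here a
# simple 3x3 box sum) is applied one bitplane at a time, from most to
# least significant. Each increment refines the previous result, so the
# computation can be stopped early at a known precision.
def box_sum(plane):
    out = np.zeros_like(plane, dtype=np.int64)
    padded = np.pad(plane, 1)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + plane.shape[0], dx:dx + plane.shape[1]]
    return out

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
exact = box_sum(image.astype(np.int64))

approx = np.zeros(image.shape, dtype=np.int64)
for bit in range(7, -1, -1):                   # MSB first
    plane = ((image >> bit) & 1).astype(np.int64)
    approx += box_sum(plane) << bit            # incremental refinement
    err = np.abs(exact - approx).max()
    print(f"after bitplane {bit}: max abs error = {err}")
```

Because the operator is linear, the error bound after each increment is known in advance, which is what allows the computation to be "arbitrarily terminated" as the abstract describes.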
Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps
Bowen, Chris; Ye, Gu; Alterovitz, Ron
2015-01-01
In unstructured environments in people’s homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task’s motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method’s effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints. Note to Practitioners: Motivated by the desire to enable robots to autonomously operate in cluttered home and workplace environments, this paper presents an approach for intuitively training a robot in a manner that enables it to repeat the task in novel scenarios and in the presence of unforeseen obstacles in the environment. Based on user-provided demonstrations of the task, our method learns features of the task that are consistent across the demonstrations and that we expect should be repeated by the robot when performing the task. We next present an efficient algorithm for planning robot motions to perform the task based on the learned features while avoiding obstacles. We demonstrate the effectiveness of our motion planner for scenarios requiring transferring a powder and pushing a button in environments with obstacles, and we plan to extend our results to more complex tasks in the future. PMID:26279642
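The Mahalanobis-distance cost described above can be sketched directly, with a stand-in two-dimensional feature space in place of the paper's learned, time-dependent, object-relative features:

```python
import numpy as np

# Score a candidate plan by its Mahalanobis distance from the
# demonstration distribution in feature space. The features here are
# stand-ins (e.g., plate tilt sampled along the trajectory); the real
# method uses learned, time-dependent, object-relative features.
def mahalanobis_cost(candidate_features, demo_features):
    mu = demo_features.mean(axis=0)
    cov = np.cov(demo_features, rowvar=False)
    diff = candidate_features - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(1)
demos = rng.normal(loc=[0.0, 0.1], scale=[0.05, 0.02], size=(20, 2))
good_plan = np.array([0.01, 0.11])   # close to the demonstrations
bad_plan = np.array([0.4, 0.3])      # violates the learned constraint
print(mahalanobis_cost(good_plan, demos), mahalanobis_cost(bad_plan, demos))
```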
Performance evaluation of a six-axis generalized force-reflecting teleoperator
NASA Technical Reports Server (NTRS)
Hannaford, B.; Wood, L.; Guggisberg, B.; Mcaffee, D.; Zak, H.
1989-01-01
Work in real-time distributed computation and control has culminated in a prototype force-reflecting telemanipulation system having a dissimilar master (cable-driven, force-reflecting hand controller) and slave (PUMA 560 robot with custom controller), an extremely high sampling rate (1000 Hz), and a low loop computation delay (5 msec). In a series of experiments with this system and five trained test operators covering over 100 hours of teleoperation, performance was measured in a series of generic and application-driven tasks with and without force feedback, and with control shared between teleoperation and local sensor-referenced control. Measurements defining task performance included 100-Hz recording of six-axis force/torque information from the slave manipulator wrist, task completion time, and visual observation of predefined task errors. The tasks consisted of high-precision peg-in-hole insertion, mating electrical connectors, attaching and detaching velcro, and engaging a twist-lock multi-pin connector. Each task was repeated three times under several operating conditions: normal bilateral telemanipulation, forward position control without force feedback, and shared control. In shared control, orientation was locally servo-controlled to comply with applied torques, while translation was under operator control. All performance measures improved as capability was added along a spectrum ranging from pure position control through force-reflecting teleoperation and shared control. Performance was optimal for the bare-handed operator.
Portrat, Sophie; Guida, Alessandro; Phénix, Thierry; Lemaire, Benoît
2016-04-01
Working memory (WM) is a cognitive system allowing short-term maintenance and processing of information. Maintaining information in WM classically consists of rehearsing or refreshing it. Chunking could also be considered a maintenance mechanism. However, in the literature, it is more often used to explain performance than explicitly investigated within WM paradigms. Hence, the aim of the present paper was (1) to strengthen the experimental dialogue between WM and chunking by studying the effect of acronyms in a computer-paced complex span task paradigm, and (2) to formalize this dialogue explicitly within a computational model. Young adults performed a WM complex span task in which they had to maintain a series of 7 letters for further recall while performing a concurrent location judgment task. The series to be remembered were either random strings of letters or strings containing a 3-letter acronym that appeared in position 1, 3, or 5 in the series. Together, the data and simulations provide a better understanding of the maintenance mechanisms taking place in WM and its interplay with long-term memory. Indeed, the behavioral WM performance lends evidence to the functional characteristics of chunking, which seems to be, especially in a WM complex span task, an attentional time-based mechanism that certainly enhances WM performance but also competes with other processes at hand in WM. Computational simulations support and delineate such a conception by showing that searching for a chunk in long-term memory involves attentionally demanding subprocesses that essentially take place during the encoding phases of the task.
ERIC Educational Resources Information Center
Yaratan, Huseyin
2003-01-01
An ITS (Intelligent Tutoring System) is a teaching-learning medium that uses artificial intelligence (AI) technology for instruction. Roberts and Park (1983) define AI as the attempt to get computers to perform tasks that, if performed by a human being, would require intelligence. The design of an ITS comprises two distinct…
Coffee intake and development of pain during computer work.
Strøm, Vegard; Røe, Cecilie; Knardahl, Stein
2012-09-03
The present study sought to determine whether subjects who had consumed coffee before performing a simulated computer office-work task, one found to provoke pain in the neck, shoulders, forearms, and wrists, exhibited a different time course of pain development than subjects who had abstained from coffee intake. Forty-eight subjects, all working full time, 22 with chronic shoulder and neck pain and 26 healthy pain-free subjects, were recruited to perform a computer-based office-work task for 90 min. Nineteen (40%) of the subjects had consumed coffee (1/2-1 cup) on average 1 h 18 min before the start. Pain intensity in the shoulders, neck, forearms, and wrists was rated on a visual analogue scale every 15 min throughout the work task. During the work task, the coffee consumers exhibited significantly lower pain increase than those who abstained from coffee. Subjects who had consumed coffee before starting a pain-provoking office work task exhibited attenuated pain development compared with subjects who had abstained from coffee intake. These results might have potentially interesting implications for a pain-modulating effect of caffeine in an everyday setting. However, studies with a double-blind, placebo-controlled, randomized design are needed.
Intelligent Computer Assisted Instruction (ICAI): Formative Evaluation of Two Systems
1986-03-01
…appreciation for the power of computer technology. Interpretation: Yale students are a strikingly high performing group by traditional academic…
Pagan, Marino
2014-01-01
The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address these challenges, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′). PMID:24920017
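The projection step described above is straightforward to sketch. The basis below is a generic orthonormal basis built by QR factorization; the paper's basis is defined from task variables, and the squared weights additionally require the bias correction the abstract describes.

```python
import numpy as np

# Project a neuron's condition-averaged responses onto a predefined
# orthonormal basis and combine the weights into signal-modulation
# measures. A generic QR basis stands in for the paper's task-specific
# basis vectors (e.g., visual, target, and decision components).
rng = np.random.default_rng(2)
n_conditions = 8
basis, _ = np.linalg.qr(rng.normal(size=(n_conditions, n_conditions)))

responses = rng.normal(size=n_conditions)      # one neuron, 8 conditions
weights = basis.T @ responses                  # projection coefficients
modulation = weights ** 2                      # raw (noise-biased) measure
print(modulation)
# As the abstract stresses, these squared weights are biased upward by
# trial-to-trial noise and require an analytical bias correction.
```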
ERIC Educational Resources Information Center
Slof, B.; Erkens, G.; Kirschner, P. A.; Janssen, J.; Jaspers, J. G. M.
2012-01-01
This study investigated whether and how scripting learners' use of representational tools in a computer supported collaborative learning (CSCL)-environment fostered their collaborative performance on a complex business-economics task. Scripting the problem-solving process sequenced and made its phase-related part-task demands explicit, namely…
Task-discriminative space-by-time factorization of muscle activity
Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien
2015-01-01
Movement generation has been hypothesized to rely on a modular organization of muscle activity. Crucial to this hypothesis is the ability to perform reliably a variety of motor tasks by recruiting a limited set of modules and combining them in a task-dependent manner. Thus far, existing algorithms that extract putative modules of muscle activations, such as Non-negative Matrix Factorization (NMF), identify modular decompositions that maximize the reconstruction of the recorded EMG data. Typically, the functional role of the decompositions, i.e., task accomplishment, is only assessed a posteriori. However, as motor actions are defined in task space, we suggest that motor modules should be computed in task space too. In this study, we propose a new module extraction algorithm, named DsNM3F, that uses task information during the module identification process. DsNM3F extends our previous space-by-time decomposition method (the so-called sNM3F algorithm, which could assess task performance only after having computed modules) to identify modules gauging between two complementary objectives: reconstruction of the original data and reliable discrimination of the performed tasks. We show that DsNM3F recovers the task dependence of module activations more accurately than sNM3F. We also apply it to electromyographic signals recorded during performance of a variety of arm pointing tasks and identify spatial and temporal modules of muscle activity that are highly consistent with previous studies. DsNM3F achieves perfect task categorization without significant loss in data approximation when task information is available and generalizes as well as sNM3F when applied to new data. These findings suggest that the space-by-time decomposition of muscle activity finds robust task-discriminating modular representations of muscle activity and that the insertion of task discrimination objectives is useful for describing the task modulation of module recruitment. PMID:26217213
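For contrast with DsNM3F, the conventional two-step baseline the abstract argues against (reconstruction-only NMF followed by a posteriori task classification) can be sketched as follows, using synthetic stand-in data; DsNM3F itself folds the discrimination objective into the factorization and is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Baseline: extract modules by plain NMF (maximizing reconstruction
# only), then check task discrimination a posteriori. Data are synthetic
# stand-ins for trial-by-(muscle x time) EMG matrices.
rng = np.random.default_rng(3)
n_trials, n_features, n_tasks = 120, 60, 3
labels = np.repeat(np.arange(n_tasks), n_trials // n_tasks)
prototypes = rng.uniform(0, 1, size=(n_tasks, n_features))
emg = np.abs(prototypes[labels] + 0.3 * rng.normal(size=(n_trials, n_features)))

nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
activations = nmf.fit_transform(emg)           # per-trial module recruitment

acc = cross_val_score(LinearDiscriminantAnalysis(), activations, labels, cv=5)
print(f"task classification from module activations: {acc.mean():.2f}")
```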
Photonic single nonlinear-delay dynamical node for information processing
NASA Astrophysics Data System (ADS)
Ortín, Silvia; San-Martín, Daniel; Pesquera, Luis; Gutiérrez, José Manuel
2012-06-01
An electro-optical system with a delay loop based on semiconductor lasers is investigated for information processing by performing numerical simulations. This system can replace a complex network of many nonlinear elements for the implementation of Reservoir Computing. We show that a single nonlinear-delay dynamical system has the basic properties to perform as reservoir: short-term memory and separation property. The computing performance of this system is evaluated for two prediction tasks: Lorenz chaotic time series and nonlinear auto-regressive moving average (NARMA) model. We sweep the parameters of the system to find the best performance. The results achieved for the Lorenz and the NARMA-10 tasks are comparable to those obtained by other machine learning methods.
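The NARMA-10 benchmark mentioned above has a standard closed-form definition, sketched below; only the target series a readout must learn is generated here, not the electro-optical reservoir itself.

```python
import numpy as np

# Standard NARMA-10 benchmark: the target depends nonlinearly on the
# last ten inputs and outputs, so a readout succeeds only if the
# dynamical system provides short-term memory. With inputs u drawn
# uniformly from [0, 0.5]:
#   y[t+1] = 0.3 y[t] + 0.05 y[t] * sum_{i=0}^{9} y[t-i]
#            + 1.5 u[t-9] u[t] + 0.1
def narma10(n_steps, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(0, 0.5, n_steps)
    y = np.zeros(n_steps)
    for t in range(9, n_steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(2000)
print(u[:3], y[10:13])
```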
Program Predicts Time Courses of Human/Computer Interactions
NASA Technical Reports Server (NTRS)
Vera, Alonso; Howes, Andrew
2005-01-01
CPM X is a computer program that predicts sequences of, and amounts of time taken by, routine actions performed by a skilled person performing a task. Unlike programs that simulate the interaction of the person with the task environment, CPM X predicts the time course of events as consequences of encoded constraints on human behavior. The constraints determine which cognitive and environmental processes can occur simultaneously and which have sequential dependencies. The input to CPM X comprises (1) a description of a task and strategy in a hierarchical description language and (2) a description of architectural constraints in the form of rules governing interactions of fundamental cognitive, perceptual, and motor operations. The output of CPM X is a Program Evaluation Review Technique (PERT) chart that presents a schedule of predicted cognitive, motor, and perceptual operators interacting with a task environment. The CPM X program allows direct, a priori prediction of skilled user performance on complex human-machine systems, providing a way to assess critical interfaces before they are deployed in mission contexts.
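The scheduling idea is essentially critical-path computation over a dependency DAG of operators. A toy sketch, with invented operator names and durations (CPM X's real operator set and timings come from its encoded architectural rules):

```python
from graphlib import TopologicalSorter

# PERT-style schedule: operators with durations and precedence
# constraints are laid out by computing earliest finish times along the
# dependency DAG. Names and durations are hypothetical placeholders.
durations = {"perceive": 0.10, "decide": 0.05, "move-hand": 0.30,
             "press-key": 0.10, "verify": 0.15}
depends_on = {"decide": {"perceive"},
              "move-hand": {"decide"},
              "press-key": {"move-hand"},
              "verify": {"press-key"}}

finish = {}
for op in TopologicalSorter(depends_on).static_order():
    start = max((finish[d] for d in depends_on.get(op, ())), default=0.0)
    finish[op] = start + durations[op]

print(f"predicted task time: {max(finish.values()):.2f} s")
```

In the real model, separate perceptual, cognitive, and motor operators can overlap in time, which is exactly what the DAG formulation (unlike a purely serial KLM sum) can express.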
High performance computing environment for multidimensional image analysis
Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo
2007-01-01
Background: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results: We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion: Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099
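The decomposition strategy scales down to a single workstation. A sketch, assuming a 3x3x3 median filter, slab partitioning along one axis, and a one-voxel halo playing the role of the nearest-neighbor communication:

```python
import numpy as np
from multiprocessing import Pool
from scipy.ndimage import median_filter

# The volume is split into slabs; each worker filters its slab plus a
# halo of neighboring voxels, and the halos are trimmed before
# reassembly so the result matches the global filter exactly.
HALO = 1  # for a 3x3x3 median filter

def filter_slab(args):
    slab, lo_trim, hi_trim = args
    out = median_filter(slab, size=3)
    return out[lo_trim:slab.shape[0] - hi_trim]

def parallel_median(volume, n_workers=4):
    bounds = np.linspace(0, volume.shape[0], n_workers + 1, dtype=int)
    jobs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        pad_lo, pad_hi = min(HALO, lo), min(HALO, volume.shape[0] - hi)
        jobs.append((volume[lo - pad_lo:hi + pad_hi], pad_lo, pad_hi))
    with Pool(n_workers) as pool:
        return np.concatenate(pool.map(filter_slab, jobs))

if __name__ == "__main__":
    vol = np.random.default_rng(4).integers(0, 255, (64, 64, 64))
    assert np.array_equal(parallel_median(vol), median_filter(vol, size=3))
```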
Advanced information processing system
NASA Technical Reports Server (NTRS)
Lala, J. H.
1984-01-01
Design and performance details of the advanced information processing system (AIPS) for fault and damage tolerant data processing on aircraft and spacecraft are presented. AIPS comprises several computers distributed throughout the vehicle and linked by a damage tolerant data bus. Most I/O functions are available to all the computers, which run in a TDMA mode. Each computer performs separate specific tasks in normal operation and assumes other tasks in degraded modes. Redundant software assures that all fault monitoring, logging and reporting are automated, together with control functions. Redundant duplex links and damage-spread limitation provide the fault tolerance. Details of an advanced design of a laboratory-scale proof-of-concept system are described, including functional operations.
ERIC Educational Resources Information Center
Fezzani, K.; Albinet, C.; Thon, B.; Marquie, J. -C.
2010-01-01
The present study investigated the extent to which the impact of motor difficulty on the acquisition of a computer task varies as a function of age. Fourteen young and 14 older participants performed 352 sequences of 10 serial pointing movements with a wireless pen on a digitiser tablet. A conditional probabilistic structure governed the…
ERIC Educational Resources Information Center
Lu, Zhihong; Wang, Yanfei
2014-01-01
The effective design of test items within a computer-based language test (CBLT) for developing English as a foreign language (EFL) learners' listening and speaking skills has become an increasingly challenging task for both test users and test designers compared with that of pencil-and-paper tests in the past. It needs to fit integrated oral…
Increased Memory Load during Task Completion when Procedures Are Presented on Mobile Screens
ERIC Educational Resources Information Center
Byrd, Keena S.; Caldwell, Barrett S.
2011-01-01
The primary objective of this research was to compare procedure-based task performance using three common mobile screen sizes: ultra mobile personal computer (7 in./17.8 cm), personal data assistant (3.5 in./8.9 cm), and SmartPhone (2.8 in./7.1 cm). Subjects used these three screen sizes to view and execute a computer maintenance procedure.…
NASA Technical Reports Server (NTRS)
Dorband, John E.
1987-01-01
Generating graphics to faithfully represent information can be a computationally intensive task. A way of using the Massively Parallel Processor to generate images by ray tracing is presented. This technique uses sort computation, a method of performing generalized routing interspersed with computation on a single-instruction-multiple-data (SIMD) computer.
Simmering, Vanessa R
2016-09-01
Working memory is a vital cognitive skill that underlies a broad range of behaviors. Higher cognitive functions are reliably predicted by working memory measures from two domains: children's performance on complex span tasks, and infants' performance in looking paradigms. Despite the similar predictive power across these research areas, theories of working memory development have not connected these different task types and developmental periods. The current project takes a first step toward bridging this gap by presenting a process-oriented theory, focusing on two tasks designed to assess visual working memory capacity in infants (the change-preference task) versus children and adults (the change detection task). Previous studies have shown inconsistent results, with capacity estimates increasing from one to four items during infancy, but only two to three items during early childhood. A probable source of this discrepancy is the different task structures used with each age group, but prior theories were not sufficiently specific to explain how performance relates across tasks. The current theory focuses on cognitive dynamics, that is, how memory representations are formed, maintained, and used within specific task contexts over development. This theory was formalized in a computational model to generate three predictions: 1) capacity estimates in the change-preference task should continue to increase beyond infancy; 2) capacity estimates should be higher in the change-preference versus change detection task when tested within individuals; and 3) performance should correlate across tasks because both rely on the same underlying memory system. I also tested a fourth prediction, that development across tasks could be explained through increasing real-time stability, realized computationally as strengthening connectivity within the model. Results confirmed these predictions, supporting the cognitive dynamics account of performance and developmental changes in real-time stability. The monograph concludes with implications for understanding memory, behavior, and development in a broader range of cognitive development. © 2016 The Society for Research in Child Development, Inc.
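Capacity estimates from change detection tasks of the kind discussed above are commonly computed with Cowan's K; the monograph does not state its exact formula, so the sketch below uses the standard single-probe version with made-up response rates.

```python
# Cowan's K capacity estimate for single-probe change detection:
# K = (hit rate + correct rejection rate - 1) x set size.
# The sample rates below are invented for illustration.
def cowans_k(hit_rate, correct_rejection_rate, set_size):
    return (hit_rate + correct_rejection_rate - 1.0) * set_size

print(cowans_k(0.85, 0.80, set_size=4))  # -> 2.6 items
```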
Huo, Xueliang; Johnson-Long, Ashley N.; Ghovanloo, Maysam; Shinohara, Minoru
2015-01-01
The purpose of this study was to compare the motor performance of the tongue, using the Tongue Drive System, with hand operation for relatively complex tasks under different levels of background physical exertion. Thirteen young able-bodied adults performed tasks that tested accuracy and variability in tracking a sinusoidal waveform, as well as performance in playing two video games that require accurate and rapid movements with cognitive processing, using tongue and hand under two levels of background physical exertion. Results show that additional background physical activity did not influence rapid and accurate displacement performance but compromised slow waveform tracking and shooting performance for both hand and tongue. Slow waveform tracking by the tongue was compromised by an additional motor or cognitive task, but only by an additional motor task for the hand. Practitioner Summary: We investigated the influence of task complexity and background physical exertion on the motor performance of tongue and hand. Results indicate that task performance degrades with an additional concurrent task or physical exertion due to the limited attentional resources available for handling both the motor task and the background exertion. PMID:24003900
Hollingworth, William; Devine, Emily Beth; Hansen, Ryan N; Lawless, Nathan M; Comstock, Bryan A; Wilson-Norton, Jennifer L; Tharp, Kathleen L; Sullivan, Sean D
2007-01-01
Electronic prescribing has improved the quality and safety of care. One barrier preventing widespread adoption is the potential detrimental impact on workflow. We used time-motion techniques to compare prescribing times at three ambulatory care sites that used paper-based prescribing, desktop e-prescribing, or laptop e-prescribing. An observer timed all prescriber (n = 27) and staff (n = 42) tasks performed during a 4-hour period. At the sites with optional e-prescribing, >75% of prescription-related events were performed electronically. Prescribers at e-prescribing sites spent less time writing, but time-savings were offset by increased computer tasks. After adjusting for site, prescriber, and prescription type, e-prescribing tasks took marginally longer than handwritten prescriptions (12.0 seconds; CI -1.6 to 25.6). Nursing staff at the e-prescribing sites spent longer on computer tasks (5.4 minutes/hour; CI 0.0 to 10.7). E-prescribing was not associated with an increase in combined computer and writing time for prescribers. If carefully implemented, e-prescribing will not greatly disrupt workflow.
NASA Technical Reports Server (NTRS)
Stroupe, Ashley W.; Okon, Avi; Robinson, Matthew; Huntsberger, Terry; Aghazarian, Hrand; Baumgartner, Eric
2004-01-01
Robotic Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous acquisition, transport, and precision mating of components in construction tasks. RCC minimizes use of the resources that are constrained in a space environment, such as computation, power, communication, and sensing. A behavior-based architecture provides adaptability and robustness despite low computational requirements. RCC successfully performs several construction-related tasks in an emulated outdoor environment despite high levels of uncertainty in motion and sensing. Quantitative results are provided for formation keeping in component transport, precision instrument placement, and construction tasks.
1980-01-01
…together a task force to make recommendations on what we should be doing about computer security. Other members of the task force came from both our… …of the marketing task force mostly echoed and endorsed the user's report. Both reports were issued in March of 1973. Notice that DoD 5200.28 had just…
2013-12-01
…authors present a Computing on Dissemination with predictable contacts (pCoD) algorithm, since it is impossible to reserve task execution time in advance…
Transformation of OODT CAS to Perform Larger Tasks
NASA Technical Reports Server (NTRS)
Mattmann, Chris; Freeborn, Dana; Crichton, Daniel; Hughes, John; Ramirez, Paul; Hardman, Sean; Woollard, David; Kelly, Sean
2008-01-01
A computer program denoted OODT CAS has been transformed to enable performance of larger tasks that involve greatly increased data volumes and increasingly intensive processing of data on heterogeneous, geographically dispersed computers. Prior to the transformation, OODT CAS (also alternatively denoted, simply, 'CAS') [wherein 'OODT' signifies 'Object-Oriented Data Technology' and 'CAS' signifies 'Catalog and Archive Service'] was a proven software component used to manage scientific data from spaceflight missions. In the transformation, CAS was split into two separate components representing its canonical capabilities: file management and workflow management. In addition, CAS was augmented by addition of a resource-management component. This third component enables CAS to manage heterogeneous computing by use of diverse resources, including high-performance clusters of computers, commodity computing hardware, and grid computing infrastructures. CAS is now more easily maintainable, evolvable, and reusable. These components can be used separately or, taking advantage of synergies, can be used together. Other elements of the transformation included addition of a separate Web presentation layer that supports distribution of data products via Really Simple Syndication (RSS) feeds, and provision for full Resource Description Framework (RDF) exports of metadata.
A novel computational model to probe visual search deficits during motor performance
Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy
2016-01-01
Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY: Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596
Kappel, David; Legenstein, Robert; Habenschuss, Stefan; Hsieh, Michael; Maass, Wolfgang
2018-01-01
Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. Once good computational performance has been reached, these ongoing changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.
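A heavily simplified toy of the reward-modulated drift-plus-noise dynamics described above (the paper derives these dynamics from a posterior over network configurations; the constants and reward gradient below are invented for the demo):

```python
import numpy as np

# Toy "synaptic sampling": each synapse has a latent parameter that
# drifts stochastically; the drift is biased by a reward gradient, and
# a synapse is functional only while its parameter is positive
# (crossing zero = retraction, re-crossing = regrowth).
rng = np.random.default_rng(5)
n_syn, steps, dt = 50, 2000, 0.01
target = rng.uniform(-1, 1, n_syn)              # weights that earn reward
theta = rng.normal(0, 0.5, n_syn)

for _ in range(steps):
    w = np.where(theta > 0, theta, 0.0)         # retracted synapses -> 0
    reward_grad = -(w - np.maximum(target, 0))  # pull weights toward target
    theta += dt * reward_grad + np.sqrt(2 * 0.01 * dt) * rng.normal(size=n_syn)

active = (theta > 0).mean()
err = np.abs(np.where(theta > 0, theta, 0) - np.maximum(target, 0)).mean()
print(f"fraction active: {active:.2f}, mean weight error: {err:.3f}")
```

Even after the weight error stops improving, the noise term keeps the parameters wandering, which is the toy analogue of the task-irrelevant drift the abstract describes.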
Effects of Computer Skill on Mouse Move and Click Performance
ERIC Educational Resources Information Center
Panagiotakopoulos, Chris; Sarris, Menelaos
2008-01-01
This study focuses on the use of computers in the field of education. It reports a series of experimental mouse move and click tasks on constant and moving stimuli. These experiments attempt to explore the efficiency with which individuals of different skill level and age group perform using a mouse. Differences in performance between high-skill…
Higher Intelligence Is Associated with Less Task-Related Brain Network Reconfiguration
Cole, Michael W.
2016-01-01
The human brain is able to exceed modern computers on multiple computational demands (e.g., language, planning) using a small fraction of the energy. The mystery of how the brain can be so efficient is compounded by recent evidence that all brain regions are constantly active as they interact in so-called resting-state networks (RSNs). To investigate the brain's ability to process complex cognitive demands efficiently, we compared functional connectivity (FC) during rest and multiple highly distinct tasks. We found previously that RSNs are present during a wide variety of tasks and that tasks only minimally modify FC patterns throughout the brain. Here, we tested the hypothesis that, although subtle, these task-evoked FC updates from rest nonetheless contribute strongly to behavioral performance. One might expect that larger changes in FC reflect optimization of networks for the task at hand, improving behavioral performance. Alternatively, smaller changes in FC could reflect optimization for efficient (i.e., small) network updates, reducing processing demands to improve behavioral performance. We found across three task domains that high-performing individuals exhibited more efficient brain connectivity updates in the form of smaller changes in functional network architecture between rest and task. These smaller changes suggest that individuals with an optimized intrinsic network configuration for domain-general task performance experience more efficient network updates generally. Confirming this, network update efficiency correlated with general intelligence. The brain's reconfiguration efficiency therefore appears to be a key feature contributing to both its network dynamics and general cognitive ability. SIGNIFICANCE STATEMENT: The brain's network configuration varies based on current task demands. For example, functional brain connections are organized in one way when one is resting quietly but in another way if one is asked to make a decision. We found that the efficiency of these updates in brain network organization is positively related to general intelligence, the ability to perform a wide variety of cognitively challenging tasks well. Specifically, we found that brain network configuration at rest was already closer to a wide variety of task configurations in intelligent individuals. This suggests that the ability to modify network connectivity efficiently when task demands change is a hallmark of high intelligence. PMID:27535904
Learning about Tasks Computers Can Perform. ERIC Digest.
ERIC Educational Resources Information Center
Brosnan, Patricia A.
Knowing what different kinds of computer equipment can do is the first step in choosing the computer that is right for you. This digest describes a developmental progression of computer capabilities. First the basic three software programs (word processing, spreadsheets, and database programs) are discussed using examples. Next, an explanation of…
Roijendijk, Linsey; Farquhar, Jason; van Gerven, Marcel; Jensen, Ole; Gielen, Stan
2013-01-01
Objective: Covert visual spatial attention is a relatively new task used in brain-computer interfaces (BCIs), and little is known about the characteristics that may affect performance in BCI tasks. We investigated whether eccentricity and task difficulty affect alpha lateralization and BCI performance. Approach: We conducted a magnetoencephalography study with 14 participants who performed a covert orientation discrimination task at an easy or difficult stimulus contrast at either a near (3.5°) or far (7°) eccentricity. Task difficulty was manipulated block-wise and subjects were aware of the difficulty level of each block. Main Results: Grand average analyses revealed a significantly larger hemispheric lateralization of posterior alpha power in the difficult condition than in the easy condition, while surprisingly no difference was found for eccentricity. The difference between task difficulty levels was significant in the interval between 1.85 s and 2.25 s after cue onset and originated from a stronger decrease in the contralateral hemisphere. No significant effect of eccentricity was found. Additionally, single-trial classification analysis revealed a higher classification rate in the difficult (65.9%) than in the easy task condition (61.1%). No effect of eccentricity was found in classification rate. Significance: Our results indicate that manipulating the difficulty of a task gives rise to variations in alpha lateralization and that using a more difficult task improves covert visual spatial attention BCI performance. The variations in the alpha lateralization could be caused by different factors such as an increased mental effort or a higher visual attentional demand. Further research is necessary to discriminate between them. We did not discover any effect of eccentricity, in contrast to results of previous research. PMID:24312477
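The lateralization measure that drives such classification is typically an index over posterior alpha-band power. A sketch with synthetic signals; the band limits, sensor selection, and the classifier itself are placeholders:

```python
import numpy as np

# Hemispheric alpha lateralization index:
# (contra - ipsi) / (contra + ipsi) alpha-band (8-12 Hz) power.
def alpha_power(signal, fs, band=(8.0, 12.0)):
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].sum()

fs = 250
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(6)
contra = np.sin(2 * np.pi * 10 * t) * 0.5 + rng.normal(0, 1, t.size)  # suppressed alpha
ipsi = np.sin(2 * np.pi * 10 * t) * 1.5 + rng.normal(0, 1, t.size)

p_c, p_i = alpha_power(contra, fs), alpha_power(ipsi, fs)
print(f"lateralization index: {(p_c - p_i) / (p_c + p_i):+.2f}")
```

Attention to one hemifield suppresses alpha in the contralateral hemisphere, so a more negative index here corresponds to the stronger contralateral decrease the study reports for difficult trials.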
Pointing Device Performance in Steering Tasks.
Senanayake, Ransalu; Goonetilleke, Ravindra S
2016-06-01
Use of touch-screen-based interactions is growing rapidly. Hence, knowing the maneuvering efficacy of touch screens relative to other pointing devices is of great importance in the context of graphical user interfaces. Movement time, accuracy, and user preferences of four pointing device settings were evaluated on a computer with 14 participants aged 20.1 ± 3.13 years. It was found that, depending on the difficulty of the task, the optimal settings differ for ballistic and visual control tasks. With a touch screen, resting the arm increased movement time for steering tasks. When both performance and comfort are considered, whether to use a mouse or a touch screen for human-computer interaction depends on the steering difficulty. Hence, an input device should be chosen based on the application and optimized to match the graphical user interface.
McKanna, James A; Pavel, Misha; Jimison, Holly
2010-11-13
Assessment of cognitive functionality is an important aspect of care for elders. Unfortunately, few tools exist to measure divided attention, the ability to allocate attention to different aspects of tasks. An accurate determination of divided attention would allow inference of generalized cognitive decline, as well as providing a quantifiable indicator of an important component of driving skill. We propose a new method for determining relative divided attention ability through unobtrusive monitoring of computer use. Specifically, we measure performance on a dual-task cognitive computer exercise as part of a health coaching intervention. This metric indicates whether the user has the ability to pay attention to both tasks at once, or is primarily attending to one task at a time (sacrificing optimal performance). The monitoring of divided attention in a home environment is a key component of both the early detection of cognitive problems and for assessing the efficacy of coaching interventions.
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. We propose that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
Counterfactual Thinking and Anticipated Emotions Enhance Performance in Computer Skills Training
ERIC Educational Resources Information Center
Chan, Amy Y. C.; Caputi, Peter; Jayasuriya, Rohan; Browne, Jessica L.
2013-01-01
The present study examined the relationship between novice learners' counterfactual thinking (i.e. generating "what if" and "if only" thoughts) about their initial training experience with a computer application and subsequent improvement in task performance. The role of anticipated emotions towards goal attainment in task…
Compressed Continuous Computation v. 12/20/2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorodetsky, Alex
2017-02-17
A library for performing numerical computation with low-rank functions. The Compressed Continuous Computation (C3) library enables continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, and integrating multidimensional functions.
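The core idea, representing a multidimensional function in low-rank (separated) format, can be sketched independently of the library. The snippet below builds a rank-r separated approximation of a bivariate function from an SVD of its samples on a grid; it is a minimal illustration of the concept under that assumption and does not use the C3 library's actual API.

```python
import numpy as np

def low_rank_approx(f, xs, ys, rank):
    """Return a callable approximating f(x, y) as sum_k s_k u_k(x) v_k(y)."""
    A = np.array([[f(x, y) for y in ys] for x in xs])   # sample grid
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]

    def fhat(x, y):
        # interpolate each univariate factor along its own axis
        u = np.array([np.interp(x, xs, U[:, k]) for k in range(rank)])
        v = np.array([np.interp(y, ys, Vt[k, :]) for k in range(rank)])
        return float(np.sum(s * u * v))

    return fhat

xs = ys = np.linspace(-1.0, 1.0, 64)
fhat = low_rank_approx(lambda x, y: np.exp(-x * x - y * y), xs, ys, rank=3)
print(fhat(0.25, -0.5))   # close to exp(-0.25**2 - 0.5**2)
```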
2010-09-01
…application of existing assessment tools that may be applicable to Marine Air Ground Task Force (MAGTF) Command, Control, Communications and Computers (C4)… assessment tools and analysis concepts that may be extended to the Marine Corps' C4 System of Systems assessment methodology as a means to obtain a…
ERIC Educational Resources Information Center
Wang, Shu-Ling; Hwang, Gwo-Jen
2012-01-01
Research has suggested that CSCL environments contain fewer social context clues, resulting in various group processes, performance or motivation. This study thus attempts to explore the relationship among collective efficacy, group processes (i.e. task cohesion, cognitive quality) and collaborative performance in a CSCL environment. A total of 75…
The Effect of Context Change on Simple Acquisition Disappears with Increased Training
ERIC Educational Resources Information Center
Leon, Samuel P.; Abad, Maria J. F.; Rosas, Juan M.
2010-01-01
The goal of this experiment was to assess the impact that experience with a task has on the context specificity of the learning that occurs. Participants performed an instrumental task within a computer game where different responses were performed in the presence of discriminative stimuli to obtain reinforcers. The number of training trials (3,…
ERIC Educational Resources Information Center
West, Patti; Rutstein, Daisy Wise; Mislevy, Robert J.; Liu, Junhui; Choi, Younyoung; Levy, Roy; Crawford, Aaron; DiCerbo, Kristen E.; Chappel, Kristina; Behrens, John T.
2010-01-01
A major issue in the study of learning progressions (LPs) is linking student performance on assessment tasks to the progressions. This report describes the challenges faced in making this linkage using Bayesian networks to model LPs in the field of computer networking. The ideas are illustrated with exemplar Bayesian networks built on Cisco…
Computer-task testing of rhesus monkeys (Macaca mulatta) in the social milieu.
Washburn, D A; Harper, S; Rumbaugh, D M
1994-07-01
Previous research has demonstrated that a behavior and performance testing paradigm, in which rhesus monkeys (Macaca mulatta) manipulate a joystick to respond to computer-generated stimuli, provides environmental enrichment and supports the psychological well-being of captive research animals. The present study was designed to determine whether computer-task activity would be affected by pair-housing animals that had previously been tested only in their single-animal home cages. No differences were observed in productivity or performance levels as a function of housing condition, even when the animals were required to "self-identify" prior to performing each trial. The data indicate that cognitive challenge and control are as preferred by the animals as social opportunities, and that, together with comfort/health considerations, each must be addressed for the assurance of psychological well-being.
Computational and fMRI Studies of Visualization
2009-03-31
…spatial thinking in high-level cognition, such as in problem-solving and reasoning. In conjunction with the experimental work, the project developed a… computational modeling system (4CAPS) as well as the development of 4CAPS models for particular tasks. The cognitive level of 4CAPS accounts for… neuroarchitecture to interpret and predict the brain activation in a network of cortical areas that underpin the performance of a visual thinking task. The…
Performing a local barrier operation
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2014-03-04
Performing a local barrier operation with parallel tasks executing on a compute node including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value of the counter, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value.
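A minimal sketch of the counter-based barrier logic the record above describes, assuming a monotonically increasing shared counter that starts at zero and is never reset, so the base value of the current round can be recovered from the present value. The class name and the lock standing in for atomic operations are illustrative choices, not the patented implementation.

```python
import threading

class LocalBarrier:
    """Counter-based local barrier following the scheme sketched above."""

    def __init__(self, total_tasks):
        self.total = total_tasks
        self.counter = 0                  # shared, monotonically increasing
        self.lock = threading.Lock()      # stands in for atomic operations

    def join(self):
        with self.lock:
            present = self.counter
            base = present - (present % self.total)  # value before any task joined
            target = base + self.total               # value once all tasks joined
            self.counter += 1                        # join: "atomic" increment
        while True:                                  # spin until all have joined
            with self.lock:
                if self.counter >= target:
                    return

barrier = LocalBarrier(total_tasks=4)
threads = [threading.Thread(target=barrier.join) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all tasks joined the local barrier")
```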
Wei, Z G; Macwan, A P; Wieringa, P A
1998-06-01
In this paper we quantitatively model the degree of automation (DofA) in supervisory control as a function of the number and nature of tasks to be performed by the operator and the automation. This model uses a task weighting scheme in which weighting factors are obtained from task demand load, task mental load, and task effect on system performance. The computation of DofA is demonstrated using an experimental system. Based on controlled experiments with operators, analyses of the task effect on system performance, the prediction and assessment of task demand load, and the prediction of mental load were performed. Each experiment had a different DofA. The effect of a change in DofA on system performance and mental load was investigated. It was found that system performance became less sensitive to changes in DofA at higher levels of DofA. The experimental data showed that when the operator controlled a partly automated system, perceived mental load could be predicted from the task mental load of each task component, as calculated by analyzing a situation in which all tasks were manually controlled. Actual or potential applications of this research include a methodology to balance and optimize the automation of complex industrial systems.
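One plausible formalization of the weighting scheme described above: each task receives a weight combining its demand load, mental load, and effect on system performance, and DofA is the weighted share of tasks assigned to automation. The additive combination rule below is an assumption; the abstract does not state the exact formula.

```python
def degree_of_automation(tasks):
    """DofA as the automated fraction of total task weight.

    tasks: list of dicts with keys 'demand_load', 'mental_load',
    'performance_effect' (weighting factors) and 'automated' (bool).
    The additive weight is an illustrative assumption.
    """
    def weight(t):
        return t["demand_load"] + t["mental_load"] + t["performance_effect"]
    total = sum(weight(t) for t in tasks)
    automated = sum(weight(t) for t in tasks if t["automated"])
    return automated / total

tasks = [
    {"demand_load": 0.6, "mental_load": 0.8, "performance_effect": 0.7, "automated": True},
    {"demand_load": 0.9, "mental_load": 0.5, "performance_effect": 0.4, "automated": False},
    {"demand_load": 0.3, "mental_load": 0.2, "performance_effect": 0.9, "automated": True},
]
print(degree_of_automation(tasks))   # fraction of total task weight automated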
ERIC Educational Resources Information Center
Ketch, Karen M.; Brodeur, Darlene A.; McGee, Robin
2009-01-01
This study investigated the effects of response rate and attention focusing on performance of ADHD, clinical-control (CRNA) and non-clinical control children in response inhibition tasks. All children completed the "Go-NoGo" task, a computer-based task of attention and impulsivity. Focused attention on this task was manipulated using a priming…
Mental Task Evaluation for Hybrid NIRS-EEG Brain-Computer Interfaces
Gupta, Rishabh; Falk, Tiago H.
2017-01-01
Based on recent electroencephalography (EEG) and near-infrared spectroscopy (NIRS) studies that showed that tasks such as motor imagery and mental arithmetic induce specific neural response patterns, we propose a hybrid brain-computer interface (hBCI) paradigm in which EEG and NIRS data are fused to improve binary classification performance. We recorded simultaneous NIRS-EEG data from nine participants performing seven mental tasks (word generation, mental rotation, subtraction, singing and navigation, and motor and face imagery). Classifiers were trained for each possible pair of tasks using (1) EEG features alone, (2) NIRS features alone, and (3) EEG and NIRS features combined, to identify the best task pairs and assess the usefulness of a multimodal approach. The NIRS-EEG approach led to an average increase in peak kappa of 0.03 when using features extracted from one-second windows (equivalent to an increase of 1.5% in classification accuracy for balanced classes). The increase was much stronger (0.20, corresponding to a 10% accuracy increase) when focusing on time windows of high NIRS performance. The EEG and NIRS analyses further unveiled relevant brain regions and important feature types. This work provides a basis for future NIRS-EEG hBCI studies aiming to improve classification performance toward more efficient and flexible BCIs. PMID:29181021
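The fusion scheme described, training one classifier per task pair on EEG features, NIRS features, or both concatenated, can be sketched as follows. The feature dimensions, the LDA classifier, and the random placeholder data are assumptions for illustration; the study's actual features and classifier may differ.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 120
eeg = rng.normal(size=(n_trials, 32))    # e.g., band-power features per trial
nirs = rng.normal(size=(n_trials, 16))   # e.g., HbO/HbR slope features
y = rng.integers(0, 2, size=n_trials)    # labels for one task pair (A vs. B)

# Compare the three feature sets with the same classifier and cross-validation.
for name, X in [("EEG", eeg), ("NIRS", nirs),
                ("EEG+NIRS", np.hstack([eeg, nirs]))]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```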
Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.
Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos
2018-03-25
New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perceptions. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing image inherent errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid. PMID:29587392
Computer Menu Task Performance Model Development
1990-11-01
…effect that all three of these factors have on menu task performance. Results showed that all three factors significantly influenced menu search and… applications. The work was sponsored by the AFHRL Operations Training Division (AFHRL/OT) and performed under Work Unit 1123-34-02, User/System Interface… capabilities effectively are often either not available or configured in a manner that is difficult to use. These findings provided the genesis for the work…
A comparison of symptoms after viewing text on a computer screen and hardcopy.
Chu, Christina; Rosenfield, Mark; Portello, Joan K; Benzoni, Jaclyn A; Collier, Juanita D
2011-01-01
Computer vision syndrome (CVS) is a complex of eye and vision problems experienced during or related to computer use. Ocular symptoms may include asthenopia, accommodative and vergence difficulties, and dry eye. CVS occurs in up to 90% of computer workers, and given the almost universal use of these devices, it is important to identify whether these symptoms are specific to computer operation or are simply a manifestation of performing a sustained near-vision task. This study compared ocular symptoms immediately following a sustained near task. Thirty young, visually normal subjects read text aloud either from a desktop computer screen or a printed hardcopy page at a viewing distance of 50 cm for a continuous 20 min period. Identical text was used in the two sessions, matched for size and contrast. Target viewing angle and luminance were similar for the two conditions. Immediately following completion of the reading task, subjects completed a written questionnaire asking about their level of ocular discomfort during the task. When comparing the computer and hardcopy conditions, significant differences in median symptom scores were reported with regard to blurred vision during the task (t = 147.0; p = 0.03) and the mean symptom score (t = 102.5; p = 0.04). In both cases, symptoms were higher during computer use. Symptoms following sustained computer use were significantly worse than those reported after hardcopy fixation under similar viewing conditions. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will allow practitioners to optimize visual comfort and efficiency during computer operation.
Force-reflection and shared compliant control in operating telemanipulators with time delay
NASA Technical Reports Server (NTRS)
Kim, Won S.; Hannaford, Blake; Bejczy, Antal K.
1992-01-01
The performance of an advanced telemanipulation system in the presence of a wide range of time delays between a master control station and a slave robot is quantified. The contemplated applications include multiple satellite links to LEO, geosynchronous operation, spacecraft local area networks, and general-purpose computer-based short-distance designs. The results of high-precision peg-in-hole tasks performed by six test operators indicate that task performance decreased linearly with introduced time delays for both kinesthetic force feedback (KFF) and shared compliant control (SCC). The rate of this decrease was substantially improved with SCC compared to KFF. Task performance at delays above 1 s was not possible using KFF. SCC enabled task performance for such delays, which are realistic values for ground-controlled remote manipulation of telerobots in space.
Numerical observer for atherosclerotic plaque classification in spectral computed tomography
Lorsakul, Auranuch; Fakhri, Georges El; Worstell, William; Ouyang, Jinsong; Rakvongthai, Yothin; Laine, Andrew F.; Li, Quanzheng
2016-01-01
Spectral computed tomography (SCT) generates better image quality than conventional computed tomography (CT). It overcomes several limitations for imaging atherosclerotic plaque. However, the literature evaluating the performance of SCT based on objective image assessment is very limited for the task of discriminating plaques. We developed a numerical-observer method and used it to assess performance on discriminating vulnerable-plaque features, and we compared the performance among multienergy CT (MECT), dual-energy CT (DECT), and conventional CT methods. Our numerical observer was designed to incorporate all spectral information and comprised two processing stages. First, each energy-window domain was preprocessed by a set of localized channelized Hotelling observers (CHO). In this step, the spectral image in each energy bin was decorrelated using localized prewhitening and matched filtering with a set of Laguerre–Gaussian channel functions. Second, the series of intermediate scores computed from all the CHOs was integrated by a Hotelling observer with an additional prewhitening and matched filter. The overall signal-to-noise ratio (SNR) and the area under the receiver operating characteristic curve (AUC) were obtained, yielding an overall discrimination performance metric. The performance of our new observer was evaluated for the particular binary classification task of differentiating between alternative plaque characterizations in carotid arteries. A clinically realistic model of signal variability was also included in our simulation of the discrimination tasks. The inclusion of signal variation is key to applying the proposed observer method to spectral CT data. Hence, the task-based approaches based on the signal-known-exactly/background-known-exactly (SKE/BKE) framework and the clinically relevant signal-known-statistically/background-known-exactly (SKS/BKE) framework were applied for analytical computation of figures of merit (FOM). Simulated data of a carotid-atherosclerosis patient were used to validate our methods. We used an extended cardiac-torso anthropomorphic digital phantom and three simulated plaque types (i.e., calcified plaque, fatty-mixed plaque, and iodine-mixed blood). The images were reconstructed using a standard filtered backprojection (FBP) algorithm for all the acquisition methods and were applied to perform two different discrimination tasks: (1) calcified plaque versus fatty-mixed plaque and (2) calcified plaque versus iodine-mixed blood. MECT outperformed DECT and conventional CT systems for all cases of the SKE/BKE and SKS/BKE tasks (all p<0.01). Averaged over signal variability, MECT yielded SNR improvements over the other acquisition methods in the range of 46.8% to 65.3% (all p<0.01) for FBP-Ramp images and 53.2% to 67.7% (all p<0.01) for FBP-Hanning images for both identification tasks. This proposed numerical observer combined with our signal variability framework is promising for assessing material characterization obtained through the additional energy-dependent attenuation information of SCT. These methods can be further extended to other clinical tasks such as kidney or urinary stone identification applications. PMID:27429999
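The first-stage building block named above, a channelized Hotelling observer, has a standard closed form that can be sketched directly: project images onto channels, estimate the class-averaged channel covariance, and form the Hotelling template. The channel matrix and toy data below are placeholders; the paper's localized, multi-energy-window arrangement is not reproduced.

```python
import numpy as np

def cho_snr(class1_imgs, class0_imgs, channels):
    """Channelized Hotelling observer SNR for a binary discrimination task.

    class1_imgs, class0_imgs: (n_images, n_pixels) arrays for the two classes.
    channels: (n_pixels, n_channels) matrix, e.g. Laguerre-Gauss channels.
    """
    v1 = class1_imgs @ channels                 # channel outputs, class 1
    v0 = class0_imgs @ channels                 # channel outputs, class 0
    dv = v1.mean(axis=0) - v0.mean(axis=0)      # mean difference in channel space
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.solve(S, dv)                  # Hotelling template (prewhitened)
    return float(np.sqrt(dv @ w))               # observer SNR

# Toy demonstration with random "images" and random channels.
rng = np.random.default_rng(0)
U = rng.normal(size=(1024, 8))                  # stand-in for LG channels
c1 = rng.normal(1.0, 1.0, size=(200, 1024))    # class with signal present
c0 = rng.normal(0.0, 1.0, size=(200, 1024))    # class without signal
print(cho_snr(c1, c0, U))
```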
Characterizing quantum supremacy in near-term devices
NASA Astrophysics Data System (ADS)
Boixo, Sergio; Isakov, Sergei V.; Smelyanskiy, Vadim N.; Babbush, Ryan; Ding, Nan; Jiang, Zhang; Bremner, Michael J.; Martinis, John M.; Neven, Hartmut
2018-06-01
A critical question for quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of supercomputers. Such a demonstration of what is referred to as quantum supremacy requires a reliable evaluation of the resources required to solve tasks with classical approaches. Here, we propose the task of sampling from the output distribution of random quantum circuits as a demonstration of quantum supremacy. We extend previous results in computational complexity to argue that this sampling task must take exponential time in a classical computer. We introduce cross-entropy benchmarking to obtain the experimental fidelity of complex multiqubit dynamics. This can be estimated and extrapolated to give a success metric for a quantum supremacy demonstration. We study the computational cost of relevant classical algorithms and conclude that quantum supremacy can be achieved with circuits in a two-dimensional lattice of 7 × 7 qubits and around 40 clock cycles. This requires an error rate of around 0.5% for two-qubit gates (0.05% for one-qubit gates), and it would demonstrate the basic building blocks for a fault-tolerant quantum computer.
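A sketch of the cross-entropy fidelity estimate referred to above, using the logarithmic form f ≈ log(2^n) + γ + ⟨log p(x_i)⟩ over the ideal circuit probabilities of the sampled bitstrings, where γ is the Euler-Mascheroni constant and a Porter-Thomas-shaped ideal output distribution is assumed. Whether this matches the paper's normalization exactly is an assumption.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def xeb_fidelity(sampled_probs, n_qubits):
    """Cross-entropy benchmarking fidelity estimate.

    sampled_probs: ideal circuit probabilities p(x_i) of the bitstrings x_i
    actually produced by the device. Roughly 1 for a noiseless sampler and
    roughly 0 for a uniformly random (fully depolarized) one.
    """
    return np.log(2 ** n_qubits) + EULER_GAMMA + np.mean(np.log(sampled_probs))

# Toy check using a Porter-Thomas-like ideal distribution.
rng = np.random.default_rng(0)
n_qubits = 10
p = rng.exponential(1 / 2 ** n_qubits, size=2 ** n_qubits)
p /= p.sum()
samples = rng.choice(p, size=20000, p=p)                   # "perfect device"
print(xeb_fidelity(samples, n_qubits))                     # close to 1
print(xeb_fidelity(rng.choice(p, size=20000), n_qubits))   # uniform noise: ~0
```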
Application of a fast skyline computation algorithm for serendipitous searching problems
NASA Astrophysics Data System (ADS)
Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary
2018-02-01
Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto-optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information on non-skyline entries must be stored, since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted-tree structure. JR-tree delays extending the tree to deep levels to accelerate tree construction and traversal. In this study, we presented the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
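For reference, the basic (static) skyline computation that the JR-tree accelerates can be stated in a few lines. The naive O(n²) version below keeps every entry not dominated by another, assuming smaller is better in each attribute (a convention choice).

```python
def skyline(points):
    """Naive skyline: keep points not dominated by any other point.

    p dominates q if p is no worse in every attribute and strictly
    better in at least one (smaller values assumed better).
    """
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [q for q in points
            if not any(dominates(p, q) for p in points if p != q)]

print(skyline([(1, 3), (2, 2), (3, 1), (2, 3), (3, 3)]))
# -> [(1, 3), (2, 2), (3, 1)]; (2, 3) and (3, 3) are dominated by (2, 2)
```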
Managing CMC-Based Task through Text-Based Dialogue: An Exploratory Study in a Chinese EFL Context
ERIC Educational Resources Information Center
Yu, Lianfen; Zeng, Gang
2011-01-01
This paper examines EFL learners' dialogic interaction in the implementation of a computer-mediated communication (CMC) task. Within the framework of sociocultural theory, the research focuses on how learners working in pairs collaboratively perform task management and build relationships in the synchronous CMC context. Sixteen Chinese tertiary EFL…
ERIC Educational Resources Information Center
Kukkonen, Jari; Dillon, Patrick; Kärkkäinen, Sirpa; Hartikainen-Ahia, Anu; Keinonen, Tuula
2016-01-01
Scaffolding helps the novice to accomplish a task goal or solve a problem that otherwise would be beyond unassisted efforts. Scaffolding firstly aims to support the learner in accomplishing the task and secondly in learning from the task and improving future performance. This study has examined pre-service teachers' experiences of…
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
The Effect of Task Duration on Event-Based Prospective Memory: A Multinomial Modeling Approach
Zhang, Hongxia; Tang, Weihai; Liu, Xiping
2017-01-01
Remembering to perform an action when a specific event occurs is referred to as Event-Based Prospective Memory (EBPM). This study investigated how EBPM performance is affected by task duration by having university students (n = 223) perform an EBPM task that was embedded within an ongoing computer-based color-matching task. For this experiment, we separated the overall task's duration into the filler task duration and the ongoing task duration. The filler task duration is the length of time between the intention and the beginning of the ongoing task, and the ongoing task duration is the length of time between the beginning of the ongoing task and the appearance of the first Prospective Memory (PM) cue. The filler task duration and ongoing task duration were each divided into three levels (3, 6, and 9 min), and the two factors were orthogonally manipulated between subjects. A multinomial processing tree model was used to separate the effects of the different task durations on the two EBPM components, and a mediation model was created to verify whether task duration influences EBPM via self-reminding or discrimination. The results reveal three points. (1) Lengthening the duration of the ongoing task had a negative effect on EBPM performance, while lengthening the duration of the filler task had no significant effect on it. (2) As the filler task was lengthened, both the prospective and retrospective components showed a decreasing and then increasing trend; when the ongoing task duration was lengthened, the prospective component decreased while the retrospective component significantly increased. (3) The mediating effect of discrimination between task duration and EBPM performance was significant. We concluded that different task durations influence EBPM performance through different components, with discrimination being the mediator between task duration and EBPM performance. PMID:29163277
Attention and P300-based BCI performance in people with amyotrophic lateral sclerosis
Riccio, Angela; Simione, Luca; Schettini, Francesca; Pizzimenti, Alessia; Inghilleri, Maurizio; Belardinelli, Marta Olivetti; Mattia, Donatella; Cincotti, Febo
2013-01-01
The purpose of this study was to investigate the support of attentional and memory processes in controlling a P300-based brain-computer interface (BCI) in people with amyotrophic lateral sclerosis (ALS). Eight people with ALS performed two behavioral tasks: (i) a rapid serial visual presentation (RSVP) task, screening the temporal filtering capacity and the speed of the update of the attentive filter, and (ii) a change detection task, screening the memory capacity and the spatial filtering capacity. The participants were also asked to perform a P300-based BCI spelling task. By using correlation and regression analyses, we found that only the temporal filtering capacity in the RSVP task was a predictor of both the P300-based BCI accuracy and of the amplitude of the P300 elicited performing the BCI task. We concluded that the ability to keep the attentional filter active during the selection of a target influences performance in BCI control. PMID:24282396
NASA Technical Reports Server (NTRS)
Hu, Chaumin
2007-01-01
IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
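The execution model described, tasks whose starting conditions are expressions over the states of other tasks, can be sketched with a simple polling runner. The state names, the condition mini-language, and the example tasks below are illustrative assumptions, not the IPG service's interface.

```python
def run_tasks(tasks):
    """Run tasks whose starting conditions depend on other tasks' states.

    tasks: dict name -> (condition, action). condition is a callable taking
    {name: state} and returning True when the task may start; states are
    'pending', 'succeeded', or 'failed'.
    """
    states = {name: "pending" for name in tasks}
    progress = True
    while progress:                         # keep sweeping until nothing starts
        progress = False
        for name, (condition, action) in tasks.items():
            if states[name] == "pending" and condition(states):
                try:
                    action()
                    states[name] = "succeeded"
                except Exception:
                    states[name] = "failed"
                progress = True
    return states

tasks = {
    "fetch": (lambda s: True, lambda: print("fetching data")),
    "compute": (lambda s: s["fetch"] == "succeeded",
                lambda: print("running application")),
    # cleanup runs whether compute succeeds, fails, or is never reached
    "cleanup": (lambda s: s["compute"] in ("succeeded", "failed"),
                lambda: print("cleaning up")),
}
print(run_tasks(tasks))
```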
Rosenthal, Rachel; Geuss, Steffen; Dell-Kuster, Salome; Schäfer, Juliane; Hahnloser, Dieter; Demartines, Nicolas
2011-06-01
In children, video game experience improves spatial performance, a predictor of surgical performance. This study aims at comparing laparoscopic virtual reality (VR) task performance of children with different levels of experience in video games and residents. A total of 32 children (8.4 to 12.1 years), 20 residents, and 14 board-certified surgeons (total n = 66) performed several VR and 2 conventional tasks (cube/spatial and pegboard/fine motor). Performance between the groups was compared (primary outcome). VR performance was correlated with conventional task performance (secondary outcome). Lowest VR performance was found in children with low video game experience, followed by those with high video game experience, residents, and board-certified surgeons. VR performance correlated well with the spatial test and moderately with the fine motor test. The use of computer games can be considered not only as pure entertainment but may also contribute to the development of skills relevant for adequate performance in VR laparoscopic tasks. Spatial skills are relevant for VR laparoscopic task performance.
AstroGrid: Taverna in the Virtual Observatory.
NASA Astrophysics Data System (ADS)
Benson, K. M.; Walton, N. A.
This paper reports on the implementation of the Taverna workbench by AstroGrid, a tool for designing and executing workflows of tasks in the Virtual Observatory. The workflow approach helps astronomers perform complex task sequences with little technical effort. The visual approach to workflow construction streamlines highly complex analyses over public and private data, using computational resources as minimal as a desktop computer. Some integration issues and future work are discussed in this article.
Flow of a Gas Turbine Engine Low-Pressure Subsystem Simulated
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1997-01-01
The NASA Lewis Research Center is managing a task to numerically simulate overnight, on a parallel computing testbed, the aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The model solves the three-dimensional Navier- Stokes flow equations through all the components within the LPS, as well as the external flow around the engine nacelle. The LPS modeling task is being performed by Allison Engine Company under the Small Engine Technology contract. The large computer simulation was evaluated on networked computer systems using 8, 16, and 32 processors, with the parallel computing efficiency reaching 75 percent when 16 processors were used.
The Effects of a Goal Setting Intervention on Productivity and Persistence in an Analogue Work Task
ERIC Educational Resources Information Center
Tammemagi, Triona; O'Hora, Denis; Maglieri, Kristen A.
2013-01-01
The authors of this study sought to quantify the beneficial effect of goal setting on work performance, and to characterize the persistence or deterioration of goal-directed behavior over time. Twenty-six participants completed a computer-based data entry task. Performance was measured during an initial baseline, a goal setting intervention that…
ERIC Educational Resources Information Center
Pennsylvania Blue Shield, Camp Hill.
A project developed a model curriculum to be delivered by computer-based instruction to teach the required literacy skills for entry workers in the health insurance industry. Literacy task analyses were performed for the targeted jobs and then validated with focus groups. The job tasks and related basic skills were divided into modules. The job…
VM Capacity-Aware Scheduling within Budget Constraints in IaaS Clouds
Thanasias, Vasileios; Lee, Choonhwa; Hanif, Muhammad; Kim, Eunsam; Helal, Sumi
2016-01-01
Recently, cloud computing has drawn significant attention from both industry and academia, bringing unprecedented changes to computing and information technology. The infrastructure-as-a-Service (IaaS) model offers new abilities such as the elastic provisioning and relinquishing of computing resources in response to workload fluctuations. However, because the demand for resources dynamically changes over time, the provisioning of resources in a way that a given budget is efficiently utilized while maintaining a sufficing performance remains a key challenge. This paper addresses the problem of task scheduling and resource provisioning for a set of tasks running on IaaS clouds; it presents novel provisioning and scheduling algorithms capable of executing tasks within a given budget, while minimizing the slowdown due to the budget constraint. Our simulation study demonstrates a substantial reduction up to 70% in the overall task slowdown rate by the proposed algorithms. PMID:27501046
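To make the problem setting concrete, here is a toy greedy heuristic in the spirit of budget-constrained provisioning and scheduling: rent the fastest instances the budget allows, then place the longest tasks on the earliest-available VM. This illustrates the trade-off the paper studies; it is not the paper's algorithm, and the flat per-VM pricing model is an assumption.

```python
def schedule(tasks, vm_types, budget):
    """tasks: list of work amounts; vm_types: (speed, price) pairs.

    Returns (work, finish_time) pairs; larger budgets admit faster VMs
    and therefore lower overall slowdown.
    """
    vms = []                                            # [speed, busy_until]
    for speed, price in sorted(vm_types, reverse=True): # fastest types first
        if budget >= price:                             # rent one if affordable
            vms.append([speed, 0.0])
            budget -= price
    if not vms:
        raise ValueError("budget too small to provision any VM")
    plan = []
    for work in sorted(tasks, reverse=True):            # longest-task-first
        vm = min(vms, key=lambda v: v[1])               # earliest-available VM
        vm[1] += work / vm[0]                           # finish time on that VM
        plan.append((work, vm[1]))
    return plan

print(schedule(tasks=[4, 2, 7, 1], vm_types=[(2.0, 3.0), (1.0, 1.0)], budget=5.0))
```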
Computational models of the Posner simple and choice reaction time tasks
Feher da Silva, Carolina; Baldo, Marcus V. C.
2015-01-01
The landmark experiments by Posner in the late 1970s have shown that reaction time (RT) is faster when the stimulus appears in an expected location, as indicated by a cue; since then, the so-called Posner task has been considered a “gold standard” test of spatial attention. It is thus fundamental to understand the neural mechanisms involved in performing it. To this end, we have developed a Bayesian detection system and small integrate-and-fire neural networks, which modeled sensory and motor circuits, respectively, and optimized them to perform the Posner task under different cue type proportions and noise levels. In doing so, main findings of experimental research on RT were replicated: the relative frequency effect, suboptimal RTs and significant error rates due to noise and invalid cues, slower RT for choice RT tasks than for simple RT tasks, fastest RTs for valid cues and slowest RTs for invalid cues. Analysis of the optimized systems revealed that the employed mechanisms were consistent with related findings in neurophysiology. Our models predict that (1) the results of a Posner task may be affected by the relative frequency of valid and neutral trials, (2) in simple RT tasks, input from multiple locations are added together to compose a stronger signal, and (3) the cue affects motor circuits more strongly in choice RT tasks than in simple RT tasks. In discussing the computational demands of the Posner task, attention has often been described as a filter that protects the nervous system, whose capacity is limited, from information overload. Our models, however, reveal that the main problems that must be overcome to perform the Posner task effectively are distinguishing signal from external noise and selecting the appropriate response in the presence of internal noise. PMID:26190997
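A minimal noisy-accumulator illustration of the cueing effects such models reproduce: a valid cue biases the starting evidence toward the response threshold, an invalid cue biases it away, yielding the valid < neutral < invalid RT ordering. All parameters are arbitrary illustrations, not fitted values from the paper's Bayesian detection system or integrate-and-fire networks.

```python
import numpy as np

def simulate_rt(cue, rng, drift=0.1, noise=1.0, threshold=30.0):
    """Steps until noisy evidence reaches threshold; cue shifts the start."""
    bias = {"valid": 10.0, "neutral": 0.0, "invalid": -10.0}[cue]
    evidence, t = bias, 0
    while evidence < threshold:
        evidence += drift + noise * rng.normal()   # noisy accumulation
        t += 1
    return t

rng = np.random.default_rng(1)
for cue in ("valid", "neutral", "invalid"):
    rts = [simulate_rt(cue, rng) for _ in range(500)]
    print(cue, round(float(np.mean(rts)), 1))      # valid fastest, invalid slowest
```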
All-optical reservoir computing.
Duport, François; Schneider, Bendix; Smerieri, Anteo; Haelterman, Marc; Massar, Serge
2012-09-24
Reservoir Computing is a novel computing paradigm that uses a nonlinear recurrent dynamical system to carry out information processing. Recent electronic and optoelectronic Reservoir Computers based on an architecture with a single nonlinear node and a delay loop have shown performance on standardized tasks comparable to state-of-the-art digital implementations. Here we report an all-optical implementation of a Reservoir Computer, made of off-the-shelf components for optical telecommunications. It uses the saturation of a semiconductor optical amplifier as nonlinearity. The present work shows that, within the Reservoir Computing paradigm, all-optical computing with state-of-the-art performance is possible.
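The paradigm described, a fixed nonlinear dynamical system plus a trained linear readout, can be demonstrated in a few lines with a conventional echo state network on a toy delay task. The optical reservoir (delay loop plus semiconductor optical amplifier) is emulated here by a random recurrent tanh network; all sizes and the task are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(42)
n_res, T, delay = 100, 2000, 3
Win = rng.uniform(-0.5, 0.5, size=n_res)         # input weights (fixed)
W = rng.normal(size=(n_res, n_res))              # recurrent weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

u = rng.uniform(-1, 1, size=T)                   # input signal
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + Win * u[t])              # reservoir update
    states[t] = x

y = np.roll(u, delay)                            # target: input delayed by 3
keep = slice(100, None)                          # discard warm-up transient
Wout, *_ = np.linalg.lstsq(states[keep], y[keep], rcond=None)  # train readout
pred = states[keep] @ Wout
print("NRMSE:", float(np.sqrt(np.mean((pred - y[keep]) ** 2) / np.var(y[keep]))))
```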
High-Performance Data Analysis Tools for Sun-Earth Connection Missions
NASA Technical Reports Server (NTRS)
Messmer, Peter
2011-01-01
The data analysis tool of choice for many Sun-Earth Connection missions is the Interactive Data Language (IDL) by ITT VIS. The increasing amount of data produced by these missions and the increasing complexity of image processing algorithms require access to higher computing power. Parallel computing is a cost-effective way to increase the speed of computation, but algorithms oftentimes have to be modified to take advantage of parallel systems. Enhancing IDL to work on clusters gives scientists access to increased performance in a familiar programming environment. The goal of this project was to enable IDL applications to benefit from both computing clusters and graphics processing units (GPUs) for accelerating data analysis tasks. The tool suite developed in this project enables scientists to solve demanding data analysis problems in IDL that previously required specialized software, and it allows those problems to be solved orders of magnitude faster than on conventional PCs. The tool suite consists of three components: (1) TaskDL, a software tool that simplifies the creation and management of task farms, collections of tasks that can be processed independently and require only small amounts of data communication; (2) mpiDL, a tool that allows IDL developers to use the Message Passing Interface (MPI) inside IDL for problems that require large amounts of data to be exchanged among multiple processors; and (3) GPULib, a tool that simplifies the use of GPUs as mathematical coprocessors from within IDL. mpiDL is unique in its support for the full MPI standard and for a broad range of MPI implementations. GPULib is unique in enabling users to take advantage of an inexpensive piece of hardware, possibly already installed in their computer, and achieve orders of magnitude faster execution times for numerically complex algorithms. TaskDL enables the simple setup and management of task farms on compute clusters. The products developed in this project can interact, so one can build a cluster of PCs, each equipped with a GPU, and use mpiDL to communicate between the nodes and GPULib to accelerate the computations on each node.
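The task-farm pattern that TaskDL manages, many independent tasks with little or no inter-task communication, is shown below in Python for illustration (the tools themselves target IDL). The worker function is a hypothetical stand-in for a per-frame analysis routine.

```python
from multiprocessing import Pool

def process_frame(frame_id):
    # placeholder per-task computation (e.g., calibrate and analyze one image)
    return frame_id, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    # Distribute independent tasks across workers; results arrive as completed.
    with Pool(processes=4) as pool:
        for frame_id, result in pool.imap_unordered(process_frame, range(16)):
            print(f"frame {frame_id}: {result}")
```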
Barber, Larissa K; Smit, Brandon W
2014-01-01
This study replicated ego-depletion predictions from the self-control literature in a computer simulation task that requires ongoing decision-making in relation to constantly changing environmental information: the Network Fire Chief (NFC). Ego-depletion led to decreased self-regulatory effort, but not performance, on the NFC task. These effects were also buffered by task enjoyment, so that individuals who enjoyed the dynamic decision-making task did not experience ego-depletion effects. These findings confirm that past ego-depletion effects on decision-making are not limited to static or isolated decision-making tasks and can be extended to the dynamic decision-making processes more common in naturalistic settings. Furthermore, the NFC simulation provides a methodological mechanism for independently measuring effort and performance when studying ego-depletion.
Wieringa, Fokko P.; Bouma, Henri; Eendebak, Pieter T.; van Basten, Jean-Paul A.; Beerlage, Harrie P.; Smits, Geert A. H. J.; Bos, Jelte E.
2014-01-01
In comparison to open surgery, endoscopic surgery suffers from impaired depth perception and a narrower field of view. To improve depth perception, the Da Vinci robot offers three-dimensional (3-D) video on the console for the surgeon but not for assistants, although both must collaborate. We improved the shared perception of the whole surgical team by connecting live 3-D monitors to all three available Da Vinci generations, probed user experience after two years by questionnaire, and compared time measurements of a predefined complex interaction task performed with a 3-D monitor versus a two-dimensional one. Additionally, we investigated whether the complex mental task of reconstructing a 3-D overview from an endoscopic video can be performed by a computer and shared among users. During the study, 925 robot-assisted laparoscopic procedures were performed in three hospitals, including prostatectomies, cystectomies, and nephrectomies. Thirty-one users participated in our questionnaire. Eighty-four percent preferred 3-D monitors and 100% reported spatial-perception improvement. All participating urologists indicated quicker performance of tasks requiring delicate collaboration (e.g., clip placement) when assistants used 3-D monitors. Eighteen users participated in a timing experiment during a delicate cooperation task in vitro. Teamwork was significantly (40%) faster with the 3-D monitor. Computer-generated 3-D reconstructions from recordings offered very wide interactive panoramas with educational value, although the present embodiment is vulnerable to movement artifacts. PMID:26158026
Allen, Michael Todd; Jameson, Molly M; Myers, Catherine E
2017-01-01
Personality factors such as behavioral inhibition (BI), a temperamental tendency for avoidance in the face of unfamiliar situations, have been identified as risk factors for anxiety disorders. Personality factors are generally identified through self-report inventories. However, this tendency to avoid may affect the accuracy of these self-report inventories. Previously, a computer-based task was developed in which the participant guides an on-screen "avatar" through a series of onscreen events; performance on the task could accurately predict participants' BI, measured by a standard paper-and-pencil questionnaire (Adult Measure of Behavioral Inhibition, or AMBI). Here, we sought to replicate this finding as well as compare performance on the avatar task to another measure related to BI, the harm avoidance (HA) scale of the Tridimensional Personality Questionnaire (TPQ). The TPQ includes HA scales as well as scales assessing reward dependence (RD), novelty seeking (NS), and persistence. One hundred and one undergraduates voluntarily completed the avatar task and the paper-and-pencil inventories in a counterbalanced order. Scores on the avatar task were strongly correlated with BI assessed via the AMBI questionnaire, which replicates prior findings. Females exhibited higher HA scores than males but did not differ in scores on the avatar task. There was a strong positive relationship between scores on the avatar task and HA scores. One aspect of HA, fear of uncertainty, was found to moderately mediate the relationship between AMBI scores and avatar scores. NS had a strong negative relationship with scores on the avatar task, but there was no significant relationship between RD and scores on the avatar task. These findings indicate the effectiveness of the avatar task as a behavioral alternative to self-report measures for assessing avoidance. In addition, computer-based behavioral tasks are a viable alternative to paper-and-pencil self-report inventories, particularly when assessing anxiety and avoidance.
Shin, Joon-Ho; Park, Gyulee; Cho, Duk Youn
2017-04-01
Objective: To explore motor performance on 2 different cognitive tasks during robotic rehabilitation in which motor performance was longitudinally assessed. Design: Prospective study. Setting: Rehabilitation hospital. Participants: Patients (N=22) with chronic stroke and upper extremity impairment. Intervention: A total of 640 repetitions of robot-assisted planar reaching, 5 times a week for 4 weeks. Main Outcome Measures: Longitudinal robotic evaluations of motor performance, including smoothness, mean velocity, path error, and reach error, by type of cognitive task. Dual-task effects (DTEs) of motor performance were computed to analyze the effect of the cognitive task on dual-task interference. Results: Cognitive task type influenced smoothness (P=.006), the DTEs of smoothness (P=.002), and the DTEs of reach error (P=.052). Robotic rehabilitation improved smoothness (P=.007) and reach error (P=.078), while stroke severity affected smoothness (P=.01), reach error (P<.001), and path error (P=.01). Neither robotic rehabilitation nor severity affected the DTEs of motor performance. Conclusions: The results provide evidence for the effect of cognitive-motor interference on upper extremity performance among participants with stroke using a robot-guided rehabilitation system.
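The dual-task effect computation named in the outcome measures is conventionally expressed as the percent change from single- to dual-task performance; the sketch below uses that standard convention, though whether the study used exactly this sign convention is an assumption.

```python
def dual_task_effect(single, dual, higher_is_better=True):
    """Percent change in performance from single- to dual-task conditions.

    Negative values indicate a dual-task cost; the higher_is_better flag
    flips the sign for measures where larger raw values mean worse
    performance (e.g., path error).
    """
    change = (dual - single) / single * 100.0
    return change if higher_is_better else -change

# e.g., movement smoothness 0.42 alone vs. 0.35 while doing a cognitive task
print(dual_task_effect(0.42, 0.35))   # about -16.7: a dual-task cost
```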
Blinks, saccades, and fixation pauses during vigilance task performance. I., Time on task.
DOT National Transportation Integrated Search
1994-12-01
In the future, operators of complex equipment will spend more time monitoring computer controlled devices rather than having hands on control of such equipment. The operator intervenes in system operation under "unusual" conditions or when there is a...
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.
2011-01-01
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
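For reference, the normative single-cue weighting rule the abstract tests against, weights proportional to reliability (inverse variance), takes a few lines; the numbers below are illustrative.

```python
def combine_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted average of auditory and visual estimates.

    Each cue's weight is its inverse variance normalized over both cues;
    the combined estimate has lower variance than either cue alone.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    mu = w_a * mu_a + (1 - w_a) * mu_v
    var = 1 / (1 / var_a + 1 / var_v)
    return mu, var

mu, var = combine_cues(mu_a=1.0, var_a=4.0, mu_v=2.0, var_v=1.0)
print(mu, var)   # 1.8, 0.8: dominated by the more reliable visual cue
```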
Park, Subok; Clarkson, Eric
2010-01-01
The Bayesian ideal observer is optimal among all observers and sets an absolute upper bound for the performance of any observer in classification tasks [Van Trees, Detection, Estimation, and Modulation Theory, Part I (Academic, 1968)]. Therefore, the ideal observer should be used for objective image quality assessment whenever possible. However, computation of ideal-observer performance is difficult in practice because this observer requires the full description of unknown, statistical properties of high-dimensional, complex data arising in real-life problems. Previously, Markov-chain Monte Carlo (MCMC) methods were developed by Kupinski et al. [J. Opt. Soc. Am. A 20, 430 (2003)] and by Park et al. [J. Opt. Soc. Am. A 24, B136 (2007) and IEEE Trans. Med. Imaging 28, 657 (2009)] to estimate the performance of the ideal observer and the channelized ideal observer (CIO), respectively, in classification tasks involving non-Gaussian random backgrounds. However, both algorithms had the disadvantage of long computation times. We propose a fast MCMC method for real-time estimation of the likelihood ratio for the CIO. Our simulation results show that our method has the potential to speed up the estimation of ideal-observer performance in tasks involving complex data when efficient channels are used for the CIO. PMID:19884916
Arntzen, Erik; Halstadtro, Lill-Beathe; Halstadtro, Monica
2009-01-01
The purpose of the study was to extend the literature on verbal self-regulation by using the "silent dog" method to evaluate the role of verbal regulation over nonverbal behavior in 2 individuals with autism. Participants were required to talk aloud while performing functional computer tasks. Then the effects of distracters with increasing demands on target behavior were evaluated, as well as whether self-talk emitted by Participant 1 could be used to alter Participant 2's performance. Results suggest that participants' tasks seemed to be under the control of self-instructions, and the rules generated from Participant 1's self-talk were effective in teaching computer skills to Participant 2. The silent dog method was useful in evaluating the possible role of self-generated rules in teaching computer skills to participants with autism. PMID:22477428
Human factors with nonhumans - Factors that affect computer-task performance
NASA Technical Reports Server (NTRS)
Washburn, David A.
1992-01-01
There are two general strategies that may be employed for 'doing human factors research with nonhuman animals'. First, one may use the methods of traditional human factors investigations to examine the nonhuman animal-to-machine interface. Alternatively, one might use performance by nonhuman animals as a surrogate for or model of performance by a human operator. Each of these approaches is illustrated with data in the present review. Chronic ambient noise was found to have a significant but inconsequential effect on computer-task performance by rhesus monkeys (Macaca mulatta). Additional data supported the generality of findings such as these to humans, showing that rhesus monkeys are appropriate models of human psychomotor performance. It is argued that ultimately the interface between comparative psychology and technology will depend on the coordinated use of both strategies of investigation.
The effects of legacy organization culture on post-merger integration.
Frantz, Terrill L; Carley, Kathleen M
2013-01-01
We explore the relationship between the characteristics of pre-existing organization cultures and post-merger integration dynamics; this study involves examining data produced by computer simulation. Two characteristics of organization culture, its characteristic complexity and its members' propensity to share information, are controlled in computational experiments. To characterize post-merger integration dynamics, we measure the transfer of information of two types: (a) that which is necessary for performing work tasks, and (b) that which underlies the features of a group's culture. The extent to which this information is common across a group is indicative of its task performance and the cultural cohesiveness of its members, which together determine the group's level of performance. We consider cultural knowledge as it pertains both to the entire organization and to the work-team level; oftentimes, these can be dissimilar. We find that cultural complexity and exchange motivation vary in their influence on the diffusion of task and cultural knowledge: the more complex the culture, the longer post-merger integration takes to complete, while task performance simultaneously suffers. However, an organization's inclination to energetically share its culture with another group does not strongly impact the diffusion of cultural or task knowledge; moreover, high levels of task focus in a culture can hinder cultural diffusion, though performance is positively correlated with this characteristic. This study has relevance to post-merger integration research and practice by providing a theoretically grounded, quantitative model useful for estimating the post-merger dynamics of cultural awareness and knowledge diffusion for a specific merger situation.
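The simulation set-up can be pictured with a toy stand-in (not the authors' model): agents hold binary knowledge facts, randomly paired agents exchange a fact with a probability given by the culture's sharing propensity, and integration time is the number of interaction rounds until knowledge is fully diffused. All parameters and names are illustrative.

```python
import random

def rounds_to_integrate(n_agents=20, n_facts=30, share_prob=0.5, seed=1):
    """Toy model: count pairwise-interaction rounds until every agent
    holds every knowledge fact (i.e., full post-merger diffusion)."""
    rng = random.Random(seed)
    # one seed agent knows everything, so full diffusion is reachable;
    # the rest start with a random half of the facts
    agents = [set(range(n_facts))] + [
        set(rng.sample(range(n_facts), n_facts // 2))
        for _ in range(n_agents - 1)]
    rounds = 0
    while any(len(a) < n_facts for a in agents):
        rounds += 1
        i, j = rng.sample(range(n_agents), 2)      # random pairing
        missing = agents[i] - agents[j]
        if missing and rng.random() < share_prob:  # sharing propensity
            agents[j].add(rng.choice(sorted(missing)))
    return rounds

# A "more complex" culture (more facts to diffuse) takes longer to integrate:
print(rounds_to_integrate(n_facts=10), rounds_to_integrate(n_facts=40))
```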
Computer Self-Efficacy, Anxiety, and Learning in Online versus Face to Face Medium
ERIC Educational Resources Information Center
Hauser, Richard; Paul, Ravi; Bradley, John
2012-01-01
The purpose of this research is to examine the relationships between changes to computer self-efficacy (CSE) and computer anxiety and the impact on performance on computer-related tasks in both online and face-to-face mediums. While many studies have looked at these factors individually, relatively few have included multiple measures of these…
Decreased attention to object size information in scale errors performers.
Grzyb, Beata J; Cangelosi, Angelo; Cattani, Allegra; Floccia, Caroline
2017-05-01
Young children sometimes make serious attempts to perform impossible actions on miniature objects as if they were full-size objects. The existing explanations of these curious action errors assume (but have never explicitly tested) children's decreased attention to object size information. This study investigated the attention to object size information in scale errors performers. Two groups of children aged 18-25 months (N=52) and 48-60 months (N=23) were tested in two consecutive tasks: an action task that replicated the original scale errors elicitation situation, and a looking task that involved watching, on a computer screen, actions performed with objects of adequate or inadequate size. Our key finding - that children performing scale errors in the action task subsequently pay less attention to size changes than non-scale errors performers in the looking task - suggests that the origins of scale errors in childhood already operate at the perceptual level, and not at the action level. Copyright © 2017 Elsevier Inc. All rights reserved.
Finding Waldo: Learning about Users from their Interactions.
Brown, Eli T; Ottley, Alvitta; Zhao, Helen; Quan Lin; Souvenir, Richard; Endert, Alex; Chang, Remco
2014-12-01
Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user's interactions with a system reflect a large amount of the user's reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user's task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and apply well-known machine learning algorithms to three encodings of the users' interaction data. We achieve, depending on algorithm and encoding, between 62% and 83% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that using the same techniques, we can infer aspects of the user's personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time: in one case 95% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
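In the spirit of the analysis described above (the paper's exact feature encodings and algorithms are not reproduced here), the fast/slow prediction reduces to an ordinary supervised-learning problem over per-user interaction features. The data below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# X: one row per user, one column per interaction-event count
# (clicks, hovers, pans, ...); y: 1 if the user finished faster than
# the median completion time. Synthetic stand-in data:
X = rng.poisson(5.0, size=(40, 12)).astype(float)
y = (X @ np.linspace(-1.0, 1.0, 12) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level is ~0.5
```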
Self-Associations Influence Task-Performance through Bayesian Inference
Bengtsson, Sara L.; Penny, Will D.
2013-01-01
The way we think about ourselves impacts greatly on our behavior. This paper describes a behavioral study and a computational model that shed new light on this important area. Participants were primed “clever” and “stupid” using a scrambled sentence task, and we measured the effect on response time and error-rate on a rule-association task. First, we observed a confirmation bias effect in that associations to being “stupid” led to a gradual decrease in performance, whereas associations to being “clever” did not. Second, we observed that the activated self-concepts selectively modified attention toward one’s performance. There was an early to late double dissociation in RTs in that primed “clever” resulted in RT increase following error responses, whereas primed “stupid” resulted in RT increase following correct responses. We propose a computational model of subjects’ behavior based on the logic of the experimental task that involves two processes: memory for rules and the integration of rules with subsequent visual cues. The model incorporates an adaptive decision threshold based on Bayes’ rule, whereby decision thresholds are increased if integration was inferred to be faulty. Fitting the computational model to experimental data confirmed our hypothesis that priming affects the memory process. This model explains both the confirmation bias and double dissociation effects and demonstrates that Bayesian inferential principles can be used to study the effect of self-concepts on behavior. PMID:23966937
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
Analysis of film cooling in rocket nozzles
NASA Technical Reports Server (NTRS)
Woodbury, Keith A.
1993-01-01
This report summarizes the findings on the NASA contract NAG8-212, Task No. 3. The overall project consists of three tasks, all of which have been successfully completed. In addition, some supporting supplemental work, not required by the contract, has been performed and is documented herein. Task 1 involved the modification of the wall functions in the code FDNS (Finite Difference Navier-Stokes) to use a Reynolds Analogy-based method. This task was completed in August, 1992. Task 2 involved the verification of the code against experimentally available data. The data chosen for comparison was from an experiment involving the injection of helium from a wall jet. Results obtained in completing this task also show the sensitivity of the FDNS code to unknown conditions at the injection slot. This task was completed in September, 1992. Task 3 required the computation of the flow of hot exhaust gases through the P&W 40K subscale nozzle. Computations were performed both with and without film coolant injection. This task was completed in July, 1993. The FDNS program tends to overpredict heat fluxes, but, with suitable modeling of backside cooling, may give reasonable wall temperature predictions. For film cooling in the P&W 40K calorimeter subscale nozzle, the average wall temperature is reduced from 1750R to about 1050R by the film cooling. The average wall heat flux is reduced by a factor of 3.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
2015-07-14
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Physical Activity Predicts Performance in an Unpracticed Bimanual Coordination Task.
Boisgontier, Matthieu P; Serbruyns, Leen; Swinnen, Stephan P
2017-01-01
Practice of a given physical activity is known to improve the motor skills related to this activity. However, whether unrelated skills are also improved is still unclear. To test the impact of physical activity on an unpracticed motor task, 26 young adults completed the international physical activity questionnaire and performed a bimanual coordination task they had never practiced before. Results showed that higher total physical activity predicted higher performance in the bimanual task, controlling for multiple factors such as age, physical inactivity, music practice, and computer games practice. Linear mixed models allowed this effect of physical activity to be generalized to a large population of bimanual coordination conditions. This finding runs counter to the notion that generalized motor abilities do not exist and supports the existence of a "learning to learn" skill that could be improved through physical activity and that impacts performance in tasks that are not necessarily related to the practiced activity.
NASA Astrophysics Data System (ADS)
Lakey, Chad E.; Berry, Daniel R.; Sellers, Eric W.
2011-04-01
In this study, we examined the effects of a short mindfulness meditation induction (MMI) on the performance of a P300-based brain-computer interface (BCI) task. We expected that MMI would harness present-moment attentional resources, resulting in two positive consequences for P300-based BCI use. Specifically, we believed that MMI would facilitate increases in task accuracy and promote the production of robust P300 amplitudes. Sixteen-channel electroencephalographic data were recorded from 18 subjects using a row/column speller task paradigm. Nine subjects participated in a 6 min MMI and an additional nine subjects served as a control group. Subjects were presented with a 6 × 6 matrix of alphanumeric characters on a computer monitor. Stimuli were flashed at a stimulus onset asynchrony (SOA) of 125 ms. Calibration data were collected on 21 items without providing feedback. These data were used to derive a stepwise linear discriminant analysis classifier that was applied to an additional 14 items to evaluate accuracy. Offline performance analyses revealed that MMI subjects were significantly more accurate than control subjects. Likewise, MMI subjects produced significantly larger P300 amplitudes than control subjects at Cz and PO7. The discussion focuses on the potential attentional benefits of MMI for P300-based BCI performance.
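For readers unfamiliar with the row/column paradigm, the final decoding step reduces to something like the following sketch (illustrative, not the study's SWLDA pipeline): each row and column accumulates a classifier score over its flashes, and the selected character is the intersection of the best-scoring row and column.

```python
import numpy as np

def decode_character(row_scores, col_scores, matrix):
    """Pick the row/column with the largest accumulated P300-classifier
    score and read out the character at their intersection."""
    r = int(np.argmax(row_scores))
    c = int(np.argmax(col_scores))
    return matrix[r][c]

matrix = [list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
          list("STUVWX"), list("YZ1234"), list("56789_")]
print(decode_character([0.1, 2.3, 0.4, 0.2, 0.9, 0.0],
                       [1.1, 0.2, 3.0, 0.4, 0.3, 0.5], matrix))  # -> "I"
```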
ERIC Educational Resources Information Center
Salden, Ron J.C.M.; Paas, Fred; Broers, Nick J.; van Merrienboer, Jeroen J. G.
2004-01-01
The differential effects of four task selection methods on training efficiency and transfer in computer-based training for Air Traffic Control were investigated. A non-dynamic condition, in which the learning tasks were presented to the participants in a fixed, predetermined sequence, was compared to three dynamic conditions, in which learning…
A neuronal model of a global workspace in effortful cognitive tasks.
Dehaene, S; Kerszberg, M; Changeux, J P
1998-11-24
A minimal hypothesis is proposed concerning the brain processes underlying effortful tasks. It distinguishes two main computational spaces: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative, and attentional processors. Workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice. They selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously coactivated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows workspace activation to increase during acquisition of a novel task, effortful execution, and after errors. We outline predictions for spatio-temporal activation patterns during brain imaging, particularly about the contribution of dorsolateral prefrontal cortex and anterior cingulate to the workspace.
Kuschpel, Maxim S; Liu, Shuyan; Schad, Daniel J; Heinzel, Stephan; Heinz, Andreas; Rapp, Michael A
2015-01-01
The interruption of learning processes by breaks filled with diverse activities is common in everyday life. We investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on working memory performance. Young adults were exposed to breaks involving (i) eyes-open resting, (ii) listening to music and (iii) playing the video game "Angry Birds" before performing the n-back working memory task. Based on linear mixed-effects modeling, we found that playing the "Angry Birds" video game during a short learning break led to a decline in task performance over the course of the task as compared to eyes-open resting and listening to music, although overall task performance was not impaired. This effect was associated with high levels of daily mind wandering and low self-reported ability to concentrate. These findings indicate that video games can negatively affect working memory performance over time when played in between learning tasks. We suggest further investigation of these effects because of their relevance to everyday activity. PMID:26579055
Flexible Description Language for HPC based Processing of Remote Sense Data
NASA Astrophysics Data System (ADS)
Nandra, Constantin; Gorgan, Dorian; Bacu, Victor
2016-04-01
When talking about Big Data, the most challenging aspect lies in processing them in order to gain new insight, find new patterns and gain knowledge from them. This problem is likely most apparent in the case of Earth Observation (EO) data. With ever higher numbers of data sources and increasing data acquisition rates, dealing with EO data is indeed a challenge [1]. Geoscientists should address this challenge by using flexible and efficient tools and platforms. In response to this trend, the BigEarth project [2] aims to combine the advantages of high performance computing solutions with flexible processing description methodologies in order to reduce both task execution times and task definition time and effort. As a component of the BigEarth platform, WorDeL (Workflow Description Language) [3] is intended to offer a flexible, compact and modular approach to the task definition process. WorDeL, unlike other description alternatives such as Python or shell scripts, is oriented towards the description of topologies, using them as abstractions for the processing programs. This feature is intended to make it an attractive alternative for users lacking programming experience. By promoting modular designs, WorDeL not only makes the processing descriptions more user-readable and intuitive, but also helps organize the processing tasks into independent sub-tasks, which can be executed in parallel on multi-processor platforms in order to improve execution times. As a BigEarth platform [4] component, WorDeL represents the means by which the user interacts with the system, describing processing algorithms in terms of existing operators and workflows [5], which are ultimately translated into sets of executable commands. The WorDeL language has been designed to help in the definition of compute-intensive, batch tasks which can be distributed and executed on high-performance, cloud or grid-based architectures in order to improve the processing time. Main references for further information: [1] Gorgan, D., "Flexible and Adaptive Processing of Earth Observation Data over High Performance Computation Architectures", International Conference and Exhibition Satellite 2015, August 17-19, Houston, Texas, USA. [2] Bigearth project - flexible processing of big earth data over high performance computing architectures. http://cgis.utcluj.ro/bigearth, (2014) [3] Nandra, C., Gorgan, D., "Workflow Description Language for Defining Big Earth Data Processing Tasks", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 461-468, (2015). [4] Bacu, V., Stefan, T., Gorgan, D., "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 444-454, (2015). [5] Mihon, D., Bacu, V., Colceriu, V., Gorgan, D., "Modeling of Earth Observation Use Cases through the KEOPS System", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 455-460, (2015).
Glisky, E L; Schacter, D L
1989-01-01
This study explored the limits of learning that could be achieved by an amnesic patient in a complex real-world domain. Using a cuing procedure known as the method of vanishing cues, a severely amnesic encephalitic patient was taught over 250 discrete pieces of new information concerning the rules and procedures for performing a task involving data entry into a computer. Subsequently, she was able to use this acquired knowledge to perform the task accurately and efficiently in the workplace. These results suggest that amnesic patients' preserved learning abilities can be extended well beyond what has been reported previously.
NASA Astrophysics Data System (ADS)
Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey
2018-02-01
At the present stage of computer technology development it is possible to study the properties and processes in complex systems at molecular and even atomic levels, for example, by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example, GRID systems and HPC clusters. Given such time-consuming computational tasks, the need arises for software for automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems. This requires output data synchronization between the storage chosen by a scientist and the HPC system used for computations. The design of the computational domain is also a significant challenge, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the management of these computations, and presents the part of its concept aimed at initial data generation on the HPC systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitz, R.R.; Rittmann, P.D.; Wood, M.I.
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
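The kind of spreadsheet-level calculation used as the baseline comparison is essentially a product of factors; a generic ingestion-pathway sketch is shown below. The values and parameter names are illustrative, not those of the cited codes.

```python
def ingestion_dose(conc_bq_per_l, intake_l_per_day, days, dcf_sv_per_bq):
    """Committed effective dose from drinking-water ingestion:
    concentration x intake rate x exposure duration x dose conversion
    factor (DCF)."""
    return conc_bq_per_l * intake_l_per_day * days * dcf_sv_per_bq

# Unit concentration (1 Bq/L), 2 L/day for one year, illustrative DCF:
print(ingestion_dose(1.0, 2.0, 365.0, 2.8e-8), "Sv")
```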
Implicit Behavioral Change in Response to Cognitive Tasks in Alzheimer Disease.
Bomilcar, Iris; Morris, Robin G; Brown, Richard G; Mograbi, Daniel C
2018-03-01
Lack of awareness about impairments is commonly found in Alzheimer disease (AD), but recent evidence suggests that patients may respond to the experience of illness despite limited awareness. In this study, we explored whether implicit emotional responses to experiences of failure in cognitive tasks would result in longer-term change in behavior. Twenty-two patients with AD were seen 1 week after a previous session in which they performed computer tasks that had been manipulated to be either too difficult (failure condition) or very easy (success condition) for them. At the second session, both types of tasks were set to have medium difficulty and were administered so that the participants decided how long to persist on each task. Task persistence was determined by relative time spent doing the tasks, considering that participants would be more likely to stop performing tasks in which they had experienced failure during the first session. Task persistence in the second session was not affected by performance in the first session. However, when participants' awareness of performance in the first session was taken into account, differences were found in persistence between tasks in the second session. During the second session, participants stopped performing tasks after a sequence of errors. There were no self-reported changes in motivation or enjoyment in response to task failure. These findings suggest that implicit learning of task valence may be compromised in AD, but that initial moments of awareness of performance may influence long-term adaptation in unaware patients.
ERIC Educational Resources Information Center
Tsai, Shu-Chiao
2017-01-01
This study reports on investigating students' English translation performance and their use of reading strategies in an elective English writing course offered to senior students of English as a Foreign Language for 100 minutes per week for 12 weeks. A courseware-implemented instruction combined with a task-based learning approach was adopted.…
NASA Technical Reports Server (NTRS)
Harper, Warren
1989-01-01
Two electromagnetic scattering codes, NEC-BSC and ESP3, were delivered and installed on a NASA VAX computer for use by Marshall Space Flight Center antenna design personnel. The tasks were to update the existing codes and certain supplementary software, to install the codes on a computer that will be delivered to the customer, to provide capability for graphic display of the data to be computed by the use of the codes, and to assist the customer in the solution of specific problems that demonstrate the use of the codes. With the exception of one code revision, all of these tasks were performed.
Models for interrupted monitoring of a stochastic process
NASA Technical Reports Server (NTRS)
Palmer, E.
1977-01-01
As computers are added to the cockpit, the pilot's job is changing from one of manually flying the aircraft to one of supervising computers which are doing navigation, guidance and energy management calculations as well as automatically flying the aircraft. In this supervisory role the pilot must divide his attention between monitoring the aircraft's performance and giving commands to the computer. Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies.
Learners' Willingness to Communicate in Face-to-Face versus Oral Computer-Mediated Communication
ERIC Educational Resources Information Center
Yanguas, Íñigo; Flores, Alayne
2014-01-01
The present study had two main goals: to explore performance differences in a task-based environment between face-to-face (FTF) and oral computer-mediated communication (OCMC) groups, and to investigate the relationship between trait-like willingness to communicate (WTC) and performance in the FTF and OCMC groups. Students from two intact…
Young Children's Skill in Using a Mouse to Control a Graphical Computer Interface.
ERIC Educational Resources Information Center
Crook, Charles
1992-01-01
Describes a study that investigated the performance of preschoolers and children in the first three years of formal education on tasks that involved skills using a mouse-based control of a graphical computer interface. The children's performance is compared with that of novice adult users and expert users. (five references) (LRW)
Carlsson, Emilia; Miniscalco, Carmela; Gillberg, Christopher; Åsberg Johnels, Jakob
2018-03-26
We have developed a False-Belief (FB) understanding task for use on a computer tablet, trying to assess FB understanding in a less social way. It is based on classical FB protocols, and additionally includes a manipulation of language in an attempt to explore the facilitating effect of linguistic support during FB processing. Specifically, the FB task was presented in three auditory conditions: narrative, silent, and interference. The task was assumed to shed new light on the FB difficulties often observed in Autism Spectrum Disorder (ASD). Sixty-eight children with ASD (M = 7.5 years) and an age matched comparison group with 98 typically developing (TD) children were assessed with the FB task. The children with ASD did not perform above chance level in any condition, and significant differences in success rates were found between the groups in two conditions (silent and narrative), with TD children performing better. We discuss implications, limitations, and further developments.
Chaisangmongkon, Warasinee; Swaminathan, Sruthi K.; Freedman, David J.; Wang, Xiao-Jing
2017-01-01
Decision making involves dynamic interplay between internal judgements and external perception, which has been investigated in delayed match-to-category (DMC) experiments. Our analysis of neural recordings shows that, during DMC tasks, LIP and PFC neurons demonstrate mixed, time-varying, and heterogeneous selectivity, but previous theoretical work has not established the link between these neural characteristics and population-level computations. We trained a recurrent network model to perform DMC tasks and found that the model can remarkably reproduce key features of neuronal selectivity at the single-neuron and population levels. Analysis of the trained networks elucidates that robust transient trajectories of the neural population are the key driver of sequential categorical decisions. The directions of trajectories are governed by network self-organized connectivity, defining a ‘neural landscape’, consisting of a task-tailored arrangement of slow states and dynamical tunnels. With this model, we can identify functionally-relevant circuit motifs and generalize the framework to solve other categorization tasks. PMID:28334612
Jacova, Claudia; McGrenere, Joanna; Lee, Hyunsoo S; Wang, William W; Le Huray, Sarah; Corenblith, Emily F; Brehmer, Matthew; Tang, Charlotte; Hayden, Sherri; Beattie, B Lynn; Hsiung, Ging-Yuek R
2015-01-01
Cognitive Testing on Computer (C-TOC) is a novel computer-based test battery developed to improve both usability and validity in the computerized assessment of cognitive function in older adults. C-TOC's usability was evaluated concurrently with its iterative development to version 4 in subjects with and without cognitive impairment, and with health professional advisors representing different ethnocultural groups. C-TOC version 4 was then validated against neuropsychological tests (NPTs), and by comparing performance scores of subjects with normal cognition, Cognitive Impairment Not Dementia (CIND), and Alzheimer disease. C-TOC's language tests were validated in subjects with aphasic disorders. The most important usability issue that emerged from consultations with 27 older adults and with 8 cultural advisors was the test-takers' understanding of the task, particularly executive function tasks. User interface features did not pose significant problems. C-TOC version 4 tests correlated with comparator NPTs (r=0.4 to 0.7). C-TOC test scores ordered as normal (n=16) > CIND (n=16) > Alzheimer disease (n=6). All normal/CIND NPT performance differences were detected on C-TOC. Low computer knowledge adversely affected test performance, particularly in CIND. C-TOC detected impairments in aphasic disorders (n=11). In general, C-TOC had good validity in detecting cognitive impairment. Ensuring test-takers' understanding of the tasks, and considering their computer knowledge, appear to be important steps towards C-TOC's implementation.
Academic physicians' assessment of the effects of computers on health care.
Detmer, W. M.; Friedman, C. P.
1994-01-01
We assessed the attitudes of academic physicians towards computers in health care at two academic medical centers that are in the early stages of clinical information-system deployment. We distributed a 4-page questionnaire to 470 subjects, and a total of 272 physicians (58%) responded. Our results show that respondents use computers frequently, primarily to perform academic-oriented tasks as opposed to clinical tasks. Overall, respondents viewed computers as being slightly beneficial to health care. They perceive self-education and access to up-to-date information as the most beneficial aspects of computers and are most concerned about privacy issues and the effect of computers on the doctor-patient relationship. Physicians with prior computer training and greater knowledge of informatics concepts had more favorable attitudes towards computers in health care. We suggest that negative attitudes towards computers can be addressed by careful system design as well as targeted educational activities. PMID:7949990
Some foundational aspects of quantum computers and quantum robots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.; Physics
1998-01-01
This paper addresses foundational issues related to quantum computing. The need for a universally valid theory such as quantum mechanics to describe to some extent its own validation is noted. This includes quantum mechanical descriptions of systems that do theoretical calculations (i.e. quantum computers) and systems that perform experiments. Quantum robots interacting with an environment are a small first step in this direction. Quantum robots are described here as mobile quantum systems with on-board quantum computers that interact with environments. Included are discussions on the carrying out of tasks and the division of tasks into computation and action phases. Specific models based on quantum Turing machines are described. Differences and similarities between quantum robots plus environments and quantum computers are discussed.
Task Assignment Heuristics for Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, N.; Djomehri, M. J.; Biswas, R.; Biegel, Bryan (Technical Monitor)
2001-01-01
CFD applications require high-performance computational platforms: (1) complex physics and domain configuration demand strongly coupled solutions; (2) applications are CPU and memory intensive; and (3) huge resource requirements can only be satisfied by teraflop-scale machines or distributed computing.
Improving Physical Task Performance with Counterfactual and Prefactual Thinking.
Hammell, Cecilia; Chan, Amy Y C
2016-01-01
Counterfactual thinking (reflecting on "what might have been") has been shown to enhance future performance by translating information about past mistakes into plans for future action. Prefactual thinking (imagining "what might be if…") may serve a greater preparative function than counterfactual thinking as it is future-orientated and focuses on more controllable features, thus providing a practical script to prime future behaviour. However, whether or not this difference in hypothetical thought content may translate into a difference in actual task performance has been largely unexamined. In Experiment 1 (n = 42), participants performed trials of a computer-simulated physical task, in between which they engaged in either task-related hypothetical thinking (counterfactual or prefactual) or an unrelated filler task (control). As hypothesised, prefactuals contained more controllable features than counterfactuals. Moreover, participants who engaged in either form of hypothetical thinking improved significantly in task performance over trials compared to participants in the control group. The difference in thought content between counterfactuals and prefactuals, however, did not yield a significant difference in performance improvement. Experiment 2 (n = 42) replicated these findings in a dynamic balance task environment. Together, these findings provide further evidence for the preparatory function of counterfactuals, and demonstrate that prefactuals share this same functional characteristic.
Guastello, Stephen J; Gorin, Hillary; Huschen, Samuel; Peters, Natalie E; Fabisch, Megan; Poston, Kirsten
2012-10-01
It has become well established in laboratory experiments that switching tasks, perhaps due to interruptions at work, incurs costs in response time to complete the next task. Conditions are also known that exaggerate or lessen the switching costs. Although switching costs can contribute to fatigue, task switching can also be an adaptive response to fatigue. The present study introduces a new research paradigm for studying the emergence of voluntary task switching regimes, self-organizing processes therein, and the possibly conflicting roles of switching costs and minimum entropy. Fifty-four undergraduates performed 7 different computer-based cognitive tasks, producing sets of 49 responses, under instructional conditions requiring task quotas or no quotas. The sequences of task choices were analyzed using orbital decomposition to extract pattern types and lengths, which were then classified and compared with regard to Shannon entropy, topological entropy, number of task switches involved, and overall performance. Results indicated that similar but different patterns were generated under the two instructional conditions, and better performance was associated with lower topological entropy. Both entropy metrics were associated with the amount of voluntary task switching. Future research should explore conditions affecting the trade-off between switching costs and entropy, levels of automaticity between task elements, and the role of voluntary switching regimes on fatigue.
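The entropy measures referenced above can be illustrated with a simple stand-in (orbital decomposition itself involves more machinery): take the distribution of length-k subsequences of the task-choice series and compute its Shannon entropy. The sequence below is hypothetical.

```python
from collections import Counter
from math import log2

def pattern_entropy(seq, k=2):
    """Shannon entropy (bits) of the distribution of length-k
    subsequences of a task-choice series -- a simple stand-in for the
    pattern-based entropies produced by orbital decomposition."""
    grams = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    counts = Counter(grams)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical sequence of choices among 7 tasks:
choices = [1, 2, 2, 3, 1, 4, 4, 4, 2, 1, 5, 6, 7, 7, 1]
print(pattern_entropy(choices))
```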
Neuroimaging of the joint Simon effect with believed biological and non-biological co-actors
Wen, Tanya; Hsieh, Shulan
2015-01-01
Performing a task alone or together with another agent can produce different outcomes. The current study used event-related functional magnetic resonance imaging (fMRI) to investigate the neural underpinnings when participants performed a Go/Nogo task alone or complementarily with another co-actor (unseen), whom was believed to be another human or a computer. During both complementary tasks, reaction time data suggested that participants integrated the potential action of their co-actor in their own action planning. Compared to the single-actor task, increased parietal and precentral activity during complementary tasks as shown in the fMRI data further suggested representation of the co-actor’s response. The superior frontal gyrus of the medial prefrontal cortex was differentially activated in the human co-actor condition compared to the computer co-actor condition. The medial prefrontal cortex, involved thinking about the beliefs and intentions of other people, possibly reflects a social-cognitive aspect or self-other discrimination during the joint task when believing a biological co-actor is present. Our results suggest that action co-representation can occur even offline with any agent type given a priori information that they are co-acting; however, additional regions are recruited when participants believe they are task-sharing with another human. PMID:26388760
The effects of advertisement location and familiarity on selective attention.
Jessen, Tanja Lund; Rodway, Paul
2010-06-01
This study comprised two experiments to examine the distracting effects of advertisement familiarity, location, and onset on the performance of a selective attention task. In Exp. 1, familiar advertisements presented in peripheral vision disrupted selective attention when the attention task was more demanding, suggesting that the distracting effect of advertisements is a product of task demands and advertisement familiarity and location. In Exp. 2, the onset of the advertisement shortly before, or after, the attention task captured attention and disrupted attentional performance. The onset of the advertisement before the attention task reduced target response time without an increase in errors and therefore facilitated performance. Despite being instructed to ignore the advertisements, the participants were able to recall a substantial proportion of the familiar advertisements. Implications for the presentation of advertisements during human-computer interaction were discussed.
An anatomy of industrial robots and their controls
NASA Astrophysics Data System (ADS)
Luh, J. Y. S.
1983-02-01
The modernization of manufacturing facilities by means of automation represents an approach for increasing productivity in industry. The three existing types of automation are related to continuous process controls, the use of transfer conveyor methods, and the employment of programmable automation for the low-volume batch production of discrete parts. Industrial robots, which are defined as computer-controlled mechanical manipulators, belong to the area of programmable automation. Typically, the robots perform tasks of arc welding, paint spraying, or foundry operations. One may assign a robot to perform a variety of job assignments simply by changing the appropriate computer program. The present investigation is concerned with an evaluation of the potential of the robot on the basis of its basic structure and controls. It is found that robots function well in limited areas of industry. If the range of tasks which robots can perform is to be expanded, it is necessary to provide multiple-task sensors, or special tooling, or even automatic tooling.
Task-dependent signal variations in EEG error-related potentials for brain-computer interfaces.
Iturrate, I; Montesano, L; Minguez, J
2013-04-01
A major difficulty of brain-computer interface (BCI) technology is dealing with the noise of EEG and its signal variations. Previous works studied time-dependent non-stationarities for BCIs in which the user's mental task was independent of the device operation (e.g., the mental task was motor imagery and the operational task was a speller). However, there are some BCIs, such as those based on error-related potentials, where the mental and operational tasks are dependent (e.g., the mental task is to assess the device action and the operational task is the device action itself). The dependence between the mental task and the device operation could introduce a new source of signal variations when the operational task changes, which has not been studied yet. The aim of this study is to analyse task-dependent signal variations and their effect on EEG error-related potentials. The work analyses the EEG variations on the three design steps of BCIs: an electrophysiology study to characterize the existence of these variations, a feature distribution analysis and a single-trial classification analysis to measure the impact on the final BCI performance. The results demonstrate that a change in the operational task produces variations in the potentials, even when EEG activity exclusively originated in brain areas related to error processing is considered. Consequently, the extracted features from the signals vary, and a classifier trained with one operational task presents a significant loss of performance for other tasks, requiring calibration or adaptation for each new task. In addition, a new calibration for each of the studied tasks rapidly outperforms adaptive techniques designed in the literature to mitigate the EEG time-dependent non-stationarities.
Software and Hardware Utilization in Computer Medicine Education.
ERIC Educational Resources Information Center
Pitts, Gerald N.; Bateman, Barry L.
Computers are currently being used to perform medical tasks such as: (1) taking medical histories; (2) patient care and health-unit care management; (3) clinical and laboratory work; (4) physiological signal monitoring; and (5) multiphasic screening. In a survey of over 200 institutions, over 339 computer language applications were found, many of…
NASA Astrophysics Data System (ADS)
Antonik, Piotr; Haelterman, Marc; Massar, Serge
2017-05-01
Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments, the output was uncoupled from the system and, in most cases, simply computed off-line on a postprocessing computer. However, numerical investigations have shown that feeding the output back into the reservoir opens the possibility of long-horizon time-series forecasting. Here, we present a photonic reservoir computer with output feedback, and we demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, we introduce several metrics, based on standard signal-processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers and, therefore, the range of applications they could potentially tackle. It also raises interesting questions in nonlinear dynamics and chaos theory.
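The output-feedback architecture can be sketched in software as an echo state network (the paper's system is photonic; training, typically ridge regression with teacher forcing, is omitted, and all parameters below are illustrative): after training, the readout y is fed back in place of the input, so the reservoir runs autonomously and generates its own time series.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200                                          # reservoir size
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius to 0.9
w_fb = rng.uniform(-1.0, 1.0, N)                 # output-feedback weights

def generate(w_out, x0, steps):
    """Run the reservoir autonomously: the linear readout y is fed back
    into the state update, so the system generates its own input."""
    x, ys = x0.copy(), []
    for _ in range(steps):
        y = float(w_out @ x)              # linear readout
        x = np.tanh(W @ x + w_fb * y)     # feedback-driven reservoir update
        ys.append(y)
    return ys

# With an untrained random readout this just produces some trajectory;
# w_out would normally be fit by ridge regression with teacher forcing.
ys = generate(w_out=rng.normal(0.0, 0.1, N),
              x0=rng.uniform(-0.5, 0.5, N), steps=100)
```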
Sort-Mid tasks scheduling algorithm in grid computing.
Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M
2015-11-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim for several researchers is to develop variant scheduling algorithms for achieving optimality, and they have shown good performance for task scheduling with regard to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to compute, for each task, the average of its sorted list of completion times. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan. PMID:26644937
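A literal reading of the steps in the abstract yields roughly the following sketch. The expected-time matrix, the machine ready-time update after each allocation, and all names are assumptions, since the abstract does not fully specify them.

```python
def sort_mid(etc):
    """Sort-Mid heuristic, per the abstract: repeatedly pick the
    unscheduled task whose sorted completion-time list has the largest
    average, and assign it to its minimum-completion-time machine.

    etc[t][m] = expected execution time of task t on machine m.
    """
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # machine ready times (assumed update)
    unscheduled, schedule = set(range(n_tasks)), {}
    while unscheduled:
        def avg_completion(t):
            # completion time = machine ready time + expected execution time
            times = sorted(ready[m] + etc[t][m] for m in range(n_machines))
            return sum(times) / n_machines
        t = max(unscheduled, key=avg_completion)      # max-average task
        m = min(range(n_machines), key=lambda j: ready[j] + etc[t][j])
        schedule[t] = m
        ready[m] += etc[t][m]
        unscheduled.remove(t)
    return schedule

# Three tasks on two machines:
print(sort_mid([[4.0, 6.0], [2.0, 9.0], [5.0, 3.0]]))
```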
Li, Jianjun; Zhang, Rubo; Yang, Yu
2017-01-01
This paper studies a distributed task planning model for multiple autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve the multi-AUV optimal task planning problem. In the uncertain marine environment, the rolling time domain control technique is used to perform numerical optimization over a narrowed time range. Rolling time domain control is one of the better task planning techniques: it can greatly reduce the computational workload and realize the tradeoff between AUV dynamics, environment, and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the STDQABC algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal, and obtain an approximately optimal solution.
Quantitative analysis of task selection for brain-computer interfaces
NASA Astrophysics Data System (ADS)
Llera, Alberto; Gómez, Vicenç; Kappen, Hilbert J.
2014-10-01
Objective. To assess quantitatively the impact of task selection in the performance of brain-computer interfaces (BCI). Approach. We consider the task-pairs derived from multi-class BCI imagery movement tasks in three different datasets. We analyze for the first time the benefits of task selection on a large-scale basis (109 users) and evaluate the possibility of transferring task-pair information across days for a given subject. Main results. Selecting the subject-dependent optimal task-pair among three different imagery movement tasks results in approximately 20% potential increase in the number of users that can be expected to control a binary BCI. The improvement is observed with respect to the best task-pair fixed across subjects. The best task-pair selected for each subject individually during a first day of recordings is generally a good task-pair in subsequent days. In general, task learning from the user side has a positive influence in the generalization of the optimal task-pair, but special attention should be given to inexperienced subjects. Significance. These results add significant evidence to existing literature that advocates task selection as a necessary step towards usable BCIs. This contribution motivates further research focused on deriving adaptive methods for task selection on larger sets of mental tasks in practical online scenarios.
Human problem solving performance in a fault diagnosis task
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1978-01-01
It is proposed that humans in automated systems will be asked to assume the role of troubleshooter or problem solver and that the problems which they will be asked to solve in such systems will not be amenable to rote solution. The design of visual displays for problem solving in such situations is considered, and the results of two experimental investigations of human problem solving performance in the diagnosis of faults in graphically displayed network problems are discussed. The effects of problem size, forced-pacing, computer aiding, and training are considered. Results indicate that human performance deviates from optimality as problem size increases. Forced-pacing appears to cause the human to adopt fairly brute force strategies, as compared to those adopted in self-paced situations. Computer aiding substantially lessens the number of mistaken diagnoses by performing the bookkeeping portions of the task.
Evaluation of a novel Serious Game based assessment tool for patients with Alzheimer's disease.
Vallejo, Vanessa; Wyss, Patric; Rampa, Luca; Mitache, Andrei V; Müri, René M; Mosimann, Urs P; Nef, Tobias
2017-01-01
Despite growing interest in developing ecological assessments of difficulties in patients with Alzheimer's disease, new methods assessing the cognitive difficulties related to functional activities are missing. To complement current evaluation, Serious Games are a promising approach, as they make it possible to recreate a virtual environment with daily living activities together with a precise and complete cognitive evaluation. The aim of the present study was to evaluate the usability and the screening potential of a new ecological tool for assessing cognitive functions in patients with Alzheimer's disease. Eighteen patients with Alzheimer's disease and twenty healthy controls participated in the study. They were asked to complete six virtual daily living tasks assessing several cognitive functions: three navigation tasks, one shopping task, one cooking task, and one table preparation task, following a one-day scenario. Usability of the game was evaluated through a questionnaire and through analysis of the computer interactions of the two groups. Furthermore, performance in terms of time to achieve each task and percentage of completion across the tasks was recorded. Results indicate that both groups subjectively found the game user friendly and were objectively able to play it without computer interaction difficulties. Comparison of performance between the two groups indicated significant differences in the percentage of achievement of the tasks and in the time needed to achieve them. This study suggests that this new Serious Game based assessment tool is a user-friendly and ecological method for evaluating the cognitive abilities related to the difficulties patients can encounter in daily living activities, and that it can be used as a screening tool, since it distinguished the performance of Alzheimer's patients from that of healthy controls.
POPEYE: A production rule-based model of multitask supervisory control (POPCORN)
NASA Technical Reports Server (NTRS)
Townsend, James T.; Kadlec, Helena; Kantowitz, Barry H.
1988-01-01
Recent studies of relationships between subjective ratings of mental workload, performance, and human operator and task characteristics have indicated that these relationships are quite complex. In order to study the various relationships and place subjective mental workload within a theoretical framework, we developed a production system model for the performance component of the complex supervisory task called POPCORN. The production system model is represented by a hierarchical structure of goals and subgoals, and the information flow is controlled by a set of condition-action rules. The implementation of this production system, called POPEYE, generates computer simulated data under different task difficulty conditions which are comparable to those of human operators performing the task. This model is the performance aspect of an overall dynamic psychological model which we are developing to examine and quantify relationships between performance and psychological aspects in a complex environment.
Improving balance by performing a secondary cognitive task.
Swan, Laurie; Otani, Hajime; Loubert, Peter V; Sheffert, Sonya M; Dunbar, Gary L
2004-02-01
Contrary to general findings in the attention and memory literature, some studies have shown that performing a secondary cognitive task produces an improvement in balance performance. The purpose of the present experiment was to investigate under what condition such an improvement would occur. Young and older adults were asked to hold as still as possible on a platform that measured sway while performing or not performing the encoding phase of the Brooks' (1967) spatial or non-spatial memory task. The difficulty of maintaining balance was manipulated by varying the availability of visual input and sway-referenced motion of the platform. Sway scores were computed based on the distance between the individual pressure centres and the average centre of pressure during each 20-s trial. The results indicated that both the spatial and non-spatial memory tasks improved balance for older adults under the most difficult balance condition.
Interpersonal Biocybernetics: Connecting Through Social Psychophysiology
NASA Technical Reports Server (NTRS)
Pope, Alan T.; Stephens, Chad L.
2012-01-01
One embodiment of biocybernetic adaptation is a human-computer interaction system designed such that physiological signals modulate the effect that control of a task by other means, usually manual control, has on performance of the task. Such a modulation system enables a variety of human-human interactions based upon physiological self-regulation performance. These interpersonal interactions may be mixes of competition and cooperation for simulation training and/or videogame entertainment.
Falls Risk and Simulated Driving Performance in Older Adults
Gaspar, John G.; Neider, Mark B.; Kramer, Arthur F.
2013-01-01
Declines in executive function and dual-task performance have been related to falls in older adults, and recent research suggests that older adults at risk for falls also show impairments on real-world tasks, such as crossing a street. The present study examined whether falls risk was associated with driving performance in a high-fidelity simulator. Participants were classified as high or low falls risk using the Physiological Profile Assessment and completed a number of challenging simulated driving assessments in which they responded quickly to unexpected events. High falls risk drivers had slower response times (~2.1 seconds) to unexpected events compared to low falls risk drivers (~1.7 seconds). Furthermore, when asked to perform a concurrent cognitive task while driving, high falls risk drivers showed greater costs to secondary task performance than did low falls risk drivers, and low falls risk older adults also outperformed high falls risk older adults on a computer-based measure of dual-task performance. Our results suggest that attentional differences between high and low falls risk older adults extend to simulated driving performance.
Dual Arm Work Package performance estimates and telerobot task network simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draper, J.V.; Blair, L.M.
1997-02-01
This paper describes the methodology and results of a network simulation study of the Dual Arm Work Package (DAWP), to be employed for dismantling the Argonne National Laboratory CP-5 reactor. The development of the simulation model was based upon the results of a task analysis for the same system. This study was performed by the Oak Ridge National Laboratory (ORNL), in the Robotics and Process Systems Division. Funding was provided by the US Department of Energy's Office of Technology Development, Robotics Technology Development Program (RTDP). The RTDP is developing methods of computer simulation to estimate telerobotic system performance. Data were collected to provide point estimates to be used in a task network simulation model. Three skilled operators performed six repetitions of a pipe cutting task representative of typical teleoperation cutting operations.
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
GPU Accelerated Vector Median Filter
NASA Technical Reports Server (NTRS)
Aras, Rifat; Shen, Yuzhong
2011-01-01
Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive: for a window size of n x n, each of the n² vectors has to be compared with the other n² - 1 vectors in terms of distance. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has, to the best of our knowledge, never been done before. The performance of the GPU accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
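The vector median operation itself is easy to state in code. The following is a plain NumPy reference sketch of the filter the abstract describes (the CPU baseline idea, not the CUDA implementation); the window size and edge padding are illustrative choices.

```python
# A compact NumPy sketch of the vector median filter: within each n x n
# window, the output pixel is the color vector minimizing the sum of
# Euclidean distances to all other vectors in the window.
import numpy as np

def vector_median(window):
    """window: (k, 3) array of RGB vectors from one n x n neighborhood."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
    return window[d.sum(axis=1).argmin()]

def vmf(image, n=3):
    """Apply the vector median filter with an n x n window (odd n)."""
    r = n // 2
    pad = np.pad(image, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            win = pad[i:i + n, j:j + n].reshape(-1, 3)
            out[i, j] = vector_median(win)
    return out
```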
Use of a Tracing Task to Assess Visuomotor Performance: Effects of Age, Sex, and Handedness
2013-01-01
Background. Visuomotor abnormalities are common in aging and age-related disease, yet difficult to quantify. This study investigated the effects of healthy aging, sex, and handedness on the performance of a tracing task. Participants (n = 150, aged 21–95 years, 75 females) used a stylus to follow a moving target around a circle on a tablet computer with their dominant and nondominant hands. Participants also performed the Trail Making Test (a measure of executive function). Methods. Deviations from the circular path were computed to derive an “error” time series. For each time series, absolute mean, variance, and complexity index (a proposed measure of system functionality and adaptability) were calculated. Using the moving target and stylus coordinates, the percentage of task time within the target region and the cumulative micropause duration (a measure of motion continuity) were computed. Results. All measures showed significant effects of aging (p < .0005). Post hoc age group comparisons showed that with increasing age, the absolute mean and variance of the error increased, complexity index decreased, percentage of time within the target region decreased, and cumulative micropause duration increased. Only complexity index showed a significant difference between dominant versus nondominant hands within each age group (p < .0005). All measures showed relationships to the Trail Making Test (p < .05). Conclusions. Measures derived from a tracing task identified performance differences in healthy individuals as a function of age, sex, and handedness. Studies in populations with specific neuromotor syndromes are warranted to test the utility of measures based on the dynamics of tracking a target as a clinical assessment tool.
ERIC Educational Resources Information Center
McCluskey, James J.
1997-01-01
A study of 160 undergraduate journalism students trained to design projects (stacks) using HyperCard on Macintosh computers determined that right-brain dominant subjects outperformed left-brain and mixed-brain dominant subjects, whereas left-brain dominant subjects outperformed mixed-brain dominant subjects in several areas. Recommends future…
Four Studies on Aspects of Assessing Computational Performance. Technical Report No. 297.
ERIC Educational Resources Information Center
Romberg, Thomas A., Ed.
The four studies reported in this document deal with aspects of assessing students' performance on computational skills. The first study grew out of a need for an instrument to measure students' speed at recalling addition facts. This had seemed to be a very easy task, but it proved to be much more difficult than anticipated. The second study grew…
LUNGx Challenge for computerized lung nodule classification
Armato, Samuel G.; Drukker, Karen; Li, Feng; Hadjiiski, Lubomir; Tourassi, Georgia D.; Engelmann, Roger M.; Giger, Maryellen L.; Redmond, George; Farahani, Keyvan; Kirby, Justin S.; Clarke, Laurence P.
2016-01-01
The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants’ computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists’ AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. The continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.
Selling points: What cognitive abilities are tapped by casual video games?
Baniqued, Pauline L.; Lee, Hyunkyu; Voss, Michelle W.; Basak, Chandramallika; Cosman, Joshua D.; DeSouza, Shanna; Severson, Joan; Salthouse, Timothy A.; Kramer, Arthur F.
2013-01-01
The idea that video games or computer-based applications can improve cognitive function has led to a proliferation of programs claiming to “train the brain.” However, there is often little scientific basis in the development of commercial training programs, and many research-based programs yield inconsistent or weak results. In this study, we sought to better understand the nature of cognitive abilities tapped by casual video games and thus reflect on their potential as a training tool. A moderately large sample of participants (n=209) played 20 web-based casual games and performed a battery of cognitive tasks. We used cognitive task analysis and multivariate statistical techniques to characterize the relationships between performance metrics. We validated the cognitive abilities measured in the task battery, examined a task analysis-based categorization of the casual games, and then characterized the relationship between game and task performance. We found that games categorized to tap working memory and reasoning were robustly related to performance on working memory and fluid intelligence tasks, with fluid intelligence best predicting scores on working memory and reasoning games. We discuss these results in the context of overlap in cognitive processes engaged by the cognitive tasks and casual games, and within the context of assessing near and far transfer. While this is not a training study, these findings provide a methodology to assess the validity of using certain games as training and assessment devices for specific cognitive abilities, and shed light on the mixed transfer results in the computer-based training literature. Moreover, the results can inform design of a more theoretically-driven and methodologically-sound cognitive training program.
A novel strategy for load balancing of distributed medical applications.
Logeswaran, Rajasvaran; Chen, Li-Choo
2012-04-01
Current trends in medicine, specifically the electronic handling of medical applications ranging from digital imaging, paperless hospital administration, and electronic medical records to telemedicine and computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach in load balancing, the Random Sender Initiated Algorithm, for distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms: the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.
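The paper does not detail the Random Sender Initiated Algorithm here, but its name follows the classic sender-initiated load-balancing pattern, which a sketch can illustrate: an overloaded node forwards an arriving task to a randomly chosen peer instead of probing every node for the shortest queue. The threshold and node count below are assumptions for illustration, not the paper's parameters.

```python
# Generic sender-initiated load balancing sketch consistent with the
# algorithm's name: an overloaded node forwards an arriving task to a
# randomly chosen peer, avoiding the state-collection cost of
# shortest-queue selection. Threshold and node count are illustrative.
import random

class Node:
    def __init__(self, threshold=5):
        self.queue = []
        self.threshold = threshold

    def accept(self, task):
        self.queue.append(task)

def submit(task, node, peers):
    # Sender-initiated: the loaded node picks a random receiver
    if len(node.queue) >= node.threshold and peers:
        random.choice(peers).accept(task)
    else:
        node.accept(task)

nodes = [Node() for _ in range(4)]
for t in range(40):
    home = nodes[t % 4]
    submit(t, home, [n for n in nodes if n is not home])
```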
Characterization of task-free and task-performance brain states via functional connectome patterns.
Zhang, Xin; Guo, Lei; Li, Xiang; Zhang, Tuo; Zhu, Dajiang; Li, Kaiming; Chen, Hanbo; Lv, Jinglei; Jin, Changfeng; Zhao, Qun; Li, Lingjiang; Liu, Tianming
2013-12-01
Both resting state fMRI (R-fMRI) and task-based fMRI (T-fMRI) have been widely used to study the functional activities of the human brain during task-free and task-performance periods, respectively. However, due to the difficulty in strictly controlling the participating subject's mental status and their cognitive behaviors during R-fMRI/T-fMRI scans, it has been challenging to ascertain whether or not an R-fMRI/T-fMRI scan truly reflects the participant's functional brain states during task-free/task-performance periods. This paper presents a novel computational approach to characterizing and differentiating the brain's functional status into task-free or task-performance states, by which the functional brain activities can be effectively understood and differentiated. Briefly, the brain's functional state is represented by a whole-brain quasi-stable connectome pattern (WQCP) of R-fMRI or T-fMRI data based on 358 consistent cortical landmarks across individuals, and then an effective sparse representation method was applied to learn the atomic connectome patterns (ACPs) of both task-free and task-performance states. Experimental results demonstrated that the learned ACPs for R-fMRI and T-fMRI datasets are substantially different, as expected. A certain portion of ACPs from R-fMRI and T-fMRI data were overlapped, suggesting some subjects with overlapping ACPs were not in the expected task-free/task-performance brain states. Besides, potential outliers in the T-fMRI dataset were further investigated via functional activation detections in different groups, and our results revealed unexpected task-performances of some subjects. This work offers novel insights into the functional architectures of the brain.
Connectionist Models and Parallelism in High Level Vision.
Feldman, Jerome A.
1985-01-01
Computer science is just beginning to look seriously at parallel computation: it may turn out that...the chair. The program includes intermediate level networks that compute more complex joints and ones that compute parallelograms in the image. These
DOT National Transportation Integrated Search
2016-01-01
Human attention is a finite resource. When interrupted while performing a task, this resource is split between two interactive tasks. People have to decide whether the benefits from the interruptive interaction will be enough to offset the loss o...
The Fleeting Nature of Sex Differences in Spatial Ability.
ERIC Educational Resources Information Center
Alderton, David L.
Gender differences were examined on three computer-administered spatial processing tasks: (1) the Intercept task, requiring processing dynamic or moving figures; (2) the mental rotation test, employing rotated asymmetric polygons; and (3) the integrating details test, in which subjects performed a complex visual synthesis. Participants were about…
Task scheduling in dataflow computer architectures
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1994-01-01
Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
NASA Astrophysics Data System (ADS)
Tezuka, Miwa; Kanno, Kazutaka; Bunsen, Masatoshi
2016-08-01
Reservoir computing is a machine-learning paradigm based on information processing in the human brain. We numerically demonstrate reservoir computing with a slowly modulated mask signal for preprocessing by using a mutually coupled optoelectronic system. The performance of our system is quantitatively evaluated by a chaotic time series prediction task. Our system can produce performance comparable to that of reservoir computing with a single feedback system and a fast modulated mask signal. We show that it is possible to slow down the modulation speed of the mask signal by using the mutually coupled system in reservoir computing.
Evaluation of Hands-Free Devices for Space Habitat Maintenance Procedures
NASA Technical Reports Server (NTRS)
Hoffman, R. B.; Twyford, E.; Conlee, C. S.; Litaker, H. L.; Solemn, J. A.; Holden
2007-01-01
Currently, International Space Station (ISS) crews use a laptop computer to display procedures for performing onboard maintenance tasks. This approach has been determined to be suboptimal. A heuristic evaluation and two studies have been completed to test commercial off-the-shelf (COTS) "near-eye" heads up displays (HUDs) for support of these types of maintenance tasks. In both studies, subjects worked through electronic procedures to perform simple maintenance tasks. As a result of the Phase I study, three HUDs were down-selected to one. In the Phase II study, the HUD was compared against two other electronic display devices - a laptop computer and an e-book reader. Results suggested that adjustability and stability of the HUD display were the most significant acceptability factors to consider for near-eye displays. The Phase II study uncovered a number of advantages and disadvantages of the HUD relative to the laptop and e-book reader for interacting with electronic procedures.
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin J.
2013-01-01
The Mobile Thread Task Manager (MTTM) is being applied to parallelizing existing flight software to understand the benefits and to develop new techniques and architectural concepts for adapting software to multicore architectures. It allocates and load-balances tasks for a group of threads that migrate across processors to improve cache performance. In order to balance load across threads, the MTTM augments a basic map-reduce strategy to draw jobs from a global queue. In a multicore processor, memory may be "homed" to the cache of a specific processor and must be accessed from that processor. The MTTM architecture wraps access to data with thread management to move threads to the home processor for that data, so that the computation follows the data in an attempt to avoid L2 cache misses. Cache homing is also handled by a memory manager that translates identifiers to processor IDs where the data will be homed (according to rules defined by the user). The user can also specify the number of threads and processors separately, which is important for tuning performance for different patterns of computation and memory access. MTTM efficiently processes tasks in parallel on a multiprocessor computer. It also provides an interface to make it easier to adapt existing software to a multiprocessor environment.
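A minimal Python sketch of the load-balancing idea described above (not the flight-software implementation): worker threads draw jobs from a shared global queue, so faster workers naturally take on more of the work. Thread migration and cache homing are beyond the scope of this sketch.

```python
# Worker threads draw tasks from one global queue, so load balances
# itself: a fast worker simply pulls the next job sooner. The squaring
# step is a stand-in for the real per-task computation.
import queue
import threading

tasks = queue.Queue()
for i in range(100):
    tasks.put(i)

results, lock = [], threading.Lock()

def worker():
    while True:
        try:
            item = tasks.get_nowait()   # draw the next job from the global queue
        except queue.Empty:
            return
        value = item * item             # stand-in for the real computation
        with lock:
            results.append(value)       # "reduce" step collects outputs

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```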
A Scheduling Algorithm for Replicated Real-Time Tasks
NASA Technical Reports Server (NTRS)
Yu, Albert C.; Lin, Kwei-Jay
1991-01-01
We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerance requirements. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, redundancy and masking are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task against the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
A study on haptic collaborative game in shared virtual environment
NASA Astrophysics Data System (ADS)
Lu, Keke; Liu, Guanyang; Liu, Lingzhi
2013-03-01
A study of a collaborative game with haptic feedback in a shared virtual environment over computer networks is introduced in this paper. A collaborative task was used in which players located at remote sites played the game together. Compared to traditional networked multiplayer games, players receive both visual and haptic feedback in the virtual environment. The experiment was designed with two conditions: visual feedback only and visual-haptic feedback. The goal of the experiment is to assess the impact of force feedback on collaborative task performance. Results indicate that haptic feedback is beneficial for performance enhancement in collaborative games in shared virtual environments. The outcomes of this research can have a powerful impact on networked computer games.
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
schwimmbad: A uniform interface to parallel processing pools in Python
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Foreman-Mackey, Daniel
2017-09-01
Many scientific and computing problems require doing some calculation on all elements of some data set. If the calculations can be executed in parallel (i.e. without any communication between calculations), these problems are said to be perfectly parallel. On computers with multiple processing cores, these tasks can be distributed and executed in parallel to greatly improve performance. A common paradigm for handling these distributed computing problems is to use a processing "pool": the "tasks" (the data) are passed in bulk to the pool, and the pool handles distributing the tasks to a number of worker processes when available. schwimmbad provides a uniform interface to parallel processing pools and enables switching easily between local development (e.g., serial processing or with multiprocessing) and deployment on a cluster or supercomputer (via, e.g., MPI or JobLib).
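A short usage sketch of the pool interface described above; the worker function and task list are placeholders. Swapping MultiPool for schwimmbad's SerialPool or MPIPool moves the same calculation from local cores to a cluster without changing the calling code.

```python
# Usage sketch of schwimmbad's uniform pool interface: the same
# pool.map(worker, tasks) call works for serial, multiprocessing, and
# MPI pools. The worker and task list here are placeholders.
from schwimmbad import MultiPool

def worker(task):
    # a perfectly parallel calculation on one element of the data set
    return task ** 2

if __name__ == "__main__":
    tasks = range(10000)
    with MultiPool(processes=4) as pool:
        results = list(pool.map(worker, tasks))
```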
The LHCb Grid Simulation: Proof of Concept
NASA Astrophysics Data System (ADS)
Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.
2017-10-01
The Worldwide LHC Computing Grid provides researchers at different geographical locations with access to data and to the computational resources for analyzing it. The grid has a hierarchical topology with multiple sites distributed over the world, with varying numbers of CPUs, amounts of disk storage, and connection bandwidths. Job scheduling and data distribution strategy are key elements of grid performance. Optimizing algorithms for these tasks requires testing them on the real grid, which is hard to achieve. Having a grid simulator can simplify this task and therefore lead to better scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
Probabilistic brains: knowns and unknowns
Pouget, Alexandre; Beck, Jeffrey M; Ma, Wei Ji; Latham, Peter E
2015-01-01
There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference.
Cook, Nicola A; Kim, Jin Un; Pasha, Yasmin; Crossey, Mary Me; Schembri, Adrian J; Harel, Brian T; Kimhofer, Torben; Taylor-Robinson, Simon D
2017-01-01
Psychometric testing is used to identify patients with cirrhosis who have developed hepatic encephalopathy (HE). Most batteries consist of a series of paper-and-pencil tests, which are cumbersome for most clinicians. A modern, easy-to-use, computer-based battery would be a helpful clinical tool, given that even in its minimal form, HE has an impact on both patients' quality of life and the ability to drive and operate machinery (with societal consequences). We compared the Cogstate™ computer battery with the Psychometric Hepatic Encephalopathy Score (PHES) tests, with a view to simplifying the diagnosis. This was a prospective study of 27 patients with histologically proven cirrhosis. An analysis of psychometric testing was performed using accuracy of task performance and speed of completion as primary variables to create a correlation matrix. A stepwise linear regression analysis was performed with backward elimination, using analysis of variance. Strong correlations were found between the Cogstate international shopping list and international shopping list delayed recall tasks and the PHES digit symbol test. The shopping list tasks were the only tasks that consistently had P values of <0.05 in the linear regression analysis. Subtests of the Cogstate battery correlated very strongly with the digit symbol component of PHES in discriminating the severity of HE. These findings indicate that combining components of the current PHES battery with the international shopping list tasks of Cogstate would be discriminant and has the potential to be used easily in clinical practice.
Finding Waldo: Learning about Users from their Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Eli T.; Ottley, Alvitta; Zhao, Helen
Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user’s interactions with a system reflect a large amount of the user’s reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user’s task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and we apply well-known machine learning algorithms to three encodings of the users’ interaction data. We achieve, depending on algorithm and encoding, between 62% and 96% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that using the same techniques, we can infer aspects of the user’s personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time; in some cases, 82% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
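In outline, the approach is standard supervised learning on per-user interaction features. The sketch below is a hedged illustration of that pipeline, not the authors' exact encodings or algorithms; the feature matrix and labels are synthetic placeholders.

```python
# Hedged sketch of the general pipeline: train a standard classifier on
# per-user interaction features to predict fast vs. slow task completion.
# X and y are synthetic placeholders for the encoded interaction data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))   # one row of interaction features per user
y = rng.integers(0, 2, 60)      # 1 = fast completer, 0 = slow completer

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # accuracy per fold
```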
Computer-mediated communication: task performance and satisfaction.
Simon, Andrew F
2006-06-01
The author assessed satisfaction and performance on 3 tasks (idea generation, intellective, judgment) among 75 dyads (N = 150) working through 1 of 3 modes of communication (instant messaging, videoconferencing, face to face). The author based predictions on the Media Naturalness Theory (N. Kock, 2001, 2002) and on findings from past researchers (e.g., D. M. DeRosa, C. Smith, & D. A. Hantula, in press) of the interaction between tasks and media. The present author did not identify task performance differences, although satisfaction with the medium was lower among those dyads communicating through an instant-messaging system than among those interacting face to face or through videoconferencing. The findings support the Media Naturalness Theory. The author discussed them in relation to the participants' frequent use of instant messaging and their familiarity with new communication media.
Using Apex To Construct CPM-GOMS Models
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2006-01-01
A process for automatically generating computational models of human/computer interactions, as well as graphical and textual representations of the models, has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of an analysis according to an underlying method known in the art as the goals, operators, methods, and selection (GOMS) method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about behaviors of skilled computer users in routine tasks, but heretofore, such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to be used to model human behavior in complex, dynamic tasks. An inherent capability of Apex for scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model with measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM). For the test, a Visual Basic mockup of an ATM was created, with a provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model turned out to approximate the measured times fairly well. While these results are promising, there is need for further development of the process. Moreover, it will also be necessary to test other, more complex models: the actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.
Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.
2006-01-01
Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article.
Utility functions and resource management in an oversubscribed heterogeneous computing environment
Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...
2014-09-26
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
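The two key ingredients, time-varying utility and dropping low utility-earning tasks, can be illustrated with a small sketch. This is not the paper's heuristics; the utility shape, drop threshold, and task parameters are assumptions for the example.

```python
# Illustrative sketch of utility-aware scheduling with task dropping:
# tasks earn a time-decaying utility when they finish, and tasks whose
# expected utility has fallen below a threshold are dropped rather than
# allowed to clog an oversubscribed machine. All parameters are assumed.
import heapq

def utility(task, finish_time):
    # time-varying utility: full value if finished by the soft deadline,
    # decaying linearly to zero afterwards
    late = max(0.0, finish_time - task["deadline"])
    return max(0.0, task["value"] * (1.0 - late / task["decay"]))

def run(tasks, now=0.0, drop_below=0.1):
    earned = 0.0
    queue = [(-t["value"], i, t) for i, t in enumerate(tasks)]
    heapq.heapify(queue)                  # highest nominal value first
    while queue:
        _, _, task = heapq.heappop(queue)
        finish = now + task["exec_time"]
        u = utility(task, finish)
        if u < drop_below * task["value"]:
            continue                      # drop the low utility-earning task
        now, earned = finish, earned + u
    return earned

tasks = [{"value": 10, "deadline": 5, "decay": 10, "exec_time": 3},
         {"value": 4, "deadline": 2, "decay": 4, "exec_time": 2},
         {"value": 8, "deadline": 9, "decay": 6, "exec_time": 4}]
total = run(tasks)
```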
NASA Technical Reports Server (NTRS)
Washburn, David A.; Hopkins, William D.; Rumbaugh, Duane M.
1989-01-01
Effects of stimulus movement on learning, transfer, matching, and short-term memory performance were assessed with 2 monkeys using a video-task paradigm in which the animals responded to computer-generated images by manipulating a joystick. Performance on tests of learning set, transfer index, matching to sample, and delayed matching to sample in the video-task paradigm was comparable to that obtained in previous investigations using the Wisconsin General Testing Apparatus. Additionally, learning, transfer, and matching were reliably and significantly better when the stimuli or discriminanda moved than when the stimuli were stationary. External manipulations such as stimulus movement may increase attention to the demands of a task, which in turn should increase the efficiency of learning. These findings have implications for the investigation of learning in other populations, as well as for the application of the video-task paradigm to comparative study.
Virtual fixtures as tools to enhance operator performance in telepresence environments
NASA Astrophysics Data System (ADS)
Rosenberg, Louis B.
1993-12-01
This paper introduces the notion of virtual fixtures for use in telepresence systems and presents an empirical study which demonstrates that such virtual fixtures can greatly enhance operator performance within remote environments. Just as tools and fixtures in the real world can enhance human performance by guiding manual operations, providing localizing references, and reducing the mental processing required to perform a task, virtual fixtures are computer generated percepts overlaid on top of the reflection of a remote workspace which can provide similar benefits. Like a ruler guiding a pencil in a real manipulation task, a virtual fixture overlaid on top of a remote workspace can act to reduce the mental processing required to perform a task, limit the workload of certain sensory modalities, and most of all allow precision and performance to exceed natural human abilities. Because such perceptual overlays are virtual constructions they can be diverse in modality, abstract in form, and custom tailored to individual task or user needs. This study investigates the potential of virtual fixtures by implementing simple combinations of haptic and auditory sensations as perceptual overlays during a standardized telemanipulation task.
Strategic Adaptation to Task Characteristics, Incentives, and Individual Differences in Dual-Tasking
Janssen, Christian P.; Brumby, Duncan P.
2015-01-01
We investigate how good people are at multitasking by comparing behavior to a prediction of the optimal strategy for dividing attention between two concurrent tasks. In our experiment, 24 participants had to interleave entering digits on a keyboard with controlling a randomly moving cursor with a joystick. The difficulty of the tracking task was systematically varied as a within-subjects factor. Participants were also exposed to different explicit reward functions that varied the importance of the tracking task relative to the typing task (between-subjects). Results demonstrate that these changes in task characteristics and monetary incentives, together with individual differences in typing ability, influenced how participants chose to interleave tasks. This change in strategy then affected their performance on each task. A computational cognitive model was used to predict performance for a wide set of alternative strategies for how participants might have interleaved tasks. This allowed predictions of optimal performance to be derived, given the constraints placed on performance by the task and cognition. A comparison of human behavior with the predicted optimal strategy shows that participants behaved near-optimally. Our findings have implications for the design and evaluation of technology for multitasking situations, as consideration should be given to the characteristics of the task, but also to how different users might use technology depending on their individual characteristics and their priorities. PMID:26161851
Measurement of the effect of physical exercise on the concentration of individuals with ADHD.
Silva, Alessandro P; Prado, Sueli O S; Scardovelli, Terigi A; Boschi, Silvia R M S; Campos, Luiz C; Frère, Annie F
2015-01-01
Attention Deficit Hyperactivity Disorder (ADHD) mainly affects the academic performance of children and adolescents. In addition to bringing physical and mental health benefits, physical activity has been used to prevent and improve ADHD comorbidities; however, its effectiveness has not been quantified. In this study, the effect of physical activity on children's attention was measured using a computer game. Intense physical activity was promoted by a relay race, which requires a 5-min run without a rest interval. The proposed physical stimulus was performed by 28 volunteers: 14 with ADHD (GE-EF) and 14 without ADHD symptoms (GC-EF). After 5 min of rest, these volunteers accessed the computer game to accomplish the tasks in the shortest time possible. The computer game was also accessed by another 28 volunteers: 14 with ADHD (GE) and 14 without these symptoms (GC). The response time to solve the tasks that require attention was recorded. The results of the four groups were analyzed using D'Agostino tests of normality, Kruskal-Wallis analyses of variance, and post-hoc Dunn tests. The volunteers with ADHD who performed exercise (GE-EF) showed improved performance on the tasks that require attention, with a difference of 30.52% compared with the volunteers with ADHD who did not perform the exercise (GE). The GE-EF group showed performance similar (2.5% difference) to that of the volunteers in the GC group, who had no ADHD symptoms and did not exercise. This study shows that intense exercise can improve the attention of children with ADHD and may help their school performance.
ERIC Educational Resources Information Center
Rimland, Jeffrey C.
2013-01-01
In many evolving systems, inputs can be derived from both human observations and physical sensors. Additionally, many computation and analysis tasks can be performed by either human beings or artificial intelligence (AI) applications. For example, weather prediction, emergency event response, assistive technology for various human sensory and…
Computed Tomography Measuring Inside Machines
NASA Technical Reports Server (NTRS)
Wozniak, James F.; Scudder, Henry J.; Anders, Jeffrey E.
1995-01-01
Computed tomography was applied to obtain approximate measurements of the radial distances from the centerline of a turbopump to the leading edges of its diffuser vanes. The use of computed tomography has significance beyond the turbopump application: it illustrates the general concept of measuring the internal dimensions of an assembly of parts without having to perform the time-consuming task of taking the assembly apart and measuring the internal parts on a coordinate-measuring machine.
Computer versus Paper-Based Reading: A Case Study in English Language Teaching Context
ERIC Educational Resources Information Center
Solak, Ekrem
2014-01-01
This research aims to determine the preference of prospective English teachers in performing computer and paper-based reading tasks and to what extent computer and paper-based reading influence their reading speed, accuracy and comprehension. The research was conducted at a State run University, English Language Teaching Department in Turkey. The…
PACS 2000: quality control using the task allocation chart
NASA Astrophysics Data System (ADS)
Norton, Gary S.; Romlein, John R.; Lyche, David K.; Richardson, Ronald R., Jr.
2000-05-01
Medical imaging's technological evolution in the next century will continue to include Picture Archive and Communication Systems (PACS) and teleradiology. It is difficult to predict radiology's future in the new millennium, with both computed radiography and direct digital capture competing as the primary image acquisition methods for routine radiography. Changes in Computed Axial Tomography (CT) and Magnetic Resonance Imaging (MRI) continue to amaze the healthcare community. No matter how the acquisition, display, and archive functions change, Quality Control (QC) of the radiographic imaging chain will remain an important step in the imaging process. The Task Allocation Chart (TAC) is a tool that can be used in a medical facility's QC process to indicate the testing responsibilities of the image stakeholders and the medical informatics department. The TAC shows a grid of equipment to be serviced, tasks to be performed, and the organization assigned to perform each task. Additionally, skills, tasks, time, and references for each task can be provided. QC of the PACS must be stressed as a primary element of a PACS implementation. The TAC can be used to clarify responsibilities during warranty and paid maintenance periods. Establishing a TAC as part of a PACS implementation has a positive effect on patient care and clinical acceptance.
FDNS code to predict wall heat fluxes or wall temperatures in rocket nozzles
NASA Technical Reports Server (NTRS)
Karr, Gerald R.
1993-01-01
This report summarizes the findings on the NASA contract NAG8-212, Task No. 3. The overall project consists of three tasks, all of which have been successfully completed. In addition, some supporting supplemental work, not required by the contract, has been performed and is documented herein. Task 1 involved the modification of the wall functions in the code FDNS to use a Reynolds Analogy-based method. Task 2 involved the verification of the code against experimentally available data. The data chosen for comparison was from an experiment involving the injection of helium from a wall jet. Results obtained in completing this task also show the sensitivity of the FDNS code to unknown conditions at the injection slot. Task 3 required computation of the flow of hot exhaust gases through the P&W 40K subscale nozzle. Computations were performed both with and without film coolant injection. The FDNS program tends to overpredict heat fluxes, but, with suitable modeling of backside cooling, may give reasonable wall temperature predictions. For film cooling in the P&W 40K calorimeter subscale nozzle, the average wall temperature is reduced from 1750 R to about 1050 R by the film cooling. The average wall heat flux is reduced by a factor of three.
Simplified Distributed Computing
NASA Astrophysics Data System (ADS)
Li, G. G.
2006-05-01
The distributed computing runs from high performance parallel computing, GRID computing, to an environment where idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications based on existing technology and hardware resources. This system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for this job. The job handler pre-processes the job, partitions the job into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes the tasks, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage the software download and report the status. The processing queue is the key to the success of this distributed system. We use BEA's WebLogic JMS queue in our implementation. It guarantees message delivery and has message priority and retry features so that tasks never get lost. The entire system is built on J2EE technology and can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adaptors are provided to manage and connect existing applications to the system so that applications and algorithms running on Unix, Linux and Windows can all work together. This system is easy and fast to develop based on the industry's well-adopted technology. It is highly scalable and heterogeneous. It is an open system, and any number and type of machines can join to provide computational power. This asynchronous message-based system can achieve response times on the order of seconds. For efficiency, communication between distributed tasks is often done at the start and end of the tasks, but intermediate task status can also be provided.
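The job-handler/task-handler split can be sketched in a few lines. The original system is J2EE with a WebLogic JMS queue; this Python sketch only mirrors the shape of the design, with threads standing in for participating nodes and all names invented for the example.

```python
import queue, threading

task_q, result_q = queue.Queue(), queue.Queue()

def job_handler(job_id, data, n_parts):
    """Pre-process the job, partition it into independent tasks, enqueue them,
    then assemble the task results into the overall solution."""
    chunk = len(data) // n_parts
    for i in range(n_parts):
        part = data[i * chunk:] if i == n_parts - 1 else data[i * chunk:(i + 1) * chunk]
        task_q.put((job_id, i, part))
    results = [result_q.get() for _ in range(n_parts)]   # wait for all parts
    return sum(r for _, _, r in results)                 # assemble the solution

def task_handler():
    """Pick up tasks from the queue, process them, put results back."""
    while True:
        job_id, i, part = task_q.get()
        result_q.put((job_id, i, sum(x * x for x in part)))   # the 'work'
        task_q.task_done()

for _ in range(4):   # participating nodes, simulated here as threads
    threading.Thread(target=task_handler, daemon=True).start()

print(job_handler("job-1", list(range(1000)), n_parts=4))   # sum of squares 0..999
```

A JMS queue adds what this toy lacks: guaranteed delivery, priorities, and retries, which is why the authors lean on it so that tasks never get lost.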
NASA Technical Reports Server (NTRS)
Siapkaras, A.
1977-01-01
A computational method to deal with the multidimensional nature of tracking and/or monitoring tasks is developed. Operator-centered variables, including the operator's perception of the task, are considered. Matrix ratings are defined based on multidimensional scaling techniques and multivariate analysis. The method consists of two distinct steps: (1) to determine the mathematical space of subjective judgments of a certain individual (or group of evaluators) for a given set of tasks and experimental conditions; and (2) to relate this space to both the task variables and the objective performance criteria used. Results for a variety of second-order tracking tasks with smoothed noise-driven inputs indicate that: (1) many of the internally perceived task variables form a nonorthogonal set; and (2) the structure of the subjective space varies among groups of individuals according to the degree of familiarity they have with such tasks.
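Step (1), recovering a judgment space from subjective ratings, is in essence multidimensional scaling. A sketch using scikit-learn's MDS; the dissimilarity matrix below is invented for illustration and is not data from the report.

```python
import numpy as np
from sklearn.manifold import MDS

# Assumed pairwise dissimilarities between ratings of four tracking tasks.
D = np.array([[0.0, 1.0, 3.0, 4.0],
              [1.0, 0.0, 2.5, 3.5],
              [3.0, 2.5, 0.0, 1.2],
              [4.0, 3.5, 1.2, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # 2-D coordinates of the subjective-judgment space
print(coords)
```

Relating the recovered axes to task variables and objective performance criteria, step (2), would then proceed by regression or correlation against coords.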
Cooperation, Technology, and Performance: A Case Study.
ERIC Educational Resources Information Center
Cavanagh, Thomas; Dickenson, Sabrina; Brandt, Suzanne
1999-01-01
Describes the CTP (Cooperation, Technology, and Performance) model and explains how it is used by the Department of Veterans Affairs-Veteran's Benefit Administration (VBA) for training. Discusses task analysis; computer-based training; cooperative-based learning environments; technology-based learning; performance-assessment methods; courseware…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, L.; Notkin, D.; Adams, L.
1990-03-31
This task relates to research on programming massively parallel computers. Previous work on the Ensemble concept of programming was extended, and an investigation into nonshared memory models of parallel computation was undertaken. Previous work on the Ensemble concept defined a set of programming abstractions and was used to organize the programming task into three distinct levels: composition of machine instructions, composition of processes, and composition of phases. It was applied to shared memory models of computation. During the present research period, these concepts were extended to nonshared memory models. During this period, one Ph.D. thesis was completed, and one book chapter and six conference papers were published.
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
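For the path-structured computations, one classic hypercube embedding (a standard construction, not necessarily the algorithm developed here) uses the binary-reflected Gray code, which places consecutive tasks on adjacent nodes:

```python
def gray(i):
    """Binary-reflected Gray code of i."""
    return i ^ (i >> 1)

def embed_path_in_hypercube(n_dims):
    """Map a path of 2**n_dims tasks onto hypercube nodes so that consecutive
    tasks land on neighbouring nodes (Hamming distance exactly 1)."""
    return [gray(i) for i in range(2 ** n_dims)]

nodes = embed_path_in_hypercube(3)
print([format(v, "03b") for v in nodes])
# ['000', '001', '011', '010', '110', '111', '101', '100']: each step flips one bit
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(nodes, nodes[1:]))
```

Adjacency in the embedding keeps communication between successive tasks to single hops on the target architecture.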
DISPATCH: a numerical simulation framework for the exa-scale era - I. Fundamentals
NASA Astrophysics Data System (ADS)
Nordlund, Åke; Ramsey, Jon P.; Popovas, Andrius; Küffmeier, Michael
2018-06-01
We introduce a high-performance simulation framework that permits the semi-independent, task-based solution of sets of partial differential equations, typically manifesting as updates to a collection of `patches' in space-time. A hybrid MPI/OpenMP execution model is adopted, where work tasks are controlled by a rank-local `dispatcher' which selects, from a set of tasks generally much larger than the number of physical cores (or hardware threads), tasks that are ready for updating. The definition of a task can vary, for example, with some solving the equations of ideal magnetohydrodynamics (MHD), others non-ideal MHD, radiative transfer, or particle motion, and yet others applying particle-in-cell (PIC) methods. Tasks do not have to be grid based, while tasks that are, may use either Cartesian or orthogonal curvilinear meshes. Patches may be stationary or moving. Mesh refinement can be static or dynamic. A feature of decisive importance for the overall performance of the framework is that time-steps are determined and applied locally; this allows potentially large reductions in the total number of updates required in cases when the signal speed varies greatly across the computational domain, and therefore a corresponding reduction in computing time. Another feature is a load balancing algorithm that operates `locally' and aims to simultaneously minimize load and communication imbalance. The framework generally relies on already existing solvers, whose performance is augmented when run under the framework, due to more efficient cache usage, vectorization, local time-stepping, plus near-linear and, in principle, unlimited OpenMP and MPI scaling.
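A toy sketch of the locally determined time-steps (not DISPATCH code; the patch layout, readiness rule, and dt values are assumptions): each patch advances on its own dt, and the dispatcher repeatedly updates the furthest-behind patch whose neighbours can supply guard data.

```python
# Patches carry their own time and time-step; dt varies with the local signal speed.
patches = {0: {"t": 0.0, "dt": 0.1}, 1: {"t": 0.0, "dt": 0.4}, 2: {"t": 0.0, "dt": 0.2}}
neighbours = {0: [1], 1: [0, 2], 2: [1]}

def ready(pid):
    """A patch may update only if every neighbour can cover its current time
    (otherwise it would need data from the neighbour's future)."""
    return all(patches[n]["t"] + patches[n]["dt"] >= patches[pid]["t"]
               for n in neighbours[pid])

def dispatch(t_end):
    updates = 0
    while min(p["t"] for p in patches.values()) < t_end:
        # select the furthest-behind patch among those ready for updating
        pid = min((p for p in patches if ready(p)), key=lambda p: patches[p]["t"])
        patches[pid]["t"] += patches[pid]["dt"]   # 'solve' the patch over one local step
        updates += 1
    return updates

# 18 local updates reach t = 1.0; a single global step of dt = 0.1 would need 30.
print(dispatch(1.0))
```

The count illustrates the paper's point: when the signal speed (and hence dt) varies across the domain, local time-stepping cuts the total number of updates.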
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
In order to satisfy real-time and generality requirements, a laser target simulator for a semi-physical simulation system based on the RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper-lower computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension subsystem to ensure real-time performance, combined with a reflective memory network to complete real-time tasks such as calculating the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run under the RTSS process. At the same time, we use LabWindows/CVI to build a graphical interface and complete the non-real-time tasks in the simulation process, such as man-machine interaction and the display and storage of simulation data, which run under the Win32 process. Through the design of RTX shared memory and a task scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is accomplished. The experimental results show that this system has strong real-time performance, high stability, and high simulation accuracy, as well as good human-computer interaction.
The role of early visual cortex in visual short-term memory and visual attention.
Offen, Shani; Schluppeck, Denis; Heeger, David J
2009-06-01
We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.
Improving Physical Task Performance with Counterfactual and Prefactual Thinking
Hammell, Cecilia; Chan, Amy Y. C.
2016-01-01
Counterfactual thinking (reflecting on “what might have been”) has been shown to enhance future performance by translating information about past mistakes into plans for future action. Prefactual thinking (imagining “what might be if…”) may serve a greater preparative function than counterfactual thinking as it is future-orientated and focuses on more controllable features, thus providing a practical script to prime future behaviour. However, whether or not this difference in hypothetical thought content may translate into a difference in actual task performance has been largely unexamined. In Experiment 1 (n = 42), participants performed trials of a computer-simulated physical task, in between which they engaged in either task-related hypothetical thinking (counterfactual or prefactual) or an unrelated filler task (control). As hypothesised, prefactuals contained more controllable features than counterfactuals. Moreover, participants who engaged in either form of hypothetical thinking improved significantly in task performance over trials compared to participants in the control group. The difference in thought content between counterfactuals and prefactuals, however, did not yield a significant difference in performance improvement. Experiment 2 (n = 42) replicated these findings in a dynamic balance task environment. Together, these findings provide further evidence for the preparatory function of counterfactuals, and demonstrate that prefactuals share this same functional characteristic. PMID:27942041
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Myer, Robert R.; Latorella, Kara A.; Comstock, James R., Jr.
2011-01-01
The Multi-Attribute Task Battery (MAT Battery) is a computer-based task designed to evaluate operator performance and workload. It has been redeveloped to operate in the Windows XP Service Pack 3, Windows Vista, and Windows 7 operating systems. MATB-II includes essentially the same tasks as the original MAT Battery, plus new configuration options, including a graphical user interface for controlling modes of operation. MATB-II can be executed either in training or testing mode, as defined by the MATB-II configuration file. The configuration file also allows setup of the default timeouts for the tasks, and of the flow rates of the pumps and tank levels of the Resource Management (RESMAN) task. MATB-II comes with a default event file that an experimenter can modify and adapt.
Development of New Generation of Multibody System Computer Software
2012-04-12
Shabana, Ahmed A. (University of Illinois at Chicago); Jayakumar, Paramsothy; Letherwood, Michael
A view of Kanerva's sparse distributed memory
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Pentti Kanerva is working on a new class of computers, which are called pattern computers. Pattern computers may close the gap between capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. The overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.
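The read/write mechanics of an SDM are compact enough to sketch. A minimal toy (the dimensions, access radius, and names are assumed; Kanerva's design is far richer):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 256, 1000, 120        # word length, hard locations, access radius (assumed)
addresses = rng.integers(0, 2, (M, N))   # fixed random hard-location addresses
counters = np.zeros((M, N), dtype=int)   # content counters at each location

def activated(addr):
    """Hard locations within Hamming distance R of the address."""
    return np.count_nonzero(addresses != addr, axis=1) <= R

def write(addr, data):
    """Add +1/-1 to the counters of every activated location (data is 0/1)."""
    counters[activated(addr)] += 2 * data - 1

def read(addr):
    """Majority vote over activated locations reconstructs the stored pattern."""
    return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, N)
write(pattern, pattern)                        # autoassociative storage
noisy = pattern.copy(); noisy[:20] ^= 1        # cue corrupted in 20 of 256 bits
print(np.array_equal(read(noisy), pattern))    # -> True: pattern recovered
```

The distributed, best-match recall shown here is exactly the property that suits pattern computers to recognizing noisy sensory input.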
NASA Astrophysics Data System (ADS)
Thomas, W. A.; McAnally, W. H., Jr.
1985-07-01
TABS-2 is a generalized numerical modeling system for open-channel flows, sedimentation, and constituent transport. It consists of more than 40 computer programs to perform modeling and related tasks. The major modeling components--RMA-2V, STUDH, and RMA-4--calculate two-dimensional, depth-averaged flows, sedimentation, and dispersive transport, respectively. The other programs in the system perform digitizing, mesh generation, data management, graphical display, output analysis, and model interfacing tasks. Utilities include file management and automatic generation of computer job control instructions. TABS-2 has been applied to a variety of waterways, including rivers, estuaries, bays, and marshes. It is designed for use by engineers and scientists who may not have a rigorous computer background. Use of the various components is described in Appendices A-O. The bound version of the report does not include the appendices. A looseleaf form with Appendices A-O is distributed to system users.
Seeing the forest for the trees: Networked workstations as a parallel processing computer
NASA Technical Reports Server (NTRS)
Breen, J. O.; Meleedy, D. M.
1992-01-01
Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms for selecting nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.
Neural networks as a control methodology
NASA Technical Reports Server (NTRS)
Mccullough, Claire L.
1990-01-01
While conventional computers must be programmed in a logical fashion by a person who thoroughly understands the task to be performed, the motivation behind neural networks is to develop machines which can train themselves to perform tasks, using available information about desired system behavior and learning from experience. There are three goals of this fellowship program: (1) to evaluate various neural net methods and generate computer software to implement those deemed most promising on a personal computer equipped with Matlab; (2) to evaluate methods currently in the professional literature for system control using neural nets to choose those most applicable to control of flexible structures; and (3) to apply the control strategies chosen in (2) to a computer simulation of a test article, the Control Structures Interaction Suitcase Demonstrator, which is a portable system consisting of a small flexible beam driven by a torque motor and mounted on springs tuned to the first flexible mode of the beam. Results of each are discussed.
Jeunet, Camille; Jahanpour, Emilie; Lotte, Fabien
2016-06-01
While promising, electroencephalography-based brain-computer interfaces (BCIs) are barely used due to their lack of reliability: 15% to 30% of users are unable to control a BCI. Standard training protocols may be partly responsible, as they do not satisfy recommendations from psychology. Our main objective was to determine in practice to what extent standard training protocols impact users' motor imagery-based BCI (MI-BCI) control performance. We performed two experiments. The first consisted of evaluating the efficiency of a standard BCI training protocol for the acquisition of non-BCI related skills in a BCI-free context, which enabled us to rule out the possible impact of BCIs on the training outcome. Thus, participants (N = 54) were asked to perform simple motor tasks. The second experiment was aimed at measuring the correlations between motor tasks and MI-BCI performance. The ten best and ten worst performers of the first study were recruited for an MI-BCI experiment during which they had to learn to perform two MI tasks. We also assessed users' spatial ability and pre-training μ rhythm amplitude, as both have been related to MI-BCI performance in the literature. Around 17% of the participants were unable to learn to perform the motor tasks, which is close to the BCI illiteracy rate. This suggests that standard training protocols are suboptimal for skill teaching. No correlation was found between motor tasks and MI-BCI performance. However, spatial ability played an important role in MI-BCI performance. In addition, once the spatial ability covariable had been controlled for, using an ANCOVA, it appeared that participants who faced difficulty during the first experiment improved during the second while the others did not. These studies suggest that (1) standard MI-BCI training protocols are suboptimal for skill teaching, (2) spatial ability is confirmed as impacting on MI-BCI performance, and (3) when faced with difficult pre-training, subjects seemed to explore more strategies and therefore learn better.
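The covariate-adjusted comparison in the last step is a textbook ANCOVA and is easy to reproduce. A sketch with statsmodels on invented data (the group labels, column names, and effect sizes are all assumptions, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20
df = pd.DataFrame({
    "group": ["difficulty"] * 10 + ["no_difficulty"] * 10,  # hypothetical labels
    "spatial": rng.normal(100, 15, n),                      # spatial-ability covariate
})
# Simulated MI-BCI score that depends on spatial ability plus a group effect.
df["mi_bci"] = (50 + 0.3 * df["spatial"] + (df["group"] == "difficulty") * 5
                + rng.normal(0, 3, n))

# ANCOVA: group effect on MI-BCI performance, controlling for spatial ability.
model = smf.ols("mi_bci ~ C(group) + spatial", data=df).fit()
print(model.summary().tables[1])
```

The C(group) coefficient is the group difference that remains once the spatial-ability covariate has been regressed out, which is the comparison the authors report.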
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, B.
1997-07-01
Computer technology has improved tremendously in recent years, with larger media capacity, more memory, and more computational power. Visual computing, with high-performance graphical interfaces and desktop computational power, has changed the way engineers accomplish everyday tasks, development work, and safety analyses. The emergence of parallel computing will permit simulation over larger domains. In addition, new development methods, languages, and tools have appeared in the last several years.
Buccafusco, Jerry J; Terry, Alvin V; Webster, Scott J; Martin, Daniel; Hohnadel, Elizabeth J; Bouchard, Kristy A; Warner, Samantha E
2008-08-01
The scopolamine-reversal model is enjoying a resurgence of interest in clinical studies as a reversible pharmacological model for Alzheimer's disease (AD). The cognitive impairment associated with scopolamine is similar to that in AD. The scopolamine model is not simply a cholinergic model, as it can be reversed by drugs that are noncholinergic cognition-enhancing agents. The objective of the study was to determine relevance of computer-assisted operant-conditioning tasks in the scopolamine-reversal model in rats and monkeys. Rats were evaluated for their acquisition of a spatial reference memory task in the Morris water maze. A separate cohort was proficient in performance of an automated delayed stimulus discrimination task (DSDT). Rhesus monkeys were proficient in the performance of an automated delayed matching-to-sample task (DMTS). The AD drug donepezil was evaluated for its ability to reverse the decrements in accuracy induced by scopolamine administration in all three tasks. In the DSDT and DMTS tasks, the effects of donepezil were delay (retention interval)-dependent, affecting primarily short delay trials. Donepezil produced significant but partial reversals of the scopolamine-induced impairment in task accuracies after 2 mg/kg in the water maze, after 1 mg/kg in the DSDT, and after 50 microg/kg in the DMTS task. The two operant-conditioning tasks (DSDT and DMTS) provided data most in keeping with those reported in clinical studies with these drugs. The model applied to nonhuman primates provides an excellent transitional model for new cognition-enhancing drugs before clinical trials.
Wearable computer for mobile augmented-reality-based controlling of an intelligent robot
NASA Astrophysics Data System (ADS)
Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino
2000-10-01
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans is needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
The development of expertise on an intelligent tutoring system
NASA Technical Reports Server (NTRS)
Johnson, Debra Steele
1989-01-01
An initial examination was conducted of an Intelligent Tutoring System (ITS) developed for use in industry. The ITS, developed by NASA, simulated a satellite deployment task. More specifically, the PD (Payload Assist Module Deployment)/ICAT (Intelligent Computer Aided Training) System simulated a nominal Payload Assist Module (PAM) deployment. The development of expertise on this task was examined using three Flight Dynamics Officer (FDO) candidates who had no previous experience with this task. The results indicated that performance improved rapidly until Trial 5, followed by more gradual improvements through Trial 12. The performance dimensions measured included performance speed, actions completed, errors, help required, and display fields checked. Suggestions for further refining the software and for deciding when to expose trainees to more difficult task scenarios are discussed. Further, the results provide an initial demonstration of the effectiveness of the PD/ICAT system in training the nominal PAM deployment task and indicate the potential benefits of using ITS's for training other FDO tasks.
Effects of competition on video-task performance in monkeys (Macaca mulatta)
NASA Technical Reports Server (NTRS)
Washburn, David A.; Hopkins, William D.; Rumbaugh, Duane M.
1990-01-01
The effects of competition on performance of a video-formatted task were examined in a series of experiments. Two rhesus monkeys (Macaca mulatta) were trained to manipulate a joystick to shoot at moving targets on a computer screen. The task was made competitive by requiring both animals to shoot at the same target and by rewarding only the animal that hit the target first each trial. The competitive task produced a significant and robust speed-accuracy trade-off in performance. The monkeys hit the target in significantly less time on contested than on uncontested trials. However, they required significantly more shots to hit the target on contested trials in relation to uncontested trials. This effect was unchanged when various schedules of reinforcement were introduced in the uncontested trials. This supports the influence of competition qua competition on performance, a point further bolstered by other findings of behavioral contrast presented here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin
The increasing volume of scientific data and the limited scalability and performance of storage systems are currently a significant limitation for the productivity of scientific workflows running on both high-performance computing (HPC) and cloud platforms. Better integration of storage systems and workflow engines is clearly needed to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules, an in-memory data store, with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. The experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
Chen, Ping-Shun; Yu, Chun-Jen; Chen, Gary Yu-Hsin
2015-08-01
With the growth in the number of elderly people and people with chronic diseases, the number of hospital services will need to increase in the near future. With a myriad of information technologies utilized daily and crucial information-sharing tasks performed at hospitals, understanding the relationship between task performance and information systems has become a critical topic. This research explored the resource pooling of hospital management and considered a computed tomography (CT) patient-referral mechanism between two hospitals using the Task-Technology Fit (TTF) model from information systems theory. The TTF model can be used to assess the 'match' between task and technology characteristics. The patient-referral process involved an integrated information framework consisting of a hospital information system (HIS), radiology information system (RIS), and picture archiving and communication system (PACS). A formal interview was conducted with the director of the case image center on the applicable characteristics of the TTF model. Next, the ICAM DEFinition (IDEF0) method was utilized to depict the As-Is and To-Be models for CT patient-referral medical operational processes. Further, the study used the 'leagility' concept to remove non-value-added activities and increase the agility of the hospitals. The results indicated that hospital information systems could support the CT patient-referral mechanism, increase hospital performance, reduce patient wait time, and enhance the quality of care for patients.
The effects of working memory on brain-computer interface performance.
Sprague, Samantha A; McBee, Matthew T; Sellers, Eric W
2016-02-01
The purpose of the present study is to evaluate the relationship between working memory and BCI performance. Participants took part in two separate sessions. The first session consisted of three computerized tasks. The List Sorting Working Memory Task was used to measure working memory, the Picture Vocabulary Test was used to measure general intelligence, and the Dimensional Change Card Sort Test was used to measure executive function, specifically cognitive flexibility. The second session consisted of a P300-based BCI copy-spelling task. The results indicate that both working memory and general intelligence are significant predictors of BCI performance. This suggests that working memory training could be used to improve performance on a BCI task. Working memory training may help to reduce a portion of the individual differences that exist in BCI performance, allowing a wider range of users to successfully operate the BCI system and increasing the performance of current users.
NASA Astrophysics Data System (ADS)
Felton, E. A.; Radwin, R. G.; Wilson, J. A.; Williams, J. C.
2009-10-01
A brain-computer interface (BCI) is a communication system that takes recorded brain signals and translates them into real-time actions, in this case movement of a cursor on a computer screen. This work applied Fitts' law to the evaluation of performance on a target acquisition task during sensorimotor rhythm-based BCI training. Fitts' law, which has been used as a predictor of movement time in studies of human movement, was used here to determine the information transfer rate, which was based on target acquisition time and target difficulty. The information transfer rate was used to make comparisons between control modalities and subject groups on the same task. Data were analyzed from eight able-bodied and five motor disabled participants who wore an electrode cap that recorded and translated their electroencephalogram (EEG) signals into computer cursor movements. Direct comparisons were made between able-bodied and disabled subjects, and between EEG and joystick cursor control in able-bodied subjects. Fitts' law aptly described the relationship between movement time and index of difficulty for each task movement direction when evaluated separately and averaged together. This study showed that Fitts' law can be successfully applied to computer cursor movement controlled by neural signals.
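The quantities involved have standard closed forms: Fitts' index of difficulty ID = log2(2D/W) for a target at distance D with width W, the law MT = a + b·ID, and a throughput of ID/MT bits per second. A minimal illustration (the trial numbers below are invented, not the study's data):

```python
import numpy as np

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits (the original 2D/W formulation)."""
    return np.log2(2.0 * distance / width)

# Assumed cursor trials: target distances, widths, and movement times.
D = np.array([80.0, 160.0, 320.0, 320.0])    # pixels
W = np.array([40.0, 40.0, 40.0, 20.0])       # pixels
MT = np.array([0.9, 1.2, 1.5, 1.8])          # seconds

ID = index_of_difficulty(D, W)               # -> [2, 3, 4, 5] bits
b, a = np.polyfit(ID, MT, 1)                 # slope and intercept of MT = a + b*ID
itr = np.mean(ID / MT)                       # information transfer rate, bits/s
print(f"a = {a:.2f} s, b = {b:.2f} s/bit, ITR = {itr:.2f} bits/s")
```

Comparing the fitted slopes or the mean ITR across control modalities (EEG versus joystick) is the kind of comparison the study performs.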
Effect of Modality and Task Type on Interlanguage Variation
ERIC Educational Resources Information Center
Kim, Hye Yeong
2017-01-01
An essential component for assessing the accuracy and fluency of language learners is understanding how mode of communication and task type affect performance in second-language (L2) acquisition. This study investigates how text-based synchronous computer-mediated communication (SCMC) and face-to-face (F2F) oral interaction can influence the…
Task Virtuality and Its Effect on Student Project Team Effectiveness
ERIC Educational Resources Information Center
Pineda, Rodley C.
2015-01-01
This study explores the extent to which students in colocated teams use synchronous and asynchronous computer-mediated communication channels (task virtuality) and how this use affects their perceptions of the team's performance, their satisfaction with the team, and the learning they derive from the process. Survey results show that different…
Designing Effective Web Forms for Older Web Users
ERIC Educational Resources Information Center
Li, Hui; Rau, Pei-Luen Patrick; Fujimura, Kaori; Gao, Qin; Wang, Lin
2012-01-01
This research aims to provide insight for web form design for older users. The effects of task complexity and information structure of web forms on older users' performance were examined. Forty-eight older participants with abundant computer and web experience were recruited. The results showed significant differences in task time and error rate…
The Role of Context in a Collaborative Problem-Solving Task during Professional Development
ERIC Educational Resources Information Center
Ritella, Giuseppe; Ligorio, Maria Beatrice; Hakkarainen, Kai
2016-01-01
This article analyses how a group of teachers managed the resources available while performing computer-supported collaborative problem-solving tasks in the context of professional development. The authors video-recorded and analysed collaborative sessions during which the group of teachers used a digital environment to prepare a pedagogical…
Is Neural Activity Detected by ERP-Based Brain-Computer Interfaces Task Specific?
Wenzel, Markus A; Almeida, Inês; Blankertz, Benjamin
2016-01-01
Brain-computer interfaces (BCIs) that are based on event-related potentials (ERPs) can estimate to which stimulus a user pays particular attention. In typical BCIs, the user silently counts the selected stimulus (which is repeatedly presented among other stimuli) in order to focus the attention. The stimulus of interest is then inferred from the electroencephalogram (EEG). Detecting attention allocation implicitly could be also beneficial for human-computer interaction (HCI), because it would allow software to adapt to the user's interest. However, a counting task would be inappropriate for the envisaged implicit application in HCI. Therefore, the question was addressed if the detectable neural activity is specific for silent counting, or if it can be evoked also by other tasks that direct the attention to certain stimuli. Thirteen people performed a silent counting, an arithmetic and a memory task. The tasks required the subjects to pay particular attention to target stimuli of a random color. The stimulus presentation was the same in all three tasks, which allowed a direct comparison of the experimental conditions. Classifiers that were trained to detect the targets in one task, according to patterns present in the EEG signal, could detect targets in all other tasks (irrespective of some task-related differences in the EEG). The neural activity detected by the classifiers is not strictly task specific but can be generalized over tasks and is presumably a result of the attention allocation or of the augmented workload. The results may hold promise for the transfer of classification algorithms from BCI research to implicit relevance detection in HCI.
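The core test, training a target detector on one task and applying it to another, can be sketched with a linear discriminant (a common ERP classifier) on synthetic features. Everything below (the feature model, shift size, and names) is invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

def simulate_erp_features(n_trials, n_features=10, task_shift=0.0):
    """Targets carry an added attention-related component; task_shift mimics
    task-related differences in the background EEG."""
    y = rng.integers(0, 2, n_trials)                     # 1 = attended target
    X = rng.normal(0, 1, (n_trials, n_features)) + task_shift
    X[y == 1] += 0.8                                     # 'P300-like' component
    return X, y

X_count, y_count = simulate_erp_features(200)                   # 'silent counting'
X_arith, y_arith = simulate_erp_features(200, task_shift=0.3)   # 'arithmetic'

clf = LinearDiscriminantAnalysis().fit(X_count, y_count)  # train on one task...
print(f"cross-task accuracy: {clf.score(X_arith, y_arith):.2f}")  # ...test on another
```

Above-chance cross-task accuracy is the signature the authors report: the detected activity generalizes over tasks rather than being specific to silent counting.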
An intelligent multi-media human-computer dialogue system
NASA Technical Reports Server (NTRS)
Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.
1988-01-01
Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.
Wongcharoen, Suleeporn; Sungkarat, Somporn; Munkhetvit, Peeraya; Lugade, Vipul; Silsupadol, Patima
2017-02-01
The purpose of this study was to compare the efficacy of four different home-based interventions on dual-task balance performance and to determine the generalizability of the four trainings to untrained tasks. Sixty older adults, aged 65 and older, were randomly assigned to one of four home-based interventions: single-task motor training, single-task cognitive training, dual-task motor-cognitive training, and dual-task cognitive-cognitive training. Participants received 60-min individualized training sessions, 3 times a week for 4 weeks. Prior to and following the training program, participants were asked to walk under two single-task conditions (i.e. narrow walking and obstacle crossing) and two dual-task conditions (i.e. a trained narrow walking while performing verbal fluency task and an untrained obstacle crossing while counting backward by 3s task). A nine-camera motion capture system was used to collect the trajectories of 32 reflective markers placed on bony landmarks of participants. Three-dimensional kinematics of the whole body center of mass and base of support were computed. Results from the extrapolated center of mass displacement indicated that motor-cognitive training was more effective than the single-task motor training to improve dual-task balance performance (p=0.04, ES=0.11). Interestingly, balance performance under both single-task and dual-task conditions can also be improved through a non-motor, single-task cognitive training program (p=0.01, ES=0.13, and p=0.01, ES=0.11, respectively). However, improved dual-task processing skills during training were not transferred to the novel dual task (p=0.15, ES=0.09). This is the first study demonstrating that home-based dual-task training can be effectively implemented to improve balance performance during gait in older adults.
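The extrapolated center of mass used as the outcome here has a standard definition (Hof's XCoM: the CoM position plus CoM velocity scaled by the inverted-pendulum frequency). A minimal single-frame illustration with assumed values:

```python
import numpy as np

def extrapolated_com(com_pos, com_vel, leg_length, g=9.81):
    """XCoM = CoM + v / omega0, with omega0 = sqrt(g / l) (Hof et al., 2005).
    Dynamic balance requires the XCoM to stay within the base of support."""
    omega0 = np.sqrt(g / leg_length)
    return com_pos + com_vel / omega0

# One illustrative frame: CoM 4 cm inside the BoS edge but moving toward it.
xcom = extrapolated_com(com_pos=0.10, com_vel=0.25, leg_length=0.9)  # m, m/s, m
bos_edge = 0.14
print(f"XCoM = {xcom:.3f} m, stability margin = {bos_edge - xcom:.3f} m")
```

A negative margin, as in this frame, flags an unstable state unless a step or other corrective response follows; changes in this displacement measure are what the training comparison is based on.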
Computer vision syndrome (CVS) - Thermographic Analysis
NASA Astrophysics Data System (ADS)
Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.
2017-01-01
The use of computers has reported an exponential growth in the last decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their great acceptance by users. The consequences and impact of uninterrupted tasks with computer screens or displays on visual health have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great efforts, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of them are blurred vision, visual fatigue, and Dry Eye Syndrome (DES) due to inappropriate lubrication of the ocular surface when blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the existing differences in temperature variations of healthy ocular surfaces.
Enabling computer decisions based on EEG input.
Culpepper, Benjamin J; Keller, Robert M
2003-12-01
Multilayer neural networks were successfully trained to classify segments of 12-channel electroencephalogram (EEG) data into one of five classes corresponding to five cognitive tasks performed by a subject. Independent component analysis (ICA) was used to segregate obvious artifact EEG components from other sources, and a frequency-band representation was used to represent the sources computed by ICA. Examples of results include an 85% accuracy rate on differentiation between two tasks, using a segment of EEG only 0.05 s long and a 95% accuracy rate using a 0.5-s-long segment.
Enabling computer decisions based on EEG input
NASA Technical Reports Server (NTRS)
Culpepper, Benjamin J.; Keller, Robert M.
2003-01-01
Multilayer neural networks were successfully trained to classify segments of 12-channel electroencephalogram (EEG) data into one of five classes corresponding to five cognitive tasks performed by a subject. Independent component analysis (ICA) was used to segregate obvious artifact EEG components from other sources, and a frequency-band representation was used to represent the sources computed by ICA. Examples of results include an 85% accuracy rate on differentiation between two tasks, using a segment of EEG only 0.05 s long and a 95% accuracy rate using a 0.5-s-long segment.
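The pipeline (source separation, frequency-band features, multilayer network) can be mocked end to end with standard tools. This is a toy recreation on synthetic data, not the study's code: scikit-learn's FastICA and MLPClassifier stand in for the original components, the two 'tasks' differ only in the frequency of an oscillatory component, and no artifact rejection is shown.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
fs = 250                                      # assumed sampling rate (Hz)
t = np.arange(int(0.5 * fs)) / fs             # 0.5 s segments

def band_power(sources, bands=((4, 8), (8, 13), (13, 30))):
    """Frequency-band representation: mean power per band, pooled over sources
    (pooling keeps features invariant to ICA's arbitrary source ordering)."""
    f, psd = welch(sources, fs=fs, nperseg=64)
    return np.array([psd[:, (f >= lo) & (f < hi)].mean() for lo, hi in bands])

# Synthetic 12-channel segments for two 'cognitive tasks' (10 Hz vs 20 Hz rhythm).
X, y = [], []
for _ in range(100):
    label = int(rng.integers(0, 2))
    rhythm = np.sin(2 * np.pi * (10 if label == 0 else 20) * t)
    seg = rng.normal(0, 1, (12, t.size)) + rhythm
    sources = FastICA(n_components=12, random_state=0).fit_transform(seg.T).T
    X.append(band_power(sources)); y.append(label)

# Multilayer network on the band-power features, trained on 80 segments.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
print(clf.fit(X[:80], y[:80]).score(X[80:], y[80:]))   # held-out accuracy
```

In the real pipeline, ICA's role was to segregate artifact components before feature extraction; here it appears only to keep the structure of the processing chain visible.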
Timing considerations of Helmet Mounted Display performance
NASA Technical Reports Server (NTRS)
Tharp, Gregory; Liu, Andrew; French, Lloyd; Lai, Steve; Stark, Lawrence
1992-01-01
The Helmet Mounted Display (HMD) system developed in our lab should be a useful teleoperator systems display if it increases operator performance of the desired task; it can, however, introduce degradation in performance due to display update rate constraints and communication delays. Display update rates are slowed by communication bandwidth and/or computational power limitations. We used simulated 3D tracking and pick-and-place tasks to characterize performance levels for a range of update rates. Initial experiments with 3D tracking indicate that performance levels plateau at an update rate between 10 and 20 Hz. We have found that using the HMD with delay decreases performance as delay increases.
Use of computers in dysmorphology.
Diliberti, J H
1988-01-01
As a consequence of the increasing power and decreasing cost of digital computers, dysmorphologists have begun to explore a wide variety of computerised applications in clinical genetics. Of considerable interest are developments in the areas of syndrome databases, expert systems, literature searches, image processing, and pattern recognition. Each of these areas is reviewed from the perspective of the underlying computer principles, existing applications, and the potential for future developments. Particular emphasis is placed on the analysis of the tasks performed by the dysmorphologist and the design of appropriate tools to facilitate these tasks. In this context the computer and associated software are considered paradigmatically as tools for the dysmorphologist and should be designed accordingly. Continuing improvements in the ability of computers to manipulate vast amounts of data rapidly makes the development of increasingly powerful tools for the dysmorphologist highly probable. PMID:3050092
Dual tasking and stuttering: from the laboratory to the clinic.
Metten, Christine; Bosshardt, Hans-Georg; Jones, Mark; Eisenhuth, John; Block, Susan; Carey, Brenda; O'Brian, Sue; Packman, Ann; Onslow, Mark; Menzies, Ross
2011-01-01
The aim of the three studies in this article was to develop a way to include dual tasking in speech restructuring treatment for persons who stutter (PWS). It is thought that this may help clients maintain the benefits of treatment in the real world, where attentional resources are frequently diverted away from controlling fluency by the demands of other tasks. In Part 1, 17 PWS performed a story-telling task and a computer semantic task simultaneously. Part 2 reports the incorporation of the Part 1 protocol into a handy device for use in a clinical setting (the Dual Task and Stuttering Device, DAS-D). Part 3 is a proof of concept study in which three PWS reported on their experiences of using the device during treatment. In Part 1, stuttering frequency and errors on the computer task both increased under dual task conditions, indicating that the protocol would be appropriate for use in a clinical setting. All three participants in Part 3 reported positively on their experiences using the DAS-D. Dual tasking during treatment using the DAS-D appears to be a viable clinical procedure. Further research is required to establish effectiveness.
Human Factors in Aviation Maintenance. Phase 3. Volume 2. Progress Report
1994-07-01
This phase of the work used a computer-simulated NDI eddy-current inspection task developed by Dr. Colin G. Drury and his colleagues at the State University of New York (SUNY) at Buffalo (Drury et al., 1992). [The remainder of this record was figure-list front matter, covering CBT versus traditional training costs and NDI inspection task simulation results, and is omitted.]
Efficient Symbolic Task Planning for Multiple Mobile Robots
Jiang, Yuqian
2016-12-13
Symbolic task planning enables a robot to make high-level decisions toward a complex goal by computing a sequence of actions with minimum expected costs. This thesis builds on a single-robot ... time complexity of optimal planning for multiple mobile robots. In this thesis we first investigate the performance of the state-of-the-art solvers of…
NASA Technical Reports Server (NTRS)
Crane, J. M.; Boucek, G. P., Jr.; Smith, W. D.
1986-01-01
A flight management computer (FMC) control display unit (CDU) test was conducted to compare two types of input devices: a fixed legend (dedicated) keyboard and a programmable legend (multifunction) keyboard. The task used for comparison was operation of the flight management computer for the Boeing 737-300. The same tasks were performed by twelve pilots on the FMC control display unit configured with a programmable legend keyboard and with the currently used B737-300 dedicated keyboard. Flight simulator work activity levels and input task complexity were varied during each pilot session. Half of the pilots tested were previously familiar with the B737-300 dedicated keyboard CDU and half had no prior experience with it. The data collected included simulator flight parameters, keystroke times and sequences, and pilot questionnaire responses. A timeline analysis was also used for evaluation of the two keyboard concepts.
NASA Astrophysics Data System (ADS)
Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2016-05-01
In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
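A toy of the two performance ideas (a sketch only; all physics numbers are invented and no transport fidelity is claimed): tracks are pulled dynamically by worker threads for load balancing, and the per-segment exponentials form the wide vectorizable loop.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sweep_track(track_id):
    """Attenuation sweep along one characteristic track. The per-segment
    exponentials are computed as one wide vectorized operation (the
    SIMD-friendly part); the running product along the track is sequential."""
    rng = np.random.default_rng(track_id)      # per-track toy data
    n_seg = int(rng.integers(100, 1000))       # unequal work -> load imbalance
    att = np.exp(-rng.uniform(0.5, 2.0, n_seg) * rng.uniform(0.1, 1.0, n_seg))
    psi, tally = 1.0, 0.0
    for a in att:                              # sequential recurrence along the track
        tally += psi * (1.0 - a)               # deposit into the segment
        psi *= a                               # attenuated flux leaving the segment
    return tally

# Task-based parallelism: workers pull the next track as they finish one,
# so long and short tracks balance out across the pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(sweep_track, range(64)))
print(f"total tally over 64 tracks: {total:.3f}")
```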
CQPSO scheduling algorithm for heterogeneous multi-core DAG task model
NASA Astrophysics Data System (ADS)
Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng
2017-07-01
Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is selected as the target of each task assignment, so that task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm has strong optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
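The EFT machinery the heuristic builds on can be shown on a toy DAG (the cores, costs, and priority order below are assumptions for illustration, not the CQPSO algorithm itself):

```python
# List scheduling by earliest finish time (EFT) on two heterogeneous cores.
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
cost = {"A": [3, 5], "B": [4, 2], "C": [2, 4], "D": [5, 3]}  # time on core 0 / core 1
order = ["A", "B", "C", "D"]      # priority list (a topological order here)

core_free = [0.0, 0.0]
finish, placement = {}, {}
for t in order:
    ready = max((finish[p] for p in preds[t]), default=0.0)   # predecessors done
    # choose the core giving this task the minimum earliest finish time
    eft, core = min((max(ready, core_free[c]) + cost[t][c], c) for c in (0, 1))
    finish[t], placement[t] = eft, core
    core_free[core] = eft

print(placement)   # {'A': 0, 'B': 1, 'C': 0, 'D': 1}
print(finish)      # {'A': 3.0, 'B': 5.0, 'C': 5.0, 'D': 8.0}
```

Where this greedy pass commits to one priority list, the swarm-optimization layer described in the abstract searches beyond what a single pass commits to.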
Effect of computer game playing on baseline laparoscopic simulator skills.
Halvorsen, Fredrik H; Cvancarova, Milada; Fosse, Erik; Mjåland, Odd
2013-08-01
Studies examining the possible association between computer game playing and laparoscopic performance in general have yielded conflicting results, and a relationship between computer game playing and baseline performance on laparoscopic simulators has not been established. The aim of this study was to examine the possible association between previous and present computer game playing and baseline performance on a virtual reality laparoscopic simulator in a sample of potential future medical students. The participating students completed a questionnaire covering the weekly amount and type of computer game playing activity during the previous year and 3 years ago. They then performed 2 repetitions of 2 tasks ("gallbladder dissection" and "traverse tube") on a virtual reality laparoscopic simulator. Performance on the simulator was then analyzed for association with their computer game experience. The setting was a local high school in Norway; forty-eight students from 2 high school classes volunteered to participate in the study. No association between prior and present computer game playing and baseline performance was found. The results were similar both for prior and present action game playing and prior and present computer game playing in general. Our results indicate that prior and present computer game playing may not affect baseline performance in a virtual reality simulator.
ERIC Educational Resources Information Center
Mercer County Schools, Princeton, WV.
A project was undertaken to identify machine shop occupations requiring workers to use computers, identify the computer skills needed to perform machine shop tasks, and determine which software products are currently being used in machine shop programs. A search of the Dictionary of Occupational Titles revealed that computer skills will become…
Investigating neural efficiency of elite karate athletes during a mental arithmetic task using EEG.
Duru, Adil Deniz; Assem, Moataz
2018-02-01
Neural efficiency is proposed as one of the neural mechanisms underlying elite athletic performance. Previous sports studies examined neural efficiency using tasks that involve motor functions. In this study we investigated the extent of neural efficiency beyond motor tasks by using a mental subtraction task. A group of elite karate athletes was compared to a matched group of non-athletes. Electroencephalography was used to measure cognitive dynamics during resting and increased mental workload periods. Posterior alpha band power was mainly found to be higher in the karate players than in control subjects during both tasks. Moreover, event-related synchronization/desynchronization was computed to investigate the neural efficiency hypothesis among subjects. Finally, this is the first study to examine neural efficiency related to a cognitive task, rather than a motor task, in elite karate players using ERD/ERS analysis. The results suggest that the effect of neural efficiency in the brain is global rather than local and thus might be contributing to elite athletic performances. The results are also in line with the neural efficiency hypothesis tested in motor performance studies.
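ERD/ERS is conventionally expressed as the percentage change in band power relative to a reference interval. The sketch below shows that computation on synthetic signals; the sampling rate, alpha band, and segment lengths are generic assumptions, not the study's recording parameters.

    import numpy as np
    from scipy.signal import welch

    def band_power(x, fs, lo, hi):
        f, pxx = welch(x, fs=fs, nperseg=fs)       # PSD via Welch's method
        return pxx[(f >= lo) & (f <= hi)].mean()

    fs = 250
    rng = np.random.default_rng(1)
    baseline = rng.standard_normal(2 * fs)         # 2 s rest segment (synthetic)
    task = rng.standard_normal(2 * fs)             # 2 s mental-arithmetic segment

    p_ref = band_power(baseline, fs, 8, 12)
    p_task = band_power(task, fs, 8, 12)
    erd_ers = 100.0 * (p_task - p_ref) / p_ref     # negative = ERD, positive = ERS
    print(f"alpha-band ERD/ERS: {erd_ers:.1f} %")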
Laszlo, Sarah; Plaut, David C
2012-03-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally.
Effects of spatially displaced feedback on remote manipulation tasks
NASA Technical Reports Server (NTRS)
Manahan, Meera K.; Stuart, Mark A.; Bierschwale, John M.; Hwang, Ellen Y.; Legendre, A. J.
1992-01-01
Several studies have been performed to determine the effects on computer and direct manipulation task performance when viewing conditions are spatially displaced. Whether results from these studies can be directly applied to remote manipulation tasks is questionable. The objective of this evaluation was to determine the effects of reversed, inverted, and inverted/reversed views on remote manipulation task performance using two 3-degree-of-freedom (DOF) hand controllers and a replica position hand controller. Results showed that trials using the inverted viewing condition had the worst performance, followed by the inverted/reversed view and the reversed view, when using the 2x3 DOF controllers; however, these differences were not significant. The inverted and inverted/reversed viewing conditions were significantly worse than the normal and reversed viewing conditions when using the Kraft Replica. A second evaluation was conducted in which additional trials were performed with each viewing condition to determine the long-term effects of spatially displaced views on task performance for the hand controllers. Results of the second evaluation indicated that there was more of a difference in performance between the perturbed viewing conditions and the normal viewing condition with the Kraft Replica than with the 2x3 DOF controllers.
1987-10-01
19 treated in interaction with each other and the hardware and software design. The authors point out some of the inadequacies in HP technologies and...life cycle costs, recognition performance on secondary tasks, effort/efficiency, number of wins (gaming tasks), number of instructors needed, amount of...student interacts with this material in real time via a terminal and display system. The computer performs many functions, such as diagnose student
A computational study of whole-brain connectivity in resting state and task fMRI
Goparaju, Balaji; Rana, Kunjan D.; Calabro, Finnegan J.; Vaina, Lucia Maria
2014-01-01
Background: We compared the functional brain connectivity produced during resting-state, in which subjects were not actively engaged in a task, with that produced while they actively performed a visual motion task (task-state). Material/Methods: In this paper we employed graph-theoretical measures and network statistics in novel ways to compare, in the same group of human subjects, functional brain connectivity during resting-state fMRI with brain connectivity during performance of a high level visual task. We performed a whole-brain connectivity analysis to compare network statistics in resting and task states among anatomically defined Brodmann areas to investigate how brain networks spanning the cortex changed when subjects were engaged in task performance. Results: In the resting state, we found strong connectivity among the posterior cingulate cortex (PCC), precuneus, medial prefrontal cortex (MPFC), lateral parietal cortex, and hippocampal formation, consistent with previous reports of the default mode network (DMN). The connections among these areas were strengthened while subjects actively performed an event-related visual motion task, indicating a continued and strong engagement of the DMN during task processing. Regional measures such as degree (number of connections) and betweenness centrality (number of shortest paths) showed that task performance induces stronger inter-regional connections, leading to a denser processing network, but this does not imply a more efficient system, as shown by integration measures such as path length and global efficiency, and by global measures such as small-worldness. Conclusions: In spite of the maintenance of connectivity and the “hub-like” behavior of areas, our results suggest that the network paths may be rerouted when performing the task condition. PMID:24947491
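The regional and integration measures named in this abstract are standard graph statistics. The snippet below computes them with networkx on a random surrogate graph standing in for a thresholded Brodmann-area connectivity matrix; the graph itself is invented, purely to show the calls.

    import networkx as nx

    G = nx.erdos_renyi_graph(n=40, p=0.15, seed=42)   # surrogate connectivity graph
    if not nx.is_connected(G):                        # path length needs one component
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

    degree = dict(G.degree())                          # connections per region
    betweenness = nx.betweenness_centrality(G)         # shortest paths through a region
    path_length = nx.average_shortest_path_length(G)   # integration measure
    efficiency = nx.global_efficiency(G)               # mean inverse shortest path

    hub = max(betweenness, key=betweenness.get)
    print(f"hub node {hub}, L = {path_length:.2f}, Eglob = {efficiency:.2f}")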
PTS performance by flight- and control-group macaques
NASA Technical Reports Server (NTRS)
Washburn, D. A.; Rumbaugh, D. M.; Richardson, W. K.; Gulledge, J. P.; Shlyk, G. G.; Vasilieva, O. N.
2000-01-01
A total of 25 young monkeys (Macaca mulatta) were trained with the Psychomotor Test System, a package of software tasks and computer hardware developed for spaceflight research with nonhuman primates. Two flight monkeys and two control monkeys were selected from this pool and performed a psychomotor task before and after the Bion 11 flight or a ground-control period. Monkeys from both groups showed significant disruption in performance after the 14-day flight or simulation (plus one anesthetized day of biopsies and other tests), and this disruption appeared to be magnified for the flight animal.
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
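The three kernels listed, assembly of system matrices, solution of linear systems, and eigenanalysis, map directly onto standard sparse-matrix routines. The serial toy system below is invented and sketches only what the kernels compute, not the paper's parallel algorithms.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve, eigsh

    n = 1000   # assemble a toy tridiagonal "stiffness" matrix K
    K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    f = np.ones(n)                                   # load vector

    u = spsolve(K, f)                                # linear static analysis: K u = f
    vals, _ = eigsh(K, k=5, sigma=0, which="LM")     # lowest modes via shift-invert
    print(u[:3], np.sqrt(vals))                      # frequencies scale with sqrt(eigenvalues)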
The Association Between Computer Use and Cognition Across Adulthood: Use it so You Won't Lose it?
Tun, Patricia A.; Lachman, Margie E.
2012-01-01
Understanding the association between computer use and adult cognition has been limited until now by self-selected samples with restricted ranges of age and education. Here we studied effects of computer use in a large national sample (N=2671) of adults aged 32 to 84, assessing cognition with the Brief Test of Adult Cognition by Telephone (Tun & Lachman, 2005), and executive function with the Stop and Go Switch Task (Tun & Lachman, 2008). Frequency of computer activity was associated with cognitive performance after controlling for age, sex, education, and health status: that is, individuals who used the computer frequently scored significantly higher than those who seldom used the computer. Greater computer use was also associated with better executive function on a task-switching test, even after controlling for basic cognitive ability as well as demographic variables. These findings suggest that frequent computer activity is associated with good cognitive function, particularly executive control, across adulthood into old age, especially for those with lower intellectual ability. PMID:20677884
Human computer interface guide, revision A
NASA Technical Reports Server (NTRS)
1993-01-01
The Human Computer Interface Guide, SSP 30540, is a reference document for the information systems within the Space Station Freedom Program (SSFP). The Human Computer Interface Guide (HCIG) provides guidelines for the design of computer software that affects human performance, specifically, the human-computer interface. This document contains an introduction and subparagraphs on SSFP computer systems, users, and tasks; guidelines for interactions between users and the SSFP computer systems; human factors evaluation and testing of the user interface system; and example specifications. The contents of this document are intended to be consistent with the tasks and products to be prepared by NASA Work Package Centers and SSFP participants as defined in SSP 30000, Space Station Program Definition and Requirements Document. The Human Computer Interface Guide shall be implemented on all new SSFP contractual and internal activities and shall be included in any existing contracts through contract changes. This document is under the control of the Space Station Control Board, and any changes or revisions will be approved by the deputy director.
Diamond, Alan; Nowotny, Thomas; Schmuker, Michael
2016-01-01
Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they mature toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assessed performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performance remained comparable across all three. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged as capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or in non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimizing host-device communication when designing and implementing networks for efficient neuromorphic computing. PMID:26778950
Gender differences in mental rotation across adulthood.
Jansen, Petra; Heil, Martin
2010-01-01
Although gender differences in mental rotation in younger adults are prominent in paper-pencil tests as well as in chronometric tests with polygons as stimuli, less is known about this topic in the older age ranges. Therefore, performance was assessed with the Mental Rotation Test (MRT) paper-pencil test as well as with a computer-based two-stimulus same-different task with polygons in a sample of 150 adults divided into three age groups, 20-30, 40-50, and 60-70 years. Performance decreased with age, and men outperformed women in all age groups. The gender effect decreased with age in the MRT, possibly due to a floor effect. Gender differences remained constant across age, however, in the error rates of the computer-based task.
CoreTSAR: Core Task-Size Adapting Runtime
Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...
2014-10-27
Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler, as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.
A general method for assessing brain-computer interface performance and its limitations
NASA Astrophysics Data System (ADS)
Hill, N. Jeremy; Häuser, Ann-Katrin; Schalk, Gerwin
2014-04-01
Objective. When researchers evaluate brain-computer interface (BCI) systems, we want quantitative answers to questions such as: How good is the system’s performance? How good does it need to be? and: Is it capable of reaching the desired level in future? In response to the current lack of objective, quantitative, study-independent approaches, we introduce methods that help to address such questions. We identified three challenges: (I) the need for efficient measurement techniques that adapt rapidly and reliably to capture a wide range of performance levels; (II) the need to express results in a way that allows comparison between similar but non-identical tasks; (III) the need to measure the extent to which certain components of a BCI system (e.g. the signal processing pipeline) not only support BCI performance, but also potentially restrict the maximum level it can reach. Approach. For challenge (I), we developed an automatic staircase method that adjusted task difficulty adaptively along a single abstract axis. For challenge (II), we used the rate of information gain between two Bernoulli distributions: one reflecting the observed success rate, the other reflecting chance performance estimated by a matched random-walk method. This measure includes Wolpaw’s information transfer rate as a special case, but addresses the latter’s limitations including its restriction to item-selection tasks. To validate our approach and address challenge (III), we compared four healthy subjects’ performance using an EEG-based BCI, a ‘Direct Controller’ (a high-performance hardware input device), and a ‘Pseudo-BCI Controller’ (the same input device, but with control signals processed by the BCI signal processing pipeline). Main results. Our results confirm the repeatability and validity of our measures, and indicate that our BCI signal processing pipeline reduced attainable performance by about 33% (21 bits min⁻¹). Significance. Our approach provides a flexible basis for evaluating BCI performance and its limitations, across a wide range of tasks and task difficulties.
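Wolpaw's information transfer rate, cited above as a special case of the authors' measure, has a closed form that is easy to state in code. The class count, accuracy, and trial duration below are illustrative numbers, not results from the study.

    from math import log2

    def wolpaw_bits_per_trial(n_classes, p):
        # Bits per selection for an N-item task with symmetric errors
        if p <= 1.0 / n_classes:
            return 0.0
        bits = log2(n_classes) + p * log2(p)
        if p < 1.0:
            bits += (1 - p) * log2((1 - p) / (n_classes - 1))
        return bits

    trial_seconds = 3.0
    bits = wolpaw_bits_per_trial(n_classes=4, p=0.85)
    print(f"{bits:.2f} bits/trial = {60 * bits / trial_seconds:.1f} bits/min")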
Understanding and enhancing user acceptance of computer technology
NASA Technical Reports Server (NTRS)
Rouse, William B.; Morris, Nancy M.
1986-01-01
Technology-driven efforts to implement computer technology often encounter problems due to lack of acceptance or begrudging acceptance of the personnel involved. It is argued that individuals' acceptance of automation, in terms of either computerization or computer aiding, is heavily influenced by their perceptions of the impact of the automation on their discretion in performing their jobs. It is suggested that desired levels of discretion reflect needs to feel in control and achieve self-satisfaction in task performance, as well as perceptions of inadequacies of computer technology. Discussion of these factors leads to a structured set of considerations for performing front-end analysis, deciding what to automate, and implementing the resulting changes.
Kanumuri, Prathima; Ganai, Sabha; Wohaibi, Eyad M.; Bush, Ronald W.; Grow, Daniel R.
2008-01-01
Background: The study aim was to compare the effectiveness of virtual reality and computer-enhanced video-scopic training devices for training novice surgeons in complex laparoscopic skills. Methods: Third-year medical students received instruction on laparoscopic intracorporeal suturing and knot tying and then underwent a pretraining assessment of the task using a live porcine model. Students were then randomized to objectives-based training on either the virtual reality (n=8) or computer-enhanced (n=8) training devices for 4 weeks, after which the assessment was repeated. Results: Posttraining performance had improved compared with pretraining performance in both task completion rate (94% versus 18%; P<0.001) and time [181±58 (SD) versus 292±24]. Performance of the 2 groups was comparable before and after training. Of the subjects, 88% thought that haptic cues were important in simulators. Both groups agreed that their respective training systems were effective teaching tools, but computer-enhanced device trainees were more likely to rate their training as representative of reality (P<0.01). Conclusions: Training on virtual reality and computer-enhanced devices had equivalent effects on skills improvement in novices. Despite the perception that haptic feedback is important in laparoscopic simulation training, its absence in the virtual reality device did not impede acquisition of skill. PMID:18765042
2004-07-01
steadily for the past fifteen years, while memory latency and bandwidth have improved much more slowly. For example, Intel processor clock rates have... processor and memory performance) all greatly restrict the ability to achieve high levels of performance for science, engineering, and national...sub-nuclear distances. Guide experiments to identify transition from quantum chromodynamics to quark-gluon plasma. Accelerator Physics Accurate
Computational Study of Inlet Active Flow Control
2007-05-01
AFRL-VA-WP-TR-2007-3077; Smith, Sonya T. (Howard University); Scribben, Angela; Goettke, Matthew (AFRL/VAAI)
Watanabe, Tatsunori; Tsutou, Kotaro; Saito, Kotaro; Ishida, Kazuto; Tanabe, Shigeo; Nojima, Ippei
2016-11-01
Choice reaction requires response conflict resolution, and the resolution processes that occur during a choice stepping reaction task undertaken in a standing position, which requires maintenance of balance, may be different to those processes occurring during a choice reaction task performed in a seated position. The study purpose was to investigate the resolution processes during a choice stepping reaction task at the cortical level using electroencephalography and compare the results with a control task involving ankle dorsiflexion responses. Twelve young adults either stepped forward or dorsiflexed the ankle in response to a visual imperative stimulus presented on a computer screen. We used the Simon task and examined the error-related negativity (ERN) that follows an incorrect response and the correct-response negativity (CRN) that follows a correct response. Error was defined as an incorrect initial weight transfer for the stepping task and as an incorrect initial tibialis anterior activation for the control task. Results revealed that ERN and CRN amplitudes were similar in size for the stepping task, whereas the amplitude of ERN was larger than that of CRN for the control task. The ERN amplitude was also larger in the stepping task than the control task. These observations suggest that a choice stepping reaction task involves a strategy emphasizing post-response conflict and general performance monitoring of actual and required responses and also requires greater cognitive load than a choice dorsiflexion reaction. The response conflict resolution processes appear to be different for stepping tasks and reaction tasks performed in a seated position.
ERIC Educational Resources Information Center
La Brecque, Mort
1984-01-01
To break the bottleneck inherent in today's linear computer architectures, parallel schemes (which allow computers to perform multiple tasks at one time) are being devised. Several of these schemes are described. Dataflow devices, parallel number-crunchers, programing languages, and a device based on a neurological model are among the areas…
Working Together: Computers and People with Sensory Impairments.
ERIC Educational Resources Information Center
Washington Univ., Seattle.
This brief paper considers ways in which people with sensory impairments can benefit from the assistive technology available with computers. Assistive technology practitioners are urged not to focus on the disability, but on the individual's abilities and the tasks to be performed. Explanations of the major sensory disability areas precede…
Patterns of Computer-Mediated Interaction in Small Writing Groups Using Wikis
ERIC Educational Resources Information Center
Li, Mimi; Zhu, Wei
2013-01-01
Informed by sociocultural theory and guided especially by "collective scaffolding", this study investigated the nature of computer-mediated interaction of three groups of English as a Foreign Language students when they performed collaborative writing tasks using wikis. Nine college students from a Chinese university participated in the…
Validating models of target acquisition performance in the dismounted soldier context
NASA Astrophysics Data System (ADS)
Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.
2018-04-01
The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral paradigm.
SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
Zenke, Friedemann; Ganguli, Surya
2018-06-01
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
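The core trick behind surrogate-gradient rules such as SuperSpike can be sketched compactly: the spike threshold's zero-almost-everywhere derivative is replaced by a smooth surrogate when computing weight updates. The single-neuron toy below uses a fast-sigmoid surrogate and is only a schematic of the idea, not the paper's three-factor rule or network implementation.

    import numpy as np

    def surrogate_grad(v, threshold=1.0, beta=10.0):
        # Derivative of a fast sigmoid, used in place of the spike nonlinearity
        return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

    def run_lif(inputs, w, tau=0.9, threshold=1.0):
        v, spikes, grads = 0.0, [], []
        for x in inputs:
            v = tau * v + w * x                             # leaky membrane integration
            spikes.append(1.0 if v >= threshold else 0.0)
            grads.append(surrogate_grad(v, threshold) * x)  # d(spike)/dw via surrogate
            if spikes[-1]:
                v = 0.0                                     # reset after a spike
        return np.array(spikes), np.array(grads)

    rng = np.random.default_rng(0)
    spikes, grads = run_lif(rng.random(100), w=0.3)
    # nudge the weight toward a target spike count using the surrogate gradient
    w_update = 0.01 * (20 - spikes.sum()) * grads.mean()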
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149
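The three-stage performance model lends itself to a back-of-envelope estimator: predicted workflow time is the sum of transfer, queue, and compute terms. The sketch below uses entirely hypothetical rates and sizes (the paper's models are fitted to measured Advanced Photon Source workflows) but shows how such estimates could drive resource selection.

    def workflow_seconds(dataset_gb, link_gbps, queue_wait_s,
                         sinograms, secs_per_sinogram, workers):
        transfer = dataset_gb * 8.0 / link_gbps            # storage -> compute resource
        compute = sinograms * secs_per_sinogram / workers  # parallel reconstruction
        return transfer + queue_wait_s + compute

    sites = {"local cluster": (1.0, 30.0, 64), "remote HPC": (10.0, 600.0, 2048)}
    for site, (gbps, wait, workers) in sites.items():
        t = workflow_seconds(200, gbps, wait, 2048, 12.0, workers)
        print(f"{site}: {t / 60:.1f} min")
    # a faster link and larger allocation can outweigh a long queue wait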
Farias Zuniga, Amanda M; Côté, Julie N
2017-06-01
The effects of performing a 90-minute computer task with a laptop versus a dual monitor desktop workstation were investigated in healthy young male and female adults. Work-related musculoskeletal disorders are common among computer (especially female) users. Laptops have surpassed desktop computer sales, and working with multiple monitors has also become popular. However, few studies have provided objective evidence on how they affect the musculoskeletal system in both genders. Twenty-seven healthy participants (mean age = 24.6 years; 13 males) completed a 90-minute computer task while using a laptop or dual monitor (DualMon) desktop. Electromyography (EMG) from eight upper body muscles and visual strain were measured throughout the task. Neck proprioception was tested before and after the computer task using a head-repositioning test. EMG amplitude (root mean square [RMS]), variability (coefficients of variation [CV]), and normalized mutual information (NMI) were computed. Visual strain (p < .01) and right upper trapezius RMS (p = .03) increased significantly over time regardless of workstation. Right cervical erector spinae RMS and cervical NMI were smaller, while degrees of overshoot (mean = 4.15°) and end position error (mean = 1.26°) were larger in DualMon regardless of time. Effects on muscle activity were more pronounced in males, whereas effects on proprioception were more pronounced in females. Results suggest that compared to laptop, DualMon work is effective in reducing cervical muscle activity, dissociating cervical connectivity, and maintaining more typical neck repositioning patterns, suggesting some health-protective effects. This evidence could be considered when deciding on computer workstation designs.
Quadrado, Virgínia Helena; Silva, Talita Dias da; Favero, Francis Meire; Tonks, James; Massetti, Thais; Monteiro, Carlos Bandeira de Mello
2017-11-10
To examine whether performance improvements in the virtual environment generalize to the natural environment, we studied 64 individuals, 32 of whom were individuals with DMD and 32 of whom were typically developing. The groups practiced two coincidence timing tasks. In the more tangible button-press task, the individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key on the computer. In the more abstract task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment using a webcam. For individuals with DMD, conducting a coincidence timing task in a virtual environment facilitated transfer to the real environment. However, we emphasize that a task practiced in a virtual environment should have a higher level of difficulty than a task practiced in a real environment. IMPLICATIONS FOR REHABILITATION Virtual environments can be used to promote improved performance in 'real-world' environments. Virtual environments offer the opportunity to create paradigms similar to 'real-life' tasks; however, task complexity and difficulty levels can be manipulated, graded, and enhanced to increase the likelihood of success in transfer of learning and performance. Individuals with DMD, in particular, showed immediate performance benefits after using virtual reality.
Evolution of a designless nanoparticle network into reconfigurable Boolean logic
NASA Astrophysics Data System (ADS)
Bose, S. K.; Lawrence, C. P.; Liu, Z.; Makarenko, K. S.; van Damme, R. M. J.; Broersma, H. J.; van der Wiel, W. G.
2015-12-01
Natural computers exploit the emergent properties and massive parallelism of interconnected networks of locally active components. Evolution has resulted in systems that compute quickly and that use energy efficiently, utilizing whatever physical properties are exploitable. Man-made computers, on the other hand, are based on circuits of functional units that follow given design rules. Hence, potentially exploitable physical processes, such as capacitive crosstalk, to solve a problem are left out. Until now, designless nanoscale networks of inanimate matter that exhibit robust computational functionality had not been realized. Here we artificially evolve the electrical properties of a disordered nanomaterials system (by optimizing the values of control voltages using a genetic algorithm) to perform computational tasks reconfigurably. We exploit the rich behaviour that emerges from interconnected metal nanoparticles, which act as strongly nonlinear single-electron transistors, and find that this nanoscale architecture can be configured in situ into any Boolean logic gate. This universal, reconfigurable gate would require about ten transistors in a conventional circuit. Our system meets the criteria for the physical realization of (cellular) neural networks: universality (arbitrary Boolean functions), compactness, robustness and evolvability, which implies scalability to perform more advanced tasks. Our evolutionary approach works around device-to-device variations and the accompanying uncertainties in performance. Moreover, it bears a great potential for more energy-efficient computation, and for solving problems that are very hard to tackle in conventional architectures.
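The evolutionary loop described here, mutate control voltages and keep the candidates that best reproduce a target truth table, can be mimicked in a few lines. In the sketch below, measure_gate is an invented surrogate standing in for a readout of the physical nanoparticle network; only the genetic-algorithm scaffolding reflects the abstract.

    import random

    TARGET = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # e.g. XOR

    def measure_gate(voltages, a, b):
        # hypothetical nonlinear device response, NOT a model of the real system
        s = sum(v * (i + 1) for i, v in enumerate(voltages)) + 2.5 * a * b - 1.2 * (a + b)
        return 1 if (s % 2.0) > 1.0 else 0

    def fitness(voltages):
        return sum(measure_gate(voltages, a, b) == y for (a, b), y in TARGET.items())

    random.seed(3)
    pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(40)]
    for gen in range(200):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 4:                 # all four truth-table rows correct
            break
        pop = pop[:10] + [[v + random.gauss(0, 0.1) for v in random.choice(pop[:10])]
                          for _ in range(30)]    # elitism plus Gaussian mutation
    print(gen, fitness(pop[0]))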
Spinelli, Simona; Ballard, Theresa; Feldon, Joram; Higgins, Guy A; Pryce, Christopher R
2006-08-01
With the Cambridge Neuropsychological Test Automated Battery (CANTAB), computerized neuropsychological tasks can be presented on a touch-sensitive computer screen, and this system has been used to assess cognitive processes in neuropsychiatric patients, healthy volunteers, and species of non-human primate, primarily the rhesus macaque and common marmoset. Recently, we reported that the common marmoset, a small-bodied primate, can be trained to a high and stable level of performance on the CANTAB five-choice serial reaction time (5-CSRT) task of attention, and a novel task of working memory, the concurrent delayed match-to-position (CDMP) task. Here, in order to increase understanding of the specific cognitive demands of these tasks and the importance of acetylcholine to their performance, the effects of systemic delivery of the muscarinic receptor antagonist scopolamine and the nicotinic receptor agonist nicotine were studied. In the 5-CSRT task, nicotine enhanced performance in terms of increased sustained attention, whilst scopolamine led to increased omissions despite a high level of orientation to the correct stimulus location. In the CDMP task, scopolamine impaired performance at two stages of the task that differ moderately in terms of memory retention load but both of which are likely to require working memory, including interference-coping, abilities. Nicotine tended to enhance performance at the long-delay stage specifically, but only against a background of relatively low baseline performance. These data are consistent with a dissociation of the roles of muscarinic and nicotinic cholinergic receptors in the regulation of both sustained attention and working memory in primates.
Webb, Taylor W.; Kelly, Yin T.; Graziano, Michael S. A.
2016-01-01
The temporoparietal junction (TPJ) is activated in association with a large range of functions, including social cognition, episodic memory retrieval, and attentional reorienting. An ongoing debate is whether the TPJ performs an overarching, domain-general computation, or whether functions reside in domain-specific subdivisions. We scanned subjects with fMRI during five tasks known to activate the TPJ, probing social, attentional, and memory functions, and used data-driven parcellation (independent component analysis) to isolate task-related functional processes in the bilateral TPJ. We found that one dorsal component in the right TPJ, which was connected with the frontoparietal control network, was activated in all of the tasks. Other TPJ subregions were specific for attentional reorienting, oddball target detection, or social attribution of belief. The TPJ components that participated in attentional reorienting and oddball target detection appeared spatially separated, but both were connected with the ventral attention network. The TPJ component that participated in the theory-of-mind task was part of the default-mode network. Further, we found that the BOLD response in the domain-general dorsal component had a longer latency than responses in the domain-specific components, suggesting an involvement in distinct, perhaps postperceptual, computations. These findings suggest that the TPJ performs both domain-general and domain-specific computations that reside within spatially distinct functional components. PMID:27280153
ERIC Educational Resources Information Center
Marcum, Deanna; Boss, Richard
1983-01-01
Relates office automation to its application in libraries, discussing computer software packages for microcomputers performing tasks involved in word processing, accounting, statistical analysis, electronic filing cabinets, and electronic mail systems. (EJS)
Physical Medicine and Rehabilitation Resident Use of iPad Mini Mobile Devices.
Niehaus, William; Boimbo, Sandra; Akuthota, Venu
2015-05-01
Previous research on the use of tablet devices in residency programs has been undertaken in radiology and medicine or with standard-sized tablet devices. With new, smaller tablet devices, there is an opportunity to assess their effect on resident behavior. This prospective study attempts to evaluate resident behavior after receiving a smaller tablet device. To evaluate whether smaller tablet computers facilitate residents' daily tasks. Prospective study that administered surveys to evaluate tablet computer use. Residency program. Thirteen physical medicine and rehabilitation residents. Residents were provided 16-GB iPad Minis and surveyed with REDCap to collect usage information at baseline, 3, and 6 months. Survey analysis was conducted using SAS (SAS, Cary, NC) for descriptive analysis. To evaluate multiple areas of resident education, the following tasks were selected: accessing e-mail, logging duty hours, logging procedures, researching clinical information, accessing medical journals, reviewing didactic presentations, and completing evaluations. Then, measurements were taken of: (1) residents' response to how tablet computers made it easier to access the aforementioned tasks; and (2) residents' response to how tablet computers affected the frequency they performed the aforementioned tasks. After being provided tablet computers, our physical medicine and rehabilitation residents reported significantly greater access to e-mail, medical journals, and didactic material. Also, receiving tablet computers was reported to increase the frequency that residents accessed e-mail, researched clinical information, accessed medical journals, reviewed didactic presentations, and completed evaluations. After receiving a tablet computer, residents reported an increase in the use of calendar programs, note-taking programs, PDF readers, online storage programs, and file organization programs. These physical medicine and rehabilitation residents reported that tablet computers increased access to e-mail, presentation material, and medical journals. Tablet computers were also reported to increase the frequency with which residents were able to complete tasks associated with residency training.
Cook, Nicola A; Kim, Jin Un; Pasha, Yasmin; Crossey, Mary ME; Schembri, Adrian J; Harel, Brian T; Kimhofer, Torben; Taylor-Robinson, Simon D
2017-01-01
Background: Psychometric testing is used to identify patients with cirrhosis who have developed hepatic encephalopathy (HE). Most batteries consist of a series of paper-and-pencil tests, which are cumbersome for most clinicians. A modern, easy-to-use, computer-based battery would be a helpful clinical tool, given that in its minimal form, HE has an impact on both patients’ quality of life and the ability to drive and operate machinery (with societal consequences). Aim: We compared the Cogstate™ computer battery testing with the Psychometric Hepatic Encephalopathy Score (PHES) tests, with a view to simplifying the diagnosis. Methods: This was a prospective study of 27 patients with histologically proven cirrhosis. An analysis of psychometric testing was performed using accuracy of task performance and speed of completion as primary variables to create a correlation matrix. A stepwise linear regression analysis was performed with backward elimination, using analysis of variance. Results: Strong correlations were found between the international shopping list and international shopping list delayed recall of Cogstate and the PHES digit symbol test. The Shopping List Tasks were the only tasks that consistently had P values of <0.05 in the linear regression analysis. Conclusion: Subtests of the Cogstate battery correlated very strongly with the digit symbol component of PHES in discriminating severity of HE. These findings indicate that combining components of the current PHES battery with the international shopping list tasks of Cogstate would be discriminant and have the potential to be used easily in clinical practice. PMID:28919805
Grid Computing Application for Brain Magnetic Resonance Image Processing
NASA Astrophysics Data System (ADS)
Valdivia, F.; Crépeault, B.; Duchesne, S.
2012-02-01
This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results from system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment
ERIC Educational Resources Information Center
Girault, Isabelle; d'Ham, Cédric
2014-01-01
When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a…
What Makes a Team? The Composition of Small Groups for C.A.I.
ERIC Educational Resources Information Center
Bellows, B. P.
This study examined the task performance and social interaction of young children who used a computer to learn map skills. Specifically, ability and sex were examined in relation to students' achievement on a social studies task and in relation to student interaction in small groups. Subjects were 66 second grade students in 3 different…
Alwanni, Hisham; Baslan, Yara; Alnuman, Nasim; Daoud, Mohammad I.
2017-01-01
This paper presents an EEG-based brain-computer interface system for classifying eleven motor imagery (MI) tasks within the same hand. The proposed system utilizes the Choi-Williams time-frequency distribution (CWD) to construct a time-frequency representation (TFR) of the EEG signals. The constructed TFR is used to extract five categories of time-frequency features (TFFs). The TFFs are processed using a hierarchical classification model to identify the MI task encapsulated within the EEG signals. To evaluate the performance of the proposed approach, EEG data were recorded for eighteen intact subjects and four amputated subjects while imagining to perform each of the eleven hand MI tasks. Two performance evaluation analyses, namely channel- and TFF-based analyses, are conducted to identify the best subset of EEG channels and the TFFs category, respectively, that enable the highest classification accuracy between the MI tasks. In each evaluation analysis, the hierarchical classification model is trained using two training procedures, namely subject-dependent and subject-independent procedures. These two training procedures quantify the capability of the proposed approach to capture both intra- and inter-personal variations in the EEG signals for different MI tasks within the same hand. The results demonstrate the efficacy of the approach for classifying the MI tasks within the same hand. In particular, the classification accuracies obtained for the intact and amputated subjects are as high as 88.8% and 90.2%, respectively, for the subject-dependent training procedure, and 80.8% and 87.8%, respectively, for the subject-independent training procedure. These results suggest the feasibility of applying the proposed approach to control dexterous prosthetic hands, which can be of great benefit for individuals suffering from hand amputations. PMID:28832513
Kirchner, Elsa A.; Kim, Su Kyoung
2018-01-01
Event-related potentials (ERPs) are often used in brain-computer interfaces (BCIs) for communication or system control for enhancing or regaining control for motor-disabled persons. Especially results from single-trial EEG classification approaches for BCIs support correlations between single-trial ERP detection performance and ERP expression. Hence, BCIs can be considered as a paradigm shift contributing to new methods with strong influence on both neuroscience and clinical applications. Here, we investigate the relevance of the choice of training data and classifier transfer for the interpretability of results from single-trial ERP detection. In our experiments, subjects performed a visual-motor oddball task with motor-task relevant infrequent (targets), motor-task irrelevant infrequent (deviants), and motor-task irrelevant frequent (standards) stimuli. Under dual-task condition, a secondary senso-motor task was performed, compared to the simple-task condition. For evaluation, average ERP analysis and single-trial detection analysis with different numbers of electrodes were performed. Further, classifier transfer was investigated between simple and dual task. Parietal positive ERPs evoked by target stimuli (but not by deviants) were expressed stronger under dual-task condition, which is discussed as an increase of task emphasis and brain processes involved in task coordination and change of task set. Highest classification performance was found for targets irrespective whether all 62, 6 or 2 parietal electrodes were used. Further, higher detection performance of targets compared to standards was achieved under dual-task compared to simple-task condition in case of training on data from 2 parietal electrodes corresponding to results of ERP average analysis. Classifier transfer between tasks improves classification performance in case that training took place on more varying examples (from dual task). In summary, we showed that P300 and overlaying parietal positive ERPs can successfully be detected while subjects are performing additional ongoing motor activity. This supports single-trial detection of ERPs evoked by target events to, e.g., infer a patient's attentional state during therapeutic intervention. PMID:29636660
DOT National Transportation Integrated Search
1980-10-01
The present study examined a variety of possible predictors of complex monitoring performance. The criterion task was designed to resemble that of a highly automated air traffic control radar system containing computer-generated alphanumeric displays...
Boedecker, Joschka; Lampe, Thomas; Riedmiller, Martin
2013-01-01
A common assumption in psychology, economics, and other fields holds that higher performance will result if extrinsic rewards (such as money) are offered as an incentive. While this principle seems to work well for tasks that require the execution of the same sequence of steps over and over, with little uncertainty about the process, in other cases, especially where creative problem solving is required due to the difficulty in finding the optimal sequence of actions, external rewards can actually be detrimental to task performance. Furthermore, they have the potential to undermine intrinsic motivation to do an otherwise interesting activity. In this work, we extend a computational model of the dorsomedial and dorsolateral striatal reinforcement learning systems to account for the effects of extrinsic and intrinsic rewards. The model assumes that the brain employs both a goal-directed and a habitual learning system, and competition between both is based on the trade-off between the cost of the reasoning process and value of information. The goal-directed system elicits internal rewards when its models of the environment improve, while the habitual system, being model-free, does not. Our results account for the phenomena that initial extrinsic reward leads to reduced activity after extinction compared to the case without any initial extrinsic rewards, and that performance in complex task settings drops when higher external rewards are promised. We also test the hypothesis that external rewards bias the competition in favor of the computationally efficient, but cruder and less flexible habitual system, which can negatively influence intrinsic motivation and task performance in the class of tasks we consider.
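A minimal sketch may help make the mechanism concrete. The following toy Python program (not the authors' implementation; the environment, parameters, and the specific intrinsic-reward definition are all illustrative assumptions) adds an internal reward for improvement of a learned world model to the extrinsic reward driving an otherwise standard model-free update:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2

# Hidden toy environment: deterministic walk around a ring of states.
def step(s, a):
    return (s + (1 if a == 1 else -1)) % n_states

# Learned world model: transition counts -> probabilities (Laplace prior).
counts = np.ones((n_states, n_actions, n_states))
Q = np.zeros((n_states, n_actions))
alpha, gamma, beta = 0.1, 0.9, 5.0  # learning rate, discount, intrinsic scale

s = 0
for t in range(500):
    a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmax())
    s_next = step(s, a)
    extrinsic = 1.0 if s_next == n_states - 1 else 0.0

    # Intrinsic reward: how much the one-step prediction improves.
    p_before = counts[s, a] / counts[s, a].sum()
    counts[s, a, s_next] += 1
    p_after = counts[s, a] / counts[s, a].sum()
    intrinsic = beta * max(p_after[s_next] - p_before[s_next], 0.0)

    # Habitual (model-free) update driven by the combined reward.
    r = extrinsic + intrinsic
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(Q.round(2))
```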
Dettmer, Marius; Pourmoghaddam, Amir; Lee, Beom-Chan; Layne, Charles S
2015-01-01
Postural control in certain situations depends on the functioning of tactile or proprioceptive receptors and their dynamic integration. Loss of sensory functioning can lead to an increased risk of falls in challenging postural tasks, especially in older adults. Stochastic resonance, a concept describing improved function of systems when optimal levels of noise are added, has been shown to benefit balance performance in certain populations and simple postural tasks. In this study, we tested the effects of aging and of a tactile stochastic resonance stimulus (TSRS) on the balance of adults in a sensory conflict task. Nineteen older (71-84 years of age) and younger (22-29 years of age) participants stood on a force plate for repeated trials of 20 s duration, while foot-sole stimulation was either turned on or off and the visual surround was sway-referenced. Balance performance was evaluated by computing an Equilibrium Score (ES) and the anterior-posterior sway path length (APPlength). To evaluate postural control, strategy scores and approximate entropy (ApEn) were computed. Repeated-measures ANOVAs, Wilcoxon signed-rank tests, and Mann-Whitney U-tests were conducted for statistical analysis. Our results showed that balance performance differed between older and younger adults, as indicated by ES (p = 0.01) and APPlength (p = 0.01), and that the addition of vibration significantly improved performance only in the older group (p = 0.012). Strategy scores differed between the two age groups, whereas vibration affected only the older group (p = 0.025). Our results indicate that aging affects specific postural outcomes and that TSRS benefits older adults in a visual sensory conflict task, but more research is needed to investigate its effectiveness in individuals with more severe balance problems, for example due to neuropathy.
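Approximate entropy, one of the regularity measures computed in the study above, has a compact standard definition. A minimal Python sketch (synthetic signals; the conventional choices m = 2 and r = 0.2 x SD are assumed):

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """ApEn(m, r) of a 1-D series, with r scaled by the series' SD."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * x.std()

    def phi(m):
        # All overlapping templates of length m.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)  # fraction of matches per template
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

# A regular signal yields low ApEn; a noisy one yields high ApEn.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
print(approximate_entropy(np.sin(t)))                 # low
print(approximate_entropy(rng.standard_normal(500)))  # high
```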
Lee, Jong-Eun Roselyn; Nass, Clifford I; Bailenson, Jeremy N
2014-04-01
Virtual environments employing avatars for self-representation, including the opportunity to represent or misrepresent social categories, raise intriguing questions as to how one's avatar-based social category shapes social identity dynamics, particularly when stereotypes prevalent in the offline world apply to the social categories visually represented by avatars. The present experiment investigated how social category representation via avatars (i.e., graphical representations of people in computer-mediated environments) affects stereotype-relevant task performance. In particular, building on and extending the Proteus effect model, we explored whether and how stereotype lift (i.e., a performance boost caused by the awareness of a domain-specific negative stereotype associated with outgroup members) occurred in virtual group settings in which avatar-based gender representation was arbitrary. Female and male participants (N=120) were randomly assigned either a female avatar or a male avatar through a process masked as a random drawing. They were then placed in a numerical minority status with respect to virtual gender, as the only virtual female (male) in a computer-mediated triad with two opposite-gendered avatars, and performed a mental arithmetic task either competitively or cooperatively. The data revealed that participants who were arbitrarily represented by a male avatar and competed against two ostensible female avatars showed the strongest performance on the arithmetic task. This pattern occurred regardless of participants' actual gender, pointing to a virtual stereotype lift effect. Additional mediation tests showed that task motivation partially mediated the effect. Theoretical and practical implications for social identity dynamics in avatar-based virtual environments are discussed.
1977-01-26
Sisteme Matematicheskogo Obespecheniya YeS EVM [Applied Programs in the Software System for the Unified System of Computers], by A. Ye. Fateyev, A. I... computerized systems are most effective in large production complexes, in which the level of utilization of computers can be as high as 500,000... performance of these tasks could be furthered by the complex introduction of electronic computers in automated control systems. The creation of ASU
Dedovic, Katarina; Renwick, Robert; Mahani, Najmeh Khalili; Engert, Veronika; Lupien, Sonia J.; Pruessner, Jens C.
2005-01-01
Objective: We developed a protocol for inducing moderate psychologic stress in a functional imaging setting and evaluated the effects of stress on physiology and brain activation. Methods: The Montreal Imaging Stress Task (MIST), derived from the Trier Mental Challenge Test, consists of a series of computerized mental arithmetic challenges, along with social evaluative threat components that are built into the program or presented by the investigator. To allow the effects of stress and mental arithmetic to be investigated separately, the MIST has 3 test conditions (rest, control and experimental), which can be presented in either a block or an event-related design, for use with functional magnetic resonance imaging (fMRI) or positron emission tomography (PET). In the rest condition, subjects look at a static computer screen on which no tasks are shown. In the control condition, a series of mental arithmetic tasks are displayed on the computer screen, and subjects submit their answers by means of a response interface. In the experimental condition, the difficulty and time limit of the tasks are manipulated to be just beyond the individual's mental capacity. In addition, in this condition the presentation of the mental arithmetic tasks is supplemented by a display of information on individual and average performance, as well as expected performance. Upon completion of each task, the program presents a performance evaluation to further increase the social evaluative threat of the situation. Results: In 2 independent studies using PET and a third independent study using fMRI, with a total of 42 subjects, levels of salivary free cortisol for the whole group were significantly increased under the experimental condition, relative to the control and rest conditions. Performing mental arithmetic was linked to activation of motor and visual association cortices, as well as brain structures involved in the performance of these tasks (e.g., the angular gyrus). Conclusions: We propose the MIST as a tool for investigating the effects of perceiving and processing psychosocial stress in functional imaging studies. PMID:16151536
Chen, Yong-Quan; Hsieh, Shulan
2018-01-01
The aim of this study was to investigate whether individuals with frequent internet gaming (IG) experience exhibited better or worse multitasking ability compared with those with infrequent IG experience. The individuals' multitasking abilities were measured using virtual environment multitasks, such as the Edinburgh Virtual Errands Test (EVET), and conventional laboratory multitasks, such as the dual task and task switching. Seventy-two young healthy college students participated in this study. They were split into two groups based on the time spent playing online games, as evaluated using the Internet Use Questionnaire. Each participant performed the EVET, dual-task, and task-switching paradigms on a computer. The current results showed that the frequent IG group performed better on the EVET compared with the infrequent IG group, but their performance on the dual-task and task-switching paradigms did not differ significantly. The results suggest that the frequent IG group exhibited better multitasking efficacy when measured using a more ecologically valid task, but not when measured using a conventional laboratory multitasking task. The differences in terms of the subcomponents of executive function measured by these task paradigms are discussed. The current results show the importance of the task effect when evaluating frequent internet gamers' multitasking ability. PMID:29879150
Computational Psychometrics for the Measurement of Collaborative Problem Solving Skills
Polyak, Stephen T.; von Davier, Alina A.; Peterschmidt, Kurt
2017-01-01
This paper describes a psychometrically-based approach to the measurement of collaborative problem solving skills, by mining and classifying behavioral data both in real-time and in post-game analyses. The data were collected from a sample of middle school children who interacted with a game-like, online simulation of collaborative problem solving tasks. In this simulation, a user is required to collaborate with a virtual agent to solve a series of tasks within a first-person maze environment. The tasks were developed following the psychometric principles of Evidence Centered Design (ECD) and are aligned with the Holistic Framework developed by ACT. The analyses presented in this paper are an application of an emerging discipline called computational psychometrics, which is growing out of traditional psychometrics and incorporates techniques from educational data mining, machine learning, and other computer/cognitive science fields. In the real-time analysis, our aim was to start with limited knowledge of skill mastery and then demonstrate a form of continuous Bayesian evidence tracing that updates sub-skill level probabilities as new conversation flow event evidence is presented. This is performed using Bayes' rule and conversation item conditional probability tables. The items are polytomous and each response option has been tagged with a skill at a performance level. In our post-game analysis, our goal was to discover unique gameplay profiles by performing a cluster analysis of users' sub-skill performance scores based on their patterns of selected dialog responses. PMID:29238314
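The continuous Bayesian evidence tracing described above reduces to repeated application of Bayes' rule against item conditional probability tables. A minimal Python sketch with an invented table (the response options and probabilities are hypothetical, for illustration only):

```python
# Hypothetical conditional probability table for one conversation item:
# P(response option | skill mastered) and P(response option | not mastered).
p_resp_given_mastered     = {"best": 0.6, "ok": 0.3, "poor": 0.1}
p_resp_given_not_mastered = {"best": 0.2, "ok": 0.3, "poor": 0.5}

def update_mastery(prior, response):
    """One Bayes-rule update of P(mastered) after observing a response."""
    like_m  = p_resp_given_mastered[response]
    like_nm = p_resp_given_not_mastered[response]
    return like_m * prior / (like_m * prior + like_nm * (1 - prior))

p = 0.5  # start with limited knowledge of skill mastery
for resp in ["ok", "best", "best", "poor"]:  # observed dialog choices
    p = update_mastery(p, resp)
    print(f"after '{resp}': P(mastered) = {p:.3f}")
```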
Weed, M R; Taffe, M A; Polis, I; Roberts, A C; Robbins, T W; Koob, G F; Bloom, F E; Gold, L H
1999-10-25
A computerized behavioral battery based upon human neuropsychological tests (CANTAB, CeNeS, Cambridge, UK) has been developed to assess cognitive behaviors of rhesus monkeys. Monkeys reliably performed multiple tasks, providing long-term assessment of changes in a number of behaviors for a given animal. The overall goal of the test battery is to characterize changes in cognitive behaviors following central nervous system (CNS) manipulations. The battery addresses memory (delayed non-matching to sample, DNMS; spatial working memory, using a self-ordered spatial search task, SOSS), attention (intra-/extra-dimensional shift, ID/ED), motivation (progressive-ratio, PR), reaction time (RT) and motor coordination (bimanual task). As with human neuropsychological batteries, different tasks are thought to involve different neural substrates, and therefore performance profiles should assess function in particular brain regions. Monkeys were tested in transport cages, and responding on a touch sensitive computer monitor was maintained by food reinforcement. Parametric manipulations of several tasks demonstrated the sensitivity of performance to increases in task difficulty. Furthermore, the factors influencing difficulty for rhesus monkeys were the same as those shown to affect human performance. Data from this study represent performance of a population of healthy normal monkeys that will be used for comparison in subsequent studies of performance following CNS manipulations such as infection with simian immunodeficiency virus (NeuroAIDS) or drug administration.
The Effects of Working Memory on Brain-Computer Interface Performance
Sprague, Samantha A.; McBee, Matthew; Sellers, Eric W.
2015-01-01
Objective: The purpose of the present study is to evaluate the relationship between working memory and BCI performance. Methods: Participants took part in two separate sessions. The first session consisted of three computerized tasks: the LSWM was used to measure working memory, the TPVT was used to measure general intelligence, and the DCCS was used to measure executive function, specifically cognitive flexibility. The second session consisted of a P300-based BCI copy-spelling task. Results: The results indicate that both working memory and general intelligence are significant predictors of BCI performance. Conclusions: This suggests that working memory training could be used to improve performance on a BCI task. Significance: Working memory training may help to reduce a portion of the individual differences that exist in BCI performance, allowing a wider range of users to successfully operate the BCI system as well as increasing the BCI performance of current users. PMID:26620822
Memory consolidation and contextual interference effects with computer games.
Shewokis, Patricia A
2003-10-01
Some investigators of the contextual interference effect contend that there is a direct relation between the amount of practice and the contextual interference effect based on the prediction that the improvement in learning tasks in a random practice schedule, compared to a blocked practice schedule, increases in magnitude as the amount of practice during acquisition on the tasks increases. Research using computer games in contextual interference studies has yielded a large effect (f = .50) with a random practice schedule advantage during transfer. These investigations had a total of 36 and 72 acquisition trials, respectively. The present study tested this prediction by having 72 college students, who were randomly assigned to a blocked or random practice schedule, practice 102 trials of three computer-game tasks across three days. After a 24-hr. interval, 6 retention and 5 transfer trials were performed. Dependent variables were time to complete an event in seconds and number of errors. No significant differences were found for retention and transfer. These results are discussed in terms of how the amount of practice, task-related factors, and memory consolidation mediate the contextual interference effect.
Dovis, Sebastiaan; Van der Oord, Saskia; Wiers, Reinout W; Prins, Pier J M
2012-07-01
Visual-spatial Working Memory (WM) is the most impaired executive function in children with Attention-Deficit/Hyperactivity Disorder (ADHD). Some suggest that deficits in executive functioning are caused by motivational deficits. However, there are no studies that investigate the effects of motivation on the visual-spatial WM of children with and without ADHD. Studies examining this in executive functions other than WM show inconsistent results. These inconsistencies may be related to differences in the reinforcement used. The effects of different reinforcers on WM performance were investigated in 30 children with ADHD and 31 non-ADHD controls. A visual-spatial WM task was administered in four reinforcement conditions: Feedback-only, 1 euro, 10 euros, and a computer-game version of the task. In the Feedback-only condition, children with ADHD performed worse on the WM measure than controls. Although incentives significantly improved the WM performance of children with ADHD, even the strongest incentives (10 euros and Gaming) were unable to normalize their performance. Feedback-only provided sufficient reinforcement for controls to reach optimal performance, while children with ADHD required extra reinforcement. Only children with ADHD showed a decrease in performance over time. Importantly, the strongest incentives (10 euros and Gaming) normalized persistence of performance in these children, whereas 1 euro had no such effect. Both executive and motivational deficits give rise to visual-spatial WM deficits in ADHD. Problems with task persistence in ADHD result from motivational deficits. In ADHD reinforcement studies and clinical practice (e.g., assessment), reinforcement intensity can be a confounding factor and should be taken into account. Gaming can be a cost-effective way to maximize performance in ADHD.
Incorporation of RAM techniques into simulation modeling
NASA Astrophysics Data System (ADS)
Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.
1995-01-01
This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model representing the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve operational performance and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs (upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.) is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures that have an impact on design requirements are collected. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.
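The task-based stochastic model described above can be illustrated with a toy Monte Carlo pass through a mission network. The task list below mirrors the FARV mission steps named in the abstract, but every duration and failure/repair figure is an invented placeholder:

```python
import random

random.seed(42)

# Hypothetical mission task network: (task name, nominal duration in minutes,
# failure probability per task, repair time in minutes if a failure occurs).
MISSION = [
    ("upload",             30, 0.05, 45),
    ("travel_to_afas",     20, 0.10, 60),
    ("refuel",             15, 0.02, 30),
    ("tactical_move",      25, 0.08, 50),
    ("return_to_resupply", 20, 0.10, 60),
]

def run_mission():
    """One stochastic pass through the task network; returns total time."""
    t = 0.0
    for name, duration, p_fail, repair in MISSION:
        t += duration
        if random.random() < p_fail:  # failure-management subnetwork
            t += repair
    return t

times = [run_mission() for _ in range(10_000)]
print(f"mean mission time: {sum(times) / len(times):.1f} min")
```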
From photons to big-data applications: terminating terabits
Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A
2016-03-06
Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573
A computer simulation experiment of supervisory control of remote manipulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mccandlish, S. G.
1966-01-01
A computer simulation of a remote manipulation task and a rate-controlled manipulator is described. Some low-level automatic decision-making ability, which could be used at the operator's discretion to augment direct continuous control, was built into the manipulator. Experiments examined the effects of transmission delay, dynamic lag, and intermittent vision on human manipulative ability. Delay does not make remote manipulation impossible. Intermittent visual feedback and the absence of rate information in the display presented to the operator do not seem to impair the operator's performance. A small-capacity visual feedback channel may be sufficient for remote manipulation tasks, or one channel might be time-shared between several operators. In other experiments the operator called in sequence various on-site automatic control programs of the machine, and thereby acted as a supervisor. The supervisory mode of operation has some advantages when the task to be performed is difficult for a human controlling directly.
Optimizing the number of steps in learning tasks for complex skills.
Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G
2005-06-01
Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.
Comprehensive silicon solar-cell computer modeling
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1984-01-01
A comprehensive silicon solar cell computer modeling scheme was developed to perform the following tasks: (1) modeling and analysis of the net charge distribution in quasineutral regions; (2) analysis of the experimentally determined temperature behavior of Spire Corp. n+pp+ solar cells, in which the n+ emitter is formed by ion implantation of 75As or 31P; and (3) initial validation of the computer simulation program using Spire Corp. n+pp+ cells.
NASA Technical Reports Server (NTRS)
Hanley, G.
1979-01-01
Computer-assisted design of a gallium arsenide solid-state dc-to-RF converter with supportive fabrication data was investigated. Specific tasks performed include: computer program checkout; amplifier comparisons; computer design analysis of GaAs solar cells; and GaAs diode evaluation. Results obtained in the design and evaluation of transistors for the microwave space power system are presented.
2013-01-01
Objective. This study compared the relationship between computer experience and performance on computerized cognitive tests and a traditional paper-and-pencil cognitive test in a sample of older adults (N = 634). Method. Participants completed computer experience and computer attitudes questionnaires, three computerized cognitive tests (Useful Field of View (UFOV) Test, Road Sign Test, and Stroop task) and a paper-and-pencil cognitive measure (Trail Making Test). Multivariate analysis of covariance was used to examine differences in cognitive performance across the four measures between those with and without computer experience after adjusting for confounding variables. Results. Although computer experience had a significant main effect across all cognitive measures, the effect sizes were similar. After controlling for computer attitudes, the relationship between computer experience and UFOV was fully attenuated. Discussion. Findings suggest that computer experience is not uniquely related to performance on computerized cognitive measures compared with paper-and-pencil measures. Because the relationship between computer experience and UFOV was fully attenuated by computer attitudes, this may imply that motivational factors are more influential to UFOV performance than computer experience. Our findings support the hypothesis that computer use is related to cognitive performance, and this relationship is not stronger for computerized cognitive measures. Implications and directions for future research are provided. PMID:22929395
Deep learning with coherent nanophotonic circuits
NASA Astrophysics Data System (ADS)
Shen, Yichen; Harris, Nicholas C.; Skirlo, Scott; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Sun, Xin; Zhao, Shijie; Larochelle, Hugo; Englund, Dirk; Soljačić, Marin
2017-07-01
Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today's computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach-Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.
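The building block of such a mesh, a 2x2 Mach-Zehnder interferometer (MZI) controlled by two phase shifters, is easy to sketch numerically. The convention below (external phase shifter, beamsplitter, internal phase shifter, beamsplitter) is one common choice rather than necessarily the paper's exact layout; the sketch only demonstrates that a cascade of programmable MZIs composes into a larger unitary transform of the optical modes:

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # ideal 50:50 beamsplitter

def mzi(theta, phi):
    """2x2 Mach-Zehnder transfer matrix (one common convention)."""
    ext = np.diag([np.exp(1j * phi), 1.0])      # external phase shifter
    inner = np.diag([np.exp(1j * theta), 1.0])  # internal phase shifter
    return BS @ inner @ BS @ ext

def embed(u2, n, k):
    """Embed a 2x2 unitary acting on modes (k, k+1) of an n-mode mesh."""
    u = np.eye(n, dtype=complex)
    u[k:k + 2, k:k + 2] = u2
    return u

# A tiny cascade on 4 optical modes; the phases here are arbitrary.
rng = np.random.default_rng(0)
U = np.eye(4, dtype=complex)
for k in (0, 1, 2, 1, 0):  # a short diagonal sweep through the mesh
    U = embed(mzi(*rng.uniform(0, 2 * np.pi, 2)), 4, k) @ U

print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: the mesh is unitary
```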
Efficient parallel architecture for highly coupled real-time linear system applications
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo
1988-01-01
A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.
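A simple scheduling routine of the kind mentioned above can be sketched as greedy list scheduling over a task graph: repeatedly take tasks whose predecessors have finished and place each on the processor that can start it earliest. The graph and durations below are hypothetical:

```python
# Hypothetical task graph: task -> (compute time, list of predecessors).
GRAPH = {
    "A": (2, []), "B": (3, ["A"]), "C": (1, ["A"]),
    "D": (4, ["B", "C"]), "E": (2, ["C"]), "F": (1, ["D", "E"]),
}

def list_schedule(graph, n_procs):
    """Greedy list scheduling: assign each ready task to the processor
    that can start it earliest. Returns task -> (proc, start, finish)."""
    finish, placed = {}, {}
    proc_free = [0.0] * n_procs
    remaining = dict(graph)
    while remaining:
        # Tasks whose predecessors have all finished are ready.
        ready = [t for t, (_, preds) in remaining.items()
                 if all(p in finish for p in preds)]
        for t in sorted(ready):
            dur, preds = remaining.pop(t)
            est = max([finish[p] for p in preds], default=0.0)
            p = min(range(n_procs), key=lambda i: max(proc_free[i], est))
            start = max(proc_free[p], est)
            proc_free[p] = finish[t] = start + dur
            placed[t] = (p, start, finish[t])
    return placed

for task, (proc, s, f) in list_schedule(GRAPH, 2).items():
    print(f"{task}: proc {proc}, {s}-{f}")
```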
Abbey, Craig K.; Zemp, Roger J.; Liu, Jie; Lindfors, Karen K.; Insana, Michael F.
2009-01-01
We investigate and extend the ideal observer methodology developed by Smith and Wagner to detection and discrimination tasks related to breast sonography. We provide a numerical approach for evaluating the ideal observer acting on radio-frequency (RF) frame data, which involves inversion of large nonstationary covariance matrices, and we describe a power-series approach to computing this inverse. Considering a truncated power series suggests that the RF data be Wiener-filtered before forming the final envelope image. We have compared human performance for Wiener-filtered and conventional B-mode envelope images using psychophysical studies for 5 tasks related to breast cancer classification. We find significant improvements in visual detection and discrimination efficiency in four of these five tasks. We also use the Smith-Wagner approach to distinguish between human and processing inefficiencies, and find that generally the principle limitation comes from the information lost in computing the final envelope image. PMID:16468454
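The suggestion to Wiener-filter the RF data before forming the envelope can be sketched in one dimension with the classical filter H = S_s / (S_s + S_n). In the Python sketch below the A-line, echo spectrum, and noise level are synthetic stand-ins, not the study's data:

```python
import numpy as np
from scipy.signal import hilbert

def wiener_filter_rf(rf, signal_psd, noise_psd):
    """Frequency-domain Wiener filter H = S_s / (S_s + S_n) applied to one
    RF A-line; the PSDs are assumed known (or estimated elsewhere)."""
    H = signal_psd / (signal_psd + noise_psd)
    return np.fft.irfft(np.fft.rfft(rf) * H, n=len(rf))

# Toy A-line: a band-limited echo plus white noise (parameters hypothetical).
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
rf_clean = np.exp(-((t - 500) / 40.0) ** 2) * np.sin(2 * np.pi * 0.12 * t)
rf = rf_clean + 0.3 * rng.standard_normal(n)

freqs = np.fft.rfftfreq(n)                           # cycles per sample
signal_psd = np.exp(-((freqs - 0.12) / 0.03) ** 2)   # assumed echo spectrum
noise_psd = np.full_like(freqs, 0.3 ** 2)            # white-noise level
filtered = wiener_filter_rf(rf, signal_psd, noise_psd)

# Envelope via the analytic signal, as in conventional B-mode processing.
envelope = np.abs(hilbert(filtered))
print(envelope.max().round(3))
```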
Crack Damage Detection Method via Multiple Visual Features and Efficient Multi-Task Learning Model.
Wang, Baoxian; Zhao, Weigang; Gao, Po; Zhang, Yufeng; Wang, Zhe
2018-06-02
This paper proposes an effective and efficient model for concrete crack detection. The presented work consists of two modules: multi-view image feature extraction and multi-task crack region detection. Specifically, multiple visual features (such as texture, edge, etc.) of image regions are calculated, which can suppress various background noises (such as illumination, pockmark, stripe, and blurring). With the computed multiple visual features, a novel crack region detector is proposed within a multi-task learning framework, which involves restraining the variability of different crack region features and emphasizing the separability between crack region features and complex background ones. Furthermore, the extreme learning machine is utilized to construct this multi-task learning model, leading to high computing efficiency and good generalization. Experimental results on practical concrete images demonstrate that the developed algorithm achieves favorable crack detection performance compared with traditional crack detectors.
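At its core, the extreme learning machine used above is a random hidden layer followed by a least-squares readout. A minimal single-task Python sketch (the paper's multi-task coupling terms are omitted; the data are synthetic):

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: a fixed random hidden layer
    followed by a ridge-regularized least-squares readout."""
    def __init__(self, n_hidden=200, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Only the output weights are solved for; hidden weights stay random.
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy two-class problem standing in for crack vs. background features.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)
clf = ELMClassifier().fit(X, y)
print(f"training accuracy: {(np.sign(clf.predict(X)) == y).mean():.2f}")
```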
1995-01-01
possible to determine communication points. For this version, a C program spawning Posix threads and using semaphores to synchronize would have to... performance such as the time required for network communication and synchronization, as well as issues of asynchrony and memory hierarchy. For example... enhances reusability. Process (or task) parallel computations can also be succinctly expressed with a small set of process creation and synchronization
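The fragment above mentions a C program spawning POSIX threads synchronized with semaphores. As a loose analogue, and to keep all sketches in this document in one language, a minimal producer/consumer pair with a counting semaphore in Python's threading module looks like this:

```python
import threading

sem = threading.Semaphore(0)  # counts items ready for the consumer
buf = []

def producer():
    for i in range(3):
        buf.append(i * i)  # stand-in for real computation
        sem.release()      # signal the consumer that data is ready

def consumer():
    for _ in range(3):
        sem.acquire()      # block until the producer signals
        print("consumed", buf.pop(0))

threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```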
Phased models for evaluating the performability of computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.; Meyer, J. F.
1979-01-01
A phase-by-phase modelling technique is introduced to evaluate a fault tolerant system's ability to execute different sets of computational tasks during different phases of the control process. Intraphase processes are allowed to differ from phase to phase. The probabilities of interphase state transitions are specified by interphase transition matrices. Based on constraints imposed on the intraphase and interphase transition probabilities, various iterative solution methods are developed for calculating system performability.
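The phase-by-phase composition can be written out directly: the state distribution is propagated through each intraphase transition matrix and each interphase transition matrix in mission order. All matrices below are hypothetical three-state examples (up, degraded, failed):

```python
import numpy as np

# Intraphase transition matrices (rows sum to 1) for two mission phases.
P_phase1 = np.array([[0.95, 0.04, 0.01],
                     [0.00, 0.90, 0.10],
                     [0.00, 0.00, 1.00]])
P_phase2 = np.array([[0.90, 0.08, 0.02],
                     [0.00, 0.85, 0.15],
                     [0.00, 0.00, 1.00]])
# Interphase transition matrix: reconfiguration between phases may
# recover a degraded system with some probability.
H_12 = np.array([[1.0, 0.0, 0.0],
                 [0.6, 0.4, 0.0],
                 [0.0, 0.0, 1.0]])

p0 = np.array([1.0, 0.0, 0.0])            # start fully operational
p_end = p0 @ P_phase1 @ H_12 @ P_phase2   # phase-by-phase composition
print("state distribution at mission end:", p_end.round(4))
```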
Software systems for modeling articulated figures
NASA Technical Reports Server (NTRS)
Phillips, Cary B.
1989-01-01
Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet insure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.
Unsteady Aero Computation of a 1 1/2 Stage Large Scale Rotating Turbine
NASA Technical Reports Server (NTRS)
To, Wai-Ming
2012-01-01
This report is the documentation of the work performed for the Subsonic Rotary Wing Project under the NASA s Fundamental Aeronautics Program. It was funded through Task Number NNC10E420T under GESS-2 Contract NNC06BA07B in the period of 10/1/2010 to 8/31/2011. The objective of the task is to provide support for the development of variable speed power turbine technology through application of computational fluid dynamics analyses. This includes work elements in mesh generation, multistage URANS simulations, and post-processing of the simulation results for comparison with the experimental data. The unsteady CFD calculations were performed with the TURBO code running in multistage single passage (phase lag) mode. Meshes for the blade rows were generated with the NASA developed TCGRID code. The CFD performance is assessed and improvements are recommended for future research in this area. For that, the United Technologies Research Center's 1 1/2 stage Large Scale Rotating Turbine was selected to be the candidate engine configuration for this computational effort because of the completeness and availability of the data.
NASA Technical Reports Server (NTRS)
Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve
2004-01-01
The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design for the MPP with MPI uses the concept of maintaining a similar code structure between the whole domain and the portions after decomposition; hence the model follows the same integration for single and multiple tasks (CPUs). It also requires minimal changes to the original code, so it is easily modified and/or managed by the model developers and users who have little knowledge of MPP. The entire model domain can be sliced into a one- or two-dimensional decomposition with a halo regime, which is overlaid on the partial domains. The halo regime requires that no data be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for the spectral transform (fast Fourier transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions scale with about 99% efficiency up to 256 tasks, but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computation than the compressible version; 3) equal or approximately equal numbers of slices in the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation during computation.
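The halo-exchange pattern described above (no cross-task data fetches during computation, explicit MPI updates between stages) can be sketched with mpi4py for a one-dimensional decomposition; the stencil, sizes, and file name are illustrative:

```python
# Run with, e.g.: mpirun -n 4 python halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# One-dimensional decomposition: each task owns nloc points plus a
# one-point halo cell on each side.
nloc = 8
u = np.full(nloc + 2, float(rank))

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Halo update before the next computational stage: send edge values to
# neighbours, receive their edges into the halo cells.
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# A stencil over interior points can now use up-to-date halo values.
u[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]
print(rank, u.round(2))
```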
Experimental evaluation of the concept of supervisory manipulation
NASA Technical Reports Server (NTRS)
Brooks, T. L.; Sheridan, T. B.
1982-01-01
A computer-controlled teleoperator system based on task-referenced, sensor-aided control has been developed to study supervisory manipulation. This system, called SUPERMAN, is capable of performing complicated tasks in real time by utilizing the operator for high-level functions related to the unpredictable portions of a task, while the subordinate machine performs the more well-defined subtasks under human supervision. To determine whether supervisory control schemes such as these offer any advantage over manual control under real-time conditions, a number of experiments involving both simple and complicated tasks were performed. Six representative tasks were chosen for the study: (1) obtaining a tool from a rack, (2) returning the tool to the rack, (3) removing a nut, (4) placing samples in a storage bin, (5) opening and closing a valve, and (6) digging with a shovel. The experiments were performed under simulated conditions using four forms of manual control (i.e., switch rate, joystick rate, master-slave position control, and master-slave with force feedback), as well as supervisory control. Through these experiments the effectiveness and quality of control were evaluated on the basis of the time required to complete each portion of the task and the type and number of errors that occurred.
Merrill, Edward C; Yang, Yingying; Roskos, Beverly; Steele, Sara
2016-01-01
Previous studies have reported sex differences in wayfinding performance among adults. Men are typically better at using Euclidean information and survey strategies, while women are better at using landmark information and route strategies. However, relatively few studies have examined sex differences in wayfinding in children. This research investigated relationships between route learning performance and two general abilities, spatial ability and verbal memory, in 153 boys and girls between 6 and 12 years of age. Children completed a battery of spatial ability tasks (a two-dimensional mental rotation task, a paper folding task, a visuo-spatial working memory task, and a Piagetian water level task) and a verbal memory task. In the route learning task, they had to learn a route through a series of hallways presented via computer. Boys had better overall route learning performance than did girls. In fact, the difference between boys and girls was constant across the age range tested. Structural equation modeling of the children's performance revealed that spatial abilities and verbal memory were significant contributors to route learning performance. However, there were different patterns of correlates for boys and girls. For boys, spatial abilities contributed to route learning while verbal memory did not. In contrast, for girls both spatial abilities and verbal memory contributed to their route learning performance. This difference may reflect the precursor of a strategic difference between boys and girls in wayfinding that is commonly observed in adults.
Grantcharov, T P; Bardram, L; Funch-Jensen, P; Rosenberg, J
2003-07-01
The impact of gender and hand dominance on operative performance may be a subject of prejudice among surgeons, reportedly leading to discrimination and lack of professional promotion. However, very little objective evidence is available on the matter. This study was conducted to identify factors that influence surgeons' performance, as measured by a virtual reality computer simulator for laparoscopic surgery. The study included 25 surgical residents who had limited experience with laparoscopic surgery, having performed fewer than 10 laparoscopic cholecystectomies. The participants were registered according to their gender, hand dominance, and experience with computer games. All of the participants performed 10 repetitions of the six tasks on the Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) within 1 month. Assessment of laparoscopic skills was based on three parameters measured by the simulator: time, errors, and economy of hand movement. Differences in performance existed between the compared groups. Men completed the tasks in less time than women (p = 0.01, Mann-Whitney test), but there was no statistical difference between the genders in the number of errors and unnecessary movements. Individuals with right-hand dominance performed fewer unnecessary movements (p = 0.045, Mann-Whitney test), and there was a trend toward better results in terms of time and errors among the residents with right-hand dominance than among those with left-hand dominance. Users of computer games made fewer errors than nonusers (p = 0.035, Mann-Whitney test). The study provides objective evidence of a difference in laparoscopic skills between surgeons of differing gender, hand dominance, and computer experience. These results may influence the future development of training programs for laparoscopic surgery. They also pose a challenge to individuals responsible for the selection and training of residents.
Caballero Sánchez, Carla; Barbado Murillo, David; Davids, Keith; Moreno Hernández, Francisco J
2016-06-01
This study investigated the extent to which specific interacting constraints of performance might increase or decrease the emergent complexity in a movement system, and whether this could affect the relationship between observed movement variability and the central nervous system's capacity to adapt to perturbations during balancing. Fifty-two healthy volunteers performed eight trials where different performance constraints were manipulated: task difficulty (three levels) and visual biofeedback conditions (with and without the center of pressure (COP) displacement and a target displayed). Balance performance was assessed using COP-based measures: mean velocity magnitude (MVM) and bivariate variable error (BVE). To assess the complexity of COP, fuzzy entropy (FE) and detrended fluctuation analysis (DFA) were computed. ANOVAs showed that MVM and BVE increased when task difficulty increased. During biofeedback conditions, individuals showed higher MVM but lower BVE at the easiest level of task difficulty. Overall, higher FE and lower DFA values were observed when biofeedback was available. On the other hand, FE reduced and DFA increased as difficulty level increased, in the presence of biofeedback. However, when biofeedback was not available, the opposite trend in FE and DFA values was observed. Regardless of changes to task constraints and the variable investigated, balance performance was positively related to complexity in every condition. Data revealed how specificity of task constraints can result in an increase or decrease in complexity emerging in a neurobiological system during balance performance.
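Detrended fluctuation analysis, one of the complexity measures computed above, can be implemented compactly: integrate the signal, detrend it in windows of increasing size, and fit the log-log slope of the residual fluctuations. A minimal Python sketch (window sizes and test signals are illustrative):

```python
import numpy as np

def dfa(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis; returns the scaling exponent alpha."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())  # integrated profile
    F = []
    for n in scales:
        f2 = []
        for w in range(len(y) // n):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            f2.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
print(dfa(rng.standard_normal(2048)))             # white noise: alpha ~ 0.5
print(dfa(np.cumsum(rng.standard_normal(2048))))  # random walk: alpha ~ 1.5
```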
Introducing the Microcomputer into Undergraduate Tax Courses.
ERIC Educational Resources Information Center
Dillaway, Manson P.; Savage, Allan H.
Although accountants have used computers for tax planning and tax return preparation for many years, tax education has been slow to reflect the increasing role of computers in tax accounting. The following are only some of the tasks that a business education department offering undergraduate tax courses for accounting majors should perform when…
Running R Statistical Computing Environment Software on the Peregrine
R is a collaborative project that supports the development of new statistical methodologies and enjoys a large user base. The CRAN task view for High Performance Computing describes packages and programming paradigms that better leverage modern HPC systems.
MOVANAID: An Interactive Aid for Analysis of Movement Capabilities.
ERIC Educational Resources Information Center
Cooper, George E.; And Others
A computer-drive interactive aid for movement analysis, called MOVANAID, has been developed to be of assistance in the performance of certain Army intelligence processing tasks in a tactical environment. It can compute fastest travel times and paths through road networks for military units of various types, as well as fastest times in which…
Smaller Satellite Operations Near Geostationary Orbit
2007-09-01
At the time, this was considered a very difficult task, due to the complexity involved with creating computer code to autonomously perform... computer systems and even permanently damage equipment. Depending on the solar cycle, solar weather will be properly characterized and modeled to...
NASA Technical Reports Server (NTRS)
Kyle, R. G.
1972-01-01
Information transfer between the operator and computer-generated display systems is an area where the human factors engineer discovers little useful design data relating human performance to system effectiveness. This study utilized a computer-driven, cathode-ray-tube graphic display to quantify human response speed in a sequential information processing task. The performance criterion was response time to sixteen cell elements of a square matrix display. A stimulus signal instruction specified selected cell locations by both row and column identification. An equally probable number code, from one to four, was assigned at random to the sixteen cells of the matrix and correspondingly required one of four matched keyed-response alternatives. The display format corresponded to a sequence of diagnostic system maintenance events that enabled the operator to verify prime system status, engage backup redundancy for failed subsystem components, and exercise alternate decision-making judgments. The experimental task bypassed the skilled decision-making element and computer processing time, in order to determine a lower bound on the basic response speed for a given stimulus/response hardware arrangement.
Information processing via physical soft body
Nakajima, Kohei; Hauser, Helmut; Li, Tao; Pfeifer, Rolf
2015-01-01
Soft machines have recently gained prominence due to their inherent softness and the resulting safety and resilience in applications. However, these machines also have disadvantages, as they respond with complex body dynamics when stimulated. These dynamics exhibit a variety of properties, including nonlinearity, memory, and potentially infinitely many degrees of freedom, which are often difficult to control. Here, we demonstrate that these seemingly undesirable properties can in fact be assets that can be exploited for real-time computation. Using body dynamics generated from a soft silicone arm, we show that they can be employed to emulate desired nonlinear dynamical systems. First, by using benchmark tasks, we demonstrate that the nonlinearity and memory within the body dynamics can increase the computational performance. Second, we characterize our system’s computational capability by comparing its task performance with a standard machine learning technique and identify its range of validity and limitation. Our results suggest that soft bodies are not only impressive in their deformability and flexibility but can also be potentially used as computational resources on top and for free. PMID:26014748
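The scheme described above, a fixed nonlinear body whose dynamics are tapped by a trained linear readout, is the reservoir computing idea. In the Python sketch below a leaky echo state network stands in for the physical silicone arm; all sizes, parameters, and the toy memory task are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 100, 2000

# Fixed random "body": only the linear readout is trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius

u = rng.uniform(-1, 1, size=T)  # input stream
# Toy task requiring both memory and nonlinearity: square of a past input.
target = np.array([u[max(t - 2, 0)] ** 2 for t in range(T)])

X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])  # "body" dynamics
    X[t] = x

# Ridge-regression readout, trained on the first half of the run.
lam = 1e-6
A = X[:1000].T @ X[:1000] + lam * np.eye(n_res)
W_out = np.linalg.solve(A, X[:1000].T @ target[:1000])
pred = X[1000:] @ W_out
print(f"test MSE: {np.mean((pred - target[1000:]) ** 2):.4f}")
```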
Feasibility study for the implementation of NASTRAN on the ILLIAC 4 parallel processor
NASA Technical Reports Server (NTRS)
Field, E. I.
1975-01-01
The ILLIAC IV, a fourth-generation multiprocessor using parallel processing hardware concepts, is operational at Moffett Field, California. Its ability to excel at matrix manipulation makes the ILLIAC well suited for performing structural analyses using the finite element displacement method. The feasibility of modifying the NASTRAN (NASA structural analysis) computer program to make effective use of the ILLIAC IV was investigated. The characteristics of the ILLIAC and of the ARPANET, a telecommunications network which spans the continent and makes the ILLIAC accessible to nearly all major industrial centers in the United States, are summarized. Two distinct approaches are studied: retaining NASTRAN as it now operates on many of the host computers of the ARPANET to process the input and output while using the ILLIAC only for the major computational tasks, and installing NASTRAN to operate entirely in the ILLIAC environment. Though both alternatives offer similar and significant increases in computational speed over modern third-generation processors, the full installation of NASTRAN on the ILLIAC is recommended. Specifications are presented for performing that task, with manpower estimates and schedules to correspond.
Surgical simulation tasks challenge visual working memory and visual-spatial ability differently.
Schlickum, Marcus; Hedman, Leif; Enochsson, Lars; Henningsohn, Lars; Kjellin, Ann; Felländer-Tsai, Li
2011-04-01
New strategies for selection and training of physicians are emerging. Previous studies have demonstrated a correlation between visual-spatial ability and visual working memory with surgical simulator performance. The aim of this study was to perform a detailed analysis on how these abilities are associated with metrics in simulator performance with different task content. The hypothesis is that the importance of visual-spatial ability and visual working memory varies with different task contents. Twenty-five medical students participated in the study that involved testing visual-spatial ability using the MRT-A test and visual working memory using the RoboMemo computer program. Subjects were also trained and tested for performance in three different surgical simulators. The scores from the psychometric tests and the performance metrics were then correlated using multivariate analysis. MRT-A score correlated significantly with the performance metrics Efficiency of screening (p = 0.006) and Total time (p = 0.01) in the GI Mentor II task and Total score (p = 0.02) in the MIST-VR simulator task. In the Uro Mentor task, both the MRT-A score and the visual working memory 3-D cube test score as presented in the RoboMemo program (p = 0.02) correlated with Total score (p = 0.004). In this study we have shown that some differences exist regarding the impact of visual abilities and task content on simulator performance. When designing future cognitive training programs and testing regimes, one might have to consider that the design must be adjusted in accordance with the specific surgical task to be trained in mind.
Advantages of Parallel Processing and the Effects of Communications Time
NASA Technical Reports Server (NTRS)
Eddy, Wesley M.; Allman, Mark
2000-01-01
Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
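The trade-off the experiment probes can be captured with an idealized cost model: compute time shrinks as work is divided among n hosts while communication overhead grows with n, so an optimal n exists and shrinks as per-host communication gets more expensive (as on a satellite link). A minimal sketch, with an assumed linear overhead model and invented numbers:

```python
def total_time(work, n_hosts, per_host_comm):
    """Idealized distributed run time: perfectly divisible compute plus a
    fixed communication cost per participating host."""
    return work / n_hosts + per_host_comm * n_hosts

work = 3600.0            # seconds of compute on a single machine
for comm in (0.1, 5.0):  # LAN-like vs satellite-like per-host overhead (s)
    best = min(range(1, 101), key=lambda n: total_time(work, n, comm))
    t = total_time(work, best, comm)
    print(f"comm={comm:>4}s: best n={best:3d}, time={t:7.1f}s, "
          f"speedup={work / t:.1f}x")
```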
Scherer, Reinhold; Faller, Josef; Friedrich, Elisabeth V C; Opisso, Eloy; Costa, Ursula; Kübler, Andrea; Müller-Putz, Gernot R
2015-01-01
Brain-computer interfaces (BCIs) translate oscillatory electroencephalogram (EEG) patterns into action. Different mental activities modulate spontaneous EEG rhythms in various ways. Non-stationarity and inherent variability of EEG signals, however, make reliable recognition of modulated EEG patterns challenging. Able-bodied individuals who use a BCI for the first time achieve - on average - binary classification performance of about 75%. Performance in users with central nervous system (CNS) tissue damage is typically lower. User training generally enhances reliability of EEG pattern generation and thus also robustness of pattern recognition. In this study, we investigated the impact of mental tasks on binary classification performance in BCI users with CNS tissue damage, such as persons with stroke or spinal cord injury (SCI). Motor imagery (MI), that is, the kinesthetic imagination of movement (e.g. squeezing a rubber ball with the right hand), is the "gold standard" and is mainly used to modulate EEG patterns. Based on our recent results in able-bodied users, we hypothesized that pair-wise combination of "brain-teaser" (e.g. mental subtraction and mental word association) and "dynamic imagery" (e.g. hand and feet MI) tasks significantly increases classification performance of induced EEG patterns in the selected end-user group. Within-day (How stable is the classification within a day?) and between-day (How well does a model trained on day one perform on unseen data of day two?) analysis of variability of mental task pair classification in nine individuals confirmed the hypothesis. We found that the use of the classical MI task pair hand vs. feet leads to significantly lower classification accuracy - on average up to 15% lower - in most users with stroke or SCI. User-specific selection of task pairs was again essential to enhance performance. We expect that the evidence gained will contribute significantly to making imagery-based BCI technology accessible to a larger population of users, including individuals with special needs due to CNS damage.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, R.; Imbriale, W.; Liewer, P.; Lyons, J.; Manshadi, F.; Patterson, J.
1987-01-01
The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time domain finite difference solution of Maxwell's equations to solve for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, the problem size possible on the hypercube, with 128 megabytes of memory in a 32-node configuration, was compared with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with an XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with less than a 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Z.T.
2001-11-15
The objective of this project was to conduct high-performance computing research and teaching at AAMU, and to train African-American and other minority students and scientists in the computational science field for eventual employment with DOE. During the project period, eight tasks were accomplished. Student research assistantships, work-study positions, summer internships, and scholarships proved to be among the best ways to attract top-quality minority students. With DOE support, through research, summer internship, collaboration, and scholarship programs, AAMU successfully provided research and educational opportunities to minority students in fields related to computational science.
Bowman, Caroline H; Evans, Cathryn E Y; Turnbull, Oliver H
2005-02-01
In the last decade, the Iowa Gambling Task (IGT) has become a widely employed neuropsychological research instrument for the investigation of executive function. The task has been employed in a wide range of formats, from 'manual' procedures to more recently introduced computerised versions. Computer-based formats often require that responses on the task be artificially delayed by a number of seconds between trials to collect skin-conductance data. Participants, however, may become frustrated when they want to select from a particular deck in the time-limited versions--so that an unintended emotional experience of frustration might well disrupt a task presumed to be reliant on emotion-based learning. We investigated the effect of the various types of Iowa Gambling Task format on performance, using three types of task: the classic manual administration, with no time limitations; a computerised administration with a 6-s enforced delay; and a control computerised version with no time constraints. We also evaluated the subjective experience of participants on each task. There were no significant differences in behavioural performance between formats. Subjective experience measures on the task also showed consistent effects across all three formats, with substantial, and rapidly developing, awareness of which decks were 'good' and 'bad.'
Management of health care expenditure by soft computing methodology
NASA Astrophysics Data System (ADS)
Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad
2017-01-01
In this study, health care expenditure was managed using soft computing methodology. The main goal was to predict gross domestic product (GDP) from several health care expenditure factors. Soft computing methodologies were applied since GDP prediction is a very complex task. The performances of the proposed predictors were confirmed by simulation results. According to the results, support vector regression (SVR) has better prediction accuracy than the other soft computing methodologies. The soft computing methods benefit from global optimization capabilities, which help avoid local minima.
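The abstract names support vector regression as the best performer; a minimal sketch of such a predictor in scikit-learn follows. The synthetic data and the three expenditure factors are stand-in assumptions, not the study's dataset or parameters.

```python
# Minimal SVR sketch: predict a GDP-like output from health-expenditure factors.
# Data, feature meanings, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((100, 3))   # e.g. public, private, out-of-pocket spending shares
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(0, 0.1, 100)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:80], y[:80])
print("held-out R^2:", round(model.score(X[80:], y[80:]), 3))
```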
Evaluation of the leap motion controller as a new contact-free pointing device.
Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard
2014-12-24
This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.
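For readers unfamiliar with the method, a Fitts' law analysis fits movement time MT = a + b·ID, where the index of difficulty in the common Shannon formulation is ID = log2(D/W + 1) for target distance D and width W; device throughput is ID/MT. The trial values below are invented to show the arithmetic, not data from this paper.

```python
# Fitts' law arithmetic on made-up pointing trials (Shannon formulation).
import math

trials = [  # (distance_mm, width_mm, movement_time_s) - illustrative values
    (100, 10, 0.62),
    (200, 10, 0.81),
    (200, 20, 0.66),
    (400, 20, 0.85),
]

for d, w, mt in trials:
    idx = math.log2(d / w + 1)  # index of difficulty in bits
    print(f"D={d} W={w}: ID={idx:.2f} bits, throughput={idx / mt:.2f} bits/s")
```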
DOE Office of Scientific and Technical Information (OSTI.GOV)
GEL, Aytekin; Jiao, Yang; Emady, Heather
Two major challenges hinder the effective use and adoption of multiphase computational fluid dynamics tools by industry. The first is the need for significant computational resources, which is inversely proportional to the accuracy of solutions due to the computational intensity of the algorithms. The second barrier is assessing the prediction credibility and confidence in the simulation results. In this project, a multi-tiered approach was proposed under four broad activities to overcome these challenges while addressing all of the objectives outlined in FOA-0001238 through Phases 1 and 2 of the project. The present report consists of the results for only Phase 1, which was the funded performance period. From the start of the project, all of the objectives outlined in the FOA were addressed through four major activity tasks in an integrated and balanced fashion to improve adoption of the MFIX suite of solvers for industrial use. The first task aimed to improve the performance of MFIX-DEM, specifically targeting peak performance on Intel Xeon and Xeon Phi based systems, which are expected to be one of the primary high-performance computing platforms both affordable and available to industrial users in the next two to five years. However, due to a number of changes in the course of the project, the scope of the performance-improvement task was significantly reduced to avoid duplicate work; hence, more emphasis was placed on the other three tasks, as discussed below. The second task aimed at physical modeling enhancements through implementation of a polydispersity capability and validation of heat transfer models in MFIX. An extended verification and validation (V&V) study was performed for the new polydispersity feature implemented in MFIX-DEM, both for granular and coupled gas-solid flows. The features of the polydispersity capability and results for an industrially relevant problem were disseminated through journal papers (one published and one under review at the time of writing of the final technical report). As part of the validation efforts, another industrially relevant problem of interest based on rotary drums was studied for several modes of heat transfer, and results were presented at conferences. The third task was aimed at an important and unique contribution of the project: developing a unified uncertainty quantification framework by integrating MFIX-DEM with a graphical user interface (GUI) driven uncertainty quantification (UQ) engine, i.e., MFIX-GUI and PSUADE. The goal was to enable a user with only modest knowledge of statistics to effectively utilize the UQ framework offered with MFIX-DEM Phi to perform UQ analysis routinely. For Phase 1, a proof-of-concept demonstration of the proposed framework was completed and shared. Direct industry involvement was one of the key virtues of this project and was pursued through the fourth task. For this purpose, even at the proposal stage, the project team received strong interest in the proposed capabilities from two major corporations; these collaborations were further expanded throughout Phase 1, and a new collaboration with another major corporation from the chemical industry was also initiated. The level of interest received and the continued collaboration during Phase 1 clearly show the relevance and potential impact of the project for industrial users.
GPU computing in medical physics: a review.
Pratx, Guillem; Xing, Lei
2011-05-01
The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction, dose calculation and treatment plan optimization, and image processing.
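"Data-parallel" here means the same independent operation applied to every element of a large array, which is exactly the pattern GPUs accelerate. The NumPy sketch below shows the shape of such a task with an invented voxel-wise toy model (on a GPU, a drop-in array library such as CuPy could run the same expression); it illustrates the pattern only, not a medical-physics algorithm from the review.

```python
# Voxel-wise, embarrassingly parallel toy computation: each output voxel
# depends only on its own inputs, so the work maps directly onto GPU threads.
import numpy as np

rng = np.random.default_rng(42)
density = rng.random((128, 128, 128))   # hypothetical CT-like volume
fluence = rng.random((128, 128, 128))   # hypothetical beam fluence

dose = fluence * np.exp(-0.2 * density)  # invented elementwise "dose" model
print("mean toy dose:", dose.mean())
```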
ERIC Educational Resources Information Center
Chong, Raymond K. Y.; Mills, Bradley; Dailey, Leanna; Lane, Elizabeth; Smith, Sarah; Lee, Kyoung-Hyun
2010-01-01
We tested the hypothesis that a computational overload results when two activities, one motor and the other cognitive, that draw on the same neural processing pathways are performed concurrently. Healthy young adult subjects carried out two seemingly distinct tasks of maintaining standing balance control under conditions of low (eyes closed),…
The Relation between Acquisition of a Theory of Mind and the Capacity to Hold in Mind.
ERIC Educational Resources Information Center
Gordon, Anne C. L.; Olson, David R.
1998-01-01
Tested the hypothesized relationship between development of a theory of mind and increasing computational resources in 3- to 5-year-olds. Found that the correlations between performance on theory of mind tasks and dual processing tasks were as high as r = .64, suggesting that changes in working memory capacity allow the expression of, and arguably the…
2014-09-16
the display, matching the depth and vertical positioning of an identical reference or "target" object. This task served as a replication-and...
ERIC Educational Resources Information Center
Subiaul, Francys; Zimmermann, Laura; Renner, Elizabeth; Schilder, Brian; Barr, Rachel
2016-01-01
During the first 5 years of life, the versatility, breadth, and fidelity with which children imitate change dramatically. Currently, there is no model to explain what underlies such significant changes. To that end, the present study examined whether task-independent but domain-specific--elemental--imitation mechanism explains performance across…
Running High-Throughput Jobs on Peregrine | High-Performance Computing
…unique name (using "name=") and use the task name to create a unique output file name. For runs on … how many tasks to give to each worker at a time using the NITRO_COORD_OPTIONS environment variable. Finally, you start Nitro by executing launch_nitro.sh. Sample Nitro job script: to run a job using the…
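The fragment above describes a common high-throughput pattern: give every task a unique name and derive its output file from that name, then launch the coordinator. The sketch below generates such a task list; the "name=" line format and the launch command shown in the comments are assumptions inferred from the fragment, not documented Nitro syntax.

```python
# Hypothetical generator for a Nitro-style task list. The one-task-per-line
# "name=" layout is an assumption based on the page fragment above.
with open("tasklist.txt", "w") as f:
    for i in range(1, 101):
        task = f"task{i:03d}"
        # Unique task name, and an output file name derived from it.
        f.write(f"name={task} ./run_case.sh case{i:03d}.in > {task}.out\n")

# Submission would then look something like (invocation assumed):
#   NITRO_COORD_OPTIONS="..." launch_nitro.sh
```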
Effects of Whole-Body Motion Simulation on Flight Skill Development.
1981-10-01
computation requirements, compared to the implementation allowing for a deviate internal model, provided further motivation for assuming a correct... We are left with two more likely explanations for the apparent trends: (1) subjects were motivated differently by the different task configurations... because of modeling constraints. The notion of task-related motivational differences is explored in Appendix E. Sensitivity analysis performed with
2015-03-18
Problem (TSP) to solve, a canonical computer science problem that involves identifying the shortest itinerary for a hypothetical salesman traveling among a... also created working versions of the travelling salesperson problem, prisoners' dilemma, public goods game, ultimatum game, word ladders, and... the task within networks of others performing the task. Thus, we built five problems which could be embedded in networks: the traveling salesperson
In Search of the Optimal Path: How Learners at Task Use an Online Dictionary
ERIC Educational Resources Information Center
Hamel, Marie-Josee
2012-01-01
We have analyzed circa 180 navigation paths followed by six learners while they performed three language encoding tasks at the computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…
Greene, Runyu L; Azari, David P; Hu, Yu Hen; Radwin, Robert G
2017-11-01
Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and the techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors, using this computer vision approach, is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed, and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors as well as suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify those work elements in the task that contribute most to increased injury risk. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements. Copyright © 2017. Published by Elsevier Ltd.
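A stripped-down sketch of the kinematic core of this method: given the per-frame hand position produced by the tracker, derive the speed, a dominant motion frequency, and a duty-cycle estimate, the three components visualized in the heat map. The synthetic trajectory and the exertion threshold below are assumptions; the actual tool works on tracked video.

```python
# Task factors from a tracked hand trajectory: speed, dominant frequency,
# and duty cycle. The trajectory and threshold are synthetic stand-ins.
import numpy as np

fps = 30.0
t = np.arange(0, 10, 1 / fps)
x = 100 * np.sin(2 * np.pi * 0.8 * t)    # hypothetical repetitive motion (px)

speed = np.abs(np.gradient(x, 1 / fps))  # px/s for every frame

spectrum = np.abs(np.fft.rfft(x - x.mean()))
freqs = np.fft.rfftfreq(x.size, 1 / fps)
dominant_hz = freqs[spectrum.argmax()]   # repetition-frequency estimate

moving = speed > 0.2 * speed.max()       # assumed "active exertion" threshold
duty_cycle = moving.mean()               # fraction of frames spent moving

print(f"dominant frequency {dominant_hz:.2f} Hz, duty cycle {duty_cycle:.0%}")
```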
Sex differences in effects of testing medium and response format on a visuospatial task.
Cherney, Isabelle D; Rendell, Jariel A
2010-06-01
Sex differences on visuospatial tests are among the most reliably replicated. It is unclear to what extent these performance differences reflect underlying differences in skills or testing factors. To assess whether testing medium and response format affect visuospatial sex differences, performances of introductory psychology students (100 men, 104 women) were examined on a visuospatial task presented in paper-and-pencil and tablet computer forms. Both sexes performed better when tested on paper, although men outperformed women. The introduction of an open-ended component to the visuospatial task eliminated sex differences when prior spatial experiences were controlled, but men outperformed women when prior spatial experiences were not considered. In general, the open-ended version and computerized format of the test diminished performance, suggesting that response format and medium are testing factors that influence visuospatial abilities.
Kundhal, Pavi S; Grantcharov, Teodor P
2009-03-01
This study was conducted to validate the role of virtual reality computer simulation as an objective method for assessing laparoscopic technical skills. The authors aimed to investigate whether performance in the operating room, assessed using a modified Objective Structured Assessment of Technical Skill (OSATS), correlated with the performance parameters registered by a virtual reality laparoscopic trainer (LapSim). The study enrolled 10 surgical residents (3 females) with a median of 5.5 years (range, 2-6 years) since graduation who had similar limited experience in laparoscopic surgery (median, 5; range, 1-16 laparoscopic cholecystectomies). All the participants performed three repetitions of seven basic skills tasks on the LapSim laparoscopic trainer and one laparoscopic cholecystectomy in the operating room. The operating room procedure was video recorded and blindly assessed by two independent observers using a modified OSATS rating scale. Assessment in the operating room was based on three parameters: time used, error score, and economy of motion score. During the tasks on the LapSim, time, error parameters (tissue damage and millimeters of tissue damage [tasks 2-6]; an error score covering incomplete target areas, badly placed clips, and dropped clips [task 7]), and economy of movement parameters (path length and angular path) were registered. The correlation between time, economy, and error parameters during the simulated tasks and the operating room procedure was statistically assessed using Spearman's test. Significant correlations were demonstrated between the time used to complete the operating room procedure and the time used for task 7 (r(s) = 0.74; p = 0.015). The error score demonstrated during the laparoscopic cholecystectomy correlated well with the tissue damage in three of the seven tasks (p < 0.05), the millimeters of tissue damage during two of the tasks, and the error score in task 7 (r(s) = 0.67; p = 0.034). Furthermore, statistically significant correlations were observed between the economy of motion score from the operative procedure and LapSim's economy parameters (path length and angular path in six of the tasks) (p < 0.05). The current study demonstrated significant correlations between operative performance in the operating room (assessed using a well-validated rating scale) and psychomotor performance in a virtual environment assessed by a computer simulator. This provides strong evidence for the validity of the simulator system as an objective tool for assessing laparoscopic skills. Virtual reality simulation can be used in practice to assess technical skills relevant for minimally invasive surgery.
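The statistical core of this validation, rank correlation between a simulator parameter and an operating-room measure, is a one-liner with SciPy; the scores below are invented for ten hypothetical residents.

```python
# Spearman rank correlation between simulator and OR metrics (invented data).
from scipy.stats import spearmanr

simulator_time = [312, 280, 355, 241, 390, 268, 301, 330, 295, 260]  # s, LapSim
or_time = [41, 38, 47, 30, 52, 35, 40, 45, 39, 33]                   # min, OR

rho, p = spearmanr(simulator_time, or_time)
print(f"r(s) = {rho:.2f}, p = {p:.3f}")
```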
Bilinearity, Rules, and Prefrontal Cortex
Dayan, Peter
2007-01-01
Humans can be instructed verbally to perform computationally complex cognitive tasks; their performance then improves relatively slowly over the course of practice. Many skills underlie these abilities; in this paper, we focus on the particular question of a uniform architecture for the instantiation of habitual performance and the storage, recall, and execution of simple rules. Our account builds on models of gated working memory, and involves a bilinear architecture for representing conditional input-output maps and for matching rules to the state of the input and working memory. We demonstrate the performance of our model on two paradigmatic tasks used to investigate prefrontal and basal ganglia function. PMID:18946523
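The core representational idea, a bilinear map in which a rule vector held in working memory gates how input is transformed into output, fits in a few lines of NumPy. The dimensions and random tensor below are illustrative assumptions, not the paper's trained model.

```python
# Bilinear conditional input-output map: a rule vector selects, through a
# third-order tensor, which input-to-output mapping is applied.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 5, 6))   # (output_dim, rule_dim, input_dim), assumed
rule = rng.normal(size=5)        # e.g. contents of a gated working-memory slot
x = rng.normal(size=6)           # current input

# o_i = sum_{r,j} W[i, r, j] * rule[r] * x[j]
output = np.einsum("irj,r,j->i", W, rule, x)
print(output)
```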
1982-12-01
task, Navy pilots were asked to draw symbols for a variety of objects (such as oilfield, airport, and factory) for each of the following... tasks have the additional advantages of: ease of testing many participants simultaneously, simplicity of performing the required tasks, and capability
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Rouse, W. B.; Chu, Y. Y.; Greenstein, J. S.; Walden, R. S.
1976-01-01
An investigation was made of the interaction between a human pilot and automated on-board decision-making systems. Research was initiated on pilot problem solving in automated and semi-automated flight management systems, and attempts were made to develop a model of human decision making in a multi-task situation. The allocation of responsibility between human and computer was studied, and various pilot performance parameters were discussed for varying degrees of automation. Optimal allocation of responsibility between human and computer was considered, and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally, the design of displays, controls, procedures, and computer aids for problem-solving tasks in automated and semi-automated systems was considered.
Postural dynamism during computer mouse and keyboard use: A pilot study.
Van Niekerk, S M; Fourie, S M; Louw, Q A
2015-09-01
Prolonged sedentary computer use is a risk factor for musculoskeletal pain. The aim of this study was to explore postural dynamism during two common computer tasks, namely mouse use and keyboard typing. Postural dynamism was described as the total number of postural changes that occurred during the data capture period. Twelve participants were recruited to perform a mouse and a typing task. The data of only eight participants could be analysed. A 3D motion analysis system measured the number of cervical and thoracic postural changes as well as the range in which the postural changes occurred. The study findings illustrate that there is less postural dynamism of the cervical and thoracic spinal regions during computer mouse use than during keyboard typing. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
NASA Astrophysics Data System (ADS)
Xavier, M. P.; do Nascimento, T. M.; dos Santos, R. W.; Lobosco, M.
2014-03-01
The development of computational systems that mimic the physiological response of organs or even the entire body is a complex task. One of the issues that makes this task extremely complex is the huge computational resources needed to execute the simulations. For this reason, the use of parallel computing is mandatory. In this work, we focus on the simulation of the temporal and spatial behaviour of some human innate immune system cells and molecules in a small three-dimensional section of tissue. To perform this simulation, we use multiple Graphics Processing Units (GPUs) in a shared-memory environment. Despite the high initialization and communication costs imposed by the use of GPUs, the techniques used to implement the HIS simulator have proven very effective in achieving this purpose.
Edelman, Bradley J; Meng, Jianjun; Gulachek, Nicholas; Cline, Christopher C; He, Bin
2018-05-01
EEG-based brain-computer interface (BCI) technology creates non-biological pathways for conveying a user's mental intent solely through noninvasively measured neural signals. While optimizing the performance of a single task has long been the focus of BCI research, in order to translate this technology into everyday life, realistic situations, in which multiple tasks are performed simultaneously, must be investigated. In this paper, we explore the concept of cognitive flexibility, or multitasking, within the BCI framework by utilizing a 2-D cursor control task, using sensorimotor rhythms (SMRs), and a four-target visual attention task, using steady-state visual evoked potentials (SSVEPs), both individually and simultaneously. We found no significant difference between the accuracy of the tasks when executing them alone (SMR-57.9% ± 15.4% and SSVEP-59.0% ± 14.2%) and simultaneously (SMR-54.9% ± 17.2% and SSVEP-57.5% ± 15.4%). These modest decreases in performance were supported by similar, non-significant changes in the electrophysiology of the SSVEP and SMR signals. In this sense, we report that multiple BCI tasks can be performed simultaneously without a significant deterioration in performance; this finding will help drive these systems toward realistic daily use in which a user's cognition will need to be involved in multiple tasks at once.
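The two signal types combined here are decoded differently: SMR control tracks band power in the mu band over sensorimotor cortex, while SSVEP detection looks for spectral power at the flicker frequency of the attended target. The sketch below computes both features on a synthetic one-channel trace; channel roles, band edges, and flicker frequencies are assumptions, not the paper's pipeline.

```python
# SMR band-power and SSVEP frequency features on a synthetic EEG trace.
import numpy as np

fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(7)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)  # fake C3

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

mu_power = spectrum[(freqs >= 8) & (freqs <= 13)].mean()  # SMR feature

flicker_hz = [8.57, 10.0, 12.0, 15.0]  # assumed target flicker frequencies
scores = [spectrum[np.argmin(np.abs(freqs - f))] for f in flicker_hz]
print("mu power:", round(mu_power, 1), "| attended target:", int(np.argmax(scores)))
```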
A rate-controlled teleoperator task with simulated transport delays
NASA Technical Reports Server (NTRS)
Pennington, J. E.
1983-01-01
A teleoperator-system simulation was used to examine the effects of two control modes (joint-by-joint and resolved-rate), a proximity-display method, and time delays (up to 2 sec) on the control of a five-degree-of-freedom manipulator performing a probe-in-hole alignment task. Four subjects used proportional rotational control and discrete (on-off) translation control with computer-generated visual displays. The proximity display enabled subjects to separate rotational errors from displacement (translation) errors; thus, when the proximity display was used with resolved-rate control, the simulated task was trivial. The time required to perform the simulated task increased linearly with time delay, but time delays had no effect on alignment accuracy. Based on the results of this simulation, several future studies are recommended.
Douglas, Heather E; Raban, Magdalena Z; Walter, Scott R; Westbrook, Johanna I
2017-03-01
Multi-tasking is an important skill for clinical work that has received limited research attention, and its impacts on clinical work are poorly understood. In contrast, there is substantial multi-tasking research in cognitive psychology, driver distraction, and human-computer interaction. This review synthesises evidence of the extent and impacts of multi-tasking on efficiency and task performance from healthcare and non-healthcare literature, to compare and contrast approaches, identify implications for clinical work, and develop an evidence-informed framework for guiding the measurement of multi-tasking in future healthcare studies. The results showed that healthcare studies using direct observation have focused on descriptive studies quantifying concurrent multi-tasking and its frequency in different contexts, with limited study of impact. In comparison, non-healthcare studies have applied predominantly experimental and simulation designs, focusing on interleaved and concurrent multi-tasking, and testing theories of the mechanisms by which multi-tasking impacts task efficiency and performance. We propose a framework to guide the measurement of multi-tasking in clinical settings that draws together lessons from these siloed research efforts. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Deconstructing and Reconstructing Cognitive Performance in Sleep Deprivation
Jackson, Melinda L.; Gunzelmann, Glenn; Whitney, Paul; Hinson, John M.; Belenky, Gregory; Rabat, Arnaud; Van Dongen, Hans P. A.
2012-01-01
Mitigation of cognitive impairment due to sleep deprivation in operational settings is critical for safety and productivity. Achievements in this area are hampered by limited knowledge about the effects of sleep loss on actual job tasks. Sleep deprivation has different effects on different cognitive performance tasks, but the mechanisms behind this task-specificity are poorly understood. In this context it is important to recognize that cognitive performance is not a unitary process, but involves a number of component processes. There is emerging evidence that these component processes are differentially affected by sleep loss. Experiments have been conducted to decompose sleep-deprived performance into underlying cognitive processes using cognitive-behavioral, neuroimaging and cognitive modeling techniques. Furthermore, computational modeling in cognitive architectures has been employed to simulate sleep-deprived cognitive performance on the basis of the constituent cognitive processes. These efforts are beginning to enable quantitative prediction of the effects of sleep deprivation across different task contexts. This paper reviews a rapidly evolving area of research, and outlines a theoretical framework in which the effects of sleep loss on cognition may be understood from the deficits in the underlying neurobiology to the applied consequences in real-world job tasks. PMID:22884948
NASA Astrophysics Data System (ADS)
Jeunet, Camille; Jahanpour, Emilie; Lotte, Fabien
2016-06-01
Objective. While promising, electroencephalography (EEG) based brain-computer interfaces (BCIs) are barely used due to their lack of reliability: 15% to 30% of users are unable to control a BCI. Standard training protocols may be partly responsible, as they do not satisfy recommendations from psychology. Our main objective was to determine in practice to what extent standard training protocols impact users' motor imagery based BCI (MI-BCI) control performance. Approach. We performed two experiments. The first consisted in evaluating the efficiency of a standard BCI training protocol for the acquisition of non-BCI related skills in a BCI-free context, which enabled us to rule out the possible impact of BCIs on the training outcome. Thus, participants (N = 54) were asked to perform simple motor tasks. The second experiment was aimed at measuring the correlations between motor tasks and MI-BCI performance. The ten best and ten worst performers of the first study were recruited for an MI-BCI experiment during which they had to learn to perform two MI tasks. We also assessed users' spatial ability and pre-training μ rhythm amplitude, as both have been related to MI-BCI performance in the literature. Main results. Around 17% of the participants were unable to learn to perform the motor tasks, which is close to the BCI illiteracy rate. This suggests that standard training protocols are suboptimal for skill teaching. No correlation was found between motor tasks and MI-BCI performance. However, spatial ability played an important role in MI-BCI performance. In addition, once the spatial ability covariate had been controlled for, using an ANCOVA, it appeared that participants who faced difficulty during the first experiment improved during the second while the others did not. Significance. These studies suggest that (1) standard MI-BCI training protocols are suboptimal for skill teaching, (2) spatial ability is confirmed as impacting MI-BCI performance, and (3) when faced with difficulty during pre-training, subjects seemed to explore more strategies and therefore learned better.
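The covariate-controlled comparison reported above corresponds to a standard ANCOVA; a sketch using the statsmodels formula API follows, with invented column names and values standing in for the study's variables.

```python
# ANCOVA sketch: MI-BCI performance by group, controlling for spatial ability.
# Column names and values are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "performance": [72, 65, 80, 58, 77, 61, 69, 74],   # MI-BCI accuracy (%)
    "group": ["best", "worst"] * 4,                    # first-experiment outcome
    "spatial": [31, 22, 35, 18, 33, 24, 27, 30],       # spatial-ability score
})

model = smf.ols("performance ~ C(group) + spatial", data=df).fit()
print(model.summary().tables[1])  # group effect with spatial ability held fixed
```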
Effects of age and auditory and visual dual tasks on closed-road driving performance.
Chaparro, Alex; Wood, Joanne M; Carberry, Trent
2005-08-01
This study investigated how driving performance of young and old participants is affected by visual and auditory secondary tasks on a closed driving course. Twenty-eight participants comprising two age groups (younger, mean age = 27.3 years; older, mean age = 69.2 years) drove around a 5.1-km closed-road circuit under both single and dual task conditions. Measures of driving performance included detection and identification of road signs, detection and avoidance of large low-contrast road hazards, gap judgment, lane keeping, and time to complete the course. The dual task required participants to verbally report the sums of pairs of single-digit numbers presented through either a computer speaker (auditorily) or a dashboard-mounted monitor (visually) while driving. Participants also completed a vision and cognitive screening battery, including LogMAR visual acuity, Pelli-Robson letter contrast sensitivity, the Trails test, and the Digit Symbol Substitution (DSS) test. Drivers reported significantly fewer signs, hit more road hazards, misjudged more gaps, and increased their time to complete the course under the dual task (visual and auditory) conditions compared with the single task condition. The older participants also reported significantly fewer road signs and drove significantly more slowly than the younger participants, and this was exacerbated for the visual dual task condition. The results of the regression analysis revealed that cognitive aging (measured by the DSS and Trails test) rather than chronologic age was a better predictor of the declines seen in driving performance under dual task conditions. An overall z score was calculated, which took into account both driving and the secondary task (summing) performance under the two dual task conditions. Performance was significantly worse for the auditory dual task compared with the visual dual task, and the older participants performed significantly worse than the young subjects. These findings demonstrate that multitasking had a significant detrimental impact on driving performance and that cognitive aging was the best predictor of the declines seen in driving performance under dual task conditions. These results have implications for use of mobile phones or in-vehicle navigational devices while driving, especially for older adults.
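The overall z score described above is a standard composite: standardize each measure across participants, align the signs so that higher always means better, then average. A small worked example with invented numbers:

```python
# Composite z score across driving and secondary-task measures (invented data).
import numpy as np

def zscore(a):
    a = np.asarray(a, dtype=float)
    return (a - a.mean()) / a.std(ddof=1)

signs_detected = [14, 11, 16, 9, 13]  # higher is better
hazards_hit = [1, 3, 0, 4, 2]         # lower is better, so its z is negated
sums_correct = [18, 15, 20, 12, 17]   # secondary (summing) task, higher better

overall = (zscore(signs_detected) - zscore(hazards_hit) + zscore(sums_correct)) / 3
print(np.round(overall, 2))
```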
Correlating Trainee Attributes to Performance in 3D CAD Training
ERIC Educational Resources Information Center
Hamade, Ramsey F.; Artail, Hassan A.; Sikstrom, Sverker
2007-01-01
Purpose: The purpose of this exploratory study is to identify trainee attributes relevant for development of skills in 3D computer-aided design (CAD). Design/methodology/approach: Participants were trained to perform cognitive tasks of comparable complexity over time. Performance data were collected on the time needed to construct test models, and…
Testing the Self-Efficacy-Performance Linkage of Social-Cognitive Theory.
ERIC Educational Resources Information Center
Harrison, Allison W.; Rainer, R. Kelly, Jr.; Hochwarter, Wayne A.; Thompson, Kenneth R.
1997-01-01
Briefly reviews Albert Bandura's Self-Efficacy Performance Model (ability to perform a task is influenced by an individual's belief in their capability). Tests this model with a sample of 776 university employees and computer-related knowledge and skills. Results supported Bandura's thesis. Includes statistical tables and a discussion of related…