Methods for Conducting Cognitive Task Analysis for a Decision Making Task.
1996-01-01
Cognitive task analysis (CTA) improves traditional task analysis procedures by analyzing the thought processes of performers while they complete a...for using these methods to conduct a CTA for domains which involve critical decision making tasks in naturalistic settings. The cognitive task analysis methods
A Method for Cognitive Task Analysis
1992-07-01
A method for cognitive task analysis is described based on the notion of 'generic tasks'. The method distinguishes three layers of analysis. At the...model for applied areas such as the development of knowledge-based systems and training, are discussed. Problem solving, Cognitive Task Analysis, Knowledge, Strategies.
Applied Cognitive Task Analysis (ACTA) Methodology
1997-11-01
experience-based cognitive skills. The primary goal of this project was to develop streamlined methods of Cognitive Task Analysis that would fill this need...We have made important progress in this direction. We have developed streamlined methods of Cognitive Task Analysis. Our evaluation study indicates...developed a CD-based stand-alone instructional package, which will make the Applied Cognitive Task Analysis (ACTA) tools widely accessible. A survey of the
Militello, L G; Hutton, R J
1998-11-01
Cognitive task analysis (CTA) is a set of methods for identifying cognitive skills, or mental demands, needed to perform a task proficiently. The product of the task analysis can be used to inform the design of interfaces and training systems. However, CTA is resource intensive and has previously been of limited use to design practitioners. A streamlined method of CTA, Applied Cognitive Task Analysis (ACTA), is presented in this paper. ACTA consists of three interview methods that help the practitioner to extract information about the cognitive demands and skills required for a task. ACTA also allows the practitioner to represent this information in a format that will translate more directly into applied products, such as improved training scenarios or interface recommendations. This paper will describe the three methods, an evaluation study conducted to assess the usability and usefulness of the methods, and some directions for future research for making cognitive task analysis accessible to practitioners. ACTA techniques were found to be easy to use, flexible, and to provide clear output. The information and training materials developed based on ACTA interviews were found to be accurate and important for training purposes.
Using a Knowledge Representations Approach to Cognitive Task Analysis.
ERIC Educational Resources Information Center
Black, John B.; And Others
Task analyses have traditionally been framed in terms of overt behaviors performed in accomplishing tasks and goals. Pioneering work at the Learning Research and Development Center looked at what contribution a cognitive analysis might make to current task analysis procedures, since traditional task analysis methods neither elicit nor capture…
Comparative Cognitive Task Analysis
2007-01-01
is to perform a task analysis to determine how people operate in a specific domain on a specific task. Cognitive Task Analysis (CTA) is a set of...accomplish a task. In this chapter, we build on CTA methods by suggesting that comparative cognitive task analysis (C2TA) can help solve the aforementioned
NASA Astrophysics Data System (ADS)
Panfil, Wawrzyniec; Moczulski, Wojciech
2017-10-01
This paper presents a control system for a group of mobile robots intended to carry out inspection missions. The main research problem was to define a control system that facilitates cooperation among the robots in carrying out the committed inspection tasks. Many well-known control systems use auctions for task allocation, where the subject of an auction is a task to be allocated. In missions characterized by a much larger number of tasks than robots, however, it appears better to make the robots, rather than the tasks, the subjects of the auctions. The second identified problem concerns one-sided robot-to-task fitness evaluation: simultaneously assessing robot-to-task fitness and the task's attractiveness for the robot should improve the overall effectiveness of the multi-robot system. The elaborated system assigns tasks to robots using various methods for evaluating fitness between robots and tasks, together with several task allocation methods. A multi-criteria analysis method is proposed that combines two assessments: the robot's competitive position for a task among the other robots, and the task's attractiveness for the robot among the other tasks. Task allocation methods applying this multi-criteria analysis are also proposed. Both the elaborated system and the proposed task allocation methods were verified in simulated experiments, with the object under test being a group of inspection mobile robots serving as a virtual counterpart of a real mobile-robot group.
Participatory Design Methods for C2 Systems (Proceedings/Presentation)
2006-01-01
Cognitive Task Analysis (CTA)...systems to support cognitive work such as is accomplished in a network-centric environment. Cognitive task analysis (CTA) methods are used to...of cognitive task analysis methodologies exist (Schraagen et al., 2000). However, many of these methods are skeptically viewed by a domain's
Integrating Cognitive Task Analysis into Instructional Systems Development.
ERIC Educational Resources Information Center
Ryder, Joan M.; Redding, Richard E.
1993-01-01
Discussion of instructional systems development (ISD) focuses on recent developments in cognitive task analysis and describes the Integrated Task Analysis Model, a framework for integrating cognitive and behavioral task analysis methods within the ISD model. Three components of expertise are analyzed: skills, knowledge, and mental models. (96…
Skill Components of Task Analysis
ERIC Educational Resources Information Center
Adams, Anne E.; Rogers, Wendy A.; Fisk, Arthur D.
2013-01-01
Some task analysis methods break down a task into a hierarchy of subgoals. Although an important tool of many fields of study, learning to create such a hierarchy (redescription) is not trivial. To further the understanding of what makes task analysis a skill, the present research examined novices' problems with learning Hierarchical Task…
A Review of Content and Task Analysis Methodology. Technical Report No. 2.
ERIC Educational Resources Information Center
Gibbons, Andrew S.
A review of the literature related to methods for analyzing content or tasks prior to instructional development is presented. The review classifies methods according to a two-dimensional matrix. The first dimension differentiates phases of analysis, each dealing with content and tasks of a particular scope and each generating certain…
ERIC Educational Resources Information Center
van der Molen, Hugo H.
1984-01-01
Describes a study designed to demonstrate that child pedestrian training objectives may be identified systematically through various task analysis methods, making use of different types of empirical information. Early approaches to analysis of pedestrian tasks are reviewed, and an outline of the Traffic Research Centre's pedestrian task analysis…
A Standard Procedure for Conducting Cognitive Task Analysis.
ERIC Educational Resources Information Center
Redding, Richard E.
Traditional methods for task analysis have been largely based on the Instructional Systems Development (ISD) model, which is widely used throughout industry and the military. The first part of this document gives an overview of cognitive task analysis, which is conducted within the first phase of ISD. The following steps of cognitive task analysis…
Using link analysis to explore the impact of the physical environment on pharmacist tasks.
Lester, Corey A; Chui, Michelle A
2016-01-01
National community pharmacy organizations have been redesigning pharmacies to better facilitate direct patient care. However, evidence suggests that changing the physical layout of a pharmacy before understanding how the environment impacts pharmacists' work may not achieve the desired benefits. This study describes an objective method for understanding how the physical layout of the pharmacy may affect how pharmacists perform tasks. Link analysis is a systems engineering method used to describe the influence of the physical environment on task completion. This study used a secondary data set of field notes collected from 9 h of direct observation in one mass-merchandise community pharmacy in the U.S. state of Wisconsin. A node is an individual location in the environment; a link is the movement between two nodes. Tasks were inventoried and task themes identified. The mean, minimum, and maximum number of links needed to complete each task were then determined and used to construct a link table. A link diagram is a graphical display showing the links in conjunction with the physical layout of the pharmacy. A total of 92 unique tasks were identified, resulting in 221 links. Tasks were sorted into five themes: patient care activities, insurance issues, verifying prescriptions, filling prescriptions, and other. Insurance issues required the greatest number of links, with a mean of 4.75. Verifying prescriptions and performing patient care were the most commonly performed tasks, with 36 and 30 unique task occurrences, respectively. Link analysis provides an objective method for identifying how a pharmacist interacts with the physical environment to complete tasks and gives designers useful information for targeting interventions to improve the effectiveness of pharmacists' work. Analysis beyond link analysis should be considered for large-scale system redesign.
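To make the link-analysis bookkeeping concrete, here is a minimal Python sketch of how a link table and link frequencies can be tallied from observation records; the node names, task themes, and counts are hypothetical, not data from the study.

```python
from collections import defaultdict

# Hypothetical observation records: each task instance is the ordered list of
# pharmacy locations (nodes) visited while completing it. Movement between two
# consecutive nodes is one link.
observations = {
    "insurance issue":     [["register", "computer", "phone", "computer", "register"],
                            ["computer", "phone", "computer", "phone", "computer"]],
    "verify prescription": [["fill station", "computer", "fill station"],
                            ["computer", "fill station"]],
}

link_table = {}
for task, instances in observations.items():
    link_counts = [len(nodes) - 1 for nodes in instances]   # links per instance
    link_table[task] = {
        "mean": sum(link_counts) / len(link_counts),
        "min": min(link_counts),
        "max": max(link_counts),
    }

# Tally how often each physical link is traversed, for drawing the link diagram.
link_freq = defaultdict(int)
for instances in observations.values():
    for nodes in instances:
        for a, b in zip(nodes, nodes[1:]):
            link_freq[frozenset((a, b))] += 1

for task, stats in link_table.items():
    print(task, stats)
```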
Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.
Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie
2010-07-01
Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces because human-centered computing issues are not systematically considered; such interfaces can be improved to give users EHR systems that are easy to use, easy to learn, and error-resistant. The objective was to evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task as a Mental (Internal) or Physical (External) operator. This analysis was performed independently by two analysts, and inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform the given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users and show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, users would spend about 22 min (independent of system response time) on data entry, of which 11 min are spent on the more effortful mental operators. The inter-rater reliability for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically identifies the following findings related to the performance of AHLTA: (1) a large average number of steps to complete common tasks, (2) high average execution time, and (3) a large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort required for the tasks.
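A small sketch of a KLM-style time estimate may help illustrate the technique. The operator times below are commonly cited KLM defaults (they vary across sources), and the task encoding is a hypothetical example, not one of the paper's 14 AHLTA tasks.

```python
# Commonly cited KLM operator times (seconds); exact values vary by source.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point with mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # mouse button press
}

def klm_estimate(ops: str) -> float:
    """Estimated execution time for a sequence of KLM operators, e.g. 'MPBKKKK'."""
    return sum(KLM_TIMES[o] for o in ops)

# Hypothetical encoding of one EHR data-entry step:
task = "MPB" + "M" + "K" * 12   # think, point, click, think, type 12 characters
total = klm_estimate(task)
mental = sum(KLM_TIMES[o] for o in task if o == "M")
print(f"estimated time {total:.2f}s, mental share {mental / total:.0%}")
```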
Cognitive task analysis: Techniques applied to airborne weapons training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terranova, M.; Seamster, T.L.; Snyder, C.E.
1989-01-01
This is an introduction to cognitive task analysis as it may be used in Naval Air Systems Command (NAVAIR) training development. The focus of a cognitive task analysis is human knowledge, and its methods of analysis are those developed by cognitive psychologists. This paper explains the role of cognitive task analysis in training development and presents the findings from a preliminary cognitive task analysis of airborne weapons operators. Cognitive task analysis is a collection of powerful techniques that are quantitative, computational, and rigorous. The techniques are currently not in wide use in the training community, so examples of this methodology are presented along with the results.
Cognitive task analysis: harmonizing tasks to human capacities.
Neerincx, M A; Griffioen, E
1996-04-01
This paper presents the development of a cognitive task analysis that assesses the task load of jobs and provides indicators for the redesign of jobs. General principles of human task performance were selected and subsequently integrated into current task modelling techniques. The resulting cognitive task analysis centres around four aspects of task load: the number of actions in a period, the ratio between knowledge- and rule-based actions, lengthy uninterrupted actions, and momentary overloading. The method consists of three stages: (1) construction of a hierarchical task model, (2) a time-line analysis and task load assessment, and (3) if necessary, adjustment of the task model. An application of the cognitive task analysis in railway traffic control showed its benefits over the 'old' task load analysis of the Netherlands Railways: it provided a provisional standard for traffic control jobs, identified two load risks -- momentary overloading and underloading -- and resulted in proposals to satisfy the standard and to diminish the two load risks.
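The four task-load aspects lend themselves to simple time-line bookkeeping. Below is a minimal sketch over a hypothetical action log; the 120-second threshold for "lengthy" actions and the overlap test for momentary overloading are illustrative choices, not the authors' definitions.

```python
from dataclasses import dataclass

@dataclass
class Action:                 # hypothetical time-line record
    start: float              # seconds from period start
    duration: float
    level: str                # "knowledge" or "rule" (cognitive control level)

timeline = [Action(0, 30, "rule"), Action(40, 200, "knowledge"),
            Action(250, 20, "rule"), Action(255, 35, "knowledge")]

n_actions = len(timeline)                                  # actions in the period
kb_ratio = sum(a.level == "knowledge" for a in timeline) / n_actions
lengthy = [a for a in timeline if a.duration > 120]        # threshold is illustrative

# Momentary overloading: more than one action demanded in the same time window.
def overlaps(a, b):
    return a.start < b.start + b.duration and b.start < a.start + a.duration

overloads = [(a, b) for i, a in enumerate(timeline)
             for b in timeline[i + 1:] if overlaps(a, b)]

print(n_actions, round(kb_ratio, 2), len(lengthy), len(overloads))
```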
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method is presented for summarizing task-related BOLD responses based on independent component analysis (ICA). In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or the spatial domain and applies ICA decomposition once to the combined data to extract task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected from the resulting components of the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one need not assume that the task-related BOLD time courses are identical across brain areas and subjects, as in a grand ICA decomposition of spatially concatenated fMRI data; nor need one assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across subjects. Both assumptions are problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in task-related BOLD effects, unlike the gICA approach, in which task-related BOLD effects can only be accounted for by a single unified BOLD model across subjects. As a result, the proposed ICAga method better fits task-related BOLD effects at the individual level and thus allows grouping of more appropriate multi-subject BOLD effects in the group analysis.
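A rough sketch of the single-subject-ICA-then-group workflow, using scikit-learn's FastICA on synthetic data; selecting the component by correlation with a task regressor follows the abstract's description, while the array shapes, component count, and group statistic are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def task_component(bold_tv, task_regressor, n_components=10, seed=0):
    """Spatial ICA for one subject: bold_tv is (time x voxels). Select the
    component whose time course correlates best with the task regressor."""
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    maps = ica.fit_transform(bold_tv.T)   # (voxels x components) spatial maps
    tcs = ica.mixing_                     # (time x components) time courses
    r = np.array([np.corrcoef(tcs[:, k], task_regressor)[0, 1]
                  for k in range(n_components)])
    best = int(np.argmax(np.abs(r)))
    return np.sign(r[best]) * maps[:, best]   # resolve ICA's sign ambiguity

# Hypothetical data: 5 subjects, 100 time points, 500 voxels, block design.
rng = np.random.default_rng(1)
regressor = np.tile([0.0] * 10 + [1.0] * 10, 5)
subject_maps = []
for _ in range(5):
    data = rng.standard_normal((100, 500))
    data[:, :50] += np.outer(regressor, np.ones(50))   # implant a task effect
    subject_maps.append(task_component(data, regressor))

group_map = np.mean(subject_maps, axis=0)   # grouped for, e.g., a one-sample test
```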
A Framework for Characterizing eHealth Literacy Demands and Barriers
Chan, Connie V; Kaufman, David R
2011-01-01
Background: Consumer eHealth interventions are of growing importance in the individual management of health and health behaviors. However, a range of access, resources, and skills barriers prevent health care consumers from fully engaging in and benefiting from the spectrum of eHealth interventions. Consumers may engage in a range of eHealth tasks, such as participating in health discussion forums and entering information into a personal health record. eHealth literacy names a set of skills and knowledge that are essential for productive interactions with technology-based health tools, such as proficiency in information retrieval strategies and communicating health concepts effectively. Objective: We propose a theoretical and methodological framework for characterizing the complexity of eHealth tasks, which can be used to diagnose and describe literacy barriers and inform the development of solution strategies. Methods: We adapted and integrated two existing theoretical models relevant to the analysis of eHealth literacy into a single framework to systematically categorize and describe task demands and user performance on tasks needed by health care consumers in the information age. The method derived from the framework is applied to (1) code task demands using a cognitive task analysis, and (2) code user performance on tasks. The framework and method are applied to the analysis of a Web-based consumer eHealth task with information-seeking and decision-making demands. We present the results from the in-depth analysis of the task performance of a single user as well as of 20 users on the same task to illustrate both the detailed analysis and the aggregate measures obtained and potential analyses that can be performed using this method. Results: The analysis shows that the framework can be used to classify task demands as well as the barriers encountered in user performance of the tasks. Our approach can be used to (1) characterize the challenges confronted by participants in performing the tasks, (2) determine the extent to which application of the framework to the cognitive task analysis can predict and explain the problems encountered by participants, and (3) inform revisions to the framework to increase accuracy of predictions. Conclusions: The results of this illustrative application suggest that the framework is useful for characterizing task complexity and for diagnosing and explaining barriers encountered in task completion. The framework and analytic approach can be a potentially powerful generative research platform to inform development of rigorous eHealth examination and design instruments, such as to assess eHealth competence, to design and evaluate consumer eHealth tools, and to develop an eHealth curriculum. PMID:22094891
Doytchev, Doytchin E; Szwillus, Gerd
2009-11-01
Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.
Heuristic Task Analysis on E-Learning Course Development: A Formative Research Study
ERIC Educational Resources Information Center
Lee, Ji-Yeon; Reigeluth, Charles M.
2009-01-01
Utilizing heuristic task analysis (HTA), a method developed for eliciting, analyzing, and representing expertise in complex cognitive tasks, a formative research study was conducted on the task of e-learning course development to further improve the HTA process. Three instructional designers from three different post-secondary institutions in the…
ERIC Educational Resources Information Center
Skinner, Anna; Diller, David; Kumar, Rohit; Cannon-Bowers, Jan; Smith, Roger; Tanaka, Alyssa; Julian, Danielle; Perez, Ray
2018-01-01
Background: Contemporary work in the design and development of intelligent training systems employs task analysis (TA) methods for gathering knowledge that is subsequently encoded into task models. These task models form the basis of intelligent interpretation of student performance within education and training systems. Also referred to as expert…
Skill components of task analysis
Rogers, Wendy A.; Fisk, Arthur D.
2017-01-01
Some task analysis methods break down a task into a hierarchy of subgoals. Although an important tool of many fields of study, learning to create such a hierarchy (redescription) is not trivial. To further the understanding of what makes task analysis a skill, the present research examined novices' problems with learning Hierarchical Task Analysis and captured practitioners' performance. All participants received a task description and analyzed three cooking and three communication tasks by drawing on their knowledge of those tasks. Thirty-six younger adults (18–28 years) in Study 1 analyzed one task before training and five afterwards. Training consisted of a general handout that all participants received and an additional handout that differed between three conditions: a list of steps, a flow diagram, and a concept map. In Study 2, eight experienced task analysts received the same task descriptions as in Study 1 and demonstrated their understanding of task analysis while thinking aloud. Novices' initial task analyses scored low on all coding criteria. Performance improved on some criteria but was well below 100% on others. Practitioners' task analyses were 2–3 levels deep but also scored low on some criteria. A task analyst's purpose of analysis may be the reason for higher specificity of analysis. This research furthers the understanding of Hierarchical Task Analysis and provides insights into the varying nature of task analyses as a function of experience. The derived skill components can inform training objectives. PMID:29075044
Surgical task analysis of simulated laparoscopic cholecystectomy with a navigation system.
Sugino, T; Kawahira, H; Nakamura, R
2014-09-01
Advanced surgical procedures, which have become complex and difficult, increase the burden on surgeons. Quantitative analysis of surgical procedures can improve training, reduce variability, and enable optimization of surgical procedures. To this end, a surgical task analysis system was developed that uses only surgical navigation information and performs division of the surgical procedure, task progress analysis, and task efficiency analysis. First, the procedure was divided into five stages. Second, the operating time and progress rate were recorded to document task progress during specific stages, including the dissecting task. Third, the speed of surgical instrument motion (mean velocity and acceleration), as well as the size and overlap ratio of the approximate ellipse of the location log data distribution, was computed to estimate task efficiency during each stage. These analysis methods were evaluated in an experimental validation with two groups of surgeons, i.e., skilled and "other" surgeons. The performance metrics and analytical parameters included incidents during the operation, the surgical environment, and the surgeon's skills or habits. Comparison of the groups revealed that skilled surgeons tended to perform the procedure in less time and within smaller regions, and they also manipulated the surgical instruments more gently. Surgical task analysis developed for quantitative assessment of surgical procedures and surgical performance may provide practical methods and metrics for objective evaluation of surgical expertise.
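As an illustration of such efficiency metrics, here is a sketch computing mean velocity, mean acceleration magnitude, and the area of an approximate (2-sigma covariance) ellipse from a navigation log; the log format and the ellipse construction are assumptions, since the abstract does not specify them.

```python
import numpy as np

def motion_metrics(t, xyz):
    """Per-stage efficiency metrics from a navigation log: t (n,) seconds,
    xyz (n, 3) instrument tip positions in mm."""
    v = np.linalg.norm(np.diff(xyz, axis=0), axis=1) / np.diff(t)   # speeds
    a = np.abs(np.diff(v)) / np.diff(t)[1:]                         # accel magnitudes
    # Approximate ellipse of the working region: 2-sigma covariance ellipse in x-y.
    cov = np.cov(xyz[:, :2].T)
    lam = np.linalg.eigvalsh(cov)
    area = np.pi * 4.0 * np.sqrt(lam[0] * lam[1])   # pi * (2*sqrt(l1)) * (2*sqrt(l2))
    return v.mean(), a.mean(), area

# Hypothetical log for one dissection stage (random walk as placeholder data):
t = np.linspace(0, 60, 601)
xyz = np.cumsum(np.random.default_rng(2).normal(0, 0.5, (601, 3)), axis=0)
print(motion_metrics(t, xyz))
```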
User-Centered Iterative Design of a Collaborative Virtual Environment
2001-03-01
cognitive task analysis methods to study land navigators. This study was intended to validate the use of user-centered design methodologies for the design of...have explored the cognitive aspects of collaborative human wayfinding and design for collaborative virtual environments. Further investigation of design paradigms should include cognitive task analysis and behavioral task analysis.
NASA Astrophysics Data System (ADS)
Thompson, N. A.; Ruck, H. W.
1984-04-01
The Air Force is interested in identifying potentially hazardous tasks and prevention of accidents. This effort proposes four methods for determining safety training priorities for job tasks in three enlisted specialties. These methods can be used to design training aimed at avoiding loss of people, time, materials, and money associated with on-the-job accidents. Job tasks performed by airmen were measured using task and job factor ratings. Combining accident reports and job inventories, subject-matter experts identified tasks associated with accidents over a 3-year period. Applying correlational, multiple regression, and cost-benefit analysis, four methods were developed for ordering hazardous tasks to determine safety training priorities.
Meyer, Georg F; Spray, Amy; Fairlie, Jo E; Uomini, Natalie T
2014-01-01
Current neuroimaging techniques with high spatial resolution constrain participant motion so that many natural tasks cannot be carried out. The aim of this paper is to show how a time-locked correlation-analysis of cerebral blood flow velocity (CBFV) lateralization data, obtained with functional TransCranial Doppler (fTCD) ultrasound, can be used to infer cerebral activation patterns across tasks. In a first experiment we demonstrate that the proposed analysis method results in data that are comparable with the standard Lateralization Index (LI) for within-task comparisons of CBFV patterns, recorded during cued word generation (CWG) at two difficulty levels. In the main experiment we demonstrate that the proposed analysis method shows correlated blood-flow patterns for two different cognitive tasks that are known to draw on common brain areas, CWG, and Music Synthesis. We show that CBFV patterns for Music and CWG are correlated only for participants with prior musical training. CBFV patterns for tasks that draw on distinct brain areas, the Tower of London and CWG, are not correlated. The proposed methodology extends conventional fTCD analysis by including temporal information in the analysis of cerebral blood-flow patterns to provide a robust, non-invasive method to infer whether common brain areas are used in different cognitive tasks. It complements conventional high resolution imaging techniques.
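A compact sketch of the idea of time-locked correlation of lateralization curves, on synthetic trial data; the percent-difference LI formula and epoch layout are simplified assumptions rather than the authors' exact fTCD pipeline.

```python
import numpy as np

def lateralization(left, right):
    """Per-trial lateralization time course from left/right CBFV envelopes,
    arrays of shape (trials x samples), already baseline-normalized."""
    return (left - right) / (0.5 * (left + right)) * 100.0   # percent difference

def task_correlation(li_task_a, li_task_b):
    """Time-locked correlation of trial-averaged lateralization curves for two
    tasks; a high r suggests shared lateralized activation dynamics."""
    a, b = li_task_a.mean(axis=0), li_task_b.mean(axis=0)
    return np.corrcoef(a, b)[0, 1]

# Synthetic trials sharing a common activation shape (stand-in for CWG / Music):
rng = np.random.default_rng(3)
common = np.sin(np.linspace(0, np.pi, 200))
cwg   = common + rng.normal(0, 0.3, (40, 200))
music = common + rng.normal(0, 0.3, (40, 200))
print(task_correlation(cwg, music))
```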
Rehabilitation Associate Training for Employed Staff. Task Analysis (RA-2).
ERIC Educational Resources Information Center
Davis, Michael J.; Jensen, Mary
This learning module, which is intended for use in in-service training for vocational rehabilitation counselors, deals with writing a task analysis. Step-by-step guidelines are provided for breaking down a task into small teachable steps by analyzing the task in terms of the way in which it will be performed once learned (method), the steps to be…
NASA Astrophysics Data System (ADS)
Afrahamiryano, A.; Ariani, D.
2018-04-01
Student task analysis is one part of the define stage in development research using the 4-D development model. The analysis is useful for determining students' level of understanding of the lecture material that has been given; its results serve as a measure of the success of learning and as a basis for developing the lecture system. The analysis was done by observation and documentation study of the tasks undertaken by students. The results were described and then triangulated to draw conclusions. The analysis indicates that students' level of understanding is high for theoretical material and low for computational material. Based on these results, it can be concluded that the e-learning lecture system being developed should be able to increase students' understanding of basic chemistry calculations.
Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li
2009-02-01
Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. Its well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. Such interdependency can arise, for instance, in fMRI studies investigating two tasks in a single session. In this study, we introduce a linear projection approach and consider its use as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer data and on real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating the activation induced by each task as well as by both tasks.
Formative Research on the Simplifying Conditions Method (SCM) for Task Analysis and Sequencing.
ERIC Educational Resources Information Center
Kim, YoungHwan; Reigeluth, Charles M.
The Simplifying Conditions Method (SCM) is a set of guidelines for task analysis and sequencing of instructional content under the Elaboration Theory (ET). This article introduces the fundamentals of SCM and presents the findings from a formative research study on SCM. It was conducted in two distinct phases: design and instruction. In the first…
Detection and categorization of bacteria habitats using shallow linguistic analysis
2015-01-01
Background: Information regarding bacteria biotopes is important for several research areas including health sciences, microbiology, and food processing and preservation. One of the challenges for scientists in these domains is the huge amount of information buried in the text of electronic resources. Developing methods to automatically extract bacteria habitat relations from the text of these electronic resources is crucial for facilitating research in these areas. Methods: We introduce a linguistically motivated rule-based approach for recognizing and normalizing names of bacteria habitats in biomedical text by using an ontology. Our approach is based on shallow syntactic analysis of the text, including sentence segmentation, part-of-speech (POS) tagging, partial parsing, and lemmatization. In addition, we propose two methods for identifying bacteria habitat localization relations. The underlying assumption of the first method is that discourse changes with a new paragraph, so it operates on a paragraph basis. The second method performs a more fine-grained analysis of the text and operates on a sentence basis. We also develop a novel anaphora resolution method for bacteria coreferences and incorporate it into the sentence-based relation extraction approach. Results: We participated in the Bacteria Biotope (BB) Task of the BioNLP Shared Task 2013. Our system (Boun) achieved the second best performance with 68% Slot Error Rate (SER) in Sub-task 1 (Entity Detection and Categorization), and ranked third with an F-score of 27% in Sub-task 2 (Localization Event Extraction). This paper reports on the system implemented for the shared task, including the novel methods developed and the improvements obtained after the official evaluation. The extensions include the expansion of the OntoBiotope ontology using the training set for Sub-task 1, and the novel sentence-based relation extraction method incorporated with anaphora resolution for Sub-task 2. These extensions resulted in promising results for Sub-task 1 with a SER of 68%, and state-of-the-art performance for Sub-task 2 with an F-score of 53%. Conclusions: Our results show that a linguistically oriented approach based on shallow syntactic analysis of the text is as effective as machine learning approaches for the detection and ontology-based normalization of habitat entities. Furthermore, the newly developed sentence-based relation extraction system with the anaphora resolution module significantly outperforms the paragraph-based one, as well as the other systems that participated in the BB Shared Task 2013. PMID:26201262
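For flavor, a minimal sketch of the shallow pipeline (sentence segmentation, POS tagging, lemmatization, ontology lookup) using spaCy; the tiny HABITAT_ONTOLOGY dictionary is a stand-in for OntoBiotope, and the matching rule is far simpler than the system described.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Toy stand-in for an ontology such as OntoBiotope: surface lemma -> concept id.
HABITAT_ONTOLOGY = {
    "soil": "OBT:001", "milk": "OBT:002", "intestine": "OBT:003", "cheese": "OBT:004",
}

def habitat_mentions(text):
    """Sentence-based shallow analysis: segment, POS-tag, lemmatize, then
    normalize noun lemmas against the ontology lexicon."""
    doc = nlp(text)
    hits = []
    for sent in doc.sents:
        for tok in sent:
            if tok.pos_ in ("NOUN", "PROPN") and tok.lemma_.lower() in HABITAT_ONTOLOGY:
                hits.append((sent.start, tok.text, HABITAT_ONTOLOGY[tok.lemma_.lower()]))
    return hits

print(habitat_mentions("Lactobacillus is found in raw milk. It also survives in cheese."))
```

Sentence-based co-occurrence of a bacterium mention and a normalized habitat hit would then serve as a crude localization-relation candidate, in the spirit of the paper's second method.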
Mathematical Practice in Textbooks Analysis: Praxeological Reference Models, the Case of Proportion
ERIC Educational Resources Information Center
Wijayanti, Dyana; Winsløw, Carl
2017-01-01
We present a new method in textbook analysis, based on so-called praxeological reference models focused on specific content at the task level. This method implies that the mathematical content of a textbook (or textbook part) is analyzed in terms of the tasks and techniques that readers are exposed to or that are demanded of them; this can then be interpreted…
Uncovering the requirements of cognitive work.
Roth, Emilie M
2008-06-01
In this article, the author provides an overview of cognitive analysis methods and how they can be used to inform system analysis and design. Human factors has seen a shift toward modeling and support of cognitively intensive work (e.g., military command and control, medical planning and decision making, supervisory control of automated systems). Cognitive task analysis and cognitive work analysis methods extend traditional task analysis techniques to uncover the knowledge and thought processes that underlie performance in cognitively complex settings. The author reviews the multidisciplinary roots of cognitive analysis and the variety of cognitive task analysis and cognitive work analysis methods that have emerged. Cognitive analysis methods have been used successfully to guide system design, as well as development of function allocation, team structure, and training, so as to enhance performance and reduce the potential for error. A comprehensive characterization of cognitive work requires two mutually informing analyses: (a) examination of domain characteristics and constraints that define cognitive requirements and challenges and (b) examination of practitioner knowledge and strategies that underlie both expert and error-vulnerable performance. A variety of specific methods can be adapted to achieve these aims within the pragmatic constraints of particular projects. Cognitive analysis methods can be used effectively to anticipate cognitive performance problems and specify ways to improve individual and team cognitive performance (be it through new forms of training, user interfaces, or decision aids).
Development of a Methodology for Assessing Aircrew Workloads.
1981-11-01
Keywords: analysis; simulation; standard time systems; switching synthetic time systems; task activities; task interference; time study; tracking; workload; work sampl...standard data systems, information content analysis, work sampling and job evaluation. Conventional methods were found to be deficient in accounting
ERIC Educational Resources Information Center
Gilpatrick, Eleanor
The two research reports included in this document describe the application of the Health Services Mobility Study (HSMS) task analysis method to two technologist functions and examine the interrelationships of these tasks with those in diagnostic radiology. (The HSMS method includes processes for using the data for designing job ladders, for…
Moutsopoulou, Karolina; Waszak, Florian
2012-04-01
The differential effects of task and response conflict in priming paradigms where associations are strengthened between a stimulus, a task, and a response have been demonstrated in recent years with neuroimaging methods. However, such effects are not easily disentangled with only measurements of behavior, such as reaction times (RTs). Here, we report the application of ex-Gaussian distribution analysis on task-switching RT data and show that conflict related to stimulus-response associations retrieved after a switch of tasks is reflected in the Gaussian component. By contrast, conflict related to the retrieval of stimulus-task associations is reflected in the exponential component. Our data confirm that the retrieval of stimulus-task and -response associations affects behavior differently. Ex-Gaussian distribution analysis is a useful tool for pulling apart these different levels of associative priming that are not distinguishable in analyses of RT means.
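Since the ex-Gaussian is well supported in standard libraries, a fit takes only a few lines; the sketch below uses SciPy's exponnorm (where K = tau/sigma) on simulated RTs. In an analysis of the kind described, one would fit each condition separately and compare the Gaussian parameters (mu, sigma) and the exponential parameter (tau) across conditions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated RTs (ms): Gaussian component (mu, sigma) plus exponential tail (tau).
mu, sigma, tau = 450.0, 50.0, 120.0
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# SciPy's exponnorm is parameterized by K = tau / sigma, loc = mu, scale = sigma.
K, loc, scale = stats.exponnorm.fit(rts)
print(f"mu={loc:.0f}  sigma={scale:.0f}  tau={K * scale:.0f}")
```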
Verification and validation of a Work Domain Analysis with turing machine task analysis.
Rechard, J; Bignon, A; Berruet, P; Morineau, T
2015-03-01
While the use of Work Domain Analysis as a methodological framework in cognitive engineering is increasing rapidly, verification and validation of the work domain models produced by this method are becoming a significant issue. In this article, we propose the use of a method based on Turing machine formalism, named "Turing Machine Task Analysis", to verify and validate work domain models. The application of this method to two work domain analyses, one of car driving, an "intentional" domain, and the other of a ship water system, a "causal" domain, showed the possibility of highlighting improvements needed by these models. More precisely, the step-by-step analysis of a degraded task scenario in each work domain model pointed out unsatisfactory aspects of the first modelling, such as overspecification, underspecification, omission of work domain affordances, or unsuitable inclusion of objects in the work domain model.
Phase-change lines, scale breaks, and trend lines using Excel 2013.
Deochand, Neil; Costello, Mack S; Fuqua, R Wayne
2015-01-01
The development of graphing skills for behavior analysts is an ongoing process. Specialized graphing software is often expensive, is not widely disseminated, and may require specific training. Dixon et al. (2009) provided an updated task analysis for graph making in the widely used platform Excel 2007. Vanselow and Bourret (2012) provided online tutorials that outline some alternate methods also using Office 2007. This article serves as an update to those task analyses and includes some alternative and underutilized methods in Excel 2013. To examine the utility of our recommendations, 12 psychology graduate students were presented with the task analyses, and the experimenters evaluated their performance and noted feedback. The task analyses were rated favorably.
Bridges, John F P; Hauber, A Brett; Marshall, Deborah; Lloyd, Andrew; Prosser, Lisa A; Regier, Dean A; Johnson, F Reed; Mauskopf, Josephine
2011-06-01
The application of conjoint analysis (including discrete-choice experiments and other multiattribute stated-preference methods) in health has increased rapidly over the past decade. A wider acceptance of these methods is limited by an absence of consensus-based methodological standards. The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Good Research Practices for Conjoint Analysis Task Force was established to identify good research practices for conjoint-analysis applications in health. The task force met regularly to identify the important steps in a conjoint analysis, to discuss good research practices for conjoint analysis, and to develop and refine the key criteria for identifying good research practices. ISPOR members contributed to this process through an extensive consultation process. A final consensus meeting was held to revise the article using these comments, and those of a number of international reviewers. Task force findings are presented as a 10-item checklist covering: 1) research question; 2) attributes and levels; 3) construction of tasks; 4) experimental design; 5) preference elicitation; 6) instrument design; 7) data-collection plan; 8) statistical analyses; 9) results and conclusions; and 10) study presentation. A primary question relating to each of the 10 items is posed, and three sub-questions examine finer issues within items. Although the checklist should not be interpreted as endorsing any specific methodological approach to conjoint analysis, it can facilitate future training activities and discussions of good research practices for the application of conjoint-analysis methods in health care studies.
Psychophysical Models for Signal Detection with Time Varying Uncertainty. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gai, E.
1975-01-01
Psychophysical models are developed for the behavior of the human operator in detection tasks that involve changes in detectability, correlation between observations, and deferred decisions. Classical Signal Detection Theory (SDT) is discussed, and its emphasis on the sensory processes is contrasted with decision strategies. The analysis of decision strategies utilizes detection tasks with time-varying signal strength. The classical theory is modified to include such tasks, and several optimal decision strategies are explored. Two methods of classifying strategies are suggested: the first is similar to the analysis of ROC curves, while the second is based on the relation between the criterion level (CL) and the detectability. Experiments to verify the analysis of tasks with changes of signal strength were designed. The results show that subjects are aware of changes in detectability and tend to use strategies that involve changes in the CLs.
Lohmann, Gabriele; Stelzer, Johannes; Zuber, Verena; Buschmann, Tilo; Margulies, Daniel; Bartels, Andreas; Scheffler, Klaus
2016-01-01
The formation of transient networks in response to external stimuli or as a reflection of internal cognitive processes is a hallmark of human brain function. However, its identification in fMRI data of the human brain is notoriously difficult. Here we propose a new method of fMRI data analysis that tackles this problem by considering large-scale, task-related synchronisation networks. Networks consist of nodes and edges connecting them, where nodes correspond to voxels in fMRI data, and the weight of an edge is determined via task-related changes in dynamic synchronisation between their respective times series. Based on these definitions, we developed a new data analysis algorithm that identifies edges that show differing levels of synchrony between two distinct task conditions and that occur in dense packs with similar characteristics. Hence, we call this approach “Task-related Edge Density” (TED). TED proved to be a very strong marker for dynamic network formation that easily lends itself to statistical analysis using large scale statistical inference. A major advantage of TED compared to other methods is that it does not depend on any specific hemodynamic response model, and it also does not require a presegmentation of the data for dimensionality reduction as it can handle large networks consisting of tens of thousands of voxels. We applied TED to fMRI data of a fingertapping and an emotion processing task provided by the Human Connectome Project. TED revealed network-based involvement of a large number of brain areas that evaded detection using traditional GLM-based analysis. We show that our proposed method provides an entirely new window into the immense complexity of human brain function. PMID:27341204
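A toy sketch of the core TED idea, comparing windowed pairwise synchrony between two conditions and looking for dense packs of strong edges; the windowing scheme, threshold, and data are illustrative and omit the large-scale statistical inference the method actually uses.

```python
import numpy as np

def edge_synchrony_difference(ts_a, ts_b, win=20):
    """For every voxel pair, compare windowed synchrony (correlation) between two
    task conditions; ts_* are (time x voxels). Returns a (voxels x voxels) matrix
    of mean synchrony differences - the edge weights."""
    def windowed_corr(ts):
        n_win = ts.shape[0] // win
        mats = [np.corrcoef(ts[i * win:(i + 1) * win].T) for i in range(n_win)]
        return np.mean(mats, axis=0)
    return windowed_corr(ts_a) - windowed_corr(ts_b)

rng = np.random.default_rng(5)
rest = rng.standard_normal((200, 30))
task = rng.standard_normal((200, 30))
task[:, :5] += rng.standard_normal((200, 1))     # a synchronized clique under task
delta = edge_synchrony_difference(task, rest)
# Dense packs of strong edges (here: the 5-voxel clique) are candidate networks.
strong = np.argwhere(np.triu(delta > 0.4, k=1))
print(strong)
```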
A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.
Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L
2018-05-16
During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical actions of these tasks. The aim of this work was to describe a method for determining whether a simulator has sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate a simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included on the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA was found for several tasks where the simulator lacked functionality. By developing a detailed list of the specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into the simulator's clinical accuracy and suitability. The unexpected positive FTA ratings of functional deficits suggest that further revision of the survey method is required.
1987-11-01
differential qualitative (DQ) analysis, which solves the task, providing explanations suitable for use by design systems, automated diagnosis, intelligent tutoring systems, and explanation-based...comparative analysis as an important component; the explanation is used in many different ways. One method of automated design is the principled
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results are presented for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
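A minimal sketch of task-specific, one-at-a-time sensitivity analysis on a toy second-order system (standing in for the authors' eighth-order model): the same parameter perturbations yield different sensitivities under a fast versus a slow input sequence. All parameters and inputs are invented for illustration.

```python
def output_metric(params, task_input, dt=0.001):
    """Toy stand-in for the limb model: peak displacement of a 2nd-order system
    driven by a task-specific input sequence (not the authors' model)."""
    k, b = params                      # stiffness-like and damping-like parameters
    x = v = peak = 0.0
    for u in task_input:               # forward-Euler integration
        a = u - k * x - b * v
        v += a * dt
        x += v * dt
        peak = max(peak, x)
    return peak

def sensitivities(params, task_input, delta=0.01):
    """One-at-a-time relative sensitivity of the output to each parameter."""
    base = output_metric(params, task_input)
    out = []
    for i in range(len(params)):
        p = list(params)
        p[i] *= 1.0 + delta
        out.append((output_metric(p, task_input) - base) / (base * delta))
    return out

fast_move = [1.0] * 200 + [0.0] * 800   # brief, strong drive
slow_move = [0.2] * 1000                # sustained, weak drive
print(sensitivities([100.0, 5.0], fast_move))   # sensitivities differ by task
print(sensitivities([100.0, 5.0], slow_move))
```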
Gong, Anmin; Liu, Jianping; Chen, Si; Fu, Yunfa
2018-01-01
To study the physiologic mechanism of the brain during different motor imagery (MI) tasks, the authors employed a method of brain-network modeling based on time-frequency cross mutual information obtained from 4-class (left hand, right hand, feet, and tongue) MI tasks recorded as brain-computer interface (BCI) electroencephalography data. The authors explored the brain network revealed by these MI tasks using statistical analysis and the analysis of topologic characteristics, and observed significant differences in the reaction level, reaction time, and activated target during 4-class MI tasks. There was a great difference in the reaction level between the execution and resting states during different tasks: the reaction level of the left-hand MI task was the greatest, followed by that of the right-hand, feet, and tongue MI tasks. The reaction time required to perform the tasks also differed: during the left-hand and right-hand MI tasks, the brain networks of subjects reacted promptly and strongly, but there was a delay during the feet and tongue MI task. Statistical analysis and the analysis of network topology revealed the target regions of the brain network during different MI processes. In conclusion, our findings suggest a new way to explain the neural mechanism behind MI.
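As a simplified illustration of mutual-information-based network modeling (ignoring the time-frequency decomposition used in the paper), the sketch below builds a channel-by-channel MI adjacency matrix for a task and a rest segment and takes their difference as a crude stand-in for the paper's "reaction level". The data and binning are hypothetical.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_adjacency(eeg, bins=8):
    """Pairwise mutual information between channels; eeg is (channels x samples).
    Signals are discretized into equal-frequency bins before MI estimation."""
    n = eeg.shape[0]
    digitized = [np.digitize(ch, np.quantile(ch, np.linspace(0, 1, bins + 1)[1:-1]))
                 for ch in eeg]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            adj[i, j] = adj[j, i] = mutual_info_score(digitized[i], digitized[j])
    return adj

rng = np.random.default_rng(6)
rest = rng.standard_normal((16, 1000))
task = rng.standard_normal((16, 1000))
task[:4] += rng.standard_normal((1, 1000))   # hypothetical motor-imagery coupling
reaction_level = mi_adjacency(task) - mi_adjacency(rest)
print(reaction_level[:4, :4].round(2))
```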
ERIC Educational Resources Information Center
Moutsopoulou, Karolina; Waszak, Florian
2012-01-01
The differential effects of task and response conflict in priming paradigms where associations are strengthened between a stimulus, a task, and a response have been demonstrated in recent years with neuroimaging methods. However, such effects are not easily disentangled with only measurements of behavior, such as reaction times (RTs). Here, we…
ERIC Educational Resources Information Center
Gilpatrick, Eleanor
This document is volume 3 of a four-volume report which describes the components of the Health Services Mobility Study (HSMS) method of task analysis, job ladder design, and curriculum development. Divided into four chapters, volume 3 is a manual for using HSMS computer based statistical procedures to design job structures and job ladders. Chapter…
Proposal of Constraints Analysis Method Based on Network Model for Task Planning
NASA Astrophysics Data System (ADS)
Tomiyama, Tomoe; Sato, Tatsuhiro; Morita, Toyohisa; Sasaki, Toshiro
Deregulation has been accelerating several activities toward reengineering business processes, such as railway through service and modal shift in logistics. To make those activities successful, business entities have to formulate new business rules or know-how (we call them 'constraints') and, according to the new constraints, manage business resources such as instruments, materials, and workers. In this paper, we propose a constraint analysis method for defining the constraints for task planning of the new business processes. To visualize each constraint's influence on planning, we propose a network model that represents allocation relations between tasks and resources; the network can also represent task ordering relations and resource grouping relations. The proposed method formalizes the manual definition of constraints as repeatedly checking the network structure and finding conflicts between constraints. Application to crew scheduling problems shows that the method can adequately represent and define the constraints of task planning problems with the following fundamental features: (1) specifying a work pattern for some resources, (2) restricting the number of resources for some works, (3) requiring multiple resources for some works, (4) prior allocation of some resources to some works, and (5) considering the workload balance between resources.
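A minimal sketch of the network model's bookkeeping: tasks and resources as nodes, allocations as edges, and constraints as predicates over that structure. The task set, constraint predicates, and conflict rules are invented to illustrate features (1), (3), and (5) above.

```python
# Hypothetical network: tasks and resources are nodes; an allocation is an
# edge (task, resource). Constraints are predicates over the network structure.
tasks     = {"T1": {"needs": 2}, "T2": {"needs": 1}, "T3": {"needs": 1}}
resources = {"R1": {"pattern": "day"}, "R2": {"pattern": "day"}, "R3": {"pattern": "night"}}
allocation = [("T1", "R1"), ("T1", "R2"), ("T2", "R3"), ("T3", "R3")]

def multiplicity_conflicts(alloc):
    """Feature (3): tasks requiring multiple resources must receive them."""
    got = {t: sum(1 for tt, _ in alloc if tt == t) for t in tasks}
    return [t for t, spec in tasks.items() if got.get(t, 0) < spec["needs"]]

def pattern_conflicts(alloc, required={"T2": "day"}):
    """Feature (1): some tasks demand resources with a given work pattern."""
    return [(t, r) for t, r in alloc
            if t in required and resources[r]["pattern"] != required[t]]

def load_imbalance(alloc):
    """Feature (5): spread of workload across resources."""
    load = {r: sum(1 for _, rr in alloc if rr == r) for r in resources}
    return max(load.values()) - min(load.values())

print(multiplicity_conflicts(allocation), pattern_conflicts(allocation),
      load_imbalance(allocation))
```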
Test-retest stability of the Task and Ego Orientation Questionnaire.
Lane, Andrew M; Nevill, Alan M; Bowes, Neal; Fox, Kenneth R
2005-09-01
Establishing stability, defined as observing minimal measurement error in a test-retest assessment, is vital to validating psychometric tools. Correlational methods, such as Pearson product-moment, intraclass, and kappa are tests of association or consistency, whereas stability or reproducibility (regarded here as synonymous) assesses the agreement between test-retest scores. Indexes of reproducibility using the Task and Ego Orientation in Sport Questionnaire (TEOSQ; Duda & Nicholls, 1992) were investigated using correlational (Pearson product-moment, intraclass, and kappa) methods, repeated measures multivariate analysis of variance, and calculating the proportion of agreement within a referent value of +/-1 as suggested by Nevill, Lane, Kilgour, Bowes, and Whyte (2001). Two hundred thirteen soccer players completed the TEOSQ on two occasions, 1 week apart. Correlation analyses indicated a stronger test-retest correlation for the Ego subscale than the Task subscale. Multivariate analysis of variance indicated stability for ego items but with significant increases in four task items. The proportion of test-retest agreement scores indicated that all ego items reported relatively poor stability statistics with test-retest scores within a range of +/-1, ranging from 82.7-86.9%. By contrast, all task items showed test-retest difference scores ranging from 92.5-99%, although further analysis indicated that four task subscale items increased significantly. Findings illustrated that correlational methods (Pearson product-moment, intraclass, and kappa) are influenced by the range in scores, and calculating the proportion of agreement of test-retest differences with a referent value of +/-1 could provide additional insight into the stability of the questionnaire. It is suggested that the item-by-item proportion of agreement method proposed by Nevill et al. (2001) should be used to supplement existing methods and could be especially helpful in identifying rogue items in the initial stages of psychometric questionnaire validation.
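The proportion-of-agreement statistic is simple to compute; here is a short sketch with simulated Likert-type item scores (the data are synthetic, not the soccer players' responses).

```python
import numpy as np

def proportion_agreement(test, retest, referent=1):
    """Percentage of test-retest item scores whose difference lies within
    +/- referent (cf. Nevill et al., 2001)."""
    diffs = np.asarray(retest) - np.asarray(test)
    return 100.0 * np.mean(np.abs(diffs) <= referent)

rng = np.random.default_rng(7)
test   = rng.integers(1, 6, 213)                         # 5-point item, week 1
retest = np.clip(test + rng.integers(-1, 2, 213), 1, 5)  # week 2
print(f"{proportion_agreement(test, retest):.1f}% within +/-1")
```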
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method.
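MLPD links several linear programs through shared mean-difference constraints; its building block, a discriminant fitted by linear programming, is easy to sketch. The following is a minimal single-task LP discriminant (not the authors' MLPD; variable names and the L1 penalty weight are illustrative), assuming scipy is available:

    import numpy as np
    from scipy.optimize import linprog

    def lp_discriminant(X, y, lam=0.01):
        """Fit w, b minimizing sum(xi) + lam*||w||_1 s.t. y_i(w.x_i + b) >= 1 - xi_i.
        LP variables (all >= 0): [w_pos (p), w_neg (p), b_pos, b_neg, xi (n)]."""
        n, p = X.shape
        c = np.concatenate([lam * np.ones(2 * p), [0.0, 0.0], np.ones(n)])
        # Row i encodes: -y_i*(x_i.(w_pos - w_neg) + b_pos - b_neg) - xi_i <= -1
        A = np.zeros((n, 2 * p + 2 + n))
        A[:, :p] = -y[:, None] * X
        A[:, p:2 * p] = y[:, None] * X
        A[:, 2 * p] = -y
        A[:, 2 * p + 1] = y
        A[np.arange(n), 2 * p + 2 + np.arange(n)] = -1.0
        res = linprog(c, A_ub=A, b_ub=-np.ones(n), bounds=(0, None), method="highs")
        z = res.x
        return z[:p] - z[p:2 * p], z[2 * p] - z[2 * p + 1]   # w, b

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 1, (30, 4)), rng.normal(1, 1, (30, 4))])
    y = np.array([-1] * 30 + [1] * 30)
    w, b = lp_discriminant(X, y)
    print((np.sign(X @ w + b) == y).mean())   # training accuracy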
Richardson, Miles
2017-04-01
In ergonomics there is often a need to identify and predict the separate effects of multiple factors on performance. A cost-effective fractional factorial approach to understanding the relationship between task characteristics and task performance is presented. The method has been shown to provide sufficient independent variability to reveal and predict the effects of task characteristics on performance in two domains. The five steps outlined are: selection of performance measure, task characteristic identification, task design for user trials, data collection, regression model development and task characteristic analysis. The approach can be used for furthering knowledge of task performance, theoretical understanding, experimental control and prediction of task performance. Practitioner Summary: A cost-effective method to identify and predict the separate effects of multiple factors on performance is presented. The five steps allow a better understanding of task factors during the design process.
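The design-then-regress pipeline behind those steps can be sketched on synthetic data (illustrative factors and scores, not Richardson's tasks): build a half-fraction 2^(4-1) factorial design by aliasing the fourth factor with the three-way interaction, then estimate the separate main effects of the task characteristics by least squares:

    import itertools
    import numpy as np

    # Half-fraction 2^(4-1) design: generator D = A*B*C, levels coded -1/+1.
    base = np.array(list(itertools.product([-1, 1], repeat=3)))
    design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])

    # Hypothetical performance scores for the 8 trial tasks (e.g., completion time).
    rng = np.random.default_rng(1)
    true_effects = np.array([2.0, -1.0, 0.5, 0.0])
    y = 10 + design @ true_effects + rng.normal(0, 0.2, len(design))

    # Regression model: intercept + main effects of the four task characteristics.
    Xmat = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
    print(dict(zip(["mean", "A", "B", "C", "D"], coef.round(2))))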
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications was not clear. The NATO Science and Technology Organization, Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.
The use of cognitive task analysis to improve instructional descriptions of procedures.
Clark, Richard E; Pugh, Carla M; Yates, Kenneth A; Inaba, Kenji; Green, Donald J; Sullivan, Maura E
2012-03-01
Surgical training relies heavily on the ability of expert surgeons to provide complete and accurate descriptions of a complex procedure. However, research from a variety of domains suggests that experts often omit critical information about the judgments, analysis, and decisions they make when solving a difficult problem or performing a complex task. In this study, we compared three methods for capturing surgeons' descriptions of how to perform the procedure for inserting a femoral artery shunt (unaided free-recall, unaided free-recall with simulation, and cognitive task analysis methods) to determine which method produced more accurate and complete results. Cognitive task analysis was approximately 70% more complete and accurate than free-recall, with or without simulation of the procedure. Ten expert trauma surgeons at a major urban trauma center were interviewed separately and asked to describe how to perform an emergency shunt procedure. Four surgeons provided an unaided free-recall description of the shunt procedure, five surgeons provided an unaided free-recall description of the procedure using visual aids and surgical instruments (simulation), and one (chosen randomly) was interviewed using cognitive task analysis (CTA) methods. An 11th vascular surgeon approved the final CTA protocol. The CTA interview with only one expert surgeon resulted in significantly greater accuracy and completeness of the descriptions compared with the unaided free-recall interviews with multiple expert surgeons. Surgeons in the unaided group omitted nearly 70% of necessary decision steps. In the free-recall group, heavy use of simulation improved surgeons' completeness when describing the steps of the procedure. CTA significantly increases the completeness and accuracy of surgeons' instructional descriptions of surgical procedures. In addition, simulation during unaided free-recall interviews may improve the completeness of interview data. Copyright © 2012 Elsevier Inc. All rights reserved.
ARBAN-A new method for analysis of ergonomic effort.
Holzmann, P
1982-06-01
ARBAN is a method for the ergonomic analysis of work, including work situations that involve widely differing body postures and loads. The idea of the method is that all phases of the analysis process that require specific knowledge of ergonomics are taken over by filming equipment and a computer routine. All tasks that must be carried out by the investigator in the process of analysis are so designed that they appear evident through the use of systematic common sense. The ARBAN analysis method contains four steps: 1. Recording the workplace situation on video or film. 2. Coding the posture and load situation at a number of closely spaced 'frozen' situations. 3. Computerisation. 4. Evaluation of the results. The computer calculates figures for the total ergonomic stress on the whole body as well as on different parts of the body separately. They are presented as 'ergonomic stress/time curves', in which heavy load situations appear as peaks of the curve. The work cycle may also be divided into different tasks, whose stress and duration patterns can be compared. The integrals of the curves are calculated for single-figure comparison of different tasks as well as different work situations.
ERIC Educational Resources Information Center
Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…
Lizier, Joseph T; Heinzle, Jakob; Horstmann, Annette; Haynes, John-Dylan; Prokopenko, Mikhail
2011-02-01
The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure between brain regions, and how that structure changes according to behavioral conditions. This method is distinguished in using asymmetric, multivariate, information-theoretical analysis, which captures not only directional and non-linear relationships, but also collective interactions. Importantly, the method is able to estimate multivariate information measures from relatively little data. We demonstrate the method by analyzing functional magnetic resonance imaging time series to establish the directed information structure between brain regions involved in a visuo-motor tracking task. Importantly, this results in a tiered structure, with known movement planning regions driving visual and motor control regions. Also, we examine the changes in this structure as the difficulty of the tracking task is increased. We find that task difficulty modulates the coupling strength between regions of a cortical network involved in movement planning and between motor cortex and the cerebellum, which is involved in the fine-tuning of motor control. It is likely these methods will find utility in identifying interregional structure (and experimentally induced changes in this structure) in other cognitive tasks and data modalities.
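Transfer entropy has several estimators; under a linear-Gaussian assumption it reduces to a ratio of residual variances (equivalently, Granger causality). A univariate sketch of that reduction on synthetic series follows; the paper's estimator is multivariate and more general, so this is only the basic building block:

    import numpy as np

    def gaussian_te(x, y, k=1):
        """TE(X -> Y) in nats under a linear-Gaussian model with history length k:
        0.5 * ln( var(y_t | y_past) / var(y_t | y_past, x_past) )."""
        n = len(y)
        Y_t = y[k:]
        Y_past = np.column_stack([y[k - i - 1:n - i - 1] for i in range(k)])
        X_past = np.column_stack([x[k - i - 1:n - i - 1] for i in range(k)])
        def resid_var(target, preds):
            A = np.column_stack([np.ones(len(target)), preds])
            beta, *_ = np.linalg.lstsq(A, target, rcond=None)
            return np.var(target - A @ beta)
        v1 = resid_var(Y_t, Y_past)
        v2 = resid_var(Y_t, np.column_stack([Y_past, X_past]))
        return 0.5 * np.log(v1 / v2)

    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)
    y = np.zeros(5000)
    for t in range(1, 5000):                 # y is driven by past x
        y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
    print(gaussian_te(x, y), gaussian_te(y, x))   # first should be much larger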
Diagnosis of multiple sclerosis from EEG signals using nonlinear methods.
Torabi, Ali; Daliri, Mohammad Reza; Sabzposhan, Seyyed Hojjat
2017-12-01
EEG signals carry essential information about the brain and neural diseases. The main purpose of this study is to classify two groups, healthy volunteers and Multiple Sclerosis (MS) patients, using nonlinear features of EEG signals recorded while performing cognitive tasks. EEG signals were recorded while users were doing two different attentional tasks: one based on detecting a desired change in color luminance and the other based on detecting a desired change in direction of motion. EEG signals were analyzed in two ways: analysis of the signals without rhythm decomposition, and analysis of the EEG sub-bands. After recording and preprocessing, the time-delay embedding method was used for state-space reconstruction; embedding parameters were determined for the original signals and their sub-bands. Afterwards, nonlinear methods were used in the feature extraction phase. To reduce the feature dimension, scalar feature selection was performed using the T-test and Bhattacharyya criteria. The data were then classified using linear support vector machines (SVM) and the k-nearest neighbor (KNN) method. The best combination of criterion and classifier was determined for each task by comparing performances. For both tasks, the best results were achieved using the T-test criterion and the SVM classifier. For the direction-based and the color-luminance-based tasks, maximum classification performances were 93.08% and 79.79%, respectively, reached using the optimal set of features. Our results show that the nonlinear dynamic features of EEG signals seem to be useful and effective in the diagnosis of MS.
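The selection-then-classification step can be sketched with scikit-learn; for two classes, the ANOVA F score used below is the square of the t statistic, so ranking by it matches a T-test criterion. Synthetic features stand in for the EEG nonlinear features, and all sizes are illustrative:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 40))           # 60 subjects x 40 nonlinear features
    y = np.array([0] * 30 + [1] * 30)       # healthy vs. MS (synthetic labels)
    X[y == 1, :5] += 1.0                    # only 5 features are informative

    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=5),    # T-test-style ranking
                        SVC(kernel="linear"))
    print(cross_val_score(clf, X, y, cv=5).mean())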
Modeling Cognitive Strategies during Complex Task Performing Process
ERIC Educational Resources Information Center
Mazman, Sacide Guzin; Altun, Arif
2012-01-01
The purpose of this study is to examine individuals' computer based complex task performing processes and strategies in order to determine the reasons of failure by cognitive task analysis method and cued retrospective think aloud with eye movement data. Study group was five senior students from Computer Education and Instructional Technologies…
A reliability analysis tool for SpaceWire network
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft; therefore, it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis for SpaceWire networks. Based on the functional division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to a system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indexes in the matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and multi-path-task reliability are also implemented. Using this tool, we analyzed several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. This reliability analysis tool will therefore have a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.
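In the simplest independent-failure case, the task-based computation reduces to series/parallel reliability algebra: a task succeeds only if every unit along its path works, and a redundant pair fails only if both replicas fail. A minimal sketch of that arithmetic, with illustrative reliability values rather than the tool's actual model:

    from math import prod

    def series(reliabilities):
        """Task path: every unit on the path must work."""
        return prod(reliabilities)

    def parallel(reliabilities):
        """Redundant units: the task fails only if all replicas fail."""
        return 1 - prod(1 - r for r in reliabilities)

    r_router, r_node, r_link = 0.99, 0.995, 0.999
    basic = series([r_node, r_link, r_router, r_link, r_node])
    dual = series([r_node, r_link, parallel([r_router, r_router]), r_link, r_node])
    print(f"basic: {basic:.5f}, dual-redundant router: {dual:.5f}")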
Chellali, Amine; Schwaitzberg, Steven D.; Jones, Daniel B.; Romanelli, John; Miller, Amie; Rattner, David; Roberts, Kurt E.; Cao, Caroline G.L.
2014-01-01
Background: NOTES is an emerging technique for performing surgical procedures, such as cholecystectomy. Debate about its real benefit over the traditional laparoscopic technique is on-going. There have been several clinical studies comparing NOTES to conventional laparoscopic surgery. However, no work has been done to compare these techniques from a Human Factors perspective. This study presents a systematic analysis describing and comparing different existing NOTES methods to laparoscopic cholecystectomy. Methods: Videos of endoscopic/laparoscopic views from fifteen live cholecystectomies were analyzed to conduct a detailed task analysis of the NOTES technique. A hierarchical task analysis of laparoscopic cholecystectomy and several hybrid transvaginal NOTES cholecystectomies was performed and validated by expert surgeons. To identify similarities and differences between these techniques, their hierarchical decomposition trees were compared. Finally, a timeline analysis was conducted to compare the steps and substeps. Results: At least three variations of the NOTES technique were used for cholecystectomy. Differences between the observed techniques at the substep level of hierarchy and in the instruments being used were found. The timeline analysis showed an increase in time to perform some surgical steps and substeps in NOTES compared to laparoscopic cholecystectomy. Conclusion: As pure NOTES is extremely difficult given the current state of development in instrumentation design, most surgeons utilize different hybrid methods, i.e., combinations of endoscopic and laparoscopic instruments/optics. Results of our hierarchical task analysis yielded an identification of three different hybrid methods to perform cholecystectomy with significant variability amongst them. The varying degrees to which laparoscopic instruments are utilized to assist in NOTES methods appear to introduce different technical issues and additional tasks leading to an increase in the surgical time. The NOTES continuum of invasiveness is proposed here as a classification scheme for these methods, which was used to construct a clear roadmap for training and technology development. PMID:24902811
Complete 3D kinematics of upper extremity functional tasks.
van Andel, Carolien J; Wolterbeek, Nienke; Doorenbosch, Caroline A M; Veeger, DirkJan H E J; Harlaar, Jaap
2008-01-01
Upper extremity (UX) movement analysis by means of 3D kinematics has the potential to become an important clinical evaluation method. However, no standardized protocol for clinical application has yet been developed, that includes the whole upper limb. Standardization problems include the lack of a single representative function, the wide range of motion of joints and the complexity of the anatomical structures. A useful protocol would focus on the functional status of the arm and particularly the orientation of the hand. The aim of this work was to develop a standardized measurement method for unconstrained movement analysis of the UX that includes hand orientation, for a set of functional tasks for the UX and obtain normative values. Ten healthy subjects performed four representative activities of daily living (ADL). In addition, six standard active range of motion (ROM) tasks were executed. Joint angles of the wrist, elbow, shoulder and scapula were analyzed throughout each ADL task and minimum/maximum angles were determined from the ROM tasks. Characteristic trajectories were found for the ADL tasks, standard deviations were generally small and ROM results were consistent with the literature. The results of this study could form the normative basis for the development of a 'UX analysis report' equivalent to the 'gait analysis report' and would allow for future comparisons with pediatric and/or pathologic movement patterns.
Wu, Xia; Yu, Xinyu; Yao, Li; Li, Rui
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have converged to reveal the default mode network (DMN), a constellation of regions that display co-activation during resting-state but co-deactivation during attention-demanding tasks in the brain. Here, we employed a Bayesian network (BN) analysis method to construct a directed effective connectivity model of the DMN and compared the organizational architecture and interregional directed connections under both resting-state and task-state. The analysis results indicated that the DMN was consistently organized into two closely interacting subsystems in both resting-state and task-state. The directed connections between DMN regions, however, changed significantly from the resting-state to task-state condition. The results suggest that the DMN intrinsically maintains a relatively stable structure whether at rest or performing tasks but has different information processing mechanisms under varied states. PMID:25309414
Analysis of Tasks in Pre-Service Elementary Teacher Education Courses
ERIC Educational Resources Information Center
Sierpinska, Anna; Osana, Helena
2012-01-01
This paper presents some results of research aimed at contributing to the development of a professional knowledge base for teachers of elementary mathematics methods courses, called here "teacher educators." We propose that a useful unit of analysis for this knowledge could be the tasks in which teacher-educators engage pre-service…
ERIC Educational Resources Information Center
Embrey, Karen K.
2012-01-01
Cognitive task analysis (CTA) is a knowledge elicitation technique employed for acquiring expertise from domain specialists to support the effective instruction of novices. CTA guided instruction has proven effective in improving surgical skills training for medical students and surgical residents. The standard, current method of teaching clinical…
NASA Technical Reports Server (NTRS)
Kirlik, Alex; Kossack, Merrick Frank
1993-01-01
This status report consists of a thesis entitled 'Ecological Task Analysis: A Method for Display Enhancements.' Previous use of various analysis processes for the purpose of display interface design or enhancement has run the risk of failing to improve user performance because the analysis results in only a sequential listing of user tasks. Adopting an ecological approach to performing the task analysis, however, may provide the modeling of an unpredictable and variable task domain required to improve user performance. Kirlik has proposed an Ecological Task Analysis framework designed for this purpose. It is the purpose of this research to measure this framework's effectiveness at enhancing display interfaces in order to improve user performance. Following the proposed framework, an ecological task analysis of experienced users of a complex and dynamic laboratory task, Star Cruiser, was performed. Based on this analysis, display enhancements were proposed and implemented. An experiment was then conducted to compare this new version of Star Cruiser to the original. By measuring user performance at different tasks, it was determined that during early sessions, use of the enhanced display contributed to better user performance compared to that achieved using the original display. Furthermore, the results indicate that the enhancements proposed as a result of the ecological task analysis affected user performance differently depending on whether they aid in the selection of a possible action or in the performance of an action. Generalizations of these findings to larger, more complex systems were avoided since the analysis was performed on only this one particular system.
ERIC Educational Resources Information Center
Gravesen, Katrine Frovin; Grønbaek, Niels; Winsløw, Carl
2017-01-01
We investigate the challenges students face in the transition from calculus courses, focusing on methods related to the analysis of real valued functions given in closed form, to more advanced courses on analysis where focus is on theoretical structure, including proof. We do so based on task design aiming for a number of generic potentials for…
Dedicated tool to assess the impact of a rhetorical task on human body temperature.
Koprowski, Robert; Wilczyński, Sławomir; Martowska, Katarzyna; Gołuch, Dominik; Wrocławska-Warchala, Emilia
2017-10-01
Functional infrared thermal imaging is a method widely used in medicine, including the analysis of mechanisms related to the effect of emotions on physiological processes. The article shows how body temperature may change during stress associated with performing a rhetorical task and proposes new parameters useful for dynamic thermal imaging measurements. MATERIALS AND METHODS: 29 healthy male subjects were examined. They were given a rhetorical task that induced stress. Analysis and processing of the collected body temperature data, at a spatial resolution of 256×512 pixels and a temperature resolution of 0.1°C, made it possible to show the dynamics of temperature changes. This analysis was preceded by dedicated image analysis and processing methods. RESULTS: The presented dedicated algorithm for image analysis and processing allows for fully automated, reproducible and quantitative assessment of temperature changes and time constants in a sequence of thermal images of the patient. When performing the rhetorical task, the temperature rose by 0.47±0.19°C in 72.41% of the subjects, including 20.69% in whom the temperature then decreased by 0.49±0.14°C after 237±141s. For 20.69% of the subjects, only a drop in temperature was registered. For the remaining 6.89% of the cases, no temperature changes were registered. CONCLUSIONS: The performance of the rhetorical task by the subjects causes body temperature changes. The ambiguous temperature response to the given stress factor indicates the complex mechanisms responsible for regulating stressful situations. Stress associated with the examination itself induces body temperature changes. These changes should always be taken into account in the analysis of infrared data. Copyright © 2017 Elsevier B.V. All rights reserved.
Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain.
Frolov, Alexander A; Húsek, Dušan; Polyakov, Pavel Yu
2016-03-01
A common task in large data set analysis is searching for an appropriate data representation in a space of fewer dimensions. One of the most efficient methods for solving this task is factor analysis. In this paper, we compare seven methods for Boolean factor analysis (BFA) in solving the so-called bars problem (BP), which is a BFA benchmark. The performance of the methods is evaluated by means of information gain. Study of the results obtained in solving BPs of different levels of complexity has allowed us to reveal the strengths and weaknesses of these methods. It is shown that the Likelihood maximization Attractor Neural Network with Increasing Activity (LANNIA) is the most efficient BFA method for solving the BP in many cases. The efficacy of the LANNIA method is also shown when applied to real data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms, and to the text data set R52 (from Reuters 21578), typically used for label categorization.
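The bars problem benchmark is easy to reproduce: each image is the Boolean OR of a few randomly chosen horizontal and vertical bars, and a BFA method should recover the individual bars as factors. A minimal generator sketch (parameter values are illustrative):

    import numpy as np

    def bars_data(n_samples=500, size=8, p_bar=0.15, seed=0):
        """Each sample ORs together randomly selected row/column bars."""
        rng = np.random.default_rng(seed)
        bars = []
        for i in range(size):                      # 2*size atomic factors
            row = np.zeros((size, size), bool); row[i, :] = True; bars.append(row)
            col = np.zeros((size, size), bool); col[:, i] = True; bars.append(col)
        X = np.zeros((n_samples, size * size), bool)
        for s in range(n_samples):
            active = rng.random(len(bars)) < p_bar
            for b, on in zip(bars, active):
                if on:
                    X[s] |= b.ravel()
        return X

    X = bars_data()
    print(X.shape, X.mean())   # 500 binary images, each 64 pixels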
Luker, Kali R; Sullivan, Maura E; Peyre, Sarah E; Sherman, Randy; Grunwald, Tiffany
2008-01-01
The aim of this study was to compare the surgical knowledge of residents before and after receiving a cognitive task analysis-based multimedia teaching module. Ten plastic surgery residents were evaluated performing flexor tendon repair on 3 occasions. Traditional learning occurred between the first and second trial and served as the control. A teaching module was introduced as an intervention between the second and third trial using cognitive task analysis to illustrate decision-making skills. All residents showed improvement in their decision-making ability when performing flexor tendon repair after each surgical procedure. The group improved through traditional methods as well as exposure to our talk-aloud protocol (P > .01). After being trained using the cognitive task analysis curriculum the group displayed a statistically significant knowledge expansion (P < .01). Residents receiving cognitive task analysis-based multimedia surgical curriculum instruction achieved greater command of problem solving and are better equipped to make correct decisions in flexor tendon repair.
Robust gene selection methods using weighting schemes for microarray data analysis.
Kang, Suyeon; Song, Jongwoo
2017-09-02
A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
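SAM's key device is a fudge constant s0 added to the per-gene standard error so that low-variance genes do not dominate the ranking; the paper's weighting schemes modify this ranking further. A sketch of the basic SAM-style statistic follows (the authors' specific weights are not reproduced; the median choice of s0 is a simple stand-in):

    import numpy as np

    def sam_stat(X1, X2, s0=None):
        """Moderated t: (mean1 - mean2) / (pooled_se + s0), one gene per row."""
        n1, n2 = X1.shape[1], X2.shape[1]
        m1, m2 = X1.mean(1), X2.mean(1)
        sp2 = ((n1 - 1) * X1.var(1, ddof=1) + (n2 - 1) * X2.var(1, ddof=1)) / (n1 + n2 - 2)
        se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
        if s0 is None:
            s0 = np.median(se)        # simple choice; SAM tunes s0 more carefully
        return (m1 - m2) / (se + s0)

    rng = np.random.default_rng(0)
    genes, reps = 1000, 4                      # small-sample, noisy setting
    X1 = rng.normal(0, 1, (genes, reps))
    X2 = rng.normal(0, 1, (genes, reps))
    X2[:20] += 2.0                             # 20 truly differential genes
    top = np.argsort(-np.abs(sam_stat(X1, X2)))[:20]
    print(np.sum(top < 20), "of top 20 are true positives")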
Walking Stroop carpet: an innovative dual-task concept for detecting cognitive impairment
Perrochon, A; Kemoun, G; Watelain, E; Berthoz, A
2013-01-01
Background Several studies have reported the potential value of the dual-task concept during locomotion in clinical evaluation because cognitive decline is strongly associated with gait abnormalities. However, current dual-task tests appear to be insufficient for early diagnosis of cognitive impairment. Methods Forty-nine subjects (young, old, with or without mild cognitive impairment) underwent cognitive evaluation (Mini-Mental State Examination, Frontal Assessment Battery, five-word test, Stroop, clock-drawing) and single-task locomotor evaluation on an electronic walkway. They were then dual-task-tested on the Walking Stroop carpet, which is an adaptation of the Stroop color–word task for locomotion. A cluster analysis, followed by an analysis of variance, was performed to assess gait parameters. Results Cluster analysis of gait parameters on the Walking Stroop carpet revealed an interaction between cognitive and functional abilities because it made it possible to distinguish dysexecutive cognitive fragility or decline with a sensitivity of 89% and a specificity of 94%. Locomotor abilities differed according to the group and dual-task conditions. Healthy subjects performed less well on dual-tasking under reading conditions than when they were asked to distinguish colors, whereas dysexecutive subjects had worse motor performances when they were required to dual task. Conclusion The Walking Stroop carpet is a dual-task test that enables early detection of cognitive fragility that has not been revealed by traditional neuropsychological tests or single-task walking analysis. PMID:23682211
Feature-Oriented Domain Analysis (FODA) Feasibility Study
1990-11-01
controlling the synchronous behavior of the task. A task may wait for one or more synchronizing or message queue events. Each task is designed using the...
System and method for seamless task-directed autonomy for robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis; Bruemmer, David; Few, Douglas
Systems, methods, and user interfaces are used for controlling a robot. An environment map and a robot designator are presented to a user. The user may place, move, and modify task designators on the environment map. The task designators indicate a position in the environment map and indicate a task for the robot to achieve. A control intermediary links task designators with robot instructions issued to the robot. The control intermediary analyzes a relative position between the task designators and the robot. The control intermediary uses the analysis to determine a task-oriented autonomy level for the robot and communicates target achievement information to the robot. The target achievement information may include instructions for directly guiding the robot if the task-oriented autonomy level indicates low robot initiative and may include instructions for directing the robot to determine a robot plan for achieving the task if the task-oriented autonomy level indicates high robot initiative.
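A toy sketch of that control-intermediary logic. The distance threshold, function names, and the particular mapping from relative position to autonomy level are all hypothetical, not the patented system's rules:

    import math

    def autonomy_level(robot_xy, task_xy, near=1.0):
        """Low initiative (direct guidance) when the task is close; high otherwise."""
        return "low" if math.dist(robot_xy, task_xy) < near else "high"

    def target_achievement_info(robot_xy, task_xy):
        if autonomy_level(robot_xy, task_xy) == "low":
            dx, dy = (t - r for r, t in zip(robot_xy, task_xy))
            return {"mode": "direct", "move": (dx, dy)}      # guide step by step
        return {"mode": "plan", "goal": task_xy}             # robot plans itself

    print(target_achievement_info((0, 0), (0.4, 0.2)))
    print(target_achievement_info((0, 0), (5.0, 3.0)))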
Spatially Regularized Machine Learning for Task and Resting-state fMRI
Song, Xiaomu; Panych, Lawrence P.; Chen, Nan-kuei
2015-01-01
Background: Reliable mapping of brain function across sessions and/or subjects in task- and resting-state has been a critical challenge for quantitative fMRI studies, although it has been intensively addressed in the past decades. New Method: A spatially regularized support vector machine (SVM) technique was developed for reliable brain mapping in task- and resting-state. Unlike most existing SVM-based brain mapping techniques, which implement supervised classifications of specific brain functional states or disorders, the proposed method performs a semi-supervised classification for general brain function mapping where spatial correlation of fMRI is integrated into the SVM learning. The method can adapt to intra- and inter-subject variations induced by fMRI nonstationarity, and identify a true boundary between active and inactive voxels, or between functionally connected and unconnected voxels, in a feature space. Results: The method was evaluated using synthetic and experimental data at the individual and group level. Multiple features were evaluated in terms of their contributions to the spatially regularized SVM learning. Reliable mapping results in both task- and resting-state were obtained from individual subjects and at the group level. Comparison with Existing Methods: A comparison study was performed with independent component analysis, general linear model, and correlation analysis methods. Experimental results indicate that the proposed method can provide a better or comparable mapping performance at the individual and group level. Conclusions: The proposed method can provide accurate and reliable mapping of brain function in task- and resting-state, and is applicable to a variety of quantitative fMRI studies. PMID:26470627
The approach to engineering tasks composition on knowledge portals
NASA Astrophysics Data System (ADS)
Novogrudska, Rina; Globa, Larysa; Schill, Alexsander; Romaniuk, Ryszard; Wójcik, Waldemar; Karnakova, Gaini; Kalizhanova, Aliya
2017-08-01
The paper presents an approach to engineering task composition on engineering knowledge portals. The specific features of engineering tasks are highlighted, and their analysis forms the basis for the integration of partial engineering tasks. A formal algebraic system for engineering task composition is proposed, which allows setting context-independent formal structures for describing the elements of engineering tasks. A method of engineering task composition is developed that integrates partial calculation tasks into general calculation tasks on engineering portals, performed on demand at the user's request. The real-world scenario «Calculation of the strength for the power components of magnetic systems» is presented, confirming the applicability and efficiency of the proposed approach.
Exploring Ways to Implement the Health Services Mobility Study: A Feasibility Study.
ERIC Educational Resources Information Center
Lavine, Eileen M.; Moore, Audrey
A feasibility study was aimed at developing a strategy for implementing and utilizing the job analysis methodology which resulted from the Health Services Mobility Study (HSMS), particularly as it can be applied to the field of diagnostic radiology. (The HSMS method of job analysis starts with task descriptions analyzing the tasks that make up a…
SOAR: An Architecture for General Intelligence
1987-12-01
these tasks, and (3) learn about all aspects of the tasks and its performance on them. Soar has existed since mid-1982 as an experimental software system...intelligence. Soar's behavior has already been studied over a range of tasks and methods (Figure 1), which sample its intended range, though...in multiple small tasks: Generate and test, AND/OR search, hill climbing (simple and steepest-ascent), means-ends analysis, operator subgoaling
Protocol Analysis as a Tool in Function and Task Analysis
1999-10-01
Autocontingency. The use of log-linear and logistic regression methods to analyse sequential data seems appealing, and is strongly advocated by...collection and analysis of observational data. Behavior Research Methods, Instruments, and Computers, 23(3), 415-429. Patrick, J. D. (1991). Snob: A
Diverse task scheduling for individualized requirements in cloud manufacturing
NASA Astrophysics Data System (ADS)
Zhou, Longfei; Zhang, Lin; Zhao, Chun; Laili, Yuanjun; Xu, Lida
2018-03-01
Cloud manufacturing (CMfg) has emerged as a new manufacturing paradigm that provides ubiquitous, on-demand manufacturing services to customers through networks and CMfg platforms. In a CMfg system, task scheduling, as an important means of finding suitable services for specific manufacturing tasks, plays a key role in enhancing system performance. Customers' requirements in CMfg are highly individualized, which leads to diverse manufacturing tasks in terms of execution flows and users' preferences. We focus on diverse manufacturing tasks and aim to address their scheduling issue in CMfg. First, a mathematical model of task scheduling is built based on an analysis of the scheduling process in CMfg. To solve this scheduling problem, we propose a scheduling method aimed at diverse tasks, which enables each service demander to obtain the desired manufacturing services. The candidate service sets are generated according to subtask directed graphs. An improved genetic algorithm is applied to searching for optimal task scheduling solutions. The effectiveness of the proposed scheduling method is verified by a case study with individualized customers' requirements. The results indicate that the proposed task scheduling method is able to achieve better performance than some usual algorithms such as simulated annealing and pattern search.
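A compact sketch of the scheduling loop under stated assumptions: each subtask has a candidate service set with a scalar cost (e.g., completion time), a chromosome picks one candidate per subtask, and a small genetic algorithm searches for a low-cost assignment. This is a generic GA with sum-of-costs fitness, not the paper's improved variant:

    import random

    random.seed(0)
    n_sub = 8
    candidates = [[random.uniform(1, 10) for _ in range(4)] for _ in range(n_sub)]

    def fitness(chrom):                     # total cost of the chosen services
        return sum(candidates[i][g] for i, g in enumerate(chrom))

    def ga(pop_size=30, gens=100, p_mut=0.1):
        pop = [[random.randrange(4) for _ in range(n_sub)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)
            parents = pop[:pop_size // 2]                   # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_sub)            # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < p_mut:                 # mutation
                    child[random.randrange(n_sub)] = random.randrange(4)
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)

    best = ga()
    print(best, round(fitness(best), 2))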
Part-task vs. whole-task training on a supervisory control task
NASA Technical Reports Server (NTRS)
Battiste, Vernol
1987-01-01
The efficacy of part-task training for the psychomotor portion of a supervisory control simulation was compared to that of whole-task training, using six subjects in each group, who were asked to perform the task as quickly as possible. Part-task training was provided with the cursor-control device prior to transition to the whole task. The analysis of both the training and experimental trials demonstrated a significant performance advantage for the part-task group: the tasks were performed better and at higher speed. Although the subjects finally achieved the same level of performance in terms of score, the part-task method was preferable for economic reasons, since simple pretraining systems are significantly less expensive than whole-task training systems.
Estimation of low back moments from video analysis: a validation study.
Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H
2011-09-02
This study aimed to develop, compare and validate two versions of a video analysis method for assessment of low back moments during occupational lifting tasks, since for epidemiological studies and ergonomic practice relatively cheap and easily applicable methods to assess low back loads are needed. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated-measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of the moments by one of the video analysis methods. Standard deviations were considerable, suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of the dynamic components and overestimation of the static components of the moments. Intraclass correlation coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method for assessment of low back loads during lifting tasks provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Siapkaras, A.
1977-01-01
A computational method is developed to deal with the multidimensional nature of tracking and/or monitoring tasks. Operator-centered variables, including the operator's perception of the task, are considered. Matrix ratings are defined based on multidimensional scaling techniques and multivariate analysis. The method consists of two distinct steps: (1) to determine the mathematical space of subjective judgements of a certain individual (or group of evaluators) for a given set of tasks and experimental conditions; and (2) to relate this space to both the task variables and the objective performance criteria used. Results for a variety of second-order tracking tasks with smoothed noise-driven inputs indicate that: (1) many of the internally perceived task variables form a nonorthogonal set; and (2) the structure of the subjective space varies among groups of individuals according to the degree of familiarity they have with such tasks.
Rail Inspection Systems Analysis and Technology Survey
DOT National Transportation Integrated Search
1977-09-01
The study was undertaken to identify existing rail inspection system capabilities and methods which might be used to improve these capabilities. Task I was a study to quantify existing inspection parameters and Task II was a cost effectiveness study ...
Ergonomic assessment for the task of repairing computers in a manufacturing company: A case study.
Maldonado-Macías, Aidé; Realyvásquez, Arturo; Hernández, Juan Luis; García-Alcaraz, Jorge
2015-01-01
Manufacturing industry workers who repair computers may be exposed to ergonomic risk factors. This project analyzes the tasks involved in the computer repair process to (1) find the risk level for musculoskeletal disorders (MSDs) and (2) propose ergonomic interventions to address any ergonomic issues. Work procedures and main body postures were video recorded and analyzed using task analysis, the Rapid Entire Body Assessment (REBA) postural method, and biomechanical analysis. High risk for MSDs was found in every subtask using REBA. Although the biomechanical analysis found an acceptable mass-center displacement during the tasks, a hazardous level of compression on the lower back was detected while computers were being transported. This assessment found ergonomic risks mainly in the trunk, arm/forearm, and legs; the neck and hand/wrist were also compromised. Opportunities for ergonomic analyses and interventions in the design and execution of computer repair tasks are discussed.
Nonlinear analysis of saccade speed fluctuations during combined action and perception tasks
Stan, C.; Astefanoaei, C.; Pretegiani, E.; Optican, L.; Creanga, D.; Rufa, A.; Cristescu, C.P.
2014-01-01
Background: Saccades are rapid eye movements used to gather information about a scene, which requires both action and perception. These are usually studied separately, so that how perception influences action is not well understood. In a dual task, where the subject looks at a target and reports a decision, subtle changes in the saccades might be caused by action-perception interactions. Studying saccades might provide insight into how brain pathways for action and for perception interact. New method: We applied two complementary methods, multifractal detrended fluctuation analysis and the Lempel-Ziv complexity index, to eye peak speeds recorded in two experiments, a pure action task and a combined action-perception task. Results: Multifractality strength is significantly different in the two experiments, showing smaller values for dual decision-task saccades compared to simple-task saccades. The normalized Lempel-Ziv complexity index behaves similarly, i.e., it is significantly smaller in the decision saccade task than in the simple task. Comparison with existing methods: Compared to the usual statistical and linear approaches, these analyses emphasize the character of the dynamics involved in the fluctuations and offer a sensitive tool for quantitative evaluation of the multifractal features and of the complexity measure in the saccade peak speeds when different brain circuits are involved. Conclusion: Our results prove that the peak speed fluctuations have multifractal characteristics, with lower magnitudes of the multifractality strength and the complexity index when two neural pathways are simultaneously activated, demonstrating the nonlinear interaction in the brain pathways for action and perception. PMID:24854830
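The Lempel-Ziv side of such an analysis counts the number of new substrings encountered while scanning a binarized sequence and normalizes the count. The sketch below uses a simple LZ78-style parse (the LZ76 parsing used in the literature differs in detail), median binarization, and a common n/log2(n) normalization, all of which are assumptions:

    import math
    import random

    def lz_complexity(s):
        """Count phrases in an LZ78-style parse, where each phrase is the
        shortest substring not seen before."""
        seen, phrase, c = set(), "", 0
        for ch in s:
            phrase += ch
            if phrase not in seen:
                seen.add(phrase)
                c += 1
                phrase = ""
        return c + (1 if phrase else 0)

    def normalized_lz(x):
        median = sorted(x)[len(x) // 2]
        s = "".join("1" if v > median else "0" for v in x)   # binarize at median
        n = len(s)
        return lz_complexity(s) / (n / math.log2(n))         # common normalization

    random.seed(0)
    noise = [random.random() for _ in range(2000)]
    regular = [math.sin(0.1 * i) for i in range(2000)]
    print(normalized_lz(noise), normalized_lz(regular))      # noise scores higher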
A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia.
Sui, Jing; Adali, Tülay; Pearlson, Godfrey; Yang, Honghui; Sponheim, Scott R; White, Tonya; Calhoun, Vince D
2010-05-15
Collection of multiple-task brain imaging data from the same subject has now become common practice in medical imaging studies. In this paper, we propose a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods, canonical correlation analysis and independent component analysis, to achieve both high estimation accuracy and the correct connection between two datasets in which sources can have either common or distinct between-dataset correlation. In both simulated and real fMRI applications, we compare the proposed scheme with other joint BSS models and examine the different modeling assumptions. The contrast images of two tasks, sensorimotor (SM) and Sternberg working memory (SB), derived from a general linear model (GLM), served as the real multi-task fMRI data, both collected from 50 schizophrenia patients and 50 healthy controls. When examining the relationship with duration of illness, CCA+ICA revealed a significant negative correlation with temporal lobe activation. Furthermore, CCA+ICA located sensorimotor cortex as the group-discriminative region for both tasks and identified the superior temporal gyrus in SM and prefrontal cortex in SB as task-specific group-discriminative brain networks. In summary, we compared the new approach to some competitive methods with different assumptions, and found consistent results regarding each of their hypotheses on connecting the two tasks. Such an approach fills a gap in existing multivariate methods for identifying biomarkers from brain imaging data.
Crew interface with a telerobotic control station
NASA Technical Reports Server (NTRS)
Mok, Eva
1987-01-01
A method for apportioning crew-telerobot tasks has been derived to facilitate the design of a crew-friendly telerobot control station. To identify the most appropriate state-of-the-art hardware for the control station, task apportionment must first be conducted to identify if an astronaut or a telerobot is best to execute the task and which displays and controls are required for monitoring and performance. Basic steps that comprise the task analysis process are: (1) identify space station tasks; (2) define tasks; (3) define task performance criteria and perform task apportionment; (4) verify task apportionment; (5) generate control station requirements; (6) develop design concepts to meet requirements; and (7) test and verify design concepts.
Filtering and left ventricle segmentation of the fetal heart in ultrasound images
NASA Astrophysics Data System (ADS)
Vargas-Quintero, Lorena; Escalante-Ramírez, Boris
2013-11-01
In this paper, we propose to use filtering methods and a segmentation algorithm for the analysis of the fetal heart in ultrasound images. Since speckle noise makes the analysis of ultrasound images difficult, filtering becomes a useful task in these types of applications. The filtering techniques considered in this work assume that the speckle noise is a random variable with a Rayleigh distribution. We use two multiresolution methods: one based on wavelet decomposition and another based on the Hermite transform. The filtering process is used as a way to strengthen the performance of the segmentation tasks. For the wavelet-based approach, a Bayesian estimator at the subband level is employed for pixel classification. The Hermite method computes a mask to find those pixels that are corrupted by speckle. For segmentation, we selected a method based on a deformable model, or "snake", to evaluate the influence of the filtering techniques on the segmentation of the left ventricle in fetal echocardiographic images.
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
Use of modeling to identify vulnerabilities to human error in laparoscopy.
Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra
2010-01-01
This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
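The modified FMEA step reduces to scoring each candidate error on occurrence, severity, and detectability, and ranking by the resulting risk priority number (RPN = O x S x D). A minimal sketch with hypothetical error modes and scores, not the study's actual data:

    # Hypothetical error modes for Veress needle insertion, scored 1-10.
    errors = [
        {"error": "insertion angle too steep", "occ": 4, "sev": 9, "det": 6},
        {"error": "inadequate abdominal lift",  "occ": 5, "sev": 7, "det": 4},
        {"error": "skipped aspiration check",   "occ": 3, "sev": 8, "det": 8},
    ]
    for e in errors:
        e["rpn"] = e["occ"] * e["sev"] * e["det"]   # risk priority number

    # Prioritize the most likely and consequential errors for intervention.
    for e in sorted(errors, key=lambda e: -e["rpn"]):
        print(f'{e["rpn"]:4d}  {e["error"]}')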
A method for multitask fMRI data fusion applied to schizophrenia.
Calhoun, Vince D; Adali, Tulay; Kiehl, Kent A; Astur, Robert; Pekar, James J; Pearlson, Godfrey D
2006-07-01
It is becoming common to collect data from multiple functional magnetic resonance imaging (fMRI) paradigms on a single individual. The data from these experiments are typically analyzed separately and sometimes directly subtracted from one another on a voxel-by-voxel basis. These comparative approaches, although useful, do not directly attempt to examine potential commonalities between tasks and between voxels. To remedy this we propose a method to extract maximally spatially independent maps for each task that are "coupled" together by a shared loading parameter. We first compute an activation map for each task and each individual as "features," which are then used to perform joint independent component analysis (jICA) on the group data. We demonstrate our approach on a data set derived from healthy controls and schizophrenia patients, each of which carried out an auditory oddball task and a Sternberg working memory task. Our analysis approach revealed two interesting findings in the data that were missed with traditional analyses. First, consistent with our hypotheses, schizophrenia patients demonstrate "decreased" connectivity in a joint network including portions of regions implicated in two prevalent models of schizophrenia. A second finding is that for the voxels identified by the jICA analysis, the correlation between the two tasks was significantly higher in patients than in controls. This finding suggests that schizophrenia patients activate "more similarly" for both tasks than do controls. A possible synthesis of both findings is that patients are activating less, but also activating with a less-unique set of regions for these very different tasks. Both of the findings described support the claim that examination of joint activation across multiple tasks can enable new questions to be posed about fMRI data. Our approach can also be applied to data using more than two tasks. It thus provides a way to integrate and probe brain networks using a variety of tasks and may increase our understanding of coordinated brain networks and the impact of pathology upon them. 2005 Wiley-Liss, Inc.
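The jICA step can be sketched by concatenating each subject's two task feature maps side by side and decomposing the stacked matrix with a single ICA, so both tasks share one subject loading per component. A minimal sketch with synthetic contrast maps, using scikit-learn's FastICA; actual jICA implementations differ in preprocessing and ICA algorithm:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_subj, n_vox = 40, 500
    # Two ground-truth spatial sources per task, with shared subject loadings.
    loadings = rng.normal(size=(n_subj, 2))
    src_t1 = rng.laplace(size=(2, n_vox))       # task 1 spatial maps
    src_t2 = rng.laplace(size=(2, n_vox))       # task 2 spatial maps
    task1 = loadings @ src_t1 + 0.05 * rng.normal(size=(n_subj, n_vox))
    task2 = loadings @ src_t2 + 0.05 * rng.normal(size=(n_subj, n_vox))

    joint = np.hstack([task1, task2])           # subjects x (task1+task2 voxels)
    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(joint.T).T      # joint spatial sources
    mixing = ica.mixing_                        # shared subject loadings
    print(sources.shape, mixing.shape)          # (2, 1000) maps, (40, 2) loadings
    # Splitting `sources` at n_vox gives the task-1 and task-2 halves of each map.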
Yargholi, Elahe'; Nasrabadi, Ali Motie
2015-01-01
The purpose of this study was to apply recurrence quantification analysis (RQA) to hypnotic electroencephalograph (EEG) signals recorded after hypnotic induction while subjects were performing standard tasks of the Waterloo-Stanford Group Scale (WSGS) of hypnotic susceptibility. Recurrence quantifiers were then used to analyse the influence of hypnotic depth on the EEGs. By applying this method, the capability of the tasks to distinguish subjects of different hypnotizability levels was determined. In addition, medium-hypnotizable subjects showed the highest disposition to be inducted by the hypnotizer. Similarities between the brain's governing dynamics during tasks of the same type were also observed. The present study offers two notable innovations: investigating the EEGs of hypnotized subjects while they performed the mental tasks of the WSGS, and applying RQA to hypnotic EEGs.
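The core RQA quantities come straight from the recurrence matrix: the recurrence rate is the fraction of recurrent point pairs, and determinism is the share of those points lying on diagonal lines. A minimal sketch on a delay-embedded signal; the embedding dimension, delay, and threshold here are illustrative defaults, not the study's settings:

    import numpy as np

    def rqa(x, dim=3, delay=2, eps=None, lmin=2):
        n = len(x) - (dim - 1) * delay
        emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
        d = np.linalg.norm(emb[:, None] - emb[None, :], axis=2)
        if eps is None:
            eps = 0.1 * d.max()                        # simple threshold choice
        R = d < eps
        rr = R.mean()                                  # recurrence rate
        det_points = 0
        for k in range(1, n):                          # scan upper diagonals
            run = 0
            for v in list(np.diagonal(R, k)) + [False]:
                if v:
                    run += 1
                else:
                    if run >= lmin:
                        det_points += run
                    run = 0
        offdiag = R.sum() - np.trace(R)
        det = 2 * det_points / offdiag if offdiag else 0.0   # determinism
        return rr, det

    t = np.arange(500)
    print(rqa(np.sin(0.2 * t)))                            # periodic: high determinism
    print(rqa(np.random.default_rng(0).normal(size=500)))  # noise: lower determinism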
IDHEAS – A NEW APPROACH FOR HUMAN RELIABILITY ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. W. Parry; J.A Forester; V.N. Dang
2013-09-01
This paper describes a method, IDHEAS (Integrated Decision-Tree Human Event Analysis System), that has been developed jointly by the US NRC and EPRI as an improved approach to Human Reliability Analysis (HRA) that is based on an understanding of the cognitive mechanisms and performance influencing factors (PIFs) that affect operator responses. The paper describes the various elements of the method, namely the performance of a detailed cognitive task analysis that is documented in a crew response tree (CRT), the development of the associated time-line to identify the critical tasks, i.e., those whose failure results in a human failure event (HFE), and an approach to quantification that is based on explanations of why the HFE might occur.
Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.
Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang
2018-02-24
This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during the force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that scale combination VI is suitable for estimating force from the extensors and combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experiment results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force estimates derived by the proposed method are better than those obtained by the former methods.
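The sequence combination analysis described above can be sketched generically: scales are ranked by a sensitivity score, and the combination is grown in rank order, keeping the set with the lowest force-estimation error. The sketch below uses synthetic stand-in features and a linear model in place of the paper's nonlinear SEMG-force model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_scales = 400, 10
X = rng.random((n, n_scales))            # stand-in per-scale wavelet energy features
force = X[:, [1, 4, 5]] @ np.array([2.0, 1.5, 1.0]) + 0.1 * rng.standard_normal(n)

def rmse_of(scales):
    A = np.column_stack([X[:, list(scales)], np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, force, rcond=None)
    return np.sqrt(np.mean((A @ coef - force) ** 2))

# Sensitivity score per scale; correlation with force is a simple proxy here.
sens = [abs(np.corrcoef(X[:, j], force)[0, 1]) for j in range(n_scales)]
order = np.argsort(sens)[::-1]

combo, best, best_err = [], None, np.inf
for j in order:                          # grow the combination in sensitivity order
    combo.append(int(j))
    err = rmse_of(combo)
    if err < best_err:
        best, best_err = list(combo), err
print("selected scales:", best, " RMSE:", round(best_err, 3))
```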
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting the computational needs of large tasks. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used for this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult because the methods were developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan, and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min, and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
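Two of the listed heuristics translate directly into short code. The sketch below implements MCT and Min-min over a random expected-time-to-compute (ETC) matrix and compares makespans; the ETC values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
etc = rng.uniform(5, 50, size=(8, 3))   # hypothetical ETC matrix: tasks x VMs

def mct(etc):
    """Minimum Completion Time: tasks in arrival order go to the VM that
    finishes them earliest given its current ready time."""
    ready = np.zeros(etc.shape[1])
    for t in range(etc.shape[0]):
        vm = int(np.argmin(ready + etc[t]))
        ready[vm] += etc[t, vm]
    return ready.max()                  # makespan

def min_min(etc):
    """Min-min: repeatedly schedule the unassigned task whose minimum
    completion time is smallest."""
    ready = np.zeros(etc.shape[1])
    todo = set(range(etc.shape[0]))
    while todo:
        ct = {t: ready + etc[t] for t in todo}
        t = min(todo, key=lambda k: ct[k].min())
        vm = int(np.argmin(ct[t]))
        ready[vm] = ct[t][vm]
        todo.remove(t)
    return ready.max()

print(f"MCT makespan: {mct(etc):.1f}, Min-min makespan: {min_min(etc):.1f}")
```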
Reverse control for humanoid robot task recognition.
Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul
2012-12-01
Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
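The null-space decoupling that underlies the approach can be illustrated with a toy example: a secondary task command is filtered through the projector P = I - pinv(J1) J1 of the primary task, and recognition asks how much of the observed joint velocity each candidate task's row space reproduces. The Jacobians and the recognition score below are illustrative simplifications, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_joints = 7
J1 = rng.standard_normal((3, n_joints))   # primary task Jacobian (e.g., keep tray level)
J2 = rng.standard_normal((3, n_joints))   # secondary task Jacobian (e.g., place plate)

P1 = np.eye(n_joints) - np.linalg.pinv(J1) @ J1      # null-space projector of task 1

# Motion executing task 1, plus task 2 filtered into task 1's null space.
qdot = (np.linalg.pinv(J1) @ np.array([0.1, 0.0, 0.0])
        + P1 @ np.linalg.pinv(J2) @ np.array([0.0, 0.2, 0.0]))

def rowspace_share(J, qdot):
    """Fraction of the joint velocity lying in the task's row space."""
    proj = np.linalg.pinv(J) @ (J @ qdot)            # orthogonal projection
    return np.linalg.norm(proj) / np.linalg.norm(qdot)

for name, J in [("task 1", J1), ("task 2", J2)]:
    print(name, "row-space share:", round(rowspace_share(J, qdot), 2))
```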
Analysis of Feedback in after Action Reviews
1987-06-01
[Contents: Introduction; A Perspective on Feedback; Overview of Current Research]...part of their training program. The AAR is in marked contrast to the critique method of feedback which is often used in military training. The AAR...feedback is task-inherent feedback. Task-inherent feedback refers to human-machine interacting systems, e.g., computers, where in a visual tracking task
Towards Better Computational Models of the Balance Scale Task: A Reply to Shultz and Takane
ERIC Educational Resources Information Center
van der Maas, Han L. J.; Quinlan, Philip T.; Jansen, Brenda R. J.
2007-01-01
In contrast to Shultz and Takane [Shultz, T.R., & Takane, Y. (2007). Rule following and rule use in the balance-scale task. "Cognition", in press, doi:10.1016/j.cognition.2006.12.004.] we do not accept that the traditional Rule Assessment Method (RAM) of scoring responses on the balance scale task has advantages over latent class analysis (LCA):…
Advancing the Certified in Public Health Examination: A Job Task Analysis.
Kurz, Richard S; Yager, Christopher; Yager, James D; Foster, Allison; Breidenbach, Daniel H; Irwin, Zachary
In 2014, the National Board of Public Health Examiners performed a job task analysis (JTA) to revise the Certified in Public Health (CPH) examination. The objectives of this study were to describe the development, administration, and results of the JTA survey; to present an analysis of the survey results; and to review the implications of this first-ever public health JTA. An advisory committee of public health professionals developed a list of 200 public health job tasks categorized into 10 work domains. The list of tasks was incorporated into a web-based survey, and a snowball sample of public health professionals provided 4850 usable responses. Respondents rated job tasks as essential (4), very important (3), important (2), not very important (1), and never performed (0). The mean task importance ratings ranged from 2.61 to 3.01 (important to very important). The highest mean ratings were for tasks in the ethics domain (mean rating, 3.01). Respondents ranked 10 of the 200 tasks as the most important, with mean task rankings ranging from 2.98 to 3.39. We found subtle differences between male and female respondents and between master of public health and doctor of public health respondents in their rankings. The JTA established a set of job tasks in 10 public health work domains, and the results provided a foundation for refining the CPH examination. Additional steps are needed to further modify the content outline of the examination. An empirical assessment of public health job tasks, using methods such as principal components analysis, may provide additional insight.
Sharp, Marilyn A; Cohen, Bruce S; Boye, Michael W; Foulis, Stephen A; Redmond, Jan E; Larcom, Kathleen; Hydren, Jay R; Gebhardt, Deborah L; Canino, Maria C; Warr, Bradley J; Zambraski, Edward J
2017-11-01
In 2013, the U.S. Army began developing physical tests to predict a recruit's ability to perform the critical, physically demanding tasks (CPDTs) of combat arms jobs previously not open to women. The purpose of this paper is to describe the methodology and results of analyses of the accuracy and inclusiveness of the critical physically demanding task list. While the job analysis included seven combat arms jobs, only data from the 19D Cavalry Scout occupation are presented, as the process was similar for all seven jobs. Methods: As the foundation, senior subject matter experts from each job reviewed materials and reached consensus on the CPDTs and performance standards for each job. The list was reviewed by Army leadership and provided to the researchers. The job analysis consisted of reviewing job- and task-related documents and field manuals, observing >900 soldiers performing the 32 CPDTs, conducting two focus groups for each job, and analyzing responses to widely distributed job analysis questionnaires. Of the 32 CPDTs identified for the seven combat jobs, nine were relevant to 19D soldiers. Focus group discussions and job analysis questionnaire results supported the tasks and standards identified by subject matter experts while also identifying additional tasks. The tasks identified by subject matter experts were representative of the physically demanding aspects of the 19D occupation. Published by Elsevier Ltd.
Militello, L G
1998-01-01
The growing role of information technology in our society has changed the very nature of many of the tasks that workers are called on to perform. Technology has resulted in a dramatic reduction in the number of proceduralized, rote tasks that workers must face. The impact of technology on many tasks and functions has been to greatly increase demands on the cognitive skills of workers. More procedural or predictable tasks are now handled by smart machines, while workers are responsible for tasks that require inference, diagnosis, judgment, and decision making. The increase in the cognitive demands placed on workers and the redistribution of tasks have created a need for a better understanding of the cognitive components of many tasks. This need has been recognized by many in the health care domain, including the U.S. Food and Drug Administration (FDA). Recent FDA regulations encourage the use of human factors in the development of medical devices, instruments, and systems. One promising set of methods for filling this need is cognitive task analysis.
ERIC Educational Resources Information Center
Becker, Nicole M.; Rupp, Charlie A.; Brandriet, Alexandra
2017-01-01
Models related to the topic of chemical kinetics are critical for predicting and explaining chemical reactivity. Here we present a qualitative study of 15 general chemistry students' reasoning about a method of initial rates task. We asked students to discuss their understanding of the terms rate law and initial rate, and then analyze rate and…
2007-05-01
of the current project was to unpack and develop the concept of sensemaking, principally by developing and testing a cognitive model of the processes...themselves. In Year 2, new Cognitive Task Analysis data collection methods were developed and used to further test the model. Cognitive Task Analysis is a...2004) to examine the phenomenon of "sensemaking," a concept initially formulated by Weick (1995), but not developed from a cognitive perspective
Social Insects: A Model System for Network Dynamics
NASA Astrophysics Data System (ADS)
Charbonneau, Daniel; Blonder, Benjamin; Dornhaus, Anna
Social insect colonies (ants, bees, wasps, and termites) show sophisticated collective problem-solving in the face of variable constraints. Individuals exchange information and materials such as food. The resulting network structure and dynamics can inform us about the mechanisms by which the insects achieve particular collective behaviors, and these can be transposed to man-made and social networks. We discuss how network analysis can answer important questions about social insects, such as how effective task allocation or information flow is realized. We put forward the idea that network analysis methods are under-utilized in social insect research, and that they can provide novel ways to view the complexity of collective behavior, particularly if network dynamics are taken into account. To illustrate this, we present an example of a network of tasks performed by ant workers, linked by instances of workers switching from one task to another. We show how temporal network analysis can propose and test new hypotheses on mechanisms of task allocation, and how adding temporal elements to static networks can drastically change results. We discuss the benefits of using social insects as models for complex systems in general. There are multiple opportunities for emergent technologies and analysis methods to facilitate research on social insect networks. The potential for interdisciplinary work could significantly advance diverse fields such as behavioral ecology, computer science, and engineering.
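A minimal sketch of the task network described above, assuming a hypothetical observed task sequence for one worker: switches become weighted directed edges (here with networkx), followed by simple static summaries; a temporal analysis would additionally retain the switch times.

```python
import networkx as nx

# Hypothetical observed task sequence for one ant worker.
sequence = ["forage", "forage", "brood", "nest", "brood", "forage", "nest"]

G = nx.DiGraph()
for a, b in zip(sequence, sequence[1:]):
    if a != b:                                   # an instance of task switching
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

print(sorted(G.edges(data="weight")))
print(nx.degree_centrality(G))
```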
Mapping university students' epistemic framing of computational physics using network analysis
NASA Astrophysics Data System (ADS)
Bodin, Madelen
2012-06-01
Solving physics problem in university physics education using a computational approach requires knowledge and skills in several domains, for example, physics, mathematics, programming, and modeling. These competences are in turn related to students’ beliefs about the domains as well as about learning. These knowledge and beliefs components are referred to here as epistemic elements, which together represent the students’ epistemic framing of the situation. The purpose of this study was to investigate university physics students’ epistemic framing when solving and visualizing a physics problem using a particle-spring model system. Students’ epistemic framings are analyzed before and after the task using a network analysis approach on interview transcripts, producing visual representations as epistemic networks. The results show that students change their epistemic framing from a modeling task, with expectancies about learning programming, to a physics task, in which they are challenged to use physics principles and conservation laws in order to troubleshoot and understand their simulations. This implies that the task, even though it is not introducing any new physics, helps the students to develop a more coherent view of the importance of using physics principles in problem solving. The network analysis method used in this study is shown to give intelligible representations of the students’ epistemic framing and is proposed as a useful method of analysis of textual data.
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-08-30
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates the allocating, processing, and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data on each task were collected and analyzed in depth. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
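The two-phase regression idea can be sketched as a breakpoint search over two line segments; the data below are synthetic and the fitting procedure is a generic stand-in for the paper's TPR method.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 100, 120)             # task progress (%)
y = np.where(t < 60, 0.5 * t, 30 + 1.5 * (t - 60)) + rng.standard_normal(120)

def two_phase_fit(t, y):
    """Fit two line segments, searching for the breakpoint with minimum SSE."""
    best_sse, best = np.inf, None
    for k in range(10, len(t) - 10):     # candidate breakpoints
        p1 = np.polyfit(t[:k], y[:k], 1)
        p2 = np.polyfit(t[k:], y[k:], 1)
        sse = (np.sum((np.polyval(p1, t[:k]) - y[:k]) ** 2)
               + np.sum((np.polyval(p2, t[k:]) - y[k:]) ** 2))
        if sse < best_sse:
            best_sse, best = sse, (t[k], p1[0], p2[0])
    return best

bp, s1, s2 = two_phase_fit(t, y)
print(f"breakpoint near t = {bp:.0f}, slopes {s1:.2f} and {s2:.2f}")
```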
An efficient liner cooling scheme for advanced small gas turbine combustors
NASA Technical Reports Server (NTRS)
Paskin, Marc D.; Mongia, Hukam C.; Acosta, Waldo A.
1993-01-01
A joint Army/NASA program was conducted to design, fabricate, and test an advanced, small gas turbine, reverse-flow combustor utilizing a compliant metal/ceramic (CMC) wall cooling concept. The objectives of this effort were to develop a design method (basic design data base and analysis) for the CMC cooling technique and then demonstrate its application to an advanced cycle, small, reverse-flow combustor with 3000 F burner outlet temperature. The CMC concept offers significant improvements in wall cooling effectiveness resulting in a large reduction in cooling air requirements. Therefore, more air is available for control of burner outlet temperature pattern in addition to the benefits of improved efficiency, reduced emissions, and lower smoke levels. The program was divided into four tasks. Task 1 defined component materials and localized design of the composite wall structure in conjunction with development of basic design models for the analysis of flow and heat transfer through the wall. Task 2 included implementation of the selected materials and validated design models during combustor preliminary design. Detail design of the selected combustor concept and its refinement with 3D aerothermal analysis were completed in Task 3. Task 4 covered detail drawings, process development and fabrication, and a series of burner rig tests. The purpose of this paper is to provide details of the investigation into the fundamental flow and heat transfer characteristics of the CMC wall structure as well as implementation of the fundamental analysis method for full-scale combustor design.
Anwar, A R; Muthalib, M; Perrey, S; Galka, A; Granert, O; Wolff, S; Deuschl, G; Raethjen, J; Heute, U; Muthuraman, M
2012-01-01
Directionality analysis of signals originating from different parts of the brain during motor tasks has gained a lot of interest. Since brain activity can be recorded over time, methods of time series analysis can be applied to medical time series as well. Granger causality is a method to find a causal relationship between time series. Such causality can be referred to as a directional connection and is not necessarily bidirectional. The aim of this study is to differentiate between different motor tasks on the basis of activation maps and also to understand the nature of the connections present between different parts of the brain. In this paper, three different motor tasks (finger tapping, simple finger sequencing, and complex finger sequencing) are analyzed. Time series for each task were extracted from functional magnetic resonance imaging (fMRI) data, which have a very good spatial resolution and can look into the sub-cortical regions of the brain. Activation maps based on fMRI images show that, in the case of complex finger sequencing, most parts of the brain are active, unlike finger tapping, during which only limited regions show activity. Directionality analysis on time series extracted from the contralateral motor cortex (CMC), supplementary motor area (SMA), and cerebellum (CER) shows bidirectional connections between these parts of the brain. In the case of simple and complex finger sequencing, the strongest connections originate from the SMA and CMC, while connections originating from the CER in either direction are the weakest in magnitude during all paradigms.
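A generic Granger-causality test of the kind described can be run with statsmodels on two extracted time series; the coupled series below are simulated stand-ins for SMA and CMC signals.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 300
sma = rng.standard_normal(n)                 # stand-in SMA time series
cmc = np.zeros(n)
for i in range(1, n):                        # CMC driven by lagged SMA
    cmc[i] = 0.6 * sma[i - 1] + 0.3 * rng.standard_normal()

# Tests whether the second column (SMA) Granger-causes the first (CMC).
res = grangercausalitytests(np.column_stack([cmc, sma]), maxlag=2, verbose=False)
print({lag: round(r[0]["ssr_ftest"][1], 4) for lag, r in res.items()})   # p-values
```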
Task-technology fit of video telehealth for nurses in an outpatient clinic setting.
Cady, Rhonda G; Finkelstein, Stanley M
2014-07-01
Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task-technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task-technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time-motion study. Qualitative and quantitative results were merged and analyzed within the task-technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task-technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Telehealth must provide the right information to the right clinician at the right time. Evaluating task-technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology.
ERIC Educational Resources Information Center
Stone, Paul
2012-01-01
In this article I investigate how Japanese students manage interaction together in a task-based English as a foreign language (EFL) classroom. Using methods from conversation analysis and focusing on the contextual dimensions of language, I analyse data from a real classroom task with a view to understanding the ways in which social processes and…
Sustained Attention in Children with Primary Language Impairment: A Meta-Analysis
Ebert, Kerry Danahy; Kohnert, Kathryn
2014-01-01
Purpose: This study provides a meta-analysis of the difference between children with primary or specific language impairment (LI) and their typically developing peers on tasks of sustained attention. The meta-analysis seeks to determine if children with LI demonstrate subclinical deficits in sustained attention and, if so, under what conditions. Methods: Articles that reported empirical data from the performance of children with LI, in comparison to typically developing peers, on a task assessing sustained attention were considered for inclusion. Twenty-eight effect sizes were included in the meta-analysis. Two moderator analyses addressed the effects of stimulus modality and ADHD exclusion. In addition, reaction time outcomes and the effects of task variables were summarized qualitatively. Results: The meta-analysis supports the existence of sustained attention deficits in children with LI in both auditory and visual modalities, as demonstrated by reduced accuracy compared to typically developing peers. Larger effect sizes are found in tasks that use auditory and linguistic stimuli than in studies that use visual stimuli. Conclusions: Future research should consider the role that sustained attention weaknesses play in LI, as well as the implications for clinical and research assessment tasks. Methodological recommendations are summarized. PMID:21646419
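For readers unfamiliar with the pooling step behind such a meta-analysis, a minimal random-effects (DerSimonian-Laird) sketch follows; the effect sizes and variances are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical study effect sizes (Hedges' g) and their variances.
g = np.array([0.45, 0.30, 0.62, 0.18, 0.51])
v = np.array([0.020, 0.015, 0.030, 0.010, 0.025])

w = 1 / v                                        # fixed-effect weights
mean_fe = np.sum(w * g) / w.sum()
q = np.sum(w * (g - mean_fe) ** 2)               # heterogeneity statistic Q
c = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (q - (len(g) - 1)) / c)          # DerSimonian-Laird tau^2

w_re = 1 / (v + tau2)                            # random-effects weights
mean_re = np.sum(w_re * g) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled g = {mean_re:.2f} (95% CI {mean_re-1.96*se:.2f}, {mean_re+1.96*se:.2f})")
```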
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when changing the hardware. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context but may process different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs by taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by an automatic parallelization of image analysis tasks.
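In Python, the flavor of this thread concept (threads from one subtask sharing objects while working on different data) can be approximated with a thread pool over image tiles; this is a generic sketch, not the authors' system.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

image = np.random.rand(1024, 1024)

def analyze_tile(tile):
    """Stand-in image-analysis subtask; all threads share the same image object."""
    return float(tile.mean()), float(tile.std())

tiles = np.array_split(image, 8, axis=0)         # same subtask, different data
with ThreadPoolExecutor(max_workers=4) as pool:  # NumPy releases the GIL for much of this
    results = list(pool.map(analyze_tile, tiles))
print(results[:2])
```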
Using Robust Standard Errors to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan T.
2012-01-01
Combining multiple regression estimates with meta-analysis has continued to be a difficult task. A variety of methods have been proposed and used to combine multiple regression slope estimates with meta-analysis, however, most of these methods have serious methodological and practical limitations. The purpose of this study was to explore the use…
Fine tuning breath-hold-based cerebrovascular reactivity analysis models.
van Niftrik, Christiaan Hendrik Bas; Piccirelli, Marco; Bozinov, Oliver; Pangalu, Athina; Valavanis, Antonios; Regli, Luca; Fierstra, Jorn
2016-02-01
We elaborate on existing analysis methods for breath-hold (BH)-derived cerebrovascular reactivity (CVR) measurements and describe novel insights and models toward more exact CVR interpretation. Five blood-oxygen-level-dependent (BOLD) fMRI datasets of neurovascular patients with unilateral hemispheric hemodynamic impairment were used to test various BH CVR analysis methods. Temporal lag (phase), percent BOLD signal change (CVR), and explained variance (coherence) maps were calculated using three different sine models and two novel "Optimal Signal" model-free methods based on the unaffected hemisphere and the sagittal sinus fMRI signal time series, respectively. All models showed significant differences in CVR and coherence between the affected (hemodynamically impaired) and unaffected hemispheres. Voxel-wise phase determination significantly increases CVR (0.60 ± 0.18 vs. 0.82 ± 0.27; P < 0.05). Incorporating different durations of breath hold and resting period in one sine model (two-task) did increase coherence in the unaffected hemisphere, as well as eliminating the negative phase commonly obtained by one-task frequency models. The novel model-free "optimal signal" methods both explained the BOLD MR data similarly to the two-task sine model. Our CVR analysis demonstrates improved CVR and coherence after implementation of voxel-wise phase and frequency adjustment. The novel "optimal signal" methods provide a robust and feasible alternative to the sine models, as both are model-free and independent of compliance. Here, the sagittal sinus model may be advantageous, as it is independent of hemispheric CVR impairment.
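A toy version of voxel-wise phase (lag) determination followed by CVR estimation might look as follows, with a simulated breath-hold regressor and BOLD series; a lag search by maximum correlation stands in for the paper's sine-model fitting.

```python
import numpy as np

rng = np.random.default_rng(5)
tr, n_vol = 2.0, 150
t = np.arange(n_vol) * tr
regressor = np.sin(2 * np.pi * t / 60)                 # one breath-hold cycle per 60 s
bold = 1000 + 8 * np.sin(2 * np.pi * (t - 6) / 60) + rng.standard_normal(n_vol)

best_lag, best_r = 0, -np.inf
for lag in range(15):                                  # voxel-wise temporal lag search
    r = np.corrcoef(np.roll(regressor, lag), bold)[0, 1]
    if r > best_r:
        best_lag, best_r = lag, r

shifted = np.roll(regressor, best_lag)
beta = np.polyfit(shifted, bold, 1)[0]
cvr = 100 * beta * (shifted.max() - shifted.min()) / bold.mean()   # % BOLD change
print(f"lag = {best_lag * tr:.0f} s, r = {best_r:.2f}, CVR = {cvr:.2f}%")
```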
Abe, Kazuhiro; Takahashi, Toshimitsu; Takikawa, Yoriko; Arai, Hajime; Kitazawa, Shigeru
2011-10-01
Independent component analysis (ICA) can be usefully applied to functional imaging studies to evaluate the spatial extent and temporal profile of task-related brain activity. It requires no a priori assumptions about the anatomical areas that are activated or the temporal profile of the activity. We applied spatial ICA to detect a voluntary but hidden response of silent speech. To validate the method against a standard model-based approach, we used the silent speech of a tongue twister as a 'Yes' response to single questions that were delivered at given times. In the first task, we attempted to estimate one number that was chosen by a participant from 10 possibilities. In the second task, we increased the possibilities to 1000. In both tasks, spatial ICA was as effective as the model-based method for determining the number in the subject's mind (80-90% correct per digit), but spatial ICA outperformed the model-based method in terms of time, especially in the 1000-possibility task. In the model-based method, calculation time increased by 30-fold, to 15 h, because of the necessity of testing 1000 possibilities. In contrast, the calculation time for spatial ICA remained as short as 30 min. In addition, spatial ICA detected an unexpected response that occurred by mistake. This advantage was validated in a third task, with 13 500 possibilities, in which participants had the freedom to choose when to make one of four responses. We conclude that spatial ICA is effective for detecting the onset of silent speech, especially when it occurs unexpectedly. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
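Spatial ICA of the kind applied here treats voxels as samples and time points as mixed channels; the following sketch uses scikit-learn's FastICA (an alternative to InfoMax-style algorithms) on simulated data.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
n_time, n_vox = 120, 2000
# Two hypothetical spatial maps mixed by independent time courses.
maps = np.zeros((2, n_vox))
maps[0, :200] = 1.0
maps[1, 900:1100] = 1.0
tc = rng.standard_normal((n_time, 2))
data = tc @ maps + 0.2 * rng.standard_normal((n_time, n_vox))

# Spatial ICA: voxels are the samples, time points the mixed channels.
ica = FastICA(n_components=2, random_state=0)
spatial_maps = ica.fit_transform(data.T).T      # components x voxels
time_courses = ica.mixing_                      # time x components
print(spatial_maps.shape, time_courses.shape)   # (2, 2000) (120, 2)
```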
Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P
2013-01-01
Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health - A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus report by submitting written comments during the review process and oral comments during two forum presentations at the ISPOR 16th and 17th Annual International Meetings held in Baltimore (2011) and Washington, DC (2012). Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Analysis of cigarette purchase task instrument data with a left-censored mixed effects model.
Liao, Wenjie; Luo, Xianghua; Le, Chap T; Chu, Haitao; Epstein, Leonard H; Yu, Jihnhee; Ahluwalia, Jasjit S; Thomas, Janet L
2013-04-01
The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. Although a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug's RRE, applying conventional statistical methods to data generated from such an instrument may not be adequate if the extra zeros or missing values in the data are simply ignored or replaced with arbitrarily small consumption values, for example, 0.001. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method, and future directions of research are also discussed.
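The left-censoring idea can be made concrete with a single-level censored (Tobit-type) regression; the paper's model additionally includes random effects, which this scipy sketch omits.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
n = 300
x = rng.uniform(0, 4, n)                        # e.g., log unit price
latent = 2.0 - 0.8 * x + rng.normal(0, 0.7, n)  # latent consumption
c = 0.0                                         # left-censoring point (observed zeros)
y = np.maximum(latent, c)
cens = latent <= c

def negloglik(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)
    mu = b0 + b1 * x
    ll = np.where(cens,
                  stats.norm.logcdf((c - mu) / s),           # P(latent <= c)
                  stats.norm.logpdf((y - mu) / s) - np.log(s))
    return -ll.sum()

fit = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, s = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"b0 = {b0:.2f}, b1 = {b1:.2f}, sigma = {s:.2f}")      # near 2.0, -0.8, 0.7
```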
Development and Validation of Cognitive Screening Instruments.
ERIC Educational Resources Information Center
Jarman, Ronald F.
The author suggests that most research on the early detection of learning disabilities is characterized by an ineffective and atheoretical method of selecting and validating tasks. An alternative technique is proposed, based on a neurological theory of cognitive processes, whereby task analysis is a first step, with empirical analyses as…
Computational Prosodic Markers for Autism
ERIC Educational Resources Information Center
Van Santen, Jan P.H.; Prud'hommeaux, Emily T.; Black, Lois M.; Mitchell, Margaret
2010-01-01
We present results obtained with new instrumental methods for the acoustic analysis of prosody to evaluate prosody production by children with Autism Spectrum Disorder (ASD) and Typical Development (TD). Two tasks elicit focal stress--one in a vocal imitation paradigm, the other in a picture-description paradigm; a third task also uses a vocal…
Task-based statistical image reconstruction for high-quality cone-beam CT
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-11-01
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms to encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d'). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d', and the certainty-based method achieved uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization method in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
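A one-dimensional analogue of PWLS with a spatially varying quadratic penalty conveys the idea: the smoothing strength beta is lowered where the task demands detail. The gradient-descent sketch below is illustrative, not the authors' 3D reconstruction.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
truth = np.zeros(n)
truth[80:120] = 1.0                       # small "lesion" on a flat background
w = np.full(n, 4.0)                       # statistical weights (inverse noise variance)
y = truth + rng.normal(0, 0.5, n)

beta = np.full(n - 1, 2.0)                # spatially varying penalty strength
beta[70:130] = 0.2                        # weaker smoothing where detection matters

x = y.copy()
for _ in range(500):                      # gradient descent on the PWLS objective
    grad = w * (x - y)
    d = np.diff(x)
    grad[:-1] += -2 * beta * d            # d/dx_j of beta_j * (x_{j+1} - x_j)^2
    grad[1:] += 2 * beta * d
    x -= 0.05 * grad
print("rmse:", round(float(np.sqrt(np.mean((x - truth) ** 2))), 3))
```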
Decision paths in complex tasks
NASA Technical Reports Server (NTRS)
Galanter, Eugene
1991-01-01
Complex real-world action and its prediction and control have escaped analysis by the classical methods of psychological research. The reason is that psychologists have no procedures to parse complex tasks into their constituents. Where such a division can be made, based say on expert judgment, there is no natural scale to measure the positive or negative values of the components. Even if we could assign numbers to task parts, we lack rules, i.e., a theory, to combine them into a total task representation. We compare here two plausible theories for the amalgamation of the value of task components. Both of these theories require a numerical representation of motivation, for motivation is the primary variable that guides choice and action in well-learned tasks. We address this problem of motivational quantification and performance prediction by developing psychophysical scales of the desirability or aversiveness of task components based on utility scaling methods (Galanter 1990). We modify methods used originally to scale sensory magnitudes (Stevens and Galanter 1957), and that have been applied recently to the measure of task 'workload' by Gopher and Braune (1984). Our modification uses utility comparison scaling techniques which avoid the unnecessary assumptions made by Gopher and Braune. Formulas for the utility of complex tasks, based on the theoretical models, are used to predict decision and choice of alternate paths to the same goal.
Handbook for Construction of Task Inventories for Navy Enlisted Ratings
1984-01-01
1 March 1977. Cambardella, J. J., & Alvord, V. O. TI-CODAP: A computerized method of job analysis for personnel management. Prince George's County...ficity needed in the occupational analysis and influences the choice of an analysis method. The primary source of job data usually is the job holder at...analyzed, various methods of collecting and processing data were considered, and an introductory approach to the collection and analysis of Navy
Islam, Md Rabiul; Tanaka, Toshihisa; Molla, Md Khademul Islam
2018-05-08
When designing a multiclass motor imagery-based brain-computer interface (MI-BCI), the so-called tangent space mapping (TSM) method, which utilizes the geometric structure of covariance matrices, is an effective technique. This paper introduces a method using TSM for finding accurate operational frequency bands related to the brain activities associated with MI tasks. A multichannel electroencephalogram (EEG) signal is decomposed into multiple subbands, and tangent features are then estimated on each subband. A mutual information analysis-based algorithm is implemented to select subbands containing features capable of improving motor imagery classification accuracy. The features obtained from the selected subbands are combined to form the feature space. A principal component analysis-based approach is employed to reduce the feature dimension, and classification is then accomplished by a support vector machine (SVM). Offline analysis demonstrates that the proposed multiband tangent space mapping with subband selection (MTSMS) approach outperforms state-of-the-art methods. It achieves the highest average classification accuracy for all datasets (BCI competition dataset 2a, IIIa, IIIb, and dataset JK-HH1). The increased classification accuracy of MI tasks with the proposed MTSMS approach can yield effective implementation of BCI. The mutual information-based subband selection method is implemented to tune the operational frequency bands to represent actual motor imagery tasks.
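The core tangent-space mapping step can be sketched directly: whiten each covariance by the inverse square root of a reference mean, then take a matrix logarithm and vectorize. The sketch below uses an arithmetic-mean reference and unweighted vectorization for brevity; a full implementation (e.g., in pyriemann) uses the Riemannian mean and scales off-diagonal terms.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, inv

rng = np.random.default_rng(9)

def random_spd(d=4):
    a = rng.standard_normal((d, d))
    return a @ a.T + d * np.eye(d)

covs = [random_spd() for _ in range(20)]       # per-trial EEG covariance matrices

ref = sum(covs) / len(covs)                    # reference point (arithmetic mean here)
ref_isqrt = np.real(inv(sqrtm(ref)))

def tangent_features(c):
    """Log-map a covariance to the tangent space at the reference, then vectorize."""
    s = np.real(logm(ref_isqrt @ c @ ref_isqrt))
    return s[np.triu_indices_from(s)]          # upper triangle as the feature vector

X = np.array([tangent_features(c) for c in covs])
print(X.shape)    # trials x features, ready for PCA + SVM as in the paper
```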
Thokala, Praveen; Devlin, Nancy; Marsh, Kevin; Baltussen, Rob; Boysen, Meindert; Kalo, Zoltan; Longrenn, Thomas; Mussen, Filip; Peacock, Stuart; Watkins, John; Ijzerman, Maarten
2016-01-01
Health care decisions are complex and involve confronting trade-offs between multiple, often conflicting, objectives. Using structured, explicit approaches to decisions involving multiple criteria can improve the quality of decision making, and a set of techniques known under the collective heading of multiple criteria decision analysis (MCDA) is useful for this purpose. MCDA methods are widely used in other sectors, and recently there has been an increase in health care applications. In 2014, ISPOR established an MCDA Emerging Good Practices Task Force. It was charged with establishing a common definition for MCDA in health care decision making and developing good practice guidelines for conducting MCDA to aid health care decision making. This initial ISPOR MCDA task force report provides an introduction to MCDA: it defines MCDA; provides examples of its use in different kinds of decision making in health care (including benefit-risk analysis, health technology assessment, resource allocation, portfolio decision analysis, shared patient-clinician decision making, and prioritizing patients' access to services); provides an overview of the principal methods of MCDA; and describes the key steps involved. Upon reviewing this report, readers should have a solid overview of MCDA methods and their potential for supporting health care decision making. Copyright © 2016. Published by Elsevier Inc.
Analysis of methods of processing of expert information by optimization of administrative decisions
NASA Astrophysics Data System (ADS)
Churakov, D. Y.; Tsarkova, E. G.; Marchenko, N. D.; Grechishnikov, E. V.
2018-03-01
This paper proposes a methodology for defining measures in the expert estimation of the quality and reliability of applied software products. Methods for aggregating expert estimates are described, using as an example the collective choice of project tooling in the development of special-purpose software for the needs of institutions. Results from the operation of an interactive decision-support system are presented, together with an algorithm for solving the choice problem based on the method of analysis of hierarchies. The developed algorithm can be applied in the development of expert systems for a wide class of problems connected to multi-criteria choice.
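Assuming the "method of analysis of hierarchies" refers to the analytic hierarchy process (AHP), the choice step can be sketched as the principal eigenvector of a pairwise comparison matrix; the matrix below is hypothetical.

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons of three candidate tools.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
w = np.abs(vecs[:, k].real)
w /= w.sum()                                          # priority vector

ci = (vals.real[k] - A.shape[0]) / (A.shape[0] - 1)   # consistency index
print("priorities:", np.round(w, 3), " CI:", round(ci, 3))
```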
Automatic motor task selection via a bandit algorithm for a brain-controlled button
NASA Astrophysics Data System (ADS)
Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen
2013-02-01
Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
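The bandit logic can be illustrated with plain UCB1 (the paper's UCB-classif is a variant adapted to classification rewards); the per-task accuracies below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(10)
p = [0.55, 0.70, 0.60]               # hidden per-task classification accuracies

counts = np.ones(3)                  # play each task once to initialize
rewards = np.array([float(rng.random() < pi) for pi in p])

for t in range(4, 200):              # UCB1: empirical mean + exploration bonus
    ucb = rewards / counts + np.sqrt(2 * np.log(t) / counts)
    a = int(np.argmax(ucb))
    rewards[a] += float(rng.random() < p[a])
    counts[a] += 1

print("selections per task:", counts.astype(int))   # concentrates on the best task
```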
NASA Technical Reports Server (NTRS)
1976-01-01
Additional design and analysis data are provided to supplement the results of the two parallel design study efforts. The key results of the three supplemental tasks investigated are: (1) The velocity duration profile has a significant effect in determining the optimum wind turbine design parameters and the energy generation cost. (2) Modest increases in capacity factor can be achieved with small increases in energy generation costs and capital costs. (3) Reinforced concrete towers that are esthetically attractive can be designed and built at a cost comparable to those for steel truss towers. The approach used, method of analysis, assumptions made, design requirements, and the results for each task are discussed in detail.
Chen, Zikuan; Calhoun, Vince D
2016-03-01
Conventionally, independent component analysis (ICA) is performed on an fMRI magnitude dataset to analyze brain functional mapping (AICA). By solving the inverse problem of fMRI, we can reconstruct the brain magnetic susceptibility (χ) functional states. On the reconstructed χ dataspace, we propose an ICA-based brain functional χ mapping method (χICA) to extract task-evoked brain functional maps. A complex division algorithm is applied to a timeseries of fMRI phase images to extract temporal phase changes (relative to an OFF-state snapshot). A computed inverse MRI (CIMRI) model is used to reconstruct a 4D brain χ response dataset. χICA is implemented by applying a spatial InfoMax ICA algorithm to the reconstructed 4D χ dataspace. With finger-tapping experiments on a 7T system, the χICA-extracted χ-depicted functional map is similar to the SPM-inferred functional χ map, with a spatial correlation of 0.67 ± 0.05. In comparison, the AICA-extracted magnitude-depicted map is correlated with the SPM magnitude map by 0.81 ± 0.05. Understanding why χICA remains inferior to AICA for task-evoked functional mapping is an ongoing research topic. For task-evoked brain functional mapping, we compare the data-driven ICA method with the task-correlated SPM method. In particular, we compare χICA with AICA for extracting task-correlated timecourses and functional maps. χICA can extract a χ-depicted task-evoked brain functional map from a reconstructed χ dataspace without knowledge about brain hemodynamic responses. The χICA-extracted brain functional χ map reveals a bidirectional BOLD response pattern that is unavailable (or different) from AICA. Copyright © 2016 Elsevier B.V. All rights reserved.
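The complex-division step for extracting temporal phase changes is compact enough to sketch directly; the values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)
n_vox = 5
off = np.exp(1j * rng.uniform(-np.pi, np.pi, n_vox))    # OFF-state complex snapshot
task_shift = np.array([0.00, 0.02, 0.05, 0.00, 0.03])   # task-evoked phase change (rad)
on = off * np.exp(1j * task_shift) * (1 + 0.01 * rng.standard_normal(n_vox))

# Complex division cancels the static phase, isolating the temporal change.
dphi = np.angle(on / off)
print(np.round(dphi, 3))    # recovers task_shift; a timeseries of these feeds CIMRI
```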
Individual Differences in Dynamic Functional Brain Connectivity across the Human Lifespan.
Davison, Elizabeth N; Turner, Benjamin O; Schlesinger, Kimberly J; Miller, Michael B; Grafton, Scott T; Bassett, Danielle S; Carlson, Jean M
2016-11-01
Individual differences in brain functional networks may be related to complex personal identifiers, including health, age, and ability. Dynamic network theory has been used to identify properties of dynamic brain function from fMRI data, but the majority of analyses and findings remain at the level of the group. Here, we apply hypergraph analysis, a method from dynamic network theory, to quantify individual differences in brain functional dynamics. Using a summary metric derived from the hypergraph formalism, hypergraph cardinality, we investigate individual variations in two separate, complementary data sets. The first data set ("multi-task") consists of 77 individuals engaging in four consecutive cognitive tasks. We observe that hypergraph cardinality exhibits variation across individuals while remaining consistent within individuals between tasks; moreover, the analysis of one of the memory tasks revealed a marginally significant correspondence between hypergraph cardinality and age. This finding motivated a similar analysis of the second data set ("age-memory"), in which 95 individuals, aged 18-75, performed a memory task with a similar structure to the multi-task memory task. With the increased age range in the age-memory data set, the correspondence between hypergraph cardinality and age becomes significant. We discuss these results in the context of the well-known finding linking age with network structure, and suggest that hypergraph analysis should serve as a useful tool in furthering our understanding of the dynamic network structure of the brain.
Exploring physical exposures and identifying high-risk work tasks within the floor layer trade
McGaha, Jamie; Miller, Kim; Descatha, Alexis; Welch, Laurie; Buchholz, Bryan; Evanoff, Bradley; Dale, Ann Marie
2014-01-01
Introduction: Floor layers have high rates of musculoskeletal disorders yet few studies have examined their work exposures. This study used observational methods to describe physical exposures within floor laying tasks. Methods: We analyzed 45 videos from 32 floor layers using Multimedia-Video Task Analysis software to determine the time in task, forces, postures, and repetitive hand movements for installation of four common flooring materials. We used the WISHA checklists to define exposure thresholds. Results: Most workers (91%) met the caution threshold for one or more exposures. Workers showed high exposures in multiple body parts with variability in exposures across tasks and for different materials. Prolonged exposures were seen for kneeling, poor neck and low back postures, and intermittent but frequent hand grip forces. Conclusions: Floor layers experience prolonged awkward postures and high-force physical exposures in multiple body parts, which probably contribute to their high rates of musculoskeletal disorders. PMID:24274895
A human factors analysis of EVA time requirements
NASA Technical Reports Server (NTRS)
Pate, D. W.
1996-01-01
Human Factors Engineering (HFE), also known as ergonomics, is a discipline whose goal is to engineer a safer, more efficient interface between humans and machines. HFE makes use of a wide range of tools and techniques to fulfill this goal. One of these tools is known as motion and time study, a technique used to develop time standards for given tasks. A human factors motion and time study was initiated with the goal of developing a database of EVA task times and a method of utilizing the database to predict how long an extravehicular activity (EVA) should take. Initial development relied on the EVA activities performed during the STS-61 mission (Hubble repair). The first step of the analysis was to become familiar with EVAs and with the previous studies and documents produced on EVAs. After reviewing these documents, an initial set of task primitives and task time modifiers was developed. Videotaped footage of the STS-61 EVAs was analyzed using these primitives and task time modifiers. Data for two entire EVA missions and portions of several others, each with two EVA astronauts, were collected for analysis. Feedback from the analysis of the data will be used to further refine the primitives and task time modifiers used. Analysis of variance techniques for categorical data will be used to determine which factors may, individually or by interaction, affect the primitive times and how much of an effect they have.
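A database of primitive times with task-time modifiers supports a simple prediction rule, total time = sum of primitive times scaled by modifiers; all numbers below are hypothetical placeholders, not values from the study.

```python
# Hypothetical primitive times (seconds) and multiplicative task-time modifiers.
primitive_time = {"translate": 30, "ingress_foot_restraint": 90, "turn_bolt": 12}
modifier = {"night_pass": 1.25, "bulky_tool": 1.10}

task = [("translate", 4), ("ingress_foot_restraint", 1), ("turn_bolt", 18)]
base = sum(primitive_time[name] * count for name, count in task)
total = base * modifier["night_pass"] * modifier["bulky_tool"]
print(f"predicted EVA task time: {total / 60:.1f} min")
```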
Clinical quality needs complex adaptive systems and machine learning.
Marsland, Stephen; Buchan, Iain
2004-01-01
The vast increase in clinical data has the potential to bring about large improvements in clinical quality and other aspects of healthcare delivery. However, such benefits do not come without cost. The analysis of such large datasets, particularly where the data may have to be merged from several sources and may be noisy and incomplete, is a challenging task. Furthermore, the introduction of clinical changes is a cyclical task, meaning that the processes under examination operate in an environment that is not static. We suggest that traditional methods of analysis are unsuitable for the task, and identify complexity theory and machine learning as areas that have the potential to facilitate the examination of clinical quality. By its nature the field of complex adaptive systems deals with environments that change because of the interactions that have occurred in the past. We draw parallels between health informatics and bioinformatics, which has already started to successfully use machine learning methods.
What Makes Patient Navigation Most Effective: Defining Useful Tasks and Networks.
Gunn, Christine; Battaglia, Tracy A; Parker, Victoria A; Clark, Jack A; Paskett, Electra D; Calhoun, Elizabeth; Snyder, Frederick R; Bergling, Emily; Freund, Karen M
2017-01-01
Given the momentum in adopting patient navigation into cancer care, there is a need to understand the contribution of specific navigator activities to improved clinical outcomes. A mixed-methods study combined direct observations of patient navigators within the Patient Navigation Research Program and outcome data from the trial. We correlated the frequency of navigator tasks with the rate of diagnostic resolution within 365 days among patients who received the intervention relative to controls. A focused content analysis examined the tasks with the strongest correlations between navigator tasks and patient outcomes. Navigating directly with specific patients (r = 0.679), working with clinical providers to facilitate patient care (r = 0.643), and performing tasks for patients not directly related to their diagnostic evaluation (r = 0.714) were positively associated with more timely diagnosis. Using medical records for non-navigation tasks had a negative association (r = -0.643). Content analysis revealed that service provision directed at specific patients improved care while systems-focused activities did not.
Analysis of EUV/FUV dayglow and auroral measurements
NASA Technical Reports Server (NTRS)
Majeed, T.; Strickland, D. J.; Link, R.
1994-01-01
This report documents investigations carried out over the twelve month period which commenced in November 1992. The contract identifies the following three tasks: analysis of the O II 83.4 nm dayglow and comparison with incoherent scatter radar data, analysis of the EUV spectrum of an electron aurora, and analysis of the EUV spectrum of a proton-hydrogen-electron aurora. The analysis approach, data reduction methods, and results, including plots of O I 98.9 nm versus time and average spectra, are presented for the last two tasks. The appendices contain preprints of two papers written under the first task. The first paper examines the effect of new O(3P) photoionization cross sections, N2 photoabsorption cross sections, and O(+) oscillator strengths and transition probabilities on the O II 83.4 nm dayglow. The second addresses the problem of remotely sensing the dayside F2 region using limb O II 83.4 nm data.
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central, finite difference, numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
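The central-difference integration named in the abstract can be sketched generically. The snippet below integrates a small damped modal system with an explicit central-difference update; the frequencies, damping, and forcing pulse are invented, and this is a sketch of the numerical scheme only, not the TETRA code.

import numpy as np

# Generic central-difference integration of modal equations
# q'' + 2*zeta*w*q' + w^2*q = f(t). Illustrative parameters only.
w = np.array([50.0, 120.0])   # modal frequencies (rad/s), invented
zeta = 0.02                   # modal damping ratio
dt = 1e-4                     # explicit scheme: dt must be well below 2/max(w)
steps = 5000

q_prev = np.zeros(2)
q = np.zeros(2)
for n in range(steps):
    # short force pulse standing in for a blade-out event
    f = np.array([1.0, 0.0]) if n * dt < 0.01 else np.zeros(2)
    qdot = (q - q_prev) / dt                  # backward-difference velocity
    qddot = f - 2 * zeta * w * qdot - w**2 * q
    q_next = 2 * q - q_prev + dt**2 * qddot   # central-difference update
    q_prev, q = q, q_next
print("final modal coordinates:", q)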
Advanced stress analysis methods applicable to turbine engine structures
NASA Technical Reports Server (NTRS)
Pian, Theodore H. H.
1991-01-01
The following tasks on the study of advanced stress analysis methods applicable to turbine engine structures are described: (1) construction of special elements which contain traction-free circular boundaries; (2) formulation of a new version of mixed variational principles and a new version of hybrid stress elements; (3) establishment of methods for suppression of kinematic deformation modes; (4) construction of semi-Loof plate and shell elements by the assumed stress hybrid method; and (5) elastic-plastic analysis by viscoplasticity theory using the mechanical subelement model.
EEG source analysis of data from paralysed subjects
NASA Astrophysics Data System (ADS)
Carabali, Carmen A.; Willoughby, John O.; Fitzgibbon, Sean P.; Grummett, Tyler; Lewis, Trent; DeLosAngeles, Dylan; Pope, Kenneth J.
2015-12-01
One of the limitations of electroencephalography (EEG) data is its quality, as it is usually contaminated with electrical signal from muscle. This research studies the results of two EEG source analysis methods applied to scalp recordings taken under paralysis and under normal conditions during the performance of a cognitive task. The aim is to determine which types of analysis are appropriate for dealing with EEG data containing myogenic components. The data used are the scalp recordings of six subjects in normal conditions and during paralysis while performing different cognitive tasks, including the oddball task, which is the object of this research. The data were pre-processed by filtering and artefact correction; then, epochs of one second for targets and distractors were extracted. Distributed source analysis was performed in BESA Research 6.0; using its results and information from the literature, nine ideal locations for source dipoles were identified. The nine dipoles were used to perform discrete source analysis, fitting them to the averaged epochs to obtain source waveforms. The results were statistically analysed by comparing the outcomes before and after the subjects were paralysed. Finally, frequency analysis was performed to better explain the results. The findings were that distributed source analysis can produce confounded results for EEG contaminated with myogenic signals; conversely, statistical analysis of the results from discrete source analysis showed that this method can help in dealing with EEG data contaminated with muscle electrical signal.
Foster, Scott D; Feutry, Pierre; Grewe, Peter M; Berry, Oliver; Hui, Francis K C; Davies, Campbell R
2018-06-26
Delineating naturally occurring and self-sustaining sub-populations (stocks) of a species is an important task, especially for species harvested from the wild. Despite its central importance to natural resource management, analytical methods used to delineate stocks are often, and increasingly, borrowed from superficially similar analytical tasks in human genetics even though models specifically for stock identification have been previously developed. Unfortunately, the analytical tasks in resource management and human genetics are not identical: questions about humans are typically aimed at inferring ancestry (often referred to as 'admixture') rather than breeding stocks. In this article, we argue, and show through simulation experiments and an analysis of yellowfin tuna data, that ancestral analysis methods are not always appropriate for stock delineation. In this work, we advocate a variant of a previously introduced and simpler model that identifies stocks directly. We also highlight that the computational aspects of the analysis, irrespective of the model, are difficult. We introduce some alternative computational methods and quantitatively compare these methods to each other and to established methods. We also present a method for quantifying uncertainty in model parameters and in assignment probabilities. In doing so, we demonstrate that point estimates can be misleading. One of the computational strategies presented here, based on an expectation-maximisation algorithm with judiciously chosen starting values, is robust and has a modest computational cost. This article is protected by copyright. All rights reserved.
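To illustrate the kind of computation involved, here is a hedged sketch of an expectation-maximisation loop that assigns simulated individuals to K stocks under a deliberately simplified biallelic-SNP mixture. A single random start is used rather than the judiciously chosen starting values the authors advocate, and nothing here reproduces their actual model.

import numpy as np

# Toy EM for stock assignment: each stock k has allele frequencies p[k],
# individuals are mixtures with proportions pi. Simulated data only.
rng = np.random.default_rng(0)
G = rng.integers(0, 2, size=(100, 50))       # 100 individuals x 50 SNPs (0/1)
K = 2
pi = np.full(K, 1.0 / K)                     # stock mixing proportions
p = rng.uniform(0.2, 0.8, size=(K, 50))      # per-stock allele frequencies

for _ in range(200):                          # fixed EM iteration budget
    # E-step: posterior probability that each individual belongs to each stock
    logl = G @ np.log(p).T + (1 - G) @ np.log(1 - p).T + np.log(pi)
    logl -= logl.max(axis=1, keepdims=True)
    resp = np.exp(logl)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update proportions and allele frequencies (light smoothing)
    pi = resp.mean(axis=0)
    p = (resp.T @ G + 1.0) / (resp.sum(axis=0)[:, None] + 2.0)

assign = resp.argmax(axis=1)                  # hard stock assignments
print("estimated proportions:", pi.round(2), "first assignments:", assign[:10])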
Real-time design with peer tasks
NASA Technical Reports Server (NTRS)
Goforth, Andre; Howes, Norman R.; Wood, Jonathan D.; Barnes, Michael J.
1995-01-01
We introduce a real-time design methodology for large scale, distributed, parallel architecture, real-time systems (LDPARTS), as an alternative to methods using rate or deadline monotonic analysis. In our method the fundamental units of prioritization, work items, are domain-specific objects with timing requirements (deadlines) found in the user's specification. A work item consists of a collection of tasks of equal priority. Current scheduling theories are applied with artifact deadlines introduced by the designer, whereas our method schedules work items to meet the user's specification deadlines (sometimes called end-to-end deadlines). Our method supports the following scheduling properties. First, work item scheduling is based on domain-specific importance instead of task-level urgency and still meets as many user specification deadlines as can be met by scheduling tasks with respect to urgency. Second, the minimum (closest) on-line deadline that can be guaranteed for a work item of highest importance, scheduled at run time, is approximately the inverse of the throughput, measured in work items per second. Third, throughput is not degraded during overload, and instead of resorting to task shedding during overload, the designer can specify which work items to shed. We prove these properties in a mathematical model.
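A toy sketch of the scheduling idea: whole work items are ordered by domain importance, the closest guaranteeable deadline is taken as roughly the inverse of throughput, and overload is handled by shedding designated items. All class fields and numbers are invented for illustration and do not reproduce the paper's scheduler.

# Illustrative work-item scheduler: importance-ordered, with shedding.
class WorkItem:
    def __init__(self, name, importance, deadline_s, tasks):
        self.name, self.importance = name, importance
        self.deadline_s, self.tasks = deadline_s, tasks

def run(work_items, capacity_items_per_s, now=0.0):
    # Higher importance is served first; items that cannot meet their
    # user-specification deadline are shed rather than arbitrary tasks.
    queue = sorted(work_items, key=lambda w: -w.importance)
    min_guaranteed = 1.0 / capacity_items_per_s   # ~inverse of throughput
    for w in queue:
        finish = now + min_guaranteed
        if finish <= w.deadline_s:
            now = finish
            print(f"{w.name}: met deadline ({finish:.2f}s <= {w.deadline_s}s)")
        else:
            print(f"{w.name}: shed (would finish at {finish:.2f}s)")

run([WorkItem("track-update", 9, 0.5, ["read", "fuse"]),
     WorkItem("log-rotate", 2, 0.4, ["flush"])], capacity_items_per_s=4)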
Qualitative task analysis to enhance sports characterization: a surfing case study.
Moreira, Miguel; Peixoto, César
2014-09-29
The aim of this study was to develop a Matrix of Analysis for Sports Tasks (MAST), applicable regardless of the sports activity, based on practice classification and task analysis. As this was qualitative research, our main question was: in assessing a sport's structure, is it possible to characterize any discipline through context and individuals' behaviours? The sample was drawn from the surf discipline in a competition setting, with 5 of the top 16 Portuguese surfers training together. Based on a qualitative method, the study of surf as the main activity was an interpretative case study. The MAST was applied in four phases: taxonomy; tasks and context description; task analysis; teaching and performance strategies. Its application allowed the activities' characterization through observation, surfers' opinions and bibliographical support. Triangulation of the data was used as the information treatment. The elements were classified by the challenges proposed to the practitioners, and the taxonomy was constituted by the sport activities, group, modality and discipline. Surf is a discipline of surfing, which is a sliding sport modality and therefore a nature sport. In the context description, we had the wave's components and constraints and the surfboards' qualities. Through task analysis we obtained a taxonomy of surf manoeuvres. The structural and functional analysis allowed finding solutions for the learning of surf techniques with trampolines and skateboards, because these also fit within sliding sports. MAST makes possible the development of strategies that benefit teaching and performance intervention.
Executive Functions: Formative versus Reflective Measurement
ERIC Educational Resources Information Center
Willoughby, Michael; Holochwost, Steven J.; Blanton, Zane E.; Blair, Clancy B.
2014-01-01
The primary objective of this article was to critically evaluate the routine use of confirmatory factor analysis (CFA) for representing an individual's performance across a battery of executive function (EF) tasks. A conceptual review and statistical reanalysis of N = 10 studies that applied CFA methods to EF tasks was undertaken. Despite evidence of…
Cognitive Process Modeling of Spatial Ability: The Assembling Objects Task
ERIC Educational Resources Information Center
Ivie, Jennifer L.; Embretson, Susan E.
2010-01-01
Spatial ability tasks appear on many intelligence and aptitude tests. Although the construct validity of spatial ability tests has often been studied through traditional correlational methods, such as factor analysis, less is known about the cognitive processes involved in solving test items. This study examines the cognitive processes involved in…
ERIC Educational Resources Information Center
Wang, Chia-Yu
2015-01-01
The purpose of this study was to use multiple assessments to investigate the general versus task-specific characteristics of metacognition in dissimilar chemistry topics. This mixed-method approach investigated the nature of undergraduate general chemistry students' metacognition using four assessments: a self-report questionnaire, assessment of…
NASA Technical Reports Server (NTRS)
Smith, Greg
2003-01-01
Schedule risk assessments determine the likelihood of finishing on time. Each task in a schedule has a varying probability of being finished on time. A schedule risk assessment quantifies these probabilities by assigning values to each task. This viewgraph presentation contains a flow chart for conducting a schedule risk assessment and profiles several applicable methods of data analysis.
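One common way to turn per-task probabilities into a schedule-level likelihood is Monte Carlo sampling, sketched below with triangular task-duration distributions. The task names, estimates, and deadline are invented, and the presentation itself does not prescribe this particular method.

import random

# Monte Carlo schedule risk sketch: each task gets a (low, mode, high)
# triangular duration estimate; we estimate P(finish by deadline).
tasks = {"design": (10, 12, 20), "build": (15, 18, 30), "test": (5, 6, 12)}
deadline = 40.0
trials = 20000
hits = 0
for _ in range(trials):
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())
    hits += total <= deadline
print(f"P(on time) ~= {hits / trials:.2f}")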
Determination of awareness in patients with severe brain injury using EEG power spectral analysis
Goldfine, Andrew M.; Victor, Jonathan D.; Conte, Mary M.; Bardin, Jonathan C.; Schiff, Nicholas D.
2011-01-01
Objective To determine whether EEG spectral analysis could be used to demonstrate awareness in patients with severe brain injury. Methods We recorded EEG from healthy controls and three patients with severe brain injury, ranging from minimally conscious state (MCS) to locked-in state (LIS), while they were asked to imagine motor and spatial navigation tasks. We assessed EEG spectral differences from 4 to 24 Hz with univariate comparisons (individual frequencies) and multivariate comparisons (patterns across the frequency range). Results In controls, EEG spectral power differed at multiple frequency bands and channels during performance of both tasks compared to a resting baseline. As patterns of signal change were inconsistent between controls, we defined a positive response in patient subjects as consistent spectral changes across task performances. One patient in MCS and one in LIS showed evidence of motor imagery task performance, though with patterns of spectral change different from the controls. Conclusion EEG power spectral analysis demonstrates evidence for performance of mental imagery tasks in healthy controls and patients with severe brain injury. Significance EEG power spectral analysis can be used as a flexible bedside tool to demonstrate awareness in brain-injured patients who are otherwise unable to communicate. PMID:21514214
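A hedged sketch of the core spectral contrast: Welch power spectra for task versus rest epochs, compared across 4-24 Hz. The data are synthetic (a 10 Hz component stands in for a task-related change), and the simple difference of mean log power shown here is not the study's exact univariate or multivariate statistics.

import numpy as np
from scipy.signal import welch

fs = 250
rng = np.random.default_rng(1)
rest = rng.standard_normal((30, fs))          # 30 one-second rest epochs
task = rng.standard_normal((30, fs))
task += 0.5 * np.sin(2 * np.pi * 10 * np.arange(fs) / fs)  # synthetic 10 Hz change

f, p_rest = welch(rest, fs=fs, nperseg=fs)
f, p_task = welch(task, fs=fs, nperseg=fs)
band = (f >= 4) & (f <= 24)                   # the 4-24 Hz range of interest
diff = np.log(p_task[:, band]).mean(0) - np.log(p_rest[:, band]).mean(0)
i = int(np.argmax(diff))
print(f"largest task-vs-rest power increase at {f[band][i]:.0f} Hz ({diff[i]:+.2f} log units)")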
Model-free fMRI group analysis using FENICA.
Schöpf, V; Windischberger, C; Robinson, S; Kasess, C H; Fischmeister, F PhS; Lanzenberger, R; Albrecht, J; Kleemann, A M; Kopietz, R; Wiesmann, M; Moser, E
2011-03-01
Exploratory analysis of functional MRI data allows activation to be detected even if the time course differs from that which is expected. Independent Component Analysis (ICA) has emerged as a powerful approach, but current extensions to the analysis of group studies suffer from a number of drawbacks: they can be computationally demanding, results are dominated by technical and motion artefacts, and some methods require that time courses be the same for all subjects or that templates be defined to identify common components. We have developed a group ICA (gICA) method which is based on single-subject ICA decompositions and the assumption that the spatial distribution of signal changes in components which reflect activation is similar between subjects. This approach, which we have called Fully Exploratory Network Independent Component Analysis (FENICA), identifies group activation in two stages. ICA is performed on the single-subject level, then consistent components are identified via spatial correlation. Group activation maps are generated in a second-level GLM analysis. FENICA is applied to data from three studies employing a wide range of stimulus and presentation designs. These are an event-related motor task, a block-design cognition task and an event-related chemosensory experiment. In all cases, the group maps identified by FENICA as being the most consistent over subjects correspond to task activation. There is good agreement between FENICA results and regions identified in prior GLM-based studies. In the chemosensory task, additional regions are identified by FENICA and temporal concatenation ICA which we show are related to the stimulus but exhibit a delayed response. FENICA is a fully exploratory method that allows activation to be identified without assumptions about temporal evolution, and isolates activation from other sources of signal fluctuation in fMRI. It has the advantage over other gICA methods that it is computationally undemanding, spotlights components relating to activation rather than artefacts, allows the use of familiar statistical thresholding through deployment of a higher-level GLM analysis, and can be applied to studies where the paradigm is different for all subjects. Copyright © 2010 Elsevier Inc. All rights reserved.
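The two-stage logic (single-subject ICA, then spatial correlation of component maps across subjects) can be sketched as follows on synthetic data. This is a schematic of the idea only, not the FENICA implementation, and the second-level GLM stage is omitted.

import numpy as np
from sklearn.decomposition import FastICA

# Stage 1: per-subject ICA on synthetic "voxel x time" data that embeds
# one shared spatial map; stage 2: spatial correlation across subjects.
rng = np.random.default_rng(2)
n_vox, n_time, n_comp = 500, 120, 5
shared_map = rng.standard_normal(n_vox)           # common "activation" map
subject_maps = []
for s in range(3):
    ts = rng.standard_normal((n_time, n_vox))
    ts += np.outer(np.sin(np.arange(n_time) / 4), shared_map)  # embed signal
    ica = FastICA(n_components=n_comp, random_state=0, max_iter=500)
    ica.fit(ts)                                   # components_: (n_comp, n_vox)
    subject_maps.append(ica.components_)

# Stage 2: correlate component maps between two subjects, keep best pair
a, b = subject_maps[0], subject_maps[1]
corr = np.corrcoef(np.vstack([a, b]))[:n_comp, n_comp:]
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"most consistent pair: subj0 comp {i} vs subj1 comp {j}, |r|={abs(corr[i, j]):.2f}")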
Duncan, James R; Kline, Benjamin; Glaiberman, Craig B
2007-04-01
To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.
Yoon, Jong H.; Tamir, Diana; Minzenberg, Michael J.; Ragland, J. Daniel; Ursu, Stefan; Carter, Cameron S.
2009-01-01
Background Multivariate pattern analysis is an alternative method of analyzing fMRI data, which is capable of decoding distributed neural representations. We applied this method to test the hypothesis of impaired distributed representations in schizophrenia. We also compared the results of this method with traditional GLM-based univariate analysis. Methods 19 schizophrenia patients and 15 control subjects viewed two runs of stimuli: exemplars of faces, scenes, objects, and scrambled images. To verify engagement with the stimuli, subjects completed a 1-back matching task. A multi-voxel pattern classifier was trained to identify category-specific activity patterns on one run of fMRI data. Classification testing was conducted on the remaining run. Correlation of voxel-wise activity across runs evaluated variance over time in activity patterns. Results Patients performed the task less accurately. This group difference was reflected in the pattern analysis results, with diminished classification accuracy in patients compared to controls (59% and 72%, respectively). In contrast, there was no group difference in GLM-based univariate measures. In both groups, classification accuracy was significantly correlated with behavioral measures. Both groups showed a highly significant correlation between inter-run correlations and classification accuracy. Conclusions Distributed representations of visual objects are impaired in schizophrenia. This impairment is correlated with diminished task performance, suggesting that decreased integrity of cortical activity patterns is reflected in impaired behavior. Comparisons with univariate results suggest greater sensitivity of pattern analysis in detecting group differences in neural activity and a reduced likelihood of non-specific factors driving these results. PMID:18822407
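A minimal sketch of the cross-run pattern-classification procedure: train a multi-voxel classifier on one run and test it on the other. The voxel patterns are simulated, and a linear SVM stands in for whatever classifier was actually used in the study.

import numpy as np
from sklearn.svm import LinearSVC

# Simulated multi-voxel patterns for four stimulus categories, two runs.
rng = np.random.default_rng(3)
n_vox, cats = 200, ["face", "scene", "object", "scrambled"]
prototypes = {c: rng.standard_normal(n_vox) for c in cats}

def make_run(noise):
    X = np.array([prototypes[c] + noise * rng.standard_normal(n_vox)
                  for c in cats for _ in range(20)])
    y = np.array([c for c in cats for _ in range(20)])
    return X, y

X1, y1 = make_run(noise=1.0)   # run 1: training
X2, y2 = make_run(noise=1.0)   # run 2: held-out testing
clf = LinearSVC().fit(X1, y1)
print(f"cross-run classification accuracy: {clf.score(X2, y2):.2f}")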
Fine-grained recognition of plants from images.
Šulc, Milan; Matas, Jiří
2017-01-01
Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks, evaluate them, and compare them to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs make them suitable for plant recognition "in the wild", where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
Zupanc, Christine M; Burgess-Limerick, Robin; Hill, Andrew; Riek, Stephan; Wallis, Guy M; Plooy, Annaliese M; Horswill, Mark S; Watson, Marcus O; Hewett, David G
2015-12-01
Colonoscopy is a difficult cognitive-perceptual-motor task. Designing an appropriate instructional program for such a task requires an understanding of the knowledge, skills and attitudes underpinning the competency required to perform the task. Cognitive task analysis techniques provide an empirical means of deriving this information. Video recording and a think-aloud protocol were conducted while 20 experienced endoscopists performed colonoscopy procedures. "Cued-recall" interviews were also carried out post-procedure with nine of the endoscopists. Analysis of the resulting transcripts employed the constant comparative coding method within a grounded theory framework. The resulting draft competency framework was modified after review during semi-structured interviews conducted with six expert endoscopists. The proposed colonoscopy competency framework consists of twenty-seven skill, knowledge and attitude components, grouped into six categories (clinical knowledge; colonoscope handling; situation awareness; heuristics and strategies; clinical reasoning; and intra- and inter-personal). The colonoscopy competency framework provides a principled basis for the design of a training program, and for the design of formative assessment to gauge progress towards attaining the knowledge, skills and attitudes underpinning the achievement of colonoscopy competence.
Radüntz, Thea
2017-01-01
One goal of advanced information and communication technology is to simplify work. However, there is growing consensus regarding the negative consequences of inappropriate workload for employees' health and the safety of persons. In order to develop a method for continuous mental workload monitoring, we implemented a task battery consisting of cognitive tasks with diverse levels of complexity and difficulty. We conducted experiments and registered the electroencephalogram (EEG), performance data, and the NASA-TLX questionnaire from 54 people. Analysis of the EEG spectra demonstrates an increase of the frontal theta band power and a decrease of the parietal alpha band power, both under increasing task difficulty. Based on these findings we implemented a new method for monitoring mental workload, the so-called Dual Frequency Head Maps (DFHM), which are classified by support vector machines (SVMs) into three different workload levels. The results are in accordance with the expected difficulty levels arising from the demands the tasks place on the executive functions. Furthermore, this article includes an empirical validation of the new method on a secondary subset with new subjects and one additional new task, without any adjustment of the classifiers. Hence, the main advantage of the proposed method compared with existing solutions is that it provides an automatic, continuous classification of the mental workload state without any need to retrain the classifier, neither for new subjects nor for new tasks. Continuous workload monitoring can help ensure good working conditions, maintain a good level of performance, and simultaneously preserve a good state of health. PMID:29276490
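At their simplest, the ingredients named above (frontal theta power, parietal alpha power, an SVM) combine as in this hedged sketch on synthetic signals. The actual DFHM method builds head maps over many electrodes; the two scalar band powers used here are a stand-in.

import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

# Sketch: frontal theta and parietal alpha band powers as SVM features
# for three workload levels. All signals are synthetic.
fs = 256
rng = np.random.default_rng(4)

def band_power(x, lo, hi):
    f, p = welch(x, fs=fs, nperseg=fs)
    return p[(f >= lo) & (f <= hi)].mean()

def features(difficulty):
    t = np.arange(fs * 2) / fs
    frontal = rng.standard_normal(fs * 2) + difficulty * np.sin(2 * np.pi * 6 * t)
    parietal = rng.standard_normal(fs * 2) + (2 - difficulty) * np.sin(2 * np.pi * 10 * t)
    return [band_power(frontal, 4, 8), band_power(parietal, 8, 13)]

X = [features(d) for d in (0, 1, 2) for _ in range(30)]
y = [d for d in (0, 1, 2) for _ in range(30)]
print("training accuracy:", SVC().fit(X, y).score(X, y))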
Exploring Operational Test and Evaluation of Unmanned Aircraft Systems: A Qualitative Case Study
NASA Astrophysics Data System (ADS)
Saliceti, Jose A.
The purpose of this qualitative case study was to explore and identify strategies that may potentially remedy operational test and evaluation procedures used to evaluate Unmanned Aircraft Systems (UAS) technology. The sample for analysis consisted of organizations testing and evaluating UASs (e.g., U.S. Air Force, U.S. Navy, U.S. Army, U.S. Marine Corps, U.S. Coast Guard, and Customs and Border Protection). A purposeful sampling technique was used to select 15 subject matter experts in the field of operational test and evaluation of UASs. A questionnaire was provided to participants to support a descriptive and robust inquiry. Analysis of responses revealed themes related to each research question. Findings revealed that operational testers utilized requirements documents to extrapolate measures for testing UAS technology and develop critical operational issues. The requirements documents were (a) developed without the contribution of stakeholders and operational testers, (b) developed with vague or unrealistic measures, and (c) developed without a systematic method to derive requirements from mission tasks. Four approaches are recommended to develop testable operational requirements and assist operational testers: (a) use a mission task analysis tool to derive requirements for mission essential tasks for the system, (b) exercise collaboration among stakeholders and testers to ensure testable operational requirements based on mission tasks, (c) ensure testable measures are used in requirements documents, and (d) create a repository list of critical operational issues by mission areas. The preparation of operational test and evaluation processes for UAS technology is not uniform across testers. The processes in place are not standardized, thus test plan preparation and reporting differ among participants. A standard method should be used when preparing and reporting on UAS technology tests. Using a systematic process, such as mission-based test design, resonated among participants as an analytical method to link UAS mission tasks and measures of performance to the capabilities of the system under test when developing operational test plans. Further research should examine system engineering designs for a system requirements traceability matrix of mission tasks and subtasks while using an analysis tool that adequately evaluates UASs with an acceptable level of confidence in the results.
Occhipinti, E; Colombini, Daniela; Occhipinti, M
2008-01-01
In the Ocra methods (Ocra index and Ocra Checklist), when computing the final indices (Ocra index or checklist score) in the case of more than one repetitive task, a "traditional" procedure was already proposed, the results of which can be described as a "time-weighted average". This approach is appropriate for rotations among tasks that are performed very frequently, for instance almost once every hour or more often. However, when rotation among repetitive tasks is less frequent (i.e., once every 1.5 hours or more), the time-weighted-average approach may underestimate the exposure level, as it practically flattens peaks of high exposure. For those scenarios an alternative approach based on the "most stressful task as minimum" may be more realistic. This latter approach has already been included in the NIOSH approach for multiple sequential lifting tasks and, given the recent availability in the Ocra method of more detailed duration multipliers (practically a different DuM for each one-hour step of duration of the repetitive task), it is now possible to define a procedure to compute the complex Ocra Multitask Index (cOCRA) and the complex Checklist Score (cCHESCO) for the analysis of two or more repetitive tasks when rotations are infrequent (every 1.5 hours or more). The result of this approach will be at least equal to the index of the most stressful task considered for its individual daily duration, and at most equal to the index of the most stressful task when it is (only theoretically) considered as lasting for the overall daily duration of all examined repetitive tasks. The procedure is based on the following formula: cOCRA = ocra1(Dum1) + Δocra1 × K, where tasks 1, 2, ..., N are the repetitive tasks ordered by decreasing Ocra index value, each index being computed with the task's real duration multiplier Dum(i); ocra1(Dum1) is the Ocra index of the most stressful task computed with its real duration multiplier Dum1; Dum(tot) is the duration multiplier corresponding to the total duration of all repetitive tasks; Δocra1 = ocra1(Dum(tot)) - ocra1(Dum1); and K = (ocra1max × FT1 + ocra2max × FT2 + ... + ocraNmax × FTN) / ocra1max, where ocra(i)max is the Ocra index of task i computed with Dum(tot) and FT(i) is the fraction of time (a value from 0 to 1) of task i with respect to the total repetitive time.
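A hedged sketch of that computation, following the reconstruction above; the input values are invented, and readers should verify the formula against the published Ocra documentation before any real use.

# Sketch of the complex OCRA multitask index (cOCRA) as reconstructed above.
# ocra_real[i]: OCRA index of task i with its real duration multiplier.
# ocra_max[i]:  OCRA index of task i computed as if it lasted the total
#               repetitive time (Dum_tot). ft[i]: fraction of time of task i.
def complex_ocra(ocra_real, ocra_max, ft):
    order = sorted(range(len(ocra_real)), key=lambda i: -ocra_real[i])
    i1 = order[0]                                   # most stressful task
    k = sum(ocra_max[i] * ft[i] for i in order) / ocra_max[i1]
    delta = ocra_max[i1] - ocra_real[i1]
    return ocra_real[i1] + delta * k

# Illustrative values only: result lies between 3.2 (task 1 alone) and 5.0
# (task 1 hypothetically lasting the whole repetitive time).
print(complex_ocra(ocra_real=[3.2, 2.1], ocra_max=[5.0, 3.5], ft=[0.6, 0.4]))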
Deep Learning for Classification of Colorectal Polyps on Whole-slide Images
Korbar, Bruno; Olofson, Andrea M.; Miraflor, Allen P.; Nicka, Catherine M.; Suriawinata, Matthew A.; Torresani, Lorenzo; Suriawinata, Arief A.; Hassanpour, Saeed
2017-01-01
Context: Histopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability. Aims: We built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis. Setting and Design: Our method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. Subjects and Methods: Our method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multisociety Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 crop images, which were annotated by multiple domain expert pathologists as reference standards. Statistical Analysis: We evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score and their 95% confidence intervals. Results: Our evaluation shows that our method with residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%–95.9%). Conclusions: Our method can reduce the cognitive burden on pathologists and improve their efficacy in histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations. PMID:28828201
Unsupervised discovery of information structure in biomedical documents.
Kiela, Douwe; Guo, Yufan; Stenius, Ulla; Korhonen, Anna
2015-04-01
Information structure (IS) analysis is a text mining technique, which classifies text in biomedical articles into categories that capture different types of information, such as objectives, methods, results and conclusions of research. It is a highly useful technique that can support a range of Biomedical Text Mining tasks and can help readers of biomedical literature find information of interest faster, accelerating the highly time-consuming process of literature review. Several approaches to IS analysis have been presented in the past, with promising results in real-world biomedical tasks. However, all existing approaches, even weakly supervised ones, require several hundreds of hand-annotated training sentences specific to the domain in question. Because biomedicine is subject to considerable domain variation, such annotations are expensive to obtain. This makes the application of IS analysis across biomedical domains difficult. In this article, we investigate an unsupervised approach to IS analysis and evaluate the performance of several unsupervised methods on a large corpus of biomedical abstracts collected from PubMed. Our best unsupervised algorithm (multilevel-weighted graph clustering algorithm) performs very well on the task, obtaining over 0.70 F scores for most IS categories when applied to well-known IS schemes. This level of performance is close to that of lightly supervised IS methods and has proven sufficient to aid a range of practical tasks. Thus, using an unsupervised approach, IS could be applied to support a wide range of tasks across sub-domains of biomedicine. We also demonstrate that unsupervised learning brings novel insights into IS of biomedical literature and discovers information categories that are not present in any of the existing IS schemes. The annotated corpus and software are available at http://www.cl.cam.ac.uk/∼dk427/bio14info.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Task 2 Report: Algorithm Development and Performance Analysis
1993-07-01
[Snippet residue from the report's list of figures; recoverable captions: Fig. 7-16, example ILGC data for schedule 3 phosphites showing an analysis method which integrates separated peaks; Fig. 7-18, example R.GC data for schedule 3 phosphites showing an analysis method resulting in unwanted integration; a further caption fragment reads "more closely follows the baseline".] ...much of the ambiguity that can arise in GC/MS with trace environmental samples, for example. Correlated chromatography, on the other hand, separates the
Task Decomposition in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald Laurids; Joe, Jeffrey Clark
2014-06-01
In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.
Chenxi, Li; Chen, Yanni; Li, Youjun; Wang, Jue; Liu, Tian
2016-06-01
The multiscale entropy (MSE) is a novel method for quantifying the intrinsic dynamical complexity of physiological systems over several scales. To evaluate this method as a promising way to explore the neural mechanisms in ADHD, we calculated the MSE of EEG activity during a designed task. EEG data were collected from 13 outpatient boys with a confirmed diagnosis of ADHD and 13 age- and gender-matched normal control children while they performed the multi-source interference task (MSIT). We estimated the MSE by calculating the sample entropy values of the delta, theta, alpha and beta frequency bands over twenty time scales using a coarse-graining procedure. The results showed increased complexity of EEG data in the delta and theta frequency bands and decreased complexity in the alpha frequency band in ADHD children. The findings of this study reveal aberrant neural connectivity in children with ADHD during an interference task. The results suggest that the MSE method may be a new index to identify and understand the neural mechanism of ADHD. Copyright © 2016 Elsevier Inc. All rights reserved.
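The coarse-graining plus sample-entropy pipeline can be sketched compactly. The snippet below is an illustrative implementation (m = 2, r = 0.15·SD, white-noise input) rather than the exact parameterization used in the study.

import numpy as np

# Minimal multiscale-entropy sketch: coarse-grain the series at each scale,
# then compute sample entropy on the coarse-grained series.
def sample_entropy(x, m=2, r=None):
    r = 0.15 * x.std() if r is None else r
    def matched_pairs(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(-1)
        n = len(templ)
        return ((d <= r).sum() - n) / 2          # unordered pairs, excl. self
    b, a = matched_pairs(m), matched_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def mse(x, scales=20):
    out = []
    for s in range(1, scales + 1):
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)   # coarse-graining
        out.append(sample_entropy(coarse))
    return out

sig = np.random.default_rng(5).standard_normal(1000)
print(np.round(mse(sig, scales=5), 2))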
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strunk, W.D.
1987-01-01
Personnel at the Oak Ridge National Laboratory were tasked by the US Navy to assist in establishing a maintenance monitoring program for machinery aboard surface ships. Given the number of surface ships, the variety of locations in which they operate, the different types of equipment (rotating and reciprocating, as well as instrumentation), and the different procedures which control the operation and maintenance of a ship, it can be seen, apart from the logistics of organizing such a monitoring program, that the technical issues are as varied and numerous as the ships themselves. Unique methods and procedures have been developed to perform the tasks required on a large scale. Among the specific tasks and technical issues addressed were the development and installation of a data collection and communication instrumentation system for each port, the qualification of measurement methodologies and techniques, the establishment of computer data bases, the evaluation of the instrumentation used, training of civilian and military personnel, development of machinery condition assessment aids using machine design and modal analysis information, and development of computer displays. After these tasks were completed and the appropriate resolution integrated into the program, the final task was the development of a method to continually evaluate the effectiveness of the program, using actual maintenance records.
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.
2011-01-01
Objectives To determine 1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional indices and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA) measures, were determined. Results Time domain (especially the mean R-R interval, RRI), frequency domain and, among nonlinear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré indices were also the most sensitive to different mental effort task loads and had the largest effect sizes. Conclusions Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665
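Two of the indices named above can be computed in a few lines. The sketch below derives the mean RR interval and the Poincaré descriptors SD1/SD2 from an invented RR series, using the standard identities SD1² = SDSD²/2 and SD2² = 2·SDRR² − SDSD²/2.

import numpy as np

# Poincaré descriptors from successive RR-interval differences (toy data).
rr = np.array([820, 810, 835, 790, 805, 825, 815, 800], dtype=float)  # ms
diff = np.diff(rr)
sd1 = np.sqrt(np.var(diff, ddof=1) / 2)                       # short-term variability
sd2 = np.sqrt(2 * np.var(rr, ddof=1) - np.var(diff, ddof=1) / 2)  # long-term variability
print(f"mean RR = {rr.mean():.0f} ms, SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms")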
Cognitive task analysis and innovation of training: the case of structured troubleshooting.
Schaafstal, A; Schraagen, J M; van Berlo, M
2000-01-01
Troubleshooting is often a time-consuming and difficult activity. The question of how the training of novice technicians can be improved was the starting point of the research described in this article. A cognitive task analysis was carried out consisting of two preliminary observational studies on troubleshooting in naturalistic settings, combined with an interpretation of the data obtained in the context of the existing literature. On the basis of this cognitive task analysis, a new method for the training of troubleshooting was developed (structured troubleshooting), which combines a domain-independent strategy for troubleshooting with a context-dependent, multiple-level, functional decomposition of systems. This method has been systematically evaluated for its use in training. The results show that technicians trained in structured troubleshooting solve twice as many malfunctions, in less time, than those trained in the traditional way. Moreover, structured troubleshooting can be taught in less time than can traditional troubleshooting. Finally, technicians learn to troubleshoot in an explicit and uniform way. These advantages of structured troubleshooting ultimately lead to a reduction in training and troubleshooting costs.
Analysis of tasks for dynamic man/machine load balancing in advanced helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorgensen, C.C.
1987-10-01
This report considers task allocation requirements imposed by advanced helicopter designs incorporating mixes of human pilots and intelligent machines. Specifically, it develops an analogy between load balancing using distributed non-homogeneous multiprocessors and human team functions. A taxonomy is presented which can be used to identify task combinations likely to cause overload for dynamic scheduling and process allocation mechanisms. Designer criteria are given for function decomposition, separation of control from data, and communication handling for dynamic tasks. Possible effects of NP-complete scheduling problems are noted, and a class of combinatorial optimization methods is examined.
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
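For readers unfamiliar with SPAR-H, the quantification step follows a simple pattern: a nominal error probability scaled by performance-shaping-factor (PSF) multipliers, with a composite correction when several PSFs are poor. The sketch below illustrates that pattern with invented multiplier values; consult the SPAR-H documentation (NUREG/CR-6883) for the authoritative factors.

# Hedged sketch of the SPAR-H calculation pattern (illustrative values).
def spar_h_hep(nominal, psf_multipliers):
    product = 1.0
    negatives = 0
    for m in psf_multipliers:
        product *= m
        negatives += m > 1.0          # multiplier > 1 means a degrading PSF
    if negatives >= 3:                 # composite correction for multiple poor PSFs
        return nominal * product / (nominal * (product - 1.0) + 1.0)
    return min(1.0, nominal * product)

# e.g. an action task (nominal 0.001) with high stress and poor ergonomics
print(spar_h_hep(0.001, [2.0, 10.0]))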
ERIC Educational Resources Information Center
Han, Hyemin
2017-01-01
The present study meta-analyzed 45 experiments with 959 subjects and 463 activation foci reported in 43 published articles that investigated the neural mechanism of moral functions by comparing neural activity between the moral task conditions and non-moral task conditions with the Activation Likelihood Estimation method. The present study…
Bliksted, Vibeke; Ubukata, Shiho; Koelkebeck, Katja
2016-03-01
In recent years, theories of how humans form a "theory of mind" of others ("mentalizing") have increasingly been called upon to explain impairments in social interaction in mental disorders, such as autism spectrum disorders (ASD) and schizophrenia. However, it remains unclear whether tasks that assess impairments in mentalizing can also contribute to determining differential deficits across disorders, which may be important for early identification and treatment. Paradigms that challenge mentalizing abilities in an on-line, real-life fashion have been considered helpful in detecting disease-specific deficits. In this review, we are therefore summarizing results of studies that assess the attribution of mental states using an animated triangles task. Behavioral as well as brain imaging studies in ASD and schizophrenia have been taken into account. While for neuroimaging methods, data are sparse and investigation methods inconsistent, we performed a meta-analysis of behavioral data to directly investigate performance deficits across disorders. Here, more impaired abilities in the appropriate description of interactions were found in ASD patients than in patients with schizophrenia. Moreover, an analysis of first-episode (FES) versus longer lasting (LLS) schizophrenia showed that usage of mental state terms was reduced in the LLS group. In our review and meta-analysis, we identified performance differences between ASD and schizophrenia that seem helpful in targeting differential deficits, taking into account different stages of schizophrenia. However, to tackle the deficits in more detail, studies are needed that directly compare patients with ASD and schizophrenia using behavioral or neuroimaging methods with more standardized task versions. Copyright © 2016 Elsevier B.V. All rights reserved.
Hauber, A Brett; González, Juan Marcos; Groothuis-Oudshoorn, Catharina G M; Prior, Thomas; Marshall, Deborah A; Cunningham, Charles; IJzerman, Maarten J; Bridges, John F P
2016-06-01
Conjoint analysis is a stated-preference survey method that can be used to elicit responses that reveal preferences, priorities, and the relative importance of individual features associated with health care interventions or services. Conjoint analysis methods, particularly discrete choice experiments (DCEs), have been increasingly used to quantify preferences of patients, caregivers, physicians, and other stakeholders. Recent consensus-based guidance on good research practices, including two recent task force reports from the International Society for Pharmacoeconomics and Outcomes Research, has aided in improving the quality of conjoint analyses and DCEs in outcomes research. Nevertheless, uncertainty regarding good research practices for the statistical analysis of data from DCEs persists. There are multiple methods for analyzing DCE data. Understanding the characteristics and appropriate use of different analysis methods is critical to conducting a well-designed DCE study. This report will assist researchers in evaluating and selecting among alternative approaches to conducting statistical analysis of DCE data. We first present a simplistic DCE example and a simple method for using the resulting data. We then present a pedagogical example of a DCE and one of the most common approaches to analyzing data from such a question format: conditional logit. We then describe some common alternative methods for analyzing these data and the strengths and weaknesses of each alternative. We present the ESTIMATE checklist, which includes a list of questions to consider when justifying the choice of analysis method, describing the analysis, and interpreting the results. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
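To make the conditional-logit step concrete, here is a hedged sketch that simulates discrete choices from a random-utility model and recovers the part-worths by maximum likelihood. The attributes, sample sizes, and true coefficients are invented and not tied to the report's examples.

import numpy as np
from scipy.optimize import minimize

# Conditional logit for DCE data: each choice set has J alternatives with
# attribute matrix X; utility U = X @ beta + Gumbel error. Simulated data.
rng = np.random.default_rng(6)
n_sets, J, K = 500, 3, 2
X = rng.standard_normal((n_sets, J, K))           # attribute levels
beta_true = np.array([1.0, -0.5])
u = X @ beta_true + rng.gumbel(size=(n_sets, J))  # random-utility draws
y = u.argmax(axis=1)                              # chosen alternative per set

def neg_loglik(beta):
    v = X @ beta                                  # deterministic utilities
    return -(v[np.arange(n_sets), y] - np.log(np.exp(v).sum(axis=1))).sum()

fit = minimize(neg_loglik, np.zeros(K), method="BFGS")
print("estimated part-worths:", fit.x.round(2))   # should approach beta_true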
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, J.A.; Clauss, S.A.; Grant, K.E.
The objectives of this task are to develop and document extraction and analysis methods for organics in waste tanks, and to extend these methods to the analysis of actual core samples to support the Waste Tank Organic Safety Program. This report documents progress at Pacific Northwest Laboratory during FY 1994 on methods development, the analysis of waste from Tank 241-C-103 (Tank C-103) and Tank T-111, and the transfer of documented, developed analytical methods to personnel in the Analytical Chemistry Laboratory (ACL) and the 222-S laboratory. This report is intended as an annual report, not a completed work.
A Multi-Methods Approach to HRA and Human Performance Modeling: A Field Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques Hugo; David I Gertman
2012-06-01
The Advanced Test Reactor (ATR), a research reactor at the Idaho National Laboratory, is primarily designed and used to test materials to be used in other, larger-scale and prototype reactors. The reactor offers various specialized systems and allows certain experiments to be run at their own temperature and pressure. The ATR Canal temporarily stores completed experiments and used fuel. It also has facilities to conduct underwater operations such as experiment examination or removal. In reviewing the ATR safety basis, a number of concerns were identified involving the ATR canal. A brief study identified ergonomic issues involving the manual handling of fuel elements in the canal that may increase the probability of human error and possible unwanted acute physical outcomes to the operator. In response to this concern, an analysis was conducted that refined the previous HRA scoping analysis by determining the probability of inadvertent exposure of a fuel element to the air during fuel movement and inspection. The HRA analysis employed the SPAR-H method and was supplemented by information gained from a detailed analysis of the fuel inspection and transfer tasks. This latter analysis included ergonomics, work cycles, task duration, and the workload imposed by tool and workplace characteristics, personal protective clothing, and operational practices that have the potential to increase physical and mental workload. Part of this analysis consisted of NASA-TLX analyses, combined with operational sequence analysis, computational human performance analysis (CHPA), and 3D graphical modeling to determine task failures and precursors to such failures that have safety implications. Experience in applying multiple analysis techniques in support of HRA methods is discussed.
Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay
2013-01-01
Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
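The anthropometric-correction strategy lends itself to a simple regression sketch: predict motion-analysis trunk flexion from inclinometer inclination plus a subject covariate. Everything below (coefficients, noise, the simulated use of lower arm length) is invented for illustration; the reported 47% and 79% figures come from the study's own data, not from this sketch.

import numpy as np

# Toy regression: motion-analysis flexion ~ inclinometer angle + anthropometry.
rng = np.random.default_rng(7)
n = 36
inclin = rng.uniform(0, 60, n)                 # inclinometer angle (deg)
arm = rng.normal(27, 2, n)                     # "lower arm length" (cm), simulated
mocap = 0.8 * inclin + 1.5 * (arm - 27) + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), inclin, arm])
coef, *_ = np.linalg.lstsq(X, mocap, rcond=None)
pred = X @ coef
r2 = 1 - ((mocap - pred) ** 2).sum() / ((mocap - mocap.mean()) ** 2).sum()
print(f"R^2 with anthropometric term: {r2:.2f}")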
Opening the Black Box: Cognitive Strategies in Family Practice
Christensen, Robert E.; Fetters, Michael D.; Green, Lee A.
2005-01-01
PURPOSE We wanted to describe the cognitive strategies used by family physicians when structuring the decision-making tasks of an outpatient visit. METHODS This qualitative study used cognitive task analysis, a structured interview method in which a trained interviewer works individually with expert decision makers to capture their stages and elements of information processing. RESULTS Eighteen family physicians of varying levels of experience participated. Three dominant themes emerged: time pressure, a high degree of variation in task structuring, and varying degrees of task automatization. Based on these data and previous research from the cognitive sciences, we developed a model of novice and expert approaches to decision making in primary care. The model illustrates differences in responses to unexpected opportunity in practice, particularly the expert’s use of attentional surplus (reserve capacity to handle problems) vs the novice’s choice between taking more time or displacing another task. CONCLUSIONS Family physicians have specific, highly individualized cognitive task-structuring approaches and show the decision behavior features typical of expert decision makers in other fields. This finding places constraints on and suggests useful approaches for improving practice. PMID:15798041
Amanpour, Behzad; Erfanian, Abbas
2013-01-01
An important issue in designing a practical brain-computer interface (BCI) is the selection of the mental tasks to be imagined. Different types of mental tasks have been used in BCI, including left-hand, right-hand, foot, and tongue motor imagery. However, these mental tasks differ from the actions to be controlled by the BCI. It is desirable to select a mental task that is consistent with the desired action to be performed by the BCI. In this paper, we investigated detecting the imagination of hand grasping, hand opening, and hand reaching in one hand using electroencephalographic (EEG) signals. The results show that the ERD/ERS patterns associated with the imagination of hand grasping, opening, and reaching are different. For feature extraction and classification of the brain signals associated with these mental tasks, a method based on wavelet packets, regularized common spatial patterns (CSP), and mutual information is proposed. The results of an offline analysis on five subjects show that the two-class mental tasks can be classified with an average accuracy of 77.6% using the proposed method. In addition, we examine the proposed method on datasets IVa from BCI Competition III and IIa from BCI Competition IV.
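Of the three ingredients named (wavelet packets, regularized CSP, mutual information), the CSP step is the most self-contained. The sketch below computes plain, unregularized CSP filters from two classes of synthetic EEG trials via a generalized eigendecomposition; it illustrates the technique only and is not the authors' pipeline.

import numpy as np
from scipy.linalg import eigh

# CSP sketch: generalized eigendecomposition of class covariance matrices.
rng = np.random.default_rng(8)
def trials(scale):                       # 40 trials x 8 channels x 250 samples
    x = rng.standard_normal((40, 8, 250))
    x[:, 0] *= scale                     # class-dependent variance on channel 0
    return x

def cov_mean(x):
    c = np.einsum('tcs,tds->tcd', x, x) / x.shape[-1]
    return c.mean(axis=0)

c1, c2 = cov_mean(trials(2.0)), cov_mean(trials(1.0))
vals, vecs = eigh(c1, c1 + c2)           # solves c1 v = lambda (c1 + c2) v
W = vecs[:, [0, -1]].T                   # extreme eigenvectors discriminate best
feat = np.log(np.var(W @ trials(2.0)[0], axis=1))   # log-variance features
print("CSP log-variance features for one trial:", feat.round(2))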
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_{i,j}(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis: a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than that of imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and because imaging task III did not have a strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and the shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.
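Since the optimization is driven by an experimentally measured d' map rather than a closed-form objective, the selection step reduces to an argmax over the sampled grid. The sketch below uses placeholder d' values and hypothetical parameter axes.

```python
# Pick the operating point P* maximizing a measured detectability map
# d'_{i,j}(P) for one chosen task i and location j. Values are placeholders.
import numpy as np

p1_grid = np.linspace(0.0, 1.0, 21)    # e.g., first filter parameter
p2_grid = np.linspace(0.0, 1.0, 21)    # e.g., second filter parameter
rng = np.random.default_rng(2)
d_prime = rng.random((len(p1_grid), len(p2_grid)))  # measured map (task i, ROI j)

k1, k2 = np.unravel_index(np.argmax(d_prime), d_prime.shape)
print(f"optimal operating point: P* = ({p1_grid[k1]:.2f}, {p2_grid[k2]:.2f}), "
      f"d' = {d_prime[k1, k2]:.3f}")
```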
Time series modeling of human operator dynamics in manual control tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
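As a hedged sketch of the general idea (not the paper's exact multi-channel algorithm), the snippet below identifies a discrete ARX model of a simulated operator from a short input/output record by ordinary least squares.

```python
# Fit an ARX model y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
# to a short record of a simulated "operator" by least squares.
import numpy as np

rng = np.random.default_rng(3)
N = 500
u = rng.normal(size=N)                       # displayed error signal
y = np.zeros(N)
for k in range(2, N):                        # simulated operator dynamics
    y[k] = 0.5 * y[k-1] - 0.2 * y[k-2] + 0.8 * u[k-1] + 0.05 * rng.normal()

na, nb = 2, 2                                # assumed ARX orders
rows = [np.r_[-y[k-na:k][::-1], u[k-nb:k][::-1]] for k in range(max(na, nb), N)]
Phi, target = np.array(rows), y[max(na, nb):]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print("estimated [a1 a2 b1 b2]:", np.round(theta, 3))
```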
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Chung Wong, Pak
Effectively visualizing large graphs and capturing the statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
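A minimal sketch (not the authors' benchmark) of the three sampling families named above, applied to a scale-free graph and compared on one statistic; networkx and a 20% sampling rate are illustrative choices.

```python
# Node, edge, and traversal-based (random-walk) sampling on a scale-free
# graph, compared on mean degree of the sampled subgraph.
import random
import networkx as nx

def node_sample(G, frac):
    nodes = random.sample(list(G.nodes), int(frac * G.number_of_nodes()))
    return G.subgraph(nodes)

def edge_sample(G, frac):
    edges = random.sample(list(G.edges), int(frac * G.number_of_edges()))
    return G.edge_subgraph(edges)

def random_walk_sample(G, frac):
    target = int(frac * G.number_of_nodes())
    node = random.choice(list(G.nodes))
    visited = {node}
    while len(visited) < target:
        node = random.choice(list(G.neighbors(node)))
        visited.add(node)
    return G.subgraph(visited)

random.seed(4)
G = nx.barabasi_albert_graph(1000, 3)        # scale-free model
for name, f in [("node", node_sample), ("edge", edge_sample),
                ("walk", random_walk_sample)]:
    degs = [d for _, d in f(G, 0.2).degree()]
    print(name, "sample: mean degree =", round(sum(degs) / len(degs), 2))
```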
Walker, Judith; von Bergmann, HsingChi
2015-03-01
The purpose of this study was to explore the use of cognitive task analysis to inform the teaching of psychomotor skills and cognitive strategies in clinical tasks in dental education. Methods used were observing and videotaping an expert at one dental school thinking aloud while performing a specific preclinical task (in a simulated environment), interviewing the expert to probe deeper into his thinking processes, and applying the same procedures to analyze the performance of three second-year dental students who had recently learned the analyzed task and who represented a spectrum of their cohort's ability to undertake the procedure. The investigators sought to understand how experts (clinical educators) and intermediates (trained students) overlapped and differed at points in the procedure that represented the highest cognitive load, known as "critical incidents." Findings from this study and previous research identified possible limitations of current clinical teaching as a result of expert blind spots. These findings coupled with the growing evidence of the effectiveness of peer teaching suggest the potential role of intermediates in helping novices learn preclinical dentistry tasks.
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-01-01
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates the allocation, processing, and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for the accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were collected and analyzed in a detailed analysis report. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs. PMID:27589753
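The abstract does not spell out the TPR internals, so the following is only a hedged sketch of a two-phase regression idea: fit the late phase of a task's progress samples separately and extrapolate the finishing time. The breakpoint and features are assumptions, not the paper's design.

```python
# Fit the late phase of a task's progress-vs-time samples with a linear model
# and extrapolate when progress reaches 100%. Synthetic two-phase data.
import numpy as np

t = np.arange(0, 60, 2.0)                     # seconds since task start
progress = np.where(t < 20, 0.8 * t, 16 + 1.6 * (t - 20))  # % complete
progress = progress + np.random.default_rng(5).normal(0, 0.5, t.size)

split = t < 20                                # assumed phase breakpoint
slope2, intercept2 = np.polyfit(t[~split], progress[~split], 1)
finish_est = (100 - intercept2) / slope2      # time at which progress hits 100%
print(f"late-phase rate {slope2:.2f} %/s, estimated finish at {finish_est:.1f} s")
```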
Effects of method and format on subjects' responses to a control of variables reasoning problem
NASA Astrophysics Data System (ADS)
Staver, John R.
Excessive time and training demands have rendered Piaget's clinical method of reasoning assessment impractical for researchers and science teachers who work with large numbers of students. The published literature [see: Lawson, A. E. Journal of Research in Science Teaching, 1978, 15(1), 11-24; Shayer, M., Adey, P., & Wylam, H. Journal of Research in Science Teaching, 1981, 18(2), 157-168; Staver, J. R., & Gabel, D. L. Journal of Research in Science Teaching, 1979, 16(6), 534-544; Tobin, K. G., & Capie, W. Educational and Psychological Measurement, 1981, 41(2), 413-424] indicates that reliable, valid alternatives to clinical assessment are feasible. However, the overestimate/underestimate of reasoning for different methods and formats remains unresolved through research. The objective of this study was to determine the effects of various methods and formats on subjects' responses to a Piagetian reasoning problem requiring control of variables. The task chosen for this investigation was the Mealworm problem.
In-Depth Analysis of the JACK Model.
DOT National Transportation Integrated Search
2009-04-30
Recently, as part of a comprehensive analysis of budget and funding options, a TxDOT special task force has examined the agency's current financial forecasting methods and has developed a model designed to estimate future State Highway Fund rev...
Relationships between Contextual and Task Performance and Interrater Agreement: Are There Any?
Díaz-Vilela, Luis F; Delgado Rodríguez, Naira; Isla-Díaz, Rosa; Díaz-Cabrera, Dolores; Hernández-Fernaud, Estefanía; Rosales-Sánchez, Christian
2015-01-01
Work performance is one of the most important dependent variables in Work and Organizational Psychology. The main objective of this paper was to explore the relationships between citizenship performance and task performance measures obtained from different appraisers, and their consistency, through a seldom-used methodology: intraclass correlation coefficients. Participants were 135 public employees, the total staff in a local government department. Jobs were clustered into job families through a work analysis based on standard questionnaires. A task description technique was used to develop a performance appraisal questionnaire for each job family, with three versions (self-, supervisor-, and peer-evaluation), in addition to a measure of citizenship performance. Significant correlations between task performance ratings appeared only when the self-appraisal bias was controlled. However, intraclass correlation analyses show that only self-rated (contextual and task) performance measures are consistent, while interrater agreement disappears. These results provide some interesting clues about the procedure of appraisal instrument development, the role of appraisers, and the importance of choosing adequate consistency analysis methods. PMID:26473956
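A minimal sketch of the consistency metric used above, ICC(2,1) from Shrout and Fleiss, computed from a synthetic targets-by-raters matrix.

```python
# Two-way random-effects intraclass correlation, ICC(2,1), from an
# n-targets x k-raters ratings matrix. Ratings are synthetic.
import numpy as np

def icc_2_1(X):
    n, k = X.shape
    grand = X.mean()
    ms_rows = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    sse = ((X - X.mean(axis=1, keepdims=True)
              - X.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(6)
true_perf = rng.normal(50, 10, 30)                      # 30 employees
ratings = np.column_stack([true_perf + rng.normal(0, 5, 30) for _ in range(3)])
print("ICC(2,1) =", round(icc_2_1(ratings), 3))         # self/supervisor/peer
```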
Hidden marker position estimation during sit-to-stand with walker.
Yoon, Sang Ho; Jun, Hong Gul; Dan, Byung Ju; Jo, Byeong Rim; Min, Byung Hoon
2012-01-01
Motion capture analysis of the sit-to-stand task with an assistive device is hard to achieve due to obstruction of the reflective markers. The previously developed robotic system, the Smart Mobile Walker, is used as an assistive device to perform motion capture analysis of the sit-to-stand task. All lower limb markers except the hip markers are invisible throughout the whole session. A link-segment and regression method is applied to estimate the marker positions during sit-to-stand. Applying this new method, the lost marker positions are restored, and a biomechanical evaluation of the sit-to-stand movement with the Smart Mobile Walker could be carried out. The accuracy of the marker position estimation is verified against normal sit-to-stand data from more than 30 clinical trials. Further research on improving the link-segment and regression method is also addressed.
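A hedged sketch of the regression component: learn an affine map from visible marker coordinates to an occluded marker on frames where everything is visible, then predict the occluded marker later. Names and dimensions are illustrative, not the paper's model.

```python
# Affine regression from visible (hip) marker coordinates to a hidden
# lower-limb marker, trained on fully visible frames. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
frames = 200
hip = rng.normal(size=(frames, 6))            # left/right hip x,y,z
knee = hip @ rng.normal(size=(6, 3)) * 0.5 + rng.normal(0, 0.01, (frames, 3))

A = np.column_stack([hip, np.ones(frames)])   # design matrix with intercept
W, *_ = np.linalg.lstsq(A, knee, rcond=None)

new_hip = rng.normal(size=(1, 6))             # frame with occluded knee marker
knee_est = np.column_stack([new_hip, [[1.0]]]) @ W
print("estimated knee position:", np.round(knee_est, 3))
```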
The detection methods of dynamic objects
NASA Astrophysics Data System (ADS)
Knyazev, N. L.; Denisova, L. A.
2018-01-01
The article deals with the application of cluster analysis methods to the task of aircraft detection, based on grouping selections of navigation parameters into clusters. A modified cluster analysis method is suggested for the search and detection of objects, in which detections are iteratively combined into clusters and the clusters are then counted, to increase the accuracy of aircraft detection. The course of the method's operation and the features of its implementation are considered. In conclusion, the efficiency of the proposed exact cluster analysis method for finding targets is demonstrated.
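A minimal sketch of the clustering step: group 2-D navigation fixes with k-means and read the number of detected objects from the cluster count. Choosing k by an inertia "elbow" is an illustrative stand-in for the paper's iterative combining rule.

```python
# Cluster noisy position fixes around three true aircraft and inspect the
# k-means inertia curve; the elbow suggests the object count. Synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
targets = np.array([[0, 0], [10, 10], [20, 5]])            # true aircraft
fixes = np.vstack([t + rng.normal(0, 0.5, (30, 2)) for t in targets])

inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0)
               .fit(fixes).inertia_ for k in range(1, 7)}
print({k: round(v, 1) for k, v in inertias.items()})       # elbow near k = 3
```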
Situational Interest in Engineering Design Activities
NASA Astrophysics Data System (ADS)
Bonderup Dohn, Niels
2013-08-01
The aim of the present mixed-method study was to investigate task-based situational interest of sixth grade students (n = 46), between 12 and 14 years old, during an eight-week engineering design programme in a Science & Technology-class. Students' interests were investigated by means of a descriptive interpretative analysis of qualitative data from classroom observations and informal interviews. The analysis was complemented by a self-report survey to validate findings and determine prevalence. The analysis revealed four main sources of interest: designing inventions, trial-and-error experimentation, achieved functionality of invention, and collaboration. These sources differ in terms of stimuli factors, such as novelty, autonomy (choice), social involvement, self-generation of interest, and task goal orientation. The study shows that design tasks stimulated interest, but only to the extent that students were able to self-regulate their learning strategies.
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional analysis of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. This report is presented in two volumes. Volume 1 describes effort performed under Task 4B, Special Finite Element Special Function Models, while Volume 2 concentrates on Task 4C, Advanced Special Functions Models.
Human brain mapping: A systematic comparison of parcellation methods for the human cerebral cortex.
Arslan, Salim; Ktena, Sofia Ira; Makropoulos, Antonios; Robinson, Emma C; Rueckert, Daniel; Parisot, Sarah
2018-04-15
The macro-connectome elucidates the pathways through which brain regions are structurally connected or functionally coupled to perform a specific cognitive task. It embodies the notion of representing and understanding all connections within the brain as a network, while the subdivision of the brain into interacting functional units is inherent in its architecture. As a result, the definition of network nodes is one of the most critical steps in connectivity network analysis. Although brain atlases obtained from cytoarchitecture or anatomy have long been used for this task, connectivity-driven methods have arisen only recently, aiming to delineate more homogeneous and functionally coherent regions. This study provides a systematic comparison between anatomical, connectivity-driven and random parcellation methods proposed in the thriving field of brain parcellation. Using resting-state functional MRI data from the Human Connectome Project and a plethora of quantitative evaluation techniques investigated in the literature, we evaluate 10 subject-level and 24 groupwise parcellation methods at different resolutions. We assess the accuracy of parcellations from four different aspects: (1) reproducibility across different acquisitions and groups, (2) fidelity to the underlying connectivity data, (3) agreement with fMRI task activation, myelin maps, and cytoarchitectural areas, and (4) network analysis. This extensive evaluation of different parcellations generated at the subject and group level highlights the strengths and shortcomings of the various methods and aims to provide a guideline for the choice of parcellation technique and resolution according to the task at hand. The results obtained in this study suggest that there is no optimal method able to address all the challenges faced in this endeavour simultaneously. Copyright © 2017 Elsevier Inc. All rights reserved.
Plis, Sergey M; Sui, Jing; Lane, Terran; Roy, Sushmita; Clark, Vincent P; Potluru, Vamsi K; Huster, Rene J; Michael, Andrew; Sponheim, Scott R; Weisend, Michael P; Calhoun, Vince D
2013-01-01
Identifying the complex activity relationships present in rich, modern neuroimaging data sets remains a key challenge for neuroscience. The problem is hard because (a) the underlying spatial and temporal networks may be nonlinear and multivariate and (b) the observed data may be driven by numerous latent factors. Further, modern experiments often produce data sets containing multiple stimulus contexts or tasks processed by the same subjects. Fusing such multi-session data sets may reveal additional structure, but raises further statistical challenges. We present a novel analysis method for extracting complex activity networks from such multifaceted imaging data sets. Compared to previous methods, we choose a new point in the trade-off space, sacrificing detailed generative probability models and explicit latent variable inference in order to achieve robust estimation of multivariate, nonlinear group factors (“network clusters”). We apply our method to identify relationships of task-specific intrinsic networks in schizophrenia patients and control subjects from a large fMRI study. After identifying network-clusters characterized by within- and between-task interactions, we find significant differences between patient and control groups in interaction strength among networks. Our results are consistent with known findings of brain regions exhibiting deviations in schizophrenic patients. However, we also find high-order, nonlinear interactions that discriminate groups but that are not detected by linear, pair-wise methods. We additionally identify high-order relationships that provide new insights into schizophrenia but that have not been found by traditional univariate or second-order methods. Overall, our approach can identify key relationships that are missed by existing analysis methods, without losing the ability to find relationships that are known to be important. PMID:23876245
NASA Technical Reports Server (NTRS)
Phillips, E. P.
1993-01-01
A second experimental Round Robin on the measurement of the crack opening load in fatigue crack growth tests has been completed by the ASTM Task Group E24.04.04 on Crack Closure Measurement and Analysis. Fourteen laboratories participated in the testing of aluminum alloy compact tension specimens. Opening-load measurements were made at three crack lengths during constant Delta K, constant stress ratio tests by most of the participants. Four participants made opening-load measurements during threshold tests. All opening-load measurements were based on the analysis of specimen compliance behavior, where the displacement/strain was measured either at the crack mouth or at the mid-height back-face location. The Round Robin data were analyzed for opening load using two non-subjective analysis methods: the compliance offset and the correlation coefficient methods. The scatter in the opening-load results was significantly reduced when some of the results were excluded from the analysis population based on an accept/reject criterion for raw-data quality. The compliance offset and correlation coefficient opening-load analysis methods produced similar results for data populations that had been screened to eliminate poor-quality data.
Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.
Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong
2018-03-01
Clinical pathways (CPs) are popular healthcare management tools to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. The flexible semantics of the business process model and notation (BPMN) language has been shown to be useful for the modeling and analysis of complex protocols. However, in practical cases one may want to exploit the fact that CPs often have the form of task-time matrices. This paper presents a new method that parses complex BPMN models and aligns traces to the models heuristically. A case study on variance analysis is undertaken, using a CP from practice and two large sets of patient data from an electronic medical record (EMR) database. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data is feasible, whereas that was not the case for existing analysis techniques. We also provide meaningful insights for further improvement.
Reliability of drivers in urban intersections.
Gstalter, Herbert; Fastenmeier, Wolfgang
2010-01-01
The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE-task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers' or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary between both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly equals the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
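The error-index ratio defined above lends itself to a small worked example; the counts below are illustrative, not the study's data.

```python
# HRA error index per driving task: errors committed divided by opportunities
# for that error; reliability is the complement. Counts are illustrative.
observations = {
    "non-signalised intersection": {"errors": 48, "opportunities": 124},
    "signalised intersection":     {"errors": 17, "opportunities": 130},
    "roundabout":                  {"errors": 39, "opportunities": 118},
}
for task, c in observations.items():
    index = c["errors"] / c["opportunities"]
    print(f"{task}: error index = {index:.2f}, reliability = {1 - index:.2f}")
```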
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
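A minimal sketch of pixel-level PCA fusion, one of the baselines above (MSSF itself is feature-level and is not reproduced here): weight the two bands by the loadings of the first principal component of their joint pixel covariance.

```python
# Pixel-level PCA fusion of a visible and an infrared band: the fused image
# is a weighted sum using first-principal-component loadings. Synthetic bands.
import numpy as np

rng = np.random.default_rng(9)
visible = rng.random((128, 128))
infrared = 0.6 * visible + 0.4 * rng.random((128, 128))

stacked = np.vstack([visible.ravel(), infrared.ravel()])
vals, vecs = np.linalg.eigh(np.cov(stacked))
w = np.abs(vecs[:, -1])                  # first principal component loadings
w = w / w.sum()
fused = w[0] * visible + w[1] * infrared
print("fusion weights:", np.round(w, 3))
```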
Kim, Hyun-Chul; Yoo, Seung-Schik; Lee, Jong-Hwan
2015-01-01
Electroencephalography (EEG) data simultaneously acquired with functional magnetic resonance imaging (fMRI) data are preprocessed to remove gradient artifacts (GAs) and ballistocardiographic artifacts (BCAs). Nonetheless, these data, especially in the gamma frequency range, can be contaminated by residual artifacts produced by mechanical vibrations in the MRI system, in particular the cryogenic pump that compresses and transports the helium that chills the magnet (the helium-pump). However, few options are available for the removal of helium-pump artifacts. In this study, we propose a recursive approach of EEG-segment-based principal component analysis (rsPCA) that enables the removal of these helium-pump artifacts. Using the rsPCA method, feature vectors representing helium-pump artifacts were successfully extracted as eigenvectors, and the reconstructed signals of the feature vectors were subsequently removed. A test using simultaneous EEG-fMRI data acquired from left-hand (LH) and right-hand (RH) clenching tasks performed by volunteers found that the proposed rsPCA method substantially reduced helium-pump artifacts in the EEG data and significantly enhanced task-related gamma band activity levels (p=0.0038 and 0.0363 for LH and RH tasks, respectively) in EEG data that have had GAs and BCAs removed. The spatial patterns of the fMRI data were estimated using a hemodynamic response function (HRF) modeled from the estimated gamma band activity in a general linear model (GLM) framework. Active voxel clusters were identified in the post-/pre-central gyri of motor area, only from the rsPCA method (uncorrected p<0.001 for both LH/RH tasks). In addition, the superior temporal pole areas were consistently observed (uncorrected p<0.001 for the LH task and uncorrected p<0.05 for the RH task) in the spatial patterns of the HRF model for gamma band activity when the task paradigm and movement were also included in the GLM. Copyright © 2014 Elsevier Inc. All rights reserved.
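A hedged sketch in the spirit of the approach: within an EEG segment, remove the leading principal component under the assumption that it captures the vibration artifact. The recursive segment-wise feature selection of rsPCA is omitted.

```python
# Remove the rank-1 (leading principal component) part of an EEG segment,
# assumed here to be dominated by a shared vibration-like artifact.
import numpy as np

def remove_top_component(segment):
    # segment: (n_channels, n_samples); subtract the rank-1 part of the SVD
    centered = segment - segment.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    artifact = s[0] * np.outer(U[:, 0], Vt[0])
    return centered - artifact

rng = np.random.default_rng(10)
t = np.arange(1000) / 250.0
artifact = np.sin(2 * np.pi * 42 * t)               # shared gamma-range source
eeg = rng.normal(size=(16, 1000)) + 3 * np.outer(rng.random(16), artifact)
cleaned = remove_top_component(eeg)
print("power before/after:", round(eeg.var(), 2), round(cleaned.var(), 2))
```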
Comparison of continuously acquired resting state and extracted analogues from active tasks.
Ganger, Sebastian; Hahn, Andreas; Küblböck, Martin; Kranz, Georg S; Spies, Marie; Vanicek, Thomas; Seiger, René; Sladky, Ronald; Windischberger, Christian; Kasper, Siegfried; Lanzenberger, Rupert
2015-10-01
Functional connectivity analysis of brain networks has become an important tool for investigation of human brain function. Although functional connectivity computations are usually based on resting-state data, the application to task-specific fMRI has received growing attention. Three major methods for extraction of resting-state data from task-related signal have been proposed: (1) usage of unmanipulated task data for functional connectivity; (2) regression against task effects, subsequently using the residuals; and (3) concatenation of baseline blocks located in-between task blocks. Despite widespread application in current research, consensus on which method best resembles resting-state seems to be missing. We, therefore, evaluated these techniques in a sample of 26 healthy controls measured at 7 Tesla. In addition to continuous resting-state, two different task paradigms were assessed (emotion discrimination and right finger-tapping) and five well-described networks were analyzed (default mode, thalamus, cuneus, sensorimotor, and auditory). Investigating the similarity to continuous resting-state (Dice, intraclass correlation coefficient (ICC), R^2) showed that regression against task effects yields functional connectivity networks most alike to resting-state. However, all methods exhibited significant differences when compared to continuous resting-state, and similarity metrics were lower than test-retest of two resting-state scans. Omitting global signal regression did not change these findings. Visually, the networks are highly similar, but through further investigation marked differences can be found. Therefore, our data do not support referring to resting-state when extracting signals from task designs, although functional connectivity computed from task-specific data may indeed yield interesting information. © 2015 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
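A minimal sketch of method (2) above: regress task effects out of each region's time series and compute connectivity on the residuals. The boxcar regressor stands in for a convolved task design.

```python
# Regress a task regressor out of two region time series, then compare
# connectivity (correlation) before and after. Synthetic block design.
import numpy as np

rng = np.random.default_rng(11)
T = 200
task = (np.arange(T) // 20) % 2            # alternating task/baseline blocks
roi_a = 0.8 * task + rng.normal(size=T)
roi_b = 0.8 * task + 0.4 * roi_a + rng.normal(size=T)

X = np.column_stack([np.ones(T), task])    # design: intercept + task regressor
def residualize(y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_raw = np.corrcoef(roi_a, roi_b)[0, 1]
r_res = np.corrcoef(residualize(roi_a), residualize(roi_b))[0, 1]
print(f"connectivity raw: {r_raw:.2f}, after task regression: {r_res:.2f}")
```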
DOT National Transportation Integrated Search
1978-10-01
This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...
SER-LARS, Volume 10. Instructional Methods I. 1975-76 Edition.
ERIC Educational Resources Information Center
Montgomery County Intermediate Unit 23, Blue Bell, PA.
The book briefly describes several hundred instructional methods from the Special Education Resources Location Analysis and Retrieval System (SER-LARS), which are intended for use in developing and carrying out individualized programs for handicapped children. Each teaching method includes an accession number; title; author; source; teacher tasks;…
SER-LARS, Volume 11. Instructional Methods II. 1975-76 Edition.
ERIC Educational Resources Information Center
Montgomery County Intermediate Unit 23, Blue Bell, PA.
The book briefly describes several hundred instructional methods from the Special Education Resources Location Analysis and Retrieval System (SER LARS), which are intended for use in developing and carrying out individualized programs for handicapped children. Each teaching method includes an accession number; title; author; source; teacher tasks;…
SER-LARS, Volume 12. Instructional Methods III. 1975-76 Edition.
ERIC Educational Resources Information Center
Montgomery County Intermediate Unit 23, Blue Bell, PA.
The book briefly describes several hundred instructional methods from the Special Education Resources Location Analysis and Retrieval System (SER-LARS), which are intended for use in developing and carrying out individualized programs for handicapped children. Each teaching method includes an accession number; title; author; source; teacher tasks;…
Selected Judgmental Methods in Defense Analyses. Volume 1. Main Text.
1990-07-01
contract No. MDA903-89-C-0003, Task T-6-593, Survey of Qualitative Methods in Military Operations Research. The objective of this analysis is to...Generalizability, and Reliability: Three Dimensions of Judgment Research...Non-Gamble Methods...Criticisms, Caveats, Replies
Advanced Signal Processing Methods Applied to Digital Mammography
NASA Technical Reports Server (NTRS)
Stauduhar, Richard P.
1997-01-01
The work reported here is on the extension of the earlier proposal of the same title, August 1994-June 1996. The report for that work is also being submitted. The work reported there forms the foundation for this work from January 1997 to September 1997. After the earlier work was completed there were a few items that needed to be completed prior to submission of a new and more comprehensive proposal for further research. Those tasks have been completed and two new proposals have been submitted, one to NASA, and one to Health & Human Services (HHS). The main purpose of this extension was to refine some of the techniques that lead to automatic large scale evaluation of full mammograms. Progress on each of the proposed tasks follows. Task 1: A multiresolution segmentation of background from breast has been developed and tested. The method is based on the different noise characteristics of the two different fields. The breast field has more power in the lower octaves and the off-breast field behaves similar to a wideband process, where more power is in the high frequency octaves. After the two fields are separated by lowpass filtering, a region labeling routine is used to find the largest contiguous region, the breast. Task 2: A wavelet expansion that can decompose the image without zero padding has been developed. The method preserves all properties of the power-of-two wavelet transform and does not add appreciably to computation time or storage. This work is essential for analysis of the full mammogram, as opposed to selecting sections from the full mammogram. Task 3: A clustering method has been developed based on a simple counting mechanism. No ROC analysis has been performed (and was not proposed), so we cannot finally evaluate this work without further support. Task 4: Further testing of the filter reveals that different wavelet bases do yield slightly different qualitative results. We cannot provide quantitative conclusions about this for all possible bases without further support. Task 5: Better modeling does indeed make an improvement in the detection output. After the proposal ended, we came up with some new theoretical explanations that help in understanding when the D4 filter should be better. This work is currently in the review process. Task 6: N/A. This no longer applies in view of Tasks 4-5. Task 7: Comprehensive plans for further work have been completed. These plans are the subject of two proposals, one to NASA and one to HHS. These proposals represent plans for a complete evaluation of the methods for identifying normal mammograms, augmented with significant further theoretical work.
A customisable framework for the assessment of therapies in the solution of therapy decision tasks.
Manjarrés Riesco, A; Martínez Tomás, R; Mira Mira, J
2000-01-01
In current medical research, a growing interest can be observed in the definition of a global therapy-evaluation framework which integrates considerations such as patients' preferences and quality-of-life results. In this article, we propose the use of the research results in this domain as a source of knowledge in the design of support systems for therapy decision analysis, in particular with a view to application in oncology. We discuss the incorporation of these considerations in the definition of the therapy-assessment methods involved in the solution of a generic therapy decision task, described in the context of AI software development methodologies such as CommonKADS. The goal of the therapy decision task is to identify the ideal therapy, for a given patient, in accordance with a set of objectives of a diverse nature. The assessment methods applied are based either on data obtained from statistics or on the specific idiosyncrasies of each patient, as identified from their responses to a suite of psychological tests. In the analysis of the therapy decision task we emphasise the importance, from a methodological perspective, of using a rigorous approach to the modelling of domain ontologies and domain-specific data. To this aim we make extensive use of the semi-formal object-oriented analysis notation UML to describe the domain level.
DOT National Transportation Integrated Search
2011-12-01
Current AASHTO provisions for the conventional load rating of flat slab bridges rely on the equivalent strip method of analysis for determining live load effects; this is generally regarded as overly conservative by many professional engineers. A...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, J.; Talbott, J.
1984-01-01
Task 1. Methods development for the speciation of the polysulfides. Work on this task was completed in December 1983 and reported accordingly in DOE/PC/40783-T13. Task 2. Methods development for the speciation of dithionite and polythionates. Work on Task 2 was completed in June 1984 and has been reported accordingly in DOE/PC/40783-T15. Task 3. Total accounting of the sulfur balance in representative samples of synfuel process streams. A systematic and critical comparison of results, obtained in the analysis of sulfur moieties in representative samples of coal conversion process streams, revealed the following general trends. (a) In specimens of high pH (9-10) and low redox potential (-0.3 to -0.4 volt versus NHE), sulfidic and polysulfidic sulfur moieties predominate. (b) In process streams of lower pH and more positive redox potential, higher oxidation states of sulfur (notably sulfate) account for most of the total sulfur present. (c) Oxidative wastewater treatment procedures by the PETC stripping process convert lower oxidation states of sulfur into thiosulfate and sulfate. In this context, remarkable similarities were observed between liquefaction and gasification process streams. However, the thiocyanate present in samples from the Grand Forks gasifier was impervious to the PETC stripping process. (d) Total sulfur contaminant levels in coal conversion process stream wastewater samples are determined primarily by the abundance of sulfur in the coal used as starting material rather than by the nature of the conversion process (liquefaction or gasification). 13 references.
Application of GIS Technology for Town Planning Tasks Solving
NASA Astrophysics Data System (ADS)
Kiyashko, G. A.
2017-11-01
For developing territories, one of the most pressing town-planning tasks is to find suitable sites for building projects. A geographic information system (GIS) allows one to model complex spatial processes and can provide effective tools to solve these tasks. We propose several GIS analysis models which can define suitable settlement allocations and select appropriate parcels for construction objects. We implement our models in the ArcGIS Desktop package and verify them by application to existing objects in Primorsky Region (Primorye Territory). These suitability models use several combinations of analysis methods and include various ways to resolve the suitability task using vector data and a raster data set. The suitability models created in this study can be combined, and one model can be integrated into another as its part. Our models can be augmented with other suitability models for further detailed planning.
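A minimal sketch of the kind of raster suitability overlay such models perform, written with NumPy rather than ArcGIS tools; the criteria, reclassification breakpoints, and weights are illustrative.

```python
# Weighted raster overlay: reclassify criterion layers to 0-1 scores and
# combine with weights to get a suitability surface. Synthetic rasters.
import numpy as np

rng = np.random.default_rng(12)
slope = rng.uniform(0, 30, (100, 100))          # degrees
dist_road = rng.uniform(0, 5000, (100, 100))    # metres

slope_score = np.clip(1 - slope / 15, 0, 1)     # flatter is better
road_score = np.clip(1 - dist_road / 2000, 0, 1)
suitability = 0.6 * slope_score + 0.4 * road_score

suitable = suitability > 0.7
print("share of cells suitable for construction:", round(suitable.mean(), 3))
```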
Cognitive Approaches for Medicine in Cloud Computing.
Ogiela, Urszula; Takizawa, Makoto; Ogiela, Lidia
2018-03-03
This paper will present the application potential of the cognitive approach to data interpretation, with special reference to medical areas. The possibilities of using the meaning-based approach to data description and analysis will be proposed for data analysis tasks in Cloud Computing. The methods of cognitive data management in Cloud Computing are aimed at supporting the processes of protecting data against unauthorised takeover, and they serve to enhance the data management processes. The accomplishment of the proposed tasks will be the definition of algorithms for the execution of meaning-based data interpretation processes in safe Cloud Computing. • We proposed cognitive methods for data description. • We proposed techniques for securing data in Cloud Computing. • The application of cognitive approaches to medicine was described.
Kirchner, Elsa A.; Kim, Su Kyoung
2018-01-01
Event-related potentials (ERPs) are often used in brain-computer interfaces (BCIs) for communication or system control, to enhance or regain control for motor-disabled persons. In particular, results from single-trial EEG classification approaches for BCIs support correlations between single-trial ERP detection performance and ERP expression. Hence, BCIs can be considered a paradigm shift contributing new methods with strong influence on both neuroscience and clinical applications. Here, we investigate the relevance of the choice of training data and of classifier transfer for the interpretability of results from single-trial ERP detection. In our experiments, subjects performed a visual-motor oddball task with motor-task relevant infrequent (targets), motor-task irrelevant infrequent (deviants), and motor-task irrelevant frequent (standards) stimuli. Under the dual-task condition, a secondary senso-motor task was performed, compared to the simple-task condition. For evaluation, average ERP analysis and single-trial detection analysis with different numbers of electrodes were performed. Further, classifier transfer between the simple and dual tasks was investigated. Parietal positive ERPs evoked by target stimuli (but not by deviants) were expressed more strongly under the dual-task condition, which is discussed as an increase of task emphasis and of brain processes involved in task coordination and change of task set. The highest classification performance was found for targets, irrespective of whether all 62, 6, or 2 parietal electrodes were used. Further, higher detection performance for targets compared to standards was achieved under the dual-task compared to the simple-task condition when training took place on data from 2 parietal electrodes, corresponding to the results of the average ERP analysis. Classifier transfer between tasks improves classification performance when training takes place on more varied examples (from the dual task). In summary, we showed that the P300 and overlapping parietal positive ERPs can successfully be detected while subjects perform additional ongoing motor activity. This supports single-trial detection of ERPs evoked by target events to, e.g., infer a patient's attentional state during therapeutic intervention. PMID:29636660
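A hedged sketch of single-trial ERP detection of the sort described: mean amplitudes in short post-stimulus windows from two parietal channels, classified with LDA; trials are synthetic, with a P300-like deflection added to targets.

```python
# Single-trial ERP detection: windowed-mean features + LDA. Synthetic trials
# with a parietal positivity around 350 ms added to the target class.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(13)
fs, n_trials = 100, 120                          # Hz, trials per class
t = np.arange(0, 1.0, 1 / fs)
p300 = np.exp(-((t - 0.35) ** 2) / 0.005)        # parietal positivity template

def make_trials(has_erp):
    x = rng.normal(0, 1, (n_trials, 2, t.size))  # 2 parietal channels
    if has_erp:
        x += 1.5 * p300
    return x

def window_means(trials):
    # mean amplitude in 100 ms windows between 200 and 600 ms post-stimulus
    wins = [(int(a * fs), int((a + 0.1) * fs)) for a in np.arange(0.2, 0.6, 0.1)]
    return np.array([[tr[:, a:b].mean(axis=1) for a, b in wins]
                     for tr in trials]).reshape(len(trials), -1)

X = np.vstack([window_means(make_trials(True)), window_means(make_trials(False))])
y = np.r_[np.ones(n_trials), np.zeros(n_trials)]
print("CV accuracy:", cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```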
Hsu, Chien-Chang; Cheng, Ching-Wen; Chiu, Yi-Shiuan
2017-02-15
Electroencephalograms can record wave variations in any brain activity. Beta waves are produced when an external stimulus induces logical thinking, computation, and reasoning during consciousness. This work uses the beta waves from major-scale working memory N-back tasks to analyze the differences between young musicians and non-musicians. After the feature analysis uses signal filtering, Hilbert-Huang transformation, and feature extraction methods to identify differences, a k-means clustering algorithm is used to group the subjects into different clusters. The results of the feature analysis showed that beta waves differ significantly between young musicians and non-musicians even at the low memory load of the working memory task. Copyright © 2017 Elsevier B.V. All rights reserved.
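A minimal sketch of the pipeline named above, with a band-pass filter and Hilbert envelope standing in for the Hilbert-Huang stage: one beta-band envelope feature per subject, clustered with k-means; all signals are synthetic.

```python
# Beta-band (13-30 Hz) envelope feature per subject, clustered with k-means.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.cluster import KMeans

fs = 250.0
b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")

def beta_feature(eeg):
    beta = filtfilt(b, a, eeg)
    return np.abs(hilbert(beta)).mean()          # mean beta envelope amplitude

rng = np.random.default_rng(14)
musicians = [rng.normal(size=2500) * 0.8 for _ in range(10)]
controls = [rng.normal(size=2500) * 1.3 for _ in range(10)]
X = np.array([[beta_feature(s)] for s in musicians + controls])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster labels:", labels)
```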
Innovations in the Analysis of Chandra-ACIS Observations
NASA Astrophysics Data System (ADS)
Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.
2010-05-01
As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.
Bayesian multi-task learning for decoding multi-subject neuroimaging data.
Marquand, Andre F; Brammer, Michael; Williams, Steven C R; Doyle, Orla M
2014-05-15
Decoding models based on pattern recognition (PR) are becoming increasingly important tools for neuroimaging data analysis. In contrast to alternative (mass-univariate) encoding approaches that use hierarchical models to capture inter-subject variability, inter-subject differences are not typically handled efficiently in PR. In this work, we propose to overcome this problem by recasting the decoding problem in a multi-task learning (MTL) framework. In MTL, a single PR model is used to learn different but related "tasks" simultaneously. The primary advantage of MTL is that it makes more efficient use of the data available and leads to more accurate models by making use of the relationships between tasks. In this work, we construct MTL models where each subject is modelled by a separate task. We use a flexible covariance structure to model the relationships between tasks and induce coupling between them using Gaussian process priors. We present an MTL method for classification problems and demonstrate a novel mapping method suitable for PR models. We apply these MTL approaches to classifying many different contrasts in a publicly available fMRI dataset and show that the proposed MTL methods produce higher decoding accuracy and more consistent discriminative activity patterns than currently used techniques. Our results demonstrate that MTL provides a promising method for multi-subject decoding studies by focusing on the commonalities between a group of subjects rather than the idiosyncratic properties of different subjects. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bondarenko, I. R.
2018-03-01
The paper tackles the task of applying a numerical approach to determine the cutting forces in machining carbon steel with a curved-cutting-edge mill. To solve the abovementioned task, the curved surface of the cutting edge was subjected to step approximation, and the chip section was split into discrete elements. As a result, the cutting force was defined as the sum of the elementary forces observed during the cut of each element. Comparison and analysis of calculations using the proposed method and the method based on the Kienzle dependence showed sufficient accuracy, which makes it possible to apply the method in practice.
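A minimal numeric sketch of the summation idea: step-approximate the edge, assign each chip element an uncut thickness and width, and sum elementary forces from a Kienzle-type law F_i = k_c1 * b_i * h_i^(1 - m). The coefficients and geometry are assumed, not the paper's calibrated values.

```python
# Approximate the total cutting force as the sum of Kienzle-type elementary
# forces over discrete chip elements along the edge. Illustrative values.
import numpy as np

k_c1, m = 1700.0, 0.25            # N/mm^2 and Kienzle exponent (assumed)
n = 50                            # number of discrete elements along the edge
phi = np.linspace(0.1, 1.2, n)    # edge position parameter (rad)

b = np.full(n, 0.05)              # element widths (mm)
h = 0.12 * np.sin(phi)            # uncut chip thickness varies along the edge

F = np.sum(k_c1 * b * h ** (1 - m))
print(f"approximate cutting force: {F:.1f} N")
```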
Turewicz, Michael; Kohl, Michael; Ahrens, Maike; Mayer, Gerhard; Uszkoreit, Julian; Naboulsi, Wael; Bracht, Thilo; Megger, Dominik A; Sitek, Barbara; Marcus, Katrin; Eisenacher, Martin
2017-11-10
The analysis of high-throughput mass spectrometry-based proteomics data must address the specific challenges of this technology. To this end, the comprehensive proteomics workflow offered by the de.NBI service center BioInfra.Prot provides indispensable components for the computational and statistical analysis of this kind of data. These components include tools and methods for spectrum identification and protein inference, protein quantification, expression analysis as well as data standardization and data publication. All particular methods of the workflow which address these tasks are state-of-the-art or cutting edge. As has been shown in previous publications, each of these methods is adequate to solve its specific task and gives competitive results. However, the methods included in the workflow are continuously reviewed, updated and improved to adapt to new scientific developments. All of these particular components and methods are available as stand-alone BioInfra.Prot services or as a complete workflow. Since BioInfra.Prot provides manifold fast communication channels to get access to all components of the workflow (e.g., via the BioInfra.Prot ticket system: bioinfraprot@rub.de) users can easily benefit from this service and get support by experts. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kang, Ziho
This dissertation is divided into four parts: 1) development of effective methods for comparing visual scanning paths (or scanpaths) for a dynamic task with multiple moving targets, 2) application of the methods to compare the scanpaths of experts and novices for a conflict detection task with multiple aircraft on a radar screen, 3) a post-hoc analysis of other eye movement characteristics of experts and novices, and 4) finding out whether the scanpaths of experts can be used to teach novices. In order to compare experts' and novices' scanpaths, two methods are developed. The first proposed method is matrix comparison using the Mantel test. The second proposed method is maximum transition-based agglomerative hierarchical clustering (MTAHC), in which comparisons of multi-level visual groupings are carried out. The matrix comparison method was useful for a small number of targets during the preliminary experiment, but turned out to be inapplicable to a realistic case in which tens of aircraft were presented on screen; MTAHC, however, was effective with a large number of aircraft on screen. The experiments with experts and novices on the aircraft conflict detection task showed that their scanpaths are different. The MTAHC result was able to show explicitly how experts visually grouped multiple aircraft based on similar altitudes, while novices tended to group them based on convergence. Also, the MTAHC results showed that novices paid much attention to converging aircraft groups even if they were safely separated by altitude; less attention was therefore given to the actual conflicting pairs, resulting in low correct conflict detection rates. Since the analysis showed the scanpath differences, experts' scanpaths were shown to novices in order to find out whether they are effective for training. The scanpath treatment group showed indications that they changed their visual movements from trajectory-based to altitude-based movements. Between the treatment and the non-treatment group, there were no significant differences in terms of the number of correct detections; however, the treatment group made significantly fewer false alarms.
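A minimal sketch of the first method named above, a Mantel test comparing two scanpath distance matrices by permutation; the matrices here are random placeholders.

```python
# Mantel test: correlation between two distance matrices, with a permutation
# p-value obtained by relabeling one matrix. Matrices are placeholders.
import numpy as np

def mantel(D1, D2, n_perm=5000, seed=0):
    iu = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(D1.shape[0])
        r = np.corrcoef(D1[p][:, p][iu], D2[iu])[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(15)
A = rng.random((10, 10)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
B = A + 0.1 * rng.random((10, 10)); B = (B + B.T) / 2; np.fill_diagonal(B, 0)
r, p = mantel(A, B)
print(f"Mantel r = {r:.2f}, p = {p:.4f}")
```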
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2002-01-01
CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g., mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization are now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. Of the methods available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition, which produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, with the choice of level of detail dependent on the predictions required. Although GOMS has proven useful in HCI, tools to support the construction of GOMS models have not yet come into general use.
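The flavor of a GOMS-style decomposition can be conveyed with a toy keystroke-level sketch: goals expand into subgoals and primitive operators, and a serial execution-time prediction is the sum of operator durations. This simplifies CPM-GOMS, which instead schedules operators in parallel across cognitive, perceptual, and motor resources; the operator times and task structure below are illustrative placeholders, not published values:

    # Toy GOMS-style decomposition: goals expand into operators with durations.
    # The serial sum is a keystroke-level simplification; CPM-GOMS would
    # instead interleave operators on separate resources.
    OPERATOR_TIME_S = {          # illustrative durations, not calibrated values
        "reach": 0.40,
        "press-key": 0.28,
        "perceive": 0.10,
        "verify": 1.35,
    }

    def predict_time(methods, goal):
        """Recursively expand a goal into subgoals/operators and sum times."""
        total = 0.0
        for step in methods[goal]:
            total += predict_time(methods, step) if step in methods \
                     else OPERATOR_TIME_S[step]
        return total

    atm = {
        "withdraw-cash": ["insert-card", "enter-pin", "select-amount"],
        "insert-card":   ["reach", "perceive"],
        "enter-pin":     ["press-key"] * 4 + ["verify"],
        "select-amount": ["perceive", "reach", "press-key"],
    }
    print(f"predicted time: {predict_time(atm, 'withdraw-cash'):.2f} s")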
Individual Differences in Dynamic Functional Brain Connectivity across the Human Lifespan
Davison, Elizabeth N.; Turner, Benjamin O.; Miller, Michael B.; Carlson, Jean M.
2016-01-01
Individual differences in brain functional networks may be related to complex personal identifiers, including health, age, and ability. Dynamic network theory has been used to identify properties of dynamic brain function from fMRI data, but the majority of analyses and findings remain at the level of the group. Here, we apply hypergraph analysis, a method from dynamic network theory, to quantify individual differences in brain functional dynamics. Using a summary metric derived from the hypergraph formalism, hypergraph cardinality, we investigate individual variations in two separate, complementary data sets. The first data set ("multi-task") consists of 77 individuals engaging in four consecutive cognitive tasks. We observe that hypergraph cardinality varies across individuals while remaining consistent within individuals between tasks; moreover, the analysis of one of the memory tasks revealed a marginally significant correspondence between hypergraph cardinality and age. This finding motivated a similar analysis of the second data set ("age-memory"), in which 95 individuals, aged 18–75, performed a memory task with a similar structure to the multi-task memory task. With the increased age range in the age-memory data set, the correspondence between hypergraph cardinality and age becomes significant. We discuss these results in the context of the well-known finding linking age with network structure, and suggest that hypergraph analysis should serve as a useful tool in furthering our understanding of the dynamic network structure of the brain. PMID:27880785
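In rough outline, the hypergraph construction groups edges (region pairs) whose connectivity time courses co-vary into hyperedges, and hypergraph cardinality counts the resulting non-singleton groups. A sketch of that pipeline using windowed correlations and simple hierarchical clustering of edge time courses; the published grouping rule differs in detail, so treat this as an approximation:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def hypergraph_cardinality(ts, win=30, step=5, thresh=0.5):
        """Count hyperedges: groups of region-pair edges whose windowed
        correlation time courses are themselves strongly correlated.

        ts: (time, regions) array of BOLD signals. The grouping rule here is
        a simplified stand-in for the published edge-clustering procedure.
        """
        t, n = ts.shape
        starts = range(0, t - win + 1, step)
        iu = np.triu_indices(n, k=1)
        # Dynamic connectivity: one correlation value per edge per window.
        dyn = np.array([np.corrcoef(ts[s:s + win].T)[iu] for s in starts])
        # Correlation between edge time courses -> distance for clustering.
        edge_corr = np.corrcoef(dyn.T)
        dist = squareform(1.0 - edge_corr, checks=False)
        labels = fcluster(linkage(dist, method="average"),
                          t=1.0 - thresh, criterion="distance")
        sizes = np.bincount(labels)
        return int(np.sum(sizes[1:] > 1))   # hyperedges = non-singleton groups

    rng = np.random.default_rng(1)
    print(hypergraph_cardinality(rng.standard_normal((200, 10))))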
Supervisory manipulation based on the concepts of absolute vs relative and fixed vs moving tasks
NASA Technical Reports Server (NTRS)
Brooks, T. L.
1980-01-01
If a machine is to perform a given subtask autonomously, it will require an internal model which, combined with operator and environmental inputs, can be used to generate the manipulator functions necessary to complete the task. This paper advances a technique, based on linear transformations, by which short, supervised periods of manipulation can be accomplished. To achieve this end, a distinction is made between tasks which can be completely defined during the training period and tasks which can be only partially defined prior to the moment of execution. A further distinction is made between tasks which have a fixed relationship to the manipulator base throughout the execution period and tasks which have a continuously changing task/base relationship during execution. Finally, through a rudimentary analysis of the methods developed in this paper, some of the practical aspects of implementing a supervisory system are illustrated.
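The fixed- vs. moving-task distinction can be made concrete with homogeneous transforms: a fixed task keeps a constant task-to-base transform learned during training, while a moving task must recompose the commanded pose from a sensed, time-varying task-frame pose at execution time. A minimal sketch under those assumptions (frame names and poses are illustrative):

    import numpy as np

    def transform(R, p):
        """4x4 homogeneous transform from rotation R (3x3) and translation p."""
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, p
        return T

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    # Trained once: grasp point expressed in the task frame.
    T_task_grasp = transform(np.eye(3), [0.10, 0.0, 0.05])

    def grasp_in_base(T_base_task):
        """Recompose the commanded pose from the current task-frame pose."""
        return T_base_task @ T_task_grasp

    # Fixed task: the task frame is constant w.r.t. the manipulator base.
    T_fixed = transform(rot_z(0.0), [1.0, 0.5, 0.0])
    print(np.round(grasp_in_base(T_fixed)[:3, 3], 3))

    # Moving task: the task/base relationship changes during execution,
    # so the sensed task-frame pose is re-read at each control step.
    for t in np.linspace(0.0, 1.0, 3):
        T_moving = transform(rot_z(0.3 * t), [1.0 + 0.2 * t, 0.5, 0.0])
        print(np.round(grasp_in_base(T_moving)[:3, 3], 3))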
Traditional and Cognitive Job Analyses as Tools for Understanding the Skills Gap.
ERIC Educational Resources Information Center
Hanser, Lawrence M.
Traditional methods of job and task analysis may be categorized as worker-oriented methods focusing on general human behaviors performed by workers in jobs or as job-oriented methods focusing on the technologies involved in jobs. The ability of both types of traditional methods to identify, understand, and communicate the skills needed in high…
Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques
2012-01-01
Work in organizations requires a minimum level of consensus on the understanding of the practices performed. In environments where work is complex, characterized by interdependence among a large number of variables, adopting technological devices to support activities makes understanding how work is done not only more important but also more difficult. This study therefore presents a method for modeling work in complex systems that improves knowledge about the way activities are performed when those activities do not simply follow procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work seeks to offer a method capable of giving a detailed and accurate view of how people perform their tasks, in order to apply information systems for supporting work in organizations.
Doborjeh, Maryam Gholami; Wang, Grace Y; Kasabov, Nikola K; Kydd, Robert; Russell, Bruce
2016-09-01
This paper introduces a method utilizing spiking neural networks (SNN) for learning, classification, and comparative analysis of brain data. As a case study, the method was applied to electroencephalography (EEG) data collected during a GO/NOGO cognitive task performed by untreated opiate addicts, patients undergoing methadone maintenance treatment (MMT) for opiate dependence, and a healthy control group. The method is based on an SNN architecture called NeuCube, trained on spatiotemporal EEG data. NeuCube was used to classify EEG data across subject groups and across GO versus NOGO trials, but it also facilitated a deeper comparative analysis of the dynamic brain processes, resulting in a better understanding of human brain functioning across subject groups when performing a cognitive task. In terms of EEG data classification, the NeuCube model obtained better results (maximum accuracy: 90.91%) than traditional statistical and artificial intelligence methods (maximum accuracy: 50.55%). More importantly, new information about the effects of MMT on cognitive brain functions is revealed through analysis of the SNN model's connectivity and dynamics. This paper thus presents a new method for EEG data modeling and reveals new knowledge on brain functions associated with mental activity that differs from the brain activity observed in a resting state of the same subjects.
Assessment of the nursing skill mix in Mozambique using a task analysis methodology
2014-01-01
Background: The density of the nursing and maternal child health nursing workforce in Mozambique (0.32/1000) is well below the WHO minimum standard of 1 nurse per 1000 population. Two levels of education were being offered for both nurses and maternal child health nurses, in programmes ranging from 18 to 30 months in length. The health care workforce in Mozambique also includes medical technicians and medical agents, who are likewise educated at either basic or mid-level. The Ministry of Health determined the need to document the tasks that each of the six cadres was performing within various health facilities, to identify gaps and duplications, in order to develop strategies for streamlining workforce production while retaining the highest educational and competency standards. The methodology of task analysis (TA) was used to achieve this objective. This article provides information about the TA methodology and selected outcomes of this very broad study. Methods: A cross-sectional descriptive task analysis survey was conducted over a 15-month period (2008–2009). A stratified sample of 1295 individuals was recruited from every type of health facility in all of Mozambique's 10 provinces and in Maputo City. Respondents indicated how frequently they performed any of 233 patient care tasks. Data analysis focused on identifying areas where identical tasks were performed by the various cadres. Analyses addressed frequency of performance, grouped by level of educational preparation, within various types of health facilities. Results: Task sharing ranged from 74% to 88% between basic and general nurse cadres and from 54% to 88% between maternal and child health nurse cadres, within various health facility types. Conversely, there was a clear distinction between the scopes of practice of the nursing and maternal/child health nursing cadres. Conclusion: The educational pathways to general nursing and maternal/child health nursing careers were consolidated into one 24-month programme for each career. The scopes of practice were affirmed based on the task analysis survey data. PMID:24460789
ERIC Educational Resources Information Center
Alexander, Jennifer L.; Smith, Katie A.; Mataras, Theologia; Shepley, Sally B.; Ayres, Kevin M.
2015-01-01
The two most frequently used methods for assessing performance on chained tasks are single opportunity probes (SOPs) and multiple opportunity probes (MOPs). Of the two, SOPs may be easier and less time-consuming but can suppress actual performance. In comparison, MOPs can provide more information but present the risk of participants acquiring…
Riva, F; Bisi, M C; Stagni, R
2013-01-01
Falls represent a heavy economic and clinical burden on society. The identification of individual chronic characteristics associated with falling is of fundamental importance for clinicians; in particular, the stability of daily motor tasks is one of the main factors that clinicians look for during assessment procedures. Various methods for the assessment of stability in human movement are present in the literature, and methods from the stability analysis of nonlinear dynamic systems applied to biomechanics have recently shown promise. One of these techniques is orbital stability analysis via Floquet multipliers. This method measures the orbital stability of periodic nonlinear dynamic systems, and it is a promising approach for defining a reliable motor stability index that accounts for the dynamics of the whole task cycle. Despite these premises, its use in the assessment of fall risk has been deemed controversial. The aim of this systematic review was therefore to provide a critical evaluation of the literature on applications of orbital stability analysis in biomechanics, with particular focus on methodological aspects. Four electronic databases were searched for articles on the topic; 23 articles were selected for review. The quality of the studies was assessed with a customised quality assessment tool, and the overall quality of the literature in the field was found to be high. The most critical aspect was the lack of uniformity in the implementation of the analysis on biomechanical time series, particularly in the choice of state space and of the number of cycles to include in the analysis. Copyright © 2012 Elsevier B.V. All rights reserved.
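The core computation behind Floquet-multiplier orbital stability can be sketched compactly: states are sampled at the same phase of each task cycle (a Poincaré section), a linearized map is fit to cycle-to-cycle deviations from the mean state, and the magnitudes of its eigenvalues are the multipliers (maximum below 1 suggests orbital stability). A minimal sketch under those assumptions, with synthetic cycle data:

    import numpy as np

    def max_floquet_multiplier(section_states):
        """Largest Floquet multiplier from states sampled once per cycle.

        section_states: (n_cycles, n_dims) states at a fixed phase (Poincare
        section). Fits x_{k+1} - x* ~= J (x_k - x*) by least squares and
        returns max |eig(J)|; values < 1 suggest orbital stability.
        """
        x = section_states - section_states.mean(axis=0)  # deviations from x*
        X_now, X_next = x[:-1], x[1:]
        # Solve X_next = X_now @ J.T in the least-squares sense.
        J_T, *_ = np.linalg.lstsq(X_now, X_next, rcond=None)
        return np.abs(np.linalg.eigvals(J_T.T)).max()

    # Synthetic example: deviations contract by ~0.6 per cycle plus noise.
    rng = np.random.default_rng(0)
    states = np.zeros((60, 3))
    states[0] = rng.standard_normal(3)
    for k in range(59):
        states[k + 1] = 0.6 * states[k] + 0.05 * rng.standard_normal(3)
    print(f"max Floquet multiplier ~ {max_floquet_multiplier(states):.2f}")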
Operations planning and analysis handbook for NASA/MSFC phase B development projects
NASA Technical Reports Server (NTRS)
Batson, Robert C.
1986-01-01
Current operations planning and analysis practices on NASA/MSFC Phase B projects were investigated with the objectives of (1) formalizing these practices into a handbook and (2) suggesting improvements. The study focused on how Science and Engineering (S&E) operations personnel support Program Development (PD) task teams. The intimate relationship between systems engineering and operations analysis was examined. Methods identified for use by operations analysts during Phase B include functional analysis, interface analysis, and methods to calculate and allocate such criteria as reliability, maintainability, and operations and support cost.
One Size Does Not Fit All: Human Failure Event Decomposition and Task Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald Laurids Boring, PhD
2014-09-01
In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered or exacerbated by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications. In this paper, I first review top-down and bottom-up approaches for defining HFEs and then present a seven-step guideline to ensure that a task analysis completed as part of human error identification decomposes to a level suitable for use as HFEs. This guideline illustrates an effective way to bridge the bottom-up approach with top-down requirements.
Burgos, Pablo; Kilborn, Kerry; Evans, Jonathan J.
2017-01-01
Objective: Time-based prospective memory (PM), remembering to do something at a particular moment in the future, is considered to depend upon self-initiated strategic monitoring, involving a retrieval mode (sustained maintenance of the intention) plus target checking (intermittent time checks). The present experiment was designed to explore which brain regions and brain activity are associated with these components of strategic monitoring in time-based PM tasks. Method: Twenty-four participants were asked to reset a clock every four minutes while performing a foreground ongoing word categorisation task. EEG activity was recorded, and the data were decomposed into source-resolved activity using Independent Component Analysis. Brain regions associated with retrieval mode and target checking that were common across participants were found using Measure Projection Analysis. Results: Participants' performance on the ongoing task decreased when it was performed concurrently with the time-based PM task, reflecting an active retrieval mode that relied on the withdrawal of limited resources from the ongoing task. Brain activity with its source in or near the anterior cingulate cortex (ACC) showed changes associated with an active retrieval mode, including greater negative ERP deflections, decreased theta synchronization, and increased alpha suppression for events locked to the ongoing task while maintaining a time-based intention. Activity in the ACC was also associated with time checks and was found consistently across participants; however, we did not find an association with time perception processing per se. Conclusion: The involvement of the ACC in both aspects of time-based PM monitoring may be related to different functions that have been attributed to it: strategic control of attention during the retrieval mode (distributing attentional resources between the ongoing task and the time-based task) and anticipatory/decision-making processing associated with clock checks. PMID:28863146
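The source-resolution step used here is a standard blind source separation; a minimal illustration of decomposing multichannel EEG-like signals into independent components with scikit-learn's FastICA (synthetic data; Measure Projection Analysis itself is a separate, more specialized step not shown):

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)
    # Two synthetic "brain" sources mixed into four "scalp" channels.
    sources = np.c_[np.sin(7 * t), np.sign(np.sin(3 * t))]
    mixing = rng.standard_normal((2, 4))
    eeg = sources @ mixing + 0.05 * rng.standard_normal((2000, 4))

    ica = FastICA(n_components=2, random_state=0)
    components = ica.fit_transform(eeg)   # (time, components) source activity
    print(components.shape, ica.mixing_.shape)  # mixing_: sources -> channels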
Yigzaw, Tegbar; Carr, Catherine; Stekelenburg, Jelle; van Roosmalen, Jos; Gibson, Hannah; Gelagay, Mintwab; Admassu, Azeb
2016-01-01
Purpose: Realizing aspirations for meeting the global reproductive, maternal, newborn, and child health goals depends not only on increasing the numbers but also on improving the capability of the midwifery workforce. We conducted a task analysis study to identify the needs for strengthening the midwifery workforce in Ethiopia. Methods: We conducted a cross-sectional study of recently qualified midwives in Ethiopia. Purposively selected participants from representative geographic and practice settings completed a self-administered questionnaire, making judgments about the frequency of performance, criticality, competence, and location of training for a list of validated midwifery tasks. Using the Statistical Package for the Social Sciences, Version 20, we computed percentages and averages to describe participant and practice characteristics. We identified priority preservice education gaps by considering the tasks least frequently learned in preservice education, most frequently mentioned as not trained, and with the highest "not capable" response. Identification of top priorities for in-service training considered tasks with the highest "not capable" and "never done" responses. We determined the licensing exam blueprint by weighing the composite mean scores for the frequency and criticality variables and expert rating across practice categories. Results: One hundred and thirty-eight midwives participated in the study. The majority of respondents recognized the importance of midwifery tasks (89%), felt they were capable (91.8%), reported doing them frequently (63.9%), and had learned them during preservice education (56.3%). We identified competence gaps in tasks related to obstetric complications, gynecology, public health, professional duties, and prevention of mother-to-child transmission of HIV. Moreover, our study helped to determine the composition of the licensing exam for university graduates. Conclusion: The task analysis indicates that midwives provide critical reproductive, maternal, newborn, and child health care services and supports continuing investment in this cadre. However, there were substantial competence gaps that limit midwives' ability to accelerate progress toward health development goals. Moreover, basing the licensure exam on the task analysis helped to ground it in national practice priorities. PMID:27313478
Methods for Assessment of Memory Reactivation.
Liu, Shizhao; Grosmark, Andres D; Chen, Zhe
2018-04-13
It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing memory reactivation. To date, several statistical methods have been established for assessing memory reactivation based on bursts of ensemble neural spike activity during offline states. Using population-decoding methods, we propose a new statistical metric, the weighted distance correlation, to assess hippocampal memory reactivation (i.e., spatial memory replay) during quiet wakefulness and slow-wave sleep. The new metric can be combined with an unsupervised population-decoding analysis, which is invariant to latent state labeling and allows us to detect statistical dependency beyond linearity in memory traces. We validate the new metric using two rat hippocampal recordings from spatial navigation tasks. Our proposed analysis framework may have a broader impact on assessing memory reactivation in other brain regions under different behavioral tasks.
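Distance correlation, the unweighted core of the proposed metric, detects nonlinear as well as linear dependence between two sample sequences; the weighting scheme of the paper is not reproduced here. A minimal numpy implementation of plain distance correlation for illustration:

    import numpy as np

    def distance_correlation(x, y):
        """Sample distance correlation between 1-D sequences x and y.

        Detects nonlinear dependence: the population dCor is 0 iff x and y
        are independent. This is the unweighted statistic only.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        def centered(a):
            d = np.abs(a[:, None] - a[None, :])        # pairwise distances
            return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
        A, B = centered(x), centered(y)
        dcov2 = (A * B).mean()
        denom = np.sqrt((A * A).mean() * (B * B).mean())
        return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)
    print(round(distance_correlation(x, x ** 2), 2))      # nonlinear: > 0
    print(round(float(np.corrcoef(x, x ** 2)[0, 1]), 2))  # Pearson misses it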
Wang, Jinjia; Liu, Yuan
2015-04-01
This paper presents a feature extraction method based on multivariate empirical mode decomposition (MEMD) combined with power spectrum features, aimed at the non-stationary electroencephalogram (EEG) or magnetoencephalogram (MEG) signals in brain-computer interface (BCI) systems. First, we utilized the MEMD algorithm to decompose multichannel brain signals into a series of intrinsic mode functions (IMFs), which are approximately stationary and multi-scale. Then we extracted power features from each IMF and reduced them to a lower dimension using principal component analysis (PCA). Finally, we classified the motor imagery tasks with a linear discriminant analysis classifier. Experimental verification showed that the correct recognition rates for the two-class and four-class tasks of BCI competition III and competition IV reached 92.0% and 46.2%, respectively, superior to the winners of those competitions. The experiments proved that the proposed method is reasonably effective and stable, and it provides a new way to perform feature extraction.
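The back end of this pipeline (power features, then PCA, then LDA) maps directly onto scikit-learn; MEMD itself is not in the standard libraries, so the sketch below assumes the IMF band-power features have already been computed and stacked into a feature matrix (all shapes and data are illustrative):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Assumed precomputed: per-trial power features from each IMF/channel.
    n_trials, n_features = 200, 60
    X = rng.standard_normal((n_trials, n_features))
    y = rng.integers(0, 2, n_trials)          # two motor imagery classes
    X[y == 1, :5] += 0.8                      # inject a class difference

    clf = make_pipeline(PCA(n_components=10),  # reduce power features
                        LinearDiscriminantAnalysis())
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"CV accuracy: {scores.mean():.2f}")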
Spatio-Temporal Information Analysis of Event-Related BOLD Responses
Alpert, Galit Fuhrmann; Handwerker, Dan; Sun, Felice T.; D’Esposito, Mark; Knight, Robert T.
2009-01-01
A new approach for the analysis of event-related fMRI (BOLD) signals is proposed. The technique is based on measures from information theory and is used both for spatial localization of task-related activity and for extracting temporal information regarding the task-dependent propagation of activation across different brain regions. This approach enables whole-brain visualization of the voxels (areas) most involved in coding a specific task condition, the time at which they are most informative about the condition, and their average amplitude at that preferred time. The approach does not require prior assumptions about the shape of the hemodynamic response function (HRF), nor about linear relations between the BOLD response and presented stimuli (or task conditions). We show that relative delays between different brain regions can also be computed without prior knowledge of the experimental design, suggesting a general method that could be applied to the analysis of differential time delays that occur during natural, uncontrolled conditions. Here we analyze BOLD signals recorded during performance of a motor learning task. We show that during motor learning, the BOLD response of unimodal motor cortical areas precedes the response in higher-order multimodal association areas, including posterior parietal cortex. Brain areas found to be associated with reduced activity during motor learning, predominantly in prefrontal brain regions, are informative about the task typically at significantly later times. PMID:17188515
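The informativeness measure at each voxel and latency can be illustrated with a small mutual-information computation: discretize a voxel's response at each post-stimulus lag, measure its mutual information with the condition label, and take the lag of maximal information as that voxel's "preferred time". A toy sketch along those lines (the binning strategy and data are illustrative, not the paper's exact estimator):

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def preferred_time(bold, labels, n_bins=8):
        """bold: (trials, lags) responses of one voxel; labels: conditions.

        Returns (best_lag, MI at that lag), where MI is computed between
        the condition label and the binned response at each lag.
        """
        mi = []
        for lag in range(bold.shape[1]):
            edges = np.histogram_bin_edges(bold[:, lag], n_bins)
            mi.append(mutual_info_score(labels, np.digitize(bold[:, lag], edges)))
        mi = np.asarray(mi)
        return int(mi.argmax()), float(mi.max())

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 300)
    bold = rng.standard_normal((300, 12))
    bold[labels == 1, 6] += 1.0           # voxel informative mainly at lag 6
    print(preferred_time(bold, labels))   # -> (6, ...)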
Gordon, Evan M.; Stollstorff, Melanie; Vaidya, Chandan J.
2012-01-01
Many researchers have noted that the functional architecture of the human brain is relatively invariant between task performance and the resting state. Indeed, intrinsic connectivity networks (ICNs) revealed by resting-state functional connectivity analyses are spatially similar to regions activated during cognitive tasks. This suggests that patterns of task-related activation in individual subjects may result from the engagement of one or more of these ICNs; however, this has not been tested. We used a novel analysis, spatial multiple regression, to test whether the patterns of activation during an N-back working memory task could be well described by a linear combination of ICNs delineated using Independent Component Analysis at rest. We found that across subjects, the cingulo-opercular Set Maintenance ICN, as well as the right and left Frontoparietal Control ICNs, were reliably activated during working memory, while the Default Mode and Visual ICNs were reliably deactivated. Further, involvement of the Set Maintenance, Frontoparietal Control, and Dorsal Attention ICNs was sensitive to varying working memory load. Finally, the degree of left Frontoparietal Control network activation predicted response speed, while activation in both the left Frontoparietal Control and Dorsal Attention networks predicted task accuracy. These results suggest that a close relationship between resting-state networks and task-evoked activation is functionally relevant for behavior, and that spatial multiple regression analysis is a suitable method for revealing that relationship. PMID:21761505
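Spatial multiple regression here amounts to treating each ICN's spatial map as a regressor and the task activation map as the response, voxelwise across space. A compact sketch under that reading (maps are flattened to vectors; data are synthetic):

    import numpy as np

    def spatial_multiple_regression(task_map, icn_maps):
        """Regress a flattened task activation map onto flattened ICN maps.

        task_map: (n_voxels,); icn_maps: (n_icns, n_voxels).
        Returns one beta per ICN (an intercept column is added internally).
        """
        X = np.column_stack([np.ones(task_map.size), *icn_maps])
        betas, *_ = np.linalg.lstsq(X, task_map, rcond=None)
        return betas[1:]                       # drop the intercept

    rng = np.random.default_rng(0)
    n_voxels = 5000
    icns = rng.standard_normal((4, n_voxels))  # e.g. set maintenance, FPC...
    true_betas = np.array([0.9, 0.5, -0.7, 0.0])
    task = true_betas @ icns + 0.5 * rng.standard_normal(n_voxels)
    print(np.round(spatial_multiple_regression(task, icns), 2))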
Knowledge-based system verification and validation
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1990-01-01
The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) sensitivity analysis (Worcester Polytechnic Institute); (2) formal verification of safety properties (SRI International); and (3) consistency and completeness checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.
Passaro, Antony D; Vettel, Jean M; McDaniel, Jonathan; Lawhern, Vernon; Franaszczuk, Piotr J; Gordon, Stephen M
2017-03-01
During an experimental session, behavioral performance fluctuates, yet most neuroimaging analyses of functional connectivity derive a single connectivity pattern. These conventional connectivity approaches assume that, since the underlying behavior of the task remains constant, the connectivity pattern is also constant. We introduce a novel method, behavior-regressed connectivity (BRC), to directly examine behavioral fluctuations within an experimental session and capture their relationship to changes in functional connectivity. This method employs the weighted phase lag index (WPLI) applied to a window of trials with a weighting function. Using two datasets, the BRC results are compared to conventional connectivity results during two time windows: the one second before stimulus onset, to identify predictive relationships, and the one second after onset, to capture task-dependent relationships. In both tasks, we replicate the expected results for the conventional connectivity analysis and extend our understanding of the brain-behavior relationship using the BRC analysis, demonstrating subject-specific BRC maps that correspond to both positive and negative relationships with behavior. Comparison with existing methods: Conventional connectivity analyses assume a consistent relationship between behavior and functional connectivity, but the BRC method examines performance variability within an experimental session to understand dynamic connectivity and transient behavior. The BRC approach examines connectivity as it covaries with behavior to complement the knowledge of underlying neural activity derived from conventional connectivity analyses. Within this framework, BRC may be implemented for the purpose of understanding performance variability both within and between participants. Published by Elsevier B.V.
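The WPLI that BRC builds on can be computed from trial-wise cross-spectra: it is the magnitude of the average imaginary cross-spectrum, normalized by the average of its magnitude, which downweights near-zero-lag (volume-conduction-like) coupling. A minimal sketch for one channel pair; the behavioral weighting of BRC is omitted:

    import numpy as np
    from scipy.signal import csd

    def wpli(x_trials, y_trials, fs=250.0):
        """Weighted phase lag index between two channels across trials.

        x_trials, y_trials: (n_trials, n_samples) arrays.
        WPLI(f) = |mean_trials(Im Sxy)| / mean_trials(|Im Sxy|).
        """
        imags = []
        for x, y in zip(x_trials, y_trials):
            f, Sxy = csd(x, y, fs=fs, nperseg=x.size)   # cross-spectrum
            imags.append(np.imag(Sxy))
        imags = np.array(imags)
        num = np.abs(imags.mean(axis=0))
        den = np.abs(imags).mean(axis=0)
        return f, np.where(den > 0, num / den, 0.0)

    # Synthetic check: y lags x by a quarter cycle at 10 Hz -> WPLI ~ 1 there.
    rng = np.random.default_rng(0)
    fs, t = 250.0, np.arange(500) / 250.0
    phases = rng.uniform(0, 2 * np.pi, 50)               # random per trial
    x = np.array([np.sin(2 * np.pi * 10 * t + p) for p in phases])
    y = np.array([np.sin(2 * np.pi * 10 * t + p - np.pi / 2) for p in phases])
    x += 0.5 * rng.standard_normal(x.shape)
    y += 0.5 * rng.standard_normal(y.shape)
    f, w = wpli(x, y, fs)
    print(f"WPLI near 10 Hz: {w[np.argmin(np.abs(f - 10))]:.2f}")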
Industrial Instrument Mechanic. Occupational Analyses Series.
ERIC Educational Resources Information Center
Dean, Ann; Zagorac, Mike; Bumbaka, Nick
This analysis covers tasks performed by an industrial instrument mechanic, an occupational title some provinces and territories of Canada have also identified as industrial instrumentation and instrument mechanic. A guide to analysis discusses development, structure, and validation method; scope of the occupation; trends; and safety. To facilitate…
Farm Equipment Mechanic. Occupational Analyses Series.
ERIC Educational Resources Information Center
Ross, Douglas
This analysis covers tasks performed by a farm equipment mechanic, an occupational title some provinces and territories of Canada have also identified as agricultural machinery technician, agricultural mechanic, and farm equipment service technician. A guide to analysis discusses development, structure, and validation method; scope of the…
Recreation Vehicle Mechanic. Occupational Analyses Series.
ERIC Educational Resources Information Center
Dean, Ann; Embree, Rick
This analysis covers tasks performed by a recreation vehicle mechanic, an occupational title some provinces and territories of Canada have also identified as recreation vehicle technician and recreation vehicle service technician. A guide to analysis discusses development, structure, and validation method; scope of the occupation; trends; and…
Linking normative models of natural tasks to descriptive models of neural response.
Jaini, Priyank; Burge, Johannes
2017-10-01
Understanding how nervous systems exploit task-relevant properties of sensory stimuli to perform natural tasks is fundamental to the study of perceptual systems. However, there are few formal methods for determining which stimulus properties are most useful for a given natural task. As a consequence, it is difficult to develop principled models for how to compute task-relevant latent variables from natural signals, and it is difficult to evaluate descriptive models fit to neural responses. Accuracy maximization analysis (AMA) is a recently developed Bayesian method for finding the optimal task-specific filters (receptive fields). Here, we introduce AMA-Gauss, a new, faster form of AMA that incorporates the assumption that the class-conditional filter responses are Gaussian distributed. We then use AMA-Gauss to show that its assumptions are justified for two fundamental visual tasks: retinal speed estimation and binocular disparity estimation. Next, we show that AMA-Gauss has striking formal similarities to popular quadratic models of neural response: the energy model and the generalized quadratic model (GQM). Together, these developments deepen our understanding of why the energy model of neural response has proven useful, improve our ability to evaluate results from subunit model fits to neural data, and should help accelerate psychophysics and neuroscience research with natural stimuli.
Using an embedded reality approach to improve test reliability for NHPT tasks.
Bowler, M; Amirabdollahian, F; Dautenhahn, K
2011-01-01
Research into the use of haptic and virtual reality technologies has increased greatly over the past decade, in terms of both quality and quantity. Methods of combining haptic and virtual technologies with existing techniques for assessing impairment are under development and, thanks to commercially available equipment, have found some success for individuals who suffer upper limb impairment. This paper uses a clinically validated assessment technique for measuring motor impairment, the Nine Hole Peg Test, and creates three tasks with different levels of realism. The efficacy of these tasks is discussed, with particular attention paid to removing factors that limit a virtual environment's use in a clinical setting, such as inter-subject variation. © 2011 IEEE
Weir, Charlene R; Nebeker, Jonathan J R; Hicken, Bret L; Campo, Rebecca; Drews, Frank; Lebar, Beth
2007-01-01
Computerized Provider Order Entry (CPOE), with electronic documentation and computerized decision support, dramatically changes the information environment of the practicing clinician. Prior work patterns based on paper, verbal exchange, and manual methods are replaced with automated, computerized, and potentially less flexible systems. The objective of this study is to explore the information management strategies that clinicians use in the process of adapting to a CPOE system, using cognitive task analysis techniques. Observation and semi-structured interviews were conducted with 88 primary-care clinicians at 10 Veterans Administration Medical Centers. Interviews were taped, transcribed, and extensively analyzed to identify key information management goals, strategies, and tasks. Tasks were aggregated into groups, common components across tasks were clarified, and underlying goals and strategies identified. Nearly half of the identified tasks were not fully supported by the available technology. Six core components of tasks were identified. Four meta-cognitive information management goals emerged: 1) Relevance Screening; 2) Ensuring Accuracy; 3) Minimizing Memory Load; and 4) Negotiating Responsibility. Strategies used to support these goals are presented. Users develop a wide array of information management strategies that allow them to successfully adapt to new technology. Supporting the ability of users to develop adaptive strategies to support meta-cognitive goals is a key component of a successful system.
Niezen, Maartje G H; Mathijssen, Jolanda J P
2014-08-01
To explore the main facilitators of and barriers to task reallocation. One of the innovative approaches to dealing with the anticipated shortage of physicians is to reallocate tasks from the professional domain of medicine to the nursing domain. Various (cost-)effectiveness studies demonstrate that nurse practitioners can deliver care of as high quality as physicians and achieve equally good outcomes. However, these studies do not examine what factors may facilitate or hinder such task reallocation. A systematic literature review of PubMed and Web of Knowledge, supplemented with a snowball search method, was conducted, following the principles of thematic analysis. The 13 relevant papers identified address a broad spectrum of task reallocation (delegation, substitution, and complementary care). Thematic analysis revealed four categories of facilitators and barriers: (1) knowledge and capabilities, (2) professional boundaries, (3) organisational environment, and (4) institutional environment. Introducing nurse practitioners into healthcare requires organisational redesign and the reframing of professional boundaries. The facilitators and barriers in the themes of 'professional boundaries' and 'organisational environment' especially should be considered when reallocating tasks; if not, these factors might hamper the cost-effectiveness of task reallocation in practice. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
An analysis of relational complexity in an air traffic control conflict detection task.
Boag, Christine; Neal, Andrew; Loft, Shayne; Halford, Graeme S
2006-11-15
Theoretical analyses of air traffic complexity were carried out using the Method for the Analysis of Relational Complexity. Twenty-two air traffic controllers examined static air traffic displays and were required to detect and resolve conflicts. Objective measures of performance included conflict detection time and accuracy. Subjective perceptions of mental workload were assessed by a complexity-sorting task and subjective ratings of the difficulty of different aspects of the task. A metric quantifying the complexity of pair-wise relations among aircraft was able to account for a substantial portion of the variance in the perceived complexity and difficulty of conflict detection problems, as well as reaction time. Other variables that influenced performance included the mean minimum separation between aircraft pairs and the amount of time that aircraft spent in conflict.
Mejia Tobar, Alejandra; Hyoudou, Rikiya; Kita, Kahori; Nakamura, Tatsuhiro; Kambara, Hiroyuki; Ogata, Yousuke; Hanakawa, Takashi; Koike, Yasuharu; Yoshimura, Natsue
2017-01-01
The classification of ankle movements from non-invasive brain recordings can be applied in a brain-computer interface (BCI) to control exoskeletons, prostheses, and functional electrical stimulators for the benefit of patients with walking impairments. In this research, ankle flexion and extension tasks at two force levels in both legs were classified from cortical current sources estimated by a hierarchical variational Bayesian method, using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings. The hierarchical prior for the current source estimation from EEG was obtained from the activated brain areas and their intensities in an fMRI group (second-level) analysis. The fMRI group analysis was performed on regions of interest defined over the primary motor cortex, the supplementary motor area, and the somatosensory area, which are well known to contribute to movement control. A sparse logistic regression method was applied for a nine-class classification (eight active tasks and a resting control task), obtaining a mean accuracy of 65.64% for time series of current sources estimated from the EEG and fMRI signals using the variational Bayesian method, versus a mean accuracy of 22.19% for classification of the pre-processed EEG sensor signals, with a chance level of 11.11%. The higher classification accuracy of current sources, compared to EEG classification accuracy, was attributed to the large number of sources and the different signal patterns obtained at the same vertex for different motor tasks. Since the inverse filter for current source estimation can be computed offline, the present method is applicable to real-time BCIs. Finally, due to the highly enhanced spatial distribution of current sources over the brain cortex, this method has the potential to identify activation patterns for designing BCIs to control an affected limb in patients with stroke, or BCIs driven by motor imagery in patients with spinal cord injury.
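The classifier used here, sparse logistic regression, can be approximated with scikit-learn's L1-penalized logistic regression (the original method family typically uses automatic relevance determination, so this is a stand-in rather than the authors' exact algorithm); the feature matrix and labels below are synthetic placeholders for current-source time series:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_sources = 450, 300            # e.g. vertices x time, flattened
    X = rng.standard_normal((n_trials, n_sources))
    y = rng.integers(0, 9, n_trials)          # 8 active tasks + rest
    for c in range(9):                        # inject class-specific sources
        X[y == c, c * 3:(c * 3) + 3] += 1.0

    # The L1 penalty drives most source weights to exactly zero (sparsity).
    clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
    print(f"9-class CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}"
          f" (chance ~ {1/9:.2f})")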
Quantitative analysis of task selection for brain-computer interfaces
NASA Astrophysics Data System (ADS)
Llera, Alberto; Gómez, Vicenç; Kappen, Hilbert J.
2014-10-01
Objective. To assess quantitatively the impact of task selection in the performance of brain-computer interfaces (BCI). Approach. We consider the task-pairs derived from multi-class BCI imagery movement tasks in three different datasets. We analyze for the first time the benefits of task selection on a large-scale basis (109 users) and evaluate the possibility of transferring task-pair information across days for a given subject. Main results. Selecting the subject-dependent optimal task-pair among three different imagery movement tasks results in approximately 20% potential increase in the number of users that can be expected to control a binary BCI. The improvement is observed with respect to the best task-pair fixed across subjects. The best task-pair selected for each subject individually during a first day of recordings is generally a good task-pair in subsequent days. In general, task learning from the user side has a positive influence in the generalization of the optimal task-pair, but special attention should be given to inexperienced subjects. Significance. These results add significant evidence to existing literature that advocates task selection as a necessary step towards usable BCIs. This contribution motivates further research focused on deriving adaptive methods for task selection on larger sets of mental tasks in practical online scenarios.
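Subject-dependent task-pair selection reduces to scoring each pair of imagery tasks by cross-validated decoding accuracy and keeping the best pair for that user. A minimal sketch of that selection loop (the classifier choice and synthetic features are illustrative):

    import numpy as np
    from itertools import combinations
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def best_task_pair(X, y, tasks):
        """Return the imagery task-pair with the highest binary CV accuracy."""
        scores = {}
        for a, b in combinations(tasks, 2):
            m = np.isin(y, [a, b])
            scores[(a, b)] = cross_val_score(
                LinearDiscriminantAnalysis(), X[m], y[m], cv=5).mean()
        return max(scores, key=scores.get), scores

    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 20))
    y = rng.choice(["hands", "feet", "tongue"], 300)
    X[y == "feet", :4] += 0.9           # make "feet" easy to separate
    pair, scores = best_task_pair(X, y, ["hands", "feet", "tongue"])
    print(pair, {k: round(v, 2) for k, v in scores.items()})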
No psychological effect of color context in a low level vision task
Pedley, Adam; Wade, Alex R
2013-01-01
Background: A remarkable series of recent papers have shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing, whilst the reverse is true for the colour blue. The tasks in these experiments require high level cognitive processing such as analogy solving or remote association tests, and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low level visual feature integration. Methods: Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. Results: A two-way repeated measures analysis of covariance (2×2 ANCOVA), with gender as a covariate, revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. Discussion: We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas. PMID:25075280
Warbrick, Tracy; Reske, Martina; Shah, N Jon
2014-09-22
As cognitive neuroscience methods develop, established experimental tasks are used with emerging brain imaging modalities. Here we consider transferring a paradigm with a long history of behavioral and electroencephalography (EEG) experiments, the visual oddball task, to a functional magnetic resonance imaging (fMRI) experiment. The aims of this paper are to briefly describe fMRI and when its use is appropriate in cognitive neuroscience; to illustrate how task design can influence the results of an fMRI experiment, particularly when the task is borrowed from another imaging modality; and to explain the practical aspects of performing an fMRI experiment. It is demonstrated that manipulating the task demands in the visual oddball task results in different patterns of blood oxygen level dependent (BOLD) activation. The nature of the fMRI BOLD measure means that many brain regions are found to be active in a particular task. Determining the functions of these areas of activation is very much dependent on task design and analysis. The complex nature of many fMRI tasks means that the details of the task and its requirements need careful consideration when interpreting data. The data show that this is particularly important in tasks relying on a motor response as well as cognitive elements, and that covert and overt responses should be considered where possible. Furthermore, the data show that transferring an EEG paradigm to an fMRI experiment needs careful consideration: it cannot be assumed that the same paradigm will work equally well across imaging modalities. It is therefore recommended that the design of an fMRI study be pilot tested behaviorally to establish the effects of interest, and then pilot tested in the fMRI environment to ensure appropriate design, implementation, and analysis for the effects of interest.
NASA Astrophysics Data System (ADS)
Shukri, S. Ahmad; Millar, R.; Gratton, G.; Garner, M.; Noh, H. Mohd
2017-12-01
Documentation errors and human errors are often claimed to be contributory factors in aircraft maintenance mistakes. This paper highlights the preliminary results of the third phase of a four-phase study of communication media used in an aircraft maintenance organisation. The second phase looked into the probability of success and failure of 60 subjects in completing a task; in this third phase, the same subjects were interviewed immediately after completing the task, using the Root Cause Analysis (RCA) method. The root cause of subjects' inability to finish the task when using only the written manual was found to be the absence of diagrams. However, haste was identified as the root cause of non-completion when both the manual and diagrams were given to participants. Those who were able to complete the task did so by referring to both the manual and the diagrams simultaneously.
ERIC Educational Resources Information Center
Zheng, Lanqin; Yang, Kaicheng; Huang, Ronghuai
2012-01-01
This study proposes a new method named the IIS-map-based method for analyzing interactions in face-to-face collaborative learning settings. This analysis method is conducted in three steps: firstly, drawing an initial IIS-map according to collaborative tasks; secondly, coding and segmenting information flows into information items of IIS; thirdly,…
Deep learning for tumor classification in imaging mass spectrometry.
Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter
2018-04-01
Tumor classification using imaging mass spectrometry (IMS) data has high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data, as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis, revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. Availability and implementation: https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. Contact: jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary information: Supplementary data are available at Bioinformatics online.
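The adapted architecture is described as deep convolutional networks operating on spectra; a generic 1-D CNN for spectrum classification in PyTorch conveys the idea, together with a gradient-based sensitivity map over m/z bins. Layer sizes, the two-class setup, and the saliency computation are illustrative, not the authors' published architecture:

    import torch
    import torch.nn as nn

    class SpectrumCNN(nn.Module):
        """Generic 1-D CNN for spectrum classification (illustrative)."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
                nn.MaxPool1d(4),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes))

        def forward(self, x):              # x: (batch, 1, n_mz_bins)
            return self.head(self.features(x))

    model = SpectrumCNN()
    spectra = torch.randn(8, 1, 4096, requires_grad=True)  # synthetic spectra
    logits = model(spectra)                                # (8, 2) class scores
    # Sensitivity analysis: gradients of one class score w.r.t. input bins.
    logits[:, 1].sum().backward()
    saliency = spectra.grad.abs().mean(dim=0)              # (1, 4096) map
    print(logits.shape, saliency.shape)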
Non-invasive detection of language-related prefrontal high gamma band activity with beamforming MEG.
Hashimoto, Hiroaki; Hasegawa, Yuka; Araki, Toshihiko; Sugata, Hisato; Yanagisawa, Takufumi; Yorifuji, Shiro; Hirata, Masayuki
2017-10-27
High gamma band (>50 Hz) activity is a key oscillatory phenomenon of brain activation. However, no non-invasive method has been established to detect language-related high gamma band activity. We used a 160-channel whole-head magnetoencephalography (MEG) system equipped with superconducting quantum interference device (SQUID) gradiometers to non-invasively investigate neuromagnetic activities during silent reading and verb generation tasks in 15 healthy participants. Individual data were divided into alpha (8-13 Hz), beta (13-25 Hz), low gamma (25-50 Hz), and high gamma (50-100 Hz) bands and analysed with the beamformer method, with the time window moved consecutively. Group analysis was performed to delineate common areas of brain activation. In the verb generation task, transient power increases in the high gamma band appeared in the left middle frontal gyrus (MFG) in the 550-750 ms post-stimulus window. We set a virtual sensor on the left MFG for time-frequency analysis, and high gamma event-related synchronization (ERS) induced by the verb generation task was demonstrated at 650 ms. In contrast, ERS in the high gamma band was not detected in the silent reading task. Thus, our study successfully measured language-related prefrontal high gamma band activity non-invasively.
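Event-related synchronization in a band can be quantified as the percentage power increase of the band-limited envelope relative to a pre-stimulus baseline. A minimal high-gamma ERS sketch for a single virtual-sensor trace (filter settings and data are illustrative):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def high_gamma_ers(trace, fs, baseline, window, band=(50.0, 100.0)):
        """Percent power change of the band envelope vs. a baseline interval.

        trace: 1-D virtual-sensor signal; baseline/window: (start, stop) in s.
        """
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        power = np.abs(hilbert(filtfilt(b, a, trace))) ** 2
        t = np.arange(trace.size) / fs
        base = power[(t >= baseline[0]) & (t < baseline[1])].mean()
        resp = power[(t >= window[0]) & (t < window[1])].mean()
        return 100.0 * (resp - base) / base

    fs = 1000.0
    t = np.arange(int(2 * fs)) / fs
    sig = np.random.default_rng(0).standard_normal(t.size)
    burst = (t >= 0.55) & (t < 0.75)           # inject a 70 Hz burst
    sig[burst] += 2.0 * np.sin(2 * np.pi * 70 * t[burst])
    ers = high_gamma_ers(sig, fs, (0.0, 0.4), (0.55, 0.75))
    print(f"ERS at 550-750 ms: {ers:.0f}%")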
ERIC Educational Resources Information Center
Nielsen, Richard A.
2016-01-01
This article shows how statistical matching methods can be used to select "most similar" cases for qualitative analysis. I first offer a methodological justification for research designs based on selecting most similar cases. I then discuss the applicability of existing matching methods to the task of selecting most similar cases and…
Reentry Hazard Analysis Handbook
DOT National Transportation Integrated Search
2005-01-28
The Aerospace Corporation was tasked by the Volpe National Transportation Systems Center to provide technical support to the Federal Aviation Administration, Office of Commercial Space Transportation (FAA/AST), in developing acceptable methods of evalua...
Exploring the single-cell RNA-seq analysis landscape with the scRNA-tools database.
Zappia, Luke; Phipson, Belinda; Oshlack, Alicia
2018-06-25
As single-cell RNA-sequencing (scRNA-seq) datasets have become more widespread, the number of tools designed to analyse these data has dramatically increased. Navigating the vast sea of tools now available is becoming increasingly challenging for researchers. In order to better facilitate selection of appropriate analysis tools, we have created the scRNA-tools database (www.scRNA-tools.org) to catalogue and curate analysis tools as they become available. Our database collects a range of information on each scRNA-seq analysis tool and categorises them according to the analysis tasks they perform. Exploration of this database gives insights into the areas of rapid development of analysis methods for scRNA-seq data. We see that many tools perform tasks specific to scRNA-seq analysis, particularly clustering and ordering of cells. We also find that the scRNA-seq community embraces an open-source and open-science approach, with most tools available under open-source licenses and preprints being extensively used as a means to describe methods. The scRNA-tools database provides a valuable resource for researchers embarking on scRNA-seq analysis and records the growth of the field over time.
Correlation between safety assessments in the driver-car interaction design process.
Broström, Robert; Bengtsson, Peter; Axelsson, Jakob
2011-05-01
With the functional revolution in modern cars, evaluation methods that can be used in all phases of driver-car interaction design have gained importance. It is crucial for car manufacturers to discover and solve safety issues early in the interaction design process. A current problem is thus to find a correlation between the formative methods that are used during development and the summative methods that are used when the product has reached the customer. This paper investigates the correlation between efficiency metrics from summative and formative evaluations, comparing the results of two studies on sound and navigation system tasks. The first, an analysis of the J.D. Power and Associates APEAL survey, consists of answers given by about two thousand customers. The second, an expert evaluation study, was done by six evaluators who assessed the layouts by task completion time, TLX, and Nielsen heuristics. The results show a high degree of correlation between the studies in terms of task efficiency, i.e. between customer ratings and task completion time, and between customer ratings and TLX. However, no correlation was observed between Nielsen heuristics and customer ratings, task completion time, or TLX. The results of the two studies introduce the possibility of developing a usability evaluation framework that includes both formative and summative approaches, as they show a high degree of consistency between the different methodologies. Hence, combining the expert evaluation method with a quantitative measure such as task completion time should be most useful for driver-car interaction design. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Suzuki, Satoshi
2017-09-01
This study investigated the spatial distribution of brain activity associated with body schema (BS) modification induced by natural body motion, using two versions of a hand-tracing task. In Task 1, participants traced Japanese Hiragana characters using the right forefinger, requiring no BS expansion. In Task 2, participants performed the tracing task with a long stick, requiring BS expansion. Spatial distribution was analyzed using general linear model (GLM)-based statistical parametric mapping of near-infrared spectroscopy data contaminated with motion artifacts caused by the hand-tracing task. Three methods were utilized in series to counter the artifacts, and optimal conditions and modifications were investigated: a model-free method (Step 1), a convolution matrix method (Step 2), and a boxcar-function-based Gaussian convolution method (Step 3). The results revealed four methodological findings: (1) deoxyhemoglobin was suitable for the GLM because both the Akaike information criterion and the variance against the averaged hemodynamic response function were smaller than for other signals; (2) a high-pass filter with a cutoff frequency of .014 Hz was effective; (3) the hemodynamic response function computed from a Gaussian kernel function and its first- and second-derivative terms should be included in the GLM model; and (4) correction of non-autocorrelation and use of effective degrees of freedom were critical. Investigating z-maps computed according to these guidelines revealed that contiguous areas of BA7-BA40-BA21 in the right hemisphere became significantly activated ([Formula: see text], [Formula: see text], and [Formula: see text], respectively) during BS modification while performing the hand-tracing task.
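The modelling guidelines in this abstract (a boxcar convolved with a Gaussian-derived HRF plus its first and second derivatives, a high-pass filter near .014 Hz, then a GLM fit) translate into a small design-matrix construction. A sketch under those assumptions, with an illustrative kernel width, block timing, and a crude moving-average high-pass in place of a proper filter:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    fs, n = 10.0, 3000                        # 10 Hz sampling, 300 s of data
    t = np.arange(n) / fs

    # Task boxcar: 20 s tracing blocks alternating with 20 s rest (assumed).
    boxcar = ((t // 20) % 2 == 1).astype(float)

    # HRF regressor per the guidelines: Gaussian-convolved boxcar plus its
    # first- and second-derivative terms (sigma below is an assumed width).
    hrf = gaussian_filter1d(boxcar, sigma=2.0 * fs)
    X = np.column_stack([np.ones(n), hrf, np.gradient(hrf),
                         np.gradient(np.gradient(hrf))])

    def highpass(sig, fs, fc=0.014):
        """Crude high-pass near fc: remove a moving-average trend."""
        k = int(fs / fc)
        return sig - np.convolve(sig, np.ones(k) / k, mode="same")

    # Synthetic deoxyhemoglobin: negative-going task response + drift + noise.
    rng = np.random.default_rng(0)
    deoxy = -0.8 * hrf + 0.5 * np.sin(2 * np.pi * 0.005 * t) \
            + 0.3 * rng.standard_normal(n)

    yf = highpass(deoxy, fs)
    Xf = np.column_stack([X[:, 0]] + [highpass(c, fs) for c in X[:, 1:].T])
    betas, *_ = np.linalg.lstsq(Xf, yf, rcond=None)
    print(np.round(betas, 2))    # the HRF beta should recover ~ -0.8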
Overview of BioCreative II gene mention recognition.
Smith, Larry; Tanabe, Lorraine K; Ando, Rie Johnson nee; Kuo, Cheng-Ju; Chung, I-Fang; Hsu, Chun-Nan; Lin, Yu-Shi; Klinger, Roman; Friedrich, Christoph M; Ganchev, Kuzman; Torii, Manabu; Liu, Hongfang; Haddow, Barry; Struble, Craig A; Povinelli, Richard J; Vlachos, Andreas; Baumgartner, William A; Hunter, Lawrence; Carpenter, Bob; Tsai, Richard Tzong-Han; Dai, Hong-Jie; Liu, Feng; Chen, Yifei; Sun, Chengjie; Katrenko, Sophia; Adriaans, Pieter; Blaschke, Christian; Torres, Rafael; Neves, Mariana; Nakov, Preslav; Divoli, Anna; Maña-López, Manuel; Mata, Jacinto; Wilbur, W John
2008-01-01
Nineteen teams presented results for the Gene Mention Task at the BioCreative II Workshop. In this task, participants designed systems to identify substrings in sentences corresponding to gene name mentions. A variety of different methods were used, and the results varied, with the highest achieved F1 score being 0.8721. Here we present brief descriptions of all the methods used and a statistical analysis of the results. We also demonstrate that, by combining the results from all submissions, an F score of 0.9066 is feasible, and furthermore that the best result makes use of the lowest scoring submissions. PMID:18834493
Lather (Interior Systems Mechanic). Occupational Analyses Series.
ERIC Educational Resources Information Center
Chapman, Mike; Chapman, Carol; MacLean, Margaret
This analysis covers tasks performed by a lather, an occupational title some provinces and territories of Canada have also identified as drywall and acoustical mechanic; interior systems installer; and interior systems mechanic. A guide to analysis discusses development, structure, and validation method; scope of the occupation; trends; and…
Auditory Scene Analysis: An Attention Perspective
ERIC Educational Resources Information Center
Sussman, Elyse S.
2017-01-01
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal…
Bricklayer. Occupational Analyses Series.
ERIC Educational Resources Information Center
Cap, Orest; Cap, Ihor; Semenovych, Viktor
This analysis covers tasks performed by a bricklayer, an occupational title some provinces and territories of Canada have also identified as bricklayer-mason, brick and stone mason, and mason. A guide to analysis discusses development, structure, and validation method; scope of the occupation; trends; and safety. To facilitate understanding the…
Mathematical and information maintenance of biometric systems
NASA Astrophysics Data System (ADS)
Boriev, Z.; Sokolov, S.; Nyrkov, A.; Nekrasova, A.
2016-04-01
This article describes the different mathematical methods for processing biometric data. A brief overview of methods for personality recognition by means of a signature is conducted. Mathematical solutions of a dynamic authentication method are considered. Recommendations on use of certain mathematical methods, depending on specific tasks, are provided. Based on the conducted analysis of software and the choice made in favor of the wavelet analysis, a brief basis for its use in the course of software development for biometric personal identification is given for the purpose of its practical application.
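As a sketch of the wavelet-analysis step for dynamic signature data, the snippet below decomposes a synthetic pen-velocity signal with PyWavelets; the 'db4' wavelet, decomposition level, and energy features are illustrative choices, not the authors' configuration.

```python
# Minimal sketch of wavelet analysis for a dynamic signature signal
# (e.g., pen-tip velocity over time). The signal is synthetic; 'db4'
# and the decomposition level are illustrative choices.
import numpy as np
import pywt  # PyWavelets

t = np.linspace(0, 2, 1024)
velocity = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)  # fake pen velocity

coeffs = pywt.wavedec(velocity, "db4", level=4)  # multilevel discrete wavelet transform
# Energy per sub-band is one common feature vector for matching signatures.
features = [float(np.sum(c ** 2)) for c in coeffs]
print("sub-band energies:", [round(e, 1) for e in features])
```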
2015-10-01
capability to meet the task to the standard under the condition, nothing more or less, else the funding is wasted. Also, that funding for the...bin to segregate gaps qualitatively before the gap value model determined preference among gaps within the bins. Computation of a gap’s...for communication, interpretation, or processing by humans or by automatic means (as it pertains to modeling and simulation). Delphi Method -- a
Scotch, Matthew; Parmanto, Bambang; Monaco, Valerie
2008-01-01
Background Data analysis in community health assessment (CHA) involves the collection, integration, and analysis of large numerical and spatial data sets in order to identify health priorities. Geographic Information Systems (GIS) enable management and analysis of spatial data, but their traditional database architecture limits analysis of numerical data. On-Line Analytical Processing (OLAP) is a multidimensional data warehouse designed to facilitate querying of large numerical data. Coupling the spatial capabilities of GIS with the numerical analysis of OLAP might enhance CHA data analysis. OLAP-GIS systems have been developed by university researchers and corporations, yet their potential for CHA data analysis is not well understood. To evaluate the potential of an OLAP-GIS decision support system for CHA problem solving, we compared OLAP-GIS to the standard information technology (IT) currently used by many public health professionals. Methods SOVAT, an OLAP-GIS decision support system developed at the University of Pittsburgh, was compared against current IT for CHA data analysis. For this study, current IT was considered the combined use of SPSS and GIS ("SPSS-GIS"). Graduate students, researchers, and faculty in the health sciences at the University of Pittsburgh were recruited. Each round consisted of: an instructional video of the system being evaluated, two practice tasks, five assessment tasks, and one post-study questionnaire. Objective and subjective measurements included task completion time, success in answering the tasks, and system satisfaction. Results Thirteen individuals participated. Inferential statistics were analyzed using linear mixed model analysis. SOVAT differed significantly (α = .01) from SPSS-GIS in satisfaction and time (p < .002). Descriptive results indicated that participants had greater success in answering the tasks when using SOVAT as compared to SPSS-GIS. Conclusion Using SOVAT, tasks were completed more efficiently, with a higher rate of success, and with greater satisfaction, than with the combined use of SPSS and GIS. The results from this study indicate a potential for OLAP-GIS decision support systems as a valuable tool for CHA data analysis. PMID:18541037
A Human Factors Analysis of EVA Time Requirements
NASA Technical Reports Server (NTRS)
Pate, Dennis W.
1997-01-01
Human Factors Engineering (HFE) is a discipline whose goal is to engineer a safer, more efficient interface between humans and machines. HFE makes use of a wide range of tools and techniques to fulfill this goal. One of these tools is known as motion and time study, a technique used to develop time standards for given tasks. During the summer of 1995, a human factors motion and time study was initiated with the goals of developing a database of EVA task times and developing a method of utilizing the database to predict how long an EVA should take. Initial development relied on the EVA activities performed during the STS-61 (Hubble) mission. The first step of the study was to become familiar with EVAs, the previous task-time studies, and documents produced on EVAs. After reviewing these documents, an initial set of task primitives and task-time modifiers was developed. Data were collected from videotaped footage of two entire STS-61 EVA missions and portions of several others, each with two EVA astronauts. Feedback from the analysis of the data was used to further refine the primitives and modifiers used. The project was continued during the summer of 1996, during which data on human errors were also collected and analyzed. Additional data from the STS-71 mission were also collected. Analysis of variance techniques for categorical data were used to determine which factors may affect the primitive times and how much of an effect they have. Probability distributions for the various tasks were also generated. Further analysis of the modifiers and interactions is planned.
Task exposures in an office environment: a comparison of methods.
Van Eerd, Dwayne; Hogg-Johnson, Sheilah; Mazumder, Anjali; Cole, Donald; Wells, Richard; Moore, Anne
2009-10-01
Task-related factors such as frequency and duration are associated with musculoskeletal disorders in office settings. The primary objective was to compare various task recording methods as measures of exposure in an office workplace. A total of 41 workers from different jobs were recruited from a large urban newspaper (71% female; mean age 41 years, SD 9.6). Questionnaire, task diaries, direct observation and video methods were used to record tasks. A common set of task codes was used across methods. Different estimates of task duration, number of tasks and task transitions arose from the different methods. Self-report methods did not consistently result in longer task duration estimates. Methodological issues could explain some of the differences in estimates observed between methods. It was concluded that different task recording methods result in different estimates of exposure, likely due to different exposure constructs. This work addresses issues of exposure measurement in office environments. It is of relevance to ergonomists/researchers interested in how to best assess the risk of injury among office workers. The paper discusses the trade-offs between precision, accuracy and burden in the collection of computer task-based exposure measures and the different underlying constructs captured by each method.
Burge, Johannes
2017-01-01
Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and neurophysiological performance, we expect that task-specific methods for feature learning like AMA will become increasingly important. PMID:28178266
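The following is a loose, simplified sketch of the filter-learning idea only, a plain SGD loop on squared estimation error, and not the authors' AMA-SGD implementation; the stimuli, noise model, and cost function are invented stand-ins for the paper's setup.

```python
# Loose sketch of learning a linear filter (receptive field) whose response
# predicts a latent variable, via stochastic gradient descent on squared
# estimation error. Not the authors' AMA code; everything here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 2000
latent = rng.uniform(-1, 1, n)                  # e.g., disparity per stimulus
basis = rng.standard_normal(d)
stimuli = np.outer(latent, basis) + 0.5 * rng.standard_normal((n, d))

w = 0.01 * rng.standard_normal(d)               # filter weights
lr = 1e-3
for epoch in range(20):
    for i in rng.permutation(n):
        r = stimuli[i] @ w + 0.05 * rng.standard_normal()  # noisy response
        err = r - latent[i]                                 # estimation error
        w -= lr * err * stimuli[i]                          # SGD step
print("correlation of filter response with latent:",
      round(np.corrcoef(stimuli @ w, latent)[0, 1], 3))
```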
Vera, Angelina M; Russo, Michael; Mohsin, Adnan; Tsuda, Shawn
2014-12-01
Laparoscopic skills training has evolved over recent years. However, conveying a mentor's directions using conventional methods, without realistic on-screen visual cues, can be difficult and confusing. To facilitate laparoscopic skill transference, an augmented reality telementoring (ART) platform was designed to overlay the instruments of a mentor onto the trainee's laparoscopic monitor. The aim of this study was to compare the effectiveness of this new teaching modality to traditional methods in novices performing an intracorporeal suturing task. Nineteen pre-medical and medical students were randomized into traditional mentoring (n = 9) and ART (n = 10) groups for a laparoscopic suturing and knot-tying task. Subjects received either traditional mentoring or ART for 1 h on the validated fundamentals of laparoscopic surgery intracorporeal suturing task. Suturing tasks were recorded and scored for time and errors. Results were analyzed using means, standard deviation, power regression analysis, correlation coefficient, analysis of variance, and Student's t test. Using Wright's cumulative average model (Y = aX^b), the learning curve slope was significantly steeper, demonstrating faster skill acquisition, for the ART group (b = -0.567, r^2 = 0.92) than the control group (b = -0.453, r^2 = 0.74). At the end of 10 repetitions or 1 h of practice, the ART group was faster versus traditional (mean 167.4 vs. 242.4 s, p = 0.014). The ART group also had fewer fails (8) than the traditional group (13). The ART platform may be a more effective training technique in teaching laparoscopic skills to novices compared to traditional methods. ART conferred a shorter learning curve, which was more pronounced in the first 4 trials. ART reduced the number of failed attempts and resulted in faster suture times by the end of the training session. ART may be a more effective training tool in laparoscopic surgical training for complex tasks than traditional methods.
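Wright's model Y = aX^b can be fit by ordinary least squares in log-log space; the sketch below does so on invented trial times, not the study's data.

```python
# Sketch of fitting Wright's cumulative average model Y = a * X**b to trial
# times via log-log least squares. Times are invented, not study data.
import numpy as np

trial = np.arange(1, 11)
time_s = np.array([420, 360, 330, 300, 290, 270, 260, 250, 245, 240], float)

b, log_a = np.polyfit(np.log(trial), np.log(time_s), 1)
a = np.exp(log_a)
pred = a * trial ** b
r2 = 1 - np.sum((time_s - pred) ** 2) / np.sum((time_s - time_s.mean()) ** 2)
print(f"a = {a:.1f}, b = {b:.3f}, r^2 = {r2:.2f}")  # steeper negative b = faster learning
```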
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1995-12-31
The objective of this research is to develop cost-effective surfactant flooding technology by using simulation studies to evaluate and optimize alternative design strategies, taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. In this quarter, we have continued working on Task 2 to optimize surfactant flooding design and have added economic analysis to the optimization process. An economic model was developed using a spreadsheet and the discounted cash flow (DCF) method of economic analysis. The model was designed specifically for a domestic onshore surfactant flood and has been used to economically evaluate previous work that used a technical approach to optimization. The DCF model outputs common economic decision-making criteria, such as net present value (NPV), internal rate of return (IRR), and payback period.
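The three DCF criteria named above are straightforward to compute; the sketch below does so for an invented cash-flow stream and is not the report's spreadsheet model.

```python
# Sketch of the discounted-cash-flow criteria: NPV, IRR, and payback period
# for a hypothetical onshore flood. Cash flows are invented; year 0 is capex.
from scipy.optimize import brentq

cash = [-10e6, 3e6, 4e6, 4e6, 3e6, 2e6]  # USD per year, hypothetical

def npv(rate, flows):
    return sum(c / (1 + rate) ** t for t, c in enumerate(flows))

print("NPV @ 10%:", round(npv(0.10, cash)))
irr = brentq(lambda r: npv(r, cash), 1e-6, 1.0)  # discount rate where NPV = 0
print("IRR:", round(irr * 100, 1), "%")

cum, payback = 0.0, None
for t, c in enumerate(cash):
    cum += c
    if cum >= 0 and payback is None:
        payback = t  # first year cumulative cash turns non-negative
print("payback year:", payback)
```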
Life Sciences Research Facility automation requirements and concepts for the Space Station
NASA Technical Reports Server (NTRS)
Rasmussen, Daryl N.
1986-01-01
An evaluation is made of the methods and preliminary results of a study on prospects for the automation of the NASA Space Station's Life Sciences Research Facility. In order to remain within current Space Station resource allocations, approximately 85 percent of planned life science experiment tasks must be automated; these tasks encompass specimen care and feeding, cage and instrument cleaning, data acquisition and control, sample analysis, waste management, instrument calibration, materials inventory and management, and janitorial work. Task automation will free crews for specimen manipulation, tissue sampling, data interpretation and communication with ground controllers, and experiment management.
How Acute Total Sleep Loss Affects the Attending Brain: A Meta-Analysis of Neuroimaging Studies
Ma, Ning; Dinges, David F.; Basner, Mathias; Rao, Hengyi
2015-01-01
Study Objectives: Attention is a cognitive domain that can be severely affected by sleep deprivation. Previous neuroimaging studies have used different attention paradigms and reported both increased and reduced brain activation after sleep deprivation. However, due to large variability in sleep deprivation protocols, task paradigms, experimental designs, characteristics of subject populations, and imaging techniques, there is no consensus regarding the effects of sleep loss on the attending brain. The aim of this meta-analysis was to identify brain activations that are commonly altered by acute total sleep deprivation across different attention tasks. Design: Coordinate-based meta-analysis of neuroimaging studies of performance on attention tasks during experimental sleep deprivation. Methods: The current version of the activation likelihood estimation (ALE) approach was used for meta-analysis. The authors searched published articles and identified 11 sleep deprivation neuroimaging studies using different attention tasks with a total of 185 participants, equaling 81 foci for ALE analysis. Results: The meta-analysis revealed significantly reduced brain activation in multiple regions following sleep deprivation compared to rested wakefulness, including bilateral intraparietal sulcus, bilateral insula, right prefrontal cortex, medial frontal cortex, and right parahippocampal gyrus. Increased activation was found only in bilateral thalamus after sleep deprivation compared to rested wakefulness. Conclusion: Acute total sleep deprivation decreases brain activation in the fronto-parietal attention network (prefrontal cortex and intraparietal sulcus) and in the salience network (insula and medial frontal cortex). Increased thalamic activation after sleep deprivation may reflect a complex interaction between the de-arousing effects of sleep loss and the arousing effects of task performance on thalamic activity. Citation: Ma N, Dinges DF, Basner M, Rao H. How acute total sleep loss affects the attending brain: a meta-analysis of neuroimaging studies. SLEEP 2015;38(2):233–240. PMID:25409102
TEES 2.2: Biomedical Event Extraction for Diverse Corpora.
Björne, Jari; Salakoski, Tapio
2015-01-01
The Turku Event Extraction System (TEES) is a text mining program developed for the extraction of events, complex biomedical relationships, from scientific literature. Based on a graph-generation approach, the system detects events with the use of a rich feature set built via dependency parsing. The TEES system has achieved record performance in several of the shared tasks of its domain, and continues to be used in a variety of biomedical text mining tasks. The TEES system was quickly adapted to the BioNLP'13 Shared Task in order to provide a public baseline for derived systems. An automated approach was developed for learning the underlying annotation rules of event type, allowing immediate adaptation to the various subtasks, and leading to a first place in four out of eight tasks. The system for the automated learning of annotation rules is further enhanced in this paper to the point of requiring no manual adaptation to any of the BioNLP'13 tasks. Further, the scikit-learn machine learning library is integrated into the system, bringing a wide variety of machine learning methods usable with TEES in addition to the default SVM. A scikit-learn ensemble method is also used to analyze the importances of the features in the TEES feature sets. The TEES system was introduced for the BioNLP'09 Shared Task and has since then demonstrated good performance in several other shared tasks. By applying the current TEES 2.2 system to multiple corpora from these past shared tasks an overarching analysis of the most promising methods and possible pitfalls in the evolving field of biomedical event extraction are presented.
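The ensemble-based feature-importance analysis mentioned above can be sketched with scikit-learn as follows; synthetic data stand in for TEES's large, sparse feature sets.

```python
# Sketch of ensemble-based feature-importance analysis with scikit-learn,
# on synthetic data (TEES's real feature sets are far larger and sparse).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

ranked = sorted(enumerate(clf.feature_importances_), key=lambda p: -p[1])
for idx, imp in ranked[:5]:
    print(f"feature {idx}: importance {imp:.3f}")
```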
Structural synthesis: Precursor and catalyst
NASA Technical Reports Server (NTRS)
Schmit, L. A.
1984-01-01
More than twenty-five years have elapsed since it was recognized that a rather general class of structural design optimization tasks could be properly posed as an inequality constrained minimization problem. It is suggested that, independent of primary discipline area, it will be useful to think about: (1) posing design problems in terms of an objective function and inequality constraints; (2) generating design oriented approximate analysis methods (giving special attention to behavior sensitivity analysis); (3) distinguishing between decisions that lead to an analysis model and those that lead to a design model; (4) finding ways to generate a sequence of approximate design optimization problems that capture the essential characteristics of the primary problem, while still having an explicit algebraic form that is matched to one or more of the established optimization algorithms; (5) examining the potential of optimum design sensitivity analysis to facilitate quantitative trade-off studies as well as participation in multilevel design activities. It should be kept in mind that multilevel methods are inherently well suited to a parallel mode of operation in computer terms, or to a division of labor between task groups in organizational terms. Based on structural experience with multilevel methods, general guidelines are suggested.
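Point (1) can be illustrated with a toy sizing problem posed exactly that way, an objective (weight) under inequality (stress) constraints; the geometry, loads, and allowables below are invented.

```python
# Toy structural sizing task: minimize the weight of a two-bar layout
# subject to a stress limit in each bar. All numbers are illustrative.
from scipy.optimize import minimize

L, rho, P, sigma_max = 1.0, 7850.0, 1e5, 250e6  # m, kg/m^3, N, Pa (assumed)

def weight(x):  # x = cross-sectional areas of the two bars (m^2)
    return rho * L * (x[0] + x[1])

cons = [  # stress must stay below the allowable (fun >= 0 means feasible)
    {"type": "ineq", "fun": lambda x: sigma_max - 0.6 * P / x[0]},
    {"type": "ineq", "fun": lambda x: sigma_max - 0.4 * P / x[1]},
]
res = minimize(weight, x0=[1e-3, 1e-3], bounds=[(1e-6, None)] * 2,
               constraints=cons, method="SLSQP")
print("optimal areas (m^2):", res.x, " weight (kg):", round(res.fun, 2))
```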
Lazaris, Charalampos; Kelly, Stephen; Ntziachristos, Panagiotis; Aifantis, Iannis; Tsirigos, Aristotelis
2017-01-05
Chromatin conformation capture techniques have evolved rapidly over the last few years and have provided new insights into genome organization at an unprecedented resolution. Analysis of Hi-C data is complex and computationally intensive, involving multiple tasks and requiring robust quality assessment. This has led to the development of several tools and methods for processing Hi-C data. However, most of the existing tools do not cover all aspects of the analysis and offer only a few quality assessment options. Additionally, the availability of a multitude of tools makes scientists wonder how these tools and associated parameters can be optimally used, and how potential discrepancies can be interpreted and resolved. Most importantly, investigators need to be assured that slight changes in parameters and/or methods do not affect the conclusions of their studies. To address these issues (compare, explore and reproduce), we introduce HiC-bench, a configurable computational platform for comprehensive and reproducible analysis of Hi-C sequencing data. HiC-bench performs all common Hi-C analysis tasks, such as alignment, filtering, contact matrix generation and normalization, identification of topological domains, and scoring and annotation of specific interactions, using both published tools and our own. We have also embedded various tasks that perform quality assessment and visualization. HiC-bench is implemented as a data flow platform with an emphasis on analysis reproducibility. Additionally, the user can readily perform parameter exploration and comparison of different tools in a combinatorial manner that takes into account all desired parameter settings in each pipeline task. This unique feature facilitates the design and execution of complex benchmark studies that may involve combinations of multiple tool/parameter choices in each step of the analysis. To demonstrate the usefulness of our platform, we performed a comprehensive benchmark of existing and new TAD callers, exploring different matrix correction methods, parameter settings and sequencing depths. Users can extend our pipeline by adding more tools as they become available. HiC-bench is an easy-to-use and extensible platform for comprehensive analysis of Hi-C datasets. We expect that it will facilitate current analyses and help scientists formulate and test new hypotheses in the field of three-dimensional genome organization.
Dasgupta, Aritra; Lee, Joon-Yong; Wilson, Ryan; Lafrance, Robert A; Cramer, Nick; Cook, Kristin; Payne, Samuel
2017-01-01
Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. An open research question, however, is whether domain experts can completely trust the outputs and operations on the machine side when they analyze their data. Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on the relationships between the familiarity of an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists, in which we compare the variation of the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust that is comparable with that in a conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on the trustworthiness of visual analytic systems.
Mills, Kelly A; Markun, Leslie C; Luciano, Marta San; Rizk, Rami; Allen, I Elaine; Racine, Caroline A; Starr, Philip A; Alberts, Jay L; Ostrem, Jill L
2015-01-01
Objective Subthalamic nucleus (STN) deep brain stimulation (DBS) can improve motor complications of Parkinson's disease (PD) but may worsen specific cognitive functions. The effect of STN DBS on cognitive function in dystonia patients is less clear. Previous reports indicate that bilateral STN stimulation in patients with PD amplifies the decrement in cognitive-motor dual-task performance seen when moving from a single-task to a dual-task paradigm. We aimed to determine if the effect of bilateral STN DBS on dual-task performance in patients with isolated dystonia, who have less cognitive impairment and no dementia, is similar to that seen in PD. Methods Eight patients with isolated, predominantly cervical dystonia treated with bilateral STN DBS, with average dystonia duration of 10.5 years and Montreal Cognitive Assessment score of 26.5, completed working memory (n-back) and motor (forced-maintenance) tests under single-task and dual-task conditions while on and off DBS. Results A multivariate, repeated-measures analysis of variance showed no effect of stimulation status (On vs Off) on working memory (F=0.75, p=0.39) or motor function (F=0.22, p=0.69) when performed under single-task conditions, though as working memory task difficulty increased, stimulation disrupted the accuracy of force-tracking. There was a very small worsening in working memory performance (F=9.14, p=0.019) when moving from single-task to dual-task conditions when using the ‘dual-task loss’ analysis. Conclusions This study suggests the effect of STN DBS on working memory and attention may be much less consequential in patients with dystonia than has been reported in PD. PMID:25012202
Bayesian analysis of multimethod ego-depletion studies favours the null hypothesis.
Etherton, Joseph L; Osborne, Randall; Stephenson, Katelyn; Grace, Morgan; Jones, Chas; De Nadai, Alessandro S
2018-04-01
Ego-depletion refers to the purported decrease in performance on a task requiring self-control after engaging in a previous task involving self-control, with self-control proposed to be a limited resource. Despite many published studies consistent with this hypothesis, recurrent null findings within our laboratory and indications of publication bias have called into question the validity of the depletion effect. This project used three depletion protocols, involving three different depleting initial tasks followed by three different self-control tasks as dependent measures (total n = 840). For each method, effect sizes were not significantly different from zero. When data were aggregated across the three different methods and examined meta-analytically, the pooled effect size was not significantly different from zero (for all priors evaluated, Hedges' g = 0.10 with 95% credibility interval of [-0.05, 0.24]) and Bayes factors reflected strong support for the null hypothesis (Bayes factor > 25 for all priors evaluated). © 2018 The British Psychological Society.
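As a simplified frequentist stand-in for the Bayesian aggregation reported above, the sketch below pools invented per-method effect sizes by inverse-variance weighting.

```python
# Sketch of inverse-variance pooling of per-method effect sizes (Hedges' g),
# a simplified stand-in for the Bayesian meta-analysis; numbers are invented.
import numpy as np

g = np.array([0.12, 0.05, 0.13])   # per-method effects (invented)
se = np.array([0.12, 0.11, 0.12])  # standard errors (invented)

w = 1 / se ** 2                    # inverse-variance weights
pooled = np.sum(w * g) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled g = {pooled:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```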
Applications of neural networks to landmark detection in 3-D surface data
NASA Astrophysics Data System (ADS)
Arndt, Craig M.
1992-09-01
The problem of identifying key landmarks in 3-dimensional surface data is of considerable interest in solving a number of difficult real-world tasks, including object recognition and image processing. The specific problem that we address in this research is to identify the specific landmarks (anatomical) in human surface data. This is a complex task, currently performed visually by an expert human operator. In order to replace these human operators and increase reliability of the data acquisition, we need to develop a computer algorithm which will utilize the interrelations between the 3-dimensional data to identify the landmarks of interest. The current presentation describes a method for designing, implementing, training, and testing a custom architecture neural network which will perform the landmark identification task. We discuss the performance of the net in relationship to human performance on the same task and how this net has been integrated with other AI and traditional programming methods to produce a powerful analysis tool for computer anthropometry.
Do you see what I see? Mobile eye-tracker contextual analysis and inter-rater reliability.
Stuart, S; Hunt, D; Nell, J; Godfrey, A; Hausdorff, J M; Rochester, L; Alcock, L
2018-02-01
Mobile eye-trackers are currently used during real-world tasks (e.g. gait) to monitor visual and cognitive processes, particularly in ageing and Parkinson's disease (PD). However, contextual analysis involving fixation locations during such tasks is rarely performed due to its complexity. This study adapted a validated algorithm and developed a classification method to semi-automate contextual analysis of mobile eye-tracking data. We further assessed inter-rater reliability of the proposed classification method. A mobile eye-tracker recorded eye-movements during walking in five healthy older adult controls (HC) and five people with PD. Fixations were identified using a previously validated algorithm, which was adapted to provide still images of fixation locations (n = 116). The fixation location was manually identified by two raters (DH, JN), who classified the locations. Cohen's kappa correlation coefficients determined the inter-rater reliability. The algorithm successfully provided still images for each fixation, allowing manual contextual analysis to be performed. The inter-rater reliability for classifying the fixation location was high for both PD (kappa = 0.80, 95% agreement) and HC groups (kappa = 0.80, 91% agreement), which indicated a reliable classification method. This study developed a reliable semi-automated contextual analysis method for gait studies in HC and PD. Future studies could adapt this methodology for various gait-related eye-tracking studies.
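The agreement statistic used here is standard; below is a minimal sketch with invented fixation labels for the two raters (DH, JN).

```python
# Sketch of the inter-rater agreement computation: two raters classify each
# fixation image into a location category; Cohen's kappa measures agreement
# beyond chance. The labels below are invented.
from sklearn.metrics import cohen_kappa_score

rater_dh = ["path", "path", "obstacle", "wall", "path", "other", "obstacle"]
rater_jn = ["path", "path", "obstacle", "path", "path", "other", "obstacle"]

kappa = cohen_kappa_score(rater_dh, rater_jn)
agreement = sum(a == b for a, b in zip(rater_dh, rater_jn)) / len(rater_dh)
print(f"kappa = {kappa:.2f}, raw agreement = {agreement:.0%}")
```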
Genotype-phenotype association study via new multi-task learning model
Huo, Zhouyuan; Shen, Dinggang
2018-01-01
Research on the associations between genetic variations and imaging phenotypes is developing with the advance in high-throughput genotype and brain image techniques. Regression analysis of single nucleotide polymorphisms (SNPs) and imaging measures as quantitative traits (QTs) has been proposed to identify the quantitative trait loci (QTL) via multi-task learning models. Recent studies consider the interlinked structures within SNPs and imaging QTs through group lasso, e.g. the ℓ2,1-norm, leading to better predictive results and insights into SNPs. However, group sparsity is not enough for representing the correlation between multiple tasks, and ℓ2,1-norm regularization is not robust either. In this paper, we propose a new multi-task learning model to analyze the associations between SNPs and QTs. We suppose that low-rank structure is also beneficial to uncover the correlation between genetic variations and imaging phenotypes. Finally, we conduct regression analysis of SNPs and QTs. Experimental results show that our model is more accurate in prediction than the compared methods and presents new insights into SNPs. PMID:29218896
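A minimal sketch of the two regularizers discussed, the ℓ2,1 norm (row-wise group sparsity) and the nuclear norm (low rank), on a synthetic coefficient matrix:

```python
# Sketch of the l2,1-norm regularizer: for a coefficient matrix W
# (SNPs x imaging QTs), take the l2 norm of each row (all tasks for one SNP)
# and sum, encouraging whole rows to shrink to zero jointly. The nuclear
# norm shown alongside is the usual low-rank penalty. Data are synthetic.
import numpy as np

def l21_norm(W):
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

rng = np.random.default_rng(1)
W = rng.standard_normal((100, 10))  # 100 SNPs, 10 imaging QTs (synthetic)
W[20:] = 0.0                        # group-sparse solution: 20 active SNPs
print("l2,1 norm:", round(l21_norm(W), 2))
print("nuclear norm:", round(np.sum(np.linalg.svd(W, compute_uv=False)), 2))
```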
Knowledge Acquisition Using Linguistic-Based Knowledge Analysis
Daniel L. Schmoldt
1998-01-01
Most knowledge-based system development efforts include acquiring knowledge from one or more sources. Difficulties associated with this knowledge acquisition task are readily acknowledged by most researchers. While a variety of knowledge acquisition methods have been reported, little has been done to organize those different methods and to suggest how to apply them...
DOT National Transportation Integrated Search
2012-12-01
Current AASHTO provisions for load rating flat-slab concrete bridges use the equivalent strip width method, which is regarded as overly conservative compared to more advanced analysis methods and field live load testing. It has been shown that li...
Shinba, Toshikazu; Kariya, Nobutoshi; Matsui, Yasue; Ozawa, Nobuyuki; Matsuda, Yoshiki; Yamamoto, Ken-Ichi
2008-10-01
Previous studies have shown that heart rate variability (HRV) measurement is useful in investigating the pathophysiology of various psychiatric disorders. The present study further examined its usefulness in evaluating the mental health of normal subjects with respect to anxiety and depressiveness. Heart rate (HR) and HRV were measured tonometrically at the wrist in 43 normal subjects, not only in the resting condition but also during a task (random number generation) to assess responsiveness. For HRV measurement, high-frequency (HF; 0.15-0.4 Hz) and low-frequency (LF; 0.04-0.15 Hz) components of HRV were obtained using MemCalc, a time series analysis technique that combines a non-linear least squares method with the maximum entropy method. For psychological evaluation of anxiety and depressiveness, two self-report questionnaires were used: the State-Trait Anxiety Inventory (STAI) and the Self-Rating Depression Scale (SDS). No significant relation was observed between the HR and HRV indices and the psychological scores in either the resting or the task condition. With task application, HF decreased while LF/HF and HR increased, and significant correlations with the psychological scores were found in the responsiveness to the task, measured as the ratio of HRV and HR indices during the task to those at rest (task/rest ratio). A positive relationship was found between the task/rest ratio for HF and the STAI and SDS scores. The task/rest ratio of HR was negatively correlated with the STAI-state score. A decreased HRV response to task application is thus related to anxiety and depressiveness, and decreased autonomic responsiveness could serve as a sign of psychological dysfunction.
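A rough sketch of the frequency-domain computation follows, using a Welch periodogram in place of the paper's MemCalc (maximum entropy) estimator; the RR series and resampling rate are synthetic assumptions.

```python
# Sketch of frequency-domain HRV: integrate power in the LF (0.04-0.15 Hz)
# and HF (0.15-0.4 Hz) bands of an evenly resampled RR tachogram. A Welch
# periodogram stands in for the paper's maximum entropy method.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

rng = np.random.default_rng(2)
rr = 0.8 + 0.05 * rng.standard_normal(300)  # RR intervals in seconds (synthetic)
t_beats = np.cumsum(rr)

fs = 4.0                                    # resampling rate (Hz), assumed
t_even = np.arange(t_beats[0], t_beats[-1], 1 / fs)
rr_even = interp1d(t_beats, rr)(t_even)     # evenly sampled tachogram

f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
lf_band = (f >= 0.04) & (f < 0.15)
hf_band = (f >= 0.15) & (f < 0.40)
lf = np.trapz(pxx[lf_band], f[lf_band])
hf = np.trapz(pxx[hf_band], f[hf_band])
print(f"LF = {lf:.2e}, HF = {hf:.2e}, LF/HF = {lf / hf:.2f}")
```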
Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance
NASA Technical Reports Server (NTRS)
Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.
2010-01-01
PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.
Executive Functions in Children with Specific Language Impairment: A Meta-Analysis
ERIC Educational Resources Information Center
Pauls, Laura J.; Archibald, Lisa M. D.
2016-01-01
Purpose: Mounting evidence demonstrates deficits in children with specific language impairment (SLI) beyond the linguistic domain. Using meta-analysis, this study examined differences in children with and without SLI on tasks measuring inhibition and cognitive flexibility. Method: Databases were searched for articles comparing children (4-14…
Insulator (Heat and Frost). Occupational Analyses Series.
ERIC Educational Resources Information Center
McRory, Aline; Ally, Mohamed
This analysis covers tasks performed by an insulator, an occupational title some provinces and territories of Canada have also identified as heat and frost insulator. A guide to analysis discusses development, structure, and validation method; scope of the occupation; trends; and safety. To facilitate understanding the nature of the occupation,…
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…
2004-01-01
Cognitive Task Analysis Abstract As Department of Defense (DoD) leaders rely more on modeling and simulation to provide information on which to base...capabilities and intent. Cognitive Task Analysis (CTA) Cognitive Task Analysis (CTA) is an extensive/detailed look at tasks and subtasks performed by a...Domain Analysis and Task Analysis: A Difference That Matters. In Cognitive Task Analysis , edited by J. M. Schraagen, S.
Task-Driven Comparison of Topic Models.
Alexander, Eric; Gleicher, Michael
2016-01-01
Topic modeling, a method of statistically extracting thematic content from a large collection of texts, is used for a wide variety of tasks within text analysis. Though there are a growing number of tools and techniques for exploring single models, comparisons between models are generally reduced to a small set of numerical metrics. These metrics may or may not reflect a model's performance on the analyst's intended task, and can therefore be insufficient to diagnose what causes differences between models. In this paper, we explore task-centric topic model comparison, considering how we can both provide detail for a more nuanced understanding of differences and address the wealth of tasks for which topic models are used. We derive comparison tasks from single-model uses of topic models, which predominantly fall into the categories of understanding topics, understanding similarity, and understanding change. Finally, we provide several visualization techniques that facilitate these tasks, including buddy plots, which combine color and position encodings to allow analysts to readily view changes in document similarity.
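One comparison primitive that such tasks build on, aligning topics across two models, can be sketched as follows; the corpus and model settings are toy stand-ins.

```python
# Sketch of a task-centric comparison primitive: fit two topic models with
# different settings and align their topics by cosine similarity of the
# topic-word distributions. Corpus and parameters are toy stand-ins.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stars galaxy telescope orbit", "galaxy orbit planet star",
        "election vote policy senate", "senate policy vote campaign",
        "neuron cortex brain imaging", "brain imaging cortex signal"] * 5

X = CountVectorizer().fit_transform(docs)
lda_a = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
lda_b = LatentDirichletAllocation(n_components=3, random_state=7).fit(X)

sim = cosine_similarity(lda_a.components_, lda_b.components_)
print("best-match similarity per topic in model A:", sim.max(axis=1).round(2))
```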
Li, Ya; Wang, Yongchun; Zhang, Baoqiang; Wang, Yonghui; Zhou, Xiaolin
2018-01-01
Dynamically evaluating the outcomes of our actions and thoughts is a fundamental cognitive ability. Given its excellent temporal resolution, the event-related potential (ERP) technology has been used to address this issue. The feedback-related negativity (FRN) component of ERPs has been studied intensively with the averaged linked mastoid reference method (LM). However, it is unknown whether FRN can be induced by an expectancy violation in an antonym relations context and whether LM is the most suitable reference approach. To address these issues, the current research directly compared the ERP components induced by expectancy violations in antonym expectation and gambling tasks with a within-subjects design and investigated the effect of the reference approach on the experimental effects. Specifically, we systematically compared the influence of the LM, reference electrode standardization technique (REST) and average reference (AVE) approaches on the amplitude, scalp distribution and magnitude of ERP effects as a function of expectancy violation type. The expectancy deviation in the antonym expectation task elicited an N400 effect that differed from the FRN effect induced in the gambling task; this difference was confirmed by all three reference methods. Both the amplitudes of the ERP effects (N400 and FRN) and their growth in magnitude as the expectancy violation increased were greater under the LM approach than under the REST approach, followed by the AVE approach. Based on the statistical results, the electrode sites that showed the N400 and FRN effects critically depended on the reference method, and the results of the REST analysis were consistent with previous ERP studies. Combined with evidence from simulation studies, we suggest that REST is an optimal reference method to be used in future ERP data analysis. PMID:29615858
Fractal dimension based damage identification incorporating multi-task sparse Bayesian learning
NASA Astrophysics Data System (ADS)
Huang, Yong; Li, Hui; Wu, Stephen; Yang, Yongchao
2018-07-01
Sensitivity to damage and robustness to noise are critical requirements for the effectiveness of structural damage detection. In this study, a two-stage damage identification method based on fractal dimension analysis and multi-task Bayesian learning is presented. A damage index based on Higuchi's fractal dimension (HFD) is first proposed, directly examining the time-frequency characteristics of local free vibration data of structures based on the irregularity sensitivity and noise robustness analysis of HFD. Katz's fractal dimension is then presented to analyze the abrupt irregularity change of the spatial curve of the displacement mode shape along the structure. At the second stage, the multi-task sparse Bayesian learning technique is employed to infer the final damage localization vector, which borrows strength across the two fractal-dimension-based damage indicators and also incorporates the prior knowledge that structural damage occurs at a limited number of locations in a structure in the absence of its collapse. To validate the capability of the proposed method, a steel beam and a bridge, named Yonghe Bridge, are analyzed as illustrative examples. The damage identification results demonstrate that the proposed method is capable of localizing single and multiple damages regardless of severity, and show superior robustness under heavy noise as well.
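A compact sketch of Higuchi's fractal dimension, the basis of the first-stage damage index, is given below; kmax and the test signal are illustrative choices.

```python
# Sketch of Higuchi's fractal dimension (HFD) for a 1-D vibration signal.
# kmax and the synthetic test signal are illustrative, not the paper's setup.
import numpy as np

def higuchi_fd(x, kmax=8):
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length for this offset m and scale k
            lm = (np.sum(np.abs(np.diff(x[idx]))) * (n - 1)
                  / ((len(idx) - 1) * k * k))
            lengths.append(lm)
        lk.append(np.mean(lengths))
    # FD is the slope of log L(k) against log(1/k)
    fd, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return fd

t = np.linspace(0, 1, 2000)
sig = np.sin(2 * np.pi * 12 * t) + 0.4 * np.random.randn(t.size)
print("HFD:", round(higuchi_fd(sig), 3))  # closer to 2 for noisier signals
```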
Analysis of pilot control strategy
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Hanson, G. D.; Jewell, W. F.; Clement, W. F.
1983-01-01
Methods for nonintrusive identification of pilot control strategy and task execution dynamics are presented along with examples based on flight data. The specific analysis technique is Nonintrusive Parameter Identification Procedure (NIPIP), which is described in a companion user's guide (NASA CR-170398). Quantification of pilot control strategy and task execution dynamics is discussed in general terms followed by a more detailed description of how NIPIP can be applied. The examples are based on flight data obtained from the NASA F-8 digital fly by wire airplane. These examples involve various piloting tasks and control axes as well as a demonstration of how the dynamics of the aircraft itself are identified using NIPIP. Application of NIPIP to the AFTI/F-16 flight test program is discussed. Recommendations are made for flight test applications in general and refinement of NIPIP to include interactive computer graphics.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-controlled parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer-controlled parameters were noted as shapes, dimensions, probability range factors, and cost. The structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both, which is also timely and economically verifiable, is proposed for static structures. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.
Human factors process failure modes and effects analysis (HF PFMEA) software tool
NASA Technical Reports Server (NTRS)
Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)
2011-01-01
Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. The likelihood of occurrence, detection, and correction of the human error is identified, as is the severity of its effect. From the likelihood of occurrence and the severity, the risk of potential harm is identified and compared with a risk threshold to determine the appropriateness of corrective measures.
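The scoring step might look like the following sketch; the scales, probabilities, and threshold are hypothetical, not the patented tool's values.

```python
# Hypothetical sketch of the risk-scoring step: combine likelihoods of
# occurrence, non-detection, and non-correction with severity, then compare
# against a threshold. All scales and values below are invented.
def risk_priority(p_occur, p_undetected, p_uncorrected, severity):
    """Expected-harm style score: probabilities in [0, 1], severity 1-10."""
    return p_occur * p_undetected * p_uncorrected * severity

task = {"activity": "connect hydraulic line", "verb": "connect",
        "error": "cross-connects fittings"}
score = risk_priority(p_occur=0.05, p_undetected=0.4, p_uncorrected=0.5,
                      severity=9)
THRESHOLD = 0.05  # hypothetical acceptance threshold
verdict = "needs corrective measures" if score > THRESHOLD else "acceptable"
print(f"{task['error']}: score = {score:.3f} -> {verdict}")
```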
Russ, Alissa L; Militello, Laura G; Glassman, Peter A; Arthur, Karen J; Zillich, Alan J; Weiner, Michael
2017-05-03
Cognitive task analysis (CTA) can yield valuable insights into healthcare professionals' cognition and inform system design to promote safe, quality care. Our objective was to adapt CTA (specifically, the critical decision method) to investigate patient safety incidents, overcome barriers to implementing this method, and facilitate more widespread use of cognitive task analysis in healthcare. We adapted CTA to facilitate recruitment of healthcare professionals and developed a data collection tool to capture incidents as they occurred. We also leveraged the electronic health record (EHR) to expand data capture and used EHR-stimulated recall to aid reconstruction of safety incidents. We investigated 3 categories of medication-related incidents: adverse drug reactions, drug-drug interactions, and drug-disease interactions. Healthcare professionals submitted incidents, and a subset of incidents was selected for CTA. We analyzed several outcomes to characterize incident capture and completed CTA interviews. We captured 101 incidents. Eighty incidents (79%) met eligibility criteria. We completed 60 CTA interviews, 20 for each incident category. Capturing incidents before interviews allowed us to shorten the interview duration and reduced reliance on healthcare professionals' recall. Incorporating the EHR into CTA enriched data collection. The adapted CTA technique was successful in capturing specific categories of safety incidents. Our approach may be especially useful for investigating safety incidents that healthcare professionals "fix and forget." Our innovations to CTA are expected to expand the application of this method in healthcare and inform a wide range of studies on clinical decision making and patient safety.
A formal and data-based comparison of measures of motor-equivalent covariation.
Verrel, Julius
2011-09-15
Different analysis methods have been developed for assessing motor-equivalent organization of movement variability. In the uncontrolled manifold (UCM) method, the structure of variability is analyzed by comparing goal-equivalent and non-goal-equivalent variability components at the level of elemental variables (e.g., joint angles). In contrast, in the covariation by randomization (CR) approach, motor-equivalent organization is assessed by comparing variability at the task level between empirical and decorrelated surrogate data. UCM effects can be due to both covariation among elemental variables and selective channeling of variability to elemental variables with low task sensitivity ("individual variation"), suggesting a link between the UCM and CR method. However, the precise relationship between the notion of covariation in the two approaches has not been analyzed in detail yet. Analysis of empirical and simulated data from a study on manual pointing shows that in general the two approaches are not equivalent, but the respective covariation measures are highly correlated (ρ > 0.7) for two proposed definitions of covariation in the UCM context. For one-dimensional task spaces, a formal comparison is possible and in fact the two notions of covariation are equivalent. In situations in which individual variation does not contribute to UCM effects, for which necessary and sufficient conditions are derived, this entails the equivalence of the UCM and CR analysis. Implications for the interpretation of UCM effects are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
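For reference, a minimal sketch of the UCM decomposition itself, splitting variance between the task Jacobian's null space and its complement, follows; the Jacobian and trial data are synthetic.

```python
# Sketch of the UCM variance decomposition: project joint-angle deviations
# onto the null space of the task Jacobian (goal-equivalent, "UCM") and its
# complement (non-goal-equivalent). Jacobian and data are synthetic.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
J = rng.standard_normal((2, 6))      # 2-D task space, 6 joint angles
N = null_space(J)                    # basis of goal-equivalent directions

dev = rng.standard_normal((200, 6))  # per-trial deviations from mean config
ucm = dev @ N                        # components within the UCM
orth = dev - ucm @ N.T               # components that affect the task

v_ucm = np.sum(ucm ** 2) / (N.shape[1] * len(dev))         # per-DOF variance
v_orth = np.sum(orth ** 2) / ((6 - N.shape[1]) * len(dev))
print(f"V_ucm = {v_ucm:.3f}, V_orth = {v_orth:.3f}, ratio = {v_ucm / v_orth:.2f}")
```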
Quantitative evaluation of muscle synergy models: a single-trial task decoding approach
Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano
2013-01-01
Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods considering the total variance of muscle patterns (VAF based metrics), our approach focuses on variance discriminating execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task decoding based metric evaluates quantitatively the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space. PMID:23471195
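Synergy extraction and the usual VAF check can be sketched with scikit-learn's NMF on synthetic EMG, leaving aside the paper's decoding metric.

```python
# Sketch of synchronous synergy extraction via non-negative matrix
# factorization, with the Variance Accounted For (VAF) check. EMG data
# (samples x muscles) are synthetic; the decoding metric is not reproduced.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
W_true = rng.random((3, 12))                  # 3 synergies over 12 muscles
C_true = rng.random((100, 3))                 # activations across 100 samples
emg = C_true @ W_true + 0.05 * rng.random((100, 12))

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
C = model.fit_transform(emg)                  # synergy activations
W = model.components_                         # muscle weightings

vaf = 1 - np.sum((emg - C @ W) ** 2) / np.sum(emg ** 2)
print(f"VAF with 3 synergies: {vaf:.3f}")
```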
NASA Technical Reports Server (NTRS)
Estes, Samantha; Parker, Nelson C. (Technical Monitor)
2001-01-01
Virtual reality and simulation applications are becoming widespread in human task analysis. These programs have many benefits for the Human Factors Engineering field. Not only do creating and using virtual environments for human engineering analyses save money and time, this approach also promotes user experimentation and provides increased quality of analyses. This paper explains the human engineering task analysis performed on the Environmental Control and Life Support System (ECLSS) space station rack and its Distillation Assembly (DA) subsystem using EAI's human modeling simulation software, Jack. When installed on the International Space Station (ISS), ECLSS will provide the life and environment support needed to adequately sustain crew life. The DA is an Orbital Replaceable Unit (ORU) that provides means of wastewater (primarily urine from flight crew and experimental animals) reclamation. Jack was used to create a model of the weightless environment of the ISS Node 3, where the ECLSS is housed. Computer aided drawings of the ECLSS rack and DA system were also brought into the environment. Anthropometric models of a 95th percentile male and 5th percentile female were used to examine the human interfaces encountered during various ECLSS and DA tasks. The results of the task analyses were used in suggesting modifications to hardware and crew task procedures to improve accessibility, conserve crew time, and add convenience for the crew. This paper will address some of those suggested modifications and the method of presenting final analyses for requirements verification.
Automating a Detailed Cognitive Task Analysis for Structuring Curriculum
1991-06-01
Cognitive Task Analysis For... cognitive task analysis techniques. A rather substantial literature has been amassed relative to automated knowledge acquisition, but only seven...references have been found in our database search of literature specifically addressing cognitive task analysis. A variety of forms of cognitive task analysis
A verification procedure for MSC/NASTRAN Finite Element Models
NASA Technical Reports Server (NTRS)
Stockwell, Alan E.
1995-01-01
Finite Element Models (FEM's) are used in the design and analysis of aircraft to mathematically describe the airframe structure for such diverse tasks as flutter analysis and actively controlled landing gear design. FEM's are used to model the entire airplane as well as airframe components. The purpose of this document is to describe recommended methods for verifying the quality of the FEM's and to specify a step-by-step procedure for implementing the methods.
Doherty, Cailbhe; Bleakley, Chris; Hertel, Jay; Caulfield, Brian; Ryan, John; Delahunt, Eamonn
2014-01-01
Instrumented postural control analysis plays an important role in evaluating the effects of injury on dynamic stability during balance tasks, and is often conveyed with measures based on the displacement of the center-of-pressure (COP) assessed with a force platform. However, the desired outcome of the task is frequently characterized by a loss of dynamic stability, secondary to injury. Typically, these failed trials are discarded during research investigations, with the potential loss of informative data pertaining to task success. The novelty of the present study is that COP characteristics of failed trials in injured participants are compared to successful trial data in another injured group, and a control group of participants, using the fractal dimension (FD) method. Three groups of participants attempted a task of eyes closed single limb stance (SLS): twenty-nine participants with acute ankle sprain successfully completed the task on their non-injured limb (successful injury group); twenty-eight participants with acute ankle sprain failed their attempt on their injured limb (failed injury group); sixteen participants with no current injury successfully completed the task on their non-dominant limb (successful non-injured group). Between-trial analyses of these groups revealed significant differences in COP trajectory FD (successful injury group: 1.58±0.06; failed injury group: 1.54±0.07; successful non-injured group: 1.64±0.06) with a large effect size (0.27). These findings demonstrate that successful eyes-closed SLS is characterized by a larger FD of the COP path when compared to failed trials, and that injury causes a decrease in COP path FD. Copyright © 2014 Elsevier B.V. All rights reserved.
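The abstract does not state which FD estimator was used. As one common choice for a planar trajectory, the sketch below computes Katz's fractal dimension on a synthetic COP trace; treat it as an assumption, not the study's exact procedure.

```python
import numpy as np

def katz_fd(path):
    """Katz fractal dimension of a planar trajectory (n_samples x 2)."""
    steps = np.linalg.norm(np.diff(path, axis=0), axis=1)
    L = steps.sum()                                   # total path length
    d = np.linalg.norm(path - path[0], axis=1).max()  # maximal planar extent
    n = len(steps)
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

rng = np.random.default_rng(1)
cop = np.cumsum(rng.standard_normal((2000, 2)), axis=0)  # synthetic COP trace
print("FD:", round(katz_fd(cop), 2))
```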
Speciali, Danielli S.; Oliveira, Elaine M.; Cardoso, Jefferson R.; Correa, João C. F.; Baker, Richard; Lucareli, Paulo R. G.
2014-01-01
Background: Gait disorders are common in individuals with Parkinson's Disease (PD) and the concurrent performance of motor and cognitive tasks can have marked effects on gait. The Gait Profile Score (GPS) and the Movement Analysis Profile (MAP) were developed in order to summarize the data of kinematics and facilitate understanding of the results of gait analysis. Objective: To investigate the effectiveness of the GPS and MAP in the quantification of changes in gait during a concurrent cognitive load while walking in adults with and without PD. Method: Fourteen patients with idiopathic PD and nine healthy subjects participated in the study. All subjects performed single and dual walking tasks. The GPS/MAP was computed from three-dimensional gait analysis data. Results: Differences were found between tasks for GPS (P<0.05) and Gait Variable Score (GVS) (pelvic rotation, knee flexion-extension and ankle dorsiflexion-plantarflexion) (P<0.05) in the PD group. An interaction between task and group was observed for GPS (P<0.01) for the right side (Cohen's d=0.99), left side (Cohen's d=0.91), and overall (Cohen's d=0.88). No interaction was observed only for hip internal-external rotation and foot internal-external progression GVS variables in the PD group. Conclusions: The results showed gait impairment during the dual task and suggest that GPS/MAP may be used to evaluate the effects of concurrent cognitive load while walking in patients with PD. PMID:25054382
2009-01-01
Background Cognitive function might be affected by the subjects' emotional reactivity. We assessed whether behavior in different tests of emotional reactivity is correlated with performance in aversively motivated learning tasks, using four strains of rats generally considered to have a different emotional reactivity. Methods The performance of male Brown Norway, Lewis, Fischer 344, and Wistar Kyoto rats in open field (OF), elevated plus-maze (EPM), and circular light-dark preference box (cLDB) tasks, which are believed to provide measures of emotional reactivity, was evaluated. Spatial working and reference memory were assessed in two aversively motivated learning and memory tasks: the standard and the "repeated acquisition" versions of the Morris water maze escape task, respectively. All rats were also tested in a passive avoidance task. At the end of the study, levels of serotonin (5-HT) and 5-hydroxyindoleacetic acid, and 5-HT turnover in the hippocampus and frontal cortex were determined. Results Strain differences showed a complex pattern across behavioral tests and serotonergic measures. Fischer 344 rats had the poorest performance in both versions of the Morris water escape task, whereas Brown Norway rats performed these tasks very well but the passive avoidance task poorly. Neither correlation analysis nor principal component analysis provided convincing support for the notion that OF, EPM, and cLDB tasks measure the same underlying trait. Conclusions Our findings do not support the hypothesis that the level of emotional reactivity modulates cognitive performance in aversively motivated tasks. Concepts such as "emotional reactivity" and "learning and memory" cannot adequately be tapped with only one behavioral test. Our results emphasize the need for multiple testing. PMID:20003525
Deep Learning for Classification of Colorectal Polyps on Whole-slide Images.
Korbar, Bruno; Olofson, Andrea M; Miraflor, Allen P; Nicka, Catherine M; Suriawinata, Matthew A; Torresani, Lorenzo; Suriawinata, Arief A; Hassanpour, Saeed
2017-01-01
Histopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability. We built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis. Our method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. Our method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multisociety Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 crop images, which were annotated by multiple domain expert pathologists as reference standards. We evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score and their 95% confidence intervals. Our evaluation shows that our method with residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%-95.9%). Our method can reduce the cognitive burden on pathologists and improve their efficacy in histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations.
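The following PyTorch sketch shows the general recipe of adapting a residual network to the five polyp classes; the network depth, optimizer, and dummy data are illustrative assumptions rather than the paper's exact configuration or training pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# The five polyp classes named in the abstract.
CLASSES = ["hyperplastic", "sessile serrated", "traditional serrated",
           "tubular", "tubulovillous/villous"]

# weights=None keeps the sketch self-contained; in practice ImageNet
# pretraining is a common starting point.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # 5-way head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 crops.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, len(CLASSES), (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```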
Optimal External Wrench Distribution During a Multi-Contact Sit-to-Stand Task.
Bonnet, Vincent; Azevedo-Coste, Christine; Robert, Thomas; Fraisse, Philippe; Venture, Gentiane
2017-07-01
This paper aims at developing and evaluating a new practical method for the real-time estimation of joint torques and external wrenches during a multi-contact sit-to-stand (STS) task using kinematics data only. The proposed method also allows identifying the subject-specific body segment inertial parameters that are required to perform inverse dynamics. The identification phase is performed using simple and repeatable motions. Thanks to an accurately identified model, the estimate of the total external wrench can be used as an input to solve an under-determined multi-contact problem. It is solved using a constrained quadratic optimization process minimizing a hybrid human-like energetic criterion. The weights of this hybrid cost function are adjusted and a sensitivity analysis is performed in order to reproduce robustly the human external wrench distribution. The results showed that the proposed method could successfully estimate the external wrenches under buttocks, feet, and hands during STS tasks (RMS errors lower than 20 N and 6 N·m). The simplicity and generalization abilities of the proposed method pave the way for future diagnosis solutions and rehabilitation applications, including in-home use.
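A toy version of the under-determined distribution step, reduced to vertical forces at five contacts (buttocks, two feet, two hands): a weighted quadratic cost is minimized subject to the identified total force and unilateral-contact bounds. The weights, total, and cost form are invented for illustration, not the paper's identified hybrid criterion.

```python
import numpy as np
from scipy.optimize import minimize

# Total vertical force (from the identified model) to be distributed
# across buttocks, two feet, and two hands.
f_total = 700.0                                   # N, invented value
w = np.array([1.0, 1.0, 1.0, 4.0, 4.0])           # penalize hand loading more

cost = lambda f: np.sum(w * f**2)                 # toy quadratic criterion
cons = ({"type": "eq", "fun": lambda f: f.sum() - f_total},)
bounds = [(0.0, None)] * 5                        # contacts push, never pull

res = minimize(cost, x0=np.full(5, f_total / 5), bounds=bounds,
               constraints=cons, method="SLSQP")
print(res.x.round(1))                             # e.g. [200 200 200 50 50]
```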
Using task analysis in healthcare design to improve clinical efficiency.
Lu, Jun; Hignett, Sue
2009-01-01
To review the functionality of the proposed soiled workroom design for efficient and safe clinical activities. As part of a hospital refurbishment program, the planning team of a United Kingdom National Health Service hospital requested a review of a proposed standardized room design. A 7-day observational study was conducted in five clinical departments at three hospitals. Link analysis was used to record and analyze the movements among components, i.e., nursing staff, equipment/devices, and furniture. Fifty-four observations were recorded for 18 clinical tasks. The most frequent tasks were the disposal of urine and used urine bottles, and returning used commode chairs. Minor recommendations were made to improve the proposed design, and major revisions were suggested to address functionality problems. It was found that the proposed design did not offer the optimal layout for efficient and safe clinical activities. Link analysis was found to be effective for plotting the movements of the staff and accounting for the complexity of tasks. This ergonomic method, in combination with observational field studies, provided a simple and effective way to determine functional space requirements for clinical activities and should be used in all healthcare building design projects.
Influence of the helicopter environment on patient care capabilities: flight crew perceptions
NASA Technical Reports Server (NTRS)
Myers, K. J.; Rodenberg, H.; Woodard, D.
1995-01-01
INTRODUCTION: Flight crew perceptions of the effect of the rotary-wing environment on patient-care capabilities have not been subject to statistical analysis. We hypothesized that flight crew members perceived significant difficulties in performing patient-care tasks during air medical transport. METHODS: A survey was distributed to a convenience sample of flight crew members from 20 flight programs. Respondents were asked to compare the difficulty of performing patient-care tasks in rotary-wing and standard (emergency department or intensive care unit) settings. Demographic data collected on respondents included years of flight experience, flights per month, crew duty position and primary aircraft in which the respondent worked. Statistical analysis was performed as appropriate using Student's t-test, type III sum of squares, and analysis of variance. Alpha was defined as p < 0.05. RESULTS: Fifty-five percent of programs (90 individuals) responded. All tasks were rated as significantly more difficult in the rotary-wing environment. Ratings were not significantly correlated with flight experience, duty position, flights per month or aircraft used. CONCLUSIONS: We conclude that the performance of patient-care tasks is perceived by air medical flight crews to be significantly more difficult during rotary-wing air medical transport than in hospital settings.
Between Domain Cognitive Dispersion and Functional Abilities in Older Adults
Fellows, Robert P.; Schmitter-Edgecombe, Maureen
2016-01-01
Objective Within-person variability in cognitive performance is related to neurological integrity, but the association with functional abilities is less clear. The primary aim of this study was to examine the association between cognitive dispersion, or within-person variability, and everyday multitasking and the way in which these variables may influence performance on a naturalistic assessment of functional abilities. Method Participants were 156 community-dwelling adults, age 50 or older. Cognitive dispersion was calculated by measuring within-person variability in cognitive domains, established through principal components analysis. Path analysis was used to determine the independent contribution of cognitive dispersion to functional ability, mediated by multitasking. Results Results of the path analysis revealed that the number of subtasks interweaved (i.e., multitasked) mediated the association between cognitive dispersion and task sequencing and accuracy. Although increased multitasking was associated with worse task performance in the path model, secondary analyses revealed that for individuals with low cognitive dispersion, increased multitasking was associated with better task performance, whereas for those with higher levels of dispersion multitasking was negatively correlated with task performance. Conclusion These results suggest that cognitive dispersion between domains may be a useful indicator of multitasking and daily living skills among older adults. PMID:26300441
Analysis of 3-D Tongue Motion From Tagged and Cine Magnetic Resonance Images
Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.
2016-01-01
Purpose Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during speech in order to estimate 3-dimensional tissue displacement and deformation over time. Method The method involves computing 2-dimensional motion components using a standard tag-processing method called harmonic phase, constructing superresolution tongue volumes using cine magnetic resonance images, segmenting the tongue region using a random-walker algorithm, and estimating 3-dimensional tongue motion using an incompressible deformation estimation algorithm. Results Evaluation of the method is presented for a control group and for a group of people who had received a glossectomy, each carrying out a speech task. A 2-step principal-components analysis is then used to reveal the unique motion patterns of the subjects. Azimuth motion angles and motion on the mirrored hemi-tongues are analyzed. Conclusion Tests of the method with a varied collection of subjects show its capability of capturing patient motion patterns and indicate its potential value in future speech studies. PMID:27295428
Grace, Sally A; Rossell, Susan L; Heinrichs, Markus; Kordsachia, Catarina; Labuschagne, Izelle
2018-05-24
Oxytocin (OXT) is a neuropeptide that plays a critical role in human social behaviour and cognition. Research investigating the role of OXT on functional brain changes in humans has often used task paradigms that probe socioemotional processes. Preliminary evidence suggests a central role of the amygdala in the social cognitive effects of intranasal OXT (IN-OXT); however, inconsistencies in task design and analysis methods have led to inconclusive findings regarding a cohesive model of the neural mechanisms underlying OXT's actions. The aim of this meta-analysis was to systematically investigate these findings. A systematic search of PubMed, PsycINFO, and Scopus databases was conducted for fMRI studies which compared IN-OXT to placebo in humans. First, we systematically reviewed functional magnetic resonance imaging (fMRI) studies of IN-OXT, including studies of healthy humans, those with clinical disorders, and studies examining resting-state fMRI (rsfMRI). Second, we employed a coordinate-based meta-analysis for the task-based neuroimaging literature using activation likelihood estimation (ALE), whereby coordinates were extracted from clusters with significant differences in IN-OXT versus placebo in healthy adults. Data were included for 39 fMRI studies that reported a total of 374 distinct foci. The meta-analysis identified task-related IN-OXT increases in activity within a cluster of the left superior temporal gyrus during tasks of emotion processing. These findings are important as they implicate regions beyond the amygdala in the neural effects of IN-OXT. The outcomes from this meta-analysis can guide a priori predictions for future OXT research, and provide an avenue for targeted treatment interventions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Weir, Charlene R.; Nebeker, Jonathan J.R.; Hicken, Bret L.; Campo, Rebecca; Drews, Frank; LeBar, Beth
2007-01-01
Objective Computerized Provider Order Entry (CPOE) with electronic documentation, and computerized decision support dramatically changes the information environment of the practicing clinician. Prior work patterns based on paper, verbal exchange, and manual methods are replaced with automated, computerized, and potentially less flexible systems. The objective of this study is to explore the information management strategies that clinicians use in the process of adapting to a CPOE system using cognitive task analysis techniques. Design Observation and semi-structured interviews were conducted with 88 primary-care clinicians at 10 Veterans Administration Medical Centers. Measurements Interviews were taped, transcribed, and extensively analyzed to identify key information management goals, strategies, and tasks. Tasks were aggregated into groups, common components across tasks were clarified, and underlying goals and strategies identified. Results Nearly half of the identified tasks were not fully supported by the available technology. Six core components of tasks were identified. Four meta-cognitive information management goals emerged: 1) Relevance Screening; 2) Ensuring Accuracy; 3) Minimizing memory load; and 4) Negotiating Responsibility. Strategies used to support these goals are presented. Conclusion Users develop a wide array of information management strategies that allow them to successfully adapt to new technology. Supporting the ability of users to develop adaptive strategies to support meta-cognitive goals is a key component of a successful system. PMID:17068345
Lee, Youngjin; Choo, Jina; Cho, Jeonghyun; Kim, So-Nam; Lee, Hye-Eun; Yoon, Seok-Jun; Seomun, GyeongAe
2014-03-01
This study aimed to develop a job description for healthcare managers of metabolic syndrome management programs using task analysis. Exploratory research was performed by using the Developing a Curriculum method, the Intervention Wheel model, and focus group discussions. Subsequently, we conducted a survey of 215 healthcare workers from 25 community health centers to verify that the job description we created was accurate. We defined the role of healthcare managers. Next, we elucidated the tasks of healthcare managers and performed needs analysis to examine the frequency, importance, and difficulty of each of their duties. Finally, we verified that our job description was accurate. Based on the 8 duties, 30 tasks, and 44 task elements assigned to healthcare managers, we found that the healthcare managers functioned both as team coordinators responsible for providing multidisciplinary health services and nurse specialists providing health promotion services. In terms of importance and difficulty of tasks performed by the healthcare managers, which were measured using a determinant coefficient, the highest-ranked task was planning social marketing (15.4), while the lowest-ranked task was managing human resources (9.9). A job description for healthcare managers may provide basic data essential for the development of a job training program for healthcare managers working in community health promotion programs. Copyright © 2014. Published by Elsevier B.V.
Fetterhoff, Dustin; Opris, Ioan; Simpson, Sean L.; Deadwyler, Sam A.; Hampson, Robert E.; Kraft, Robert A.
2014-01-01
Background Multifractal analysis quantifies the time-scale-invariant properties in data by describing the structure of variability over time. By applying this analysis to hippocampal interspike interval sequences recorded during performance of a working memory task, a measure of long-range temporal correlations and multifractal dynamics can reveal single neuron correlates of information processing. New method Wavelet leaders-based multifractal analysis (WLMA) was applied to hippocampal interspike intervals recorded during a working memory task. WLMA can be used to identify neurons likely to exhibit information processing relevant to operation of brain–computer interfaces and nonlinear neuronal models. Results Neurons involved in memory processing (“Functional Cell Types” or FCTs) showed a greater degree of multifractal firing properties than neurons without task-relevant firing characteristics. In addition, previously unidentified FCTs were revealed because multifractal analysis suggested further functional classification. The cannabinoid-type 1 receptor partial agonist, tetrahydrocannabinol (THC), selectively reduced multifractal dynamics in FCT neurons compared to non-FCT neurons. Comparison with existing methods WLMA is an objective tool for quantifying the memory-correlated complexity represented by FCTs that reveals additional information compared to classification of FCTs using traditional z-scores to identify neuronal correlates of behavioral events. Conclusion z-Score-based FCT classification provides limited information about the dynamical range of neuronal activity characterized by WLMA. Increased complexity, as measured with multifractal analysis, may be a marker of functional involvement in memory processing. The level of multifractal attributes can be used to differentially emphasize neural signals to improve computational models and algorithms underlying brain–computer interfaces. PMID:25086297
Automating a Detailed Cognitive Task Analysis for Structuring Curriculum
1991-08-01
Title: Automating a Detailed Cognitive Task Analysis for Structuring Curriculum. Activities: To date we have completed task... The Institute for Management Sciences. Although the particular application of the modified GOMS cognitive task analysis technique under development is... Research Plan, Year 1: Task 1.0 Design; Task 1.1 Conduct...
Tschentscher, Nadja; Hauk, Olaf
2015-01-01
Mental arithmetic is a powerful paradigm to study problem solving using neuroimaging methods. However, the evaluation of task complexity varies significantly across neuroimaging studies. Most studies have parameterized task complexity by objective features such as the number size. Only a few studies used subjective rating procedures. In fMRI, we provided evidence that strategy self-reports control better for task complexity across arithmetic conditions than objective features (Tschentscher and Hauk, 2014). Here, we analyzed the relative predictive value of self-reported strategies and objective features for performance in addition and multiplication tasks, by using a paradigm designed for neuroimaging research. We found a superiority of strategy ratings as predictor of performance above objective features. In a Principal Component Analysis on reaction times, the first component explained over 90 percent of variance and factor loadings reflected percentages of self-reported strategies well. In multiple regression analyses on reaction times, self-reported strategies performed equally well or better than objective features, depending on the operation type. A Receiver Operating Characteristic (ROC) analysis confirmed this result. Reaction times classified task complexity better when defined by individual ratings. This suggests that participants' strategy ratings are reliable predictors of arithmetic complexity and should be taken into account in neuroimaging research.
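A minimal sketch of the two analyses named in the abstract, PCA on reaction times and an ROC check of complexity classification, on synthetic data; the condition structure and effect sizes are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Rows: participants; columns: mean RT per arithmetic condition (synthetic).
# A per-participant general speed factor dominates the covariance, so PC1
# should explain most of the variance, as reported in the abstract.
n_sub, n_cond = 40, 8
speed = rng.normal(0.0, 0.4, n_sub)
complexity = np.r_[np.zeros(4), np.full(4, 0.6)]   # 4 simple, 4 complex
rt = 1.5 + speed[:, None] + complexity + rng.normal(0, 0.1, (n_sub, n_cond))

pca = PCA().fit(rt)
print("PC1 variance ratio:", round(pca.explained_variance_ratio_[0], 2))

# ROC check: how well do single-trial RTs classify complexity labels?
trial_rt = np.r_[rng.normal(1.5, 0.3, 200), rng.normal(2.1, 0.3, 200)]
labels = np.r_[np.zeros(200), np.ones(200)]        # 0 = simple, 1 = complex
print("AUC:", round(roc_auc_score(labels, trial_rt), 2))
```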
Zhang, Linjun; Yue, Qiuhai; Zhang, Yang; Shu, Hua; Li, Ping
2015-01-01
Numerous studies have revealed the essential role of the left lateral temporal cortex in auditory sentence comprehension along with evidence of the functional specialization of the anterior and posterior temporal sub-areas. However, it is unclear whether task demands (e.g., active vs. passive listening) modulate the functional specificity of these sub-areas. In the present functional magnetic resonance imaging (fMRI) study, we addressed this issue by applying both independent component analysis (ICA) and general linear model (GLM) methods. Consistent with previous studies, intelligible sentences elicited greater activity in the left lateral temporal cortex relative to unintelligible sentences. Moreover, responses to intelligibility in the sub-regions were differentially modulated by task demands. While the overall activation patterns of the anterior and posterior superior temporal sulcus and middle temporal gyrus (STS/MTG) were equivalent during both passive and active tasks, a middle portion of the STS/MTG was found to be selectively activated only during the active task under a refined analysis of sub-regional contributions. Our results not only confirm the critical role of the left lateral temporal cortex in auditory sentence comprehension but further demonstrate that task demands modulate functional specialization of the anterior-middle-posterior temporal sub-areas. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Family environment influences emotion recognition following paediatric traumatic brain injury
SCHMIDT, ADAM T.; ORSTEN, KIMBERLEY D.; HANTEN, GERRI R.; LI, XIAOQI; LEVIN, HARVEY S.
2011-01-01
Objective This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). Methods A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Conclusions Findings suggest family functioning variables—especially financial resources—can influence performance on an emotional processing task following TBI in children. PMID:21058900
Analysis of the Structure of Surgical Activity for a Suturing and Knot-Tying Task
Vedula, S. Swaroop; Malpani, Anand O.; Tao, Lingling; Chen, George; Gao, Yixin; Poddar, Piyush; Ahmidi, Narges; Paxton, Christopher; Vidal, Rene; Khudanpur, Sanjeev; Hager, Gregory D.; Chen, Chi Chiung Grace
2016-01-01
Background Surgical tasks are performed in a sequence of steps, and technical skill evaluation includes assessing task flow efficiency. Our objective was to describe differences in task flow for expert and novice surgeons for a basic surgical task. Methods We used a hierarchical semantic vocabulary to decompose and annotate maneuvers and gestures for 135 instances of a surgeon’s knot performed by 18 surgeons. We compared counts of maneuvers and gestures, and analyzed task flow by skill level. Results Experts used fewer gestures to perform the task (26.29; 95% CI = 25.21 to 27.38 for experts vs. 31.30; 95% CI = 29.05 to 33.55 for novices) and made fewer errors in gestures than novices (1.00; 95% CI = 0.61 to 1.39 vs. 2.84; 95% CI = 2.3 to 3.37). Transitions among maneuvers, and among gestures within each maneuver for expert trials were more predictable than novice trials. Conclusions Activity segments and state flow transitions within a basic surgical task differ by surgical skill level, and can be used to provide targeted feedback to surgical trainees. PMID:26950551
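One plausible way to quantify the reported predictability of transitions is to estimate gesture-to-gesture transition matrices per group and compare their mean row entropy (lower entropy = more predictable task flow). The sketch below does this on toy gesture sequences; it is an assumed analysis style, not the authors' exact method.

```python
import numpy as np

def transition_matrix(sequences, n_states):
    """Row-normalized gesture-to-gesture transition probabilities."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def mean_row_entropy(P):
    """Mean entropy of outgoing transitions; lower = more predictable flow."""
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -np.where(P > 0, P * np.log2(P), 0.0).sum(axis=1)
    return h.mean()

# Toy gesture sequences (states 0-3); experts repeat a stereotyped cycle.
expert = [[0, 1, 2, 3, 0, 1, 2, 3], [0, 1, 2, 3, 1, 2, 3, 0]]
novice = [[0, 2, 1, 3, 0, 3, 1, 2], [2, 0, 3, 1, 2, 1, 0, 3]]
print("expert entropy:", round(mean_row_entropy(transition_matrix(expert, 4)), 2))
print("novice entropy:", round(mean_row_entropy(transition_matrix(novice, 4)), 2))
```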
Goode, Natassia; Salmon, Paul M; Lenné, Michael G; Hillard, Peter
2014-07-01
Injuries resulting from manual handling tasks represent an on-going problem for the transport and storage industry. This article describes an application of a systems theory-based approach, Rasmussen's (1997, Safety Science 27, 183) risk management framework, to the analysis of the factors influencing safety during manual handling activities in a freight handling organisation. Observations of manual handling activities, cognitive decision method interviews with workers (n=27) and interviews with managers (n=35) were used to gather information about three manual handling activities. Hierarchical task analysis and thematic analysis were used to identify potential risk factors and performance shaping factors across the levels of Rasmussen's framework. These different data sources were then integrated using Rasmussen's Accimap technique to provide an overall analysis of the factors influencing safety during manual handling activities in this context. The findings demonstrate how a systems theory-based approach can be applied to this domain, and suggest that policy-orientated, rather than worker-orientated, changes are required to prevent future manual handling injuries. Copyright © 2013 Elsevier Ltd. All rights reserved.
2011-01-01
Background For brain computer interfaces (BCIs), which may be valuable in neurorehabilitation, brain signals derived from mental activation can be monitored by non-invasive methods, such as functional near-infrared spectroscopy (fNIRS). Single-trial classification is important for this purpose and this was the aim of the presented study. In particular, we aimed to investigate a combined approach: 1) offline single-trial classification of brain signals derived from a novel wireless fNIRS instrument; 2) use of motor imagery (MI) as the mental task, thereby discriminating between MI signals in response to different task complexities, i.e. simple and complex MI tasks. Methods 12 subjects were asked to imagine either a simple finger-tapping task using their right thumb or a complex sequential finger-tapping task using all fingers of their right hand. fNIRS was recorded over secondary motor areas of the contralateral hemisphere. Using Fisher's linear discriminant analysis (FLDA) and cross validation, we selected for each subject a best-performing feature combination consisting of 1) one out of three channels, 2) an analysis time interval ranging from 5-15 s after stimulation onset, and 3) up to four Δ[O2Hb] signal features (Δ[O2Hb] mean signal amplitudes, variance, skewness and kurtosis). Results The results of our single-trial classification showed that using the simple combination set of channels, time intervals and up to four Δ[O2Hb] signal features comprising Δ[O2Hb] mean signal amplitudes, variance, skewness and kurtosis, it was possible to discriminate single trials of MI tasks differing in complexity, i.e. simple versus complex tasks (inter-task paired t-test p ≤ 0.001), over secondary motor areas with an average classification accuracy of 81%. Conclusions Although the classification accuracies look promising, they are nevertheless subject to considerable subject-to-subject variability. In the discussion we address each of these aspects, their limitations for future approaches in single-trial classification, and their relevance for neurorehabilitation. PMID:21682906
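A compact sketch of the classification scheme described, FLDA over per-trial summary features (mean, variance, skewness, kurtosis) of Δ[O2Hb], with synthetic time courses standing in for the recordings; the per-subject channel and time-window search is omitted.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def features(trial):
    # Mean amplitude, variance, skewness, kurtosis of the [O2Hb] window.
    return [trial.mean(), trial.var(), skew(trial), kurtosis(trial)]

# Synthetic [O2Hb] windows: complex MI assumed to evoke a larger response.
simple_mi = [rng.standard_normal(100) + 0.2 for _ in range(30)]
complex_mi = [rng.standard_normal(100) + 0.8 for _ in range(30)]
X = np.array([features(t) for t in simple_mi + complex_mi])
y = np.r_[np.zeros(30), np.ones(30)]

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10).mean()
print(f"cross-validated single-trial accuracy: {acc:.2f}")
```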
Comparison of software packages for detecting differential expression in RNA-seq studies.
Seyednasrollah, Fatemeh; Laiho, Asta; Elo, Laura L
2015-01-01
RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. Currently, however, there is no clear consensus about the best practices yet, which makes the choice of an appropriate method a daunting task especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. © The Author 2013. Published by Oxford University Press.
Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived Modified Barium Swallow Impairment Profile (MBSImP™©; Martin-Harris et al., 2008) Overall Impression (OI; worst) scores using generalized estimating equations. The range of probabilities across swallowing tasks was calculated to discern which swallowing task(s) yielded the worst performance. Results Large-volume, thin-liquid swallowing tasks had the highest probabilities of yielding the OI scores for oral containment and airway protection. The cookie swallowing task was most likely to yield OI scores for oral clearance. Several swallowing tasks had nearly equal probabilities (≤ .20) of yielding the OI score. Conclusions The MBSS must represent impairment while requiring boluses that challenge the swallowing system. No single swallowing task had a sufficiently high probability to yield the identification of the worst score for each physiological component. Omission of swallowing tasks will likely fail to capture the most severe impairment for physiological components critical for safe and efficient swallowing. Results provide further support for standardized, well-tested protocols during MBSS. PMID:28614846
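To illustrate the GEE-based estimation of which swallowing task yields the worst score, here is a hedged statsmodels sketch on synthetic long-format data; the task list, rates, and model are simplified assumptions, far leaner than the published analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Synthetic long-format data: one row per subject x swallowing task, with a
# binary flag marking whether that task yielded the subject's worst OI score.
tasks = ["thin_5ml", "thin_cup", "nectar", "honey", "pudding", "cookie"]
df = pd.DataFrame([
    {"subject": s, "task": t,
     "worst": int(rng.random() < (0.45 if t == "thin_cup" else 0.15))}
    for s in range(80) for t in tasks
])

# Binomial GEE with the default working independence structure, clustering
# the repeated tasks within each subject.
model = smf.gee("worst ~ C(task)", groups="subject", data=df,
                family=sm.families.Binomial())
print(model.fit().summary().tables[1])
```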
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman; Jeffrey C. Joe
2005-09-01
An ongoing issue within human-computer interaction (HCI) is the need for simplified or “discount” methods. The current economic slowdown has necessitated innovative methods that are results driven and cost effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human-centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI.
Video content analysis of surgical procedures.
Loukas, Constantinos
2018-02-01
In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for reasons such as cognitive training, skills assessment, and workflow analysis. Methods from the major field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. The reviewed articles were obtained from PubMed and Google Scholar searches on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, type of surgery performed, and structure of the operation. A total of 81 articles were included. The publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed for video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.
Operating Reserves and Wind Power Integration: An International Comparison; Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, M.; Donohoo, P.; Lew, D.
2010-10-01
This paper provides a high-level international comparison of methods and key results from both operating practice and integration analysis, based on informal collaboration under International Energy Agency Task 25: Large-scale Wind Integration.
Network analysis of exploratory behaviors of mice in a spatial learning and memory task
Suzuki, Yusuke
2017-01-01
The Barnes maze is one of the main behavioral tasks used to study spatial learning and memory. The Barnes maze is a task conducted on “dry land” in which animals try to escape from a brightly lit exposed circular open arena to a small dark escape box located under one of several holes at the periphery of the arena. In comparison with another classical spatial learning and memory task, the Morris water maze, the negative reinforcements that motivate animals in the Barnes maze are less severe and less stressful. Furthermore, the Barnes maze is more compatible with recently developed cutting-edge techniques in neural circuit research, such as the miniature brain endoscope or optogenetics. For this study, we developed a lift-type task start system and equipped the Barnes maze with it. The subject mouse is raised up by the lift and released into the maze automatically so that it can start navigating the maze smoothly from exactly the same start position across repeated trials. We believe that a Barnes maze test with a lift-type task start system may be useful for behavioral experiments when combined with head-mounted or wire-connected devices for online imaging and intervention in neural circuits. Furthermore, we introduced a network analysis method for the analysis of the Barnes maze data. Each animal’s exploratory behavior in the maze was visualized as a network of nodes and their links, and spatial learning in the maze is described by systematic changes in network structures of search behavior. Network analysis was capable of visualizing and quantitatively analyzing subtle but significant differences in an animal’s exploratory behavior in the maze. PMID:28700627
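The network representation of exploratory behavior can be prototyped with networkx: holes are nodes and successive visits are weighted directed edges. The visit sequence and summary metrics below are invented examples, not the study's chosen measures.

```python
import networkx as nx

# Sequence of holes a mouse visited in one trial (hole 0 = escape box).
visits = [5, 7, 3, 7, 2, 1, 0]

G = nx.DiGraph()
for a, b in zip(visits[:-1], visits[1:]):
    # Repeated transitions between the same pair of holes accumulate weight.
    w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
    G.add_edge(a, b, weight=w + 1)

# Simple structural summaries of the search network; learning would appear
# as fewer nodes/edges and more direct paths toward the target across trials.
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("density:", round(nx.density(G), 2))
```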
Garten, Justin; Hoover, Joe; Johnson, Kate M; Boghrati, Reihane; Iskiwitch, Carol; Dehghani, Morteza
2018-02-01
Theory-driven text analysis has made extensive use of psychological concept dictionaries, leading to a wide range of important results. These dictionaries have generally been applied through word count methods which have proven to be both simple and effective. In this paper, we introduce Distributed Dictionary Representations (DDR), a method that applies psychological dictionaries using semantic similarity rather than word counts. This allows for the measurement of the similarity between dictionaries and spans of text ranging from complete documents to individual words. We show how DDR enables dictionary authors to place greater emphasis on construct validity without sacrificing linguistic coverage. We further demonstrate the benefits of DDR on two real-world tasks and finally conduct an extensive study of the interaction between dictionary size and task performance. These studies allow us to examine how DDR and word count methods complement one another as tools for applying concept dictionaries and where each is best applied. Finally, we provide references to tools and resources to make this method both available and accessible to a broad psychological audience.
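A minimal sketch of the DDR idea: represent a concept dictionary and a span of text as the mean of their word embeddings and compare them by cosine similarity. The toy 3-d vectors stand in for real distributed representations such as word2vec or GloVe.

```python
import numpy as np

# Toy 3-d embeddings; in practice these come from pretrained word vectors.
emb = {
    "harm":   np.array([0.9, 0.1, 0.0]),
    "hurt":   np.array([0.8, 0.2, 0.1]),
    "injury": np.array([0.7, 0.1, 0.2]),
    "attack": np.array([0.8, 0.0, 0.3]),
    "picnic": np.array([0.0, 0.9, 0.4]),
}

def ddr(words):
    """Distributed dictionary representation: mean of the word vectors."""
    return np.mean([emb[w] for w in words if w in emb], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

dictionary = ddr(["harm", "hurt", "injury"])    # small concept dictionary
print(cosine(dictionary, ddr(["attack"])))      # high similarity
print(cosine(dictionary, ddr(["picnic"])))      # low similarity
```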
NASA Technical Reports Server (NTRS)
1985-01-01
Task 2 in the Space Station Data System (SSDS) Analysis/Architecture Study is the development of an information base that will support the conduct of trade studies and provide sufficient data to make design/programmatic decisions. This volume identifies the preferred options in the programmatic category and characterizes these options with respect to performance attributes, constraints, costs, and risks. The programmatic category includes methods used to administrate/manage the development, operation and maintenance of the SSDS. The specific areas discussed include standardization/commonality; systems management; and systems development, including hardware procurement, software development and system integration, test and verification.
ERIC Educational Resources Information Center
Zhang, Zhidong
2016-01-01
This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It applied rule-based analytical and cognitive task analysis methods to break down the operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…
Protocol Analysis as a Method for Analyzing Conversational Data.
ERIC Educational Resources Information Center
Aleman, Carlos G.; Vangelisti, Anita L.
Protocol analysis, a technique that uses people's verbal reports about their cognitions as they engage in an assigned task, has been used in a number of applications to provide insight into how people mentally plan, assess, and carry out those assignments. Using a system of networked computers where actors communicate with each other over…
Systematic Instruction for Retarded Children: The Illinois Program. Part III: Self-Help Instruction.
ERIC Educational Resources Information Center
Linford, Maxine D.; And Others
The manual for programed instruction of self care skills for trainable mentally handicapped children consists of dressing, dining, grooming, and toilet training. Teaching methods used include behavioral analysis and management, task analysis, and errorless learning. The lesson plans in each section are programed to maximize the child's success at…
Interactive-predictive detection of handwritten text blocks
NASA Astrophysics Data System (ADS)
Ramos Terrades, O.; Serrano, N.; Gordó, A.; Valveny, E.; Juan, A.
2010-01-01
A method for text block detection is introduced for old handwritten documents. The proposed method takes advantage of sequential book structure, taking into account layout information from pages previously transcribed. This glance at the past is used to predict the position of text blocks in the current page with the help of conventional layout analysis methods. The method is integrated into the GIDOC prototype: a first attempt to provide integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. Results are given in a transcription task on a 764-page Spanish manuscript from 1891.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver
2009-11-20
Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the increasing number of data dimensions and data objects is presenting tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework, MATLAB, and the visualization have been integrated, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.
Optimizing estimation of hemispheric dominance for language using magnetic source imaging
Passaro, Antony D.; Rezaie, Roozbeh; Moser, Dana C.; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C.
2011-01-01
The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold standard Wada procedure have found inconsistent and often less-than accurate estimates of laterality across various MEG studies. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol involving word recognition and the single dipole method with a sentence comprehension task and a beamformer approach localizing neural oscillations. Beamformer analysis of word recognition and sentence comprehension tasks revealed a desynchronization in the 10–18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10–18 Hz) revealed left hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and localization of oscillatory activity using a beamformer appear to be in line with general estimates of left hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing. PMID:21890118
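Hemispheric dominance in such studies is commonly summarized with a laterality index over homologous left/right measures; a minimal sketch follows, with invented desynchronization magnitudes and a conventional ±0.1 band for "bilateral".

```python
def laterality_index(left, right):
    """LI in [-1, 1]; positive values indicate left-hemisphere dominance."""
    return (left - right) / (left + right)

# Invented 10-18 Hz desynchronization magnitudes in homologous ROIs.
li = laterality_index(left=4.2, right=1.1)
label = "left" if li > 0.1 else "right" if li < -0.1 else "bilateral"
print(label, round(li, 2))
```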
Rosa, Pedro J; Gamito, Pedro; Oliveira, Jorge; Morais, Diogo; Pavlovic, Matthew; Smyth, Olivia; Maia, Inês; Gomes, Tiago
2017-03-23
An adequate behavioral response depends on attentional and mnesic processes. When these basic cognitive functions are impaired, the use of non-immersive Virtual Reality Applications (VRAs) can be a reliable technique for assessing the level of impairment. However, most non-immersive VRAs use indirect measures to make inferences about visual attention and mnesic processes (e.g., time to task completion, error rate). We examined whether eye movement analysis through eye tracking (ET) can be a reliable method to probe more effectively where and how attention is deployed and how it is linked with visual working memory during comparative visual search tasks (CVSTs) in non-immersive VRAs. The eye movements of 50 healthy participants were continuously recorded while they performed CVSTs, selected from a set of cognitive tasks in the Systemic Lisbon Battery (SLB), a VRA designed to assess cognitive impairments, presented in random order. The total fixation duration, the number of visits in the areas of interest and in the interstimulus space, and the total execution time differed significantly as a function of Mini Mental State Examination (MMSE) scores. The present study demonstrates that CVSTs in the SLB, when combined with ET, can be a reliable and unobtrusive method for assessing cognitive abilities in healthy individuals, opening it to potential use in clinical samples.
Comparison of continuously acquired resting state and extracted analogues from active tasks
Ganger, Sebastian; Hahn, Andreas; Küblböck, Martin; Kranz, Georg S.; Spies, Marie; Vanicek, Thomas; Seiger, René; Sladky, Ronald; Windischberger, Christian; Kasper, Siegfried
2015-01-01
Functional connectivity analysis of brain networks has become an important tool for investigation of human brain function. Although functional connectivity computations are usually based on resting-state data, the application to task-specific fMRI has received growing attention. Three major methods for extraction of resting-state data from task-related signal have been proposed: (1) usage of unmanipulated task data for functional connectivity; (2) regression against task effects, subsequently using the residuals; and (3) concatenation of baseline blocks located in-between task blocks. Despite widespread application in current research, consensus on which method best resembles resting-state seems to be missing. We, therefore, evaluated these techniques in a sample of 26 healthy controls measured at 7 Tesla. In addition to continuous resting-state, two different task paradigms were assessed (emotion discrimination and right finger-tapping) and five well-described networks were analyzed (default mode, thalamus, cuneus, sensorimotor, and auditory). Investigating the similarity to continuous resting-state (Dice, intraclass correlation coefficient (ICC), R²) showed that regression against task effects yields functional connectivity networks most alike to resting-state. However, all methods exhibited significant differences when compared to continuous resting-state, and similarity metrics were lower than test-retest of two resting-state scans. Omitting global signal regression did not change these findings. Visually, the networks are highly similar, but through further investigation marked differences can be found. Therefore, our data does not support referring to resting-state when extracting signals from task designs, although functional connectivity computed from task-specific data may indeed yield interesting information. Hum Brain Mapp 36:4053–4063, 2015. © 2015 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc. PMID:26178250
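Of the reported similarity metrics, the Dice coefficient is the most direct to sketch: threshold the two connectivity maps and measure their overlap. The threshold and stand-in volumes below are invented.

```python
import numpy as np

def dice(map_a, map_b, thr=0.3):
    """Dice overlap of two supra-threshold connectivity maps."""
    a, b = map_a > thr, map_b > thr
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(5)
rest = rng.random((64, 64, 30))                      # stand-in volumes
task_residual = rest + 0.1 * rng.standard_normal(rest.shape)
print("Dice:", round(dice(rest, task_residual), 2))
```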
Spatial and dynamical handwriting analysis in mild cognitive impairment.
Kawa, Jacek; Bednorz, Adam; Stępień, Paula; Derejczyk, Jarosław; Bugdol, Monika
2017-03-01
Background and Objectives Standard clinical procedure of Mild Cognitive Impairment (MCI) assessment employs time-consuming tests of psychological evaluation and requires the involvement of specialists. The employment of quantitative methods proves to be superior to clinical judgment, yet reliable, fast and inexpensive tests are not available. This study was conducted as a first step towards the development of a diagnostic tool based on handwriting. Methods In this paper the handwriting sample of a group of 37 patients with MCI (mean age 76.1±5.8) and 37 healthy controls (mean age 74.8±5.7) was collected using a Livescribe Echo Pen while completing three tasks: (1) regular writing, (2) all-capital-letters writing, and (3) a single letter multiply repeated. Parameters differentiating both groups were selected in each task. Results Subjects with confirmed MCI needed more time to complete task one (median 119.5s, IQR - interquartile range - 38.1 vs. 95.1s, IQR 29.2 in the MCI and control groups, respectively; p-value <0.05) and task two (median 84.2s, IQR 49.2 vs. 53.7s, IQR 30.5 in the MCI and control groups) as their writing was significantly slower. These results were associated with a longer time to complete a single stroke of written text. The written text was also noticeably larger in the MCI group in all three tasks (e.g. median height of the text block in task 2 being 22.3mm, IQR 12.9 in the MCI and 20.2mm, IQR 8.7 in the control group). Moreover, the MCI group showed more variation in the dynamics of writing: longer pauses between strokes in tasks 1 and 2. The all-capital-letters task produced most of the discriminating features. Conclusion The proposed handwriting features are significant in distinguishing MCI patients. Inclusion of quantitative handwriting analysis in psychological assessment may be a step forward towards a fast MCI diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.
Detecting eye movements in dynamic environments.
Reimer, Bryan; Sodhi, Manbir
2006-11-01
To take advantage of the increasing number of in-vehicle devices, automobile drivers must divide their attention between primary (driving) and secondary (operating an in-vehicle device) tasks. In dynamic environments such as driving, however, it is not easy to identify and quantify how a driver focuses on the various tasks he/she is simultaneously engaged in, including the distracting tasks. Measures derived from the driver's scan path have been used as correlates of driver attention. This article presents a methodology for analyzing eye positions, which are discrete samples of a subject's scan path, in order to categorize driver eye movements. Previous methods of analyzing eye positions recorded in a dynamic environment have relied entirely on manual identification of the focus of visual attention from a point of regard superimposed on a video of the recorded scene, failing to utilize the movement structure present in the raw recorded eye positions. Although effective, these methods are too time-consuming for the large data sets required to identify subtle differences between drivers, road conditions, and levels of distraction. The aim of the methods presented in this article is to extend the degree of automation in the processing of eye movement data by proposing a methodology for eye movement analysis that extends automated fixation identification to include smooth and saccadic movements. By identifying eye movements in the recorded eye positions, a method of reducing the analysis of scene video to a finite search space is presented. The implementation of a software tool for the eye movement analysis is described, including an example from an on-road test-driving sample.
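The article extends automated fixation identification to smooth-pursuit and saccadic movements. A simplified velocity-threshold sketch in that spirit (not the authors' algorithm; both thresholds are illustrative assumptions):

```python
import numpy as np

def classify_eye_movements(x, y, fs, sp_thresh=20.0, sac_thresh=100.0):
    """Label each eye-position sample as fixation, smooth pursuit, or
    saccade using two speed thresholds (deg/s). This is a generic
    velocity-based scheme, not the paper's specific method.

    x, y : eye position in degrees (1-D arrays); fs : sampling rate in Hz.
    """
    vx = np.gradient(x) * fs            # deg/s, per-sample velocity
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    labels = np.full(speed.shape, 'fixation', dtype=object)
    labels[speed >= sp_thresh] = 'smooth_pursuit'
    labels[speed >= sac_thresh] = 'saccade'
    return labels

# Usage with a synthetic 60 Hz trace:
t = np.linspace(0, 2, 120)
labels = classify_eye_movements(5 * t, np.zeros_like(t), fs=60)
```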
1987-12-01
were presented. The second part of the thesis proposed the alternative methods of decision analysis and PROMETHEE to solve TAF's prioritization...of decision analysis (DA) and Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) will be explained. First, the...dollars. However, once this task is successfully accomplished, TAF would be able to use DA to prioritize their mods. The PROMETHEE is a "new class of
Voyvodic, James T.; Glover, Gary H.; Greve, Douglas; Gadde, Syam
2011-01-01
Functional magnetic resonance imaging (fMRI) is based on correlating blood oxygen-level dependent (BOLD) signal fluctuations in the brain with other time-varying signals. Although the most common reference for correlation is the timing of a behavioral task performed during the scan, many other behavioral and physiological variables can also influence fMRI signals. Variations in cardiac and respiratory functions in particular are known to contribute significant BOLD signal fluctuations. Variables such as skin conduction, eye movements, and other measures that may be relevant to task performance can also be correlated with BOLD signals and can therefore be used in image analysis to differentiate multiple components in complex brain activity signals. Combining real-time recording and data management of multiple behavioral and physiological signals in a way that can be routinely used with any task stimulus paradigm is a non-trivial software design problem. Here we discuss software methods that allow users control of paradigm-specific audio–visual or other task stimuli combined with automated simultaneous recording of multi-channel behavioral and physiological response variables, all synchronized with sub-millisecond temporal accuracy. We also discuss the implementation and importance of real-time display feedback to ensure data quality of all recorded variables. Finally, we discuss standards and formats for storage of temporal covariate data and its integration into fMRI image analysis. These neuroinformatics methods have been adopted for behavioral task control at all sites in the Functional Biomedical Informatics Research Network (FBIRN) multi-center fMRI study. PMID:22232596
Formal methods demonstration project for space applications
NASA Technical Reports Server (NTRS)
Divito, Ben L.
1995-01-01
The Space Shuttle program is cooperating in a pilot project to apply formal methods to live requirements analysis activities. As one of the larger ongoing Shuttle Change Requests (CRs), the Global Positioning System (GPS) CR involves a significant upgrade to the Shuttle's navigation capability. Shuttles are to be outfitted with GPS receivers, and the primary avionics software will be enhanced to accept GPS-provided positions and integrate them into navigation calculations. Prior to implementing the CR, requirements analysts at Loral Space Information Systems, the Shuttle software contractor, must scrutinize the CR to identify and resolve any requirements issues. We describe an ongoing task of the Formal Methods Demonstration Project for Space Applications whose goal is to find an effective way to use formal methods in the GPS CR requirements analysis phase. This phase is currently under way, and a small team from NASA Langley, ViGYAN Inc. and Loral is now engaged in this task. Background on the GPS CR is provided and an overview of the hardware/software architecture is presented. We outline the approach being taken to formalize the requirements, only a subset of which is being attempted. The approach features the use of the PVS specification language to model 'principal functions', which are major units of Shuttle software. Conventional state machine techniques form the basis of our approach. Given this background, we present interim results based on a snapshot of work in progress. Samples of requirements specifications rendered in PVS are offered as illustrations. We walk through a specification sketch for the principal function known as GPS Receiver State processing. Results to date are summarized and feedback from Loral requirements analysts is highlighted. Preliminary data comparing issues detected by the formal methods team versus those detected using existing requirements analysis methods is shown. We conclude by discussing our plan to complete the remaining activities of this task.
ERIC Educational Resources Information Center
Hull, Daniel M.; Lovett, James E.
This task analysis report for the Robotics/Automated Systems Technician (RAST) curriculum project first provides a RAST job description. It then discusses the task analysis, including the identification of tasks, the grouping of tasks according to major areas of specialty, and the comparison of the competencies to existing or new courses to…
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.
Results of the first provisional technical secretariat interlaboratory comparison test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuff, J.R.; Hoffland, L.
1995-06-01
The principal task of this laboratory in the first Provisional Technical Secretariat (PTS) Interlaboratory Comparison Test was to verify and test the extraction and preparation procedures outlined in the Recommended Operating Procedures for Sampling and Analysis in the Verification of Chemical Disarmament, in addition to our laboratory extraction methods and our laboratory analysis methods. Sample preparation began on 16 May 1994 and analysis was completed on 12 June 1994. The analytical methods used included NMR (¹H and ³¹P), GC/AED, GC/MS (EI and methane CI), GC/IRD, HPLC/IC, HPLC/TSP/MS, MS/MS (Electrospray), and CZE.
A Novel Method for Characterizing Spacesuit Mobility Through Metabolic Cost
NASA Technical Reports Server (NTRS)
McFarland, Shane M.; Norcross, Jason R.
2014-01-01
Historically, spacesuit mobility has been characterized by directly measuring both range of motion and joint torque of individual anatomic joints. The work detailed herein aims to improve on this method, which is often prone to uncertainty, lack of repeatability, and a general lack of applicability to real-world functional tasks. Specifically, the goal of this work is to characterize suited mobility performance by directly measuring the metabolic performance of the occupant. Pilot testing was conducted in 2013, employing three subjects performing a range of functional tasks in two different suit prototypes, the Mark III and Z-1. Cursory analysis of the results shows the approach has merit, with consistent performance trends toward one suit over the other. Forward work includes the need to look at more subjects, a refined task set, and another suit in a different mass/mobility regime to validate the approach.
Artificial intelligence in radiology.
Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L
2018-05-17
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Applying Cognitive Work Analysis to Time Critical Targeting Functionality
2004-10-01
Cognitive Task Analysis (CTA), Human Factors, GUI, Graphical User Interface, Heuristic Evaluation...clear distinction between Cognitive Work Analysis (CWA) and Cognitive Task Analysis (CTA); therefore this document will refer to these
NASA Technical Reports Server (NTRS)
Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.
1993-01-01
A method of efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also results in a pronounced reduction of the quantity of computations, computer resources and manpower required for the task, while assuring the desired accuracy of the results.
Demixed principal component analysis of neural population data.
Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K
2016-04-12
Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.
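As a rough, hedged illustration of the demixing idea (not the full dPCA algorithm, which uses a regularized reconstruction objective): trial-averaged population activity can be marginalized over task parameters, and a principal axis extracted from each marginalization captures variance attributable to that parameter alone. All shapes below are synthetic:

```python
import numpy as np

# Toy illustration of the marginalization step behind demixing. X holds
# trial-averaged firing rates for N neurons, S stimuli, and T time bins.
rng = np.random.default_rng(1)
N, S, T = 50, 4, 100
X = rng.standard_normal((N, S, T))

X_mean = X.mean(axis=(1, 2), keepdims=True)
X_time = X.mean(axis=1, keepdims=True) - X_mean   # time-only variation
X_stim = X.mean(axis=2, keepdims=True) - X_mean   # stimulus-only variation

def top_axis(M):
    """First principal axis over neurons of one marginalization."""
    flat = M.reshape(M.shape[0], -1)
    u, s, vt = np.linalg.svd(flat, full_matrices=False)
    return u[:, 0]

w_time, w_stim = top_axis(X_time), top_axis(X_stim)  # "demixed" readouts
```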
Human-Automation Integration: Principle and Method for Design and Evaluation
NASA Technical Reports Server (NTRS)
Billman, Dorrit; Feary, Michael
2012-01-01
Future space missions will increasingly depend on integration of complex engineered systems with their human operators. It is important to ensure that the systems that are designed and developed do a good job of supporting the needs of the work domain. Our research investigates methods for needs analysis. We included analysis of work products (plans for regulation of the space station) as well as work processes (tasks using current software) in a case study of Attitude Determination and Control Officers (ADCO) planning work. This allows comparison of how well different designs match the structure of the work to be supported. Redesigned planning software that better matches the structure of the work was developed and experimentally assessed. The new prototype enabled substantially faster and more accurate performance in plan revision tasks. This success suggests that the approach to needs assessment and its use in design and evaluation is promising, and merits investigation in future research.
Multi-task learning for cross-platform siRNA efficacy prediction: an in-silico study.
Liu, Qi; Xu, Qian; Zheng, Vincent W; Xue, Hong; Cao, Zhiwei; Yang, Qiang
2010-04-10
Gene silencing using exogenous small interfering RNAs (siRNAs) is now a widespread molecular tool for gene functional study and new-drug target identification. The key mechanism in this technique is to design efficient siRNAs that are incorporated into the RNA-induced silencing complexes (RISC) to bind and interact with the mRNA targets, repressing their translation to proteins. Although considerable progress has been made in the computational analysis of siRNA binding efficacy, little joint analysis of different RNAi experiments conducted under different experimental scenarios has been done so far, even though such joint analysis is an important issue in cross-platform siRNA efficacy prediction. A collective analysis of RNAi mechanisms for different datasets and experimental conditions can often provide new clues for the design of potent siRNAs. An elegant multi-task learning paradigm for cross-platform siRNA efficacy prediction is proposed. Experimental studies were performed on a large dataset of siRNA sequences encompassing several RNAi experiments recently conducted by different research groups. By using our multi-task learning method, the synergy among different experiments is exploited and an efficient multi-task predictor for siRNA efficacy is obtained. The 19 most important biological features for siRNA were ranked according to their joint importance in multi-task learning. Furthermore, the hypothesis is validated that siRNA binding efficacies on different messenger RNAs (mRNAs) have different conditional distributions, so that multi-task learning can be conducted by viewing tasks at the "mRNA" level rather than at the "experiment" level. Such distribution diversity, derived from siRNAs bound to different mRNAs, helps indicate that the properties of the target mRNA have important implications for siRNA binding efficacy. The knowledge gained from our study provides useful insights on how to analyze various cross-platform RNAi data for uncovering their complex mechanism.
Assessment of methodologies for analysis of the Dungeness B accidental aircraft crash risk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaChance, Jeffrey L.; Hansen, Clifford W.
2010-09-01
The Health and Safety Executive (HSE) has requested Sandia National Laboratories (SNL) to review the aircraft crash methodology for nuclear facilities that is being used in the United Kingdom (UK). The scope of the work included a review of one method utilized in the UK for assessing the potential for accidental airplane crashes into nuclear facilities (Task 1) and a comparison of the UK methodology against similar International Atomic Energy Agency (IAEA), United States (US) Department of Energy (DOE), and US Nuclear Regulatory Commission (NRC) methods (Task 2). Based on the conclusions from Tasks 1 and 2, an additional Task 3 would provide an assessment of a site-specific crash frequency for the Dungeness B facility using one of the other methodologies. This report documents the results of Task 2. The comparison of the different methods was performed for the three primary contributors to aircraft crash risk at the Dungeness B site: airfield-related crashes, crashes below airways, and background crashes. The methods and data specified in each methodology were compared for each of these risk contributors, differences in the methodologies were identified, and the importance of these differences was qualitatively and quantitatively assessed. The bases for each of the methods and the data used were considered in this assessment process. A comparison of the treatment of the consequences of the aircraft crashes was not included in this assessment because the frequency of crashes into critical structures is currently low based on the existing Dungeness B assessment. Although the comparison found substantial differences between the UK and the three alternative methodologies (IAEA, NRC, and DOE), this assessment concludes that use of any of these alternative methodologies would not change the conclusions reached for the Dungeness B site. Performance of Task 3 is thus not recommended.
Investigation of Latent Traces Using Infrared Reflectance Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Schubert, Till; Wenzel, Susanne; Roscher, Ribana; Stachniss, Cyrill
2016-06-01
The detection of traces is a main task of forensics. Hyperspectral imaging is a promising method from which we expect to capture more fluorescence effects than with common forensic light sources. This paper shows that hyperspectral imaging is suited to the analysis of latent traces and extends the classical concept to the conservation of the crime scene for retrospective laboratory analysis. We examine specimens of blood, semen and saliva traces in several dilution steps, prepared on a cardboard substrate. As our key result, we successfully make latent traces visible up to a dilution factor of 1:8000. We can attribute most of the detectability to interference of electromagnetic light with the water content of the traces in the shortwave infrared region of the spectrum. In a classification task we use several dimensionality reduction methods (PCA and LDA) in combination with a Maximum Likelihood classifier, assuming normally distributed data. Further, we use Random Forest as a competitive approach. The classifiers retrieve the exact positions of labelled trace preparations up to the highest dilution and determine posterior probabilities. By modelling the classification task with a Markov Random Field we are able to integrate prior information about the spatial relation of neighbouring pixel labels.
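The classification pipeline described (PCA or LDA for dimensionality reduction, a maximum-likelihood classifier under a Gaussian assumption, and Random Forest as a competitor) maps directly onto standard tooling. A hedged sketch with synthetic stand-in spectra, using QDA as the Gaussian maximum-likelihood classifier; all shapes and parameters are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for per-pixel hyperspectral data: 600 pixels x 100
# bands, three trace classes (e.g., blood / semen / saliva).
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 100)) + np.repeat(np.eye(3), 200, axis=0) @ rng.random((3, 100))
y = np.repeat([0, 1, 2], 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA reduces dimensionality; QDA fits one Gaussian per class, i.e., a
# maximum-likelihood classifier under normally distributed data.
ml_pipe = make_pipeline(PCA(n_components=10), QuadraticDiscriminantAnalysis())
rf = RandomForestClassifier(n_estimators=200, random_state=0)

for name, clf in [('PCA + Gaussian ML', ml_pipe), ('Random Forest', rf)]:
    clf.fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 3))
```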
On temporal connectivity of PFC via Gauss-Markov modeling of fNIRS signals.
Aydöre, Sergül; Mihçak, M Kivanç; Ciftçi, Koray; Akin, Ata
2010-03-01
Functional near-infrared spectroscopy (fNIRS) is an optical imaging method which monitors brain activation by measuring successive changes in the concentration of oxy- and deoxyhemoglobin in real time. In this study, we present a method to investigate the functional connectivity of the prefrontal cortex (PFC) by applying a Gauss-Markov model to fNIRS signals. The hemodynamic changes in the PFC during the performance of a cognitive paradigm were measured by fNIRS for 17 healthy adults. The color-word matching Stroop task was performed to activate 16 different regions of the PFC. There are three different types of stimuli in this task: incongruent stimulus (IS), congruent stimulus (CS), and neutral stimulus (NS). We introduce a new measure, called the "information transfer metric" (ITM), for each time sample. The behavior of ITMs during IS is significantly different from that during CS and NS, which is consistent with previous research on fNIRS signal analysis using the color-word matching Stroop task. Our analysis shows that the functional connectivity of the PFC is highly relevant to cognitive load, i.e., functional connectivity increases with increasing cognitive load.
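A first-order Gauss-Markov model of multichannel signals is essentially a VAR(1) fit; the abstract's information transfer metric is built on such a model. A generic least-squares sketch (not the paper's specific ITM definition; the data are synthetic):

```python
import numpy as np

def fit_var1(X):
    """Fit a first-order Gauss-Markov (VAR(1)) model x_t = A x_{t-1} + e_t
    to multichannel data X of shape (T, C). Off-diagonal entries of A
    indicate directed temporal influence between channels; this is a
    generic sketch, not the paper's information-transfer metric itself.
    """
    past, present = X[:-1], X[1:]
    A, *_ = np.linalg.lstsq(past, present, rcond=None)
    return A.T   # A[i, j]: influence of channel j at t-1 on channel i at t

# Usage with synthetic data for 16 PFC channels:
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 16)).cumsum(axis=0)
A = fit_var1(X)
print(np.round(A[:3, :3], 2))
```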
An effective convolutional neural network model for Chinese sentiment analysis
NASA Astrophysics Data System (ADS)
Zhang, Yu; Chen, Mengdong; Liu, Lianzhong; Wang, Yadong
2017-06-01
Nowadays microblogging is more and more popular, and people are increasingly accustomed to expressing their opinions on Twitter, Facebook and Sina Weibo. Sentiment analysis of microblogs has received significant attention, both in academia and in industry, but Chinese microblog analysis still needs substantial further work. In recent years CNNs have also been used for NLP tasks and have already achieved good results. However, these methods ignore the effective use of a large number of existing sentiment resources. For this purpose, we propose a Lexicon-based Sentiment Convolutional Neural Network (LSCNN) model focused on Weibo sentiment analysis, which combines two CNNs, trained individually on sentiment features and on word embeddings, at the fully connected hidden layer. The experimental results show that our model outperforms a CNN model using only word-embedding features on the microblog sentiment analysis task.
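A hedged sketch of the two-branch architecture described: one convolutional branch over word embeddings, one over per-word lexicon features, merged at the fully connected layer. All dimensions, the vocabulary size, and the lexicon feature count are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class LexiconSentimentCNN(nn.Module):
    """Sketch of the LSCNN idea: a CNN over word embeddings and a CNN
    over lexicon sentiment features, merged at the fully connected layer.
    """
    def __init__(self, vocab_size=10000, emb_dim=128, lex_dim=4, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv_emb = nn.Conv1d(emb_dim, 100, kernel_size=3, padding=1)
        self.conv_lex = nn.Conv1d(lex_dim, 50, kernel_size=3, padding=1)
        self.fc = nn.Linear(100 + 50, n_classes)

    def forward(self, tokens, lex_feats):
        # tokens: (B, L) word ids; lex_feats: (B, L, lex_dim) lexicon scores
        e = self.emb(tokens).transpose(1, 2)                 # (B, emb_dim, L)
        l = lex_feats.transpose(1, 2)                        # (B, lex_dim, L)
        e = torch.relu(self.conv_emb(e)).max(dim=2).values   # global max pool
        l = torch.relu(self.conv_lex(l)).max(dim=2).values
        return self.fc(torch.cat([e, l], dim=1))             # merge at FC layer

model = LexiconSentimentCNN()
logits = model(torch.randint(0, 10000, (8, 40)), torch.rand(8, 40, 4))
```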
Evaluating a Computerized Aid for Conducting a Cognitive Task Analysis
2000-01-01
in conducting a cognitive task analysis. The conduct of a cognitive task analysis is costly and labor intensive. As a result, a few computerized aids...evaluation of a computerized aid, specifically CAT-HCI (Cognitive Analysis Tool - Human Computer Interface), for the conduct of a detailed cognitive task analysis. A
A Comparison of Two Methods Used for Ranking Task Exposure Levels Using Simulated Multi-Task Data
1999-12-17
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks of the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. Empirical mode decomposition (EMD) has emerged as a powerful new tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localization. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, and information extraction from nonlinear dynamic systems. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
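The core loop of iterative filtering is simple: repeatedly subtract a low-pass-filtered trend until a fast component stabilizes. In the sketch below a plain moving average stands in for the LSEK low-pass filter, so this illustrates the IFD mechanics only, not the paper's kernel:

```python
import numpy as np

def iterative_filtering(sig, window, n_iter=30):
    """Extract one intrinsic-mode-like component by iterative filtering:
    repeatedly subtract a local moving-average trend. A moving-average
    kernel is an assumption standing in for the LSEK low-pass filter.
    """
    kernel = np.ones(window) / window
    component = sig.astype(float).copy()
    for _ in range(n_iter):
        trend = np.convolve(component, kernel, mode='same')
        component = component - trend
    residual = sig - component
    return component, residual   # fast mode + slower remainder

# Usage: separate a fast oscillation from a slow one.
t = np.linspace(0, 1, 1000)
sig = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 2 * t)
fast, slow = iterative_filtering(sig, window=25)
```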
Using Apex To Construct CPM-GOMS Models
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2006-01-01
A process for automatically generating computational models of human/computer interactions, as well as graphical and textual representations of the models, has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of the goals, operators, methods, and selection (GOMS) analysis method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about the behavior of skilled computer users in routine tasks, but heretofore such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to model human behavior in complex, dynamic tasks. An inherent capability of Apex for scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model with measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM). For the test, a Visual Basic mockup of an ATM was created, with a provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model turned out to approximate the measured times fairly well. While these results are promising, there is a need for further development of the process. It will also be necessary to test other, more complex models: the actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.
An intelligent crowdsourcing system for forensic analysis of surveillance video
NASA Astrophysics Data System (ADS)
Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.
2015-03-01
Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed for video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video, including video recorded as part of search and rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish the crowd members based on their ability, experience and performance record. Our proposed system operates in an autonomous fashion and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.
Mutual information optimization for mass spectra data alignment.
Zoppis, Italo; Gianazza, Erica; Borsani, Massimiliano; Chinello, Clizia; Mainini, Veronica; Galbusera, Carmen; Ferrarese, Carlo; Galimberti, Gloria; Sorbi, Sandro; Borroni, Barbara; Magni, Fulvio; Antoniotti, Marco; Mauri, Giancarlo
2012-01-01
"Signal" alignments play critical roles in many clinical settings. This is the case for mass spectrometry data, an important component of many types of proteomic analysis. A central problem occurs when one needs to integrate (mass spectrometry) data produced by different sources, e.g., different equipment and/or laboratories. In these cases some form of "data integration" or "data fusion" may be necessary in order to discard source-specific aspects and improve the ability to perform a classification task such as inferring the "disease classes" of patients. The need for new high-performance data alignment methods is therefore particularly important in these contexts. In this paper we propose an approach based both on an information-theory perspective, generally used in feature construction problems, and on the application of a mathematical programming task (i.e., the weighted bipartite matching problem). We present the results of a competitive analysis of our method against other approaches. The analysis was conducted on plasma/ethylenediaminetetraacetic acid (EDTA) data from control and Alzheimer patients collected from three different hospitals. The results point to a significant performance advantage of our method with respect to the competing ones tested.
Baracat, Patrícia Junqueira Ferraz; de Sá Ferreira, Arthur
2013-12-01
The present study investigated the association between postural tasks and center of pressure spatial patterns of three-dimensional statokinesigrams. Young (n=35; 27.0±7.7 years) and elderly (n=38; 67.3±8.7 years) healthy volunteers maintained an undisturbed standing position during postural tasks characterized by combined sensory (vision/no vision) and biomechanical (feet apart/together) challenges. A method for the analysis of three-dimensional statokinesigrams based on nonparametric statistics and image-processing analysis was employed. Four patterns of spatial distribution were derived from ankle and hip strategies according to the quantity (single; double; multi) and location (anteroposterior; mediolateral) of high-density regions on three-dimensional statokinesigrams. Significant associations between postural task and spatial pattern were observed (young: gamma=0.548, p<.001; elderly: gamma=0.582, p<.001). Robustness analysis revealed small changes related to parameter choices for histogram processing. MANOVA revealed multivariate main effects for postural task [Wilks' Lambda=0.245, p<.001] and age [Wilks' Lambda=0.308, p<.001], with interaction [Wilks' Lambda=0.732, p<.001]. The quantity of high-density regions was positively correlated with stabilogram and statokinesigram variables (p<.05 or lower). In conclusion, postural tasks are associated with center of pressure spatial patterns, which are similar in young and elderly healthy volunteers. Single-centered patterns reflected more stable postural conditions and were more frequent with complete visual input and a wide base of support. Copyright © 2013 Elsevier B.V. All rights reserved.
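Counting high-density regions of a statokinesigram, the basis of the single/double/multi-centered patterns above, can be sketched with a smoothed 2D histogram and connected-component labeling. The bin count, smoothing, and density quantile are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

def density_regions(cop_ap, cop_ml, bins=40, quantile=0.90):
    """Count high-density regions of a center-of-pressure trajectory,
    mirroring the single/double/multi-centered pattern idea.

    cop_ap, cop_ml : anteroposterior and mediolateral COP samples.
    """
    hist, _, _ = np.histogram2d(cop_ap, cop_ml, bins=bins)
    hist = ndimage.gaussian_filter(hist, sigma=1.0)      # smooth the map
    mask = hist > np.quantile(hist[hist > 0], quantile)  # high-density cells
    labeled, n_regions = ndimage.label(mask)             # connected regions
    return n_regions

rng = np.random.default_rng(0)
print(density_regions(rng.standard_normal(2000), rng.standard_normal(2000)))
```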
Kotani, Kiyoshi; Takamasu, Kiyoshi; Tachibana, Makoto
2007-01-01
The objectives of this paper were to present a method to extract the amplitude of respiratory sinus arrhythmia (RSA) in the respiratory-phase domain, to compare it with subjective and objective indices of mental workload (MWL), and to compare it with conventional frequency analysis in terms of accuracy during a mental arithmetic task. HRV (heart rate variability), ILV (instantaneous lung volume), and motion of the throat were measured during a mental arithmetic experiment, and subjective and objective indices were also obtained. The amplitude of RSA was extracted in the respiratory-phase domain, and its correlation with the load level was compared with the results of frequency-domain analysis, which is the standard analysis of HRV. The subjective and objective indices decreased as the load level increased, showing that the experimental protocol was appropriate. The amplitude of RSA in the respiratory-phase domain also decreased with increasing load level. The correlation analysis showed that the respiratory-phase-domain analysis has stronger negative correlations with the load level, -0.84 and -0.82 as determined by simple correlation and rank correlation respectively, than does frequency analysis, for which the correlations were found to be -0.54 and -0.63, respectively. In addition, it was demonstrated that the proposed method can be applied to short-term extraction of RSA amplitude. We propose a simple and effective method to extract the amplitude of RSA in the respiratory-phase domain, and the results show that this method can estimate cardiac vagal activity more accurately than frequency analysis.
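A hedged sketch of the respiratory-phase-domain idea: measure the swing of the RR interval within each respiratory cycle, giving one amplitude estimate per breath. This is a simplification of the paper's method, and the input format is an assumption:

```python
import numpy as np

def rsa_amplitude(rr_times, rr_intervals, breath_onsets):
    """Per-breath RSA amplitude: the swing (max - min) of the RR interval
    within each respiratory cycle. A simplified respiratory-phase-domain
    sketch, not the paper's exact extraction procedure.

    rr_times      : times of each heartbeat interval (s, numpy array)
    rr_intervals  : RR interval durations (s, numpy array)
    breath_onsets : inspiration onset times defining the cycles (s)
    """
    amps = []
    for t0, t1 in zip(breath_onsets[:-1], breath_onsets[1:]):
        in_cycle = rr_intervals[(rr_times >= t0) & (rr_times < t1)]
        if len(in_cycle) >= 2:
            amps.append(in_cycle.max() - in_cycle.min())
    return np.array(amps)
```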
Estimation of phase derivatives using discrete chirp-Fourier-transform-based method.
Gorthi, Sai Siva; Rastogi, Pramod
2009-08-15
Estimation of phase derivatives is an important task in many interferometric measurements in optical metrology. This Letter introduces a method based on the discrete chirp-Fourier transform for accurate and direct estimation of phase derivatives, even in the presence of noise. The method is introduced in the context of the analysis of reconstructed interference fields in digital holographic interferometry. We present simulation and experimental results demonstrating the utility of the proposed method.
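The discrete chirp-Fourier transform correlates the signal with quadratic-phase exponentials; the peak over the (frequency, chirp-rate) grid directly estimates the local phase derivative and its rate of change. A small self-contained sketch of the transform itself (O(N²) grid, for illustration only):

```python
import numpy as np

def dcft(x):
    """Discrete chirp-Fourier transform: X[k, m] = sum_n x[n] *
    exp(-2j*pi*(k*n + m*n**2)/N). The magnitude peak over (k, m) gives
    the dominant frequency and chirp rate of the signal.
    """
    N = len(x)
    n = np.arange(N)
    K = np.exp(-2j * np.pi * np.outer(n, n) / N)   # plain DFT kernel (k*n)
    out = np.empty((N, N), dtype=complex)
    for m in range(N):                             # sweep chirp rates
        out[:, m] = K @ (x * np.exp(-2j * np.pi * m * n**2 / N))
    return out

# A unit-amplitude chirp yields a sharp peak at its (k0, m0) indices.
N, k0, m0 = 64, 5, 3
n = np.arange(N)
x = np.exp(2j * np.pi * (k0 * n + m0 * n**2) / N)
k_hat, m_hat = np.unravel_index(np.argmax(np.abs(dcft(x))), (N, N))
print(k_hat, m_hat)   # recovers (5, 3)
```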
Are we under-utilizing the talents of primary care personnel? A job analytic examination
Hysong, Sylvia J; Best, Richard G; Moore, Frank I
2007-01-01
Background Primary care staffing decisions are often made unsystematically, potentially leading to increased costs, dissatisfaction, turnover, and reduced quality of care. This article aims to (1) catalogue the domain of primary care tasks, (2) explore the complexity associated with these tasks, and (3) examine how tasks performed by different job titles differ in function and complexity, using Functional Job Analysis to develop a new tool for making evidence-based staffing decisions. Methods Seventy-seven primary care personnel from six US Department of Veterans Affairs (VA) Medical Centers, representing six job titles, participated in two-day focus groups to generate 243 unique task statements describing the content of VA primary care. Certified job analysts rated tasks on ten dimensions representing task complexity, skills, autonomy, and error consequence. Two hundred and twenty-four primary care personnel from the same clinics then completed a survey indicating whether they performed each task. Tasks were catalogued using an adaptation of an existing classification scheme; complexity differences were tested via analysis of variance. Results Objective one: Task statements were categorized into four functions: service delivery (65%), administrative duties (15%), logistic support (9%), and workforce management (11%). Objective two: Consistent with expectations, 80% of tasks received ratings at or below the mid-scale value on all ten scales. Objective three: Service delivery and workforce management tasks received higher ratings on eight of ten scales (multiple functional complexity dimensions, autonomy, human error consequence) than administrative and logistic support tasks. Similarly, tasks performed by more highly trained job titles received higher ratings on six of ten scales than tasks performed by lower trained job titles. Contrary to expectations, the distribution of tasks across functions did not significantly vary by job title. Conclusion Primary care personnel are not being utilized to the extent of their training; most personnel perform many tasks that could reasonably be performed by personnel with less training. Primary care clinics should use evidence-based information to optimize job-person fit, adjusting clinic staff mix and allocation of work across staff to enhance efficiency and effectiveness. PMID:17397534
Michels, Nele R M; Driessen, Erik W; Muijtjens, Arno M M; Van Gaal, Luc F; Bossaert, Leo L; De Winter, Benedicte Y
2009-12-01
A portfolio is used to mentor and assess students' clinical performance at the workplace. However, students and raters often perceive the portfolio as a time-consuming instrument. In this study, we investigated whether portfolio-based assessment during the medical internship can combine reliability with feasibility. The domain-oriented reliability of 61 double-rated portfolios was measured using a generalisability analysis, with portfolio tasks and raters as sources of variation in measuring the performance of a student. We obtained a reliability (Phi coefficient) of 0.87 with this internship portfolio containing 15 double-rated tasks. The generalisability analysis showed that an acceptable level of reliability (Phi = 0.80) was maintained when the number of portfolio tasks was decreased to 13 or 9, using one and two raters, respectively. Our study shows that a portfolio can be a reliable method for the assessment of workplace learning. The possibility of reducing the number of tasks or raters while maintaining a sufficient level of reliability suggests an increase in the feasibility of portfolio use for both students and raters.
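The decision-study logic behind these numbers follows the standard generalizability formula for the Phi coefficient: error variance shrinks as the numbers of tasks and raters grow. The variance components below are not the paper's estimates; they are assumptions back-solved to roughly reproduce the reported coefficients:

```python
# Simplified decision-study arithmetic for the Phi (dependability) coefficient.
# sigma2_* values are illustrative assumptions, not the study's estimates.
def phi(sigma2_p, sigma2_task, sigma2_rater, sigma2_res, nt, nr):
    # Phi = person variance / (person variance + absolute error variance),
    # with task/rater/residual error averaged over nt tasks and nr raters.
    error = sigma2_task / nt + sigma2_rater / nr + sigma2_res / (nt * nr)
    return sigma2_p / (sigma2_p + error)

components = (1.0, 1.0, 0.01, 2.5)
print(round(phi(*components, nt=15, nr=2), 2))  # ~0.87: 15 tasks, 2 raters
print(round(phi(*components, nt=9, nr=2), 2))   # ~0.80: 9 tasks, 2 raters
print(round(phi(*components, nt=13, nr=1), 2))  # ~0.78: 13 tasks, 1 rater
```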
Robot Acquisition of Active Maps Through Teleoperation and Vector Space Analysis
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II
2003-01-01
The work performed under this contract was in the area of intelligent robotics. The problem being studied was the acquisition of intelligent behaviors by a robot. The method was to acquire action maps that describe tasks as sequences of reflexive behaviors. Action maps (a.k.a. topological maps) are graphs whose nodes represent sensorimotor states and whose edges represent the motor actions that cause the robot to proceed from one state to the next. The maps were acquired by the robot after being teleoperated or otherwise guided by a person through a task several times. During a guided task, the robot records all its sensorimotor signals. The signals from several task trials are partitioned into episodes of static behavior. The corresponding episodes from each trial are averaged to produce a task description as a sequence of characteristic episodes. The sensorimotor states that indicate episode boundaries become the nodes, and the static behaviors, the edges. It was demonstrated that if compound maps are constructed from a set of tasks, the robot can perform new tasks on which it was never explicitly trained.
A Meta-analysis of Cerebellar Contributions to Higher Cognition from PET and fMRI studies
Keren-Happuch, E; Chen, Shen-Hsing Annabel; Ho, Moon-Ho Ringo; Desmond, John E.
2013-01-01
A growing interest in cerebellar function and its involvement in higher cognition has prompted much research in recent years. Cerebellar involvement has been observed across a wide range of cognitive functions in an increasing body of neuroimaging literature. We applied a meta-analytic approach, employing the activation likelihood estimate method, to consolidate results of cerebellar involvement accumulated across different cognitive tasks of interest and to systematically identify similarities among the studies. The current analysis included 88 neuroimaging studies demonstrating cerebellar activations in higher cognitive domains involving emotion, executive function, language, music, timing and working memory. While largely consistent with a prior meta-analysis by Stoodley and Schmahmann (2009), our results extended their findings to the music and timing domains to provide further insight into cerebellar involvement and elucidate its role in higher cognition. In addition, we conducted inter- and intra-domain comparisons for the cognitive domains of emotion, language and working memory. We also considered task differences within the domain of verbal working memory by comparing the Sternberg with the n-back task, as well as analyzing the differential components within the Sternberg task. Results showed a consistent cerebellar presence in the timing domain, providing evidence for a role in time keeping. Unique clusters identified within the domain further refine the topographic organization of the cerebellum. PMID:23125108
MANPRINT Methods Monograph: Aiding the Development of Manpower-Based System Evaluation
1989-06-01
zone below tree level where threats are known to be (the actual number of threats may vary). Weather conditions are VFR. The helicopter pops up to...matrix to analyze the data and identify task clusters. Outputs and Use of Cluster Analysis: 1. Hierarchical cluster tree (taxonomy) of system tasks will
ERIC Educational Resources Information Center
Fox, Mark C.; Ericsson, K. Anders; Best, Ryan
2011-01-01
Since its establishment, psychology has struggled to find valid methods for studying thoughts and subjective experiences. Thirty years ago, Ericsson and Simon (1980) proposed that participants can give concurrent verbal expression to their thoughts (think aloud) while completing tasks without changing objectively measurable performance (accuracy).…
Methods for comparing 3D surface attributes
NASA Astrophysics Data System (ADS)
Pang, Alex; Freeman, Adam
1996-03-01
A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, it burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometry is assumed to be identical and only the surface attributes (color, texture, etc.) are variable. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form-factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, and more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.
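The simplest of the comparison methods mentioned, mapping attribute differences to color, can be sketched per vertex as a diverging color scale. The color scheme and normalization below are illustrative choices, not the paper's:

```python
import numpy as np

def attribute_difference_colors(attr_a, attr_b):
    """Map per-vertex attribute differences between two renderings of the
    same surface to a diverging color scale. Returns RGB in [0, 1].
    """
    diff = np.asarray(attr_a, float) - np.asarray(attr_b, float)
    m = np.abs(diff).max() or 1.0
    t = (diff / m + 1.0) / 2.0            # normalize differences to [0, 1]
    # Blue where A < B, white where equal, red where A > B.
    r = np.minimum(1.0, 2.0 * t)
    b = np.minimum(1.0, 2.0 * (1.0 - t))
    g = np.minimum(r, b)
    return np.stack([r, g, b], axis=-1)

colors = attribute_difference_colors([0.2, 0.5, 0.9], [0.5, 0.5, 0.1])
```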
The Use of Rapid Review Methods for the U.S. Preventive Services Task Force.
Patnode, Carrie D; Eder, Michelle L; Walsh, Emily S; Viswanathan, Meera; Lin, Jennifer S
2018-01-01
Rapid review products are intended to synthesize available evidence in a timely fashion while still meeting the needs of healthcare decision makers. Various methods and products have been applied for rapid evidence syntheses, but no single approach has been uniformly adopted. Methods to gain efficiency and compress the review time period include focusing on a narrow clinical topic and key questions; limiting the literature search; performing single (versus dual) screening of abstracts and full-text articles for relevance; and limiting the analysis and synthesis. In order to maintain the scientific integrity, including transparency, of rapid evidence syntheses, it is imperative that procedures used to streamline standard systematic review methods are prespecified, based on sound review principles and empiric evidence when possible, and provide the end user with an accurate and comprehensive synthesis. The collection of clinical preventive service recommendations maintained by the U.S. Preventive Services Task Force, along with its commitment to rigorous methods development, provide a unique opportunity to refine, implement, and evaluate rapid evidence synthesis methods and add to an emerging evidence base on rapid review methods. This paper summarizes the U.S. Preventive Services Task Force's use of rapid review methodology, its criteria for selecting topics for rapid evidence syntheses, and proposed methods to streamline the review process. Copyright © 2018 American Journal of Preventive Medicine. All rights reserved.
ERIC Educational Resources Information Center
Oliver, Rhonda; Grote, Ellen; Rochecouste, Judith; Exell, Michael
2013-01-01
While needs analyses underpin the design of second language analytic syllabi, the methodologies undertaken are rarely examined. This paper explores the value of multiple data sources and collection methods for developing a needs analysis model to enable vocational education and training teachers to address the needs of Australian Aboriginal…
ERIC Educational Resources Information Center
Chang, Liang-Te; And Others
A study was conducted to develop electronic technical competencies through duty and task analysis, using a revised DACUM (Developing a Curriculum) method, a questionnaire survey, and a fuzzy synthesis operation. The revised DACUM process relied on inviting electronics trade professionals to analyze electronic technology for entry-level…
Magnetoencephalographic Analysis of Cortical Activity in Adults with and without Down Syndrome
ERIC Educational Resources Information Center
Virji-Babul, N.; Cheung, T.; Weeks, D.; Herdman, A. T.; Cheyne, D.
2007-01-01
Background: This preliminary study served as a pilot for an ongoing analysis of spectral power in adults with Down syndrome (DS) using a 151-channel whole-head magnetoencephalography (MEG) system. The present study is the first step in examining and comparing cortical responses during spontaneous and task-related activity in DS. Method: Cortical…
ERIC Educational Resources Information Center
Chen, Zhe; Honomichl, Ryan; Kennedy, Diane; Tan, Enda
2016-01-01
The present study examines 5- to 8-year-old children's relation reasoning in solving matrix completion tasks. This study incorporates a componential analysis, an eye-tracking method, and a microgenetic approach, which together allow an investigation of the cognitive processing strategies involved in the development and learning of children's…
Front-End Analysis Methods for the Noncommissioned Officer Education System
2013-02-01
The Noncommissioned Officer Education System plays a crucial role in Soldier development by providing both institutional training and structured-self...created challenges with maintaining currency of institutional training. Questions have arisen regarding the optimal placement of tasks as their...relevance changes, especially considering the resources required to update institutional training. An analysis was conducted to identify the
Richards, Thomas P; Arditi, Aries; da Cruz, Lyndon; Dagnelie, Gislin; Dorn, Jessy D; Duncan, Jacque L; Ho, Allen C; Olmos de Koo, Lisa C; Sahel, José‐Alain; Stanga, Paulo E; Thumann, Gabriele; Wang, Vizhong; Greenberg, Robert J
2016-01-01
Objective The purpose of this analysis was to compare observer-rated tasks in patients implanted with the Argus II Retinal Prosthesis System when the device is ON versus OFF. Methods The Functional Low-Vision Observer Rated Assessment (FLORA) instrument was administered to 26 blind patients implanted with the Argus II Retinal Prosthesis System at a mean follow-up of 36 months. FLORA is a multi-component instrument that consists in part of observer-rated assessment of 35 tasks completed with the device ON versus OFF. The ease with which a patient completes a task is scored using a four-point scale, ranging from easy (score of 1) to impossible (score of 4). The tasks are evaluated individually and organised into four discrete domains: 'Visual orientation', 'Visual mobility', 'Daily life' and 'Interaction with others'. Results Twenty-six patients completed each of the 35 tasks. Overall, 24 out of 35 tasks (69 per cent) were statistically significantly easier to achieve with the device ON versus OFF. In each of the four domains, patients' performances were significantly better (p < 0.05) with the device ON versus OFF, ranging from 19 to 38 per cent improvement. Conclusion Patients with an Argus II Retinal Prosthesis implanted for 18 to 44 months (mean 36 months) demonstrated significantly improved completion of vision-related tasks with the device ON versus OFF. PMID:26804484
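The per-task ON-versus-OFF comparison on a four-point ordinal scale lends itself to a paired nonparametric test. A hedged sketch with simulated scores; the study's actual statistical procedure is not specified in the abstract, so the test choice here is an assumption:

```python
import numpy as np
from scipy.stats import wilcoxon

# Simulated ease scores for one task across 26 patients: 1 (easy) to
# 4 (impossible), paired device-OFF vs device-ON. Not the study's data.
rng = np.random.default_rng(0)
scores_off = rng.integers(2, 5, size=26)                       # harder OFF
scores_on = np.clip(scores_off - rng.integers(0, 3, size=26), 1, 4)

# Paired Wilcoxon signed-rank test; 'zsplit' keeps zero differences.
stat, p = wilcoxon(scores_off, scores_on, zero_method='zsplit')
print(f'ON easier than OFF: p = {p:.3f}')
```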
Beuscart-Zéphir, Marie-Catherine; Pelayo, Sylvia; Bernonville, Stéphanie
2010-04-01
This paper describes a Human Factors approach to the introduction of a medication CPOE (computerized physician order entry) system, in which the implementation of such a complex IT solution is considered a major redesign of the work system. The paper describes the Human Factors (HF) tasks embedded in the project lifecycle: (1) analysis and modelling of the current work system and usability assessment of the medication CPOE solution; and (2) HF recommendations for work redesign and usability recommendations for IT system re-engineering, both aiming at a safer and more efficient work situation. Standard ethnographic methods were used to support the analysis of the current work system and work situations, coupled with cognitive task analysis methods and document review. Usability inspection (heuristic evaluation) and both in-lab (simulated tasks) and on-site (real tasks) usability tests were performed for the evaluation of the CPOE candidate. Adapted software engineering models were used in combination with the usual textual descriptions, task models and mock-ups to support the recommendations for work and product redesign. The analysis of the work situations identified different work organisations and procedures across the hospital's departments. The most important differences concerned doctor-nurse communication and cooperation modes and the procedures for preparing and administering medications. The assessment of the medication CPOE functions uncovered a number of usability problems, including severe ones leading to errors that are impossible to detect or catch. Models of the actual and possible distribution of tasks and roles were used to support decision making in the work design process. The results of the usability assessment were translated into requirements to support the necessary re-engineering of the IT application. The HFE approach to medication CPOE efficiently identifies and distinguishes currently unsafe or uncomfortable work situations that could obviously benefit from an IT solution from other work situations incorporating efficient work procedures that might be impaired by the implementation of the CPOE. In this context, a careful redesign of the work situation and of the entire work system is necessary to actually benefit from the installation of the product in terms of patient safety and human performance. In parallel, a usability assessment of the product to be implemented is mandatory to identify potentially dangerous usability flaws and to fix them before installation. (c) 2009 Elsevier Ireland Ltd. All rights reserved.
A job analysis of care helpers
Choi, Kyung-Sook; Jeong, Seungeun; Kim, Seulgee; Park, Hyeung-Keun; Seok, Jae Eun
2012-01-01
The aim of this study was to examine the roles of care helpers through job analysis. To do this, the study used the Developing A Curriculum Method (DACUM) to classify job content, and a multi-dimensional study design was applied to identify roles and create a job description by examining the appropriateness, significance, frequency, and difficulty of job content identified through workshops and cross-sectional surveys conducted for appropriateness verification. A total of 418 care helpers working in nursing facilities and community senior service facilities across the country were surveyed. The collected data were analyzed using PASW 18.0 software. Six duties and 18 tasks were identified based on the job model. Most tasks were found to be "important tasks", scoring 4.0 points or above. Physical care duties, elimination care, position changing and movement assistance, feeding assistance, and safety care were identified as high-frequency tasks. The most difficult tasks were emergency prevention, early detection, and speedy reporting. In summary, the job of care helpers is to provide physical, emotional, housekeeping, and daily activity assistance to elderly patients who have problems independently undertaking daily activities due to physical or mental causes, in long-term care facilities or at the client's home. The results of this study suggest a task-focused revision of the current standard teaching materials authorized by the Ministry of Health and Welfare: supplementing content that was identified as task elements but is not included in the current teaching materials, and fully reflecting the actual frequency and difficulty of tasks. PMID:22323929
Effects of Alcohol on Performance on a Distraction Task During Simulated Driving
Allen, Allyssa J.; Meda, Shashwath A.; Skudlarski, Pawel; Calhoun, Vince; Astur, Robert; Ruopp, Kathryn C.; Pearlson, Godfrey D.
2009-01-01
Background Prior studies report that accidents involving intoxicated drivers are more likely to occur during performance of secondary tasks. We studied this phenomenon, using a dual-task paradigm, involving performance of a visual oddball (VO) task while driving in an alcohol challenge paradigm. Previous functional MRI (fMRI) studies of the VO task have shown activation in the anterior cingulate, hippocampus, and prefrontal cortex. Thus, we predicted dose-dependent decreases in activation of these areas during VO performance. Methods Forty healthy social drinkers were administered 3 different doses of alcohol, individually tailored to their gender and weight. Participants performed a VO task while operating a virtual reality driving simulator in a 3T fMRI scanner. Results Analysis showed a dose-dependent linear decrease in Blood Oxygen Level Dependent activation during task performance, primarily in hippocampus, anterior cingulate, and dorsolateral prefrontal areas, with the least activation occurring during the high dose. Behavioral analysis showed a dose-dependent linear increase in reaction time, with no effects associated with either correct hits or false alarms. In all dose conditions, driving speed decreased significantly after a VO stimulus. However, at the high dose this decrease was significantly less. Passenger-side line crossings significantly increased at the high dose. Conclusions These results suggest that driving impairment during secondary task performance may be associated with alcohol-related effects on the above brain regions, which are involved with attentional processing/decision-making. Drivers with high blood alcohol concentrations may be less able to orient or detect novel or sudden stimuli during driving. PMID:19183133
Machine Learning Methods for Production Cases Analysis
NASA Astrophysics Data System (ADS)
Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.
2018-03-01
An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of the internal production network data were used for training and testing the applied models. The k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics, such as precision, recall, and accuracy.
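As a concrete illustration of the classification pipeline the abstract describes, the following minimal Python sketch trains k-Nearest Neighbors and Random Forest classifiers and reports precision, recall, and accuracy. The features and labels are simulated stand-ins, since the production-network descriptors themselves are not published.

```python
# Hypothetical sketch: simulated stand-ins for production-network descriptors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # descriptors of production events
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hazard / no-hazard labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: precision={precision_score(y_te, y_hat):.3f} "
          f"recall={recall_score(y_te, y_hat):.3f} "
          f"accuracy={accuracy_score(y_te, y_hat):.3f}")
```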
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1991-01-01
New methods were developed for efficient aeroservoelastic analysis and optimization. The main target was to develop a method for investigating large structural variations using a single set of modal coordinates. This task was accomplished by basing the structural modal coordinates on normal modes calculated with a set of fictitious masses loading the locations of anticipated structural changes. The following subject areas are covered: (1) modal coordinates for aeroelastic analysis with large local structural variations; and (2) time simulation of flutter with large stiffness changes.
Evaluation of a novel Serious Game based assessment tool for patients with Alzheimer's disease.
Vallejo, Vanessa; Wyss, Patric; Rampa, Luca; Mitache, Andrei V; Müri, René M; Mosimann, Urs P; Nef, Tobias
2017-01-01
Despite growing interest in developing ecological assessments of the difficulties experienced by patients with Alzheimer's disease, methods assessing the cognitive difficulties related to functional activities are missing. To complement current evaluation, Serious Games are a promising approach, as they offer the possibility to recreate a virtual environment with daily living activities alongside a precise and complete cognitive evaluation. The aim of the present study was to evaluate the usability and the screening potential of a new ecological tool for the assessment of cognitive functions in patients with Alzheimer's disease. Eighteen patients with Alzheimer's disease and twenty healthy controls participated in the study. They were asked to complete six daily living virtual tasks assessing several cognitive functions, following a one-day scenario: three navigation tasks, one shopping task, one cooking task, and one table preparation task. Usability of the game was evaluated through a questionnaire and through the analysis of computer interactions for the two groups. Furthermore, performance in terms of time to complete each task and percentage of completion was recorded. Results indicate that both groups subjectively found the game user-friendly and were objectively able to play it without computer interaction difficulties. Comparison of performance between the two groups indicated significant differences in the percentage of task achievement and in the time needed to complete the tasks. This study suggests that this new Serious Game based assessment tool is a user-friendly and ecological method to evaluate the cognitive abilities underlying the difficulties patients can encounter in daily living activities, and that it can be used as a screening tool, as it distinguished the performance of Alzheimer's patients from that of healthy controls.
TARGET - TASK ANALYSIS REPORT GENERATION TOOL, VERSION 1.0
NASA Technical Reports Server (NTRS)
Ortiz, C. J.
1994-01-01
The Task Analysis Report Generation Tool, TARGET, is a graphical interface tool used to capture procedural knowledge and translate that knowledge into a hierarchical report. TARGET is based on VISTA, a knowledge acquisition tool developed by the Naval Systems Training Center. TARGET assists a programmer and/or task expert in organizing and understanding the steps involved in accomplishing a task. The user can label individual steps in the task through a dialogue box and get immediate graphical feedback for analysis. TARGET users can decompose tasks into basic action kernels or minimal steps to provide a clear picture of all basic actions needed to accomplish a job. This method allows the user to go back and critically examine the overall flow and makeup of the process. The user can switch between graphics (box flow diagrams) and text (task hierarchy) versions to more easily study the process being documented. As the practice of decomposition continues, tasks and their subtasks can be continually modified to more accurately reflect the user's procedures and rationale. This program is designed to help a programmer document an expert's task, thus allowing the programmer to build an expert system which can help others perform the task. Flexibility is a key element of the system design and of the knowledge acquisition session. If the expert is not able to find time to work on the knowledge acquisition process with the program developer, the developer and subject matter expert may work in iterative sessions. TARGET is easy to use and is tailored to accommodate users ranging from the novice to the experienced expert systems builder. TARGET is written in C-language for IBM PC series and compatible computers running MS-DOS and Microsoft Windows version 3.0 or 3.1. No source code is supplied. The executable also requires 2Mb of RAM, a Microsoft compatible mouse, a VGA display and an 80286, 386 or 486 processor machine. The standard distribution medium for TARGET is one 5.25 inch 360K MS-DOS format diskette. TARGET was developed in 1991.
Model-based segmentation of hand radiographs
NASA Astrophysics Data System (ADS)
Weiler, Frank; Vogelsang, Frank
1998-06-01
An important procedure in pediatrics is to determine the skeletal maturity of a patient from radiographs of the hand. There is great interest in the automation of this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows the assessment of skeletal maturity with an appropriate database of reference bones, similar to atlas-based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a-priori knowledge of the shape and topology of the bones in an additional energy term. This 'scene knowledge' is integrated in a complex hierarchical image model that is used for the image analysis task.
NASA Astrophysics Data System (ADS)
Chen, Jie; Hu, Jiangnan
2017-06-01
Industry 4.0 and lean production have become the focus of manufacturing. A current issue is to analyse the performance of assembly line balancing. This study focuses on distinguishing the factors influencing assembly line balancing. The one-way ANOVA method is applied to explore the significance of the distinguished factors, and a regression model is built to find the key points. The maximal task time (tmax), the quantity of tasks (n), and the degree of convergence of the precedence graph (conv) are critical for the performance of assembly line balancing. These conclusions can support lean production in manufacturing.
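To make the statistical workflow concrete, here is a minimal Python sketch of a one-way ANOVA over groups and an OLS regression on the three factors named in the abstract (tmax, n, conv). The data and coefficients are simulated for illustration only.

```python
# Illustrative only: simulated line-balancing data with the factors from the abstract.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "tmax": rng.uniform(1, 10, 200),    # maximal task time
    "n": rng.integers(10, 60, 200),     # quantity of tasks
    "conv": rng.uniform(0, 1, 200),     # convergence of the precedence graph
})
df["perf"] = (0.9 - 0.02 * df["tmax"] - 0.003 * df["n"]
              + 0.1 * df["conv"] + rng.normal(0, 0.02, 200))

# One-way ANOVA: does performance differ across low/medium/high tmax terciles?
groups = [g["perf"].to_numpy() for _, g in df.groupby(pd.qcut(df["tmax"], 3))]
print(stats.f_oneway(*groups))

# Regression model linking the factors to balancing performance
X = sm.add_constant(df[["tmax", "n", "conv"]])
print(sm.OLS(df["perf"], X).fit().params)
```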
Characterizing Task-Based OpenMP Programs
Muddukrishna, Ananya; Jonsson, Peter A.; Brorsson, Mats
2015-01-01
Programmers struggle to understand performance of task-based OpenMP programs since profiling tools only report thread-based performance. Performance tuning also requires task-based performance in order to balance per-task memory hierarchy utilization against exposed task parallelism. We provide a cost-effective method to extract detailed task-based performance information from OpenMP programs. We demonstrate the utility of our method by quickly diagnosing performance problems and characterizing exposed task parallelism and per-task instruction profiles of benchmarks in the widely-used Barcelona OpenMP Tasks Suite. Programmers can tune performance faster and understand performance tradeoffs more effectively than existing tools by using our method to characterize task-based performance. PMID:25860023
Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network.
Li, Yuexiang; Shen, Linlin
2018-02-11
Skin lesions are a severe global health problem. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, accurate recognition of melanoma is extremely challenging due to low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, and other factors. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks in the area of skin lesion image processing: lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2), and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and a coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracy of our frameworks: 0.753 for task 1, 0.848 for task 2, and 0.912 for task 3.
NASA Astrophysics Data System (ADS)
Wang, Chia-Yu
2015-08-01
The purpose of this study was to use multiple assessments to investigate the general versus task-specific characteristics of metacognition in dissimilar chemistry topics. This mixed-methods approach investigated the nature of undergraduate general chemistry students' metacognition using four assessments: a self-report questionnaire, assessment of concurrent metacognitive skills, confidence judgment, and calibration accuracy. Data were analyzed using a multitrait-multimethod correlation matrix, supplemented with regression analyses and qualitative interpretation. Significant correlations among task performance, calibration accuracy, and concurrent metacognition within a task suggest a converging relationship. Confidence judgment, however, was not associated with task performance or the other metacognitive measurements. The results partially support hypotheses of both general and task-specific metacognition. However, general and task-specific properties of metacognition were detected using different assessments. Case studies were constructed for two participants to illustrate how concurrent metacognition varied with different task demands. Considerations of how each assessment may tap different metacognitive constructs, and the importance of aligning analytical constructs when using multiple assessments, are discussed. These results may help lead to improvements in metacognition assessment and may provide insights into the design of effective metacognitive instruction.
Using Cognitive Task Analysis and Eye Tracking to Understand Imagery Analysis
2006-01-01
Using Cognitive Task Analysis and Eye Tracking to Understand Imagery Analysis. Laura Kurland, Abigail Gertner, Tom Bartee, Michael Chisholm and...have used these to study the analysts' search behavior in detail. 2 EXPERIMENT Using a Cognitive Task Analysis (CTA) framework for knowledge...
A naturalistic study of railway controllers.
Farrington-Darby, T; Wilson, John R; Norris, B J; Clarke, Theresa
There is an increasing tendency for work to be analysed through naturalistic study, especially using ethnographically derived methods of enquiry and qualitative field research. The relatively unexplored domain of railway control (in comparison to signalling) in the UK is described in terms of features derived from observations and semi-structured interviews. In addition, task diagrams (a technique taken from the Applied Cognitive Task Analysis toolkit) are used to represent the core elements of controllers' work, i.e. managing events or incidents, and to identify the challenging steps in the process. The work features identified, the task diagrams, and the steps identified as challenging form a basis from which future ergonomics studies on railway controllers in the UK will be carried out.
Assessing ergonomic risks of software: Development of the SEAT.
Peres, S Camille; Mehta, Ranjana K; Ritchey, Paul
2017-03-01
Software utilizing interaction designs that require extensive dragging or clicking of icons may increase users' risk of upper extremity cumulative trauma disorders. The purpose of this research was to develop a Self-report Ergonomic Assessment Tool (SEAT) for assessing the risks of software interaction designs and facilitating mitigation of those risks. A 28-item self-report measure was developed by combining and modifying items from existing industrial ergonomic tools. Data were collected from 166 participants after they completed four different tasks that varied by method of input (touch, or keyboard and mouse) and type of task (selecting or typing). Principal component analysis found distinct factors associated with stress (i.e., demands) and strain (i.e., response). Repeated-measures analyses of variance showed that participants could discriminate the strain induced by the different input methods and tasks. However, participants' ability to discriminate between the stressors associated with that strain was mixed. Further validation of the SEAT is necessary, but these results indicate that it may be a viable method of assessing the ergonomic risks presented by software design.
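As a sketch of how principal component analysis can separate stress and strain item clusters in a questionnaire like the SEAT, the following Python fragment simulates 28 items loading onto two latent factors; the item structure is an assumption, not the published instrument.

```python
# Simulated stand-in for 28 SEAT items completed by 166 participants.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
stress = rng.normal(size=(166, 1))   # latent "demand" factor
strain = rng.normal(size=(166, 1))   # latent "response" factor
items = np.hstack([stress + 0.5 * rng.normal(size=(166, 14)),
                   strain + 0.5 * rng.normal(size=(166, 14))])

pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(items))
print("explained variance ratios:", pca.explained_variance_ratio_)
print("loadings shape:", pca.components_.shape)   # (2, 28): which items load where
```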
Learning About Cockpit Automation: From Piston Trainer to Jet Transport
NASA Technical Reports Server (NTRS)
Casner, Stephen M.
2003-01-01
Two experiments explored the idea of providing cockpit automation training to airline-bound student pilots using cockpit automation equipment commonly found in small training airplanes. In the first experiment, pilots mastered a set of tasks and maneuvers using a GPS navigation computer, autopilot, and flight director system installed in a small training airplane. Students were then tested on their ability to complete a similar set of tasks using the cockpit automation system found in a popular jet transport aircraft. Pilots were able to successfully complete 77% of all tasks in the jet transport on their first attempt. An analysis of a control group suggests that the pilots' success was attributable to the application of automation principles they had learned in the small airplane. A second experiment looked at two different ways of delivering small-airplane cockpit automation training: a self-study method and a dual-instruction method. The results showed a slight advantage for the self-study method. Overall, the results of the two studies cast a strong vote for the incorporation of cockpit automation training in curricula designed for pilots who will later transition to the jet fleet.
Radar analysis of free oscillations of rail for diagnostics defects
NASA Astrophysics Data System (ADS)
Shaydurov, G. Y.; Kudinov, D. S.; Kokhonkova, E. A.; Potylitsyn, V. S.
2018-05-01
One of the tasks in developing and implementing flaw detection devices is minimizing the influence of the human factor in their operation. At present, rail inspection systems do not provide sufficient inspection depth, and ultrasonic diagnostic systems require contact between the sensor and the surface being studied, which leads to low productivity. The article gives a comparative analysis of existing non-contact methods of flaw detection and offers a contactless diagnostic method based on the excitation of acoustic waves and the extraction of information about defects from the frequencies of free rail oscillations using the radar method.
Initial Development of an E-cigarette Purchase Task: A Mixed Methods Study
Cassidy, Rachel N.; Tidey, Jennifer W.; Colby, Suzanne M.; Long, Victoria; Higgins, Stephen T.
2017-01-01
Objectives Behavioral economic purchase tasks, which estimate demand for drugs, have been successfully developed for cigarettes and are widely used. However, a validated purchase task does not yet exist for e-cigarettes. The aim of this project was to identify the relevant units for an e-cigarette purchase task (E-CPT). Methods Focus groups (N=28 participants in 7 groups, 2-7 participants per group) consisting of current e-cigarette users were conducted. Participants discussed their daily use patterns, completed a preliminary E-CPT which asked how many puffs of their e-cigarette they would consume per day at escalating prices, and discussed the extent to which the task accurately reflected their real-world behavior. Groups were recorded and transcribed; analysis focused on statements related to daily consumption and the E-CPT. Results Participants were unlikely to quantify their daily use in terms of puffs, and perceptions about the appropriate unit for an E-CPT varied across device type. Users of first-generation devices (e.g., cigalikes) reported that the relevant unit was the individual device/cartridge; however, participants who purchased nicotine liquid for their device emphasized that e-liquid volume in milliliters would better reflect their use. Conclusions Multiple versions of the E-CPT may be necessary to provide valid measures of e-cigarette demand. PMID:28824938
Dual Arm Work Package performance estimates and telerobot task network simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draper, J.V.; Blair, L.M.
1997-02-01
This paper describes the methodology and results of a network simulation study of the Dual Arm Work Package (DAWP), to be employed for dismantling the Argonne National Laboratory CP-5 reactor. The development of the simulation model was based upon the results of a task analysis for the same system. This study was performed by the Oak Ridge National Laboratory (ORNL), in the Robotics and Process Systems Division. Funding was provided by the US Department of Energy's Office of Technology Development, Robotics Technology Development Program (RTDP). The RTDP is developing methods of computer simulation to estimate telerobotic system performance. Data were collected to provide point estimates to be used in a task network simulation model. Three skilled operators performed six repetitions of a pipe cutting task representative of typical teleoperation cutting operations.
NASA Astrophysics Data System (ADS)
Zhang, Lei; Sun, Jinyan; Sun, Bailei; Luo, Qingming; Gong, Hui
2014-05-01
Near-infrared spectroscopy (NIRS) is a developing and promising functional brain imaging technology. Developing data analysis methods to effectively extract meaningful information from collected data is the major bottleneck in popularizing this technology. In this study, we measured hemodynamic activity of the prefrontal cortex (PFC) during a color-word matching Stroop task using NIRS. Hemispheric lateralization was examined by employing traditional activation and novel NIRS-based connectivity analyses simultaneously. Wavelet transform coherence was used to assess intrahemispheric functional connectivity. Spearman correlation analysis was used to examine the relationship between behavioral performance and activation/functional connectivity, respectively. In agreement with activation analysis, functional connectivity analysis revealed leftward lateralization for the Stroop effect and correlation with behavioral performance. However, functional connectivity was more sensitive than activation for identifying hemispheric lateralization. Granger causality was used to evaluate the effective connectivity between hemispheres. The results showed increased information flow from the left to the right hemispheres for the incongruent versus the neutral task, indicating a leading role of the left PFC. This study demonstrates that the NIRS-based connectivity can reveal the functional architecture of the brain more comprehensively than traditional activation, helping to better utilize the advantages of NIRS.
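For readers unfamiliar with the effective-connectivity step, the sketch below runs a Granger-causality test on two simulated hemispheric signals using statsmodels; the lag structure and signals are invented and do not reproduce the authors' NIRS pipeline.

```python
# Toy example: does a left-PFC signal Granger-cause a right-PFC signal?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 500
left = rng.normal(size=n)
right = np.zeros(n)
for t in range(2, n):   # right hemisphere follows the left with a 2-sample lag
    right[t] = 0.6 * left[t - 2] + 0.2 * right[t - 1] + rng.normal(scale=0.5)

# Column convention: tests whether column 1 (left) Granger-causes column 0 (right)
results = grangercausalitytests(np.column_stack([right, left]), maxlag=3, verbose=False)
for lag, (tests, _) in results.items():
    print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.4g}")
```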
Blackboard architecture for medical image interpretation
NASA Astrophysics Data System (ADS)
Davis, Darryl N.; Taylor, Christopher J.
1991-06-01
There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models, of feature appearance and location, to be built from examples as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise and test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.
Parametric analysis of closed cycle magnetohydrodynamic (MHD) power plants
NASA Technical Reports Server (NTRS)
Owens, W.; Berg, R.; Murthy, R.; Patten, J.
1981-01-01
A parametric analysis of closed cycle MHD power plants was performed which studied the technical feasibility, associated capital cost, and cost of electricity for the direct combustion of coal or coal-derived fuel. Three reference plants, differing primarily in the method of coal conversion utilized, were defined. Reference Plant 1 used direct coal-fired combustion, while Reference Plants 2 and 3 employed on-site integrated gasifiers. Reference Plant 2 used a pressurized gasifier while Reference Plant 3 used a 'state of the art' atmospheric gasifier. Thirty plant configurations were considered by using parametric variations from the Reference Plants. Parametric variations include the type of coal (Montana Rosebud or Illinois No. 6), clean-up systems (hot or cold gas clean-up), one- or two-stage atmospheric or pressurized direct-fired coal combustors, and six different gasifier systems. Plant sizes ranged from 100 to 1000 MWe. Overall plant performance was calculated using two methodologies. In one task, the channel performance was assumed and the MHD topping cycle efficiencies were based on the assumed values. A second task involved rigorous calculations of channel performance (enthalpy extraction, isentropic efficiency, and generator output) that verified the original (task one) assumptions. Closed cycle MHD capital costs were estimated for the task one plants; task two cost estimates were made for the channel and magnet only.
Krüger, Melanie; Hinder, Mark R; Puri, Rohan; Summers, Jeffery J
2017-01-01
Objectives: The aim of this study was to investigate how age-related performance differences in a visuospatial sequence learning task relate to age-related declines in cognitive functioning. Method: Cognitive functioning of 18 younger and 18 older participants was assessed using a standardized test battery. Participants then undertook a perceptual visuospatial sequence learning task. Various relationships between sequence learning and participants' cognitive functioning were examined through correlation and factor analysis. Results: Older participants exhibited significantly lower performance than their younger counterparts in the sequence learning task as well as in multiple cognitive functions. Factor analysis revealed two independent subsets of cognitive functions associated with performance in the sequence learning task, related to either the processing and storage of sequence information (first subset) or problem solving (second subset). Age-related declines were only found for the first subset of cognitive functions, which also explained a significant degree of the performance differences in the sequence learning task between age-groups. Discussion: The results suggest that age-related performance differences in perceptual visuospatial sequence learning can be explained by declines in the ability to process and store sequence information in older adults, while a set of cognitive functions related to problem solving mediates performance differences independent of age.
Toosizadeh, Nima; Najafi, Bijan; Reiman, Eric M.; Mager, Reine M.; Veldhuizen, Jaimeson K.; O’Connor, Kathy; Zamrini, Edward; Mohler, Jane
2016-01-01
Background: Difficulties in orchestrating simultaneous tasks (i.e., dual-tasking) have been associated with cognitive impairments in older adults. Gait tests have been commonly used as the motor task component of dual-task assessments; however, many older adults have mobility impairments, and busy clinical settings often lack space. We assessed an upper-extremity function (UEF) test as an alternative motor task for studying dual-task motor performance in older adults. Methods: Older adults (≥65 years) were recruited, and cognitive ability was measured using the Montreal cognitive assessment (MoCA). Participants performed repetitive elbow flexion at their maximum pace, once as a single task and once while counting backward by ones (dual-task). Single- and dual-task gait tests were also performed at normal speed. Three-dimensional kinematics were measured from both the upper and lower extremities using wearable sensors to determine UEF and gait parameters. Parameters were compared between the cognitively impaired and healthy groups using analysis of variance tests, while controlling for age, gender, and body mass index (BMI). Correlations between UEF and gait parameters for the dual-task condition and the dual-task cost were assessed using linear regression models. Results: Sixty-seven older adults were recruited (age = 83 ± 10 years). Based on MoCA, 10 (15%) were cognitively impaired. While no significant differences were observed in the single-task condition, within the dual-task condition the cognitively impaired group showed significantly lower arm flexion speed (62%, d = 1.51, p = 0.02) and range of motion (27%, d = 0.93, p = 0.04), and higher speed variability (88%, d = 1.82, p < 0.0001), compared to the cognitively intact group, when adjusted for age, gender, and BMI. Significant correlations were observed between UEF speed parameters and gait stride velocity for the dual-task condition (r = 0.55, p < 0.0001) and the dual-task cost (r = 0.28, p = 0.03). Conclusion: We introduce a novel test for assessing dual-task performance in older adults that lasts 20 s and is based on upper-extremity function. Our results confirm significant associations of upper-extremity speed, range of motion, and speed variability with both the MoCA score and gait performance within the dual-task condition. PMID:27458374
Steinhauser, Marco; Hübner, Ronald
2009-10-01
It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis, which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task.
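Since the ex-Gaussian is SciPy's exponentially modified normal distribution, the decomposition the authors describe can be sketched in a few lines; the reaction-time parameters below are invented for illustration.

```python
# Fit an ex-Gaussian to simulated reaction times (ms) and recover mu, sigma, tau.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mu, sigma, tau = 450.0, 50.0, 120.0
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy parameterization: K = tau / sigma, loc = mu, scale = sigma
K, loc, scale = stats.exponnorm.fit(rt)
print(f"mu≈{loc:.1f}  sigma≈{scale:.1f}  tau≈{K * scale:.1f}")
```

Conflict effects on mu and sigma (the Gaussian part) versus tau (the exponential part) can then be compared across congruency and bivalency conditions, which is the logic behind isolating response conflict from task conflict.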
Optimizing estimation of hemispheric dominance for language using magnetic source imaging.
Passaro, Antony D; Rezaie, Roozbeh; Moser, Dana C; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C
2011-10-06
The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold-standard Wada procedure have found inconsistent and often less-than-accurate estimates of laterality across various MEG studies. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol, involving word recognition and the single dipole method, with a sentence comprehension task and a beamformer approach localizing neural oscillations. Beamformer analysis of the word recognition and sentence comprehension tasks revealed a desynchronization in the 10-18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10-18 Hz) revealed left-hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and the localization of oscillatory activity using a beamformer appear to be in line with general estimates of left-hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing.
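A common way to quantify hemispheric dominance from such measurements is a laterality index computed from homologous left/right activity values; the formula below is the standard convention, not necessarily the exact criterion used in this study.

```python
# Standard laterality index: positive values indicate left-hemisphere dominance.
def laterality_index(L: float, R: float) -> float:
    """LI in [-1, 1], computed from left (L) and right (R) activity measures."""
    return (L - R) / (L + R)

# Example with invented desynchronization magnitudes for homologous regions
print(laterality_index(L=8.2, R=3.1))   # ~0.45 -> left-lateralized
```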
2014-01-01
Background Abortion is restricted in Uganda, and poor access to contraceptive methods results in unwanted pregnancies. This leaves women no choice other than unsafe abortion, thus placing a great burden on the Ugandan health system and making unsafe abortion one of the major contributors to maternal mortality and morbidity in Uganda. The existing sexual and reproductive health policy in Uganda supports the sharing of tasks in post-abortion care. This task sharing is taking place as a pragmatic response to the increased workload. This study aims to explore physicians' and midwives' perceptions of post-abortion care with regard to professional competences, methods, contraceptive counselling and task shifting/sharing in post-abortion care. Methods In-depth interviews (n = 27) with health care providers of post-abortion care were conducted in seven health facilities in the Central Region of Uganda. The data were organized using thematic analysis with an inductive approach. Results Post-abortion care was perceived as necessary, albeit controversial and sometimes difficult to provide. Together with poor conditions, post-abortion care provoked frustration, especially among midwives. Task sharing was generally taking place, and midwives were identified as the main providers, although they rarely had proper training in post-abortion care. Additionally, midwives were sometimes forced to provide services outside their defined task area due to the absence of doctors. Different uterine evacuation skills were recognized, although few providers knew of misoprostol as a method for post-abortion care. An overall need for further training in post-abortion care was identified. Conclusions Task sharing is taking place, but providers lack the relevant skills for the provision of quality care. For post-abortion care to improve, task sharing needs to be scaled up and in-service training for both doctors and midwives needs to be provided. Post-abortion care should further be included in the educational curricula of nurses and midwives. Scaled-up task sharing in post-abortion care, along with misoprostol use for uterine evacuation, would provide a systematic approach to improving the quality of care and the accessibility of services, with the aim of reducing abortion-related mortality and morbidity in Uganda. PMID:24447321
Linking Task Analysis with Student Learning.
ERIC Educational Resources Information Center
Sherman, Thomas M.; Wildman, Terry M.
An examination of task analysis from several perspectives in order to identify some of its purposes and advantages reveals that, as the interest in learning theory has shifted from a predominately behavioral perspective to a more cognitive orientation, the purpose of task analysis has also shifted. Formerly the purpose of task analysis was to aid…
Task Analysis of Shuttle Entry and Landing Activities
NASA Technical Reports Server (NTRS)
Holland, Albert W.; Vanderark, Stephen T.
1993-01-01
The Task Analysis of Shuttle Entry and Landing (E/L) Activities documents all tasks required to land the Orbiter following an STS mission. In addition to the analysis of tasks performed, task conditions are described, including estimated time for completion, altitude, relative velocity, normal and lateral acceleration, location of controls operated or monitored, and level of g's experienced. This analysis precedes further investigations into the potential effects of zero g on piloting capabilities for landing the Orbiter following long-duration missions, including, but not limited to, research on the effects of extended-duration missions on piloting capabilities. Four primary constraints of the analysis must be clarified: (1) the analysis depicts E/L in a static manner, whereas the actual process is dynamic; (2) the task analysis was limited to a paper analysis, since it was not feasible to conduct research in the actual setting (i.e., observing or filming during an actual E/L); (3) the tasks included are those required for E/L during nominal, daylight conditions; and (4) certain E/L tasks will vary according to the flying style of each commander.
An R package for the integrated analysis of metabolomics and spectral data.
Costa, Christopher; Maraschin, Marcelo; Rocha, Miguel
2016-06-01
Recently, there has been a growing interest in the field of metabolomics, materialized by a remarkable growth in experimental techniques, available data, and related biological applications. Indeed, techniques such as nuclear magnetic resonance, gas or liquid chromatography, mass spectrometry, and infrared and UV-visible spectroscopies have provided extensive datasets that can help in tasks such as biological and biomedical discovery, biotechnology, and drug development. However, as happens with other omics data, the analysis of metabolomics datasets poses multiple challenges, both in terms of methodologies and in the development of appropriate computational tools. Indeed, of the available software tools, none addresses the multiplicity of existing techniques and data analysis tasks. In this work, we make available a novel R package, named specmine, which provides a set of methods for metabolomics data analysis, including data loading in different formats, pre-processing, metabolite identification, univariate and multivariate data analysis, machine learning, and feature selection. Importantly, the implemented methods provide adequate support for the analysis of data from diverse experimental techniques, integrating a large set of functions from several R packages in a powerful, yet simple to use, environment. The package, already available in CRAN, is accompanied by a web site where users can deposit datasets, scripts, and analysis reports to be shared with the community, promoting the efficient sharing of metabolomics data analysis pipelines.
Analysis of line structure in handwritten documents using the Hough transform
NASA Astrophysics Data System (ADS)
Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin
2010-01-01
In the analysis of handwriting in documents, a central task is determining the line structure of the text, e.g., the number of text lines, the location of their starting and end points, line width, etc. While simple methods can handle ideal images, real-world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, and noisy or degraded images. This paper explores the application of the Hough transform method to handwritten documents, with the goal of automatically determining global document line structure in a top-down manner, which can then be used in conjunction with a bottom-up method such as connected component analysis. The performance is significantly better than that of other top-down methods, such as the projection profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.
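The core of the approach, voting for near-horizontal lines in Hough space, can be sketched with scikit-image; the synthetic binary page below stands in for a real handwritten document image.

```python
# Detect near-horizontal text lines with the straight-line Hough transform.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

rng = np.random.default_rng(5)
page = np.zeros((200, 400), dtype=bool)
for row in (40, 90, 140):   # three synthetic, slightly noisy text lines
    cols = rng.choice(400, size=250)
    page[row + rng.integers(-1, 2, size=250), cols] = True

# Restrict theta to near-horizontal orientations to tolerate modest skew
thetas = np.deg2rad(np.linspace(80, 100, 181))
h, theta, d = hough_line(page, theta=thetas)
_, angles, dists = hough_line_peaks(h, theta, d, min_distance=20)
print(f"detected {len(dists)} candidate text lines")
for a, dist in zip(angles, dists):
    print(f"  skew={np.rad2deg(a) - 90:+.1f} deg, offset={dist:.1f} px")
```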
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface of medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced in the use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
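As a toy illustration of how per-step failure-rate estimates might roll up to a task-level probability (the combination rule here, independent steps in series, is an assumption, not the paper's published formula):

```python
# Hypothetical step failure rates elicited from experienced nurses.
import numpy as np

step_failure_rates = {
    "select patient": 0.010,
    "select medication": 0.020,
    "confirm dose": 0.015,
    "document administration": 0.005,
}
p = np.array(list(step_failure_rates.values()))
p_task_failure = 1.0 - np.prod(1.0 - p)   # P(at least one step fails)
print(f"estimated task-level failure probability: {p_task_failure:.4f}")
```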
NASA Astrophysics Data System (ADS)
Drygin, Michael; Kuryshkin, Nicholas
2017-11-01
The article describes the formation of a new concept for the scheduled preventive repair system for equipment at coal mining enterprises, based on the use of modern non-destructive evaluation methods. The approach to solving this task is based on a system-oriented analysis of the regulatory documentation and of non-destructive evaluation methods and means, together with experimental studies, compilation of statistics, and subsequent grapho-analytical analysis. The main result of the work is a feasible justification for using non-destructive evaluation methods within the current scheduled preventive repair system, their high efficiency, and the potential for a gradual transition to condition-based maintenance. In practice, wide use of non-destructive evaluation means will allow a significant reduction in the number of equipment failures, with repairs limited to the nodes in pre-accident condition. Considering the import phase-out policy, the solution of this task will allow the scheduled preventive repair system to be adapted to Russian market conditions and will offer commercial benefit by reducing the expenses for maintenance of Russian-made and imported equipment.
Colquhoun, Heather L; Squires, Janet E; Kolehmainen, Niina; Fraser, Cynthia; Grimshaw, Jeremy M
2017-03-04
Systematic reviews consistently indicate that interventions to change healthcare professional (HCP) behaviour are haphazardly designed and poorly specified. Clarity about methods for designing and specifying interventions is needed. The objective of this review was to identify published methods for designing interventions to change HCP behaviour. A search of MEDLINE, Embase, and PsycINFO was conducted from 1996 to April 2015. Using inclusion/exclusion criteria, a broad screen of abstracts by one rater was followed by a strict screen of full text for all potentially relevant papers by three raters. An inductive approach was first applied to the included studies to identify commonalities and differences between the descriptions of methods across the papers. Based on this process and knowledge of related literatures, we developed a data extraction framework that included, for example, the level of change (e.g. individual versus organization), the context of development, a brief description of the method, and the tasks included in the method (e.g. barrier identification, component selection, use of theory). In total, 3966 titles and abstracts and 64 full-text papers were screened to yield 15 papers included in the review, each outlining one design method. All of the papers reported methods developed within a specific context. Thirteen papers included barrier identification and 13 included linking barriers to intervention components, although these were not the same 13 papers. Thirteen papers targeted individual HCPs, with only one paper targeting change across individual, organization, and system levels. The use of theory and user engagement were included in 13/15 and 13/15 papers, respectively. There is agreement across methods on four tasks that need to be completed when designing individual-level interventions: identifying barriers, selecting intervention components, using theory, and engaging end-users. Methods also comprise further additional tasks. Examples of methods for designing organisation- and system-level interventions were limited. Further analysis of design tasks could facilitate the development of detailed guidelines for designing interventions.
A method for the automated processing and analysis of images of ULVWF-platelet strings.
Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V
2013-01-01
We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release, and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task, and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows the strings to be analysed for data such as the number and length of strings, the number of platelets per string, and the distance between platelets. The software reduces analysis time and, more importantly, removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
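The measurement step can be approximated with connected-component analysis, as in the minimal sketch below; the synthetic frame and geometry are stand-ins, since the published software is custom-written and not reproduced here.

```python
# Label platelets in a binary frame and derive simple string geometry.
import numpy as np
from skimage.measure import label, regionprops

frame = np.zeros((64, 256), dtype=bool)
for x in range(20, 240, 22):   # synthetic string: a platelet every ~22 px
    frame[30:34, x:x + 4] = True

platelets = regionprops(label(frame))
cols = np.sort(np.array([p.centroid[1] for p in platelets]))
print("platelets per string:", len(cols))
print("string length (px):", cols.max() - cols.min())
print("inter-platelet spacing (px):", np.diff(cols))
```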
Problems and Processes in Medical Encounters: The CASES method of dialogue analysis
Laws, M. Barton; Taubin, Tatiana; Bezreh, Tanya; Lee, Yoojin; Beach, Mary Catherine; Wilson, Ira B.
2013-01-01
Objective To develop methods to reliably capture structural and dynamic temporal features of clinical interactions. Methods Observational study of 50 audio-recorded routine outpatient visits to HIV specialty clinics, using innovative analytic methods. The Comprehensive Analysis of the Structure of Encounters System (CASES) uses transcripts coded for speech acts, then imposes larger-scale structural elements: threads, i.e. the problems or issues addressed, and processes within threads, i.e. the basic tasks of clinical care, labeled Presentation, Information, Resolution (decision making), and Engagement (interpersonal exchange). Threads are also coded for the nature of their resolution. Results 61% of utterances occur in presentation processes. Provider verbal dominance is greatest in information and resolution processes, which also contain a high proportion of provider directives. About half of threads result in no action or decision. Information flows predominantly from patient to provider in presentation processes, and from provider to patient in information processes. Engagement is rare. Conclusions In these data, resolution is provider centered; more time for patient participation in resolution, or for interpersonal engagement, would have to come from presentation. Practice Implications Awareness of the use of time in clinical encounters, and of the interaction processes associated with various tasks, may help make clinical communication more efficient and effective. PMID:23391684
Applications of flight control system methods to an advanced combat rotorcraft
NASA Technical Reports Server (NTRS)
Tischler, Mark B.; Fletcher, Jay W.; Morris, Patrick M.; Tucker, George T.
1989-01-01
Advanced flight control system design, analysis, and testing methodologies developed at the Ames Research Center are applied in an analytical and flight test evaluation of the Advanced Digital Optical Control System (ADOCS) demonstrator. The primary objectives are to describe the knowledge gained about the implications of digital flight control system design for rotorcraft, and to illustrate the analysis of the resulting handling qualities in the context of the proposed new handling-qualities specification for rotorcraft. Topics covered in depth are digital flight control design and analysis methods, flight testing techniques, ADOCS handling-qualities evaluation results, and correlation of flight test results with analytical models and the proposed handling-qualities specification. The evaluation of the ADOCS demonstrator indicates desirable response characteristics based on equivalent damping and frequency, but undesirably large effective time delays (exceeding 240 msec in all axes). Piloted handling qualities are found to be desirable or adequate for all low, medium, and high pilot-gain tasks; but handling qualities are inadequate for ultra-high-gain tasks such as slope and running landings.
Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.
Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit
2017-06-01
We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and benign/malignant classification of clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (with p-value = 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (with p-value = 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for the training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
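The dictionary-pruning criterion can be sketched as follows: score every visual word by its mutual information with the class label and keep the top-scoring subset. The histograms below are simulated; the selection rule follows the abstract's criterion, not the paper's exact implementation.

```python
# Keep the visual words most informative about the class label.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(7)
n_images, n_words = 400, 500
X = rng.poisson(1.0, size=(n_images, n_words)).astype(float)   # BoVW counts
y = rng.integers(0, 2, n_images)
X[y == 1, :25] += rng.poisson(2.0, size=((y == 1).sum(), 25))  # informative words

mi = mutual_info_classif(X, y, random_state=0)
task_driven_dictionary = np.argsort(mi)[::-1][:50]   # top-50 words for this task
print("selected word indices:", task_driven_dictionary[:10])
```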
Semi-supervised prediction of gene regulatory networks using machine learning algorithms.
Patel, Nihir; Wang, Jason T L
2015-10-01
Use of computational methods to predict gene regulatory networks (GRNs) from gene expression data is a challenging task. Many studies have been conducted using unsupervised methods to fulfill the task; however, such methods usually yield low prediction accuracies due to the lack of training data. In this article, we propose semi-supervised methods for GRN prediction by utilizing two machine learning algorithms, namely, support vector machines (SVM) and random forests (RF). The semi-supervised methods make use of unlabelled data for training. We investigated inductive and transductive learning approaches, both of which adopt an iterative procedure to obtain reliable negative training data from the unlabelled data. We then applied our semi-supervised methods to gene expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluated the performance of our methods using the expression data. Our analysis indicated that the transductive learning approach outperformed the inductive learning approach for both organisms. However, there was no conclusive difference identified in the performance of SVM and RF. Experimental results also showed that the proposed semi-supervised methods performed better than existing supervised methods for both organisms.
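The iterative procedure for harvesting reliable negatives can be sketched roughly as below; the features are simulated and the stopping rule is simplified, so this is an outline of the idea rather than the authors' implementation.

```python
# Self-training loop: refine a negative set from unlabelled TF-gene pairs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
X_pos = rng.normal(1.0, 1.0, size=(100, 10))    # known regulatory pairs
X_unl = rng.normal(0.0, 1.0, size=(1000, 10))   # unlabelled candidate pairs

neg = X_unl[rng.choice(len(X_unl), 100, replace=False)]  # initial negatives
for _ in range(5):
    X = np.vstack([X_pos, neg])
    y = np.r_[np.ones(len(X_pos)), np.zeros(len(neg))]
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    scores = clf.predict_proba(X_unl)[:, 1]
    neg = X_unl[np.argsort(scores)[:100]]        # most reliably negative pairs
print("final model trained on", len(X_pos), "positives and", len(neg), "negatives")
```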
Three Techniques for Task Analysis: Examples from the Nuclear Utilities.
ERIC Educational Resources Information Center
Carlisle, Kenneth E.
1984-01-01
Discusses three task analysis techniques utilized at the Palo Verde Nuclear Generating Station to review training programs: analysis of (1) job positions, (2) procedures, and (3) instructional presentations. All of these include task breakdown, relationship determination, and task restructuring. (MBR)
Khandan, Mohammad; Nili, Majid; Koohpaei, Alireza; Mosaferchi, Saeedeh
2016-01-01
Nowadays, health-sector decision makers need to analyze a huge amount of data and consider many conflicting evaluation criteria and sub-criteria. An ergonomic evaluation of the work environment aimed at controlling occupational disorders can therefore be framed as a Multi-Criteria Decision Making (MCDM) problem. In this study, the ergonomic risk factors which may influence health were evaluated in a manufacturing company in 2014, and the entropy method was then applied to prioritize the different risk factors. The study followed a descriptive-analytical approach; 13 tasks were included, drawn from the total number of employees (240) working in the seven halls of an opal glassware manufacturing company. The required information was gathered with a demographic questionnaire and the Assessment of Repetitive Tasks (ART) method for repetitive task assessment. In addition, entropy was used to prioritize the risk factors based on ergonomic control needs. The total exposure score calculated with the ART method was 30.07 ± 12.43. Data analysis illustrated that 179 cases (74.6% of tasks) were at a high level of risk and 13.8% were at a medium level of risk. The ART-entropy results revealed that, based on the weighted factors, the highest value belonged to the grip factor and the lowest values were related to neck and hand posture and duration. Given limited financial resources, it seems that MCDM can be used successfully in many challenging situations, such as setting control procedures and priorities. Other MCDM methods for evaluating and prioritizing ergonomic problems are recommended.
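The entropy weighting itself is a standard MCDM computation: normalize each criterion column of the decision matrix, compute its entropy, and give more weight to criteria with more dispersion. The sketch below uses an invented 13-task by 6-factor score matrix, since the paper's decision matrix is not published.

```python
# Entropy weights for ART risk factors over 13 tasks (scores are simulated).
import numpy as np

scores = np.random.default_rng(9).uniform(1, 10, size=(13, 6))

P = scores / scores.sum(axis=0)               # column-normalized proportions
k = 1.0 / np.log(scores.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)    # e_j in [0, 1] per criterion
weights = (1 - entropy) / (1 - entropy).sum() # more dispersion -> more weight
print("criterion weights:", np.round(weights, 3))
```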
Beda, Alessandro; Simpson, David M; Faes, Luca
2017-01-01
The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and the indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested on a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e., multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and are thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need for individual-by-individual assessment of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings. PMID:28968394
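A residual-bootstrap version of the procedure can be sketched as follows: fit an AR model, resample its residuals to regenerate surrogate series, recompute the index on each, and read confidence limits from the percentiles. The AR order, series length, and index (total power) are simplifications of the paper's spectral and information-theoretic indexes.

```python
# Bootstrap confidence limits for an index of an AR-modeled series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(10)
x = np.zeros(300)                 # surrogate "heart-period" series, AR(2)
for t in range(2, 300):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()

index = np.var                    # stand-in index; real work uses band powers etc.
fit = AutoReg(x, lags=2).fit()
c, a1, a2 = fit.params            # intercept and AR coefficients

boot = np.empty(500)
for b in range(500):              # residual bootstrap replicates
    e = rng.choice(fit.resid, size=300, replace=True)
    xb = np.zeros(300); xb[:2] = x[:2]
    for t in range(2, 300):
        xb[t] = c + a1 * xb[t - 1] + a2 * xb[t - 2] + e[t]
    boot[b] = index(xb[2:])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"index = {index(x):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```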
A supervised framework for resolving coreference in clinical records.
Rink, Bryan; Roberts, Kirk; Harabagiu, Sanda M
2012-01-01
We present a method for the automatic resolution of coreference between medical concepts in clinical records. A multiple pass sieve approach utilizing support vector machines (SVMs) at each pass was used to resolve coreference. Information such as lexical similarity, recency of a concept mention, synonymy based on Wikipedia redirects, and local lexical context was used to inform the method. Results were evaluated using an unweighted average of MUC, CEAF, and B(3) coreference evaluation metrics. The datasets used in these research experiments were made available through the 2011 i2b2/VA Shared Task on Coreference. The method achieved an average F score of 0.821 on the ODIE dataset, with a precision of 0.802 and a recall of 0.845. These results compare favorably with the best-performing system's reported F score of 0.827 on the dataset and the median system F score of 0.800 among the eight teams that participated in the 2011 i2b2/VA Shared Task on Coreference. On the i2b2 dataset, the method achieved an average F score of 0.906, with a precision of 0.895 and a recall of 0.918, compared to the best F score of 0.915 and the median of 0.859 among the 16 participating teams. Post hoc analysis revealed significant performance degradation on pathology reports. The pathology reports were characterized by complex synonymy and very few patient mentions. The use of several simple lexical matching methods had the most impact on achieving competitive performance on the task of coreference resolution. Moreover, the ability to detect patients in electronic medical records helped to improve coreference resolution more than the other linguistic analyses.
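For illustration, the first and highest-precision pass of a multiple-pass sieve might simply cluster mentions whose normalized strings match, leaving looser, SVM-scored passes to merge clusters later (a schematic Python sketch; the actual system uses far richer features than string identity):

    def exact_match_sieve(mentions):
        # mentions: list of (position, text) pairs from one clinical record
        clusters = {}
        for pos, text in mentions:
            key = " ".join(text.lower().split())   # normalized surface form
            clusters.setdefault(key, []).append((pos, text))
        return list(clusters.values())             # candidate coreference chains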
Overview of the ID, EPI and REL tasks of BioNLP Shared Task 2011.
Pyysalo, Sampo; Ohta, Tomoko; Rak, Rafal; Sullivan, Dan; Mao, Chunhong; Wang, Chunxia; Sobral, Bruno; Tsujii, Jun'ichi; Ananiadou, Sophia
2012-06-26
We present the preparation, resources, results and analysis of three tasks of the BioNLP Shared Task 2011: the main tasks on Infectious Diseases (ID) and Epigenetics and Post-translational Modifications (EPI), and the supporting task on Entity Relations (REL). The two main tasks represent extensions of the event extraction model introduced in the BioNLP Shared Task 2009 (ST'09) to two new areas of biomedical scientific literature, each motivated by the needs of specific biocuration tasks. The ID task concerns the molecular mechanisms of infection, virulence and resistance, focusing in particular on the functions of a class of signaling systems that are ubiquitous in bacteria. The EPI task is dedicated to the extraction of statements regarding chemical modifications of DNA and proteins, with particular emphasis on changes relating to the epigenetic control of gene expression. By contrast to these two application-oriented main tasks, the REL task seeks to support extraction in general by separating challenges relating to part-of relations into a subproblem that can be addressed by independent systems. Seven groups participated in each of the two main tasks and four groups in the supporting task. The participating systems indicated advances in the capability of event extraction methods and demonstrated generalization in many aspects: from abstracts to full texts, from previously considered subdomains to new ones, and from the ST'09 extraction targets to other entities and events. The highest performance achieved in the supporting task REL, 58% F-score, is broadly comparable with levels reported for other relation extraction tasks. For the ID task, the highest-performing system achieved 56% F-score, comparable to the state-of-the-art performance at the established ST'09 task. In the EPI task, the best result was 53% F-score for the full set of extraction targets and 69% F-score for a reduced set of core extraction targets, approaching a level of performance sufficient for user-facing applications. In this study, we extend on previously reported results and perform further analyses of the outputs of the participating systems. We place specific emphasis on aspects of system performance relating to real-world applicability, considering alternate evaluation metrics and performing additional manual analysis of system outputs. We further demonstrate that the strengths of extraction systems can be combined to improve on the performance achieved by any system in isolation. The manually annotated corpora, supporting resources, and evaluation tools for all tasks are available from http://www.bionlp-st.org and the tasks continue as open challenges for all interested parties.
Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, P.; Zbijewski, W.; Gang, G. J.
2011-10-15
Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequency content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M ≈ 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of ≈65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end (90 kVp) maximizing NEQ. The analysis quantified fairly intuitive results, e.g., ≈0.1-0.2 mm pixel size (and a sharp reconstruction filter) being optimal for high-frequency tasks (bone detail) compared to ≈0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol of 1 × 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks, along with 2 × 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.
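For context, the detectability index of a prewhitening observer is commonly written in terms of the NEQ and a task function (a generic textbook form, not necessarily the exact formulation used in this work):

    d'^2 = \iint \mathrm{NEQ}(f_x, f_y)\, \bigl|W_{\mathrm{task}}(f_x, f_y)\bigr|^2 \, df_x \, df_y, \qquad \mathrm{NEQ} = \frac{\mathrm{MTF}^2}{\mathrm{NNPS}}

where W_task is the spatial-frequency content of the imaging task (the difference of the Fourier transforms of the two hypotheses, e.g., lesion present versus absent) and NNPS is the noise-power spectrum normalized by the squared mean signal.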
Simulation of short-term electric load using an artificial neural network
NASA Astrophysics Data System (ADS)
Ivanin, O. A.
2018-01-01
When optimizing the operating modes and equipment composition of small energy complexes, or solving other energy-planning tasks, data on the consumer's energy loads are needed. In practice, real load charts and detailed information about the consumer are often unavailable, so a method for simulating load charts on the basis of minimal information has to be developed. An analysis of prior work on short-term load prediction suggests artificial neural networks as the most suitable mathematical instrument for this problem. The article reviews applied methods of short-term load simulation, describes the advantages of artificial neural networks, and proposes a neural network structure for simulating the electric loads of residential buildings. The results of modeling loads with the proposed method and an estimation of its error are presented.
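As a minimal sketch of such a simulator (the feature set of hour-of-day harmonics, a workday flag, and outdoor temperature, as well as the network size, are assumptions for illustration; the paper's actual inputs and architecture may differ):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def features(hour, workday, temp):
        # Encode the hour cyclically so 23:00 and 00:00 are close neighbours
        return np.column_stack([
            np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
            workday.astype(float), temp,
        ])

    # Train on metered reference consumers, then predict for a new consumer:
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    # model.fit(features(hour, workday, temp), measured_load)
    # simulated_load = model.predict(features(hour_new, workday_new, temp_new))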
Investigation of High-Angle-of-Attack Maneuver-Limiting Factors. Part 1. Analysis and Simulation
1980-12-01
useful, are not so satisfying or instructive as the more positive identification of causal factors offered by the methods developed in Reference 5...same methods be applied to additional high-performance fighter aircraft having widely differing high AOA handling characteristics to see if further...predictions and the nonlinear model results were resolved. The second task involved development of methods, criteria, and an associated pilot rating scale, for
Proceedings of the NASA Workshop on Registration and Rectification
NASA Technical Reports Server (NTRS)
Bryant, N. A. (Editor)
1982-01-01
Issues associated with the registration and rectification of remotely sensed data. Near and long range applications research tasks and some medium range technology augmentation research areas are recommended. Image sharpness, feature extraction, inter-image mapping, error analysis, and verification methods are addressed.
Cooperation, Technology, and Performance: A Case Study.
ERIC Educational Resources Information Center
Cavanagh, Thomas; Dickenson, Sabrina; Brandt, Suzanne
1999-01-01
Describes the CTP (Cooperation, Technology, and Performance) model and explains how it is used by the Department of Veterans Affairs-Veteran's Benefit Administration (VBA) for training. Discusses task analysis; computer-based training; cooperative-based learning environments; technology-based learning; performance-assessment methods; courseware…
Project for the analysis of technology transfer
NASA Technical Reports Server (NTRS)
Kottenstette, J. P.; Freeman, J. E.; Staskin, E. R.
1971-01-01
The special task of preparing technology transfer profiles during the first six months of 1971 produced two major results: refining a new method for identifying and describing technology transfer activities, and generating practical insights into a number of issues associated with transfer programs.
Sun, Li; Liang, Peipeng; Jia, Xiuqin; Qi, Zhigang; Li, Kuncheng
2014-01-01
Objective: Recent neuroimaging studies have shown that elderly adults exhibit increased and decreased activation on various cognitive tasks, yet little is known about age-related changes in inductive reasoning. Methods: To investigate the neural basis for the aging effect on inductive reasoning, 15 young and 15 elderly subjects performed numerical inductive reasoning while in a magnetic resonance (MR) scanner. Results: Functional magnetic resonance imaging (fMRI) analysis revealed that numerical inductive reasoning, relative to rest, yielded multiple frontal, temporal, parietal, and some subcortical area activations for both age groups. In addition, the younger participants showed significant regions of task-induced deactivation, while no deactivation occurred in the elderly adults. Direct group comparisons showed that elderly adults exhibited greater activity in regions of task-related activation and areas showing task-induced deactivation (TID) in the younger group. Conclusions: Our findings suggest an age-related deficiency in neural function and resource allocation during inductive reasoning. PMID:25337240
Finger Interdependence: Linking the Kinetic and Kinematic Variables
Kim, Sun Wook; Shim, Jae Kun; Zatsiorsky, Vladimir M.; Latash, Mark L.
2008-01-01
We studied the dependence between voluntary motion of a finger and pressing forces produced by the tips of other fingers of the hand. Subjects moved one of the fingers (task finger) of the right hand trying to follow a cyclic, ramp-like flexion-extension template at different frequencies. The other fingers (slave fingers) were restricted from moving; their flexion forces were recorded and analyzed. Index finger motion caused the smallest force production by the slave fingers. Larger forces were produced by the neighbors of the task finger; these forces showed strong modulation over the range of motion of the task finger. The enslaved forces were higher during the flexion phase of the movement cycle as compared to the extension phase. The index of enslaving expressed in N/rad was higher when the task finger moved through the more flexed postures. The dependence of enslaving on both range and direction of task finger motion poses problems for methods of analysis of finger coordination based on an assumption of universal matrices of finger inter-dependence. PMID:18255182
Characterization of Cleaning and Disinfecting Tasks and Product Use Among Hospital Occupations
Saito, Rena; Virji, M. Abbas; Henneberger, Paul K.; Humann, Michael J.; LeBouf, Ryan F.; Stanton, Marcia L.; Liang, Xiaoming; Stefaniak, Aleksandr B.
2016-01-01
Background Healthcare workers have an elevated prevalence of asthma and related symptoms associated with the use of cleaning/disinfecting products. The objective of this study was to identify and characterize cleaning/disinfecting tasks and products used among hospital occupations. Methods Workers from 14 occupations at five hospitals were monitored for 216 shifts, and work tasks and products used were recorded at five-minute intervals. The major chemical constituents of each product were identified from safety data sheets. Results Cleaning and disinfecting tasks were performed with a high frequency at least once per shift in many occupations. Medical equipment preparers, housekeepers, floor strippers/waxers, and endoscopy technicians spent on average 108–177 min/shift performing cleaning/disinfecting tasks. Many occupations used products containing amines and quaternary ammonium compounds for > 100 min/shift. Conclusions This analysis demonstrates that many occupations besides housekeeping incur exposures to cleaning/disinfecting products, albeit for different durations and using products containing different chemicals. PMID:25351791
Williams, Kent E; Voigt, Jeffrey R
2004-01-01
The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Task models created by these task domain experts using the tool were compared, in terms of accuracy and consistency, with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.
Čegovnik, Tomaž; Stojmenova, Kristina; Jakus, Grega; Sodnik, Jaka
2018-04-01
This paper presents a driving simulator study in which we investigated whether the Eye Tribe eye tracker (ET) is capable of assessing changes in the cognitive load of drivers through oculography and pupillometry. In the study, participants were asked to drive a simulated vehicle and simultaneously perform a set of secondary tasks with different cognitive complexity levels. We measured changes in eye properties, such as the pupil size, blink rate and fixation time. We also performed a measurement with a Detection Response Task (DRT) to validate the results and to prove a steady increase of cognitive load with increasing secondary task difficulty. The results showed that the ET precisely recognizes an increasing pupil diameter with increasing secondary task difficulty. In addition, the ET shows increasing blink rates, decreasing fixation time and narrowing of the attention field with increasing secondary task difficulty. The results were validated with the DRT method and the secondary task performance. We conclude that the Eye Tribe ET is a suitable device for assessing a driver's cognitive load. Copyright © 2017 Elsevier Ltd. All rights reserved.
Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications
2013-01-01
Background Time-frequency analysis of the electroencephalogram (EEG) during different mental tasks has received significant attention. As EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks. Further, the time-frequency information of the EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods To accurately model the EEG, the band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models, is employed. A state-space model for BMFLC in combination with a Kalman filter/smoother is developed to obtain accurate adaptive estimation. By virtue of construction, BMFLC with Kalman filter/smoother provides accurate time-frequency decomposition of the band-limited signal. Results The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with the short-time Fourier transform (STFT) and continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition IV data for ERD detection in comparison with existing methods. Conclusions Results show that the proposed algorithm can provide optimal time-frequency resolution compared with STFT and CWT. For ERD detection, BMFLC-KF outperforms STFT and BMFLC-KS in real-time applicability with low computational requirements. PMID:24274109
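A compact sketch of the BMFLC/Kalman idea, assuming a random-walk state model with scalar process and measurement noise variances q and r (the band, grid spacing, and tuning values below are illustrative):

    import numpy as np

    def bmflc_kalman(y, fs, band=(8.0, 12.0), df=0.5, q=1e-4, r=1e-2):
        # State: sin/cos weights on a fixed frequency grid spanning the band
        freqs = np.arange(band[0], band[1] + df, df)
        k2 = 2 * len(freqs)
        w, P = np.zeros(k2), np.eye(k2)
        t = np.arange(len(y)) / fs
        amp = np.zeros((len(y), len(freqs)))
        for i in range(len(y)):
            h = np.r_[np.sin(2 * np.pi * freqs * t[i]),
                      np.cos(2 * np.pi * freqs * t[i])]
            P = P + q * np.eye(k2)            # random-walk prediction step
            g = P @ h / (h @ P @ h + r)       # Kalman gain
            w = w + g * (y[i] - h @ w)        # measurement update
            P = P - np.outer(g, h) @ P
            amp[i] = np.hypot(w[:len(freqs)], w[len(freqs):])
        return freqs, amp                     # time-frequency amplitude map

The smoother variant (BMFLC-KS) would add a backward pass over the stored states, improving accuracy at the cost of real-time applicability.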
Wind Sensing, Analysis, and Modeling
NASA Technical Reports Server (NTRS)
Corvin, Michael A.
1995-01-01
The purpose of this task was to begin development of a unified approach to the sensing, analysis, and modeling of the wind environments in which launch systems operate. The initial activity was to examine the current usage and requirements for wind modeling for the Titan 4 vehicle. This was to be followed by joint technical efforts with NASA Langley Research Center to develop applicable analysis methods. This work was to be performed in, and demonstrate the use of, prototype tools implementing an environment in which to realize a unified system. At the convenience of the customer, due to resource limitations, the task was descoped. The survey of Titan 4 processes was accomplished and is reported in this document. A summary of general requirements is provided. Current versions of prototype Process Management Environment tools are being provided to the customer.
Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R
2012-01-01
This study presents and validates a time-frequency technique for measuring two-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regressive methods, which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable to those of two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher-than-second-order systems with a non-parametric approach. The technique proposed here is highly robust to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases.
Georgsson, Mattias; Kushniruk, Andre
2016-01-01
The cognitive walkthrough (CW) is a task-based, expert inspection usability evaluation method with benefits such as cost effectiveness and efficiency. A drawback of the method is that it does not involve the perspective of real users, relying instead on experts' predictions about the usability of the system and how users interact with it. In this paper, we propose a way of involving the user in an expert evaluation method by modifying the CW with patient groups as mediators. This and other modifications include a dual-domain session facilitator, specific patient groups, and three phases: 1) a preparation phase, in which suitable tasks are developed by a panel of experts and patients and validated through the content validity index; 2) a patient user evaluation phase, including an individual and a collaborative process part; and 3) an analysis and coding phase, in which all data are digitized and synthesized using Qualitative Data Analysis Software (QDAS) to determine usability deficiencies. We predict that this way of evaluating will retain the benefits of expert methods while also providing a way of including the patient user of these self-management systems. Results from this prospective study should provide evidence of the usefulness of this method modification.
Pelvic kinematic method for determining vertical jump height.
Chiu, Loren Z F; Salem, George J
2010-11-01
Sacral marker and pelvis reconstruction methods have been proposed to approximate total body center of mass during relatively low intensity gait and hopping tasks, but not during a maximum effort vertical jumping task. In this study, center of mass displacement was calculated using the pelvic kinematic method and compared with center of mass displacement using the ground-reaction force-impulse method in experienced athletes (n = 13) performing restricted countermovement vertical jumps. Maximal vertical jumps were performed in a biomechanics laboratory, with data collected using an 8-camera motion analysis system and two force platforms. The pelvis center of mass was reconstructed from retro-reflective markers placed on the pelvis. Jump height was determined from the peak height of the pelvis center of mass minus the standing height. Strong linear relationships were observed between the pelvic kinematic and impulse methods (R² = .86; p < .01). The pelvic kinematic method underestimated jump height relative to the impulse method; however, the difference was small (CV = 4.34%). This investigation demonstrates concurrent validity for the pelvic kinematic method to determine vertical jump height.
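For reference, a minimal sketch of the two estimates being compared (the 10 N take-off threshold and variable names are illustrative assumptions):

    import numpy as np

    def height_impulse(fz, mass, fs, g=9.81):
        # Impulse-momentum: integrate net vertical force to take-off velocity
        v = np.cumsum((fz - mass * g) / mass) / fs
        v_to = v[np.argmax(fz < 10.0)]   # first unloaded sample ~ take-off (assumed threshold)
        return v_to ** 2 / (2 * g)

    def height_kinematic(pelvis_z, standing_z):
        # Peak reconstructed pelvis height minus quiet-standing height
        return np.max(pelvis_z) - standing_z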
NASA Astrophysics Data System (ADS)
Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin
2017-10-01
Background: In target detection, success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was set to 100, 150 or 200 subjects. The proportion of targets was kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found target detection time to increase and target detection rates to decrease with increasing numbers of avatars. The same was true for secondary task reaction time, while there was no effect on secondary task hit rate. Furthermore, we found a trend toward a U-shaped relation between the number of markings and secondary task reaction time, indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.
Stability of multifinger action in different state spaces.
Reschechtko, Sasha; Zatsiorsky, Vladimir M; Latash, Mark L
2014-12-15
We investigated stability of action by a multifinger system with three methods: analysis of intertrial variance, application of transient perturbations, and analysis of the system's motion in different state spaces. The "inverse piano" device was used to apply transient (lifting-and-lowering) perturbations to individual fingers during single- and two-finger accurate force production tasks. In each trial, the perturbation was applied either to a finger explicitly involved in the task or one that was not. We hypothesized that, in one-finger tasks, task-specific stability would be observed in the redundant space of finger forces but not in the nonredundant space of finger modes (commands to explicitly involved fingers). In two-finger tasks, we expected that perturbations applied to a nontask finger would not contribute to task-specific stability in mode space. In contrast to our expectations, analyses in both force and mode spaces showed lower stability in directions that did not change total force output compared with directions that did cause changes in total force. In addition, the transient perturbations led to a significant increase in the enslaving index. We consider these results within a theoretical scheme of control with referent body configurations organized hierarchically, using multiple few-to-many mappings organized in a synergic way. The observed volatility of enslaving, greater equifinality of total force compared with elemental variables, and large magnitude of motor equivalent motion in both force and mode spaces provide support for the concept of task-specific stability of performance and the existence of multiple neural loops, which ensure this stability. Copyright © 2014 the American Physiological Society.
Gazes, Yunglin; Habeck, Christian; O'Shea, Deirdre; Razlighi, Qolamreza R; Steffener, Jason; Stern, Yaakov
2015-01-01
Introduction A functional activation (i.e., ordinal trend) pattern was previously identified in both young and older adults during task-switching performance, the expression of which correlated with reaction time. The current study aimed to (1) replicate this functional activation pattern in a new group of fMRI activation data, and (2) extend the previous study by specifically examining whether the effect of aging on reaction time can be explained by differences in the activation of the functional activation pattern. Method A total of 47 young and 50 older participants were included in the extension analysis. Participants performed task-switching as the activation task and were cued by the color of the stimulus for the task to be performed in each block. To test for replication, two approaches were implemented. The first approach tested the replicability of the predictive power of the previously identified functional activation pattern by forward applying the pattern to the Study II data and the second approach was rederivation of the activation pattern in the Study II data. Results Both approaches showed successful replication in the new data set. Using mediation analysis, expression of the pattern from the first approach was found to partially mediate age-related effects on reaction time such that older age was associated with greater activation of the brain pattern and longer reaction time, suggesting that brain activation efficiency (defined as “the rate of activation increase with increasing task difficulty” in Neuropsychologia 47, 2009, 2015) of the regions in the Ordinal trend pattern directly accounts for age-related differences in task performance. Discussion The successful replication of the functional activation pattern demonstrates the versatility of the Ordinal Trend Canonical Variates Analysis, and the ability to summarize each participant's brain activation map into one number provides a useful metric in multimodal analysis as well as cross-study comparisons. PMID:25874162
Flexible modulation of network connectivity related to cognition in Alzheimer's disease.
McLaren, Donald G; Sperling, Reisa A; Atri, Alireza
2014-10-15
Functional neuroimaging tools, such as fMRI methods, may elucidate the neural correlates of clinical, behavioral, and cognitive performance. Most functional imaging studies focus on regional task-related activity or resting state connectivity rather than how changes in functional connectivity across conditions and tasks are related to cognitive and behavioral performance. To investigate the promise of characterizing context-dependent connectivity-behavior relationships, this study applies the method of generalized psychophysiological interactions (gPPI) to assess the patterns of associative-memory-related fMRI hippocampal functional connectivity in Alzheimer's disease (AD) associated with performance on memory and other cognitively demanding neuropsychological tests and clinical measures. Twenty-four subjects with mild AD dementia (ages 54-82, nine females) participated in a face-name paired-associate encoding memory study. Generalized PPI analysis was used to estimate the connectivity between the hippocampus and the whole brain during encoding. The difference in hippocampal-whole brain connectivity between encoding novel and encoding repeated face-name pairs was used in multiple-regression analyses as an independent predictor for 10 behavioral, neuropsychological and clinical tests. The analysis revealed connectivity-behavior relationships that were distributed, dynamically overlapping, and task-specific within and across intrinsic networks; hippocampal-whole brain connectivity-behavior relationships were not isolated to single networks, but spanned multiple brain networks. Importantly, these spatially distributed performance patterns were unique for each measure. In general, out-of-network behavioral associations with encoding novel greater than repeated face-name pairs hippocampal-connectivity were observed in the default-mode network, while correlations with encoding repeated greater than novel face-name pairs hippocampal-connectivity were observed in the executive control network (p<0.05, cluster corrected). Psychophysiological interactions revealed significantly more extensive and robust associations between paired-associate encoding task-dependent hippocampal-whole brain connectivity and performance on memory and behavioral/clinical measures than previously revealed by standard activity-behavior analysis. Compared to resting state and task-activation methods, gPPI analyses may be more sensitive to reveal additional complementary information regarding subtle within- and between-network relations. The patterns of robust correlations between hippocampal-whole brain connectivity and behavioral measures identified here suggest that there are 'coordinated states' in the brain; that the dynamic range of these states is related to behavior and cognition; and that these states can be observed and quantified, even in individuals with mild AD. Copyright © 2014 Elsevier Inc. All rights reserved.
Li, Zhi; Milutinović, Dejan; Rosen, Jacob
2017-05-01
Reach-to-grasp arm postures differ from those in pure reaching because they are affected by grasp position/orientation, rather than by simple transport to a position during a reaching motion. This paper investigates this difference via an analysis of experimental data collected on reaching and reach-to-grasp motions. A seven-degree-of-freedom (DOF) kinematic arm model with the swivel angle is used for the motion analysis. Compared to a widely used anatomical arm model, this model clearly distinguishes the four grasping-relevant DOFs (GR-DOFs) that are affected by the positions and orientations of the objects to be grasped. These four GR-DOFs include the swivel angle, which measures the elbow rotation about the shoulder-wrist axis, and three wrist joint angles. For each GR-DOF, we quantify a position-versus-orientation task-relevance bias that measures how much the DOF is affected by the grasping position versus orientation. The swivel angle and forearm supination have similar bias, and the analysis of their motion suggests two hypotheses regarding the synergistic coordination of the macro- and micro-structures of the human arm: (1) DOFs with similar task-relevance are synergistically coordinated; and (2) such synergy breaks when a task-relevant DOF is close to its joint limit without necessarily reaching the limit. This study provides a motion analysis method to reduce the control complexity for reach-to-grasp tasks, and suggests using dynamic coupling to coordinate the hand and arm of upper-limb exoskeletons.
Adaptable, high recall, event extraction system with minimal configuration.
Miwa, Makoto; Ananiadou, Sophia
2015-01-01
Biomedical event extraction has been a major focus of biomedical natural language processing (BioNLP) research since the first BioNLP shared task was held in 2009. Accordingly, a large number of event extraction systems have been developed. Most such systems, however, have been developed for specific tasks and/or incorporated task specific settings, making their application to new corpora and tasks problematic without modification of the systems themselves. There is thus a need for event extraction systems that can achieve high levels of accuracy when applied to corpora in new domains, without the need for exhaustive tuning or modification, whilst retaining competitive levels of performance. We have enhanced our state-of-the-art event extraction system, EventMine, to alleviate the need for task-specific tuning. Task-specific details are specified in a configuration file, while extensive task-specific parameter tuning is avoided through the integration of a weighting method, a covariate shift method, and their combination. The task-specific configuration and weighting method have been employed within the context of two different sub-tasks of BioNLP shared task 2013, i.e., Cancer Genetics (CG) and Pathway Curation (PC), removing the need to modify the system specifically for each task. With minimal task specific configuration and tuning, EventMine achieved first place in the PC task and second place in the CG task, achieving the highest recall in both. The system has been further enhanced following the shared task by incorporating the covariate shift method and entity generalisations based on the task definitions, leading to further performance improvements. We have shown that it is possible to apply a state-of-the-art event extraction system to new tasks with high levels of performance, without having to modify the system internally. Both covariate shift and weighting methods are useful in facilitating the production of high recall systems. These methods and their combination can adapt a model to the target data with no deep tuning and little manual configuration.
A Leadership and Managerial Competency Framework for Public Hospital Managers in Vietnam
Van Tuong, Phan; Duc Thanh, Nguyen
2017-01-01
Objective The aim of this paper was to develop a leadership and managerial competency framework for public hospital managers in Vietnam. Methods This mixed-method study used a four-step approach. The first step was a content analysis of position descriptions to identify the tasks hospital managers are required to carry out. The resulting data were used to identify leadership and managerial competency factors and items in the second step. In the third step, a workshop was organized to reach consensus about the validity of these competency factors and items. Finally, a quantitative survey was conducted across a sample of 891 hospital managers working in selected hospitals in seven geographical regions of Vietnam to validate the competency scales using exploratory factor analysis (EFA) and Cronbach's alpha. Results The study identified a number of tasks required of public hospital managers and confirmed the competencies needed to implement these tasks effectively. Four dimensions with 14 components and 81 items of leadership and managerial competencies were identified. These components explained 83.8% of the variance, and Cronbach's alpha was at a good level (0.9). Conclusions These competencies are required of public hospital managers and provide guidance for the further development of competency-based training for the current management taskforce and for preparing future hospital managers. PMID:29546227
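For reference, the internal-consistency statistic reported above can be computed as follows (a standard formula shown as a minimal sketch):

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, n_items) matrix of competency ratings
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))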
Identification of Time-Varying Pilot Control Behavior in Multi-Axis Control Tasks
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2012-01-01
Recent developments in fly-by-wire control architectures for rotorcraft have introduced new interest in the identification of time-varying pilot control behavior in multi-axis control tasks. In this paper, a maximum likelihood estimation method is used to estimate the parameters of a pilot model with time-dependent sigmoid functions to characterize time-varying human control behavior. An experiment was performed by 9 general aviation pilots who had to perform a simultaneous roll and pitch control task with time-varying aircraft dynamics. In 8 different conditions, the axis containing the time-varying dynamics and the growth factor of the dynamics were varied, allowing for an analysis of the performance of the estimation method when estimating time-dependent parameter functions. In addition, a detailed analysis of pilots' adaptation to the time-varying aircraft dynamics in both the roll and pitch axes could be performed. Pilot control behavior in both axes was significantly affected by the time-varying aircraft dynamics in roll and pitch, and by the growth factor. The main effect was found in the axis that contained the time-varying dynamics. However, pilot control behavior also changed over time in the axis not containing the time-varying aircraft dynamics. This indicates that some cross coupling exists in the perception and control processes between the roll and pitch axes.
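A plausible form for such a time-dependent sigmoid parameter schedule (generic notation, not necessarily the paper's) is

    p(t) = p_0 + \frac{p_1 - p_0}{1 + e^{-G\,(t - t_M)}}

where p_0 and p_1 are the initial and final values of a pilot-model parameter, G is a growth rate governing how abruptly the pilot adapts, and t_M is the time of maximum change; the maximum likelihood procedure then estimates these alongside the remaining model parameters.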
Geometric subspace methods and time-delay embedding for EEG artifact removal and classification.
Anderson, Charles W; Knight, James N; O'Connor, Tim; Kirby, Michael J; Sokolov, Artem
2006-06-01
Generalized singular-value decomposition is used to separate multichannel electroencephalogram (EEG) into components found by optimizing a signal-to-noise quotient. These components are used to filter out artifacts. Short-time principal components analysis of time-delay embedded EEG is used to represent windowed EEG data to classify EEG according to which mental task is being performed. Examples are presented of the filtering of various artifacts and results are shown of classification of EEG from five mental tasks using committees of decision trees.
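A minimal sketch of the time-delay embedding step (the dimension and lag are illustrative):

    import numpy as np

    def delay_embed(x, dim=8, lag=1):
        # Rows are time; columns are successively lagged copies of the signal
        n = len(x) - (dim - 1) * lag
        return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

    # Short-time PCA on one window of embedded EEG:
    # W = delay_embed(eeg_window); W = W - W.mean(axis=0)
    # _, s, Vt = np.linalg.svd(W, full_matrices=False)   # rows of Vt = components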
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilaly, A.K.; Sikdar, S.K.
In this study, the authors introduced several modifications to the WAR (waste reduction) algorithm developed earlier. These modifications were made for systematically handling sensitivity analysis and various tasks of waste minimization. A design hierarchy was formulated to promote appropriate waste reduction tasks at designated levels of the hierarchy. A sensitivity coefficient was used to measure the relative impacts of process variables on the pollution index of a process. The use of the WAR algorithm was demonstrated by a fermentation process for making penicillin.
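A generic relative-sensitivity form of such a coefficient (shown for illustration; the paper's exact definition may differ) is

    S_i = \frac{x_i}{I} \frac{\partial I}{\partial x_i}

where I is the pollution index of the process and x_i is a process variable, so that S_i measures the fractional change in the index per fractional change in the variable.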
Assessment of Joystick control during the performance of powered wheelchair driving tasks
2011-01-01
Background Powered wheelchairs are essential for many individuals who have mobility impairments. Nevertheless, if operated improperly, the powered wheelchair poses dangers to both the user and to those in its vicinity. Thus, operating a powered wheelchair with some degree of proficiency is important for safety, and measuring driving skills becomes an important issue to address. The objective of this study was to explore the discriminant validity of outcome measures of driving skills based on joystick control strategies and performance recorded using a data logging system. Methods We compared joystick control strategies and performance during standardized driving tasks between a group of 10 expert and 13 novice powered wheelchair users. Driving tasks were drawn from the Wheelchair Skills Test (v. 4.1). Data from the joystick controller were collected on a data logging system. Joystick control strategies and performance outcome measures included the mean number of joystick movements, time required to complete tasks, as well as variability of joystick direction. Results In simpler tasks, the expert group's driving skills were comparable to those of the novice group. Yet, in more difficult and spatially confined tasks, the expert group required fewer joystick movements for task completion. In some cases, experts also completed tasks in approximately half the time of the novice group. Conclusions The analysis of joystick control made it possible to discriminate between novice and expert powered wheelchair users in a variety of driving tasks. These results imply that in spatially confined areas, a greater powered wheelchair driving skill level is required to complete tasks efficiently. Based on these findings, it would appear that the use of joystick signal analysis constitutes an objective tool for the measurement of powered wheelchair driving skills. This tool may be useful for the clinical assessment and training of powered wheelchair skills. PMID:21609435
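A minimal sketch of how such outcome measures might be computed from logged joystick samples (the dead-zone threshold and the circular-variance index are assumptions, not necessarily the study's definitions):

    import numpy as np

    def joystick_metrics(x, y, fs, dead_zone=0.05):
        # x, y: logged joystick deflections in [-1, 1]; fs: logging rate (Hz)
        active = np.hypot(x, y) > dead_zone              # stick deflected
        onsets = np.flatnonzero(np.diff(active.astype(int)) == 1)
        n_movements = len(onsets)                        # joystick movements
        task_time = len(x) / fs                          # completion time (s)
        ang = np.arctan2(y[active], x[active])
        dir_variability = 1 - np.abs(np.mean(np.exp(1j * ang)))  # circular variance
        return n_movements, task_time, dir_variability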
A Workbench for Discovering Task-Specific Theories of Learning
1989-03-03
mind (the cognitive architecture) will not be of much use to educators who wish to perform a cognitive task analysis of their subject matter before...analysis packages that can be added to a cognitive architecture, thus creating a 'workbench' for performing cognitive task analysis. Such tools become...learning theories have been. Keywords: Cognitive task analysis, Instructional design, Cognitive modelling, Learning.
Naturalistic Decision Making: Implications for Design
1993-04-01
Subject terms: Cognitive Task Analysis, Decision Making, Design Engineer, Design System, Human-Computer Interface, System Development ...people use to select a course of action. The SOAR explains how stress affects the decision making of both individuals and teams. COGNITIVE TASK ANALYSIS: This...procedures for Cognitive Task Analysis, contrasting the strengths and weaknesses of each, and showing how a Cognitive Task Analysis
Acquisition, representation and rule generation for procedural knowledge
NASA Technical Reports Server (NTRS)
Ortiz, Chris; Saito, Tim; Mithal, Sachin; Loftin, R. Bowen
1991-01-01
Current research into the design and continuing development of a system for the acquisition of procedural knowledge, its representation in useful forms, and proposed methods for automated C Language Integrated Production System (CLIPS) rule generation is discussed. The Task Analysis and Rule Generation Tool (TARGET) is intended to permit experts, individually or collectively, to visually describe and refine procedural tasks. The system is designed to represent the acquired knowledge in the form of graphical objects with the capacity for generating production rules in CLIPS. The generated rules can then be integrated into applications such as NASA's Intelligent Computer Aided Training (ICAT) architecture. Also described are proposed methods for use in translating the graphical and intermediate knowledge representations into CLIPS rules.
[Prospects of systemic radioecology in solving innovative tasks of nuclear power engineering].
Spiridonov, S I
2014-01-01
The need for systemic radioecological studies within the strategy developed by the Russian atomic industry for the XXI century is substantiated. Priorities in the radioecology of naturally safe nuclear power engineering are identified: the development of the radiation-migration equivalence concept, the comparative evaluation of innovative nuclear technologies, and methods for forecasting various emergencies. Also described is an algorithm for the integrated solution of these tasks that includes the elaboration of methodological approaches, methods, and software allowing dose burdens to humans and biota to be estimated. The rationale for using radioecological risks in the analysis of uncertainties in environmental contamination impacts at different stages of the existing and innovative nuclear fuel cycles is shown.
McLaren, Donald G.; Ries, Michele L.; Xu, Guofan; Johnson, Sterling C.
2012-01-01
Functional MRI (fMRI) allows one to study task-related regional responses and task-dependent connectivity analysis using psychophysiological interaction (PPI) methods. The latter affords the additional opportunity to understand how brain regions interact in a task-dependent manner. The current implementation of PPI in Statistical Parametric Mapping (SPM8) is configured primarily to assess connectivity differences between two task conditions, when in practice fMRI tasks frequently employ more than two conditions. Here we evaluate how a generalized form of context-dependent PPI (gPPI; http://www.nitrc.org/projects/gppi), which is configured to automatically accommodate more than two task conditions in the same PPI model by spanning the entire experimental space, compares to the standard implementation in SPM8. These comparisons are made using both simulations and an empirical dataset. In the simulated dataset, we compare the interaction beta estimates to their expected values and model fit using the Akaike Information Criterion (AIC). We found that interaction beta estimates in gPPI were robust to different simulated data models, were not different from the expected beta value, and had better model fits than when using standard PPI (sPPI) methods. In the empirical dataset, we compare the model fit of the gPPI approach to sPPI. We found that the gPPI approach improved model fit compared to sPPI. There were several regions that became non-significant with gPPI. These regions all showed significantly better model fits with gPPI. Also, there were several regions where task-dependent connectivity was only detected using gPPI methods, also with improved model fit. Regions that were detected with all methods had more similar model fits. These results suggest that gPPI may have greater sensitivity and specificity than standard implementation in SPM. This notion is tempered slightly as there is no gold standard; however, data simulations with a known outcome support our conclusions about gPPI. In sum, the generalized form of context-dependent PPI approach has increased flexibility of statistical modeling, and potentially improves model fit, specificity to true negative findings, and sensitivity to true positive findings. PMID:22484411
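A conceptual sketch of how a gPPI design matrix spans the experimental space with one psychological/interaction pair per condition (deconvolution of the seed BOLD signal to the neural level, which the toolbox performs, is assumed to have happened upstream; names are illustrative):

    import numpy as np

    def gppi_design(seed_neural, boxcars, hrf):
        # seed_neural: seed time series at the neural level (post-deconvolution)
        # boxcars: (n_conditions, n_scans) task indicator functions
        n = len(seed_neural)
        conv = lambda s: np.convolve(s, hrf)[:n]
        cols = [conv(seed_neural)]                 # physiological regressor
        for c in boxcars:                          # every task condition
            cols.append(conv(c))                   # psychological regressor
            cols.append(conv(c * seed_neural))     # PPI interaction regressor
        return np.column_stack(cols)

A standard two-condition sPPI would instead form a single interaction regressor from the contrast of two conditions; gPPI's one-pair-per-condition construction is what lets it accommodate designs with more than two conditions.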
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.
Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin
2017-06-01
Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied in many computer vision and biomedical informatics studies. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as supervised learning schemes, whose drawback is either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.
Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.
Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu
2016-01-01
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
Cue Representation and Situational Awareness in Task Analysis
ERIC Educational Resources Information Center
Carl, Diana R.
2009-01-01
Task analysis in human performance technology is used to determine how human performance can be well supported with training, job aids, environmental changes, and other interventions. Early work by Miller (1953) and Gilbert (1969, 1974) addressed cue processing in task execution and recommended cue descriptions in task analysis. Modern task…
Chen, Zhe; Honomichl, Ryan; Kennedy, Diane; Tan, Enda
2016-06-01
The present study examines 5- to 8-year-old children's relational reasoning in solving matrix completion tasks. This study incorporates a componential analysis, an eye-tracking method, and a microgenetic approach, which together allow an investigation of the cognitive processing strategies involved in the development and learning of children's relational thinking. Developmental differences in problem-solving performance were largely due to deficiencies in engaging the processing strategies that are hypothesized to facilitate problem-solving performance. Feedback designed to highlight the relations between objects within the matrix improved 5- and 6-year-olds' problem-solving performance, as well as their use of appropriate processing strategies. Furthermore, children who engaged the processing strategies early on in the task were more likely to solve subsequent problems in later phases. These findings suggest that encoding relations, integrating rules, completing the model, and generalizing strategies across tasks are critical processing components that underlie relational thinking. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Demixed principal component analysis of neural population data
Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K
2016-01-01
Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure. DOI: http://dx.doi.org/10.7554/eLife.10989.001 PMID:27067378
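The core demixing intuition behind dPCA can be conveyed in a few lines: marginalize population activity over all task parameters but one, then apply PCA within each marginalization. The published method adds a joint reduced-rank regression step, so treat the following only as a simplified sketch with hypothetical dimensions.

```python
# Simplified illustration of demixing: PCA per task-parameter marginalization.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# activity tensor: neurons x stimuli x decisions x time
X = rng.standard_normal((100, 4, 2, 50))
X -= X.mean(axis=(1, 2, 3), keepdims=True)        # center each neuron

marginalizations = {
    "stimulus": X.mean(axis=(2, 3)),              # neurons x stimuli
    "decision": X.mean(axis=(1, 3)),              # neurons x decisions
    "time":     X.mean(axis=(1, 2)),              # neurons x time
}
for name, M in marginalizations.items():
    pca = PCA(n_components=2)
    comps = pca.fit_transform(M.T)                # conditions projected in neuron space
    print(name, pca.explained_variance_ratio_)
```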
A day in the life of a volunteer incident commander: errors, pressures and mitigating strategies.
Bearman, Christopher; Bremner, Peter A
2013-05-01
To address an identified gap in the literature, this paper investigates the tasks that a volunteer incident commander needs to carry out during an incident, the errors that can be made, and the way that errors are managed. In addition, pressures from goal seduction and situation aversion were also examined. Volunteer incident commanders participated in a two-part interview consisting of a critical decision method interview and discussions about a hierarchical task analysis constructed by the authors. A SHERPA analysis was conducted to further identify potential errors. The results identified the key tasks, errors with extreme risk, pressures from strong situations, and mitigating strategies for errors and pressures. The errors and pressures provide a basic set of issues that need to be managed by both volunteer incident commanders and fire agencies. The mitigating strategies identified here suggest some ways that this can be done. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Aliu, Oluseyi; Corlew, Scott D; Heisler, Michele E; Pannucci, Christopher J; Chung, Kevin C
2014-01-01
Surgical volunteer organizations (SVOs) focus considerable resources on addressing the backlog of cases in low-resource countries. This model of service may perpetuate dependency. Efforts should focus on models that establish independence in providing surgical care. Independence could be achieved through surgical capacity building. However, there has been scant discussion in the literature on SVO involvement in surgical capacity building. Using qualitative methods, we evaluated the perspectives of surgeons with extensive volunteer experience in low-resource countries. We collected data through in-depth interviews that centered on SVOs using task shifting as a tool for surgical capacity building. Some of the key themes from our analysis include the ethical ramifications of task shifting, the challenges of addressing technical and clinical education in capacity building for low-resource settings, and the allocation of limited volunteer resources toward surgical capacity building. These themes will be the foundation of subsequent studies that will focus on other stakeholders in surgical capacity building, including host communities and SVO administrators.
Use of the internet to study the utility values of the public.
Lenert, Leslie A.; Sturley, Ann E.
2002-01-01
One of the most difficult tasks in cost-effectiveness analysis is the measurement of quality weights (utilities) for health states. The task is difficult because subjects often lack familiarity with the health states they are asked to rate, and because utility measures, such as the standard gamble, ask subjects to perform tasks that are complex and far from everyday experience. A large body of research suggests that computer methods can play an important role in explaining health states and measuring utilities. However, administering computer surveys to a "general public" sample, the most relevant sample for cost-effectiveness analysis, is logistically difficult. In this paper, we describe a software system designed to allow the study of general population preferences in a volunteer Internet survey panel. The approach, which relied on oversampling of ethnic groups and older members of the panel, produced a data set with an ethnically, chronologically, and geographically diverse group of respondents, but was not successful in replicating the joint distribution of demographic patterns in the population. PMID:12463862
NASA Technical Reports Server (NTRS)
Wolf, M.
1979-01-01
To facilitate the task of objectively comparing competing process options, a methodology was needed for the quantitative evaluation of their relative cost effectiveness. Such a methodology was developed and is described, together with three examples for its application. The criterion for the evaluation is the cost of the energy produced by the system. The method permits the evaluation of competing design options for subsystems, based on the differences in cost and efficiency of the subsystems, assuming comparable reliability and service life, or of competing manufacturing process options for such subsystems, which include solar cells or modules. This process option analysis is based on differences in cost, yield, and conversion efficiency contribution of the process steps considered.
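A hedged sketch of the kind of criterion described above follows: competing options are compared by the cost of the energy produced, folding in step cost, yield, and the step's contribution to conversion efficiency. All numbers are invented, and the actual methodology may weigh these factors differently.

```python
# Illustrative cost-of-energy comparison between two process options.
def energy_cost(step_cost_per_m2, process_yield, efficiency,
                insolation_kwh=2000.0, lifetime_years=20.0):
    """Approximate cost per kWh for a module area processed by this option."""
    effective_cost = step_cost_per_m2 / process_yield      # scrap raises cost
    energy = efficiency * insolation_kwh * lifetime_years  # kWh per m^2 over life
    return effective_cost / energy

option_a = energy_cost(step_cost_per_m2=40.0, process_yield=0.95, efficiency=0.14)
option_b = energy_cost(step_cost_per_m2=30.0, process_yield=0.85, efficiency=0.13)
print(f"A: {option_a:.4f} $/kWh  B: {option_b:.4f} $/kWh")
```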
An automatic experimental apparatus to study arm reaching in New World monkeys.
Yin, Allen; An, Jehi; Lehew, Gary; Lebedev, Mikhail A; Nicolelis, Miguel A L
2016-05-01
Several species of the New World monkeys have been used as experimental models in biomedical and neurophysiological research. However, a method for controlled arm reaching tasks has not been developed for these species. We have developed a fully automated, pneumatically driven, portable, and reconfigurable experimental apparatus for arm-reaching tasks suitable for these small primates. We have utilized the apparatus to train two owl monkeys in a visually-cued arm-reaching task. Analysis of neural recordings demonstrates directional tuning of the M1 neurons. Our apparatus allows automated control, freeing the experimenter from manual experiments. The presented apparatus provides a valuable tool for conducting neurophysiological research on New World monkeys. Copyright © 2016. Published by Elsevier B.V.
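The directional-tuning analysis mentioned above is conventionally done by fitting a cosine tuning curve to firing rates across reach directions; the sketch below fits such a model to synthetic data (the paper's exact pipeline is not given in the abstract).

```python
# Cosine directional-tuning fit: rate = b0 + b1*cos(theta - theta_pref),
# rewritten in linear form so ordinary least squares applies.
import numpy as np

rng = np.random.default_rng(0)
theta = np.repeat(np.deg2rad(np.arange(0, 360, 45)), 10)   # 8 reach directions
true_pref = np.deg2rad(120)
rates = 20 + 8 * np.cos(theta - true_pref) + rng.normal(0, 2, theta.size)

# rate = b0 + a*cos(theta) + b*sin(theta); preferred direction = atan2(b, a)
X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
b0, a, b = np.linalg.lstsq(X, rates, rcond=None)[0]
print("preferred direction (deg):", np.rad2deg(np.arctan2(b, a)) % 360)
```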
Device research task (processing and high-efficiency solar cells)
NASA Technical Reports Server (NTRS)
1986-01-01
This task has been expanded since the last 25th Project Integration Meeting (PIM) to include process research in addition to device research. The objective of this task is to assist the Flat-plate Solar Array (FSA) Project in meeting its near- and long-term goals by identifying and implementing research in the areas of device physics, device structures, measurement techniques, material-device interactions, and cell processing. The research efforts of this task are described and reflect the deversity of device research being conducted. All of the contracts being reported are either completed or near completion and culminate the device research efforts of the FSA Project. Optimazation methods and silicon solar cell numerical models, carrier transport and recombination parameters in heavily doped silicon, development and analysis of silicon solar cells of near 20% efficiency, and SiN sub x passivation of silicon surfaces are discussed.
Marsh, Kevin; IJzerman, Maarten; Thokala, Praveen; Baltussen, Rob; Boysen, Meindert; Kaló, Zoltán; Lönngren, Thomas; Mussen, Filip; Peacock, Stuart; Watkins, John; Devlin, Nancy
2016-01-01
Health care decisions are complex and involve confronting trade-offs between multiple, often conflicting objectives. Using structured, explicit approaches to decisions involving multiple criteria can improve the quality of decision making. A set of techniques, known under the collective heading multiple criteria decision analysis (MCDA), is useful for this purpose. In 2014, ISPOR established an Emerging Good Practices Task Force. The task force's first report defined MCDA, provided examples of its use in health care, described the key steps, and provided an overview of the principal methods of MCDA. This second task force report provides emerging good-practice guidance on the implementation of MCDA to support health care decisions. The report includes: a checklist to support the design, implementation, and review of an MCDA; guidance to support the implementation of the checklist; the order in which the steps should be implemented; an illustration of how to incorporate budget constraints into an MCDA; an overview of the skills and resources, including available software, required to implement MCDA; and future research directions. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
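One of the principal MCDA methods such reports cover, the weighted-sum value model, fits in a few lines. The criteria, weights, scores, and budget screen below are invented for illustration.

```python
# Weighted-sum MCDA with a simple budget screen (illustrative numbers only).
import numpy as np

options = ["Therapy A", "Therapy B", "Therapy C"]
# Performance scores (rows: options; columns: criteria such as effectiveness,
# safety, equity), each already normalized to the 0-1 range.
scores = np.array([[0.8, 0.6, 0.5],
                   [0.6, 0.9, 0.7],
                   [0.9, 0.4, 0.8]])
weights = np.array([0.5, 0.3, 0.2])          # elicited from decision makers
costs = np.array([120_000, 90_000, 150_000])
budget = 130_000

value = scores @ weights
for name, v, c in sorted(zip(options, value, costs), key=lambda x: -x[1]):
    feasible = "within budget" if c <= budget else "exceeds budget"
    print(f"{name}: value={v:.2f} ({feasible})")
```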
Preliminary Tests For Development Of A Non-Pertechnetate Analysis Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diprete, D.; McCabe, D.
2016-09-28
The objective of this task was to develop a non-pertechnetate analysis method that the 222-S laboratory could easily implement. The initial scope involved working with 222-S laboratory personnel to adapt the existing Tc analytical method to fractionate the non-pertechnetate and pertechnetate. SRNL then developed and tested a method using commercial sorbents containing Aliquat® 336 to extract the pertechnetate (thereby separating it from non-pertechnetate), followed by oxidation, extraction, and stripping steps, and finally analysis by beta counting and mass spectrometry. Several additional items were partially investigated, including the impact of a 137Cs removal step. The method was initially tested on SRS tank waste samples to determine its viability. Although SRS tank waste does not contain non-pertechnetate, testing with it was useful to investigate the compatibility, separation efficiency, interference removal efficacy, and method sensitivity.
Chen, Keping; Blong, Russell; Jacobson, Carol
2003-04-01
This paper develops a GIS-based integrated approach to risk assessment in natural hazards, with reference to bushfires. The challenges for undertaking this approach have three components: data integration, risk assessment tasks, and risk decision-making. First, data integration in GIS is a fundamental step for subsequent risk assessment tasks and risk decision-making. A series of spatial data integration issues within GIS such as geographical scales and data models are addressed. Particularly, the integration of both physical environmental data and socioeconomic data is examined with an example linking remotely sensed data and areal census data in GIS. Second, specific risk assessment tasks, such as hazard behavior simulation and vulnerability assessment, should be undertaken in order to understand complex hazard risks and provide support for risk decision-making. For risk assessment tasks involving heterogeneous data sources, the selection of spatial analysis units is important. Third, risk decision-making concerns spatial preferences and/or patterns, and a multicriteria evaluation (MCE)-GIS typology for risk decision-making is presented that incorporates three perspectives: spatial data types, data models, and methods development. Both conventional MCE methods and artificial intelligence-based methods with GIS are identified to facilitate spatial risk decision-making in a rational and interpretable way. Finally, the paper concludes that the integrated approach can be used to assist risk management of natural hazards, in theory and in practice.
Agent-based Training: Facilitating Knowledge and Skill Acquisition in a Modern Space Operations Team
2002-04-01
face, and being careful to not add to existing problems such as limited display space. This required us to work closely with members of the SBIRS operational community and use research tools such as cognitive task analysis methods.
Multi-Dimensional Analysis of Dynamic Human Information Interaction
ERIC Educational Resources Information Center
Park, Minsoo
2013-01-01
Introduction: This study aims to understand the interactions of perception, effort, emotion, time and performance during the performance of multiple information tasks using Web information technologies. Method: Twenty volunteers from a university participated in this study. Questionnaires were used to obtain general background information and…
What's So Important about Water?
ERIC Educational Resources Information Center
Walker, Mary Pat; Baker, C'Anne
A method of water learning (teaching low level motor coordination in water, rather than on land) has been developed for stimulating the growth and skills of severely handicapped students. The model, which attempts to elicit natural developmental responses to the environment, incorporates task analysis of developmental preswimming sequences…
Task Analysis Assessment on Intrastate Bus Traffic Controllers
NASA Astrophysics Data System (ADS)
Yen Bin, Teo; Azlis-Sani, Jalil; Nur Annuar Mohd Yunos, Muhammad; Ismail, S. M. Sabri S. M.; Tajedi, Noor Aqilah Ahmad
2016-11-01
Public transportation supports social mobility and caters to the daily needs of society by moving passengers from one place to another. This is especially true for a country like Malaysia, where international trade has grown significantly over the past few decades. Task analysis assessment was conducted from a cognitive ergonomics perspective on problems related to human factors. Studying the tasks of bus traffic controllers allowed a better understanding of the nature of their work and their overall monitoring of bus services. This paper studies task analysis assessment of intrastate bus traffic controllers; the objective was to conduct a task analysis assessment of the controllers' work. The assessment was developed via Hierarchical Task Analysis (HTA). There are five subsidiary tasks at level one, of which only two could be decomposed further at level two. Developing the HTA allowed a better understanding of the work and eases the evaluation of the tasks conducted by the bus traffic controllers. Human error could thus be reduced for the safety of all passengers, and the overall efficiency of the system increased. The HTA could also assist in improving the operation of the bus traffic controllers by modelling or synthesizing the existing tasks where necessary.
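An HTA lends itself to a simple nested representation. The sketch below uses placeholder tasks, not the study's actual five subsidiary tasks, which are not listed in the abstract.

```python
# Hypothetical HTA represented as nested dicts; tasks are invented placeholders.
hta = {
    "0. Monitor intrastate bus services": {
        "1. Receive trip schedule": {},
        "2. Track bus positions": {
            "2.1 Poll GPS feed": {},
            "2.2 Flag late departures": {},
        },
        "3. Communicate with drivers": {},
        "4. Handle incidents": {},
        "5. Compile daily report": {},
    }
}

def walk(node, depth=0):
    for task, subtasks in node.items():
        print("  " * depth + task)
        walk(subtasks, depth + 1)

walk(hta)
```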
Sheehan, Barbara; Kaufman, David; Stetson, Peter; Currie, Leanne M.
2009-01-01
Computerized decision support systems have been used to help ensure safe medication prescribing. However, the acceptance of these types of decision support has been reported to be low. It has been suggested that decreased acceptance may be due to lack of clinical relevance. Additionally, cognitive fit between the user interface and clinical task may impact the response of clinicians as they interact with the system. In order to better understand clinician responses to such decision support, we used cognitive task analysis methods to evaluate clinical alerts for antibiotic prescribing in a neonatal intensive care unit. Two methods were used: 1) a cognitive walkthrough; and 2) usability testing with a ‘think-aloud’ protocol. Data were analyzed for impact on cognitive effort according to categories of cognitive distance. We found that responses to alerts may be context specific and that lack of screen cues often increases cognitive effort required to use a system. PMID:20351922
Nonlinear dimensionality reduction of electroencephalogram (EEG) for Brain Computer interfaces.
Teli, Mohammad Nayeem; Anderson, Charles
2009-01-01
Patterns in electroencephalogram (EEG) signals are analyzed for a Brain Computer Interface (BCI). An important aspect of this analysis is the work on transformations of high dimensional EEG data to low dimensional spaces in which we can classify the data according to mental tasks being performed. In this research we investigate how a Neural Network (NN) in an auto-encoder with bottleneck configuration can find such a transformation. We implemented two approximate second-order methods to optimize the weights of these networks, because the more common first-order methods are very slow to converge for networks like these with more than three layers of computational units. The resulting non-linear projections of time embedded EEG signals show interesting separations that are related to tasks. The bottleneck networks do indeed discover nonlinear transformations to low-dimensional spaces that capture much of the information present in EEG signals. However, the resulting low-dimensional representations do not improve classification rates beyond what is possible using Quadratic Discriminant Analysis (QDA) on the original time-lagged EEG.
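A minimal bottleneck autoencoder trained with a quasi-Newton optimizer (torch.optim.LBFGS, echoing the abstract's approximate second-order methods) might look as follows. The data and layer sizes are synthetic stand-ins for time-embedded EEG.

```python
# Bottleneck autoencoder sketch in PyTorch, optimized with L-BFGS.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 62)                    # e.g. time-embedded EEG feature vectors

model = nn.Sequential(
    nn.Linear(62, 20), nn.Tanh(),
    nn.Linear(20, 2),                       # 2-D bottleneck for visualization
    nn.Tanh(),
    nn.Linear(2, 20), nn.Tanh(),
    nn.Linear(20, 62),
)
opt = torch.optim.LBFGS(model.parameters(), lr=0.5, max_iter=50)
loss_fn = nn.MSELoss()

def closure():
    opt.zero_grad()
    loss = loss_fn(model(X), X)             # reconstruct the input
    loss.backward()
    return loss

for _ in range(10):
    opt.step(closure)

with torch.no_grad():
    bottleneck = model[:3](X)               # low-dimensional projection
print(bottleneck.shape, closure().item())
```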
Graph Frequency Analysis of Brain Signals
Huang, Weiyu; Goldsberry, Leah; Wymbs, Nicholas F.; Grafton, Scott T.; Bassett, Danielle S.; Ribeiro, Alejandro
2016-01-01
This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains, and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed, and recognize the most contributing and important frequency signatures at different levels of task familiarity. PMID:28439325
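The graph-frequency machinery described above reduces to an eigendecomposition of a graph Laplacian. The following sketch computes a graph Fourier transform and a low-pass (spatially smooth) reconstruction on a random network standing in for functional connectivity.

```python
# Graph Fourier transform and low-pass graph filtering on a toy brain network.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                       # brain regions (hypothetical)
W = rng.random((n, n)); W = (W + W.T) / 2    # symmetric connectivity weights
np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W                    # combinatorial graph Laplacian

eigvals, U = np.linalg.eigh(L)               # eigenvalues = graph frequencies (low = smooth)
signal = rng.standard_normal(n)              # one activity value per region

s_hat = U.T @ signal                         # graph Fourier transform
smooth_part = U[:, :5] @ s_hat[:5]           # keep the 5 lowest graph frequencies
print(eigvals[:5], np.linalg.norm(signal - smooth_part))
```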
Simulating Mission Command for Planning and Analysis
2015-06-01
mission plan. Subject terms: Mission Planning, CPM, PERT, Simulation, DES, Simkit, Triangle Distribution, Critical Path. …management tools that can be utilized to find the critical path in military projects: the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT).
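The forward/backward pass at the heart of CPM is compact enough to sketch directly; the task network below is invented.

```python
# Critical Path Method: forward pass for earliest finish, backward pass for
# latest finish; zero total float marks the critical path.
tasks = {            # task: (duration, predecessors)
    "recon":    (2, []),
    "plan":     (3, ["recon"]),
    "rehearse": (2, ["plan"]),
    "move":     (4, ["plan"]),
    "attack":   (1, ["rehearse", "move"]),
}

order = list(tasks)                          # already in topological order here
early = {}
for t in order:                              # forward pass: earliest finish times
    dur, preds = tasks[t]
    early[t] = dur + max((early[p] for p in preds), default=0)

makespan = max(early.values())
late = {}
for t in reversed(order):                    # backward pass: latest finish times
    succs = [s for s in order if t in tasks[s][1]]
    late[t] = min((late[s] - tasks[s][0] for s in succs), default=makespan)

critical = [t for t in order if early[t] == late[t]]   # zero total float
print("makespan:", makespan, "critical path:", critical)
```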
Comparison of gesture and conventional interaction techniques for interventional neuroradiology.
Hettig, Julian; Saalfeld, Patrick; Luz, Maria; Becker, Mathias; Skalej, Martin; Hansen, Christian
2017-09-01
Interaction with radiological image data and volume renderings within a sterile environment is a challenging task. Clinically established methods such as joystick control and task delegation can be time-consuming and error-prone and interrupt the workflow. New touchless input modalities may have the potential to overcome these limitations, but their value compared to established methods is unclear. We present a comparative evaluation to analyze the value of two gesture input modalities (Myo Gesture Control Armband and Leap Motion Controller) versus two clinically established methods (task delegation and joystick control). A user study was conducted with ten experienced radiologists by simulating a diagnostic neuroradiological vascular treatment with two frequently used interaction tasks in an experimental operating room. The input modalities were assessed using task completion time, perceived task difficulty, and subjective workload. Overall, the clinically established method of task delegation performed best under the study conditions. In general, gesture control failed to exceed the clinical input approach. However, the Myo Gesture Control Armband showed potential for simple image selection tasks. Novel input modalities have the potential to take over single tasks more efficiently than clinically established methods. The results of our user study show the relevance of task characteristics, such as task complexity, for performance with specific input modalities. Accordingly, future work should consider task characteristics to provide a useful gesture interface for a specific use case instead of an all-in-one solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bill Stanley; Sandra Brown; Patrick Gonzalez
2004-07-10
The Nature Conservancy is participating in a Cooperative Agreement with the Department of Energy (DOE) National Energy Technology Laboratory (NETL) to explore the compatibility of carbon sequestration in terrestrial ecosystems and the conservation of biodiversity. The title of the research project is ''Application and Development of Appropriate Tools and Technologies for Cost-Effective Carbon Sequestration''. The objectives of the project are to: (1) improve carbon offset estimates produced in both the planning and implementation phases of projects; (2) build valid and standardized approaches to estimate project carbon benefits at a reasonable cost; and (3) lay the groundwork for implementing cost-effective projects, providing new testing ground for biodiversity protection and restoration projects that store additional atmospheric carbon. This Technical Progress Report discusses preliminary results of the six specific tasks that The Nature Conservancy is undertaking to answer research needs while facilitating the development of real projects with measurable greenhouse gas impacts. The research described in this report occurred between July 1, 2002 and June 30, 2003. The specific tasks discussed include: Task 1: carbon inventory advancements; Task 2: remote sensing for carbon analysis; Task 3: baseline method development; Task 4: third-party technical advisory panel meetings; Task 5: new project feasibility studies; and Task 6: development of new project software screening tool.
Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network
2018-01-01
Skin lesions such as melanoma constitute a severe disease burden globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks emerging in the area of skin lesion image processing: lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2), and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracies of our frameworks: 0.753 for task 1, 0.848 for task 2, and 0.912 for task 3. PMID:29439500
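The fully convolutional residual unit underlying FCRN-style networks can be sketched generically in PyTorch; this is not the authors' exact architecture or their lesion index calculation unit.

```python
# Generic fully convolutional residual block (illustrative, not the paper's FCRN).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))    # identity skip keeps gradients healthy

x = torch.randn(1, 32, 128, 128)             # batch of dermoscopy feature maps
print(ResidualBlock(32)(x).shape)            # fully convolutional: spatial size kept
```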
User Needs, Benefits, and Integration of Robotic Systems in a Space Station Laboratory
NASA Technical Reports Server (NTRS)
Dodd, W. R.; Badgley, M. B.; Konkel, C. R.
1989-01-01
The methodology, results, and conclusions of all tasks of the User Needs, Benefits, and Integration Study (UNBIS) of Robotic Systems in a Space Station Laboratory are summarized. Study goals included the determination of user requirements for robotics within the Space Station, United States Laboratory. In Task 1, three experiments were selected to determine user needs and to allow detailed investigation of microgravity requirements. In Task 2, a NASTRAN analysis of Space Station response to robotic disturbances, and acceleration measurement of a standard industrial robot (Intelledex Model 660), resulted in selection of two ranges of microgravity manipulation: Level 1 (10^-3 to 10^-5 G at greater than 1 Hz) and Level 2 (less than or equal to 10^-6 G at 0.1 Hz). This task included an evaluation of microstepping methods for controlling stepper motors and concluded that an industrial robot actuator can perform milli-G motion without modification. Relative merits of end-effectors and manipulators were studied in Task 3 in order to determine their ability to perform a range of tasks related to the three microgravity experiments. An Effectivity Rating was established for evaluating these robotic system capabilities. Preliminary interface requirements for an orbital flight demonstration were determined in Task 4. Task 5 assessed the impact of robotics.
Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition
NASA Astrophysics Data System (ADS)
Yin, Xi; Liu, Xiaoming
2018-02-01
This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve the face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.
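A common way to realize dynamic task weighting is a learnable per-task loss weight with a regularizing penalty; the sketch below shows that generic pattern (the paper's actual scheme may differ), with a hypothetical pose side task alongside the main identity task.

```python
# Multi-task loss with a learnable side-task weight (uncertainty-style weighting).
import torch
import torch.nn as nn

torch.manual_seed(0)
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # stand-in for a CNN trunk
id_head = nn.Linear(64, 10)                               # main task: 10 identities
pose_head = nn.Linear(64, 9)                              # side task: 9 pose bins
log_w = nn.Parameter(torch.zeros(()))                     # learnable side-task weight

params = (list(backbone.parameters()) + list(id_head.parameters())
          + list(pose_head.parameters()) + [log_w])
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(32, 128)                                  # a batch of face features
y_id = torch.randint(0, 10, (32,))
y_pose = torch.randint(0, 9, (32,))

feat = backbone(x)
# Main task keeps unit weight; the side-task weight exp(-log_w) adapts during
# training, and the +log_w penalty keeps it from collapsing to zero.
loss = ce(id_head(feat), y_id) + torch.exp(-log_w) * ce(pose_head(feat), y_pose) + log_w
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss), float(torch.exp(-log_w)))
```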
EEG Frequency Changes Prior to Making Errors in an Easy Stroop Task
Atchley, Rachel; Klee, Daniel; Oken, Barry
2017-01-01
Background: Mind-wandering is a form of off-task attention that has been associated with negative affect and rumination. The goal of this study was to assess potential electroencephalographic markers of task-unrelated thought, or mind-wandering state, as related to error rates during a specialized cognitive task. We used EEG to record frontal frequency band activity while participants completed a Stroop task that was modified to induce boredom, task-unrelated thought, and therefore mind-wandering. Methods: A convenience sample of 27 older adults (50–80 years) completed a computerized Stroop matching task. Half of the Stroop trials were congruent (word/color match), and the other half were incongruent (mismatched). Behavioral data and EEG recordings were assessed. EEG analysis focused on the 1-s epochs prior to stimulus presentation in order to compare trials followed by correct versus incorrect responses. Results: Participants made errors on 9% of incongruent trials. There were no errors on congruent trials. There was a decrease in alpha and theta band activity during the epochs followed by error responses. Conclusion: Although replication of these results is necessary, these findings suggest that potential mind-wandering, as evidenced by errors, can be characterized by a decrease in alpha and theta activity compared to on-task, accurate performance periods. PMID:29163101
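The pre-stimulus band-power analysis described above can be approximated with a Welch PSD per 1-s epoch and band averages; everything below (sampling rate, simulated EEG, error labels) is illustrative.

```python
# Theta/alpha band power in 1-s pre-stimulus epochs, compared across outcomes.
import numpy as np
from scipy.signal import welch

fs = 256                                       # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, fs))         # 40 simulated pre-stimulus epochs

def band_power(epoch, lo, hi):
    freqs, psd = welch(epoch, fs=fs, nperseg=fs // 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

theta = np.array([band_power(e, 4, 8) for e in epochs])
alpha = np.array([band_power(e, 8, 13) for e in epochs])

# Compare epochs preceding errors vs. correct responses (labels hypothetical).
errors = np.zeros(40, dtype=bool); errors[::8] = True
print(theta[errors].mean(), theta[~errors].mean(), alpha[errors].mean())
```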
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain. The former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task, which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, even for closely spaced modes and significantly varied mode shapes within one measurement point.
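For flavor, a generic continuous-domain ant-colony optimisation loop (ACOR-style rank-weighted Gaussian sampling around a solution archive) is sketched below. It is a stand-in for the idea of optimising over a deterministically bounded parameter interval, not the DAC-OMA implementation.

```python
# Continuous ant-colony optimisation over a bounded parameter interval.
import numpy as np

def objective(x):                       # hypothetical fit error to minimise
    return (x - 2.7) ** 2 + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(0)
lo, hi, k, ants = 0.0, 5.0, 10, 20
archive = rng.uniform(lo, hi, k)        # initial candidate parameter values

for _ in range(30):
    archive = archive[np.argsort(objective(archive))]      # best first
    w = np.exp(-np.arange(k) / 3.0); w /= w.sum()          # rank weights ("pheromone")
    sigma = archive.std() + 1e-6
    centers = rng.choice(archive, size=ants, p=w)          # ants pick guiding solutions
    samples = np.clip(rng.normal(centers, sigma), lo, hi)  # explore around them
    pool = np.concatenate([archive, samples])
    archive = pool[np.argsort(objective(pool))][:k]        # keep the k best

print("estimated parameter:", archive[0])
```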
Exemplar Training for Battalion Visualization (CD-ROM)
cognitive task analysis to identify important visualization skills at a battalion level of command. The cognitive task analysis consisted of a review of...findings from the cognitive task analysis, 11 skill areas were identified as potential focal points of future training development. The findings from the...cognitive task analysis were used to design and develop exemplar training exercises for two skill areas; identify key problem elements employing the
Cognitive Task Analysis of the HALIFAX-Class Operations Room Officer
1999-03-10
Cognitive Task Analysis of the HALIFAX-Class Operations Room Officer: PWGSC Contract No. W7711-7-7404/001/SV (unclassified).
Brain activity associated with selective attention, divided attention and distraction.
Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo
2017-06-01
Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention to tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention supporting the role of this area in top-down integration of dual task performance. Distractors expectedly disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
Hardisty, Frank; Robinson, Anthony C.
2010-01-01
In this paper we present the GeoViz Toolkit, an open-source, internet-delivered program for geographic visualization and analysis that features a diverse set of software components which can be flexibly combined by users who do not have programming expertise. The design and architecture of the GeoViz Toolkit allows us to address three key research challenges in geovisualization: allowing end users to create their own geovisualization and analysis component set on-the-fly, integrating geovisualization methods with spatial analysis methods, and making geovisualization applications sharable between users. Each of these tasks necessitates a robust yet flexible approach to inter-tool coordination. The coordination strategy we developed for the GeoViz Toolkit, called Introspective Observer Coordination, leverages and combines key advances in software engineering from the last decade: automatic introspection of objects, software design patterns, and reflective invocation of methods. PMID:21731423
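A toy Python analogue of the Introspective Observer Coordination idea follows, using introspection (hasattr/getattr) where the Java toolkit uses reflection; all component classes and method names here are hypothetical.

```python
# Discover a shared coordination "slot" on components at runtime and wire every
# component that exposes it to one broadcast event, with no hard-coded links.
class MapView:
    def set_selection(self, ids): print("map highlights", ids)

class ScatterPlot:
    def set_selection(self, ids): print("scatterplot highlights", ids)

class Coordinator:
    def __init__(self, components, slot="set_selection"):
        # Introspect each component for the slot instead of coding tool-to-tool links.
        self.listeners = [getattr(c, slot) for c in components if hasattr(c, slot)]

    def broadcast(self, ids):
        for listener in self.listeners:
            listener(ids)                # reflective invocation of each observer

hub = Coordinator([MapView(), ScatterPlot()])
hub.broadcast([3, 17, 42])               # one selection event updates every view
```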
Revealing representational content with pattern-information fMRI--an introductory guide.
Mur, Marieke; Bandettini, Peter A; Kriegeskorte, Nikolaus
2009-03-01
Conventional statistical analysis methods for functional magnetic resonance imaging (fMRI) data are very successful at detecting brain regions that are activated as a whole during specific mental activities. The overall activation of a region is usually taken to indicate involvement of the region in the task. However, such activation analysis does not consider the multivoxel patterns of activity within a brain region. These patterns of activity, which are thought to reflect neuronal population codes, can be investigated by pattern-information analysis. In this framework, a region's multivariate pattern information is taken to indicate representational content. This tutorial introduction motivates pattern-information analysis, explains its underlying assumptions, introduces the most widespread methods in an intuitive way, and outlines the basic sequence of analysis steps.
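A minimal pattern-information analysis is cross-validated linear decoding of multivoxel patterns; the sketch below uses simulated data and standard scikit-learn calls, with all dimensions as placeholders.

```python
# Cross-validated linear decoding of simulated multivoxel patterns from an ROI.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 120
labels = np.repeat([0, 1], n_trials // 2)            # two stimulus conditions
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :10] += 0.8                    # weak information in 10 voxels

# Above-chance decoding indicates the ROI carries pattern information about the
# conditions even if its mean activation does not differ between them.
acc = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5)
print("decoding accuracy:", acc.mean())
```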