Sample records for task model development

  1. Integrating Cognitive Task Analysis into Instructional Systems Development.

    ERIC Educational Resources Information Center

    Ryder, Joan M.; Redding, Richard E.

    1993-01-01

    Discussion of instructional systems development (ISD) focuses on recent developments in cognitive task analysis and describes the Integrated Task Analysis Model, a framework for integrating cognitive and behavioral task analysis methods within the ISD model. Three components of expertise are analyzed: skills, knowledge, and mental models. (96…

  2. Development and Application of a Multi-Modal Task Analysis to Support Intelligent Tutoring of Complex Skills

    ERIC Educational Resources Information Center

    Skinner, Anna; Diller, David; Kumar, Rohit; Cannon-Bowers, Jan; Smith, Roger; Tanaka, Alyssa; Julian, Danielle; Perez, Ray

    2018-01-01

    Background: Contemporary work in the design and development of intelligent training systems employs task analysis (TA) methods for gathering knowledge that is subsequently encoded into task models. These task models form the basis of intelligent interpretation of student performance within education and training systems. Also referred to as expert…

  3. Model Developments for Development of Improved Emissions Scenarios: Developing Purchasing-Power Parity Models, Analyzing Uncertainty, and Developing Data Sets for Gridded Integrated Assessment Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zili; Nordhaus, William

    2009-03-19

    Over the duration of this project, we finished the main tasks set out in the initial proposal. These tasks included: setting up the basic platform in the GAMS language for the new RICE 2007 model; testing various model structures of RICE 2007; incorporating the PPP data set into the new RICE model; and developing a gridded data set for IA modeling.

  4. Development of task network models of human performance in microgravity

    NASA Technical Reports Server (NTRS)

    Diaz, Manuel F.; Adam, Susan

    1992-01-01

    This paper discusses the utility of task-network modeling for quantifying human performance variability in microgravity. The data are gathered for: (1) improving current methodologies for assessing human performance and workload in the operational space environment; (2) developing tools for assessing alternative system designs; and (3) developing an integrated set of methodologies for the evaluation of performance degradation during extended duration spaceflight. The evaluation entailed an analysis of the Remote Manipulator System payload-grapple task performed on many shuttle missions. Task-network modeling can be used as a tool for assessing and enhancing human performance in man-machine systems, particularly for modeling long-duration manned spaceflight. Task-network modeling can be directed toward improving system efficiency by increasing the understanding of basic capabilities of the human component in the system and the factors that influence these capabilities.

  5. Linking normative models of natural tasks to descriptive models of neural response.

    PubMed

    Jaini, Priyank; Burge, Johannes

    2017-10-01

    Understanding how nervous systems exploit task-relevant properties of sensory stimuli to perform natural tasks is fundamental to the study of perceptual systems. However, there are few formal methods for determining which stimulus properties are most useful for a given natural task. As a consequence, it is difficult to develop principled models for how to compute task-relevant latent variables from natural signals, and it is difficult to evaluate descriptive models fit to neural response. Accuracy maximization analysis (AMA) is a recently developed Bayesian method for finding the optimal task-specific filters (receptive fields). Here, we introduce AMA-Gauss, a new, faster form of AMA that incorporates the assumption that the class-conditional filter responses are Gaussian distributed. Then, we use AMA-Gauss to show that its assumptions are justified for two fundamental visual tasks: retinal speed estimation and binocular disparity estimation. Next, we show that AMA-Gauss has striking formal similarities to popular quadratic models of neural response: the energy model and the generalized quadratic model (GQM). Together, these developments deepen our understanding of why the energy model of neural response has proven useful, improve our ability to evaluate results from subunit model fits to neural data, and should help accelerate psychophysics and neuroscience research with natural stimuli.
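For context, the generalized quadratic model (GQM) referenced in this record is commonly written as a pointwise nonlinearity applied to a quadratic function of the stimulus (a standard formulation of the GQM in the literature, not an equation taken from this paper):

```latex
r(\mathbf{s}) \;=\; f\!\left(\mathbf{s}^{\top} C\,\mathbf{s} \;+\; \mathbf{b}^{\top}\mathbf{s} \;+\; a\right)
```

where s is the stimulus vector, C a symmetric matrix, b a linear weight vector, a an offset, and f a static nonlinearity. The classical energy model is the special case in which the quadratic term reduces to the sum of squared outputs of two linear filters.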

  6. Collaborative Research and Development (CR&D). Task Order 0049: Tribological Modeling

    DTIC Science & Technology

    2008-05-01

    …scratch test for TiN on stainless steel with better substrate mechanical properties. The present study focused on the stress distribution… (Report AFRL-RX-WP-TR-2010-4189; author: Young Sup Kang, Universal…; contract F33615-03-D-5801-0049.)

  7. Next Generation Electromagnetic Pump Analysis Tools (PLM DOC-0005-2188). Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stregy, Seth; Dasilva, Ana; Yilmaz, Serkan

    2015-10-29

    This report provides the broad historical review of EM Pump development and details of MATRIX development under this project. This report summarizes the efforts made to modernize the legacy performance models used in previous EM Pump designs and the improvements made to the analysis tools. This report provides information on Tasks 1, 3, and 4 of the entire project. The research for Task 4 builds upon Task 1: Update EM Pump Databank and Task 3: Modernize the Existing EM Pump Analysis Model, which are summarized within this report. Where research for Task 2: Insulation Materials Development and Evaluation identified parameters applicable to the analysis model with Task 4, the analysis code was updated, and analyses were made for additional materials. The important design variables for the manufacture and operation of an EM Pump that the model improvement can evaluate are: space constraints; voltage capability of insulation system; maximum flux density through iron; flow rate and outlet pressure; efficiency and manufacturability. The development of the next-generation EM Pump analysis tools during this two-year program provides information in three broad areas: Status of analysis model development; Improvements made to older simulations; and Comparison to experimental data.

  8. Task allocation model for minimization of completion time in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Wang, Jai-Ping; Steidley, Carl W.

    1993-08-01

    A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
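The completion-time and allocation models described above can be illustrated with a small, hypothetical sketch (not from the paper; the greedy list-scheduling heuristic, names, and data are assumptions): modules are visited in a precedence-respecting order, and each is placed on the processor that finishes it earliest.

```python
# Hypothetical sketch: greedy allocation of precedence-constrained task
# modules to processors. Execution times and the DAG are illustrative;
# inter-module communication costs are omitted for brevity.
from collections import defaultdict

def topo_sort(preds, nodes):
    """Return the modules in a precedence-respecting order."""
    nodes = list(nodes)
    indeg = {m: len(preds.get(m, [])) for m in nodes}
    succs = defaultdict(list)
    for m, ps in preds.items():
        for p in ps:
            succs[p].append(m)
    queue = [m for m in nodes if indeg[m] == 0]
    order = []
    while queue:
        m = queue.pop()
        order.append(m)
        for s in succs[m]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return order

def allocate(exec_time, preds, n_procs):
    """exec_time[m][p] = run time of module m on processor p;
    preds[m] = modules that must finish before m starts."""
    finish = {}                   # module -> finish time
    proc_free = [0.0] * n_procs   # earliest free time per processor
    placement = {}
    for m in topo_sort(preds, exec_time.keys()):
        ready = max((finish[p] for p in preds[m]), default=0.0)
        # choose the processor giving the earliest finish for this module
        best = min(range(n_procs),
                   key=lambda p: max(ready, proc_free[p]) + exec_time[m][p])
        start = max(ready, proc_free[best])
        finish[m] = start + exec_time[m][best]
        proc_free[best] = finish[m]
        placement[m] = best
    return placement, max(finish.values())

exec_time = {"A": [2, 3], "B": [4, 2], "C": [3, 3]}
preds = {"A": [], "B": ["A"], "C": ["A"]}
placement, makespan = allocate(exec_time, preds, 2)
print(placement, makespan)
```

A real allocator along these lines would also charge inter-processor communication costs between modules, which is where the completion-time model of the paper becomes essential.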

  9. The Partners in Prevention Program: The Evaluation and Evolution of the Task-Centered Case Management Model

    ERIC Educational Resources Information Center

    Colvin, Julanne; Lee, Mingun; Magnano, Julienne; Smith, Valerie

    2008-01-01

    This article reports on the further development of the task-centered model for difficulties in school performance. We used Bailey-Dempsey and Reid's (1996) application of Rothman and Thomas's (1994) design and development framework and annual evaluations of the Partners in Prevention (PIP) Program to refine the task-centered case management model.…

  10. Assessment Engineering Task Model Maps, Task Models and Templates as a New Way to Develop and Implement Test Specifications

    ERIC Educational Resources Information Center

    Luecht, Richard M.

    2013-01-01

    Assessment engineering is a new way to design and implement scalable, sustainable and ideally lower-cost solutions to the complexities of designing and developing tests. It represents a merger of sorts between cognitive task modeling and engineering design principles--a merger that requires some new thinking about the nature of score scales, item…

  11. Integrated Workforce Modeling System

    NASA Technical Reports Server (NTRS)

    Moynihan, Gary P.

    2000-01-01

    There are several computer-based systems, currently in various phases of development at KSC, which encompass some component, aspect, or function of workforce modeling. These systems may offer redundant capabilities and/or incompatible interfaces. A systems approach to workforce modeling is necessary in order to identify and better address user requirements. This research has consisted of two primary tasks. Task 1 provided an assessment of existing and proposed KSC workforce modeling systems for their functionality and applicability to the workforce planning function. Task 2 resulted in the development of a proof-of-concept design for a systems approach to workforce modeling. The model incorporates critical aspects of workforce planning, including hires, attrition, and employee development.

  12. Final Report: Demographic Tools for Climate Change and Environmental Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Neill, Brian

    2017-01-24

    This report summarizes work over the course of a three-year project (2012-2015, with a one-year no-cost extension to 2016). The full proposal detailed six tasks: Task 1: Population projection model; Task 2: Household model; Task 3: Spatial population model; Task 4: Integrated model development; Task 5: Population projections for Shared Socio-economic Pathways (SSPs); and Task 6: Population exposure to climate extremes. We report on all six tasks, provide details on papers that have appeared or been submitted as a result of this project, and list selected key presentations that have been made within the university community and at professional meetings.

  13. FNAS/summer faculty fellowship research continuation program. Task 6: Integrated model development for liquid fueled rocket propulsion systems. Task 9: Aspects of model-based rocket engine condition monitoring and control

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Helmicki, Arthur J.

    1993-01-01

    The objective of Phase I of this research effort was to develop an advanced mathematical-empirical model of SSME steady-state performance. Task 6 of Phase I was to develop a component-specific modification strategy for baseline case influence coefficient matrices. This report describes the background of SSME performance characteristics and provides a description of the control variable basis of three different gains models. The procedure used to establish influence coefficients for each of these three models is also described. Gains model analysis results are compared to Rocketdyne's power balance model (PBM).

  14. An Integrated Model of Cognitive Control in Task Switching

    ERIC Educational Resources Information Center

    Altmann, Erik M.; Gray, Wayne D.

    2008-01-01

    A model of cognitive control in task switching is developed in which controlled performance depends on the system maintaining access to a code in episodic memory representing the most recently cued task. The main constraint on access to the current task code is proactive interference from old task codes. This interference and the mechanisms that…

  15. Using Modeling Tasks to Facilitate the Development of Percentages

    ERIC Educational Resources Information Center

    Shahbari, Juhaina Awawdeh; Peled, Irit

    2016-01-01

    This study analyzes the development of percentages knowledge by seventh graders given a sequence of activities starting with a realistic modeling task, in which students were expected to create a model that would facilitate the reinvention of percentages. In the first two activities, students constructed their own pricing model using fractions and…

  16. Teachers as Managers of the Modelling Process

    ERIC Educational Resources Information Center

    Lingefjard, Thomas; Meier, Stephanie

    2010-01-01

    The work in the Comenius Network project Developing Quality in Mathematics Education II (DQME II) has a main focus on development and evaluation of modelling tasks. One reason is the gap between what mathematical modelling is and what is taught in mathematical classrooms. This article deals with one modelling task and focuses on how two teachers…

  17. Synthetic Proxy Infrastructure for Task Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junghans, Christoph; Pavel, Robert

    The Synthetic Proxy Infrastructure for Task Evaluation is a proxy application designed to support application developers in gauging the performance of various task granularities when determining how best to utilize task-based programming models. The infrastructure provides examples of common communication patterns with a synthetic workload, yielding performance data for evaluating programming-model and platform overheads when choosing a task granularity for task decomposition. It is presented as a reference implementation of a proxy application with run-time configurable input and output task dependencies, ranging from an embarrassingly parallel scenario to patterns with stencil-like dependencies upon their nearest neighbors. Once all of its inputs, if any, are satisfied, each task executes a synthetic workload (a simple DGEMM in this case) of varying size and passes all of its outputs, if any, to the next tasks. The intent is for this reference implementation to be re-implemented as a proxy app in different programming models, providing the same infrastructure and allowing application developers to simulate their own communication needs to assist in task decomposition under various models on a given platform.
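The dependency-driven execution pattern the record describes can be sketched as follows (a hypothetical illustration; the names, sizes, and the pure-Python stand-in for DGEMM are assumptions, not the actual OSTI code):

```python
# Hypothetical sketch of a task-dependency harness with a synthetic workload.
# Each task runs once all of its inputs (if any) have completed.

def synthetic_workload(n):
    """Stand-in for a DGEMM: naive n x n matrix multiply."""
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def run(tasks):
    """tasks: name -> (workload size, [input task names]).
    Assumes the dependency graph is acyclic."""
    done, order = set(), []
    while len(done) < len(tasks):
        for name, (size, inputs) in tasks.items():
            if name not in done and all(i in done for i in inputs):
                synthetic_workload(size)   # varying-size synthetic work
                done.add(name)
                order.append(name)
    return order

# Embarrassingly parallel case: no dependencies between tasks.
print(run({"t0": (4, []), "t1": (4, []), "t2": (4, [])}))
# Stencil-like case: each task waits on its left neighbour.
print(run({"t0": (4, []), "t1": (4, ["t0"]), "t2": (4, ["t1"])}))
```

A task-based runtime would execute the ready tasks concurrently; timing the same dependency pattern at different workload sizes is what lets developers pick a task granularity that amortizes the model's overheads.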

  18. Solid State Energy Conversion Alliance Delphi SOFC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steven Shaffer; Gary Blake; Sean Kelly

    2006-12-31

    The following report details the results under the DOE SECA program for the period July 2006 through December 2006. Developments pertain to the development of a 3 to 5 kW Solid Oxide Fuel Cell power system for a range of fuels and applications. This report details technical results of the work performed under the following tasks for the SOFC Power System: Task 1 SOFC System Development; Task 2 Solid Oxide Fuel Cell Stack Developments; Task 3 Reformer Developments; Task 4 Development of Balance of Plant Components; Task 5 Project Management; and Task 6 System Modeling & Cell Evaluation for High Efficiency Coal-Based Solid Oxide Fuel Cell Gas Turbine Hybrid System.

  19. Advanced Reactors-Intermediate Heat Exchanger (IHX) Coupling: Theoretical Modeling and Experimental Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utgikar, Vivek; Sun, Xiaodong; Christensen, Richard

    2016-12-29

    The overall goal of the research project was to model the behavior of the advanced reactor-intermediate heat exchange system and to develop advanced control techniques for off-normal conditions. The specific objectives defined for the project were: 1. To develop the steady-state thermal hydraulic design of the intermediate heat exchanger (IHX); 2. To develop mathematical models to describe the advanced nuclear reactor-IHX-chemical process/power generation coupling during normal and off-normal operations, and to simulate models using multiphysics software; 3. To develop control strategies using genetic algorithm or neural network techniques and couple these techniques with the multiphysics software; 4. To validate the models experimentally. The project objectives were accomplished by defining and executing four different tasks corresponding to these specific objectives. The first task involved selection of IHX candidates and developing steady-state designs for those. The second task involved modeling of the transient and off-normal operation of the reactor-IHX system. The subsequent task dealt with the development of control strategies and involved algorithm development and simulation. The last task involved experimental validation of the thermal hydraulic performances of the two prototype heat exchangers designed and fabricated for the project at steady state and transient conditions to simulate the coupling of the reactor-IHX-process plant system. The experimental work utilized two test facilities at The Ohio State University (OSU): one existing High-Temperature Helium Test Facility (HTHF) and the newly developed high-temperature molten salt facility.

  20. Development of a lumbar EMG-based coactivation index for the assessment of complex dynamic tasks.

    PubMed

    Le, Peter; Aurand, Alexander; Walter, Benjamin A; Best, Thomas M; Khan, Safdar N; Mendel, Ehud; Marras, William S

    2018-03-01

    The objective of this study was to develop and test an EMG-based coactivation index and compare it to a coactivation index defined by a biologically assisted lumbar spine model to differentiate between tasks. The purpose was to provide a universal approach to assess coactivation of a multi-muscle system when a computational model is not accessible. The EMG-based index developed utilised anthropometric-defined muscle characteristics driven by torso kinematics and EMG. Muscles were classified as agonists/antagonists based upon 'simulated' moments of the muscles relative to the total 'simulated' moment. Different tasks were used to test the range of the index including lifting, pushing and Valsalva. Results showed that the EMG-based index was comparable to the index defined by a biologically assisted model (r² = 0.78). Overall, the EMG-based index provides a universal, usable method to assess the neuromuscular effort associated with coactivation for complex dynamic tasks when the benefit of a biomechanical model is not available. Practitioner Summary: A universal coactivation index for the lumbar spine was developed to assess complex dynamic tasks. This method was validated relative to a model-based index for use when a high-end computational model is not available. Its simplicity allows for fewer inputs and usability for assessment of task ergonomics and rehabilitation.
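As a rough illustration of a moment-based coactivation index: the paper's exact formulation is not given in the abstract, so the antagonist-to-total moment ratio below is a common convention and an assumption here, not the authors' method.

```python
# Hypothetical sketch: a moment-based coactivation index. Muscles whose
# simulated moment opposes the net moment are treated as antagonists,
# and the index is their share of the total absolute moment.

def coactivation_index(muscle_moments):
    """muscle_moments: simulated moment contribution of each muscle (N·m),
    signed relative to the net moment direction."""
    net = sum(muscle_moments)
    antagonist = sum(abs(m) for m in muscle_moments if m * net < 0)
    total = sum(abs(m) for m in muscle_moments)
    return antagonist / total if total else 0.0

# Example: a net extension moment with some flexor (antagonist) activity.
print(coactivation_index([30.0, 25.0, -10.0, -5.0]))
```

The index is 0 when every muscle acts with the net moment and approaches 0.5 as antagonist activity grows to match agonist activity, which makes it convenient for comparing tasks such as lifting versus pushing.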

  1. Modeling Working Memory Tasks on the Item Level

    ERIC Educational Resources Information Center

    Luo, Dasen; Chen, Guopeng; Zen, Fanlin; Murray, Bronwyn

    2010-01-01

    Item responses to Digit Span and Letter-Number Sequencing were analyzed to develop a better-refined model of the two working memory tasks using the finite mixture (FM) modeling method. Models with ordinal latent traits were found to better account for the independent sources of the variability in the tasks than those with continuous traits, and…

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stormont, John; Lampe, Brandon; Mills, Melissa

    The goal of this project is to improve the understanding of key aspects of the coupled thermal-mechanical-hydrologic response of granular (or crushed) salt used as a seal material for shafts, drifts, and boreholes in mined repositories in salt. The project is organized into three tasks to accomplish this goal: laboratory measurements of granular salt consolidation (Task 1), microstructural observations on consolidated samples (Task 2), and constitutive model development and evaluation (Task 3). Task 1 involves laboratory measurements of salt consolidation along with thermal properties and permeability measurements conducted under a range of temperatures and stresses expected for potential mined repositories in salt. Testing focused on the role of moisture, temperature and stress state on the hydrologic (permeability) and thermal properties of consolidating granular salt at high fractional densities. Task 2 consists of microstructural observations made on samples after they have been consolidated to interpret deformation mechanisms and evaluate the ability of the constitutive model to predict operative mechanisms under different conditions. Task 3 concerns the development of the coupled thermal-mechanical-hydrologic constitutive model for granular salt consolidation. The measurements and observations in Tasks 1 and 2 were used to develop a thermal-mechanical constitutive model. Accomplishments and status from each of these efforts are reported in subsequent sections of this report.

  3. Application of a Curriculum Hierarchy Evaluation (CHE) Model to Sequentially Arranged Tasks.

    ERIC Educational Resources Information Center

    O'Malley, J. Michael

    A curriculum hierarchy evaluation (CHE) model was developed by combining a transfer paradigm with an aptitude-treatment-task interaction (ATTI) paradigm. Positive transfer was predicted between sequentially arranged tasks, and a programed or nonprogramed treatment was predicted to interact with aptitude and with tasks. Eighteen four and five…

  4. I. WORKING MEMORY CAPACITY IN CONTEXT: MODELING DYNAMIC PROCESSES OF BEHAVIOR, MEMORY, AND DEVELOPMENT.

    PubMed

    Simmering, Vanessa R

    2016-09-01

    Working memory is a vital cognitive skill that underlies a broad range of behaviors. Higher cognitive functions are reliably predicted by working memory measures from two domains: children's performance on complex span tasks, and infants' performance in looking paradigms. Despite the similar predictive power across these research areas, theories of working memory development have not connected these different task types and developmental periods. The current project takes a first step toward bridging this gap by presenting a process-oriented theory, focusing on two tasks designed to assess visual working memory capacity in infants (the change-preference task) versus children and adults (the change detection task). Previous studies have shown inconsistent results, with capacity estimates increasing from one to four items during infancy, but only two to three items during early childhood. A probable source of this discrepancy is the different task structures used with each age group, but prior theories were not sufficiently specific to explain how performance relates across tasks. The current theory focuses on cognitive dynamics, that is, how memory representations are formed, maintained, and used within specific task contexts over development. This theory was formalized in a computational model to generate three predictions: 1) capacity estimates in the change-preference task should continue to increase beyond infancy; 2) capacity estimates should be higher in the change-preference versus change detection task when tested within individuals; and 3) performance should correlate across tasks because both rely on the same underlying memory system. I also tested a fourth prediction, that development across tasks could be explained through increasing real-time stability, realized computationally as strengthening connectivity within the model. Results confirmed these predictions, supporting the cognitive dynamics account of performance and developmental changes in real-time stability. The monograph concludes with implications for understanding memory, behavior, and development in a broader range of cognitive development. © 2016 The Society for Research in Child Development, Inc.

  5. FOCUS: A Model of Sensemaking

    DTIC Science & Technology

    2007-05-01

    …of the current project was to unpack and develop the concept of sensemaking, principally by developing and testing a cognitive model of the processes…themselves. In Year 2, new Cognitive Task Analysis data collection methods were developed and used to further test the model. Cognitive Task Analysis is a… (2004) to examine the phenomenon of "sensemaking," a concept initially formulated by Weick (1995), but not developed from a cognitive perspective.

  6. Modeling the Relationship between Perceptions of Assessment Tasks and Classroom Assessment Environment as a Function of Gender

    ERIC Educational Resources Information Center

    Alkharusi, Hussain; Aldhafri, Said; Alnabhani, Hilal; Alkalbani, Muna

    2014-01-01

    A substantial proportion of the classroom time involves exposing students to a variety of assessment tasks. As students process these tasks, they develop beliefs about the importance, utility, value, and difficulty of the tasks. This study aimed at deriving a model describing the multivariate relationship between students' perceptions of the…

  7. A multi-site cognitive task analysis for biomedical query mediation.

    PubMed

    Hruby, Gregory W; Rasmussen, Luke V; Hanauer, David; Patel, Vimla L; Cimino, James J; Weng, Chunhua

    2016-09-01

    To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: "Identify potential index phenotype," "If needed, request EHR database access rights," and "Perform query and present output to medical researcher", and 8 are invalid. We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. A Multi-Site Cognitive Task Analysis for Biomedical Query Mediation

    PubMed Central

    Hruby, Gregory W.; Rasmussen, Luke V.; Hanauer, David; Patel, Vimla; Cimino, James J.; Weng, Chunhua

    2016-01-01

    Objective To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. Materials and Methods We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. Results The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: “Identify potential index phenotype,” “If needed, request EHR database access rights,” and “Perform query and present output to medical researcher”, and 8 are invalid. Discussion We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. Conclusions We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. PMID:27435950

  9. The Blake Interaction Model for Task Force Program Development in Vocational Education.

    ERIC Educational Resources Information Center

    Blake, Duane L.

    The Blake Interaction Model presented in this manual is designed to eliminate three problems which usually confront a task force charged with the responsibility of program development in a conference setting: (1) how to involve simultaneously several work groups in the productive capacity developing solutions for several separate problems; (2) how…

  10. Recovery Act: An Integrated Experimental and Numerical Study: Developing a Reaction Transport Model that Couples Chemical Reactions of Mineral Dissolution/Precipitation with Spatial and Temporal Flow Variations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saar, Martin O.; Seyfried, Jr., William E.; Longmire, Ellen K.

    2016-06-24

    A total of 12 publications and 23 abstracts were produced as a result of this study. In particular, the compilation of a thermodynamic database utilizing consistent, current thermodynamic data is a major step toward accurately modeling multi-phase fluid interactions with solids. Existing databases designed for aqueous fluids did not mesh well with existing solid phase databases. Addition of a second liquid phase (CO2) magnifies the inconsistencies between aqueous and solid thermodynamic databases. Overall, the combination of high temperature and pressure lab studies (Task 1), using a purpose-built apparatus, and solid characterization (Task 2), using XRCT and more developed technologies, allowed observation of dissolution and precipitation processes under CO2 reservoir conditions. These observations were combined with results from PIV experiments on multi-phase fluids (Task 3) in typical flow path geometries. The results of Tasks 1, 2, and 3 were compiled and integrated into numerical models utilizing Lattice-Boltzmann simulations (Task 4) to realistically model the physical processes and were ultimately folded into the TOUGH2 code for reservoir-scale modeling (Task 5). Compilation of the thermodynamic database assisted comparisons to PIV experiments (Task 3) and greatly improved Lattice-Boltzmann (Task 4) and TOUGH2 simulations (Task 5). PIV (Task 3) and the experimental apparatus (Task 1) identified problem areas in the TOUGHREACT code. Additional lab experiments and coding work have been integrated into an improved numerical modeling code.

  11. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Mcknight, R. L.; Cook, T. S.; Hartle, M. S.

    1988-01-01

This report describes work performed to determine the predominant modes of degradation of a plasma-sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consisted of a low pressure plasma sprayed NiCrAlY bond coat, an air plasma sprayed ZrO2-Y2O3 top coat, and a Rene' 80 substrate. The work was divided into three technical tasks. The primary failure mode to be addressed was loss of the zirconia layer through spalling. Experiments showed that oxidation of the bond coat is a significant contributor to coating failure. It was evident from the test results that the species of oxide scale initially formed on the bond coat plays a role in coating degradation and failure. It was also shown that elevated-temperature creep of the bond coat plays a role in coating failure. An empirical model was developed for predicting the test life of specimens with selected coating, specimen, and test condition variations. In the second task, a coating life prediction model was developed based on the data from Task 1 experiments, results from thermomechanical experiments performed as part of Task 2, and finite element analyses of the TBC system during thermal cycles. The third and final task attempted to verify the validity of the model developed in Task 2. This was done by using the model to predict the test lives of several coating variations and specimen geometries, then comparing these predicted lives to experimentally determined test lives. It was found that the model correctly predicts trends, but that additional refinement is needed to accurately predict coating life.

  12. Development of a Model for Measuring Scientific Processing Skills Based on Brain-Imaging Technology: Focused on the Experimental Design Process

    ERIC Educational Resources Information Center

    Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju

    2014-01-01

    The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…

  13. Application of the NUREG/CR-6850 EPRI/NRC Fire PRA Methodology to a DOE Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Elicson; Bentley Harwood; Richard Yorg

    2011-03-01

The application of the NUREG/CR-6850 EPRI/NRC fire PRA methodology to a DOE facility presented several challenges. This paper documents the process and discusses several insights gained during development of the fire PRA. A brief review of the tasks performed is provided, with particular focus on the following: • Tasks 5 and 14: Fire-induced risk model and fire risk quantification. A key lesson learned was to begin model development and quantification as early as possible in the project, using screening values and simplified modeling if necessary. • Tasks 3 and 9: Fire PRA cable selection and detailed circuit failure analysis. In retrospect, it would have been beneficial to perform the model development and quantification in two phases, with detailed circuit analysis applied during phase 2. This would have allowed for development of a robust model and quantification earlier in the project and would have provided insights into where to focus the detailed circuit analysis efforts. • Tasks 8 and 11: Scoping fire modeling and detailed fire modeling. More focus should be placed on detailed fire modeling and less on scoping fire modeling; this was the approach taken for the fire PRA. • Task 14: Fire risk quantification. Typically, multiple safe shutdown (SSD) components fail during a given fire scenario, so dependent failure analysis is critical to obtaining a meaningful fire risk quantification. Dependent failure analysis for the fire PRA presented several challenges, which are discussed in the full paper.

  14. A Decision Model for Supporting Task Allocation Processes in Global Software Development

    NASA Astrophysics Data System (ADS)

    Lamersdorf, Ansgar; Münch, Jürgen; Rombach, Dieter

    Today, software-intensive systems are increasingly being developed in a globally distributed way. However, besides its benefits, global development also bears a set of risks and problems. One critical factor for successful project management of distributed software development is the allocation of tasks to sites, as this is assumed to have a major influence on the benefits and risks. We introduce a model that aims at improving management processes in globally distributed projects by giving decision support for task allocation that systematically regards multiple criteria. The criteria and causal relationships were identified in a literature study and refined in a qualitative interview study. The model uses existing approaches from distributed systems and statistical modeling. The article gives an overview of the problem and related work, introduces the empirical and theoretical foundations of the model, and shows the use of the model in an example scenario.

  15. Design, engineering and evaluation of refractory liners for slagging gasifiers. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    deTineo, B J; Booth, G; Firestone, R F

    1982-08-01

    The contract for this program was awarded at the end of September 1978. Work was started on 1 October 1978 on Tasks A, B, and E. Task A, Conceptual Liner Designs, and Task B, Test System Design and Construction, were completed. Task C, Liner Tests, and Task D, Liner Design Evaluation, were to begin upon completion of Task B. Task E, Liner Model Development, is inactive after an initial data compilation and theoretical model development effort; it was to be activated as soon as data were available from Task D. Task F, Liner Design Handbook, was active along with Task A, since the reports of both tasks were to use the same format. At this time, Tasks C, D, and F are not to be completed, since funding of this project was phased out by DOE directive. The refractory test facility that was constructed was tested and found to perform satisfactorily. It is described in detail, including a hazard analysis that was performed. (LTN)

  16. Projecting manpower to attain quality

    NASA Technical Reports Server (NTRS)

    Rone, K. Y.

    1983-01-01

    In these days of soaring software costs it becomes increasingly important to properly manage a software development project. One element of the management task is the projection and tracking of the manpower required to perform the task. In addition, since the total cost of the task is directly related to the initial quality built into the software, it becomes necessary to project the development manpower in a way that attains that quality. An approach to projecting and tracking manpower with quality in mind is described. The resulting model is useful as a projection tool but must be validated in order to be used as an ongoing software cost engineering tool. A procedure is developed to facilitate the tracking of model projections and actual data to allow the model to be tuned. Finally, since the model must be used in an environment of overlapping development activities on a progression of software elements in development and maintenance, a manpower allocation model is developed for use in a steady-state development/maintenance environment.

  17. The Design and Enactment of Modeling Tasks: A Study on the Development of Modeling Abilities in A Secondary Mathematics Course

    ERIC Educational Resources Information Center

    Buhrman, Danielle

    2017-01-01

    This study uses components of action and self-study research to examine the design and enactment of modeling tasks with the goal of developing student modeling abilities. The author, a secondary mathematics teacher, first closely examined the curriculum design and instructional decisions she made as she prepared for a unit on mathematical modeling…

  18. NASA: Model development for human factors interfacing

    NASA Technical Reports Server (NTRS)

    Smith, L. L.

    1984-01-01

    The results of an intensive literature review on the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed for development in order to reduce the labor intensiveness of ground and space operations tasks. An extensive number of annotated references are provided.

  19. Jet Noise Modeling for Suppressed and Unsuppressed Aircraft in Simulated Flight

    NASA Technical Reports Server (NTRS)

    Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.; Berton, Jeffrey J.

    2009-01-01

    This document describes the development of further extensions and improvements to the jet noise model developed by Modern Technologies Corporation (MTC) for the National Aeronautics and Space Administration (NASA). The noise component extraction and correlation approach, first used successfully by MTC in developing a noise prediction model for two-dimensional mixer ejector (2DME) nozzles under the High Speed Research (HSR) Program, has been applied to dual-stream nozzles, then extended and improved in earlier tasks under this contract. Under Task 6, the coannular jet noise model was formulated and calibrated with limited scale model data, mainly at high bypass ratio, including a limited-range prediction of the effects of mixing-enhancement nozzle-exit chevrons on jet noise. Under Task 9 this model was extended to a wider range of conditions, particularly those appropriate for a Supersonic Business Jet, with an improvement in simulated flight effects modeling and generalization of the suppressor model. In the present task, further comparisons are made over a still wider range of conditions from more test facilities. The model is also further generalized to cover single-stream nozzles of otherwise similar configuration. The evolution of this prediction/analysis/correlation approach has thus been, in a sense, backward, from the complex to the simple; but from this approach a very robust capability is emerging. Also from these studies, some observations emerge relative to theoretical considerations. The purpose of this task is to develop an analytical, semi-empirical jet noise prediction method applicable to takeoff, sideline, and approach noise of subsonic and supersonic cruise aircraft over a wide size range. The product of this task is an even more consistent and robust model for the Footprint/Radius (FOOTPR) code than the Task 9 model. The model is validated for a wider range of cases and statistically quantified for the various reference facilities. The possible role of facility effects will thus be documented. Although the comparisons that can be accomplished within the limited resources of this task are not comprehensive, they provide a broad enough sampling to enable NASA to make an informed decision on how much further effort should be expended on such comparisons. The improved finalized model is incorporated into the FOOTPR code. MTC has also supported the adaptation of this code for incorporation in NASA's Aircraft Noise Prediction Program (ANOPP).

  20. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1977-01-01

    Models, measures, and techniques were developed for evaluating the effectiveness of aircraft computing systems. The concept of effectiveness involves aspects of system performance, reliability, and worth. Specifically, a detailed model hierarchy was developed at the mission, functional task, and computational task levels. An appropriate class of stochastic models was investigated to serve as bottom-level models in the hierarchical scheme. A unified measure of effectiveness called 'performability' was defined and formulated.

  1. Prediction of Human Cytochrome P450 Inhibition Using a Multitask Deep Autoencoder Neural Network.

    PubMed

    Li, Xiang; Xu, Youjun; Lai, Luhua; Pei, Jianfeng

    2018-05-30

    Adverse side effects of drug-drug interactions induced by human cytochrome P450 (CYP450) inhibition are an important consideration in drug discovery. It is highly desirable to develop computational models that can predict the inhibitive effect of a compound against a specific CYP450 isoform. In this study, we developed a multitask model for concurrent inhibition prediction of five major CYP450 isoforms, namely, 1A2, 2C9, 2C19, 2D6, and 3A4. The model was built by training a multitask autoencoder deep neural network (DNN) on a large dataset containing more than 13 000 compounds, extracted from the PubChem BioAssay Database. We demonstrate that the multitask model gave better prediction results than single-task models, previously reported classifiers, and traditional machine learning methods, averaged over the five prediction tasks. Our multitask DNN model gave average prediction accuracies of 86.4% for the 10-fold cross-validation and 88.7% for the external test datasets. In addition, we built linear regression models to quantify how the other tasks contributed to the prediction difference of a given task between single-task and multitask models, and we explained under what conditions the multitask model will outperform the single-task model, which suggests how to use multitask DNN models more effectively. We applied sensitivity analysis to extract useful knowledge about CYP450 inhibition, which may shed light on the structural features of these isoforms and give hints about how to avoid side effects during drug development. Our models are freely available at http://repharma.pku.edu.cn/deepcyp/home.php or http://www.pkumdl.cn/deepcyp/home.php.
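    The core multitask idea described above (one shared representation feeding a separate output head per isoform) can be sketched as follows. This is a minimal illustration, not the paper's trained autoencoder DNN: the fingerprint length, layer sizes, and random weights are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    ISOFORMS = ["1A2", "2C9", "2C19", "2D6", "3A4"]  # the five CYP450 tasks
    N_FEATURES = 64   # hypothetical molecular-fingerprint length
    N_HIDDEN = 32     # shared representation size

    # Shared encoder weights (trained jointly across all five tasks in the
    # paper; random here for illustration only).
    W_shared = rng.normal(size=(N_FEATURES, N_HIDDEN)) * 0.1
    # One small output head per isoform.
    W_heads = {iso: rng.normal(size=(N_HIDDEN,)) * 0.1 for iso in ISOFORMS}

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def predict_inhibition(fingerprint):
        """Return P(inhibitor) for each isoform from one shared representation."""
        h = np.tanh(fingerprint @ W_shared)   # shared layer: one pass per compound
        return {iso: float(sigmoid(h @ w)) for iso, w in W_heads.items()}

    compound = rng.normal(size=N_FEATURES)    # stand-in for a real fingerprint
    probs = predict_inhibition(compound)
    ```

    The multitask gain reported in the abstract comes from the shared layer: all five prediction heads pull on the same representation during training, so chemistry informative for one isoform can improve the others.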

  2. Development of an algorithm to model an aircraft equipped with a generic CDTI display

    NASA Technical Reports Server (NTRS)

    Driscoll, W. C.; Houck, J. A.

    1986-01-01

    A model of human pilot performance of a tracking task using a generic Cockpit Display of Traffic Information (CDTI) display is developed from experimental data. The tracking task is to use CDTI in tracking a leading aircraft at a nominal separation of three nautical miles over a prescribed trajectory in space. The analysis of the data resulting from a factorial design of experiments reveals that the tracking task performance depends on the pilot and his experience at performing the task. Performance was not strongly affected by the type of control system used (velocity vector control wheel steering versus 3D automatic flight path guidance and control). The model that is developed and verified results in state trajectories whose difference from the experimental state trajectories is small compared to the variation due to the pilot and experience factors.

  3. Modeling Personnel Turnover in the Parametric Organization

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1991-01-01

    A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required to account for the obvious dynamics of the forming organization. The challenge was to create a model simple enough to remain credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.
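    The kind of staffing dynamics the abstract describes (hiring limits, a training pipeline, attrition) can be sketched as a small discrete-time simulation. All rates, the quarterly time step, and the update rule below are hypothetical placeholders, not the paper's model:

    ```python
    # Hypothetical parameters for a toy headcount model.
    HIRE_CAP_PER_QTR = 3       # management-imposed hiring limit per quarter
    TRAIN_QTRS = 2             # quarters before a hire becomes productive
    ATTRITION = 0.05           # fraction of productive staff leaving per quarter

    def simulate_staffing(target, quarters):
        """Track productive headcount as hires flow through training."""
        trainees = [0] * TRAIN_QTRS   # pipeline of hires still in training
        productive = 0.0
        history = []
        for _ in range(quarters):
            productive += trainees.pop(0)      # oldest cohort finishes training
            productive *= (1 - ATTRITION)      # quarterly attrition
            shortfall = target - productive - sum(trainees)
            hires = min(HIRE_CAP_PER_QTR, max(0, round(shortfall)))
            trainees.append(hires)
            history.append(productive)
        return history

    history = simulate_staffing(target=12, quarters=16)
    ```

    Even this toy version shows the dynamic behavior that motivated the authors: headcount lags the target because of the training delay and hiring cap, and never quite holds the target because attrition continually drains the productive pool.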

  4. Interagency Task Forces: The Right Tools for the Job

    DTIC Science & Technology

    2011-01-01

    shortcomings. This analysis discusses four organizational reform models and recommends the interagency task force (IATF) as the preferred structure...model.64 Still others recommend creating and deploying ad hoc IATFs for crisis operations. These interagency task forces would be task-organized to...forces assigned for planning, exercises, and mission execution.65 A 2005 article in Policy Review recommended developing IATFs as needed for specific

  5. Re-Thinking Stages of Cognitive Development: An Appraisal of Connectionist Models of the Balance Scale Task

    ERIC Educational Resources Information Center

    Quinlan, Philip T.; van der Maas, Han L. J.; Jansen, Brenda R. J.; Booij, Olaf; Rendell, Mark

    2007-01-01

    The present paper re-appraises connectionist attempts to explain how human cognitive development appears to progress through a series of sequential stages. Models of performance on the Piagetian balance scale task are the focus of attention. Limitations of these models are discussed and replications and extensions to the work are provided via the…

  6. Precision pointing and control of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Bantell, M. H., Jr.

    1987-01-01

    The problem and long-term objectives for the precision pointing and control of flexible spacecraft are given. The four basic objectives are stated in terms of two principal tasks. Under Task 1, robust low-order controllers, improved structural modeling methods for control applications, and identification methods for structural dynamics are being developed. Under Task 2, a lab test experiment for verification of control laws and system identification algorithms is being developed. For Task 1, work has focused on robust low-order controller design and some initial considerations for structural modeling in control applications. For Task 2, work has focused on experiment design and fabrication, along with sensor selection and initial digital controller implementation. Conclusions are given.

  7. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
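    The task-specific sensitivity idea can be illustrated on a much simpler stand-in than the paper's eighth-order model: a second-order linear "limb" driven by two different input profiles, with normalized finite-difference sensitivities of the peak displacement recomputed per task. All parameter values and both input sequences below are hypothetical:

    ```python
    def simulate(params, task_input, dt=0.001, t_end=1.0):
        """Second-order 'limb' stand-in: J*x'' + B*x' + K*x = u(t); return peak x."""
        J, B, K = params["J"], params["B"], params["K"]
        x, v, peak = 0.0, 0.0, 0.0
        for i in range(int(t_end / dt)):
            u = task_input(i * dt)
            a = (u - B * v - K * x) / J       # Euler integration of the ODE
            v += a * dt
            x += v * dt
            peak = max(peak, x)
        return peak

    def sensitivities(params, task_input, eps=1e-3):
        """Normalized finite-difference sensitivity of peak output per parameter."""
        base = simulate(params, task_input)
        out = {}
        for name in params:
            bumped = dict(params, **{name: params[name] * (1 + eps)})
            out[name] = abs((simulate(bumped, task_input) - base) / (base * eps))
        return out

    params = {"J": 0.1, "B": 0.5, "K": 5.0}   # hypothetical inertia/damping/stiffness
    fast = lambda t: 1.0 if t < 0.05 else 0.0  # brief pulse: a fast movement task
    slow = lambda t: 1.0                       # sustained input: a slow/loaded task

    s_fast = sensitivities(params, fast)
    s_slow = sensitivities(params, slow)
    ```

    Comparing `s_fast` and `s_slow` shows the abstract's point in miniature: the parameter ranking depends on the input sequence, so a reduced model that is adequate for one task may omit parameters that dominate another.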

  8. Software Cost-Estimation Model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1985-01-01

    The SOFTCOST software cost estimation model provides an automated resource and schedule model for software development. It combines several cost models found in the open literature into one comprehensive set of algorithms and compensates for nearly fifty implementation factors relative to the size of the task, the inherited baseline, the organizational and system environment, and the difficulty of the task.
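    The general shape of such a model, a size-driven power law adjusted by implementation-factor multipliers, can be sketched as follows. The coefficients and factor values are COCOMO-style placeholders for illustration, not SOFTCOST's actual algorithms or its full set of roughly fifty factors:

    ```python
    # Hypothetical multipliers; SOFTCOST itself compensates for ~50 such factors.
    FACTORS = {
        "inherited_baseline": 0.85,   # reuse of an existing baseline lowers effort
        "organizational_env": 1.10,
        "system_environment": 1.05,
        "task_difficulty":    1.20,
    }

    def estimate_effort(ksloc, a=2.8, b=1.05, factors=FACTORS):
        """Person-months = a * KSLOC^b * product of factor multipliers."""
        effort = a * ksloc ** b
        for multiplier in factors.values():
            effort *= multiplier
        return effort

    pm = estimate_effort(50.0)   # effort estimate for a hypothetical 50 KSLOC task
    ```

    The multiplicative structure is what lets dozens of independent adjustments compose: each factor scales the nominal size-driven estimate up or down without interacting with the others.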

  9. Study of helicopter roll control effectiveness criteria

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Bourne, Simon M.; Curtiss, Howard C., Jr.; Hindson, William S.; Hess, Ronald A.

    1986-01-01

    A study of helicopter roll control effectiveness based on closed-loop task performance measurement and modeling is presented. Roll control criteria are based on task margin, the excess of vehicle task performance capability over the pilot's task performance demand. Appropriate helicopter roll axis dynamic models are defined for use with analytic models for task performance. Both near-earth and up-and-away large-amplitude maneuvering phases are considered. The results of in-flight and moving-base simulation measurements are presented to support the roll control effectiveness criteria offered. This volume contains the theoretical analysis, simulation results, and criteria development.

  10. Society of Gynecologic Oncology Future of Physician Payment Reform Task Force report: The Endometrial Cancer Alternative Payment Model (ECAP).

    PubMed

    Ko, Emily M; Havrilesky, Laura J; Alvarez, Ronald D; Zivanovic, Oliver; Boyd, Leslie R; Jewell, Elizabeth L; Timmins, Patrick F; Gibb, Randall S; Jhingran, Anuja; Cohn, David E; Dowdy, Sean C; Powell, Matthew A; Chalas, Eva; Huang, Yongmei; Rathbun, Jill; Wright, Jason D

    2018-05-01

    Health care in the United States is in the midst of a significant transformation from a "fee for service" to a "fee for value" based model. The Medicare Access and CHIP Reauthorization Act of 2015 has only accelerated this transition. Anticipating these reforms, the Society of Gynecologic Oncology developed the Future of Physician Payment Reform Task Force (PPRTF) in 2015 to develop strategies to ensure fair value-based reimbursement policies for gynecologic cancer care. The PPRTF elected as a first task to develop an Alternative Payment Model for the surgical management of low-risk endometrial cancer. The history, rationale, and conceptual framework for the development of an Endometrial Cancer Alternative Payment Model are described in this white paper, as well as directions for future efforts. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Understanding Prospective Teachers' Mathematical Modeling Processes in the Context of a Mathematical Modeling Course

    ERIC Educational Resources Information Center

    Zeytun, Aysel Sen; Cetinkaya, Bulent; Erbas, Ayhan Kursat

    2017-01-01

    This paper investigates how prospective teachers develop mathematical models while they engage in modeling tasks. The study was conducted in an undergraduate elective course aiming to improve prospective teachers' mathematical modeling abilities, while enhancing their pedagogical knowledge for the integrating of modeling tasks into their future…

  12. POPEYE: A production rule-based model of multitask supervisory control (POPCORN)

    NASA Technical Reports Server (NTRS)

    Townsend, James T.; Kadlec, Helena; Kantowitz, Barry H.

    1988-01-01

    Recent studies of relationships between subjective ratings of mental workload, performance, and human operator and task characteristics have indicated that these relationships are quite complex. In order to study the various relationships and place subjective mental workload within a theoretical framework, we developed a production system model for the performance component of the complex supervisory task called POPCORN. The production system model is represented by a hierarchical structure of goals and subgoals, and the information flow is controlled by a set of condition-action rules. The implementation of this production system, called POPEYE, generates computer-simulated data under different task difficulty conditions which are comparable to those of human operators performing the task. This model is the performance aspect of an overall dynamic psychological model which we are developing to examine and quantify relationships between performance and psychological aspects in a complex environment.

  13. Generalizing the dynamic field theory of spatial cognition across real and developmental time scales

    PubMed Central

    Simmering, Vanessa R.; Spencer, John P.; Schutte, Anne R.

    2008-01-01

    Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective. PMID:17716632

  14. Cognitive task analysis: harmonizing tasks to human capacities.

    PubMed

    Neerincx, M A; Griffioen, E

    1996-04-01

    This paper presents the development of a cognitive task analysis that assesses the task load of jobs and provides indicators for the redesign of jobs. General principles of human task performance were selected and subsequently integrated into current task modelling techniques. The resulting cognitive task analysis centres around four aspects of task load: the number of actions in a period, the ratio between knowledge- and rule-based actions, lengthy uninterrupted actions, and momentary overloading. The method consists of three stages: (1) construction of a hierarchical task model, (2) a time-line analysis and task load assessment, and (3), if necessary, adjustment of the task model. An application of the cognitive task analysis in railway traffic control showed its benefits over the 'old' task load analysis of the Netherlands Railways. It provided a provisional standard for traffic control jobs, conveyed two load risks -- momentary overloading and underloading -- and resulted in proposals to satisfy the standard and to diminish the two load risks.
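    The four task-load aspects named above lend themselves to a simple time-line computation. The sketch below is a hypothetical illustration of stage (2), the time-line analysis; the thresholds and the action classification scheme are placeholders, not the paper's calibrated values:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Action:
        start: float          # seconds into the observation period
        end: float
        kind: str             # "rule" or "knowledge" (hypothetical two-level coding)

    LENGTHY_S = 60.0          # placeholder threshold for a lengthy action
    OVERLOAD_WINDOW_S = 10.0  # placeholder window for momentary overloading
    OVERLOAD_COUNT = 3        # placeholder action count that flags overloading

    def task_load(actions, period_s):
        """Compute the four task-load indicators for one time-line of actions."""
        n = len(actions)
        knowledge = sum(a.kind == "knowledge" for a in actions)
        lengthy = [a for a in actions if a.end - a.start >= LENGTHY_S]
        # Momentary overloading: >= OVERLOAD_COUNT actions starting in one window.
        starts = sorted(a.start for a in actions)
        overload = any(
            sum(s <= t < s + OVERLOAD_WINDOW_S for t in starts) >= OVERLOAD_COUNT
            for s in starts)
        return {"actions_per_min": 60.0 * n / period_s,
                "knowledge_ratio": knowledge / n if n else 0.0,
                "lengthy_actions": len(lengthy),
                "momentary_overload": overload}

    demo = [Action(0, 5, "rule"), Action(2, 4, "knowledge"),
            Action(3, 90, "knowledge"), Action(200, 210, "rule")]
    load = task_load(demo, period_s=300)
    ```

    A redesign indicator then falls out directly: a job whose time-line shows a high knowledge-ratio together with momentary overloading is a candidate for task reallocation.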

  15. Development of the NASA Digital Astronaut Project Muscle Model

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Pennline, James A.; Thompson, W. K.; Humphreys, B. T.; Ryder, J. W.; Ploutz-Snyder, L. L.; Mulugeta, L.

    2015-01-01

    This abstract describes development work performed on the NASA Digital Astronaut Project Muscle Model. Muscle atrophy is a known physiological response to exposure to a low gravity environment. The DAP muscle model computationally predicts the change in muscle structure and function vs. time in a reduced gravity environment. The spaceflight muscle model can then be used in biomechanical models of exercise countermeasures and spaceflight tasks to: 1) develop site specific bone loading input to the DAP bone adaptation model over the course of a mission; 2) predict astronaut performance of spaceflight tasks; 3) inform effectiveness of new exercise countermeasures concepts.

  16. Clean Metal Casting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makhlouf M. Makhlouf; Diran Apelian

    The objective of this project is to develop a technology for clean metal processing that is capable of consistently providing a metal cleanliness level that is fit for a given application. The program has five tasks: development of melt cleanliness assessment technology, development of melt contamination avoidance technology, development of high temperature phase separation technology, establishment of a correlation between the level of melt cleanliness and as-cast mechanical properties, and transfer of technology to the industrial sector. Within the context of the first task, WPI has developed a standardized Reduced Pressure Test that has been endorsed by AFS as a recommended practice. In addition, within the context of Task 1, WPI has developed a melt cleanliness sensor based on the principles of electromagnetic separation. An industrial partner is commercializing the sensor. Within the context of the second task, WPI has developed environmentally friendly fluxes that do not contain fluorine. Within the context of the third task, WPI modeled the process of rotary degassing and verified the model predictions with experimental data. This model may be used to optimize the performance of industrial rotary degassers. Within the context of the fourth task, WPI has correlated the level of melt cleanliness at various foundries, including a sand casting foundry, a permanent mold casting foundry, and a die casting foundry, to the casting process and the resultant mechanical properties. This is useful in tailoring the melt cleansing operations at foundries to the particular casting process and the desired properties of cast components.

  17. PARAMO: A Parallel Predictive Modeling Platform for Healthcare Analytic Research using Electronic Health Records

    PubMed Central

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng

    2014-01-01

    Objective: Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods: To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results: We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel compared to 9 days if running sequentially. Conclusion: This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
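    The scheduling step the abstract describes (build a task dependency graph, order it topologically, run tasks respecting that order) can be illustrated with a minimal topological-sort sketch. The pipeline stages and dependencies below are a toy example keyed to the five task types named in the abstract, not PARAMO's actual task graph or API:

```python
from collections import deque

def topological_order(tasks, deps):
    """Order tasks so every task runs after all of its prerequisites.

    tasks: iterable of task names
    deps:  dict mapping a task to the set of tasks it depends on
    """
    indegree = {t: 0 for t in tasks}
    dependents = {t: [] for t in tasks}
    for task, prereqs in deps.items():
        for p in prereqs:
            indegree[task] += 1
            dependents[p].append(task)
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for d in dependents[t]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(indegree):
        raise ValueError("dependency graph has a cycle")
    return order

# Toy pipeline mirroring the five task types named in the abstract.
pipeline = ["cohort", "features", "cv_split", "feature_select", "classify"]
deps = {
    "features": {"cohort"},
    "cv_split": {"features"},
    "feature_select": {"cv_split"},
    "classify": {"feature_select"},
}
print(topological_order(pipeline, deps))
```

    In a parallel scheduler, all tasks whose prerequisites are complete (the `ready` queue here) can be dispatched simultaneously rather than one at a time.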

  18. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    PubMed

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Wearable-Sensor-Based Classification Models of Faller Status in Older Adults.

    PubMed

    Howcroft, Jennifer; Lemaire, Edward D; Kofman, Jonathan

    2016-01-01

    Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor-based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6-month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale, the Community Health Activities Model Program for Seniors questionnaire, the six-minute walk test, and ranked their fear of falling. Fall-risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best-performing model was a multi-layer perceptron neural network with input parameters from pressure-sensing insoles and head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC score = 0.521). Head-sensor-based models had the best performance of the single-sensor models for single-task gait assessment. Single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall-risk classification. Fall-risk classification models developed for point-of-care environments should be developed using support vector machines and neural networks, with a multi-sensor single-task gait assessment.
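    The accuracy, F1, and MCC figures quoted in this abstract are standard summaries of a 2x2 confusion matrix. A minimal sketch of how they are computed follows; the confusion counts are hypothetical, chosen only to match the study's 100-person sample size, not its actual results:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Accuracy, F1 score, and Matthews correlation coefficient (MCC)
    from the cells of a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, f1, mcc

# Hypothetical counts for a 100-person sample (24 fallers, 76 non-fallers).
acc, f1, mcc = classification_metrics(tp=15, fp=7, tn=69, fn=9)
print(round(acc, 2), round(f1, 3), round(mcc, 3))
```

    MCC is often preferred over raw accuracy for imbalanced samples like this one, where always predicting "non-faller" would already score 76% accuracy.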

  20. A Bio-Inspired Model-Based Approach for Context-Aware Post-WIMP Tele-Rehabilitation.

    PubMed

    López-Jaquero, Víctor; Rodríguez, Arturo C; Teruel, Miguel A; Montero, Francisco; Navarro, Elena; Gonzalez, Pascual

    2016-10-13

    Tele-rehabilitation is one of the main domains where Information and Communication Technologies (ICT) have been proven useful to move healthcare from care centers to patients' home. Moreover, patients, especially those carrying out a physical therapy, cannot use a traditional Window, Icon, Menu, Pointer (WIMP) system, but they need to interact in a natural way, that is, there is a need to move from WIMP systems to Post-WIMP ones. Moreover, tele-rehabilitation systems should be developed following the context-aware approach, so that they are able to adapt to the patients' context to provide them with usable and effective therapies. In this work a model-based approach is presented to assist stakeholders in the development of context-aware Post-WIMP tele-rehabilitation systems. It entails three different models: (i) a task model for designing the rehabilitation tasks; (ii) a context model to facilitate the adaptation of these tasks to the context; and (iii) a bio-inspired presentation model to specify thoroughly how such tasks should be performed by the patients. Our proposal overcomes one of the limitations of the model-based approach for the development of context-aware systems supporting the specification of non-functional requirements. Finally, a case study is used to illustrate how this proposal can be put into practice to design a real world rehabilitation task.

  1. Detection of driver engagement in secondary tasks from observed naturalistic driving behavior.

    PubMed

    Ye, Mengqiu; Osman, Osama A; Ishak, Sherif; Hashemi, Bita

    2017-09-01

    Distracted driving has long been acknowledged as one of the leading causes of death or injury in roadway crashes. The focus of past research has been mainly on the impact of different causes of distraction on driving behavior. However, only a few studies attempted to address how some driving behavior attributes could be linked to the cause of distraction. In essence, this study takes advantage of the rich SHRP 2 Naturalistic Driving Study (NDS) database to develop a model for detecting the likelihood of a driver's involvement in secondary tasks from distinctive attributes of driving behavior. Five performance attributes, namely speed, longitudinal acceleration, lateral acceleration, yaw rate, and throttle position, were used to describe the driving behavior. A model was developed for each of three selected secondary tasks: calling, texting, and passenger interaction. The models were developed using a supervised feed-forward Artificial Neural Network (ANN) architecture to account for the effect of inherent nonlinearity in the relationships between driving behavior and secondary tasks. The results show that the developed ANN models were able to detect the drivers' involvement in calling, texting, and passenger interaction with an overall accuracy of 99.5%, 98.1%, and 99.8%, respectively. These results show that the selected driving performance attributes were effective in detecting the secondary tasks associated with driving behavior. The results are very promising, and the developed models could potentially be applied in crash investigations to resolve legal disputes in traffic accidents. Copyright © 2017 Elsevier Ltd. All rights reserved.
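    The abstract does not specify the network's architecture or weights. As a minimal illustration of what a feed-forward ANN computes, here is a single-hidden-layer forward pass over the five driving-behavior attributes; all inputs and weights below are arbitrary made-up values, not the study's trained model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass of a tiny feed-forward network:
    a sigmoid hidden layer followed by a single sigmoid output unit."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)) + b_out)

# Five normalized driving-behavior inputs, as named in the abstract:
# speed, longitudinal acceleration, lateral acceleration, yaw rate,
# throttle position. Weights below are illustrative only.
x = [0.6, 0.1, 0.2, 0.05, 0.3]
w_hidden = [[0.5, -0.2, 0.3, 0.1, 0.4],
            [-0.3, 0.8, -0.1, 0.2, 0.1]]
b_hidden = [0.0, 0.1]
w_out = [1.2, -0.7]
b_out = -0.2
score = forward(x, w_hidden, b_hidden, w_out, b_out)
print(0.0 < score < 1.0)  # the sigmoid output is a score in (0, 1)
```

    In a trained detector, the output score would be thresholded to classify whether the driver is engaged in a given secondary task.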

  2. Analysis of the Latin Square Task with Linear Logistic Test Models

    ERIC Educational Resources Information Center

    Zeuch, Nina; Holling, Heinz; Kuhn, Jorg-Tobias

    2011-01-01

    The Latin Square Task (LST) was developed by Birney, Halford, and Andrews [Birney, D. P., Halford, G. S., & Andrews, G. (2006). Measuring the influence of cognitive complexity on relational reasoning: The development of the Latin Square Task. Educational and Psychological Measurement, 66, 146-171.] and represents a non-domain specific,…

  3. Task 21 - Development of Systems Engineering Applications for Decontamination and Decommissioning Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, T.A.

    1998-11-01

    The objectives of this task are to: Develop a model (paper) to estimate the cost and waste generation of cleanup within the Environmental Management (EM) complex; Identify technologies applicable to decontamination and decommissioning (D and D) operations within the EM complex; Develop a database of facility information as linked to project baseline summaries (PBSs). The above objectives are carried out through the following four subtasks: Subtask 1--D and D Model Development; Subtask 2--Technology List; Subtask 3--Facility Database; and Subtask 4--Incorporation into a User Model.

  4. Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task

    NASA Astrophysics Data System (ADS)

    Halbrügge, Marc

    2010-12-01

    This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models for human behavior during an open ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations or other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence matching techniques is proposed instead. This measurement, when applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
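    The abstract names its fit measure only as "based on rank orders and sequence matching techniques" without giving a formula. One hypothetical way to combine those two ideas, sketched here with the standard library's `difflib.SequenceMatcher`, is to replace each series by its rank order and score how well the two orderings align:

```python
from difflib import SequenceMatcher

def rank_sequence(values):
    """Replace each observation by its rank within the series."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0] * len(values)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

def rank_order_fit(model_series, human_series):
    """Similarity in [0, 1] of the rank orderings of two series."""
    a = rank_sequence(model_series)
    b = rank_sequence(human_series)
    return SequenceMatcher(None, a, b).ratio()

# Two series with different magnitudes but identical rank order.
model = [2.0, 3.5, 3.0, 5.0]
human = [1.0, 4.0, 3.0, 6.0]
print(rank_order_fit(model, human))
```

    Because only ranks are compared, the measure is insensitive to scale differences between model and human data, which is one reason rank-based fit indicators are sometimes preferred over correlations.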

  5. Dual Arm Work Package performance estimates and telerobot task network simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draper, J.V.; Blair, L.M.

    1997-02-01

    This paper describes the methodology and results of a network simulation study of the Dual Arm Work Package (DAWP), to be employed for dismantling the Argonne National Laboratory CP-5 reactor. The development of the simulation model was based upon the results of a task analysis for the same system. This study was performed by the Oak Ridge National Laboratory (ORNL), in the Robotics and Process Systems Division. Funding was provided by the US Department of Energy's Office of Technology Development, Robotics Technology Development Program (RTDP). The RTDP is developing methods of computer simulation to estimate telerobotic system performance. Data were collected to provide point estimates to be used in a task network simulation model. Three skilled operators performed six repetitions of a pipe cutting task representative of typical teleoperation cutting operations.

  6. Basic Remote Sensing Investigations for Beach Reconnaissance.

    DTIC Science & Technology

    Progress is reported on three tasks designed to develop remote sensing beach reconnaissance techniques applicable to the benthic, beach intertidal, and beach upland zones. Task 1 is designed to develop remote sensing indicators of important beach composition and physical parameters which will ultimately prove useful in models to predict beach conditions. Task 2 is designed to develop remote sensing techniques for survey of bottom features in

  7. A model for combined targeting and tracking tasks in computer applications.

    PubMed

    Senanayake, Ransalu; Hoffmann, Errol R; Goonetilleke, Ravindra S

    2013-11-01

    Current models for targeted-tracking are discussed and shown to be inadequate as a means of understanding the combined task of tracking, as in Drury's paradigm, with a final target to be aimed at, as in Fitts' paradigm. It is shown that the task has to be split into components that are, in general, performed sequentially, each with a movement time dependent on the difficulty of that component of the task. In some cases the task time may be controlled by the Fitts task difficulty, and in others it may be dominated by the Drury task difficulty. Based on an experiment that captured movement time in combinations of visually controlled and ballistic movements, a model for movement time in targeted-tracking was developed.
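    The abstract does not give the model's equations. As a hypothetical sketch of a sequential-component movement-time model, one can sum a Drury-style tracking term (index of difficulty proportional to path length over track width) and a Fitts term (ID = log2(2A/W)) for the final aimed movement; the intercepts and slopes below are made-up values, not fitted coefficients from the study:

```python
import math

def fitts_id(amplitude, width):
    """Fitts' index of difficulty, ID = log2(2A / W)."""
    return math.log2(2 * amplitude / width)

def drury_id(path_length, track_width):
    """Drury-style tracking index of difficulty, ID = L / W."""
    return path_length / track_width

def combined_movement_time(path_length, track_width, amplitude, width,
                           a1=0.1, b1=0.05, a2=0.05, b2=0.1):
    """Sequential-component model: tracking time plus aimed-movement time.
    Intercepts and slopes (a1, b1, a2, b2, in seconds) are illustrative."""
    return (a1 + b1 * drury_id(path_length, track_width)
            + a2 + b2 * fitts_id(amplitude, width))

mt = combined_movement_time(path_length=200, track_width=10,
                            amplitude=80, width=10)
print(round(mt, 3))
```

    Depending on the geometry, either term can dominate the total, which matches the abstract's observation that task time is sometimes controlled by the Fitts difficulty and sometimes by the Drury difficulty.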

  8. Developing Battery Computer Aided Engineering Tools for Military Vehicles

    DTIC Science & Technology

    2013-12-01

    Task 1.b: Modeling bullet penetration. The purpose of Task 1.a was to extend the chemical kinetics models of CoO2 cathodes developed under CAEBAT to lithium-ion batteries. The new finite element model captures swelling/shrinking in cathodes/anodes due to thermal expansion and lithium intercalation. Reaction values from the source table: (1) Solid Electrolyte Interphase (SEI) layer decomposition, 80; (2) anode-electrolyte, 100; (3) cathode-electrolyte, 130; (4) electrolyte decomposition, 180.

  9. Medication Reconciliation: Work Domain Ontology, prototype development, and a predictive model.

    PubMed

    Markowitz, Eliz; Bernstam, Elmer V; Herskovic, Jorge; Zhang, Jiajie; Shneiderman, Ben; Plaisant, Catherine; Johnson, Todd R

    2011-01-01

    Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process. A WDO defines the explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System's and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing a MR tool, based upon the WDO and user interface guidelines, improves user efficiency and reduces cognitive load.
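    A Keystroke-Level Model analysis like the one described above predicts task time by summing fixed operator times over the sequence of physical and mental steps a task requires. The sketch below uses the commonly cited Card, Moran and Newell default operator times; the micro-task sequence is a hypothetical example, not one of the study's three MR tasks:

```python
# Commonly cited Keystroke-Level Model operator times, in seconds
# (Card, Moran & Newell). Actual values vary with the user population.
OPERATOR_TIMES = {
    "K": 0.2,   # keystroke or button press
    "P": 1.1,   # point at a target with the mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(operator_sequence):
    """Predicted execution time for a sequence of KLM operators."""
    return sum(OPERATOR_TIMES[op] for op in operator_sequence)

# Hypothetical micro-task: think, point at a medication entry, click,
# home to the keyboard, then type four characters.
task = ["M", "P", "K", "H", "K", "K", "K", "K"]
print(klm_time(task))
```

    Comparing the summed times (and the count of M operators, which proxy mental workload) across tools is how a KLM analysis ranks interface designs, as the abstract does for the three MR tools.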

  10. Medication Reconciliation: Work Domain Ontology, Prototype Development, and a Predictive Model

    PubMed Central

    Markowitz, Eliz; Bernstam, Elmer V.; Herskovic, Jorge; Zhang, Jiajie; Shneiderman, Ben; Plaisant, Catherine; Johnson, Todd R.

    2011-01-01

    Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process. A WDO defines the explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System’s and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing a MR tool, based upon the WDO and user interface guidelines, improves user efficiency and reduces cognitive load. PMID:22195146

  11. Rule Following and Rule Use in the Balance-Scale Task

    ERIC Educational Resources Information Center

    Shultz, Thomas R.; Takane, Yoshio

    2007-01-01

    Quinlan et al. [Quinlan, P., van der Maas, H., Jansen, B., Booij, O., & Rendell, M. (this issue). Re-thinking stages of cognitive development: An appraisal of connectionist models of the balance scale task. "Cognition", doi:10.1016/j.cognition.2006.02.004] use Latent Class Analysis (LCA) to criticize a connectionist model of development on the…

  12. National facilities study. Volume 3: Mission and requirements model report

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The National Facility Study (NFS) was initiated in 1992 by Daniel S. Goldin, Administrator of NASA, as an initiative to develop a comprehensive and integrated long-term plan for future facilities. The resulting, multi-agency NFS consisted of three Task Groups: the Aeronautics, Space Operations, and Space Research and Development (R&D) Task Groups. A fourth group, the Engineering and Cost Analysis Task Group, was subsequently added to provide cross-cutting functions, such as assuring consistency in developing an inventory of space facilities. Space facilities decisions require an assessment of current and future needs. Therefore, the two task groups dealing with space developed a consistent model of future space mission programs, operations, and R&D. The model is a middle-ground baseline constructed for NFS analytical purposes with excursions to cover potential space program strategies. The model includes three major sectors: DOD, civilian government, and commercial space. The model spans the next 30 years because of the long lead times associated with facilities development and usage. This document, Volume 3 of the final NFS report, is organized along the following lines: Executive Summary -- provides a summary view of the 30-year mission forecast and requirements baseline, an overview of excursions from that baseline that were studied, and organization of the report; Introduction -- provides discussions of the methodology used in this analysis; Baseline Model -- provides the mission and requirements model baseline developed for Space Operations and Space R&D analyses; Excursions from the baseline -- reviews the details of variations or 'excursions' that were developed to test the future program projections captured in the baseline; and a Glossary of Acronyms.

  13. How Knowledge Worker Teams Deal Effectively with Task Uncertainty: The Impact of Transformational Leadership and Group Development.

    PubMed

    Leuteritz, Jan-Paul; Navarro, José; Berger, Rita

    2017-01-01

    The purpose of this paper is to clarify how leadership is able to improve team effectiveness, by means of its influence on group processes (i.e., increasing group development) and on the group task (i.e., decreasing task uncertainty). Four hundred and eight members of 107 teams in a German research and development (R&D) organization completed a web-based survey; they provided measures of transformational leadership, group development, 2 aspects of task uncertainty, task interdependence, and team effectiveness. In 54 of these teams, the leaders answered a web-based survey on team effectiveness. We tested the model with the data from team members, using structural equations modeling. Group development and a task uncertainty measurement that refers to unstable demands from outside the team partially mediate the effect of transformational leadership on team effectiveness in R&D organizations (p < 0.05). Although transformational leaders reduce unclarity of goals (p < 0.05), this seems not to contribute to team effectiveness. The data provided by the leaders was used to assess common source bias, which did not affect the interpretability of the results. Limitations include cross-sectional data and a lower than expected variance of task uncertainty across different job types. This paper contributes to understanding how knowledge worker teams deal effectively with task uncertainty and confirms the importance of group development in this context. This is the first study to examine the effects of transformational leadership and team processes on team effectiveness considering the task characteristics uncertainty and interdependence.

  14. How Knowledge Worker Teams Deal Effectively with Task Uncertainty: The Impact of Transformational Leadership and Group Development

    PubMed Central

    Leuteritz, Jan-Paul; Navarro, José; Berger, Rita

    2017-01-01

    The purpose of this paper is to clarify how leadership is able to improve team effectiveness, by means of its influence on group processes (i.e., increasing group development) and on the group task (i.e., decreasing task uncertainty). Four hundred and eight members of 107 teams in a German research and development (R&D) organization completed a web-based survey; they provided measures of transformational leadership, group development, 2 aspects of task uncertainty, task interdependence, and team effectiveness. In 54 of these teams, the leaders answered a web-based survey on team effectiveness. We tested the model with the data from team members, using structural equations modeling. Group development and a task uncertainty measurement that refers to unstable demands from outside the team partially mediate the effect of transformational leadership on team effectiveness in R&D organizations (p < 0.05). Although transformational leaders reduce unclarity of goals (p < 0.05), this seems not to contribute to team effectiveness. The data provided by the leaders was used to assess common source bias, which did not affect the interpretability of the results. Limitations include cross-sectional data and a lower than expected variance of task uncertainty across different job types. This paper contributes to understanding how knowledge worker teams deal effectively with task uncertainty and confirms the importance of group development in this context. This is the first study to examine the effects of transformational leadership and team processes on team effectiveness considering the task characteristics uncertainty and interdependence. PMID:28861012

  15. A tweaking principle for executive control: neuronal circuit mechanism for rule-based task switching and conflict resolution.

    PubMed

    Ardid, Salva; Wang, Xiao-Jing

    2013-12-11

    A hallmark of executive control is the brain's agility to shift between different tasks depending on the behavioral rule currently in play. In this work, we propose a "tweaking hypothesis" for task switching: a weak rule signal provides a small bias that is dramatically amplified by reverberating attractor dynamics in neural circuits for stimulus categorization and action selection, leading to an all-or-none reconfiguration of sensory-motor mapping. Based on this principle, we developed a biologically realistic model with multiple modules for task switching. We found that the model quantitatively accounts for complex task switching behavior: switch cost, congruency effect, and task-response interaction, as well as monkey single-neuron activity associated with task switching. The model yields several testable predictions, in particular, that category-selective neurons play a key role in resolving sensory-motor conflict. This work represents a neural circuit model for task switching and sheds light on the brain mechanisms of a fundamental cognitive capability.

  16. A Method for Cognitive Task Analysis

    DTIC Science & Technology

    1992-07-01

    A method for cognitive task analysis is described based on the notion of 'generic tasks'. The method distinguishes three layers of analysis. At the...model for applied areas such as the development of knowledge-based systems and training, are discussed. Problem solving, Cognitive Task Analysis, Knowledge, Strategies.

  17. Ares I-X Flight Evaluation Tasks in Support of Ares I Development

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Richards, James S.; Coates, Ralph H., III; Cruit, Wendy D.; Ramsey, Matthew N.

    2010-01-01

    NASA's Constellation Program successfully launched the Ares I-X Flight Test Vehicle on October 28, 2009. The Ares I-X flight was a development flight test that offered a unique opportunity for early engineering data to impact the design and development of the Ares I crew launch vehicle. As the primary customer for flight data from the Ares I-X mission, the Ares Projects Office established a set of 33 flight evaluation tasks to correlate flight results with prospective design assumptions and models. Included within these tasks were direct comparisons of flight data with pre-flight predictions and post-flight assessments utilizing models and modeling techniques being applied to design and develop Ares I. A discussion of the similarities and differences in those comparisons and the need for discipline-level model updates based upon those comparisons form the substance of this paper. The benefits of development flight testing were made evident by implementing these tasks that used Ares I-X data to partially validate tools and methodologies in technical disciplines that will ultimately influence the design and development of Ares I and future launch vehicles. The areas in which partial validation from the flight test was most significant included flight control system algorithms to predict liftoff clearance, ascent, and stage separation; structural models from rollout to separation; thermal models that have been updated based on these data; pyroshock attenuation; and the ability to predict complex flow fields during time-varying conditions including plume interactions.

  18. Simulating the role of visual selective attention during the development of perceptual completion

    PubMed Central

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2014-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the perceptual unity task. Two parameters in the model – corresponding to areas in the occipital and parietal cortices – were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. PMID:23106728

  19. Simulating the role of visual selective attention during the development of perceptual completion.

    PubMed

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P

    2012-11-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds' performance on a second measure, the perceptual unity task. Two parameters in the model - corresponding to areas in the occipital and parietal cortices - were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. © 2012 Blackwell Publishing Ltd.

  20. International Collaboration on Spent Fuel Disposition in Crystalline Media: FY17 Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yifeng; Hadgu, Teklu; Kainina, Elena

    Active participation in international R&D is crucial for achieving the Spent Fuel Waste Science & Technology (SFWST) long-term goals of conducting “experiments to fill data needs and confirm advanced modeling approaches” and of having a “robust modeling and experimental basis for evaluation of multiple disposal system options” (by 2020). DOE’s Office of Nuclear Energy (NE) has developed a strategic plan to advance cooperation with international partners. The international collaboration on the evaluation of crystalline disposal media at Sandia National Laboratories (SNL) in FY17 focused on the collaboration through the Development of Coupled Models and their Validation against Experiments (DECOVALEX-2019) project. The DECOVALEX project is an international research and model comparison collaboration, initiated in 1992, for advancing the understanding and modeling of coupled thermo-hydro-mechanical-chemical (THMC) processes in geological systems. SNL has been participating in three tasks of the DECOVALEX project: Task A. Modeling gas injection experiments (ENGINEER), Task C. Modeling groundwater recovery experiment in tunnel (GREET), and Task F. Fluid inclusion and movement in the tight rock (FINITO).

  1. Acquisition of Joint Attention by a Developmental Learning Model based on Interactions between a Robot and a Caregiver

    NASA Astrophysics Data System (ADS)

    Nagai, Yukie; Asada, Minoru; Hosoda, Koh

    This paper presents a developmental learning model for joint attention between a robot and a human caregiver. The basic idea of the proposed model comes from the insight, from cognitive developmental science, that development can help task learning. The model consists of a learning mechanism based on evaluation and two kinds of developmental mechanisms: the robot's development and the caregiver's. The former means that the sensing and actuating capabilities of the robot change from immaturity to maturity. The latter is defined as a process in which the caregiver changes the task from an easy situation to a difficult one. These two developments are triggered by learning progress. The experimental results show that the proposed model can accelerate the learning of joint attention owing to the caregiver's development. Furthermore, it is observed that the robot's development can improve the final task performance by reducing the internal representation in the learned neural network. The mechanisms that bring these effects to the learning are analyzed in line with cognitive developmental science.

  2. Demands and Tasks of Intercultural School Development: Group Discussions with Experts about Intercultural School Development

    ERIC Educational Resources Information Center

    Syring, Marcus; Tillmann, Teresa; Sacher, Nicole; Weiß, Sabine; Kiel, Ewald

    2018-01-01

    The present study is aimed at identifying demands and tasks that are considered important by experts in the field of interculturalism for the successful development of schools. Although different theoretical models about intercultural school development, incorporating various conditions and dimensions, have already been suggested, gaps in research…

  3. A neurally plausible parallel distributed processing model of event-related potential word reading data.

    PubMed

    Laszlo, Sarah; Plaut, David C

    2012-03-01

    The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Assessing Measurement Invariance for Spanish Sentence Repetition and Morphology Elicitation Tasks.

    PubMed

    Kapantzoglou, Maria; Thompson, Marilyn S; Gray, Shelley; Restrepo, M Adelaida

    2016-04-01

    The purpose of this study was to evaluate evidence supporting the construct validity of two grammatical tasks (sentence repetition, morphology elicitation) included in the Spanish Screener for Language Impairment in Children (Restrepo, Gorin, & Gray, 2013). We evaluated if the tasks measured the targeted grammatical skills in the same way across predominantly Spanish-speaking children with typical language development and those with primary language impairment. A multiple-group, confirmatory factor analytic approach was applied to examine factorial invariance in a sample of 307 predominantly Spanish-speaking children (177 with typical language development; 130 with primary language impairment). The 2 newly developed grammatical tasks were modeled as measures in a unidimensional confirmatory factor analytic model along with 3 well-established grammatical measures from the Clinical Evaluation of Language Fundamentals-Fourth Edition, Spanish (Wiig, Semel, & Secord, 2006). Results suggest that both new tasks measured the construct of grammatical skills for both language-ability groups in an equivalent manner. There was no evidence of bias related to children's language status for the Spanish Screener for Language Impairment in Children Sentence Repetition or Morphology Elicitation tasks. Results provide support for the validity of the new tasks as measures of grammatical skills.

  5. Silica exposure during construction activities: statistical modeling of task-based measurements from the literature.

    PubMed

    Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme

    2013-05-01

    Many construction activities can put workers at risk of breathing silica-containing dusts, and there is an important body of literature documenting exposure levels using a task-based strategy. In this study, statistical modeling was used to analyze a data set containing 1466 task-based, personal respirable crystalline silica (RCS) measurements gathered from 46 sources to estimate exposure levels during construction tasks and the effects of determinants of exposure. Monte Carlo simulation was used to recreate individual exposures from summary parameters, and the statistical modeling involved multimodel inference with Tobit models containing combinations of the following exposure variables: sampling year, sampling duration, construction sector, project type, workspace, ventilation, and controls. Exposure levels by task were predicted based on the median reported duration by activity, the year 1998, absence of source control methods, and an equal distribution of the other determinants of exposure. The model containing all the variables explained 60% of the variability and was identified as the best approximating model. Of the 27 tasks contained in the data set, abrasive blasting, masonry chipping, scabbling concrete, tuck pointing, and tunnel boring had estimated geometric means above 0.1 mg m(-3) based on the exposure scenario developed. Water-fed tools and local exhaust ventilation were associated with reductions of 71% and 69% in exposure levels compared with no controls, respectively. The predictive model developed can be used to estimate RCS concentrations for many construction activities in a wide range of circumstances.
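    The Monte Carlo step described in the abstract, recreating individual exposures from summary parameters reported in the literature, can be sketched as follows. This is a minimal illustration assuming lognormally distributed exposures; the GM/GSD values and sample size are invented, not taken from the study:

```python
import math
import random

def simulate_exposures(gm, gsd, n, rng):
    """Recreate n individual silica exposures (mg/m^3) from a reported
    geometric mean (gm) and geometric standard deviation (gsd),
    assuming a lognormal exposure distribution."""
    mu, sigma = math.log(gm), math.log(gsd)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

rng = random.Random(42)
# hypothetical task summary: GM = 0.15 mg/m^3, GSD = 2.5
samples = simulate_exposures(0.15, 2.5, 10_000, rng)

# the geometric mean of the recreated sample should recover the input GM
recovered_gm = math.exp(sum(math.log(x) for x in samples) / len(samples))
```

    Fitting Tobit (censored) regression models to such recreated data additionally requires handling values below the limit of detection, which is beyond this sketch.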

  6. Emergence of Tables as First-Graders Cope with Modelling Tasks

    ERIC Educational Resources Information Center

    Peled, Irit; Keisar, Einav

    2015-01-01

    In this action research, first-graders were challenged to cope with a sequence of modelling tasks involving an analysis of given situations and choices of mathematical tools. In the course of the sequence, they underwent a change in the nature of their problem-solving processes and developed modelling competencies. Moreover, during the task…

  7. A Standard Procedure for Conducting Cognitive Task Analysis.

    ERIC Educational Resources Information Center

    Redding, Richard E.

    Traditional methods for task analysis have been largely based on the Instructional Systems Development (ISD) model, which is widely used throughout industry and the military. The first part of this document gives an overview of cognitive task analysis, which is conducted within the first phase of ISD. The following steps of cognitive task analysis…

  8. Development of "Task Value" Instrument for Biology as a School Subject

    ERIC Educational Resources Information Center

    Köksal, Mustafa Serdar; Yaman, Süleyman

    2013-01-01

    The expectancy-value model of motivation states that individuals' choice, persistence, and performance are related to their beliefs about how much they value a task. Despite the importance of "task value" in learning biology, the lack of practical instruments measuring task value in high school biology courses indicated a need to…

  9. Task Based Language Teaching: Development of CALL

    ERIC Educational Resources Information Center

    Anwar, Khoirul; Arifani, Yudhi

    2016-01-01

    The dominant challenges of English teaching in Indonesia concern the limited development of teaching methods and materials, which still cannot optimally reflect students' needs (in particular, how students acquire knowledge and how to select the most effective learning models). This research develops materials with complete task-based activities by using…

  10. A task-based support architecture for developing point-of-care clinical decision support systems for the emergency department.

    PubMed

    Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B

    2013-01-01

    The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point-of-care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method that allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model, and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.

  11. Student Task Analysis for the Development of E-Learning Lectural System in Basic Chemistry Courses in FKIP UMMY Solok

    NASA Astrophysics Data System (ADS)

    Afrahamiryano, A.; Ariani, D.

    2018-04-01

    The student task analysis is one part of the define stage in development research using the 4-D development model. This task analysis is useful for determining students' level of understanding of the lecture material that has been given. Its results serve as a measuring tool for the success of learning and as a basis for developing the lecture system. The analysis was carried out through observation and a documentation study of the tasks undertaken by students. The results were described and then triangulated to draw conclusions. They indicate that students' level of understanding is high for theoretical material and low for material involving calculation. Based on the results of this task analysis, it can be concluded that the e-learning lecture system developed should increase students' understanding of basic chemistry topics that involve calculation.

  12. SIGNAL DETECTION BEHAVIOR IN HUMANS AND RATS: A COMPARISON WITH MATCHED TASKS.

    EPA Science Inventory

    Animal models of human cognitive processes are essential for studying the neurobiological mechanisms of these processes and for developing therapies for intoxication and neurodegenerative diseases. A discrete-trial signal detection task was developed for assessing sustained atten...

  13. Development of a task-level robot programming and simulation system

    NASA Technical Reports Server (NTRS)

    Liu, H.; Kawamura, K.; Narayanan, S.; Zhang, G.; Franke, H.; Ozkan, M.; Arima, H.; Liu, H.

    1987-01-01

    An ongoing project in developing a Task-Level Robot Programming and Simulation System (TARPS) is discussed. The objective of this approach is to design a generic TARPS that can be used in a variety of applications. Many robotic applications require off-line programming, and a TARPS is very useful in such applications. Task level programming is object centered in that the user specifies tasks to be performed instead of robot paths. Graphics simulation provides greater flexibility and also avoids costly machine setup and possible damage. A TARPS has three major modules: world model, task planner and task simulator. The system architecture, design issues and some preliminary results are given.

  14. Common EEG features for behavioral estimation in disparate, real-world tasks.

    PubMed

    Touryan, Jon; Lance, Brent J; Kerick, Scott E; Ries, Anthony J; McDowell, Kaleb

    2016-02-01

    In this study we explored the potential for capturing the behavioral dynamics observed in real-world tasks from concurrent measures of EEG. In doing so, we sought to develop models of behavior that would enable the identification of common cross-participant and cross-task EEG features. To accomplish this we had participants perform both simulated driving and guard duty tasks while we recorded their EEG. For each participant we developed models to estimate their behavioral performance during both tasks. Sequential forward floating selection was used to identify the montage of independent components for each model. Linear regression was then used on the combined power spectra from these independent components to generate a continuous estimate of behavior. Our results show that oscillatory processes, evidenced in EEG, can be used to successfully capture slow fluctuations in behavior in complex, multi-faceted tasks. The average correlation coefficients between actual and estimated behavior were 0.548 ± 0.117 and 0.701 ± 0.154 for the driving and guard duty tasks, respectively. Interestingly, through a simple clustering approach we were able to identify a number of common components, both neural and eye-movement related, across participants and tasks. We used these component clusters to quantify the relative influence of common versus participant-specific features in the models of behavior. These findings illustrate the potential for estimating complex behavioral dynamics from concurrent measures of EEG using a finite library of universal features. Published by Elsevier B.V.
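    The regression step described above, generating a continuous estimate of behavior and correlating it with the actual behavior, can be sketched with a one-feature ordinary least-squares fit. The data below are synthetic stand-ins for component band power and behavioral scores:

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ~ a*x + b for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def pearson_r(x, y):
    """Correlation between actual and estimated behavior."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    vy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (vx * vy)

# synthetic example: band power loosely tracking a behavioral lapse rate
power = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
behavior = [0.11, 0.19, 0.32, 0.38, 0.52, 0.61]
a, b = fit_line(power, behavior)
estimate = [a * p + b for p in power]
r = pearson_r(behavior, estimate)
```

    In the study, the predictors were the combined power spectra of several selected independent components rather than a single feature, but the fit-then-correlate logic is the same.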

  15. Predicting operator workload during system design

    NASA Technical Reports Server (NTRS)

    Aldrich, Theodore B.; Szabo, Sandra M.

    1988-01-01

    A workload prediction methodology was developed in response to the need to measure workloads associated with operation of advanced aircraft. The application of the methodology will involve: (1) conducting mission/task analyses of critical mission segments and assigning estimates of workload for the sensory, cognitive, and psychomotor workload components of each task identified; (2) developing computer-based workload prediction models using the task analysis data; and (3) exercising the computer models to produce predictions of crew workload under varying automation and/or crew configurations. Critical issues include reliability and validity of workload predictors and selection of appropriate criterion measures.
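    Step (2) above, building a workload prediction model from task-analysis data, can be sketched as a timeline summation: each task carries sensory, cognitive, and psychomotor ratings, and concurrently active tasks add their ratings per component. The tasks, rating values, and overload threshold below are illustrative assumptions, not the authors' values:

```python
# Each task: (start, end, {component: rating on an illustrative scale})
tasks = [
    (0, 10, {"sensory": 3.5, "cognitive": 1.0, "psychomotor": 2.0}),
    (5, 15, {"sensory": 4.0, "cognitive": 4.6, "psychomotor": 2.2}),
    (12, 20, {"sensory": 1.0, "cognitive": 5.0, "psychomotor": 1.0}),
]

def workload_at(t, tasks):
    """Sum each component's ratings across all tasks active at time t."""
    total = {"sensory": 0.0, "cognitive": 0.0, "psychomotor": 0.0}
    for start, end, ratings in tasks:
        if start <= t < end:
            for comp, val in ratings.items():
                total[comp] += val
    return total

OVERLOAD = 7.0  # illustrative per-component threshold
overloaded = [t for t in range(20)
              if any(v > OVERLOAD for v in workload_at(t, tasks).values())]
```

    Time windows in which any component exceeds the threshold flag potential crew overload under a given automation or crew configuration.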

  16. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    PubMed

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  17. Simulation Of Research And Development Projects

    NASA Technical Reports Server (NTRS)

    Miles, Ralph F.

    1987-01-01

    Measures of preference for alternative project plans are calculated. The Simulation of Research and Development Projects (SIMRAND) program aids in the optimal allocation of research and development resources needed to achieve project goals. It models system subsets or project tasks as various network paths to the final goal. Each path is described in terms of task variables such as cost per hour, cost per unit, and availability of resources. Uncertainty is incorporated by treating task variables as probabilistic random variables. Written in Microsoft FORTRAN 77.
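    The network-path comparison SIMRAND performs can be sketched in miniature: each path to the goal is a chain of tasks whose costs are random variables, and repeated sampling yields each path's cost distribution. The paths and triangular-distribution parameters below are invented for illustration:

```python
import random

# Each path: list of (low, mode, high) triangular cost parameters per task
paths = {
    "path_A": [(10, 12, 20), (5, 6, 9)],
    "path_B": [(8, 15, 30), (10, 14, 25)],
}

def sample_path_cost(tasks, rng):
    """One Monte Carlo realization of a path's total cost."""
    # random.triangular takes (low, high, mode)
    return sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)

def mean_costs(paths, n, rng):
    """Estimate each path's expected cost from n samples."""
    return {name: sum(sample_path_cost(t, rng) for _ in range(n)) / n
            for name, t in paths.items()}

rng = random.Random(0)
means = mean_costs(paths, 5000, rng)
best = min(means, key=means.get)
```

    A fuller treatment would compare whole cost distributions (or utilities) rather than means, which is closer to how preference measures over alternative plans are derived.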

  18. The Use of Logistic Model in RUL Assessment

    NASA Astrophysics Data System (ADS)

    Gumiński, R.; Radkowski, S.

    2017-12-01

    The paper takes on the issue of assessment of remaining useful life (RUL). The goal of the paper was to develop a method that would enable the use of diagnostic information in the task of reducing the uncertainty related to technical risk. Prediction of the RUL of a system is a very important task for maintenance strategy. In the literature, the RUL of an engineering system is defined as the first future time instant at which thresholds on conditions (safety, operational quality, maintenance cost, etc.) are violated. Knowledge of RUL offers the possibility of planning testing and repair activities. Building models of damage development is important in this task. In the presented work, a logistic function is used to model fatigue crack development. Modeling every phase of damage development with a single model is very difficult; modeling each phase separately, especially when on-line diagnostic information is included, is more effective. Particular attention was paid to the possibility of forecasting the occurrence of fatigue damage by relying on the analysis of the structure of a vibroacoustic signal.
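    The logistic damage model can be sketched directly: crack size follows a logistic curve in time, and RUL is the first future instant at which a threshold is violated, minus the present time. All parameter values below are illustrative, not from the paper:

```python
import math

def crack_size(t, a_max=10.0, k=0.4, t0=25.0):
    """Logistic model of fatigue-crack development: size (mm) at time t."""
    return a_max / (1.0 + math.exp(-k * (t - t0)))

def remaining_useful_life(t_now, threshold, dt=0.01, horizon=200.0):
    """First future time at which the threshold is violated, minus now."""
    t = t_now
    while t < t_now + horizon:
        if crack_size(t) >= threshold:
            return t - t_now
        t += dt
    return None  # threshold not reached within the horizon

rul = remaining_useful_life(t_now=10.0, threshold=8.0)
```

    In practice the logistic parameters would be re-estimated as on-line diagnostic (e.g., vibroacoustic) measurements arrive, tightening the RUL estimate over time.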

  19. COMMUNITY-SCALE MODELING FOR AIR TOXICS AND HOMELAND SECURITY

    EPA Science Inventory

    The purpose of this task is to develop and evaluate numerical and physical modeling tools for simulating ambient concentrations of airborne substances in urban settings at spatial scales ranging from <1-10 km. Research under this task will support client needs in human exposure ...

  20. When more of the same is better

    NASA Astrophysics Data System (ADS)

    Fontanari, José F.

    2016-01-01

    Problem solving (e.g., drug design, traffic engineering, software development) by task forces represents a substantial portion of the economy of developed countries. Here we use an agent-based model of cooperative problem-solving systems to study the influence of diversity on the performance of a task force. We assume that agents cooperate by exchanging information on their partial success and use that information to imitate the more successful agent in the system, the "model". The agents differ only in their propensities to copy the model. We find that, for easy tasks, the optimal organization is a homogeneous system composed of agents with the highest possible copy propensities. For difficult tasks, we find that diversity can prevent the system from being trapped in sub-optimal solutions. However, when the system size is adjusted to maximize performance, the homogeneous systems outperform the heterogeneous systems, i.e., for optimal performance, sameness should be preferred to diversity.
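    The imitation dynamics can be sketched with a toy version of such an agent-based model: agents search for a target bitstring and, with individual copy propensity p, imitate the current best agent (the "model"); otherwise they explore by flipping a random bit. The landscape and parameters are simplified stand-ins for the published model:

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # the task's solution

def fitness(s):
    """Partial success: number of entries matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def run(propensities, rng, max_steps=5000):
    """Return the step at which some agent matches TARGET exactly."""
    agents = [[rng.randint(0, 1) for _ in TARGET] for _ in propensities]
    for step in range(1, max_steps + 1):
        model = max(agents, key=fitness)  # most successful agent
        if fitness(model) == len(TARGET):
            return step
        for agent, p in zip(agents, propensities):
            i = rng.randrange(len(TARGET))
            if rng.random() < p:
                agent[i] = model[i]      # imitate the model
            else:
                agent[i] = 1 - agent[i]  # explore: flip a random entry
    return None

rng = random.Random(7)
steps = run([0.8] * 10, rng)  # homogeneous team, high copy propensity
```

    Comparing `run` times for homogeneous versus mixed-propensity teams on harder (e.g., rugged or deceptive) landscapes is the kind of experiment the abstract describes.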

  1. Development of an Improved Simulator for Chemical and Microbial EOR Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Gary A.; Sepehrnoori, Kamy; Delshad, Mojdeh

    2000-09-11

    The objective of this research was to extend the capability of an existing simulator (UTCHEM) to improved oil recovery methods that use surfactants, polymers, gels, alkaline chemicals, microorganisms, and foam, as well as various combinations of these, in both conventional and naturally fractured oil reservoirs. Task 1 is the addition of a dual-porosity model for chemical improved oil recovery processes in naturally fractured oil reservoirs. Task 2 is the addition of a foam model. Task 3 addresses several numerical and coding enhancements that will greatly improve the versatility and performance of UTCHEM. Task 4 is the enhancement of physical property models.

  2. Toward a model-based cognitive neuroscience of mind wandering.

    PubMed

    Hawkins, G E; Mittner, M; Boekel, W; Heathcote, A; Forstmann, B U

    2015-12-03

    People often "mind wander" during everyday tasks, temporarily losing track of time, place, or current task goals. In laboratory-based tasks, mind wandering is often associated with performance decrements in behavioral variables and changes in neural recordings. Such empirical associations provide descriptive accounts of mind wandering (how it affects ongoing task performance) but fail to provide true explanatory accounts (why it affects task performance). In this perspectives paper, we consider mind wandering as a neural state or process that affects the parameters of quantitative cognitive process models, which in turn affect observed behavioral performance. Our approach thus uses cognitive process models to bridge the explanatory divide between neural and behavioral data. We provide an overview of two general frameworks for developing a model-based cognitive neuroscience of mind wandering. The first approach uses neural data to segment observed performance into a discrete mixture of latent task-related and task-unrelated states, and the second regresses single-trial measures of neural activity onto structured trial-by-trial variation in the parameters of cognitive process models. We discuss the relative merits of the two approaches, and the research questions they can answer, and highlight that both approaches allow neural data to provide additional constraint on the parameters of cognitive models, which will lead to a more precise account of the effect of mind wandering on brain and behavior. We conclude by summarizing prospects for mind wandering as conceived within a model-based cognitive neuroscience framework, highlighting the opportunities for its continued study and the benefits that arise from using well-developed quantitative techniques to study abstract theoretical constructs. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Do we always prioritize balance when walking? Towards an integrated model of task prioritization.

    PubMed

    Yogev-Seligmann, Galit; Hausdorff, Jeffrey M; Giladi, Nir

    2012-05-01

    Previous studies suggest that strategies such as "posture first" are implicitly employed to regulate safety when healthy adults walk while simultaneously performing another task, whereas "posture second" may be inappropriately applied in the presence of neurological disease. However, recent understandings raise questions about the traditional resource allocation concept during walking while dual tasking. We propose a task prioritization model of walking while dual tasking that integrates motor and cognitive capabilities, focusing on postural reserve, hazard estimation, and other individual intrinsic factors. The proposed prioritization model provides a theoretical foundation for future studies and a framework for the development of interventions designed to reduce the profound negative impacts of dual tasking on gait and fall risk in patients with neurological diseases. © 2012 Movement Disorder Society.

  4. Application of GIS Technology for Town Planning Tasks Solving

    NASA Astrophysics Data System (ADS)

    Kiyashko, G. A.

    2017-11-01

    For developing territories, one of the most pressing town-planning tasks is to find suitable sites for building projects. A geographic information system (GIS) allows one to model complex spatial processes and can provide the effective tools needed to solve these tasks. We propose several GIS analysis models that can define suitable settlement allocations and select appropriate parcels for construction objects. We implement our models in the ArcGIS Desktop package and verify them by application to existing objects in Primorsky Region (Primorye Territory). These suitability models use several combinations of analysis methods and include various ways to resolve the suitability task using vector data and a raster data set. The suitability models created in this study can be combined, and one model can be integrated into another as a component. Our models can also be extended with other suitability models for further detailed planning.
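    A suitability model of the kind described can be sketched as a weighted raster overlay: each input layer is first reclassified to a common 1-5 suitability scale and then combined with weights. The layers, scores, and weights below are invented for illustration; in ArcGIS the equivalent step would use Spatial Analyst tools such as Weighted Overlay:

```python
# 3x3 raster layers, already reclassified to a 1-5 suitability scale
slope      = [[5, 4, 2], [4, 3, 1], [2, 1, 1]]
road_dist  = [[3, 4, 5], [2, 3, 4], [1, 2, 3]]
flood_risk = [[5, 5, 4], [5, 3, 2], [4, 2, 1]]

layers = [(slope, 0.5), (road_dist, 0.3), (flood_risk, 0.2)]

def weighted_overlay(layers):
    """Combine reclassified layers into one suitability surface."""
    rows = len(layers[0][0])
    cols = len(layers[0][0][0])
    return [[sum(layer[r][c] * w for layer, w in layers)
             for c in range(cols)] for r in range(rows)]

suitability = weighted_overlay(layers)
# candidate parcels: cells scoring at least 4.0 on the combined scale
parcels = [(r, c) for r in range(3) for c in range(3)
           if suitability[r][c] >= 4.0]
```

    Chaining such models, e.g., feeding one model's output raster in as a weighted layer of another, mirrors the composition of suitability models described above.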

  5. Assessing the similarity of mental models of operating room team members and implications for patient safety: a prospective, replicated study.

    PubMed

    Nakarada-Kordic, Ivana; Weller, Jennifer M; Webster, Craig S; Cumin, David; Frampton, Christopher; Boyd, Matt; Merry, Alan F

    2016-08-31

    Patient safety depends on effective teamwork. The similarity of team members' mental models, or their shared understanding, regarding clinical tasks is likely to influence the effectiveness of teamwork. Mental models have not been measured in the complex, high-acuity environment of the operating room (OR), where professionals of different backgrounds must work together to achieve the best surgical outcome for each patient. Therefore, we aimed to explore the similarity of mental models of task sequence and of responsibility for task within multidisciplinary OR teams. We developed a computer-based card sorting tool (Momento) to capture the information on mental models in 20 six-person surgical teams, each comprising three subteams (anaesthesia, surgery, and nursing), for two simulated laparotomies. Team members sorted 20 cards depicting key tasks according to when in the procedure each task should be performed, and which subteam was primarily responsible for each task. Within each OR team and subteam, we conducted pairwise comparisons of scores to arrive at mean similarity scores for each task. The mean similarity score for task sequence was 87 % (range 57-97 %). The mean score for responsibility for task was 70 % (range = 38-100 %), but for half of the tasks was only 51 % (range = 38-69 %). Participants believed their own subteam was primarily responsible for approximately half the tasks in each procedure. We found differences in the mental models of some OR team members about responsibility for and order of certain tasks in an emergency laparotomy. Momento is a tool that could help elucidate and better align the mental models of OR team members about surgical procedures and thereby improve teamwork and outcomes for patients.
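    The pairwise scoring behind the mean similarity figures can be sketched as follows: each team member assigns every task card to a subteam, and a pair's similarity is the fraction of cards on which the two members agree. The members, cards, and assignments below are invented:

```python
from itertools import combinations

# each member's sort: card -> subteam judged primarily responsible
sorts = {
    "anaesthetist": {"check_airway": "anaesthesia", "count_swabs": "nursing",
                     "close_wound": "surgery", "give_antibiotics": "anaesthesia"},
    "surgeon":      {"check_airway": "anaesthesia", "count_swabs": "nursing",
                     "close_wound": "surgery", "give_antibiotics": "surgery"},
    "nurse":        {"check_airway": "anaesthesia", "count_swabs": "nursing",
                     "close_wound": "surgery", "give_antibiotics": "nursing"},
}

def pair_similarity(a, b):
    """Fraction of shared cards on which two members agree."""
    cards = a.keys() & b.keys()
    return sum(a[c] == b[c] for c in cards) / len(cards)

scores = [pair_similarity(sorts[m1], sorts[m2])
          for m1, m2 in combinations(sorts, 2)]
team_mean = sum(scores) / len(scores)
```

    Note how a single contested card (here, who gives the antibiotics) caps every pairwise score, which is the pattern behind the low responsibility-for-task scores reported above.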

  6. Small Engine Technology (SET) Task 23 ANOPP Noise Prediction for Small Engines, Wing Reflection Code

    NASA Technical Reports Server (NTRS)

    Lieber, Lysbeth; Brown, Daniel; Golub, Robert A. (Technical Monitor)

    2000-01-01

    The work performed under Task 23 consisted of the development and demonstration of improvements for the NASA Aircraft Noise Prediction Program (ANOPP), specifically targeted to the modeling of engine noise enhancement due to wing reflection. This report focuses on development of the model and procedure to predict the effects of wing reflection, and the demonstration of the procedure, using a representative wing/engine configuration.

  7. Long-term Recurrent Convolutional Networks for Visual Recognition and Description

    DTIC Science & Technology

    2014-11-17

    Models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning. A limitation of simple RNN models which strictly integrate state information over time is known as the "vanishing gradient" effect: the ability to…

  8. Computational and fMRI Studies of Visualization

    DTIC Science & Technology

    2009-03-31

    …spatial thinking in high-level cognition, such as in problem-solving and reasoning. In conjunction with the experimental work, the project developed a … computational modeling system (4CAPS) as well as 4CAPS models for particular tasks. The cognitive level of 4CAPS accounts for … neuroarchitecture to interpret and predict the brain activation in a network of cortical areas that underpin the performance of a visual thinking task. …

  9. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
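    A parametric estimator of this general shape, effort as a power law of size scaled by multiplicative environment factors derived from questionnaire answers, can be sketched as follows. The coefficients and multipliers are generic, COCOMO-style illustrations, not the calibrated JPL model:

```python
def estimate_effort(ksloc, multipliers, a=2.8, b=1.05):
    """Person-months ~ a * size^b * product of environment multipliers.
    a, b are illustrative coefficients; 1.0 means a nominal factor."""
    effort = a * ksloc ** b
    for m in multipliers.values():
        effort *= m
    return effort

# hypothetical questionnaire answers mapped to effort multipliers
answers = {
    "task_difficulty": 1.15,   # harder than nominal
    "team_experience": 0.90,   # experienced team
    "tool_support":    0.95,   # good tooling
}
effort = estimate_effort(32.0, answers)  # a hypothetical 32-KSLOC task
```

    Calibration, the step the abstract emphasizes, would adjust a, b, and the answer-to-multiplier mapping until predictions fit historical life-cycle statistics.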

  10. Propulsion System Dynamic Modeling of the NASA Supersonic Concept Vehicle for AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Seidel, Jonathan

    2016-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model, and the vehicle structural aerodynamics model. The development to date of such a preliminary integrated model will also be summarized in this report.

  11. Propulsion System Dynamic Modeling for the NASA Supersonic Concept Vehicle: AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph; Seidel, Jonathan

    2014-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model, and the vehicle structural-aerodynamics model. The development to date of such a preliminary integrated model, coupling the propulsion system dynamics, the structural dynamics, and aerodynamics, will also be summarized in this report.

  12. Propulsion System Dynamic Modeling of the NASA Supersonic Concept Vehicle for AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Seidel, Jonathan

    2014-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model and the vehicle structural-aerodynamics model. The development to date of such a preliminary integrated model will also be summarized in this report.

  13. POPCORN: a Supervisory Control Simulation for Workload and Performance Research

    NASA Technical Reports Server (NTRS)

    Hart, S. G.; Battiste, V.; Lester, P. T.

    1984-01-01

    A multi-task simulation of a semi-automatic supervisory control system was developed to provide an environment in which training, operator strategy development, failure detection and resolution, levels of automation, and operator workload can be investigated. The goal was to develop a well-defined, but realistically complex, task that would lend itself to model-based analysis. The name of the task (POPCORN) reflects the visual display, which depicts different task elements milling around waiting to be released and "pop" out to be performed. The operator's task was to complete each of 100 task elements, which were represented by different symbols, by selecting a target task and entering the desired command. The simulated automatic system then completed the selected function automatically. Highly significant differences in performance, strategy, and rated workload were found as a function of all experimental manipulations (except reward/penalty).

  14. Three-dimensional Aerodynamic Instability in Multi-stage Axial Compressors

    NASA Technical Reports Server (NTRS)

    Suder, Kenneth (Technical Monitor); Tan, Choon-Sooi

    2003-01-01

    Four separate tasks are reported. The first task: A Computational Model for Short Wavelength Stall Inception and Development in Multi-Stage Compressors; the second task: Three-dimensional Rotating Stall Inception and Effects of Rotating Tip Clearance Asymmetry in Axial Compressors; the third task: Development of an Effective Computational Methodology for Body Force Representation of High-speed Rotor 37; and the fourth task: Development of Circumferential Inlet Distortion through a Representative Eleven Stage High-speed Axial Compressor. The common theme threaded throughout these four tasks is a conceptual framework that consists of quantifying flow processes at the fan/compressor blade passage level to define the compressor performance characteristics needed for addressing, at the system level, physical phenomena such as compressor aerodynamic instability and compressor response to flow distortion with length scales larger than the compressor blade-to-blade spacing. The results from these two levels can be synthesized to: (1) simulate compressor aerodynamic instability inception local to a blade rotor tip and its development from a local flow event into the nonlinear limit-cycle instability that involves the entire compressor, as was demonstrated in the first task; (2) determine the conditions under which compressor stability assessment based on a two-dimensional model may not be adequate, and the effects of self-induced flow distortion on the compressor stability limit, as in the second task; (3) quantify multistage compressor response to inlet distortion in stagnation pressure, as illustrated in the fourth task; and (4) elucidate its potential applicability for compressor map generation under uniform as well as non-uniform inlet flow, given a three-dimensional Navier-Stokes solution for each individual blade row, as was demonstrated in the third task.

  15. An interpersonal neurobiological-informed treatment model for childhood traumatic grief.

    PubMed

    Crenshaw, David A

    This article expands an earlier model of the tasks of grieving (1990, 1995, 2001) by building on science-based findings derived from research in attachment theory, neuroscience, interpersonal neurobiology, and childhood traumatic grief (CTG). The proposed treatment model is a prescriptive approach that spells out specific tasks to be undertaken by children suffering traumatic grief under the direction of a therapist trained in trauma-informed therapy approaches. It draws heavily on the empirically derived childhood traumatic grief treatment model developed by Cohen and Mannarino (2004; Cohen, Mannarino, & Deblinger, 2006) and expands on their work by proposing specific tasks that are informed by attachment theory research and interpersonal neurobiological research (Schore, 2003a, 2003b; Siegel, 1999). Particular emphasis is placed on developing a coherent and meaningful narrative, since this has been found to be a crucial factor in recovery from trauma in attachment research (Siegel, 1999; Siegel & Hartzell, 2003).

  16. Energy Auditor and Quality Control Inspector Competency Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Head, Heather R.; Kurnik, Charles W.; Schroeder, Derek

    The Energy Auditor (EA) and Quality Control Inspector (QCI) Competency Model was developed to identify the soft skills and foundational competencies, and to define the levels of Knowledge, Skills, and Abilities (KSAs), required to successfully perform the tasks defined in the EA and QCI Job Task Analyses (JTAs). The U.S. Department of Energy (DOE) used the U.S. Department of Labor's (DOL) Competency Model Clearinghouse resources to develop the QCI and EA Competency Model. To keep the QCI and EA competency model consistent with other construction and energy management competency models, DOE and the National Renewable Energy Laboratory used the existing "Residential Construction Competency Model" and the "Advanced Commercial Building Competency Model" where appropriate.

  17. Prediction task guided representation learning of medical codes in EHR.

    PubMed

    Cui, Liwen; Xie, Xiaolei; Shen, Zuojun

    2018-06-18

    There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.
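
    The abstract does not give PTGHRA's algorithmic details, but the contrast it draws (unsupervised code vectors versus vectors shaped by the prediction task) can be illustrated with a minimal sketch: code embeddings and logistic prediction weights trained jointly by gradient descent on a toy cohort. All names and the synthetic data below are hypothetical; only the idea of letting the prediction loss drive the code representations is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EHR cohort: each patient is a small set of medical-code indices,
# and the (synthetic) label depends on whether code 0 is present.
VOCAB, DIM = 20, 8
patients = [rng.choice(VOCAB, size=4, replace=False) for _ in range(200)]
labels = np.array([1.0 if 0 in codes else 0.0 for codes in patients])

E = 0.1 * rng.standard_normal((VOCAB, DIM))   # code embedding matrix
w, b = np.zeros(DIM), 0.0                     # prediction-task parameters
lr = 0.3

def forward(codes):
    x = E[codes].mean(axis=0)                  # mean-pool codes into a patient vector
    prob = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # logistic prediction
    return x, prob

def mean_loss():
    eps = 1e-9
    total = 0.0
    for codes, y in zip(patients, labels):
        _, prob = forward(codes)
        total -= y * np.log(prob + eps) + (1 - y) * np.log(1 - prob + eps)
    return total / len(patients)

loss_before = mean_loss()
for _ in range(50):                            # joint SGD over embeddings and weights
    for codes, y in zip(patients, labels):
        x, prob = forward(codes)
        dz = prob - y                          # d(loss)/d(logit)
        E[codes] -= lr * dz * w / len(codes)   # task gradient reaches the code vectors
        w -= lr * dz * x
        b -= lr * dz
loss_after = mean_loss()
```

    Because the gradient of the prediction loss flows into the embedding matrix, codes that matter for this particular task end up with vectors that separate the classes, which is the kind of supervised advantage the paper attributes to task-guided representations.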

  18. MoDOT pavement preservation research program volume VII, re-calibration of triggers and performance models.

    DOT National Transportation Integrated Search

    2015-10-01

    The objective of this task is to develop the concept and framework for a procedure to routinely create, re-calibrate, and update the : Trigger Tables and Performance Models. The scope of work for Task 6 includes a limited review of the recent pavemen...

  19. Designing simulator-based training: an approach integrating cognitive task analysis and four-component instructional design.

    PubMed

    Tjiam, Irene M; Schout, Barbara M A; Hendrikx, Ad J M; Scherpbier, Albert J J M; Witjes, J Alfred; van Merriënboer, Jeroen J G

    2012-01-01

    Most studies of simulator-based surgical skills training have focused on the acquisition of psychomotor skills, but surgical procedures are complex tasks requiring both psychomotor and cognitive skills. As skills training is modelled on expert performance consisting partly of unconscious automatic processes that experts are not always able to explicate, simulator developers should collaborate with educational experts and physicians in developing efficient and effective training programmes. This article presents an approach to designing simulator-based skill training comprising cognitive task analysis integrated with instructional design according to the four-component instructional design (4C/ID) model. This theory-driven approach is illustrated by a description of how it was used in the development of simulator-based training for the nephrostomy procedure.

  20. Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    The primary tasks performed are: (1) the development of a second order local thermodynamic nonequilibrium (LTNE) model for atoms; (2) the continued development of vibrational nonequilibrium models; and (3) the development of a new multicomponent diffusion model. In addition, studies comparing these new models with previous models and results were conducted and reported.

  1. Hypersonic Vehicle Propulsion System Simplified Model Development

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.; Raitano, Paul; Le, Dzu K.; Ouzts, Peter

    2007-01-01

    This document addresses the modeling task plan for the hypersonic GN&C GRC team members. The overall propulsion system modeling task plan is a multi-step process, and the task plan identified in this document addresses the first steps (short-term modeling goals). The procedures and tools produced from this effort will be useful for creating simplified dynamic models applicable to a hypersonic vehicle propulsion system. The document continues with the GRC short-term modeling goal. Next, a general description of the desired simplified model is presented, along with simulations that are available to varying degrees. The simulations may be available in electronic form (FORTRAN, CFD, MATLAB, ...) or in paper form in published documents. Finally, roadmaps outlining possible avenues towards realizing the simplified model are presented.

  2. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    NASA Technical Reports Server (NTRS)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities in response to a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, & Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace System's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components and environmental features, as well as operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, & Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, & Foyle, 2013 for a description of the guidelines that were generated from the model's results; and Gore, Hooey, & Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate, and arrive-at-gate networks illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top (highest) level down to finer-grained levels. The first task completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1).
    This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded operator models that comprise MIDAS. Following the initial settings, the model progresses to the first tasks required of the two flight deck operators, the Captain (CA) and the First Officer (FO). The task sets initialize operator-specific settings prior to loading all of the alerts, probes, and other events that occur in the scenario. As a note, CA and FO were the terms used in developing this model, but the CA can also be thought of as the Pilot Flying (PF), while the FO can be considered the Pilot-Not-Flying (PNF) or Pilot Monitoring (PM). As such, the document refers to the operators as PF/CA and PNF/FO, respectively.

  3. A toolbox and a record for scientific model development

    NASA Technical Reports Server (NTRS)

    Ellman, Thomas

    1994-01-01

    Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two-pronged approach for handling them: (1) designing a 'Model Development Toolbox' that includes a basic set of model-constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two-pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well-defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models, and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, an existing knowledge-based scientific software design tool.
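
    The interdependence the authors describe (a usable development record is only possible when model construction goes through a fixed set of operations) can be sketched in a few lines. The class and operation names below are illustrative, not SIGMA's actual interface.

```python
# A minimal "toolbox + record" sketch: model construction is only possible
# through registered operations, so every build step can be logged and
# replayed later (names are illustrative, not SIGMA's actual API).
class ModelRecord:
    def __init__(self):
        self.steps = []                  # (operation name, args) in order applied

    def log(self, name, args):
        self.steps.append((name, args))

    def replay(self, toolbox, model=None):
        for name, args in self.steps:    # rebuild the model step by step
            model = toolbox.ops[name](model, *args)
        return model

class Toolbox:
    def __init__(self):
        self.ops = {}
        self.record = ModelRecord()

    def register(self, name, fn):
        self.ops[name] = fn

    def apply(self, name, model, *args):
        self.record.log(name, args)      # the record is a side effect of use
        return self.ops[name](model, *args)

toolbox = Toolbox()
toolbox.register("new_model", lambda m: {"components": []})
toolbox.register("add_component", lambda m, c: {"components": m["components"] + [c]})

model = toolbox.apply("new_model", None)
model = toolbox.apply("add_component", model, "advection")
model = toolbox.apply("add_component", model, "diffusion")

# Because construction went through the toolbox, the record can rebuild
# (or serve as the starting point for revising) the model from scratch.
rebuilt = toolbox.record.replay(toolbox)
```

    The record here is exactly the paper's point: it exists for free once the toolbox is the only way to build a model, and revision tools can edit the step list rather than the finished artifact.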

  4. Overview of the Cancer Genetics and Pathway Curation tasks of BioNLP Shared Task 2013

    PubMed Central

    2015-01-01

    Background Since their introduction in 2009, the BioNLP Shared Task events have been instrumental in advancing the development of methods and resources for the automatic extraction of information from the biomedical literature. In this paper, we present the Cancer Genetics (CG) and Pathway Curation (PC) tasks, two event extraction tasks introduced in the BioNLP Shared Task 2013. The CG task focuses on cancer, emphasizing the extraction of physiological and pathological processes at various levels of biological organization, and the PC task targets reactions relevant to the development of biomolecular pathway models, defining its extraction targets on the basis of established pathway representations and ontologies. Results Six groups participated in the CG task and two groups in the PC task, together applying a wide range of extraction approaches including both established state-of-the-art systems and newly introduced extraction methods. The best-performing systems achieved F-scores of 55% on the CG task and 53% on the PC task, demonstrating a level of performance comparable to the best results achieved in similar previously proposed tasks. Conclusions The results indicate that existing event extraction technology can generalize to meet the novel challenges represented by the CG and PC task settings, suggesting that extraction methods are capable of supporting the construction of knowledge bases on the molecular mechanisms of cancer and the curation of biomolecular pathway models. The CG and PC tasks continue as open challenges for all interested parties, with data, tools and resources available from the shared task homepage. PMID:26202570

  5. Development and Optimization of Gas-Assisted Gravity Drainage (GAGD) Process for Improved Light Oil Recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dandina N. Rao; Subhash C. Ayirala; Madhav M. Kulkarni

    This is the final report describing the evolution of the project "Development and Optimization of Gas-Assisted Gravity Drainage (GAGD) Process for Improved Light Oil Recovery" from its conceptual stage in 2002 to the field implementation of the developed technology in 2006. This comprehensive report includes all the experimental research, model developments, analyses of results, salient conclusions, and the technology transfer efforts. As planned in the original proposal, the project has been conducted in three separate and concurrent tasks: Task 1 involved a physical model study of the new GAGD process, Task 2 was aimed at further developing the vanishing interfacial tension (VIT) technique for gas-oil miscibility determination, and Task 3 was directed at determining multiphase gas-oil drainage and displacement characteristics in reservoir rocks at realistic pressures and temperatures. The project started with the task of recruiting well-qualified graduate research assistants. After collecting and reviewing the literature on different aspects of the project, such as gas injection EOR, gravity drainage, miscibility characterization, and gas-oil displacement characteristics in porous media, research plans were developed for the experimental work to be conducted under each of the three tasks. Based on the literature review and dimensional analysis, preliminary criteria were developed for the design of the partially-scaled physical model. Additionally, the need for a separate transparent model for visual observation and verification of the displacement and drainage behavior under gas-assisted gravity drainage was identified. Various materials and methods (ceramic porous material, Stucco, Portland cement, sintered glass beads) were attempted in order to fabricate a satisfactory visual model.
In addition to proving the effectiveness of the GAGD process (through measured oil recoveries in the range of 65 to 87% IOIP), the visual models demonstrated three possible multiphase mechanisms at work, namely, Darcy-type displacement until gas breakthrough, gravity drainage after breakthrough, and film-drainage in gas-invaded zones throughout the duration of the process. The partially-scaled physical model was used in a series of experiments to study the effects of wettability, gas-oil miscibility, secondary versus tertiary mode gas injection, and the presence of fractures on GAGD oil recovery. In addition to yielding recoveries of up to 80% IOIP, even in the immiscible gas injection mode, the partially-scaled physical model confirmed the positive influence of fractures and oil-wet characteristics in enhancing oil recoveries over those measured in the homogeneous (unfractured) water-wet models. An interesting observation was that a single logarithmic relationship between the oil recovery and the gravity number was obeyed by the physical model, the high-pressure corefloods, and the field data.

  6. Mining and Minerals Technical Advisory Committee on Curriculum Development. Job Clusters, Competencies and Task Analysis.

    ERIC Educational Resources Information Center

    Northern Montana Coll., Havre. Montana Center for Vocational Education, Research, Curriculum and Personnel Development.

    This skills inventory for mining occupations was developed by a technical committee in Montana to assist in the development of model curricula and to address state labor market needs. The committee included employers from the mining industry, members of trade and professional associations, and educators. The validated task list and defined job…

  7. A Neuroconstructivist Model of Past Tense Development and Processing

    ERIC Educational Resources Information Center

    Westermann, Gert; Ruh, Nicolas

    2012-01-01

    We present a neural network model of learning and processing the English past tense that is based on the notion that experience-dependent cortical development is a core aspect of cognitive development. During learning the model adds and removes units and connections to develop a task-specific final architecture. The model provides an integrated…

  8. Employer-Sponsored Career Development Programs. Information Series No. 231.

    ERIC Educational Resources Information Center

    Lancaster, Anita Sklare; Berne, Richard R.

    This monograph presents an overview of employer-sponsored career development programs. It is divided into four sections. The "Adult Development" and "Adult Career Development" sections review pertinent theories and research (basic concepts, task model, transition model, theme model, adult career stages, career anchors approach, career development…

  9. A software engineering perspective on environmental modeling framework design: The object modeling system

    USDA-ARS?s Scientific Manuscript database

    The environmental modeling community has historically been concerned with the proliferation of models and the effort associated with collective model development tasks (e.g., code generation, data provisioning and transformation, etc.). Environmental modeling frameworks (EMFs) have been developed to...

  10. Modeling of Depth Cue Integration in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Kaiser, Mary K.; Davis, Wendy

    2003-01-01

    Psychophysical research has demonstrated that human observers utilize a variety of visual cues to form a perception of three-dimensional depth. However, most of these studies have utilized a passive judgement paradigm and failed to consider depth-cue integration as a dynamic and task-specific process. In the current study, we developed and experimentally validated a model of manual control of depth that examines how two potential cues (stereo disparity and relative size) are utilized in both first- and second-order active depth control tasks. We found that stereo disparity plays the dominant role in determining depth position, while relative size dominates perception of depth velocity. Stereo disparity also plays a reduced role when made less salient (i.e., when viewing distance is increased). Manual control models predict that position information is sufficient for first-order control tasks, while velocity information is required to perform a second-order control task. Thus, the rules for depth-cue integration in active control tasks are dependent on both task demands and cue quality.
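
    The manual-control prediction in the last sentences (position feedback suffices for a first-order plant, while a second-order plant also needs velocity information) can be checked with a small simulation. The plant equations and gain values below are illustrative, not the authors' model.

```python
import numpy as np

def simulate(order, use_velocity, steps=2000, dt=0.01):
    """Regulate the plant state toward zero with proportional (and optional
    rate) feedback; returns the RMS of the state over the final quarter."""
    x, v = 1.0, 0.0                      # state (e.g., depth error) and its rate
    kp, kv = 2.0, 2.0                    # illustrative feedback gains
    hist = []
    for _ in range(steps):
        u = -kp * x - (kv * v if use_velocity else 0.0)
        if order == 1:
            x += dt * u                  # first-order plant: x' = u
        else:
            v += dt * u                  # second-order plant: x'' = u
            x += dt * v
        hist.append(x)
    return float(np.sqrt(np.mean(np.square(hist[-500:]))))

first_pos_only = simulate(order=1, use_velocity=False)    # settles
second_pos_only = simulate(order=2, use_velocity=False)   # sustained oscillation
second_with_rate = simulate(order=2, use_velocity=True)   # settles
```

    With position feedback alone, the second-order plant behaves like an undamped oscillator and never settles; adding the rate term provides damping, mirroring the claim that velocity information is required for second-order control.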

  11. Temporal Sequences Quantify the Contributions of Individual Fixations in Complex Perceptual Matching Tasks

    ERIC Educational Resources Information Center

    Busey, Thomas; Yu, Chen; Wyatte, Dean; Vanderkolk, John

    2013-01-01

    Perceptual tasks such as object matching, mammogram interpretation, mental rotation, and satellite imagery change detection often require the assignment of correspondences to fuse information across views. We apply techniques developed for machine translation to the gaze data recorded from a complex perceptual matching task modeled after…

  12. A Systematic Review of fMRI Reward Paradigms in Adolescents versus Adults: The Impact of Task Design and Implications for Understanding Neurodevelopment

    PubMed Central

    Richards, Jessica M.; Plate, Rista C.; Ernst, Monique

    2013-01-01

    The neural systems underlying reward-related behaviors across development have recently generated a great amount of interest. Yet, the neurodevelopmental literature on reward processing is marked by inconsistencies due to the heterogeneity of the reward paradigms used, the complexity of the behaviors being studied, and the developing brain itself as a moving target. The present review will examine task design as one source of variability across findings by compiling this literature along three dimensions: (1) task structures, (2) cognitive processes, and (3) neural systems. We start with the presentation of a heuristic neural systems model, the Triadic Model, as a way to provide a theoretical framework for the neuroscience research on motivated behaviors. We then discuss the principles guiding reward task development. Finally, we review the extant developmental neuroimaging literature on reward-related processing, organized by reward task type. We hope that this approach will help to clarify the literature on the functional neurodevelopment of reward-related neural systems, and to identify the role of the experimental parameters that significantly influence these findings. PMID:23518270

  13. Upper limb load as a function of repetitive task parameters: part 1--a model of upper limb load.

    PubMed

    Roman-Liu, Danuta

    2005-01-01

    The aim of the study was to develop a theoretical indicator of upper limb musculoskeletal load based on repetitive task parameters. For this purpose, a dimensionless parameter, the Integrated Cycle Load (ICL), was adopted. It expresses the upper limb load that occurs during one cycle. The indicator is based on a model of a repetitive task, which consists of a model of the upper limb, a model of the basic types of upper limb forces, and a model of repetitive task parameters such as the length of the cycle, the lengths of the periods within the cycle, and the external force exerted during each of those periods. Calculations of the ICL parameter were performed for 12 different variants of external load characterised by different values of the repetitive task parameters. A comparison of the ICL, which expresses external load, with a physiological indicator of upper limb load is presented in Part 2 of the paper.
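
    The abstract lists the inputs to the ICL (cycle length, the lengths of the periods within a cycle, and the external force exerted in each period) but not its formula. The sketch below therefore shows only a plausible dimensionless aggregation of those inputs, a duration-weighted sum of relative force, and is not Roman-Liu's published definition.

```python
# Illustrative cycle-load index (NOT the published ICL formula): each
# period's force, expressed as a fraction of a force capacity, is weighted
# by the fraction of the cycle that the period occupies.
def cycle_load(periods, max_force):
    """periods: list of (duration_s, force_N); max_force: capacity in N."""
    cycle_length = sum(d for d, _ in periods)
    return sum((d / cycle_length) * (f / max_force) for d, f in periods)

# One cycle: 2 s exerting 30 N, 3 s exerting 10 N, then 5 s of rest.
icl = cycle_load([(2.0, 30.0), (3.0, 10.0), (5.0, 0.0)], max_force=100.0)
```

    The result is dimensionless, as the abstract requires, and rises with either longer high-force periods or higher relative force, which is the qualitative behavior one would expect of any cycle-load indicator built from these parameters.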

  14. Evaluation of a computerized aid for creating human behavioral representations of human-computer interaction.

    PubMed

    Williams, Kent E; Voigt, Jeffrey R

    2004-01-01

    The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.

  15. Model-based software engineering for an optical navigation system for spacecraft

    NASA Astrophysics Data System (ADS)

    Franz, T.; Lüdtke, D.; Maibaum, O.; Gerndt, A.

    2017-09-01

    The project Autonomous Terrain-based Optical Navigation (ATON) at the German Aerospace Center (DLR) is developing an optical navigation system for future landing missions on celestial bodies such as the moon or asteroids. Image data obtained by optical sensors can be used for autonomous determination of the spacecraft's position and attitude. Camera-in-the-loop experiments in the Testbed for Robotic Optical Navigation (TRON) laboratory and flight campaigns with unmanned aerial vehicle (UAV) are performed to gather flight data for further development and to test the system in a closed-loop scenario. The software modules are executed in the C++ Tasking Framework that provides the means to concurrently run the modules in separated tasks, send messages between tasks, and schedule task execution based on events. Since the project is developed in collaboration with several institutes in different domains at DLR, clearly defined and well-documented interfaces are necessary. Preventing misconceptions caused by differences between various development philosophies and standards turned out to be challenging. After the first development cycles with manual Interface Control Documents (ICD) and manual implementation of the complex interactions between modules, we switched to a model-based approach. The ATON model covers a graphical description of the modules, their parameters and communication patterns. Type and consistency checks on this formal level help to reduce errors in the system. The model enables the generation of interfaces and unified data types as well as their documentation. Furthermore, the C++ code for the exchange of data between the modules and the scheduling of the software tasks is created automatically. With this approach, changing the data flow in the system or adding additional components (e.g., a second camera) have become trivial.
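
    The execution model described here (modules running as separated tasks, messages sent between tasks, and task execution scheduled by events) can be sketched as a single-threaded analogue. The API below is hypothetical and far simpler than DLR's C++ Tasking Framework; it only illustrates event-driven activation.

```python
from collections import deque

# Minimal single-threaded analogue of an event-driven tasking framework:
# tasks subscribe to message channels and are scheduled for execution
# whenever a message arrives (hypothetical API, not DLR's C++ framework).
class Scheduler:
    def __init__(self):
        self.subscribers = {}            # channel -> list of task callables
        self.queue = deque()             # pending (task, message) activations

    def subscribe(self, channel, task):
        self.subscribers.setdefault(channel, []).append(task)

    def publish(self, channel, message):
        for task in self.subscribers.get(channel, []):
            self.queue.append((task, message))

    def run(self):
        while self.queue:                # event loop: drain activations in order
            task, message = self.queue.popleft()
            task(message)

sched = Scheduler()
log = []

def camera_task(msg):                    # e.g., an image-processing module
    log.append(("camera", msg))
    sched.publish("features", msg + ":features")   # triggers downstream task

def navigation_task(msg):                # e.g., a pose-estimation module
    log.append(("nav", msg))

sched.subscribe("image", camera_task)
sched.subscribe("features", navigation_task)
sched.publish("image", "frame0")
sched.run()
```

    Publishing a message is all that is needed to schedule the subscribed tasks, so the data flow between modules is defined entirely by channel subscriptions, which is what makes regenerating the wiring from a model (as the article describes) straightforward.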

  16. Model-based software engineering for an optical navigation system for spacecraft

    NASA Astrophysics Data System (ADS)

    Franz, T.; Lüdtke, D.; Maibaum, O.; Gerndt, A.

    2018-06-01

    The project Autonomous Terrain-based Optical Navigation (ATON) at the German Aerospace Center (DLR) is developing an optical navigation system for future landing missions on celestial bodies such as the moon or asteroids. Image data obtained by optical sensors can be used for autonomous determination of the spacecraft's position and attitude. Camera-in-the-loop experiments in the Testbed for Robotic Optical Navigation (TRON) laboratory and flight campaigns with unmanned aerial vehicle (UAV) are performed to gather flight data for further development and to test the system in a closed-loop scenario. The software modules are executed in the C++ Tasking Framework that provides the means to concurrently run the modules in separated tasks, send messages between tasks, and schedule task execution based on events. Since the project is developed in collaboration with several institutes in different domains at DLR, clearly defined and well-documented interfaces are necessary. Preventing misconceptions caused by differences between various development philosophies and standards turned out to be challenging. After the first development cycles with manual Interface Control Documents (ICD) and manual implementation of the complex interactions between modules, we switched to a model-based approach. The ATON model covers a graphical description of the modules, their parameters and communication patterns. Type and consistency checks on this formal level help to reduce errors in the system. The model enables the generation of interfaces and unified data types as well as their documentation. Furthermore, the C++ code for the exchange of data between the modules and the scheduling of the software tasks is created automatically. With this approach, changing the data flow in the system or adding additional components (e.g., a second camera) have become trivial.

  17. Florida Model Task Force on Diabetic Retinopathy: Development of an Interagency Network.

    ERIC Educational Resources Information Center

    Groff, G.; And Others

    1990-01-01

    This article describes the development of a mechanism to organize a network in Florida for individuals who are at risk for diabetic retinopathy. The task force comprised representatives from governmental, academic, professional, and voluntary organizations. It worked to educate professionals, patients, and the public through brochures, resource…

  18. Modelling of non-equilibrium flow in the branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Sumskoi, S. I.; Sverchkov, A. M.; Lisanov, M. V.; Egorov, A. F.

    2016-09-01

    This article presents a mathematical model and a numerical method for solving the task of water hammer in a branched pipeline system. The task is considered in a one-dimensional, non-stationary formulation that takes into account realities such as changes in the diameter of the pipeline and its branches. Comparison with an existing analytic solution shows that the proposed method possesses good accuracy. With the help of the developed model and numerical method, the task of transmitting a complex of compression waves through the branching pipeline system when several shut-off valves operate has been solved. It should be noted that the proposed model and method may easily be applied to a number of other tasks, for example, describing the flow of blood in vessels.
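
    For intuition, the classical Joukowsky relation gives the pressure rise from a rapid velocity change, a standard back-of-the-envelope check for water-hammer problems like the one the article solves numerically (the numbers below are illustrative):

```python
# Classical Joukowsky estimate of the pressure rise from instantaneous valve
# closure (a textbook check, not the paper's full branched-pipe model).
def joukowsky_surge(rho, wave_speed, delta_v):
    """Pressure rise [Pa] = fluid density * acoustic wave speed * velocity change."""
    return rho * wave_speed * delta_v

# Water (1000 kg/m^3), typical steel-pipe wave speed ~1200 m/s, 2 m/s flow stopped:
dp = joukowsky_surge(1000.0, 1200.0, 2.0)   # 2.4 MPa
```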

  19. Function-Task-Competency Approach to Curriculum Development in Vocational Education in Agriculture: Research Report No. 1. Project Background, Plan, and Model Development.

    ERIC Educational Resources Information Center

    Matteson, Harold R.

    The report explains the construction of the function-task-competency method of developing vocational education curricula in agriculture at the secondary and postsecondary levels. It discusses at some length five approaches to the development of vocational education curricula used in the past: the subject approach (which centers on subjects taught…

  20. Identifying optimum performance trade-offs using a cognitively bounded rational analysis model of discretionary task interleaving.

    PubMed

    Janssen, Christian P; Brumby, Duncan P; Dowell, John; Chater, Nick; Howes, Andrew

    2011-01-01

    We report the results of a dual-task study in which participants performed a tracking and typing task under various experimental conditions. An objective payoff function was used to provide explicit feedback on how participants should trade off performance between the tasks. Results show that participants' dual-task interleaving strategy was sensitive to changes in the difficulty of the tracking task and resulted in differences in overall task performance. To test the hypothesis that people select strategies that maximize payoff, a Cognitively Bounded Rational Analysis model was developed. This analysis evaluated a variety of dual-task interleaving strategies to identify the optimal strategy for maximizing payoff in each condition. The model predicts that the region of optimum performance is different between experimental conditions. The correspondence between human data and the prediction of the optimal strategy is found to be remarkably high across a number of performance measures. This suggests that participants were honing their behavior to maximize payoff. Limitations are discussed. Copyright © 2011 Cognitive Science Society, Inc.

  1. IEA Wind Task 37: Systems Modeling Framework and Ontology for Wind Turbines and Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykes, Katherine L; Zahle, Frederik; Merz, Karl

    This presentation will provide an overview of progress to date in the development of a system modeling framework and ontology for wind turbines and plants as part of the larger IEA Wind Task 37 on wind energy systems engineering. The goals of the effort are to create a set of guidelines for a common conceptual architecture for wind turbines and plants so that practitioners can more easily share descriptions of wind turbines and plants across multiple parties and reduce the effort for translating descriptions between models; integrate different models together and collaborate on model development; and translate models among different levels of fidelity in the system.

  2. Modeling personnel turnover in the parametric organization

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1991-01-01

    A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job-function transitions, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict the staffing profiles required to meet functional needs at the desired time. The model can be extended by revising the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
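
    The linear-dynamical-system idea can be sketched directly: staffing per job function evolves as x[t+1] = A x[t], with A holding transition likelihoods. The three-function breakdown and the numbers below are illustrative, not from the report:

```python
# Sketch of staffing as a linear dynamical system x[t+1] = A @ x[t], where A
# holds job-function transition likelihoods. Functions and values invented.
A = [  # rows: destination function, cols: origin (each column sums to 1.0)
    [0.90, 0.10, 0.00],   # data collection
    [0.10, 0.80, 0.05],   # cost modeling
    [0.00, 0.10, 0.95],   # analysis/reporting
]
x = [10.0, 5.0, 2.0]      # current headcount per function

def step(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

for _ in range(4):        # project staffing four periods ahead
    x = step(A, x)
# total headcount is conserved because each column of A sums to 1
```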

  3. Methods for cost estimation in software project management

    NASA Astrophysics Data System (ADS)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed at which the processes used in the software development field have changed makes forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, but others hold that it can be solved using well-known mathematical methods (e.g., multiple linear regression) together with newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. The first part of the paper summarizes the major achievements in the search for a model that estimates overall project costs, together with a description of existing software development process models. The last part proposes a basic mathematical model based on genetic programming, including a description of the chosen fitness function and chromosome representation. The perspective of the model described is linked to the current reality of software development, taking as its basis the software product life cycle and the current challenges and innovations in the field. Based on the authors' experience and an analysis of existing models and the product life cycle, it is concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
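
    For context, the basic COCOMO 81 effort equation that the PROMISE datasets relate to is Effort = a * KLOC^b person-months. The values a = 2.4, b = 1.05 are Boehm's published organic-mode coefficients; a genetic algorithm such as the paper's would instead fit the coefficients to historical project data:

```python
# Basic COCOMO 81 effort equation: effort (person-months) = a * KLOC**b.
# a=2.4, b=1.05 are the published organic-mode coefficients; a genetic
# algorithm would search for a and b that best fit historical data.
def cocomo_effort(kloc, a=2.4, b=1.05):
    return a * kloc ** b

effort = cocomo_effort(32)   # estimate for a 32 KLOC organic-mode project
```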

  4. Archaeological predictive model set.

    DOT National Transportation Integrated Search

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to: develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  5. An Excel sheet for inferring children's number-knower levels from give-N data.

    PubMed

    Negen, James; Sarnecka, Barbara W; Lee, Michael D

    2012-03-01

    Number-knower levels are a series of stages of number concept development in early childhood. A child's number-knower level is typically assessed using the give-N task. Although the task procedure has been highly refined, the standard ways of analyzing give-N data remain somewhat crude. Lee and Sarnecka (Cogn Sci 34:51-67, 2010, in press) have developed a Bayesian model of children's performance on the give-N task that allows knower level to be inferred in a more principled way. However, this model requires considerable expertise and computational effort to implement and apply to data. Here, we present an approximation to the model's inference that can be computed with Microsoft Excel. We demonstrate the accuracy of the approximation and provide instructions for its use. This makes the powerful inferential capabilities of the Bayesian model accessible to developmental researchers interested in estimating knower levels from give-N data.
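
    A greatly simplified illustration of the inference (far cruder than Lee and Sarnecka's model or the Excel approximation, with an invented likelihood): assume a z-knower answers requests up to z correctly with high probability and guesses beyond that, then compute a posterior over z from give-N trials:

```python
# Greatly simplified knower-level inference from give-N data; the likelihood
# below is invented for illustration and is not the published model.
MAX_GIVE = 8   # the child can hand over 1..8 objects

def likelihood(z, asked, gave, eps=0.05):
    if asked <= z:                      # within the child's knower level
        return 1 - eps if gave == asked else eps / (MAX_GIVE - 1)
    return 1.0 / MAX_GIVE               # beyond it: near-random guessing

def posterior(trials, levels=(1, 2, 3, 4, 5)):
    post = {z: 1.0 for z in levels}     # uniform prior over knower levels
    for asked, gave in trials:
        for z in levels:
            post[z] *= likelihood(z, asked, gave)
    total = sum(post.values())
    return {z: p / total for z, p in post.items()}

# Correct on requests for 1 and 2, wrong on 3: evidence for a "two-knower".
trials = [(1, 1), (2, 2), (3, 7), (3, 5), (2, 2)]
post = posterior(trials)
best = max(post, key=post.get)
```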

  6. Model curriculum outline for Alternatively Fueled Vehicle (AFV) automotive technician training in light and medium duty CNG and LPG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This model curriculum outline was developed using a turbo-DACUM (Developing a Curriculum) process, which utilizes practicing experts to undertake a comprehensive job and task analysis. The job and task analysis serves to establish current baseline data accurately and to improve both the process and the product of the job through constant and continuous improvement of training. The DACUM process is based on the following assumptions: (1) Expert workers are the best source for task analysis. (2) Any occupation can be described effectively in terms of tasks. (3) All tasks imply knowledge, skills, and attitudes/values. A DACUM panel, comprising six experienced and knowledgeable technicians presently working in the field, was given an orientation to the DACUM process. The panel then identified, verified, and sequenced all the necessary job duty areas and tasks. The broad duty categories were rated according to relative importance and assigned percentage ratings in priority order. The panel then rated every task for each of the duties on a scale of 1 to 3. A rating of 3 indicates an "essential" task, a rating of 2 indicates an "important" task, and a rating of 1 indicates a "desirable" task.

  7. Developing Analytic Rating Guides for "TOEFL iBT"® Integrated Speaking Tasks. "TOEFL iBT"® Research Report, TOEFL iBT-20. ETS Research Report. RR-13-13

    ERIC Educational Resources Information Center

    Jamieson, Joan; Poonpon, Kornwipa

    2013-01-01

    Research and development of a new type of scoring rubric for the integrated speaking tasks of "TOEFL iBT"® are described. These "analytic rating guides" could be helpful if tasks modeled after those in TOEFL iBT were used for formative assessment, a purpose which is different from TOEFL iBT's primary use for admission…

  8. Benchmarking Ada tasking on tightly coupled multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Collard, Philippe; Goforth, Andre; Marquardt, Matthew

    1989-01-01

    The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.

  9. From guideline modeling to guideline execution: defining guideline-based decision-support services.

    PubMed Central

    Tu, S. W.; Musen, M. A.

    2000-01-01

    We describe our task-based approach to defining the guideline-based decision-support services that the EON system provides. We categorize uses of guidelines in patient-specific decision support into a set of generic tasks--making of decisions, specification of work to be performed, interpretation of data, setting of goals, and issuance of alerts and reminders--that can be solved using various techniques. Our model includes constructs required for representing the knowledge used by these techniques. These constructs form a toolkit from which developers can select modeling solutions for guideline tasks. Based on the tasks and the guideline model, we define a guideline-execution architecture and a model of interactions between a decision-support server and clients that invoke services provided by the server. These services use generic interfaces derived from guideline tasks and their associated modeling constructs. We describe two implementations of these decision-support services and discuss how this work can be generalized. We argue that a well-defined specification of guideline-based decision-support services will facilitate sharing of tools that implement computable clinical guidelines. PMID:11080007

  10. Compensatory Limb Use and Behavioral Assessment of Motor Skill Learning Following Sensorimotor Cortex Injury in a Mouse Model of Ischemic Stroke

    PubMed Central

    Kerr, Abigail L.; Tennant, Kelly A.

    2014-01-01

    Mouse models have become increasingly popular in the field of behavioral neuroscience, and specifically in studies of experimental stroke. As models advance, it is important to develop sensitive behavioral measures specific to the mouse. The present protocol describes a skilled motor task for use in mouse models of stroke. The Pasta Matrix Reaching Task functions as a versatile and sensitive behavioral assay that permits experimenters to collect accurate outcome data and manipulate limb use to mimic human clinical phenomena including compensatory strategies (i.e., learned non-use) and focused rehabilitative training. When combined with neuroanatomical tools, this task also permits researchers to explore the mechanisms that support behavioral recovery of function (or lack thereof) following stroke. The task is both simple and affordable to set up and conduct, offering a variety of training and testing options for numerous research questions concerning functional outcome following injury. Though the task has been applied to mouse models of stroke, it may also be beneficial in studies of functional outcome in other upper extremity injury models. PMID:25045916

  11. Ozone measurement systems improvements studies

    NASA Technical Reports Server (NTRS)

    Thomas, R. W.; Guard, K.; Holland, A. C.; Spurling, J. F.

    1974-01-01

    Results are summarized of an initial study of techniques for measuring atmospheric ozone, carried out as the first phase of a program to improve ozone measurement techniques. The study concentrated on two measurement systems, the electrochemical cell (ECC) ozonesonde and the Dobson ozone spectrophotometer, and consisted of two tasks. The first task consisted of error modeling and system error analysis of the two measurement systems. Under the second task, a Monte Carlo model of the Dobson ozone measurement technique was developed and programmed for computer operation.
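
    The Monte Carlo error-analysis pattern of the second task can be illustrated generically: perturb the inputs of a measurement equation with their assumed errors and examine the spread of the output. The toy retrieval equation below is illustrative only, not the Dobson algorithm:

```python
# Generic Monte Carlo error-propagation sketch: sample noisy inputs, push
# them through a measurement equation, read off the output mean and spread.
import math
import random

random.seed(42)

def measurement(i1, i2):
    # Toy retrieval: quantity proportional to a log intensity ratio.
    return math.log(i1 / i2)

def monte_carlo(n=10000, i1=2.0, i2=1.0, noise=0.01):
    samples = []
    for _ in range(n):
        s = measurement(i1 * (1 + random.gauss(0, noise)),
                        i2 * (1 + random.gauss(0, noise)))
        samples.append(s)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var ** 0.5

mean, sigma = monte_carlo()
# mean is close to ln(2); sigma reflects the propagated input noise
```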

  12. Computer Aided Battery Engineering Consortium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pesaran, Ahmad

    A multi-national lab collaborative team was assembled that includes experts from academia and industry to enhance recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design - both computationally efficient. Task 1. The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2. Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit, when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3. Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.

  13. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2005-07-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.

  14. A Framework to Design and Optimize Chemical Flooding Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2006-08-31

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.

  15. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2004-11-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.

  16. Real-Time Performance Feedback for the Manual Control of Spacecraft

    NASA Astrophysics Data System (ADS)

    Karasinski, John Austin

    Real-time performance metrics were developed to quantify workload, situational awareness, and manual task performance for use as visual feedback to pilots of aerospace vehicles. Results from prior lunar lander experiments with variable levels of automation were replicated and extended to provide insights for the development of real-time metrics. Increased levels of automation resulted in increased flight performance, lower workload, and increased situational awareness. Automated Speech Recognition (ASR) was employed to detect verbal callouts as a limited measure of subjects' situational awareness. A one-dimensional manual tracking task and simple instructor-model visual feedback scheme was developed. This feedback was indicated to the operator by changing the color of a guidance element on the primary flight display, similar to how a flight instructor points out elements of a display to a student pilot. Experiments showed that for this low-complexity task, visual feedback did not change subject performance, but did increase the subjects' measured workload. Insights gained from these experiments were applied to a Simplified Aid for EVA Rescue (SAFER) inspection task. The effects of variations of an instructor-model performance-feedback strategy on human performance in a novel SAFER inspection task were investigated. Real-time feedback was found to have a statistically significant effect of improving subject performance and decreasing workload in this complicated four degree of freedom manual control task with two secondary tasks.

  17. How to Develop an Engineering Design Task

    ERIC Educational Resources Information Center

    Dankenbring, Chelsey; Capobianco, Brenda M.; Eichinger, David

    2014-01-01

    In this article, the authors provide an overview of engineering and the engineering design process, and describe the steps they took to develop a fifth grade-level, standards-based engineering design task titled "Getting the Dirt on Decomposition." Their main goal was to focus more on modeling the discrete steps they took to create and…

  18. Towards the unification of inference structures in medical diagnostic tasks.

    PubMed

    Mira, J; Rives, J; Delgado, A E; Martínez, R

    1998-01-01

    The central purpose of artificial intelligence applied to medicine is to develop models for diagnosis and therapy planning at the knowledge level, in the Newell sense, and software environments to facilitate the reduction of these models to the symbol level. The usual methodology (KADS, Common-KADS, GAMES, HELIOS, Protégé, etc) has been to develop libraries of generic tasks and reusable problem-solving methods with explicit ontologies. The principal problem which clinicians have with these methodological developments concerns the diversity and complexity of new terms whose meaning is not sufficiently clear, precise, unambiguous and consensual for them to be accessible in the daily clinical environment. As a contribution to the solution of this problem, we develop in this article the conjecture that one inference structure is enough to describe the set of analysis tasks associated with medical diagnoses. To this end, we first propose a modification of the systematic diagnostic inference scheme to obtain an analysis generic task and then compare it with the monitoring and the heuristic classification task inference schemes using as comparison criteria the compatibility of domain roles (data structures), the similarity in the inferences, and the commonality in the set of assumptions which underlie the functionally equivalent models. The equivalences proposed are illustrated with several examples. Note that though our ongoing work aims to simplify the methodology and to increase the precision of the terms used, the proposal presented here should be viewed more in the nature of a conjecture.

  19. Cognitive task load in a naval ship control centre: from identification to prediction.

    PubMed

    Grootjen, M; Neerincx, M A; Veltman, J A

    Deployment of information and communication technology will lead to further automation of control centre tasks and an increasing amount of information to be processed. A method for establishing adequate levels of cognitive task load for the operators in such complex environments has been developed. It is based on a model distinguishing three load factors: time occupied, task-set switching, and level of information processing. Application of the method resulted in eight scenarios for eight extremes of task load (i.e. low and high values for each load factor). These scenarios were performed by 13 teams in a high-fidelity control centre simulator of the Royal Netherlands Navy. The results show that the method provides good prediction of the task load that will actually appear in the simulator. The model allowed identification of under- and overload situations showing negative effects on operator performance corresponding to controlled experiments in a less realistic task environment. Tools proposed to keep the operator at an optimum task load are (adaptive) task allocation and interface support.
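
    The abstract names the three load factors but not their combination rule, so the scoring and thresholds below are purely hypothetical, sketching how such a model could classify under-, optimal-, and overload regions:

```python
# Hypothetical illustration of scoring the three load factors named in the
# abstract (time occupied, task-set switching, level of information
# processing); the weighting and thresholds are invented, not the method's.
def load_region(time_occupied, switch_rate, info_level):
    """Each factor normalized to 0..1; returns a coarse load classification."""
    score = (time_occupied + switch_rate + info_level) / 3.0
    if score < 0.3:
        return "underload"
    if score > 0.7:
        return "overload"
    return "optimal"

region = load_region(time_occupied=0.9, switch_rate=0.8, info_level=0.7)
```

    A scenario generator like the one in the study would pick low and high values for each factor to produce the eight extreme combinations.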

  20. IGT-Open: An open-source, computerized version of the Iowa Gambling Task.

    PubMed

    Dancy, Christopher L; Ritter, Frank E

    2017-06-01

    The Iowa Gambling Task (IGT) is commonly used to understand the processes involved in decision-making. Though the task was originally run without a computer, using a computerized version of the task has become typical. These computerized versions of the IGT are useful, because they can make the task more standardized across studies and allow for the task to be used in environments where a physical version of the task may be difficult or impossible to use (e.g., while collecting brain imaging data). Though these computerized versions of the IGT have been useful for experimentation, having multiple software implementations of the task could present reliability issues. We present an open-source software version of the Iowa Gambling Task (called IGT-Open) that allows for millisecond visual presentation accuracy and is freely available to be used and modified. This software has been used to collect data from human subjects and also has been used to run model-based simulations with computational process models developed to run in the ACT-R architecture.
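
    As background, the IGT presents four decks: two with large gains but larger long-run losses, and two with small gains that come out ahead. A minimal payoff sketch (the loss schedule here is averaged and simplified from the classic task, not taken from IGT-Open):

```python
# Minimal sketch of Iowa Gambling Task deck payoffs, simplified to average
# net values: decks A/B win big but lose more over time, C/D come out ahead.
DECKS = {
    "A": {"gain": 100, "avg_loss": 125},   # net -25 per pick on average
    "B": {"gain": 100, "avg_loss": 125},
    "C": {"gain": 50,  "avg_loss": 25},    # net +25 per pick on average
    "D": {"gain": 50,  "avg_loss": 25},
}

def expected_net(deck):
    d = DECKS[deck]
    return d["gain"] - d["avg_loss"]

def play(choices, start=2000):
    # Score a sequence of deck choices against the averaged payoff schedule.
    return start + sum(expected_net(c) for c in choices)

bankroll = play(["A"] * 10 + ["C"] * 10)   # 2000 - 250 + 250 = 2000
```

    Shifting picks from A/B toward C/D over trials is the learning signature the task measures, which computerized versions such as IGT-Open record with precise timing.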

  1. Altered behavior in experimental cortical dysplasia.

    PubMed

    Zhou, Fu-Wen; Rani, Asha; Martinez-Diaz, Hildabelis; Foster, Thomas C; Roper, Steven N

    2011-12-01

    Developmental delay and cognitive impairment are common comorbidities in people with epilepsy associated with malformations of cortical development (MCDs). We studied cognition and behavior in an animal model of diffuse cortical dysplasia (CD), in utero irradiation, using a battery of behavioral tests for neuromuscular and cognitive function. Fetal rats were exposed to 2.25 Gy external radiation on embryonic day 17 (E17). At 1 month of age they were tested using an open field task, a grip strength task, a grid walk task, inhibitory avoidance, an object recognition task, and the Morris water maze task. Rats with CD showed reduced nonlocomotor activity in the open field task and impaired motor coordination for grid walking but normal grip strength. They showed a reduced tendency to recognize novel objects and reduced retention in an inhibitory avoidance task. Water maze testing showed that learning and memory were impaired in irradiated rats for both cue discrimination and spatially oriented tasks. These results demonstrate significant deficits in cortex- and hippocampus-dependent cognitive functions associated with the diffuse abnormalities of cortical and hippocampal development that have been documented in this model. This study documents multimodal cognitive deficits associated with CD and can serve as the foundation for future investigations into the mechanisms of and possible therapeutic interventions for this problem. Wiley Periodicals, Inc. © 2011 International League Against Epilepsy.

  2. An ICAI architecture for troubleshooting in complex, dynamic systems

    NASA Technical Reports Server (NTRS)

    Fath, Janet L.; Mitchell, Christine M.; Govindaraj, T.

    1990-01-01

    Ahab, an intelligent computer-aided instruction (ICAI) program, illustrates an architecture for simulator-based ICAI programs to teach troubleshooting in complex, dynamic environments. The architecture posits three elements of a computerized instructor: the task model, the student model, and the instructional module. The task model is a prescriptive model of expert performance that uses symptomatic and topographic search strategies to provide students with directed problem-solving aids. The student model is a descriptive model of student performance in the context of the task model. This student model compares the student and task models, critiques student performance, and provides interactive performance feedback. The instructional module coordinates information presented by the instructional media, the task model, and the student model so that each student receives individualized instruction. Concept and metaconcept knowledge that supports these elements is contained in frames and production rules, respectively. The results of an experimental evaluation are discussed. They support the hypothesis that training with an adaptive online system built using the Ahab architecture produces better performance than training using simulator practice alone, at least with unfamiliar problems. It is not sufficient to develop an expert strategy and present it to students using offline materials. The training is most effective if it adapts to individual student needs.

  3. A Model of Manual Control with Perspective Scene Viewing

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend

    2013-01-01

    A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than to translation (longitudinal).
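
    The Crossover Model referenced here is McRuer's standard result from the manual-control literature: near the crossover frequency, the combined operator-plus-plant open loop behaves like Y(jω) ≈ (ω_c/jω)·e^(-jωτ). A quick numerical check (the ω_c and τ values are illustrative):

```python
# McRuer's crossover model (standard manual-control result, not specific to
# this paper): near crossover the open loop is Y(jw) ≈ (wc / jw) * e^(-jw*tau).
import cmath
import math

def crossover_open_loop(w, wc=4.0, tau=0.3):
    return (wc / (1j * w)) * cmath.exp(-1j * w * tau)

w = 4.0                                   # evaluate at w = wc
Y = crossover_open_loop(w)
mag = abs(Y)                              # unity gain at crossover
phase_margin_deg = 180 + math.degrees(cmath.phase(Y))
# phase margin = 90 deg - wc*tau (converted to degrees), about 21.2 deg here
```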

  4. Software Development Cost Estimation Executive Summary

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Menzies, Tim

    2006-01-01

    Identify simple, fully validated cost models that provide estimation uncertainty with each cost estimate, based on the COCOMO variable set. Use machine learning techniques to determine: a) the minimum number of cost drivers required for NASA domain-based cost models; b) the minimum number of data records required; and c) estimation uncertainty. Build a repository of software cost estimation information. Coordinate tool development and data collection with: a) tasks funded by PA&E Cost Analysis; b) the IV&V Effort Estimation Task; and c) NASA SEPG activities.

  5. Developing Symbolic Capacity One Step at a Time

    ERIC Educational Resources Information Center

    Huttenlocher, Janellen; Vasilyeva, Marina; Newcombe, Nora; Duffy, Sean

    2008-01-01

    The present research examines the ability of children as young as 4 years to use models in tasks that require scaling of distance along a single dimension. In Experiment 1, we found that tasks involving models are similar in difficulty to those involving maps that we studied earlier (Huttenlocher, J., Newcombe, N., & Vasilyeva, M. (1999). Spatial…

  6. Creep-fatigue life prediction for engine hot section materials (isotropic)

    NASA Technical Reports Server (NTRS)

    Moreno, V.

    1982-01-01

    The objectives of this program are the investigation of fundamental approaches to high temperature crack initiation life prediction, identification of specific modeling strategies and the development of specific models for component relevant loading conditions. A survey of the hot section material/coating systems used throughout the gas turbine industry is included. Two material/coating systems will be identified for the program. The material/coating system designated as the base system shall be used throughout Tasks 1-12. The alternate material/coating system will be used only in Task 12 for further evaluation of the models developed on the base material. In Task 2, candidate life prediction approaches will be screened based on a set of criteria that includes experience of the approaches within the literature, correlation with isothermal data generated on the base material, and judgements relative to the applicability of the approach for the complex cycles to be considered in the option program. The two most promising approaches will be identified. Task 3 further evaluates the best approach using additional base material fatigue testing including verification tests. Task 4 consists of technical, scheduling, financial and all other reporting requirements in accordance with the Reports of Work clause.

  7. Task Assignment Heuristics for Parallel and Distributed CFD Applications

    NASA Technical Reports Server (NTRS)

    Lopez-Benitez, Noe; Djomehri, M. Jahed; Biswas, Rupak

    2003-01-01

    This paper proposes a task graph (TG) model to represent a single discrete step of multi-block overset grid computational fluid dynamics (CFD) applications. The TG model is then used not only to balance the computational workload across the overset grids but also to reduce inter-grid communication costs. We have developed a set of task assignment heuristics based on the constraints inherent in this class of CFD problems. Two basic assignments, the smallest task first (STF) and the largest task first (LTF), are first presented. They are then systematically refined to further reduce inter-grid communication costs. To predict the performance of the proposed task assignment heuristics, extensive performance evaluations are conducted on a synthetic TG with tasks defined in terms of the number of grid points in predetermined overlapping grids. A TG derived from a realistic problem with eight million grid points is also used as a test case.
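    The two baseline heuristics named in the abstract can be sketched as greedy list scheduling; the code below is a hypothetical illustration (task sizes, processor count, and tie-breaking are invented, and the paper's communication-cost refinements are not modeled):

    ```python
    # Hypothetical sketch of the two baseline assignment heuristics named in
    # the abstract (STF = smallest task first, LTF = largest task first).
    # Task sizes and processor counts are illustrative, not from the paper.

    def assign(task_sizes, n_procs, largest_first):
        """Greedily place each task on the currently least-loaded processor."""
        order = sorted(task_sizes, reverse=largest_first)
        loads = [0.0] * n_procs
        placement = []
        for size in order:
            p = loads.index(min(loads))   # least-loaded processor
            loads[p] += size
            placement.append((size, p))
        return loads, placement

    # Example: grid-point counts of six overset-grid tasks on three processors.
    tasks = [80, 10, 50, 30, 60, 20]
    ltf_loads, _ = assign(tasks, 3, largest_first=True)
    stf_loads, _ = assign(tasks, 3, largest_first=False)
    print(max(ltf_loads), max(stf_loads))
    ```

    Ordering largest-first typically yields a lower maximum load than smallest-first, which is one reason LTF is the stronger baseline for load balance.
    
    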

  8. Recoding low-level simulator data into a record of meaningful task performance: the integrated task modeling environment (ITME).

    PubMed

    King, Robert; Parker, Simon; Mouzakis, Kon; Fletcher, Winston; Fitzgerald, Patrick

    2007-11-01

    The Integrated Task Modeling Environment (ITME) is a user-friendly software tool that has been developed to automatically recode low-level data into an empirical record of meaningful task performance. The present research investigated and validated the performance of the ITME software package by conducting complex simulation missions and comparing the task analyses produced by ITME with task analyses produced by experienced video analysts. A very high interrater reliability (≥ .94) existed between experienced video analysts and the ITME for the task analyses produced for each mission. The mean session time:analysis time ratio was 1:24 using video analysis techniques and 1:5 using the ITME. It was concluded that the ITME produced task analyses that were as reliable as those produced by experienced video analysts, and significantly reduced the time cost associated with these analyses.

  9. The integrated manual and automatic control of complex flight systems

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.

    1991-01-01

    Research dealt with the general area of optimal flight control synthesis for manned flight vehicles. The work was generic; no specific vehicle was the focus of study. However, the class of vehicles generally considered were those for which high authority, multivariable control systems might be considered, for the purpose of stabilization and the achievement of optimal handling characteristics. Within this scope, the topics of study included several optimal control synthesis techniques, control-theoretic modeling of the human operator in flight control tasks, and the development of possible handling qualities metrics and/or measures of merit. Basic contributions were made in all these topics, including human operator (pilot) models for multi-loop tasks, optimal output feedback flight control synthesis techniques; experimental validations of the methods developed, and fundamental modeling studies of the air-to-air tracking and flared landing tasks.

  10. Cognitive control over learning: Creating, clustering and generalizing task-set structure

    PubMed Central

    Collins, Anne G.E.; Frank, Michael J.

    2013-01-01

    Executive functions and learning share common neural substrates essential for their expression, notably in prefrontal cortex and basal ganglia. Understanding how they interact requires studying how cognitive control facilitates learning, but also how learning provides the (potentially hidden) structure, such as abstract rules or task-sets, needed for cognitive control. We investigate this question from three complementary angles. First, we develop a new computational “C-TS” (context-task-set) model inspired by non-parametric Bayesian methods, specifying how the learner might infer hidden structure and decide whether to re-use that structure in new situations, or to create new structure. Second, we develop a neurobiologically explicit model to assess potential mechanisms of such interactive structured learning in multiple circuits linking frontal cortex and basal ganglia. We systematically explore the link between these levels of modeling across multiple task demands. We find that the network provides an approximate implementation of high level C-TS computations, where manipulations of specific neural mechanisms are well captured by variations in distinct C-TS parameters. Third, this synergism across models yields strong predictions about the nature of human optimal and suboptimal choices and response times during learning. In particular, the models suggest that participants spontaneously build task-set structure into a learning problem when not cued to do so, which predicts positive and negative transfer in subsequent generalization tests. We provide evidence for these predictions in two experiments and show that the C-TS model provides a good quantitative fit to human sequences of choices in this task. These findings implicate a strong tendency to interactively engage cognitive control and learning, resulting in structured abstract representations that afford generalization opportunities, and thus potentially long-term rather than short-term optimality.
PMID:23356780
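    The non-parametric Bayesian machinery the C-TS model is described as drawing on can be illustrated with a Chinese restaurant process prior, under which a learner re-uses an existing task-set with probability proportional to its popularity, or creates a new one with probability governed by a concentration parameter. This sketch is generic; the alpha value and sampling details are illustrative, not the paper's:

    ```python
    import random

    def crp_assign(n_items, alpha, seed=0):
        """Sample cluster (task-set) assignments from a Chinese restaurant
        process. Each new item joins an existing cluster with probability
        proportional to the cluster's size, or starts a new cluster with
        probability proportional to alpha -- mirroring the C-TS choice
        between re-using and creating task-set structure."""
        rng = random.Random(seed)
        clusters = []          # cluster sizes
        assignments = []
        for i in range(n_items):
            weights = clusters + [alpha]
            total = i + alpha
            r = rng.uniform(0, total)
            cum, choice = 0.0, len(clusters)
            for k, w in enumerate(weights):
                cum += w
                if r <= cum:
                    choice = k
                    break
            if choice == len(clusters):
                clusters.append(1)       # create a new task-set
            else:
                clusters[choice] += 1    # re-use an existing task-set
            assignments.append(choice)
        return assignments

    print(crp_assign(10, alpha=1.0))
    ```

    With alpha = 0 the process never creates a second cluster (pure re-use); larger alpha values create new task-sets more readily.
    
    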

  11. Simulation, Model Verification and Controls Development of Brayton Cycle PM Alternator: Testing and Simulation of 2 KW PM Generator with Diode Bridge Output

    NASA Technical Reports Server (NTRS)

    Stankovic, Ana V.

    2003-01-01

    Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.

  12. Segment-based acoustic models for continuous speech recognition

    NASA Astrophysics Data System (ADS)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.
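    The rescoring strategy described, making expensive segment models feasible by re-ranking HMM-generated N-best hypotheses, can be sketched as follows; the scoring functions and toy log-probabilities are invented for illustration and are not the authors' system:

    ```python
    # Illustrative sketch of N-best rescoring: an HMM first pass produces N
    # hypotheses; more expensive models re-score each one, and the list is
    # re-ranked. Scores here are made-up log-probabilities.

    def rescore_nbest(nbest, segment_model_score, lm_score, lm_weight=1.0):
        """Re-rank N-best hypotheses by combined segment-model and LM scores."""
        rescored = [
            (hyp, segment_model_score(hyp) + lm_weight * lm_score(hyp))
            for hyp, _ in nbest
        ]
        return sorted(rescored, key=lambda x: x[1], reverse=True)

    # Toy example: the first-pass ranking is overturned by the second pass.
    nbest = [("the cat sat", -10.0), ("the cat sad", -9.5)]
    seg = {"the cat sat": -4.0, "the cat sad": -6.0}
    lm = {"the cat sat": -2.0, "the cat sad": -5.0}
    best = rescore_nbest(nbest, seg.get, lm.get)[0][0]
    print(best)
    ```

    The search space of the higher-order models is thus restricted to the N candidates, which is what makes the approach computationally feasible.
    
    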

  13. ATMOSPHERIC MODEL DEVELOPMENT

    EPA Science Inventory

    This task provides credible state of the art air quality models and guidance for use in implementation of National Ambient Air Quality Standards for ozone and PM. This research effort is to develop and improve air quality models, such as the Community Multiscale Air Quality (CMA...

  14. Novel Analog For Muscle Deconditioning

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Bloomberg, Jacob

    2010-01-01

    Existing models of muscle deconditioning are cumbersome and expensive (ex: bedrest). We propose a new model utilizing a weighted suit to manipulate strength, power or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and post-flight astronaut performance data using the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (most difficult task) showed thresholds of: leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, knee extension (KE) isokinetic/BW of 6 Nm/kg and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of strength/BW has promise as an appropriate analog for spaceflight-induced loss of muscle function for predicting occupational task performance and establishing operationally relevant exercise targets.
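    A threshold of the kind reported, a breakpoint below which task time rises, can be illustrated with a simple two-segment ("broken-stick") fit found by grid search. The data and fitting details below are synthetic and do not reproduce the study's spline regression or its reported values:

    ```python
    # Illustrative two-segment fit locating a muscle-function threshold below
    # which task time rises. Data are synthetic, not the study's.

    def sse_for_break(xs, ys, b):
        """Sum of squared errors for a fit that is flat above breakpoint b
        and linear in (b - x) below it (least squares in each piece)."""
        above = [y for x, y in zip(xs, ys) if x >= b]
        below = [(b - x, y) for x, y in zip(xs, ys) if x < b]
        if not above or len(below) < 2:
            return float("inf")
        mean_hi = sum(above) / len(above)
        sse = sum((y - mean_hi) ** 2 for y in above)
        # least-squares slope through (0, mean_hi) for the declining piece
        num = sum(d * (y - mean_hi) for d, y in below)
        den = sum(d * d for d, _ in below)
        slope = num / den
        sse += sum((y - (mean_hi + slope * d)) ** 2 for d, y in below)
        return sse

    # Synthetic data: task time is flat above x = 18, rises steeply below it.
    xs = [6, 9, 12, 15, 18, 21, 24, 27, 30]
    ys = [70, 55, 40, 25, 10, 10, 10, 10, 10]
    best_b = min(range(8, 30), key=lambda b: sse_for_break(xs, ys, b))
    print(best_b)
    ```

    The breakpoint minimizing the fit error plays the role of the operationally relevant threshold (here, recovered at x = 18 by construction).
    
    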

  15. Novel Analog For Muscle Deconditioning

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Ploutz-Snyder, Robert; Bloomberg, Jacob

    2011-01-01

    Existing models (such as bed rest) of muscle deconditioning are cumbersome and expensive. We propose a new model utilizing a weighted suit to manipulate strength, power, or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and post-flight astronaut performance data for the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (most difficult task) showed thresholds of leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, isokinetic knee extension (KE)/BW of 6 Nm/kg, and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of relative strength has promise as an appropriate analog for spaceflight-induced loss of muscle function, for predicting occupational task performance and establishing operationally relevant strength thresholds.

  16. International Space Station ECLSS Technical Task Agreement Summary Report

    NASA Technical Reports Server (NTRS)

    Ray, C. D. (Compiler); Salyer, B. H. (Compiler)

    1999-01-01

    This Technical Memorandum provides a summary of current work accomplished under Technical Task Agreement (TTA) by the Marshall Space Flight Center (MSFC) regarding the International Space Station (ISS) Environmental Control and Life Support System (ECLSS). Current activities include ECLSS component design and development, computer model development, subsystem/integrated system testing, life testing, and general test support provided to the ISS program. Under ECLSS design, MSFC was responsible for the six major ECLSS functions, specifications and standards, and component design and development, and served as the architectural control agent for the ISS ECLSS. MSFC was responsible for ECLSS analytical model development. In-house subsystem and system level analysis and testing were conducted in support of the design process, including testing of air revitalization, water reclamation and management hardware, and certain nonregenerative systems. The activities described herein were approved in task agreements between MSFC and the NASA Headquarters Space Station Program Management Office and their prime contractor for the ISS, Boeing. These MSFC activities support the design, development, testing, and flight of ECLSS equipment planned by Boeing. MSFC's unique capabilities for performing integrated systems testing and analyses, and its ability to perform some tasks cheaper and faster to support ISS program needs, are the basis for the TTA activities.

  17. Software cost/resource modeling: Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. J.

    1980-01-01

    A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), the Rome Air Development Center, the University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
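    Parametric models in this family (including COCOMO-style models) typically take a multiplicative form: effort grows as a power of size, scaled by a product of cost-driver multipliers elicited from questions like the roughly 50 mentioned above. The coefficients and driver values below are illustrative, not the JPL model's calibration:

    ```python
    # Hedged sketch of the multiplicative parametric form used by cost models
    # of this family: effort = a * size^b * product(cost-driver multipliers).
    # All numbers below are invented for illustration.

    import math

    def effort_person_months(ksloc, a=3.0, b=1.12, multipliers=()):
        m = math.prod(multipliers) if multipliers else 1.0
        return a * ksloc ** b * m

    # 32 KSLOC task; complexity raises cost 15%, an experienced team cuts 10%.
    pm = effort_person_months(32.0, multipliers=(1.15, 0.90))
    print(round(pm, 1))
    ```

    Calibration (as described in the abstract) amounts to fitting a, b, and the multiplier tables to the organization's own lifecycle statistics.
    
    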

  18. Two Essays on Increasing the Learning Effectiveness of Economics Education

    ERIC Educational Resources Information Center

    McLean, William J.

    2010-01-01

    Scope and Method of Study: This study develops, implements, and evaluates a new economics teaching pedagogy based on the U.S. Army's systems approach to training model. Using the approach, tasks are identified that compose the task domain for the Principles of Microeconomics course. From the 130 identified tasks, 73 are used by Economics of…

  19. A Didactic Analysis of Content Development during the Peer Teaching Tasks of a Sport Education Season

    ERIC Educational Resources Information Center

    Wallhead, Tristan; O'Sullivan, Mary

    2007-01-01

    Background: Research on Sport Education (SE) has shown the curriculum model to be effective in motivating students to undertake specific role responsibilities and engage in the student-led tasks of the curriculum. Despite this level of engagement, emerging evidence suggests that student leadership within the peer teaching tasks of the curriculum…

  20. The Effects of Dimensional Salience, Pretraining Task, and Developmental Level Upon Bidimensional Processing in a Matching Task.

    ERIC Educational Resources Information Center

    Katsuyama, Ronald M.; Reid, Amy

    Purposes of this study are to determine the effects of (1) preassessed dimensional salience upon performance in a bi-dimensional matching task, and (2) pretraining conditions expected to facilitate bi-dimensional processing. An additional aim was to elucidate a model of development involving changing salience hierarchies by comparing the effects…

  1. Information Processing at the Memoryful and Memoryless Channel Levels in Problem-Solving and Recall Tasks.

    ERIC Educational Resources Information Center

    Fazio, Frank; Moser, Gene W.

    A probabilistic model (see SE 013 578) describing information processing during the cognitive tasks of recall and problem solving was tested, refined, and developed by testing graduate students on a number of tasks which combined oral, written, and overt "input" and "output" modes in several ways. In a verbal chain one subject…

  2. A systematic review of fMRI reward paradigms used in studies of adolescents vs. adults: the impact of task design and implications for understanding neurodevelopment.

    PubMed

    Richards, Jessica M; Plate, Rista C; Ernst, Monique

    2013-06-01

    The neural systems underlying reward-related behaviors across development have recently generated a great amount of interest. Yet, the neurodevelopmental literature on reward processing is marked by inconsistencies due to the heterogeneity of the reward paradigms used, the complexity of the behaviors being studied, and the developing brain itself as a moving target. The present review will examine task design as one source of variability across findings by compiling this literature along three dimensions: (1) task structures, (2) cognitive processes, and (3) neural systems. We start with the presentation of a heuristic neural systems model, the Triadic Model, as a way to provide a theoretical framework for the neuroscience research on motivated behaviors. We then discuss the principles guiding reward task development. Finally, we review the extant developmental neuroimaging literature on reward-related processing, organized by reward task type. We hope that this approach will help to clarify the literature on the functional neurodevelopment of reward-related neural systems, and to identify the role of the experimental parameters that significantly influence these findings. Published by Elsevier Ltd.

  3. Problems and research issues associated with the hybrid control of force and displacement

    NASA Technical Reports Server (NTRS)

    Paul, R. P.

    1987-01-01

    The hybrid control of force and position is basic to the science of robotics but is only poorly understood. Before much progress can be made in robotics, this problem needs to be solved in a robust manner. However, the use of hybrid control implies the existence of a model of the environment: not an exact model (since a function of hybrid control is to accommodate modeling errors), but a model appropriate for planning and reasoning. The monitored forces in position control are interpreted in terms of a model of the task, as are the monitored displacements in force control. The reaction forces of the task of writing are far different from those of hammering. The programming of actions in such a modeled world becomes more complicated, and systems of task-level programming need to be developed. Sensor-based robotics, of which force sensing is the most basic, implies an entirely new level of technology. Indeed, robot force sensors, no matter how compliant they may be, must be protected from accidental collisions. This implies other sensors to monitor task execution and, again, the use of a world model. This new level of technology is the task level, in which task actions are specified, not the actions of individual sensors and manipulators.

  4. Atrial Model Development and Prototype Simulations: CRADA Final Report on Tasks 3 and 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Hara, T.; Zhang, X.; Villongco, C.

    2016-10-28

    The goal of this CRADA was to develop essential tools needed to simulate human atrial electrophysiology in 3-dimensions using an anatomical image-based anatomy and a physiologically detailed human cellular model. The atria were modeled as anisotropic, representing the preferentially longitudinal electrical coupling between myocytes. Across the entire anatomy, cellular electrophysiology was heterogeneous, with left and right atrial myocytes defined differently. Left and right cell types for the “control” case of sinus rhythm (SR) were compared with remodeled electrophysiology and calcium cycling characteristics of chronic atrial fibrillation (cAF). The effects of Isoproterenol (ISO), a beta-adrenergic agonist that represents the functional consequences of PKA phosphorylation of various ion channels and transporters, were also simulated in SR and cAF to represent atrial activity under physical or emotional stress. Results and findings from Tasks 3 & 4 are described. Tasks 3 and 4 are, respectively: input parameters prepared for a Cardioid simulation; a report including recommendations for additional scenario development and post-processing analytic strategy.

  5. Research and development on performance models of thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan

    2009-07-01

    Traditional ACQUIRE models predict performance on the discrimination tasks of detection, target orientation, recognition, and identification for military targets, based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). The Johnson criteria are generally pessimistic for performance prediction of sampled imagers, given the development of focal plane array (FPA) detectors and digital image processing technology. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; in particular, the TTP metric provides better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel-width metrics are presented to describe synthesis performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; open questions in the performance assessment of TIS are indicated; finally, directions for the development of performance models for TIS are discussed.

  6. Automating CPM-GOMS

    NASA Technical Reports Server (NTRS)

    John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger

    2002-01-01

    CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization is now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. Of the approaches available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition. Decomposition produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, the choice of level of detail dependent on the predictions required.
Although GOMS has proven useful in HCI, tools to support the construction of GOMS models have not yet come into general use.

  7. An evidence accumulation model for conflict detection performance in a simulated air traffic control task.

    PubMed

    Neal, Andrew; Kwantes, Peter J

    2009-04-01

    The aim of this article is to develop a formal model of conflict detection performance. Our model assumes that participants iteratively sample evidence regarding the state of the world and accumulate it over time. A decision is made when the evidence reaches a threshold that changes over time in response to the increasing urgency of the task. Two experiments were conducted to examine the effects of conflict geometry and timing on response proportions and response time. The model is able to predict the observed pattern of response times, including a nonmonotonic relationship between distance at point of closest approach and response time, as well as effects of angle of approach and relative velocity. The results demonstrate that evidence accumulation models provide a good account of performance on a conflict detection task. Evidence accumulation models are a form of dynamic signal detection theory, allowing for the analysis of response times as well as response proportions, and can be used for simulating human performance on dynamic decision tasks.
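    The model class described, evidence sampled and accumulated over time and compared against a threshold that tightens as the task becomes urgent, can be sketched in a few lines. Drift, noise, and collapse-rate parameters here are illustrative, not fitted values from the experiments:

    ```python
    import random

    # Minimal sketch of an evidence accumulation model with a collapsing
    # threshold: evidence about the conflict is sampled each time step, and a
    # response is made when the running total crosses a bound that decays
    # with elapsed time (urgency). All parameter values are illustrative.

    def accumulate(drift, noise=1.0, threshold0=20.0, collapse=0.05,
                   dt=1.0, max_steps=1000, seed=1):
        rng = random.Random(seed)
        evidence, t = 0.0, 0
        while t < max_steps:
            t += 1
            evidence += drift * dt + rng.gauss(0.0, noise)
            bound = threshold0 - collapse * t   # urgency: threshold decays
            if abs(evidence) >= max(bound, 0.0):
                return ("conflict" if evidence > 0 else "no conflict", t)
        return ("no response", t)

    print(accumulate(drift=0.5))
    ```

    Stronger evidence (larger drift, e.g. a smaller distance at closest approach) crosses the bound sooner, producing the faster responses the model predicts.
    
    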

  8. Maximally Expressive Modeling of Operations Tasks

    NASA Technical Reports Server (NTRS)

    Jaap, John; Richardson, Lea; Davis, Elizabeth

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.

  9. Development of cost-effective surfactant flooding technology, Quarterly report, October 1995--December 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Sepehrnoori, K.

    1995-12-31

    The objective of this research is to develop cost-effective surfactant flooding technology by using simulation studies to evaluate and optimize alternative design strategies taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. In this quarter, we have continued working on Task 2 to optimize surfactant flooding design and have added economic analysis to the optimization process. An economic model was developed using a spreadsheet and the discounted cash flow (DCF) method of economic analysis. The model was designed specifically for a domestic onshore surfactant flood and has been used to economically evaluate previous work that used a technical approach to optimization. The DCF model outputs common economic decision making criteria, such as net present value (NPV), internal rate of return (IRR), and payback period.
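    The DCF quantities the spreadsheet model reports can be sketched directly; the cash-flow profile below is invented for illustration and is not from the report:

    ```python
    # Sketch of the discounted-cash-flow quantities the abstract's spreadsheet
    # model reports (NPV and IRR). Cash flows are invented for illustration.

    def npv(rate, cashflows):
        """Net present value; cashflows[0] is the time-zero (negative) outlay."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def irr(cashflows, lo=-0.99, hi=10.0):
        """Internal rate of return by bisection on npv(rate) = 0."""
        for _ in range(200):
            mid = (lo + hi) / 2
            if npv(mid, cashflows) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Surfactant-flood-style profile: large up-front cost, delayed revenue.
    flows = [-1000.0, 100.0, 400.0, 500.0, 500.0, 300.0]
    print(round(npv(0.10, flows), 1), round(irr(flows), 3))
    ```

    Payback period follows from the same series: the first year in which the cumulative (discounted) cash flow turns positive.
    
    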

  10. The internal model: A study of the relative contribution of proprioception and visual information to failure detection in dynamic systems. [sensitivity of operators versus monitors to failures

    NASA Technical Reports Server (NTRS)

    Kessel, C.; Wickens, C. D.

    1978-01-01

    The development of the internal model as it pertains to the detection of step changes in the order of control dynamics is investigated for two modes of participation: whether the subjects are actively controlling those dynamics or are monitoring an autopilot controlling them. A transfer of training design was used to evaluate the relative contribution of proprioception and visual information to the overall accuracy of the internal model. Sixteen subjects either tracked or monitored the system dynamics as a 2-dimensional pursuit display under single task conditions and concurrently with a sub-critical tracking task at two difficulty levels. Detection performance was faster and more accurate in the manual as opposed to the autopilot mode. The concurrent tracking task produced a decrement in detection performance for all conditions though this was more marked for the manual mode. The development of an internal model in the manual mode transferred positively to the automatic mode producing enhanced detection performance. There was no transfer from the internal model developed in the automatic mode to the manual mode.

  11. Modeling Image Patches with a Generic Dictionary of Mini-Epitomes

    PubMed Central

    Papandreou, George; Chen, Liang-Chieh; Yuille, Alan L.

    2015-01-01

    The goal of this paper is to question the necessity of features like SIFT in categorical visual recognition tasks. As an alternative, we develop a generative model for the raw intensity of image patches and show that it can support image classification performance on par with optimized SIFT-based techniques in a bag-of-visual-words setting. A key ingredient of the proposed model is a compact dictionary of mini-epitomes, learned in an unsupervised fashion on a large collection of images. The use of epitomes allows us to explicitly account for photometric and position variability in image appearance. We show that this flexibility considerably increases the capacity of the dictionary to accurately approximate the appearance of image patches and support recognition tasks. For image classification, we develop histogram-based image encoding methods tailored to the epitomic representation, as well as an “epitomic footprint” encoding which is easy to visualize and highlights the generative nature of our model. We discuss in detail computational aspects and develop efficient algorithms to make the model scalable to large tasks. The proposed techniques are evaluated with experiments on the challenging PASCAL VOC 2007 image classification benchmark. PMID:26321859

  12. Searching Nearest Potential of Children with Intellectual Disability--Dynamic Assessment

    ERIC Educational Resources Information Center

    Kulesza, Ewa Maria

    2015-01-01

    The article discussed the issue of the diagnosis with the use of task-support-task procedure. A theoretical model of diagnosis based on the concepts by L. S. Vygotski, R. Case, and A. Bandura was described and developed. The model was tested on a group of non-disabled preschool children, and children with mild and moderate intellectual disability…

  13. It's in the Name: A Synthetic Inquiry of the Knowledge Is Power Program [KIPP]

    ERIC Educational Resources Information Center

    Ellison, Scott

    2012-01-01

    The task of this article is to interrogate the Knowledge Is Power Program (KIPP) model to develop a more robust understanding of a prominent trend in the charter school movement and education policy more generally. To accomplish this task, this article details the findings of a synthetic analysis that examines the KIPP model as a Hegelian whole…

  14. A Connectionist Model of a Continuous Developmental Transition in the Balance Scale Task

    ERIC Educational Resources Information Center

    Schapiro, Anna C.; McClelland, James L.

    2009-01-01

    A connectionist model of the balance scale task is presented which exhibits developmental transitions between "Rule I" and "Rule II" behavior [Siegler, R. S. (1976). Three aspects of cognitive development. "Cognitive Psychology," 8, 481-520.] as well as the "catastrophe flags" seen in data from Jansen and van der Maas [Jansen, B. R. J., & van der…

  15. Designing Spatial Visualisation Tasks for Middle School Students with a 3D Modelling Software: An Instrumental Approach

    ERIC Educational Resources Information Center

    Turgut, Melih; Uygan, Candas

    2015-01-01

    In this work, certain task designs to enhance middle school students' spatial visualisation ability, in the context of an instrumental approach, have been developed. 3D modelling software, SketchUp®, was used. In the design process, software tools were focused on and, thereafter, the aim was to interpret the instrumental genesis and spatial…

  16. Effects of Whole-Body Motion Simulation on Flight Skill Development.

    DTIC Science & Technology

    1981-10-01

    computation requirements, compared to the implementation allowing for a deviate internal model, provided further motivation for assuming a correct...We are left with two more likely explanations for the apparent trends: (1) subjects were motivated differently by the different task configurations...because of modeling constraints. The notion of task-related motivational differences are explored in Appendix E. Sensitivity analysis performed with

  17. Developing the Impossible Figures Task to Assess Visual-Spatial Talents among Chinese Students: A Rasch Measurement Model Analysis

    ERIC Educational Resources Information Center

    Chan, David W.

    2010-01-01

    Data of item responses to the Impossible Figures Task (IFT) from 492 Chinese primary, secondary, and university students were analyzed using the dichotomous Rasch measurement model. Item difficulty estimates and person ability estimates located on the same logit scale revealed that the pooled sample of Chinese students, who were relatively highly…
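The dichotomous Rasch model used in the study places person ability and item difficulty on the same logit scale; when ability equals difficulty, the probability of a correct response is exactly 0.5. A two-line sketch (the ability and difficulty values are hypothetical):

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a person of ability theta
    answers an item of difficulty b correctly (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_p(1.0, 1.0))   # ability == difficulty, so exactly 0.5
print(round(rasch_p(2.0, 0.5), 3))   # ability 1.5 logits above difficulty
```

Placing both estimates on one scale is what lets the study compare the pooled sample's ability distribution directly against the IFT item difficulties.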

  18. Data base manipulation for assessment of multiresource suitability and land change

    NASA Technical Reports Server (NTRS)

    Colwell, J.; Sanders, P.; Davis, G.; Thomson, F. (Principal Investigator)

    1981-01-01

    Progress is reported in three tasks which support the overall objectives of the renewable resources inventory task of the AgRISTARS program. In the first task, the geometric correction algorithms of the Master Data Processor were investigated to determine the utility of data corrected by this processor for U.S. Forest Service uses. The second task involved investigation of logic to form blobs as a precursor step to automatic change detection involving two dates of LANDSAT data. Some routine procedures for selecting BLOB (spatial averaging) parameters were developed. In the third task, a major effort was made to develop land suitability modeling approaches for timber, grazing, and wildlife habitat in support of resource planning efforts on the San Juan National Forest.

  19. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
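GOMS-family models such as the keystroke-level model (KLM) predict task execution time by summing primitive operator times. A sketch using the commonly cited Card, Moran & Newell operator estimates; the task sequence is hypothetical, and real GOMS analyses add rules for where mental operators occur:

```python
KLM_TIMES = {
    "K": 0.2,   # keystroke (skilled typist)
    "P": 1.1,   # point at a target with a mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(operators):
    """Predicted execution time in seconds for a sequence like 'MHPKK'."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical task: think, move hand to mouse, point at a field,
# home back to the keyboard, then type four characters.
print(round(klm_time("MHPH" + "K" * 4), 2))
```

Even this crude sum lets a designer compare two interface layouts before any usability testing, which is the cost-saving role the paragraph above describes.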

  20. The Emergent Executive: A Dynamic Field Theory of the Development of Executive Function

    PubMed Central

    Buss, Aaron T.; Spencer, John P.

    2015-01-01

    A dynamic neural field (DNF) model is presented which provides a process-based account of behavior and developmental change in a key task used to probe the early development of executive function—the Dimensional Change Card Sort (DCCS) task. In the DCCS, children must flexibly switch from sorting cards either by shape or color to sorting by the other dimension. Typically, 3-year-olds, but not 4-year-olds, lack the flexibility to do so and perseverate on the first set of rules when instructed to switch. In the DNF model, rule-use and behavioral flexibility come about through a form of dimensional attention which modulates activity within different cortical fields tuned to specific feature dimensions. In particular, we capture developmental change by increasing the strength of excitatory and inhibitory neural interactions in the dimensional attention system as well as refining the connectivity between this system and the feature-specific cortical fields. Note that although this enables the model to effectively switch tasks, the dimensional attention system does not ‘know’ the details of task-specific performance. Rather, correct performance emerges as a property of system-wide neural interactions. We show how this captures children's behavior in quantitative detail across 12 versions of the DCCS task. Moreover, we successfully test a set of novel predictions with 3-year-old children from a version of the task not explained by other theories. PMID:24818836

  1. A three-finger multisensory hand for dexterous space robotic tasks

    NASA Technical Reports Server (NTRS)

    Murase, Yuichi; Komada, Satoru; Uchiyama, Takashi; Machida, Kazuo; Akita, Kenzo

    1994-01-01

    The National Space Development Agency of Japan will launch ETS-7 in 1997 as a test bed for next-generation space technologies, including rendezvous and docking (RV&D) and space robotics. MITI has been developing a three-finger multisensory hand for complex space robotic tasks. The hand can be operated under remote control or autonomously. This paper describes the design and development of the hand and the performance of a breadboard model.

  2. Medical Writing Competency Model - Section 1: Functions, Tasks, and Activities.

    PubMed

    Clemow, David B; Wagner, Bertil; Marshallsay, Christopher; Benau, Dan; L'Heureux, Darryl; Brown, David H; Dasgupta, Devjani Ghosh; Girten, Eileen; Hubbard, Frank; Gawrylewski, Helle-Mai; Ebina, Hiroko; Stoltenborg, Janet; York, J P; Green, Kim; Wood, Linda Fossati; Toth, Lisa; Mihm, Michael; Katz, Nancy R; Vasconcelos, Nina-Maria; Sakiyama, Norihisa; Whitsell, Robin; Gopalakrishnan, Shobha; Bairnsfather, Susan; Wanderer, Tatyana; Schindler, Thomas M; Mikyas, Yeshi; Aoyama, Yumiko

    2018-01-01

    This article provides Section 1 of the 2017 Edition 2 Medical Writing Competency Model that describes the core work functions and associated tasks and activities related to professional medical writing within the life sciences industry. The functions in the Model are scientific communication strategy; document preparation, development, and finalization; document project management; document template, standard, format, and style development and maintenance; outsourcing, alliance partner, and client management; knowledge, skill, ability, and behavior development and sharing; and process improvement. The full Model also includes Section 2, which covers the knowledge, skills, abilities, and behaviors needed for medical writers to be effective in their roles; Section 2 is presented in a companion article. Regulatory, publication, and other scientific writing as well as management of writing activities are covered. The Model was developed to aid medical writers and managers within the life sciences industry regarding medical writing hiring, training, expectation and goal setting, performance evaluation, career development, retention, and role value sharing to cross-functional partners.

  3. Modeling of an Adjustable Beam Solid State Light Project

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  4. DEVELOPMENT OF AN IMPROVED SIMULATOR FOR CHEMICAL AND MICROBIAL IOR METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gary A. Pope; Kamy Sepehrnoori; Mojdeh Delshad

    2001-10-01

    This is the final report of a three-year research project on further development of a chemical and microbial improved oil recovery reservoir simulator. The objective of this research was to extend the capability of an existing simulator (UTCHEM) to improved oil recovery methods which use surfactants, polymers, gels, alkaline chemicals, microorganisms, and foam, as well as various combinations of these, in both conventional and naturally fractured oil reservoirs. The first task was the addition of a dual-porosity model for chemical IOR in naturally fractured oil reservoirs. They formulated and implemented a multiphase, multicomponent dual-porosity model for enhanced oil recovery from naturally fractured reservoirs. The multiphase dual-porosity model was tested against analytical solutions, coreflood data, and commercial simulators. The second task was the addition of a foam model. They implemented a semi-empirical surfactant/foam model in UTCHEM and validated the foam model by comparison with published laboratory data. The third task addressed several numerical and coding enhancements that greatly improve its versatility and performance. Major enhancements were made in UTCHEM output files and memory management. A graphical user interface to set up the simulation input and to process the output data on a Windows PC was developed. New solvers for solving the pressure equation and the geochemical system of equations were implemented and tested. A corner-point grid geometry option for gridding complex reservoirs was implemented and tested. Enhancements of physical property models for both chemical and microbial IOR simulations were included in the final task of this project. Additional options for calculating physical properties such as relative permeability and capillary pressure were added. A microbiological population model was developed and incorporated into UTCHEM. They have applied the model to microbial enhanced oil recovery (MEOR) processes by including the capability of permeability reduction due to biomass growth and retention. The formation of bio-products such as surfactant and polymer has also been incorporated.

  5. FINAL TECHNICAL REPORT FOR FORESTRY BIOFUEL STATEWIDE COLLABORATION CENTER (MICHIGAN)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaCourt, Donna M.; Miller, Raymond O.; Shonnard, David R.

    A team composed of scientists from Michigan State University (MSU) and Michigan Technological University (MTU) assembled to better understand, document, and improve systems for using forest-based biomass feedstocks in the production of energy products within Michigan. Work was funded by a grant (DE-EE-0000280) from the U.S. Department of Energy (DOE) and was administered by the Michigan Economic Development Corporation (MEDC). The goal of the project was to improve the forest feedstock supply infrastructure to sustainably provide woody biomass for biofuel production in Michigan over the long term. Work was divided into four broad areas with associated objectives: • TASK A: Develop a Forest-Based Biomass Assessment for Michigan – Define forest-based feedstock inventory, availability, and the potential of forest-based feedstock to support state and federal renewable energy goals while maintaining current uses. • TASK B: Improve Harvesting, Processing and Transportation Systems – Identify and develop cost-, energy-, and carbon-efficient harvesting, processing, and transportation systems. • TASK C: Improve Forest Feedstock Productivity and Sustainability – Identify and develop sustainable feedstock production systems through the establishment and monitoring of a statewide network of field trials in forests and energy plantations. • TASK D: Engage Stakeholders – Increase understanding of forest biomass production systems for biofuels by a broad range of stakeholders. The goal and objectives of this research and development project were fulfilled with key model deliverables including: 1) the Forest Biomass Inventory System (Sub-task A1) of feedstock inventory and availability and 2) the Supply Chain Model (Sub-task B2). Both models are vital to Michigan's forest biomass industry and support forecasting delivered cost, as well as carbon and energy balance. All of these elements are important to facilitate investor, operational, and policy decisions. All other sub-tasks supported the development of these two tools, either directly or by building out supporting information in the forest biomass supply chain. Outreach efforts have delivered, and continue to deliver, these user-friendly models and information to decision makers to support biomass feedstock supply chain decisions across the areas of biomass inventory and availability, procurement, harvest, forwarding, transportation, and processing. Outreach will continue on the project websites at http://www.michiganforestbiofuels.org/ and http://www.michiganwoodbiofuels.org/

  6. Towards using musculoskeletal models for intelligent control of physically assistive robots.

    PubMed

    Carmichael, Marc G; Liu, Dikai

    2011-01-01

    With the increasing number of robots being developed to physically assist humans in tasks such as rehabilitation and assistive living, more intelligent and personalized control systems are desired. In this paper we propose the use of a musculoskeletal model to estimate the strength of the user, information that can be utilized to improve control schemes in which robots physically assist humans. An optimization model is developed that utilizes a musculoskeletal model to estimate human strength in a specified dynamic state. Results of this optimization, as well as methods of using it to observe muscle-based weaknesses in task space, are presented. Lastly, potential methods for, and problems in, incorporating this model into a robot control system are discussed.

  7. MODEL DEVELOPMENT FOR FY08 CMAQ RELEASE

    EPA Science Inventory

    This task provides credible state of the art air quality models and guidance for use in implementation of National Ambient Air Quality Standards for ozone and PM. This research effort is to develop and improve air quality models, such as the Community Multiscale Air Quality (CMA...

  8. The Dynamics of Development on the Dimensional Change Card Sorting Task

    ERIC Educational Resources Information Center

    van Bers, Bianca M. C. W.; Visser, Ingmar; van Schijndel, Tessa J. P.; Mandell, Dorothy J.; Raijmakers, Maartje E. J.

    2011-01-01

    A widely used paradigm to study cognitive flexibility in preschoolers is the Dimensional Change Card Sorting (DCCS) task. The developmental dynamics of DCCS performance was studied in a cross-sectional design (N = 93, 3 to 5 years of age) using a computerized version of the standard DCCS task. A model-based analysis of the data showed that…

  9. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
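As a rough illustration of how a parametric cost model of this kind works, the sketch below uses a generic COCOMO-style form: effort scales as a power of size, adjusted by multipliers that would be derived from the questionnaire answers. This is not the DSN model itself; the coefficients and multiplier values are hypothetical.

```python
def effort_person_months(ksloc, multipliers, a=2.8, b=1.1):
    """Parametric effort estimate: nominal effort a * size^b, scaled by
    environment/technology multipliers (each > 1 inflates, < 1 deflates)."""
    adj = 1.0
    for m in multipliers:
        adj *= m
    return a * ksloc ** b * adj

# Hypothetical 20-KSLOC task: harsh environment (1.2), experienced team (0.9).
print(round(effort_person_months(20.0, [1.2, 0.9]), 1))
```

In the DSN model the calibrated effort estimate would then be spread over the standard Work Breakdown Structure to feed the PERT/CPM scheduler.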

  10. Yield model development project implementation plan

    NASA Technical Reports Server (NTRS)

    Ambroziak, R. A.

    1982-01-01

    Tasks remaining to be completed are summarized for the following major project elements: (1) evaluation of crop yield models; (2) crop yield model research and development; (3) data acquisition, processing, and storage; (4) related yield research (defining spectral and/or remote sensing data requirements; developing input for driving and testing crop growth/yield models; real-time testing of wheat plant process models); and (5) project management and support.

  11. A Pragmatic Cognitive System Engineering Approach to Model Dynamic Human Decision-Making Activities in Intelligent and Automated Systems

    DTIC Science & Technology

    2003-10-01

    Among the procedures developed to identify cognitive processes, there are the Cognitive Task Analysis (CTA) and the Cognitive Work Analysis (CWA...of Cognitive Task Design. [11] Potter, S.S., Roth, E.M., Woods, D.D., and Elm, W.C. (2000). Cognitive Task Analysis as Bootstrapping Multiple...Converging Techniques, In Schraagen, Chipman, and Shalin (Eds.). Cognitive Task Analysis . Mahwah, NJ: Lawrence Erlbaum Associates. [12] Roth, E.M

  12. Risk assessments using the Strain Index and the TLV for HAL, Part I: Task and multi-task job exposure classifications.

    PubMed

    Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun

    2017-12-01

    The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, time-weighted-average (TWA), Peak, and Typical exposure techniques to quantify physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, or a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28) with 15.5% of disagreement between low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61-93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. 
However, TWA, Typical, and Peak job exposure techniques all have limitations. Part II of this article examines whether the observed differences between these models and techniques produce different exposure-response relationships for predicting prevalence of carpal tunnel syndrome.
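The agreement statistics reported above (e.g., kappa = 0.28) can be reproduced with a small Cohen's kappa routine. The low/medium/high classifications below are hypothetical, not the study's data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two raters/models."""
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    # Expected agreement if each model assigned labels independently:
    expected = sum(count_a[c] * count_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1.0 - expected)

# Hypothetical exposure classifications for eight tasks from the two models:
si  = ["high", "high", "medium", "low", "high", "medium", "low", "low"]
tlv = ["medium", "high", "low",  "low", "high", "medium", "low", "low"]
print(round(cohens_kappa(si, tlv), 3))
```

Kappa, unlike raw percent agreement, discounts the agreement the two models would reach by chance alone, which is why the study reports both.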

  13. Deep ART Neural Model for Biologically Inspired Episodic Memory and Its Application to Task Performance of Robots.

    PubMed

    Park, Gyeong-Moon; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2018-06-01

    Robots are expected to perform smart services and to undertake various troublesome or difficult tasks in the place of humans. Since these human-scale tasks consist of a temporal sequence of events, robots need episodic memory to store and retrieve the sequences to perform the tasks autonomously in similar situations. As episodic memory, in this paper we propose a novel Deep adaptive resonance theory (ART) neural model and apply it to the task performance of the humanoid robot, Mybot, developed in the Robot Intelligence Technology Laboratory at KAIST. Deep ART has a deep structure to learn events, episodes, and longer sequences such as daily episodes. Moreover, it can retrieve the correct episode from partial input cues robustly. To demonstrate the effectiveness and applicability of the proposed Deep ART, experiments are conducted with the humanoid robot, Mybot, performing the three tasks of arranging toys, making cereal, and disposing of garbage.

  14. The SIMRAND methodology: Theory and application for the simulation of research and development projects

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    A research and development (R&D) project often involves a number of decisions that must be made concerning which subset of systems or tasks is to be undertaken to achieve the goal of the R&D project. To help in this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for the selection of the optimal subset of systems or tasks to be undertaken on an R&D project. Using alternative networks, the SIMRAND methodology models the alternative subsets of systems or tasks under consideration. Each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of preference. Uncertainty is incorporated by treating the variables of the equations probabilistically as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND I computer program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
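A minimal sketch of the SIMRAND idea: each network path is a candidate subset of tasks, each task's cost is a random variable, and paths are ranked by expected utility of the outcome. The distributions, the utility function, and the two paths below are all hypothetical:

```python
import math
import random

def utility(cost):
    """Risk-averse cardinal utility: lower cost is better, values in (0, 1]."""
    return math.exp(-cost / 100.0)

def simulate_path(task_cost_samplers, trials=20_000, seed=0):
    """Monte Carlo estimate of the expected utility of one network path."""
    rng = random.Random(seed)
    total_u = 0.0
    for _ in range(trials):
        cost = sum(sampler(rng) for sampler in task_cost_samplers)
        total_u += utility(cost)
    return total_u / trials

# Path A: two tasks; Path B: one task (triangular cost distributions,
# standing in for the expert-assessed cumulative distribution functions).
path_a = [lambda r: r.triangular(20, 60, 35), lambda r: r.triangular(10, 30, 15)]
path_b = [lambda r: r.triangular(30, 50, 40)]
print(simulate_path(path_a) < simulate_path(path_b))
```

The analytical network-reduction step described in the abstract exists precisely to keep the number of paths fed to this Monte Carlo stage manageable.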

  15. A Multidisciplinary Model for Development of Intelligent Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Park, Ok-choon; Seidel, Robert J.

    1989-01-01

    Proposes a schematic multidisciplinary model to help developers of intelligent computer-assisted instruction (ICAI) identify the types of required expertise and integrate them into a system. Highlights include domain types and expertise; knowledge acquisition; task analysis; knowledge representation; student modeling; diagnosis of learning needs;…

  16. An investigation of the development of the topological spatial structures in elementary school students

    NASA Astrophysics Data System (ADS)

    Everett, Susan Ann

    1999-09-01

    In this study the relationships among the topological spatial structures were examined in students in kindergarten, second, and fourth grades. These topological spatial structures are part of the three major types of spatial thinking: topological, projective, and Euclidean (as defined by Jean Piaget and associates). According to Piaget's model of spatial thinking, the spatial structures enable humans to think about spatial relationships at a conceptual or representational level rather than only at a simpler, perceptual level. The clinical interview technique was used to interact individually with 72 children to assess the presence of each of the different topological spatial structures. This was accomplished through the use of seven task protocols and simple objects which are familiar to young children. These task protocols allowed the investigator to interact with each child in a consistent manner. The results showed that most of the children in this study (97.2%) had not developed all of the topological spatial structures. The task scores were analyzed using non-parametric statistical tests due to the ordinal nature of the data. From the data the following results were obtained: (1) the spatial structures did not develop in random order based on the task scores but developed in the sequence expected from Piaget's model, (2) task performance improved with grade level, with fourth grade students outperforming second graders and kindergartners on each of the seven tasks, and (3) no significant differences on task performance due to gender were found. Based on these results, young elementary children are beginning to develop topological spatial thinking. This is critical since it provides the foundation for the other types of spatial thinking, projective and Euclidean.
Since spatial thinking is not a "gift" but can be developed, educators need to provide more opportunities for students to increase their level of spatial thinking since it is necessary for conceptual understanding of many different topics in math and science.

  17. Nuclear Thermal Propulsion (NTP) Development Activities at the NASA Marshall Space Flight Center - 2006 Accomplishments

    NASA Technical Reports Server (NTRS)

    Ballard, Richard O.

    2007-01-01

    In 2005-06, the Prometheus program funded a number of tasks at the NASA Marshall Space Flight Center (MSFC) to support development of a Nuclear Thermal Propulsion (NTP) system for future manned exploration missions. These tasks include the following: 1. NTP Design, Development, Test & Evaluation (DDT&E) Planning 2. NTP Mission & Systems Analysis / Stage Concepts & Engine Requirements 3. NTP Engine System Trade Space Analysis and Studies 4. NTP Engine Ground Test Facility Assessment 5. Non-Nuclear Environmental Simulator (NTREES) 6. Non-Nuclear Materials Fabrication & Evaluation 7. Multi-Physics TCA Modeling. This presentation is an overview of these tasks and their accomplishments.

  18. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  19. Spacecraft software training needs assessment research, appendices

    NASA Technical Reports Server (NTRS)

    Ratcliff, Shirley; Golas, Katharine

    1990-01-01

    The appendices to the previously reported study are presented: statistical data from task rating worksheets; SSD references; survey forms; fourth generation language, a powerful, long-term solution to maintenance cost; task list; methodology; SwRI's instructional systems development model; relevant research; and references.

  20. Incorporation of RAM techniques into simulation modeling

    NASA Astrophysics Data System (ADS)

    Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.

    1995-01-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model to represent the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.
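A hedged sketch of folding RAM data into a task-based simulation: each mission task has a duration and an exponential failure rate, and a failure adds a random repair delay. Averaging many Monte Carlo runs yields mission-time and availability estimates. The task names, rates, and repair times below are hypothetical, not AFAS/FARV data:

```python
import math
import random

TASKS = [          # (name, duration_h, failures_per_h, mean_repair_h)
    ("upload",   0.5, 0.02, 1.0),
    ("travel",   1.0, 0.05, 2.0),
    ("refuel",   0.3, 0.01, 0.5),
    ("resupply", 0.8, 0.03, 1.5),
]

def mission_time(rng):
    """One simulated mission: task durations plus any repair delays."""
    total = 0.0
    for _name, duration, rate, mttr in TASKS:
        total += duration
        # Probability of at least one failure during this task:
        if rng.random() < 1.0 - math.exp(-rate * duration):
            total += rng.expovariate(1.0 / mttr)   # repair delay
    return total

rng = random.Random(42)
runs = [mission_time(rng) for _ in range(10_000)]
ideal = sum(d for _, d, _, _ in TASKS)
print(round(sum(runs) / len(runs), 2), "h mean vs", ideal, "h failure-free")
```

The gap between the simulated mean and the failure-free mission time is exactly the kind of design-driving RAM measure the paper describes collecting.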

  1. Identification of Biokinetic Models Using the Concept of Extents.

    PubMed

    Mašić, Alma; Srinivasan, Sriniketh; Billeter, Julien; Bonvin, Dominique; Villez, Kris

    2017-07-05

    The development of a wide array of process technologies to enable the shift from conventional biological wastewater treatment processes to resource recovery systems is matched by an increasing demand for predictive capabilities. Mathematical models are excellent tools to meet this demand. However, obtaining reliable and fit-for-purpose models remains a cumbersome task due to the inherent complexity of biological wastewater treatment processes. In this work, we present a first study in the context of environmental biotechnology that adopts and explores the use of extents as a way to simplify and streamline the dynamic process modeling task. In addition, the extent-based modeling strategy is enhanced by optimal accounting for nonlinear algebraic equilibria and nonlinear measurement equations. Finally, a thorough discussion of our results explains the benefits of extent-based modeling and its potential to turn environmental process modeling into a highly automated task.

  2. Acquisition and production of skilled behavior in dynamic decision-making tasks: Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1991-01-01

    Advances in computer and control technology offer the opportunity for task-offload aiding in human-machine systems. A task-offload aid (e.g., an autopilot, an intelligent assistant) can be selectively engaged by the human operator to dynamically delegate tasks to an automated system. Successful design and performance prediction in such systems requires knowledge of the factors influencing the strategy the operator develops and uses for managing interaction with the task-offload aid. A model is presented that shows how such strategies can be predicted as a function of three task context properties (frequency and duration of secondary tasks, and costs of delaying secondary tasks) and three aid design properties (aid engagement and disengagement times, and aid performance relative to human performance). Sensitivity analysis indicates how each of these contextual and design factors affects the optimal aid usage strategy and attainable system performance. The model is applied to understanding human-automation interaction in laboratory experiments on human supervisory control behavior. The laboratory task allowed subjects freedom to determine strategies for using an autopilot in a dynamic, multi-task environment. Modeling results suggested that many subjects may indeed have been acting appropriately by not using the autopilot in the way its designers intended. Although autopilot function was technically sound, this aid was not designed with due regard to the overall task context in which it was placed. These results demonstrate the need for additional research on how people may strategically manage their own resources, as well as those provided by automation, in an effort to keep workload and performance at acceptable levels.

  3. Computational models of the Posner simple and choice reaction time tasks

    PubMed Central

    Feher da Silva, Carolina; Baldo, Marcus V. C.

    2015-01-01

    The landmark experiments by Posner in the late 1970s have shown that reaction time (RT) is faster when the stimulus appears in an expected location, as indicated by a cue; since then, the so-called Posner task has been considered a “gold standard” test of spatial attention. It is thus fundamental to understand the neural mechanisms involved in performing it. To this end, we have developed a Bayesian detection system and small integrate-and-fire neural networks, which modeled sensory and motor circuits, respectively, and optimized them to perform the Posner task under different cue type proportions and noise levels. In doing so, main findings of experimental research on RT were replicated: the relative frequency effect, suboptimal RTs and significant error rates due to noise and invalid cues, slower RT for choice RT tasks than for simple RT tasks, fastest RTs for valid cues and slowest RTs for invalid cues. Analysis of the optimized systems revealed that the employed mechanisms were consistent with related findings in neurophysiology. Our models predict that (1) the results of a Posner task may be affected by the relative frequency of valid and neutral trials, (2) in simple RT tasks, inputs from multiple locations are added together to compose a stronger signal, and (3) the cue affects motor circuits more strongly in choice RT tasks than in simple RT tasks. In discussing the computational demands of the Posner task, attention has often been described as a filter that protects the nervous system, whose capacity is limited, from information overload. Our models, however, reveal that the main problems that must be overcome to perform the Posner task effectively are distinguishing signal from external noise and selecting the appropriate response in the presence of internal noise. PMID:26190997
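
    The cue-validity effect on RT can be illustrated with a toy evidence accumulator: a valid cue boosts the drift toward the response threshold, an invalid cue reduces it. The parameters below are hypothetical, not the paper's optimized networks:

```python
import random

def trial_rt(cue_valid, rng, threshold=30.0, base_drift=1.0,
             cue_gain=0.3, noise=1.0):
    """One trial: noisy evidence accumulates to a response threshold;
    a valid cue raises the drift rate, an invalid cue lowers it."""
    drift = base_drift * (1 + cue_gain if cue_valid else 1 - cue_gain)
    evidence, steps = 0.0, 0
    while evidence < threshold:
        evidence += drift + rng.gauss(0.0, noise)
        steps += 1
    return steps   # RT in model time steps

rng = random.Random(0)
valid = sum(trial_rt(True, rng) for _ in range(500)) / 500
invalid = sum(trial_rt(False, rng) for _ in range(500)) / 500
print(valid < invalid)   # validly cued trials are faster on average
```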

  4. Self-powered information measuring wireless networks using the distribution of tasks within multicore processors

    NASA Astrophysics Data System (ADS)

    Zhuravska, Iryna M.; Koretska, Oleksandra O.; Musiyenko, Maksym P.; Surtel, Wojciech; Assembay, Azat; Kovalev, Vladimir; Tleshova, Akmaral

    2017-08-01

    The article presents basic approaches to developing self-powered information measuring wireless networks (SPIM-WN) for critical applications through the distribution of tasks within multicore processors, based on the interaction of movable components both for data transmission and for the wireless transfer of energy coming from polymetric sensors. A basic mathematical model of task scheduling in multiprocessor systems was extended to schedule and allocate tasks among the cores of a system-on-chip (SoC) computer, increasing the energy efficiency of SPIM-WN objects.

  5. Development of Mouse Models of Ovarian Cancer for Studying Tumor Biology and Testing Novel Molecularly Targeted Therapeutic Strategies

    DTIC Science & Technology

    2011-09-01

    years 2 and 3, Cho laboratory) Task 9: Treatment of tumor-bearing mice with cisplatin accompanied by MRI and BLI (completed years 2 and 3, Rehemtulla...and Cho laboratories) Task 10: Treatment of tumor-bearing mice with perifosine accompanied by MRI and BLI (completed, years 2 and 3, Rehemtulla and...Cho laboratories). Task 11: Treatment of tumor-bearing mice with SC-560 accompanied by MRI and BLI (not performed) Task 12: Histological and

  6. Toward a human-centered hyperlipidemia management system: the interaction between internal and external information on relational data search.

    PubMed

    Gong, Yang; Zhang, Jiajie

    2011-04-01

    In a distributed information search task, data representation and cognitive distribution jointly affect user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered framework, we proposed a search model and task taxonomy. The model defines its application in the context of a healthcare setting. The taxonomy clarifies the legitimate operations for each type of search task over relational data. We then developed experimental prototypes of hyperlipidemia data displays. Based on the displays, we tested search task performance through two experiments. The experiments used a within-subject design with a random sample of 24 participants. The results support our hypotheses and validate the predictions of the model and task taxonomy. In this study, representation dimensions, data scales, and search task types are the main factors determining search efficiency and effectiveness. Specifically, the more external representations provided on the interface, the better users' search task performance. The results also suggest that ideal search performance occurs when the question type matches its corresponding data scale representation. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, in electronic medical records.

  7. Avionics Simulation, Development and Software Engineering

    NASA Technical Reports Server (NTRS)

    Francis, Ronald C.; Settle, Gray; Tobbe, Patrick A.; Kissel, Ralph; Glaese, John; Blanche, Jim; Wallace, L. D.

    2001-01-01

    This monthly report summarizes the work performed under contract NAS8-00114 for Marshall Space Flight Center in the following tasks: 1) Purchase Order No. H-32831D, Task Order 001A, GPB Program Software Oversight; 2) Purchase Order No. H-32832D, Task Order 002, ISS EXPRESS Racks Software Support; 3) Purchase Order No. H-32833D, Task Order 003, SSRMS Math Model Integration; 4) Purchase Order No. H-32834D, Task Order 004, GPB Program Hardware Oversight; 5) Purchase Order No. H-32835D, Task Order 005, Electrodynamic Tether Operations and Control Analysis; 6) Purchase Order No. H-32837D, Task Order 007, SRB Command Receiver/Decoder; and 7) Purchase Order No. H-32838D, Task Order 008, AVGS/DART SW and Simulation Support

  8. Comprehensive manual handling limits for lowering, pushing, pulling and carrying activities.

    PubMed

    Shoaf, C; Genaidy, A; Karwowski, W; Waters, T; Christensen, D

    1997-11-01

    The objective of this study was to develop a set of mathematical models for manual lowering, pushing, pulling and carrying activities that would result in establishing load capacity limits to protect the lower back against occupational low-back disorders. In order to establish safe guidelines, a three-stage process was used. First, psychophysical data was used to generate the models' discounting factors and recommended load capacities. Second, biomechanical analysis was used to refine the recommended load capacities. Third, physiological criteria were used to validate the models' discounting factors. Both task and personal factors were considered in the models' development. When compared to the results from prior psychophysical research for these activities, the developed load capacity values are lower than previously established limits. The results of this study allowed the authors to validate the hypothesis proposed and tested by Karwowski (1983) that states that the combination of physiological and biomechanical stresses should lead to the overall measure of task acceptability or the psychophysical stress. This study also found that some of the discounting factors for the task frequency parameters recommended in the prior psychophysical research should not be used as several of the high frequency factors violated physiological limits.
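
    The multiplicative discounting-factor structure common to such psychophysical limit equations can be sketched as follows; the base capacity and factor values here are invented for illustration, not the paper's tables:

```python
def recommended_load(base_capacity_kg, factors):
    """Multiply a base load capacity by task and personal discounting
    factors (each in (0, 1]) to obtain a recommended limit, in the style
    of psychophysically derived manual-handling equations."""
    limit = base_capacity_kg
    for name, f in factors.items():
        if not 0.0 < f <= 1.0:
            raise ValueError(f"factor {name!r} must be in (0, 1]")
        limit *= f
    return limit

limit = recommended_load(25.0, {
    "frequency": 0.85,       # discount for high task frequency
    "vertical_range": 0.90,  # discount for awkward lowering height
    "duration": 0.95,        # discount for long shift duration
})
print(round(limit, 2))  # 25 * 0.85 * 0.90 * 0.95
```

    Physiological criteria then act as a check: a frequency factor that leaves the limit above metabolic limits would be rejected, as the study found for several high-frequency factors.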

  9. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
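
    The hierarchical decomposition of a series-parallel task system reduces to summing series stages and taking the maximum over parallel branches. A deterministic skeleton of that reduction (the paper replaces fixed times with queueing-network estimates):

```python
def completion_time(node):
    """Recursively reduce a series-parallel task graph.
    node is ('task', t), ('series', [children]) or ('parallel', [children])."""
    kind = node[0]
    if kind == "task":
        return node[1]
    times = [completion_time(child) for child in node[1]]
    return sum(times) if kind == "series" else max(times)

system = ("series", [
    ("task", 2.0),
    ("parallel", [("task", 3.0),
                  ("series", [("task", 1.0), ("task", 1.5)])]),
    ("task", 0.5),
])
print(completion_time(system))  # 2.0 + max(3.0, 2.5) + 0.5
```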

  10. On-the-fly scheduling as a manifestation of partial-order planning and dynamic task values.

    PubMed

    Hannah, Samuel D; Neal, Andrew

    2014-09-01

    The aim of this study was to develop a computational account of the spontaneous task ordering that occurs within jobs as work unfolds ("on-the-fly task scheduling"). Air traffic control is an example of work in which operators have to schedule their tasks as a partially predictable work flow emerges. To date, little attention has been paid to such on-the-fly scheduling situations. We present a series of discrete-event models fit to conflict resolution decision data collected from experienced controllers operating in a high-fidelity simulation. Our simulations reveal air traffic controllers' scheduling decisions as examples of the partial-order planning approach of Hayes-Roth and Hayes-Roth. The most successful model uses opportunistic first-come-first-served scheduling to select tasks from a queue. Tasks with short deadlines are executed immediately. Tasks with long deadlines are evaluated to assess whether they need to be executed immediately or deferred. On-the-fly task scheduling is computationally tractable despite its surface complexity and understandable as an example of both the partial-order planning strategy and the dynamic-value approach to prioritization.
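
    The best-fitting model's decision rule (execute short-deadline tasks immediately, defer long-deadline ones from a first-come-first-served queue) can be sketched as follows; the horizon value and task names are invented:

```python
from collections import deque

def schedule_pass(tasks, horizon=5.0):
    """One pass over an arriving task queue: a task whose deadline falls
    within `horizon` (minutes) is executed now; otherwise it is deferred
    for later re-evaluation, first-come-first-served."""
    queue = deque(tasks)             # (task name, time to deadline)
    executed, deferred = [], []
    while queue:
        name, to_deadline = queue.popleft()
        (executed if to_deadline <= horizon else deferred).append(name)
    return executed + deferred       # deferred tasks rejoin a later pass

order = schedule_pass([("conflict A", 2.0), ("handoff B", 30.0),
                       ("conflict C", 4.0)])
print(order)
```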

  11. Staff Perceptions of Professional Development and Empowerment as Long-Term Leadership Tasks of School Principals in South African Schools: An Exploratory Study

    ERIC Educational Resources Information Center

    van Niekerk, Eldridge; Muller, Hélène

    2017-01-01

    This article reports on the perceptions of school staff of professional development and empowerment as part of the long-term leadership task of principals. The long-term leadership model was used as a theoretical framework to quantitatively determine the perceptions of 118 teachers and education managers in approximately 100 schools throughout…

  12. Development of a Values Inventory for Grades 1 Through 3 in Five Ethnic Groups. Progress Report.

    ERIC Educational Resources Information Center

    Guilford, Joan S.

    The bulk of the report was an outline of the steps taken to accomplish stated project tasks. These tasks included: (1) identifying the dimensions of value, which involved reviewing the literature and formulating a model based on Maslow's hierarchy of needs; (2) constructing the item pool, all of which were to be pictorial; (3) developing a test…

  13. AgRISTARS: Yield model development/soil moisture. Interface control document

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The interactions and support functions required between the crop Yield Model Development (YMD) Project and Soil Moisture (SM) Project are defined. The requirements for YMD support of SM and vice-versa are outlined. Specific tasks in support of these interfaces are defined for development of support functions.

  14. The emergent executive: a dynamic field theory of the development of executive function.

    PubMed

    Buss, Aaron T; Spencer, John P

    2014-06-01

    Executive function (EF) is a central aspect of cognition that undergoes significant changes in early childhood. Changes in EF in early childhood are robustly predictive of academic achievement and general quality of life measures later in adulthood. We present a dynamic neural field (DNF) model that provides a process-based account of behavior and developmental change in a key task used to probe the early development of executive function—the Dimensional Change Card Sort (DCCS) task. In the DCCS, children must flexibly switch from sorting cards either by shape or color to sorting by the other dimension. Typically, 3-year-olds, but not 5-year-olds, lack the flexibility to do so and perseverate on the first set of rules when instructed to switch. Using the DNF model, we demonstrate how rule-use and behavioral flexibility come about through a form of dimensional attention. Further, developmental change is captured by increasing the robustness and precision of dimensional attention. Note that although this enables the model to effectively switch tasks, the dimensional attention system does not “know” the details of task-specific performance. Rather, correct performance emerges as a property of system–wide interactions. We show how this captures children’s behavior in quantitative detail across 14 versions of the DCCS task. Moreover, we successfully test a set of novel predictions with 3-year-old children from a version of the task not explained by other theories.

  15. Effects-based strategy development through center of gravity and target system analysis

    NASA Astrophysics Data System (ADS)

    White, Christopher M.; Prendergast, Michael; Pioch, Nicholas; Jones, Eric K.; Graham, Stephen

    2003-09-01

    This paper describes an approach to effects-based planning in which a strategic-theater-level mission is refined into operational-level and ultimately tactical-level tasks and desired effects, informed by models of the expected enemy response at each level of abstraction. We describe a strategy development system that implements this approach and supports human-in-the-loop development of an effects-based plan. This system consists of plan authoring tools tightly integrated with a suite of center of gravity (COG) and target system analysis tools. A human planner employs the plan authoring tools to develop a hierarchy of tasks and desired effects. Upon invocation, the target system analysis tools use reduced-order models of enemy centers of gravity to select appropriate target set options for the achievement of desired effects, together with associated indicators for each option. The COG analysis tools also provide explicit models of the causal mechanisms linking tasks and desired effects to one another, and suggest appropriate observable indicators to guide ISR planning, execution monitoring, and campaign assessment. We are currently implementing the system described here as part of the AFRL-sponsored Effects Based Operations program.

  16. Lifespan development of pro- and anti-saccades: multiple regression models for point estimates.

    PubMed

    Klein, Christoph; Foerster, Friedrich; Hartnegg, Klaus; Fischer, Burkhart

    2005-12-07

    The comparative study of anti- and pro-saccade task performance contributes to our functional understanding of the frontal lobes, their alterations in psychiatric or neurological populations, and their changes during the life span. In the present study, we apply regression analysis to model life span developmental effects on various pro- and anti-saccade task parameters, using data of a non-representative sample of 327 participants aged 9 to 88 years. Development up to the age of about 27 years was dominated by curvilinear rather than linear effects of age. Furthermore, the largest developmental differences were found for intra-subject variability measures and the anti-saccade task parameters. Ageing, by contrast, had the shape of a global linear decline of the investigated saccade functions, lacking the differential effects of age observed during development. While these results do support the assumption that frontal lobe functions can be distinguished from other functions by their strong and protracted development, they do not confirm the assumption of disproportionate deterioration of frontal lobe functions with ageing. We finally show that the regression models applied here to quantify life span developmental effects can also be used for individual predictions in applied research contexts or clinical practice.
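
    Point estimates of this kind come from fitting polynomial regression models of age. A sketch on synthetic data, where the curvilinear development-then-decline shape and all coefficients are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(9, 88, 300)
# synthetic anti-saccade error score: sharp improvement up to ~27 years,
# then a slow linear decline with ageing (shape invented for illustration)
signal = 40 - 1.2 * np.minimum(age, 27) + 0.405 * np.maximum(age - 27, 0)
score = signal + rng.normal(0, 2, age.size)

coeffs = np.polyfit(age, score, deg=2)   # curvilinear (quadratic) age model
predict = np.poly1d(coeffs)
young, mid = float(predict(9)), float(predict(40))
print(young > mid)   # error scores drop steeply during development
```

    The fitted polynomial then serves as the point-estimate predictor for an individual's age, the clinical use the authors propose.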

  17. Hierarchical Control Using Networks Trained with Higher-Level Forward Models

    PubMed Central

    Wayne, Greg; Abbott, L.F.

    2015-01-01

    We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may only be able to solve fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations that are generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are only used during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable sub-tasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks, or to be extended for more complex tasks without retraining lower-levels. PMID:25058706

  18. Force on Force Modeling with Formal Task Structures and Dynamic Geometry

    DTIC Science & Technology

    2017-03-24

    task framework, derived using the MMF methodology to structure a complex mission. It further demonstrated the integration of effects from a range of...application methodology was intended to support a combined developmental testing (DT) and operational testing (OT) strategy for selected systems under test... methodology to develop new or modify existing Models and Simulations (M&S) to: • Apply data from multiple, distributed sources (including test

  19. Using Apex To Construct CPM-GOMS Models

    NASA Technical Reports Server (NTRS)

    John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger

    2006-01-01

    A process for automatically generating computational models of human/computer interactions as well as graphical and textual representations of the models has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of analysis according to an underlying method known in the art as the goals, operators, methods, and selection (GOMS) method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about behaviors of skilled computer users in routine tasks, but heretofore, such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to be used to model human behavior in complex, dynamic tasks. An inherent capability of Apex for scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model and measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM).
For the test, a Visual Basic mockup of an ATM was created, with a provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model turned out to approximate the measured times fairly well (see figure). While these results are promising, there is need for further development of the process. Moreover, it will also be necessary to test other, more complex models: The actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.
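
    The time prediction in a CPM-GOMS model is the critical path through a network of interleaved cognitive, perceptual, and motor operators. A minimal sketch with hypothetical operator names and durations:

```python
def critical_path(durations, deps):
    """Earliest-finish times over a DAG of operators; the predicted task
    time is the longest dependency chain (the critical path)."""
    finish = {}
    def earliest_finish(op):
        if op not in finish:
            start = max((earliest_finish(d) for d in deps.get(op, [])),
                        default=0)
            finish[op] = start + durations[op]
        return finish[op]
    return max(earliest_finish(op) for op in durations)

durations = {"perceive prompt": 100, "cognize amount": 50,   # ms
             "move mouse": 300, "click": 100}
deps = {"cognize amount": ["perceive prompt"],
        "move mouse": ["perceive prompt"],    # motor overlaps cognition
        "click": ["move mouse", "cognize amount"]}
print(critical_path(durations, deps))   # 100 + 300 + 100
```

    Because "cognize amount" runs in parallel with the longer mouse movement, it does not add to the predicted time; this interleaving is exactly what Apex's resource scheduler automates.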

  20. Global models: Robot sensing, control, and sensory-motor skills

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S.

    1989-01-01

    Robotics research has begun to address the modeling and implementation of a wide variety of unstructured tasks. Examples include automated navigation, platform servicing, custom fabrication and repair, deployment and recovery, and science exploration. Such tasks are poorly described at onset; the workspace layout is partially unfamiliar, and the task control sequence is only qualitatively characterized. The robot must model the workspace, plan detailed physical actions from qualitative goals, and adapt its instantaneous control regimes to unpredicted events. Developing robust representations and computational approaches for these sensing, planning, and control functions is a major challenge. The underlying domain constraints are very general, and seem to offer little guidance for well-bounded approximation of object shape and motion, manipulation postures and trajectories, and the like. This generalized modeling problem is discussed, with an emphasis on the role of sensing. It is also discussed that unstructured tasks often have, in fact, a high degree of underlying physical symmetry, and such implicit knowledge should be drawn on to model task performance strategies in a methodological fashion. A group-theoretic decomposition of the workspace organization, task goals, and their admissible interactions is proposed. This group-mechanical approach to task representation helps to clarify the functional interplay of perception and control, in essence, describing what perception is specifically for, versus how it is generically modeled. One also gains insight into how perception might logically evolve in response to the needs of more complex motor skills. It is discussed why, of the many solutions that are often mathematically admissible to a given sensory-motor coordination problem, one may be preferred over others.

  1. A Microsoft Project-Based Planning, Tracking, and Management Tool for the National Transonic Facility's Model Changeover Process

    NASA Technical Reports Server (NTRS)

    Vairo, Daniel M.

    1998-01-01

    The removal and installation of sting-mounted wind tunnel models in the National Transonic Facility (NTF) is a multi-task process having a large impact on the annual throughput of the facility. Approximately ten model removal and installation cycles occur annually at the NTF, with each cycle requiring slightly over five days to complete. The various tasks of the model changeover process were modeled in Microsoft Project as a template to provide a planning, tracking, and management tool. The template can also be used as a tool to evaluate improvements to this process. This document describes the development of the template and provides step-by-step instructions on its use as a planning and tracking tool. A secondary role of this document is to provide an overview of the model changeover process and briefly describe the tasks associated with it.

  2. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
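
    Benchmarking a visual attention model typically means scoring its saliency map against ground-truth fixations; one common operationalization is the ROC area under a threshold sweep. A simplified sketch on synthetic data (no center-bias correction, which real benchmarks usually apply):

```python
import numpy as np

def fixation_auc(saliency, fixations, steps=50):
    """Score a saliency map against ground-truth fixation points by
    sweeping a detection threshold and integrating the ROC curve."""
    fix = np.zeros(saliency.shape, dtype=bool)
    for r, c in fixations:
        fix[r, c] = True
    tprs, fprs = [], []
    for t in np.linspace(saliency.max(), saliency.min(), steps):
        hit = saliency >= t
        tprs.append(hit[fix].mean())     # fixated pixels above threshold
        fprs.append(hit[~fix].mean())    # non-fixated pixels above threshold
    return float(sum((f2 - f1) * (t1 + t2) / 2          # trapezoid rule
                     for f1, f2, t1, t2
                     in zip(fprs, fprs[1:], tprs, tprs[1:])))

rng = np.random.default_rng(0)
sal = rng.random((32, 32))               # a model's predicted saliency map
fixes = [(5, 5), (20, 11)]               # observed fixation locations
sal[5, 5] = sal[20, 11] = 2.0            # model peaks exactly at the fixations
print(fixation_auc(sal, fixes) > 0.9)
```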

  3. Cancer Model Development Centers

    Cancer.gov

    The Cancer Model Development Centers (CMDCs) are the NCI-funded contributors to the HCMI. They are tasked with producing next-generation cancer models from clinical samples. The cancer models will encompass tumor types that are rare, originate from patients from underrepresented populations, or lack precision therapy. These models will be annotated with clinical and genomic data and will become a community resource.

  4. Evidence for the triadic model of adolescent brain development: Cognitive load and task-relevance of emotion differentially affect adolescents and adults.

    PubMed

    Mueller, Sven C; Cromheeke, Sofie; Siugzdaite, Roma; Nicolas Boehler, C

    2017-08-01

    In adults, cognitive control is supported by several brain regions including the limbic system and the dorsolateral prefrontal cortex (dlPFC) when processing emotional information. However, in adolescents, some theories hypothesize a neurobiological imbalance proposing heightened sensitivity to affective material in the amygdala and striatum within a cognitive control context. Yet, direct neurobiological evidence is scarce. Twenty-four adolescents (12-16) and 28 adults (25-35) completed an emotional n-back working memory task in response to happy, angry, and neutral faces during fMRI. Importantly, participants either paid attention to the emotion (task-relevant condition) or judged the gender (task-irrelevant condition). Behaviorally, for both groups, when happy faces were task-relevant, performance improved relative to when they were task-irrelevant, while performance decrements were seen for angry faces. In the dlPFC, angry faces elicited more activation in adults during low relative to high cognitive load (2-back vs. 0-back). By contrast, happy faces elicited more activation in the amygdala in adolescents when they were task-relevant. Happy faces also generally increased nucleus accumbens activity (regardless of relevance) in adolescents relative to adults. Together, the findings are consistent with neurobiological models of adolescent brain development and identify neurodevelopmental differences in cognitive control-emotion interactions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Crack Damage Detection Method via Multiple Visual Features and Efficient Multi-Task Learning Model.

    PubMed

    Wang, Baoxian; Zhao, Weigang; Gao, Po; Zhang, Yufeng; Wang, Zhe

    2018-06-02

    This paper proposes an effective and efficient model for concrete crack detection. The presented work consists of two modules: multi-view image feature extraction and multi-task crack region detection. Specifically, multiple visual features (such as texture, edge, etc.) of image regions are calculated, which can suppress various background noises (such as illumination, pockmark, stripe, blurring, etc.). With the computed multiple visual features, a novel crack region detector is advocated using a multi-task learning framework, which involves restraining the variability for different crack region features and emphasizing the separability between crack region features and complex background ones. Furthermore, the extreme learning machine is utilized to construct this multi-task learning model, thereby leading to high computing efficiency and good generalization. Experimental results of the practical concrete images demonstrate that the developed algorithm can achieve favorable crack detection performance compared with traditional crack detectors.
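
    The extreme learning machine at the core of the detector fixes a random hidden layer and solves the output weights in closed form, which is where its computing efficiency comes from. A single-task sketch on synthetic features (the paper's multi-task regularization terms are omitted):

```python
import numpy as np

def elm_fit(X, y, hidden=60, seed=0):
    """Extreme learning machine: a random, fixed hidden layer followed by
    output weights solved in closed form via least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # fixed input weights
    b = rng.normal(size=hidden)                 # fixed hidden biases
    H = np.tanh(X @ W + b)                      # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                    # e.g. 8 visual features per region
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # crack vs. background label
W, b, beta = elm_fit(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
print(round(float(acc), 2))
```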

  6. Applied Meteorology Unit (AMU) Quarterly Report Fourth Quarter FY-14

    NASA Technical Reports Server (NTRS)

    Bauman, William H.; Crawford, Winifred C.; Watson, Leela R.; Shafer, Jaclyn

    2014-01-01

    Ms. Crawford completed the final report for the dual-Doppler wind field task. Dr. Bauman completed transitioning the 915-MHz and 50-MHz Doppler Radar Wind Profiler (DRWP) splicing algorithm developed at Marshall Space Flight Center (MSFC) into the AMU Upper Winds Tool. Dr. Watson completed work to assimilate data into model configurations for Wallops Flight Facility (WFF) and Kennedy Space Center/Cape Canaveral Air Force Station (KSC/CCAFS). Ms. Shafer began evaluating a local high-resolution model she had set up previously for its ability to forecast weather elements that affect launches at KSC/CCAFS. Dr. Watson began a task to optimize the data-assimilated model she had just developed to run in real time.

  7. The Influence of Feedback on Task-Switching Performance: A Drift Diffusion Modeling Account.

    PubMed

    Cohen Hoffing, Russell; Karvelis, Povilas; Rupprechter, Samuel; Seriès, Peggy; Seitz, Aaron R

    2018-01-01

    Task-switching is an important cognitive skill that facilitates our ability to choose appropriate behavior in a varied and changing environment. Task-switching training studies have sought to improve this ability by practicing switching between multiple tasks. However, an efficacious training paradigm has been difficult to develop, in part due to findings that small differences in task parameters influence switching behavior in a non-trivial manner. Here, for the first time, we employ the Drift Diffusion Model (DDM) to understand the influence of feedback on task-switching and investigate how drift diffusion parameters change over the course of task-switching training. We trained 316 participants on a simple task where they alternated sorting stimuli by color or by shape. Feedback differed in six ways across between-subjects groups, ranging from No Feedback (NFB) to a variety of manipulations addressing trial-wise vs. Block Feedback (BFB), rewards vs. punishments, payment bonuses, and different payouts depending upon the trial type (switch/non-switch). While overall performance was found to be affected by feedback, no effect of feedback was found on task-switching learning. Drift diffusion modeling revealed that the reductions in reaction time (RT) switch cost over the course of training were driven by a continually decreasing decision boundary. Furthermore, feedback effects on RT switch cost were also driven by differences in decision boundary, but not in drift rate. These results reveal that participants systematically modified their task-switching performance without yielding an overall gain in performance.
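    The boundary-driven account above can be illustrated with a minimal drift diffusion simulation. The drift rate, boundary values, and trial counts below are arbitrary illustrative choices, not the paper's fitted parameters; the point is only that lowering the decision boundary shortens RTs at the cost of accuracy.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=500, dt=0.002, noise=1.0, seed=0):
    """Simulate a two-boundary drift diffusion process; return mean RT and accuracy."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:              # accumulate noisy evidence until a bound is hit
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)                 # upper bound = correct response
    return float(np.mean(rts)), float(np.mean(correct))

rt_wide, acc_wide = simulate_ddm(drift=1.5, boundary=1.0)
rt_narrow, acc_narrow = simulate_ddm(drift=1.5, boundary=0.6, seed=1)
# A lower boundary gives faster responses but more errors: a speed-accuracy trade-off,
# matching the interpretation that training reduced RT switch cost via the boundary.
```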

  8. A Pilot Study of Biomedical Text Comprehension using an Attention-Based Deep Neural Reader: Design and Experimental Analysis

    PubMed Central

    Lee, Kyubum; Kim, Byounggun; Jeon, Minji; Kim, Jihye; Tan, Aik Choon

    2018-01-01

    Background: With the development of artificial intelligence (AI) technology centered on deep learning, the computer has evolved to a point where it can read a given text and answer a question based on the context of the text. Such a specific task is known as the task of machine comprehension. Existing machine comprehension tasks mostly use datasets of general texts, such as news articles or elementary school-level storybooks. However, no attempt has been made to determine whether an up-to-date deep learning-based machine comprehension model can also process scientific literature containing expert-level knowledge, especially in the biomedical domain. Objective: This study aims to investigate whether a machine comprehension model can process biomedical articles as well as general texts. Since there is no dataset for the biomedical literature comprehension task, our work includes generating a large-scale question answering dataset using PubMed and manually evaluating the generated dataset. Methods: We present an attention-based deep neural model tailored to the biomedical domain. To further enhance the performance of our model, we used a pretrained word vector and biomedical entity type embedding. We also developed an ensemble method of combining the results of several independent models to reduce the variance of the answers from the models. Results: The experimental results showed that our proposed deep neural network model outperformed the baseline model by more than 7% on the new dataset. We also evaluated human performance on the new dataset. The human evaluation result showed that our deep neural model outperformed humans in comprehension by 22% on average. Conclusions: In this work, we introduced a new task of machine comprehension in the biomedical domain using a deep neural model. 
Since there was no large-scale dataset for training deep neural models in the biomedical domain, we created the new cloze-style datasets Biomedical Knowledge Comprehension Title (BMKC_T) and Biomedical Knowledge Comprehension Last Sentence (BMKC_LS) (together referred to as BioMedical Knowledge Comprehension) using the PubMed corpus. The experimental results showed that the performance of our model is much higher than that of humans. We observed that our model performed consistently better regardless of the degree of difficulty of a text, whereas humans have difficulty when performing biomedical literature comprehension tasks that require expert level knowledge. PMID:29305341

  9. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions differ from those for choice predictions yet still reflect second-order perspective taking.
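    A minimal sketch of the data-generating structure of such a logistic mixed model, with a task-specific fixed effect and subject-specific random intercepts (all parameter values are invented for illustration; the paper's Bayesian estimation is not reproduced here):

```python
import numpy as np

def simulate_logistic_mixed(n_subj=40, n_trials=30, beta_task=(0.0, 1.5),
                            sd_subj=0.8, seed=0):
    """Draw binary responses from logit P(y=1) = beta[task] + u[subject],
    where u ~ N(0, sd_subj^2) is a subject-specific random intercept."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sd_subj, n_subj)          # one random intercept per subject
    rows = []                                     # (subject, task, mean response)
    for s in range(n_subj):
        for task, beta in enumerate(beta_task):
            p = 1.0 / (1.0 + np.exp(-(beta + u[s])))   # logistic link
            y = rng.binomial(1, p, n_trials)
            rows.append((s, task, y.mean()))
    return rows

rows = simulate_logistic_mixed()
mean_task0 = float(np.mean([m for s, t, m in rows if t == 0]))  # e.g. one belief task
mean_task1 = float(np.mean([m for s, t, m in rows if t == 1]))  # task with a larger fixed effect
```

    Fitting the subject- and task-specific parameters back from such data is exactly what the mixed-effects and Bayesian models in the study do; the random intercepts capture the between-subject variability that a pooled logistic regression would miss.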

  10. Genotype-phenotype association study via new multi-task learning model

    PubMed Central

    Huo, Zhouyuan; Shen, Dinggang

    2018-01-01

    Research on the associations between genetic variations and imaging phenotypes is advancing with the development of high-throughput genotyping and brain imaging techniques. Regression analysis of single nucleotide polymorphisms (SNPs) and imaging measures as quantitative traits (QTs) has been proposed to identify the quantitative trait loci (QTL) via multi-task learning models. Recent studies consider the interlinked structures within SNPs and imaging QTs through group lasso, e.g. the ℓ2,1-norm, leading to better predictive results and insights into SNPs. However, group sparsity is not enough for representing the correlation between multiple tasks, and ℓ2,1-norm regularization is not robust either. In this paper, we propose a new multi-task learning model to analyze the associations between SNPs and QTs. We suppose that low-rank structure is also beneficial to uncover the correlation between genetic variations and imaging phenotypes. Finally, we conduct regression analysis of SNPs and QTs. Experimental results show that our model is more accurate in prediction than compared methods and offers new insights into SNPs. PMID:29218896
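    The ℓ2,1-norm group-lasso baseline that the paper builds on can be sketched with proximal gradient descent: the ℓ2,1 penalty shrinks entire SNP rows of the coefficient matrix toward zero, so a SNP is selected or discarded jointly across all imaging QTs. The toy data and regularization strength below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def prox_l21(W, t):
    """Row-wise proximal operator of the l2,1 norm: shrink each row's l2 norm by t."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale

def multitask_l21(X, Y, lam=0.5, n_iter=500):
    """Proximal gradient for min_W 0.5*||XW - Y||_F^2 + lam*||W||_{2,1}."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y)
        W = prox_l21(W - step * grad, step * lam) # gradient step, then row-wise shrinkage
    return W

# Toy data: only the first 3 "SNPs" influence all 4 imaging "QTs"
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
W_true = np.zeros((20, 4))
W_true[:3] = rng.normal(size=(3, 4))
Y = X @ W_true + 0.01 * rng.normal(size=(100, 4))
W_hat = multitask_l21(X, Y)
active = np.linalg.norm(W_hat, axis=1) > 1e-3     # which SNP rows survived the shrinkage
```

    The paper's contribution is to add a low-rank term on top of this group-sparse structure to capture correlation among the tasks themselves.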

  11. Development and evaluation of a predictive algorithm for telerobotic task complexity

    NASA Technical Reports Server (NTRS)

    Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.

    1993-01-01

    There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree-of-difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU and attachment receptacle and on quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, and the number and locations of cameras). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.
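    The published algorithm's actual parameters and weights are not reproduced in the abstract. Purely as an illustration of the idea, a Cooper-Harper-style degree-of-difficulty score could combine measurable task parameters like this; every weight, variable name, and functional form here is a hypothetical placeholder, not the Oceaneering/TRW model.

```python
def oru_difficulty(alignment_tol_mm, joint_margin_deg, n_cameras, tool_length_cm):
    """Hypothetical degree-of-difficulty score on a Cooper-Harper-like 1-10 scale.
    All weights below are invented placeholders for illustration only."""
    score = 1.0
    score += 3.0 / max(alignment_tol_mm, 0.1)   # tighter alignment tolerance -> harder
    score += 2.0 / max(joint_margin_deg, 1.0)   # less joint-range margin -> harder
    score += 1.5 / max(n_cameras, 1)            # fewer camera views -> harder
    score += tool_length_cm / 100.0             # longer tools -> harder
    return min(round(score, 1), 10.0)

easy = oru_difficulty(alignment_tol_mm=5, joint_margin_deg=30, n_cameras=3, tool_length_cm=20)
hard = oru_difficulty(alignment_tol_mm=0.5, joint_margin_deg=5, n_cameras=1, tool_length_cm=80)
```

    The value of such a function, as the abstract argues, is that designers can vary ORU, receptacle, and telerobot parameters and see the predicted difficulty move before hardware is built.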

  12. Sting, Carry and Stock: How Corpse Availability Can Regulate De-Centralized Task Allocation in a Ponerine Ant Colony

    PubMed Central

    Schmickl, Thomas; Karsai, Istvan

    2014-01-01

    We develop a model to produce plausible patterns of task partitioning in the ponerine ant Ectatomma ruidum based on the availability of living prey and prey corpses. The model is based on the organizational capabilities of a “common stomach” through which the colony uses the availability of a natural (food) substance as a major communication channel to regulate the income and expenditure of that same substance. This communication channel also has a central role in regulating task partitioning of collective hunting behavior in a supply-and-demand-driven manner. Our model shows that task partitioning of the collective hunting behavior in E. ruidum can be explained by regulation through a common stomach system. The saturation of the common stomach provides accessible information to individual ants so that they can adjust their hunting behavior accordingly by engaging in or abandoning the stinging or transporting tasks. The common stomach is able to establish and stabilize an effective mix of workforce to exploit the prey population and to transport food into the nest. This system is also able to react to external perturbations in a de-centralized, homeostatic way, such as changes in prey density or accumulation of food in the nest. Under stable conditions the system develops towards an equilibrium in colony size and prey density. Our model shows that organization of work through a common stomach system can allow Ectatomma ruidum to collectively forage for food in a robust, reactive, and reliable way. The model is compared to previously published models that followed a different modeling approach. Based on our model analysis we also suggest a series of experiments for which our model gives plausible predictions. These predictions are used to formulate a set of testable hypotheses that should be investigated empirically in future experimentation. PMID:25493558
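    A toy discrete-time sketch of the common-stomach feedback loop (all rates and recruitment rules below are illustrative assumptions, not the published model): an empty common stomach recruits stingers, a full one recruits transporters, so the saturation settles at an equilibrium set by prey density.

```python
def simulate_common_stomach(steps=500, prey_density=1.0):
    """Toy common-stomach dynamics with illustrative rates.
    s is the colony-wide crop saturation in [0, 1]."""
    s = 0.5
    history = []
    for _ in range(steps):
        stinging = (1.0 - s) * prey_density    # hunters recruited while the crop is empty
        transporting = s                       # transporters recruited as the crop fills
        s += 0.05 * (stinging - transporting)  # intake minus deposition into the nest
        s = min(max(s, 0.0), 1.0)
        history.append(s)
    return history

hist = simulate_common_stomach()                   # with prey_density=1, settles near 0.5
hist2 = simulate_common_stomach(prey_density=3.0)  # richer prey -> fuller common stomach
```

    Even this caricature shows the homeostatic property the abstract emphasizes: a perturbation in prey density shifts the equilibrium saturation, and the workforce mix follows without any central controller.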

  13. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1993-01-01

    The main goals of the research consist of the development of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and diffusivity measurements on concentrated and supersaturated TGS solutions. During this eighth half-year period, good progress was made on these tasks.

  14. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1992-01-01

    The development is examined of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). The tasks include development of a spectral code for moving boundary problems, kinematic viscosity measurements on liquid MCT at temperatures close to the melting point, and diffusivity measurements on concentrated and supersaturated TGS solutions. A detailed description is given of the work performed for these tasks, together with a summary of the resulting publications and presentations.

  15. A Contingency Model of Conflict and Team Effectiveness

    ERIC Educational Resources Information Center

    Shaw, Jason D.; Zhu, Jing; Duffy, Michelle K.; Scott, Kristin L.; Shih, Hsi-An; Susanto, Ely

    2011-01-01

    The authors develop and test theoretical extensions of the relationships of task conflict, relationship conflict, and 2 dimensions of team effectiveness (performance and team-member satisfaction) among 2 samples of work teams in Taiwan and Indonesia. Findings show that relationship conflict moderates the task conflict-team performance…

  16. Chaos of Textures or "Tapisserie"? A Model for Creative Teacher Education Curriculum Design

    ERIC Educational Resources Information Center

    Simon, Sue E.

    2013-01-01

    A tapestry or "tapisserie" methodology, inspired by Denzin and Lincoln's "bricolage" methodology (2000), emerged during the complex task of re-developing teacher education programs at the University of the Sunshine Coast, Queensland, Australia. "Tapisserie" methodology highlights the pivotal task of determining…

  17. GeoVision Exploration Task Force Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doughty, Christine; Dobson, Patrick F.; Wall, Anna

    The GeoVision study effort included ground-breaking, detailed research on current and future market conditions and geothermal technologies in order to forecast and quantify the electric and non-electric deployment potentials under a range of scenarios, in addition to their impacts on the Nation’s jobs, economy and environment. Coordinated by the U.S. Department of Energy’s (DOE’s) Geothermal Technologies Office (GTO), the GeoVision study development relied on the collection, modeling, and analysis of robust datasets through seven national laboratory partners, which were organized into eight technical Task Force groups. The purpose of this report is to provide a central repository for the research conducted by the Exploration Task Force. The Exploration Task Force consists of members of several national laboratories: Patrick Dobson (task lead) and Christine Doughty of Lawrence Berkeley National Laboratory, Anna Wall of National Renewable Energy Laboratory, Travis McLing of Idaho National Laboratory, and Chester Weiss of Sandia National Laboratories. As part of the GeoVision analysis, our team conducted extensive scientific and financial analyses on a number of topics related to current and future geothermal exploration methods. The GeoVision Exploration Task Force complements the drilling and resource technology investigations conducted as part of the Reservoir Maintenance and Development Task Force. The Exploration Task Force, however, has focused primarily on early stage R&D technologies in exploration and confirmation drilling, along with an evaluation of geothermal financing challenges and assumptions, and innovative “blue-sky” technologies. This research was used to develop geothermal resource supply curves (through the use of GETEM) for use in the ReEDS capacity expansion modeling that determines geothermal technology deployment potential. 
    It also catalogues and explores the large array of early-stage R&D technologies with the potential to dramatically reduce exploration and geothermal development costs, forming the basis of the GeoVision Technology Improvement (TI) scenario. These modeling topics are covered in detail in the Potential to Penetration Task Force report. Most of the research contained herein has been published in peer-reviewed papers or conference proceedings and is cited and referenced accordingly. The sections that follow provide a central repository for all of the research findings of the Exploration and Confirmation Task Force. In summary, this report provides a comprehensive discussion of Engineered Geothermal Systems (EGS) and associated technology challenges, the risks and costs of conducting geothermal exploration, a review of existing government efforts to date in advancing early-stage R&D in both exploration and EGS technologies, as well as a discussion of promising and innovative technologies and implementation of blue-sky concepts that could significantly reduce costs, lower risks, and shorten the time needed to explore and develop geothermal resources of all types.

  18. Development and implementation of a three-choice serial reaction time task for zebrafish (Danio rerio).

    PubMed

    Parker, Matthew O; Millington, Mollie E; Combe, Fraser J; Brennan, Caroline H

    2012-02-01

    Zebrafish are an established and widely utilized developmental genetic model system, but limitations in developed behavioral assays have meant that their potential as a model in behavioral neuroscience has yet to be fully realized. Here, we describe the development of a novel operant behavioral assay to examine a variety of aspects of stimulus control in zebrafish using a 3-choice serial reaction time task (3-CSRTT). Fish were briefly exposed to three spatially distinct, but perceptually identical, stimuli presented in a random order after a fixed-time inter-trial interval (ITI). Entries to the correct response aperture either during the stimulus presentation, or within a brief limited hold period following presentation, were reinforced with illumination of the magazine light and delivery of a small food reward. Following training, premature responding was probed with a long-ITI session three times: once at baseline, once following a saline injection, and once following an injection of a low dose of amphetamine (AMPH; 0.025 mg/kg). We predicted that if premature responding was related to impulsivity (as in rodents) it would be reduced following the AMPH injection. Results confirmed that zebrafish could learn to perform a complex operant task similar to tasks developed for rodents, which are used to probe sustained attention and impulsivity, but the results from the AMPH trials were inconclusive. This study provides the foundations for development and further validation of this species as a model for some aspects of human attentional and impulse control disorders, such as substance abuse disorder. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Digital Model-Based Engineering: Expectations, Prerequisites, and Challenges of Infusion

    NASA Technical Reports Server (NTRS)

    Hale, J. P.; Zimmerman, P.; Kukkala, G.; Guerrero, J.; Kobryn, P.; Puchek, B.; Bisconti, M.; Baldwin, C.; Mulpuri, M.

    2017-01-01

    Digital model-based engineering (DMbE) is the use of digital artifacts, digital environments, and digital tools in the performance of engineering functions. DMbE is intended to allow an organization to progress from documentation-based engineering methods to digital methods that may provide greater flexibility, agility, and efficiency. The term 'DMbE' was developed as part of an effort by the Model-Based Systems Engineering (MBSE) Infusion Task team to identify what government organizations might expect in the course of moving to or infusing MBSE into their organizations. The Task team was established by the Interagency Working Group on Engineering Complex Systems, an informal collaboration among government systems engineering organizations. This Technical Memorandum (TM) discusses the work of the MBSE Infusion Task team to date. The Task team identified prerequisites, expectations, initial challenges, and recommendations for areas of study to pursue, as well as examples of efforts already in progress. The team identified the following five expectations associated with DMbE infusion, discussed further in this TM: (1) Informed decision making through increased transparency, and greater insight. (2) Enhanced communication. (3) Increased understanding for greater flexibility/adaptability in design. (4) Increased confidence that the capability will perform as expected. (5) Increased efficiency. The team identified the following seven challenges an organization might encounter when looking to infuse DMbE: (1) Assessing value added to the organization. Not all DMbE practices will be applicable to every situation in every organization, and not all implementations will have positive results. (2) Overcoming organizational and cultural hurdles. (3) Adopting contractual practices and technical data management. (4) Redefining configuration management. 
The DMbE environment changes the range of configuration information to be managed to include performance and design models and database objects, as well as more traditional book-form objects and formats. (5) Developing information technology (IT) infrastructure. Approaches to implementing critical, enabling IT infrastructure capabilities must be flexible, reconfigurable, and updatable. (6) Ensuring security of the single source of truth. (7) Potential overreliance on quantitative data over qualitative data. Executable/computational models and simulations generally incorporate and generate quantitative rather than qualitative data. The Task team also developed several recommendations for government, academia, and industry, as discussed in this TM. The Task team recommends continuing beyond this initial work to further develop the means of implementing DMbE and to look for opportunities to collaborate and share best practices.

  20. Intelligent robot control using an adaptive critic with a task control center and dynamic database

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.

    2006-10-01

    The purpose of this paper is to describe the design, development, and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can easily be stored in the dynamic database. The multi-task controller also permits wide applications. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.

  1. Development of an Instructional Model for Online Task-Based Interactive Listening for EFL Learners

    ERIC Educational Resources Information Center

    Tian, Xingbin; Suppasetseree, Suksan

    2013-01-01

    College English in China has shifted from cultivating reading ability to comprehensive communicative abilities with an emphasis on listening and speaking. For this reason, new teaching models should be built on modern information technology. However, little research on developing models for the online teaching of listening skills has been…

  2. Using GOMS and Bayesian plan recognition to develop recognition models of operator behavior

    NASA Astrophysics Data System (ADS)

    Zaientz, Jack D.; DeKoven, Elyon; Piegdon, Nicholas; Wood, Scott D.; Huber, Marcus J.

    2006-05-01

    Trends in combat technology research point to an increasing role for uninhabited vehicles in modern warfare tactics. To support an increased span of control over these vehicles, human responsibilities need to be transformed from tedious, error-prone, and cognition-intensive operations into tasks that are more supervisory and manageable, even under intensely stressful conditions. The goal is to move away from only supporting human command of low-level system functions toward intention-level human-system dialogue about the operator's tasks and situation. A critical element of this process is developing the means to identify when human operators need automated assistance and what assistance they need. Toward this goal, we are developing an unmanned vehicle operator task recognition system that combines work in human behavior modeling and Bayesian plan recognition. Traditionally, human behavior models have been considered generative, meaning they describe all possible valid behaviors. Basing behavior recognition on models designed for behavior generation can offer advantages in improved model fidelity and reuse. It is not clear, however, how to reconcile the structural differences between behavior recognition and behavior modeling approaches. Our current work demonstrates that by pairing a cognitive-psychology-derived human behavior modeling approach, GOMS, with a Bayesian plan recognition engine, ASPRN, we can translate a behavior generation model into a recognition model. We will discuss the implications of using human performance models in this manner as well as suggest how this kind of modeling may be used to support the real-time control of multiple, uninhabited battlefield vehicles and other semi-autonomous systems.
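    The Bayesian plan recognition side can be sketched as a posterior over candidate operator tasks, updated with each observed action: P(task | actions) is proportional to P(task) times the product of P(action | task). The task names, actions, and probabilities below are invented for illustration and are not from the GOMS/ASPRN system described above.

```python
def recognize(priors, likelihoods, observed_actions):
    """Posterior over candidate tasks given a sequence of observed operator actions,
    assuming actions are conditionally independent given the task (naive Bayes)."""
    posterior = dict(priors)
    for action in observed_actions:
        for task in posterior:
            # Unseen actions get a small floor probability instead of zeroing the task out
            posterior[task] *= likelihoods[task].get(action, 1e-6)
    z = sum(posterior.values())
    return {task: p / z for task, p in posterior.items()}

# Hypothetical tasks, actions, and probabilities (placeholders for illustration)
priors = {"search_area": 0.5, "track_target": 0.5}
likelihoods = {
    "search_area":  {"pan_camera": 0.6, "waypoint": 0.3, "zoom": 0.1},
    "track_target": {"pan_camera": 0.2, "waypoint": 0.1, "zoom": 0.7},
}
post = recognize(priors, likelihoods, ["zoom", "zoom", "pan_camera"])
# Repeated zooming shifts the posterior strongly toward "track_target"
```

    In a GOMS-derived system, the likelihood tables would come from the generative task model rather than being hand-specified, which is exactly the translation the abstract describes.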

  3. Engineering technology for networks

    NASA Technical Reports Server (NTRS)

    Paul, Arthur S.; Benjamin, Norman

    1991-01-01

    Space Network (SN) modeling and evaluation are presented. The following tasks are included: Network Modeling (developing measures and metrics for SN, modeling of the Network Control Center (NCC), using knowledge acquired from the NCC to model the SNC, and modeling the SN); and Space Network Resource scheduling.

  4. Investigation of automated task learning, decomposition and scheduling

    NASA Technical Reports Server (NTRS)

    Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.

    1990-01-01

    The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition, and this promise was the primary motivation for the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach taken is to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.

  5. Study of a Tracking and Data Acquisition System (TDAS) in the 1990's

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Progress in concept definition studies, operational assessments, and technology demonstrations for the Tracking and Data Acquisition System (TDAS) is reported. The proposed TDAS will be the follow-on to the Tracking and Data Relay Satellite System and will function as a key element of the NASA End-to-End Data System, providing the tracking and data acquisition interface between user accessible data ports on Earth and the user's spaceborne equipment. Technical activities of the "spacecraft data system architecture" task and the "communication mission model" task are emphasized. The objective of the first task is to provide technology forecasts for sensor data handling, navigation and communication systems, and estimate corresponding costs. The second task is concerned with developing a parametric description of the required communication channels. Other tasks with significant activity include the "frequency plan and radio interference model" and the "Viterbi decoder/simulator study".

  6. Formative feedback and scaffolding for developing complex problem solving and modelling outcomes

    NASA Astrophysics Data System (ADS)

    Frank, Brian; Simper, Natalie; Kaupp, James

    2018-07-01

    This paper discusses the use and impact of formative feedback and scaffolding to develop outcomes for complex problem solving in a required first-year course in engineering design and practice at a medium-sized research-intensive Canadian university. In 2010, the course began to use team-based, complex, open-ended contextualised problems to develop problem solving, communications, teamwork, modelling, and professional skills. Since then, formative feedback has been incorporated through task- and process-level feedback on scaffolded in-class tasks, formative assignments, and post-assignment review. Development in complex problem solving and modelling has been assessed through analysis of responses from student surveys, direct criterion-referenced assessment of course outcomes from 2013 to 2015, and an external longitudinal study. The findings suggest that students are improving in outcomes related to complex problem solving over the duration of the course. Most notably, the addition of new feedback and scaffolding coincided with improved student performance.

  7. A longitudinal study of lexical development in children learning Vietnamese and English.

    PubMed

    Pham, Giang; Kohnert, Kathryn

    2014-01-01

    This longitudinal study modeled lexical development among children who spoke Vietnamese as a first language (L1) and English as a second language (L2). Participants (n = 33, initial mean age of 7.3 years) completed a total of eight tasks (four in each language) that measured vocabulary knowledge and lexical processing at four yearly time points. Multivariate hierarchical linear modeling was used to calculate L1 and L2 trajectories within the same model for each task. Main findings included (a) positive growth in each language, (b) greater gains in English resulting in shifts toward L2 dominance, and (c) different patterns for receptive and expressive domains. Timing of shifts to L2 dominance underscored L1 skills that are resilient and vulnerable to increases in L2 proficiency. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  8. Rodent Versions of the Iowa Gambling Task: Opportunities and Challenges for the Understanding of Decision-Making

    PubMed Central

    de Visser, Leonie; Homberg, Judith R.; Mitsogiannis, Manuela; Zeeb, Fiona D.; Rivalan, Marion; Fitoussi, Aurélie; Galhardo, Vasco; van den Bos, Ruud; Winstanley, Catherine A.; Dellu-Hagedorn, Françoise

    2011-01-01

    Impaired decision-making is a core problem in several psychiatric disorders including attention-deficit/hyperactivity disorder, schizophrenia, obsessive–compulsive disorder, mania, drug addiction, eating disorders, and substance abuse as well as in chronic pain. To ensure progress in the understanding of the neuropathophysiology of these disorders, animal models with good construct and predictive validity are indispensable. Many human studies aimed at measuring decision-making capacities use the Iowa gambling task (IGT), a task designed to model everyday life choices through a conflict between immediate gratification and long-term outcomes. Recently, new rodent models based on the same principle have been developed to investigate the neurobiological mechanisms underlying IGT-like decision-making on behavioral, neural, and pharmacological levels. The comparative strengths, as well as the similarities and differences between these paradigms are discussed. The contribution of these models to elucidate the neurobehavioral factors that lead to poor decision-making and to the development of better treatments for psychiatric illness is considered, along with important future directions and potential limitations. PMID:22013406

  9. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    NASA Technical Reports Server (NTRS)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands that designs impose on operators, to aid engineers in the conceptual design of aircraft crewstations. This report describes the TLM and the results of a series of experiments run during this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of the TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and as part of a complete human operator simulation, and a brief introduction to the TLM software design.

  10. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety


    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-01-01

    Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  11. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    NASA Astrophysics Data System (ADS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-02-01

    Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  12. Final Report on ITER Task Agreement 81-08

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard L. Moore

    As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.

  13. Temporal motifs reveal collaboration patterns in online task-oriented networks

    NASA Astrophysics Data System (ADS)

    Xuan, Qi; Fang, Huiting; Fu, Chenbo; Filkov, Vladimir

    2015-05-01

    Real networks feature layers of interactions and complexity. In them, different types of nodes can interact with each other via a variety of events. Examples of this complexity are task-oriented social networks (TOSNs), where teams of people share tasks towards creating a quality artifact, such as academic research papers or software development in commercial or open source environments. Accomplishing those tasks involves both work, e.g., writing the papers or code, and communication, to discuss and coordinate. Taking into account the different types of activities and how they alternate over time can result in much more precise understanding of the TOSNs behaviors and outcomes. That calls for modeling techniques that can accommodate both node and link heterogeneity as well as temporal change. In this paper, we report on methodology for finding temporal motifs in TOSNs, limited to a system of two people and an artifact. We apply the methods to publicly available data of TOSNs from 31 Open Source Software projects. We find that these temporal motifs are enriched in the observed data. When applied to software development outcome, temporal motifs reveal a distinct dependency between collaboration and communication in the code writing process. Moreover, we show that models based on temporal motifs can be used to more precisely relate both individual developer centrality and team cohesion to programmer productivity than models based on aggregated TOSNs.
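The notion of a temporal motif for a two-person-plus-artifact system can be illustrated by counting ordered pairs of event types that occur within a time window. The event labels and window below are hypothetical, not the paper's definitions.

```python
from collections import Counter
from itertools import combinations

def temporal_motifs(events, window):
    """Count ordered pairs of event types occurring within `window`
    time units of each other, e.g. a code change ('W') followed by a
    communication event ('C')."""
    events = sorted(events, key=lambda e: e[1])  # sort by timestamp
    counts = Counter()
    for (t1, s1), (t2, s2) in combinations(events, 2):
        if 0 < s2 - s1 <= window:
            counts[(t1, t2)] += 1
    return counts

# Toy dyad history: (event type, timestamp); labels are illustrative.
history = [("W", 0), ("C", 1), ("W", 2), ("C", 3), ("C", 10)]
print(temporal_motifs(history, window=2))
```

Enrichment of a motif such as `("W", "C")` relative to a randomized baseline is what would signal a dependency between code writing and communication.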

  14. Temporal motifs reveal collaboration patterns in online task-oriented networks.

    PubMed

    Xuan, Qi; Fang, Huiting; Fu, Chenbo; Filkov, Vladimir

    2015-05-01

    Real networks feature layers of interactions and complexity. In them, different types of nodes can interact with each other via a variety of events. Examples of this complexity are task-oriented social networks (TOSNs), where teams of people share tasks towards creating a quality artifact, such as academic research papers or software development in commercial or open source environments. Accomplishing those tasks involves both work, e.g., writing the papers or code, and communication, to discuss and coordinate. Taking into account the different types of activities and how they alternate over time can result in much more precise understanding of the TOSNs behaviors and outcomes. That calls for modeling techniques that can accommodate both node and link heterogeneity as well as temporal change. In this paper, we report on methodology for finding temporal motifs in TOSNs, limited to a system of two people and an artifact. We apply the methods to publicly available data of TOSNs from 31 Open Source Software projects. We find that these temporal motifs are enriched in the observed data. When applied to software development outcome, temporal motifs reveal a distinct dependency between collaboration and communication in the code writing process. Moreover, we show that models based on temporal motifs can be used to more precisely relate both individual developer centrality and team cohesion to programmer productivity than models based on aggregated TOSNs.

  15. User-Assisted Store Recycling for Dynamic Task Graph Schedulers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan

    The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because a recycling function can be input-data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overhead, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.
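The validity condition a store-recycling function must satisfy — never overwrite a memory slot while an earlier item stored there still has pending consumers — can be checked directly for a sequential schedule. The function and graph below are an illustrative sketch, not the paper's algorithm or API.

```python
def recycling_valid(order, consumers, slot_of):
    """Check, for a sequential schedule `order`, that no memory slot is
    overwritten while an earlier item in it still has pending consumers.
    `consumers[p]` lists the tasks that read producer p's item; `slot_of`
    is the candidate store-recycling function (names are hypothetical)."""
    pending = {}                       # slot -> consumers still to run
    for task in order:
        for waiting in pending.values():
            waiting.discard(task)      # running `task` consumes its inputs
        slot = slot_of[task]
        if pending.get(slot):          # a live item would be clobbered
            return False
        pending[slot] = set(consumers.get(task, ()))
    return True

# Tasks a, b, c run in order; c reads the item a produced.
order = ["a", "b", "c"]
consumers = {"a": ["c"]}
ok  = recycling_valid(order, consumers, {"a": 0, "b": 1, "c": 0})
bad = recycling_valid(order, consumers, {"a": 0, "b": 0, "c": 1})
print(ok, bad)  # reusing a's slot for b clobbers a value c still needs
```

The paper's setting is harder: schedules are dynamic and parallel, so validity must hold for every legal interleaving, which is why an incorrect choice needs a runtime recovery path.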

  16. Summaries of Minnehaha Creek Watershed District Plans/Studies/Reports

    DTIC Science & Technology

    2004-01-30

    + Management of all wetland functional assessment data in a Microsoft Access© database + Development of a GIS wetland data management system + Recommendations... General Task B: Design GIS-Based Decision Making Model: Scenario-Based ($125,000) Model of Landuse Hydro Data Monitoring; Task C: Water Quality... + Landuse and land cover data + Watershed GIS data layers + Flood Insurance Rate Maps + Proposed project locations + Stream miles, reaches and conditions

  17. Students' Development of Structure Sense for the Distributive Law

    ERIC Educational Resources Information Center

    Schüler-Meyer, Alexander

    2017-01-01

    After being introduced to the distributive law in meaningful contexts, students need to extend its scope of application to unfamiliar expressions. In this article, a process model for the development of structure sense is developed. Building on this model, this article reports on a design research project in which exercise tasks support students…

  18. A quantum probability account of order effects in inference.

    PubMed

    Trueblood, Jennifer S; Busemeyer, Jerome R

    2011-01-01

    Order of information plays a crucial role in the process of updating beliefs across time. In fact, the presence of order effects makes a classical or Bayesian approach to inference difficult. As a result, the existing models of inference, such as the belief-adjustment model, merely provide an ad hoc explanation for these effects. We postulate a quantum inference model for order effects based on the axiomatic principles of quantum probability theory. The quantum inference model explains order effects by transforming a state vector with different sequences of operators for different orderings of information. We demonstrate this process by fitting the quantum model to data collected in a medical diagnostic task and a jury decision-making task. To further test the quantum inference model, a new jury decision-making experiment is developed. Using the results of this experiment, we compare the quantum inference model with two versions of the belief-adjustment model, the adding model and the averaging model. We show that both the quantum model and the adding model provide good fits to the data. To distinguish the quantum model from the adding model, we develop a new experiment involving extreme evidence. The results from this new experiment suggest that the adding model faces limitations when accounting for tasks involving extreme evidence, whereas the quantum inference model does not. Ultimately, we argue that the quantum model provides a more coherent account for order effects that was not possible before. Copyright © 2011 Cognitive Science Society, Inc.
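The mechanism behind the quantum account — different orderings of information correspond to different sequences of projection operators, and projections need not commute — can be demonstrated numerically. The projectors and belief state below are a minimal two-dimensional sketch, not the paper's fitted model.

```python
import numpy as np

# P_a projects onto the first basis direction; P_b onto a 45-degree
# direction. These two projectors do not commute.
P_a = np.array([[1.0, 0.0], [0.0, 0.0]])
v = np.array([1.0, 1.0]) / np.sqrt(2.0)
P_b = np.outer(v, v)

psi = np.array([0.6, 0.8])  # initial belief state, unit length

def seq_prob(first, second, state):
    """Probability of accepting evidence `first` then `second`:
    squared length of the state after both projections."""
    return float(np.linalg.norm(second @ (first @ state)) ** 2)

p_ab = seq_prob(P_a, P_b, psi)   # evidence A before B
p_ba = seq_prob(P_b, P_a, psi)   # evidence B before A
print(p_ab, p_ba)                # unequal: order of evidence matters
```

In a classical (commutative) probability model these two sequence probabilities would coincide, which is why order effects are awkward for Bayesian updating but natural here.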

  19. Using Explanatory Item Response Models to Evaluate Complex Scientific Tasks Designed for the Next Generation Science Standards

    NASA Astrophysics Data System (ADS)

    Chiu, Tina

    This dissertation includes three studies that analyze a new set of assessment tasks developed by the Learning Progressions in Middle School Science (LPS) Project. These assessment tasks were designed to measure science content knowledge on the structure of matter domain and scientific argumentation, while following the goals from the Next Generation Science Standards (NGSS). The three studies focus on the evidence available for the success of this design and its implementation, generally labelled as "validity" evidence. I use explanatory item response models (EIRMs) as the overarching framework to investigate these assessment tasks. These models can be useful when gathering validity evidence for assessments as they can help explain student learning and group differences. In the first study, I explore the dimensionality of the LPS assessment by comparing the fit of unidimensional, between-item multidimensional, and Rasch testlet models to see which is most appropriate for this data. By applying multidimensional item response models, multiple relationships can be investigated, and in turn, allow for a more substantive look into the assessment tasks. The second study focuses on person predictors through latent regression and differential item functioning (DIF) models. Latent regression models show the influence of certain person characteristics on item responses, while DIF models test whether one group is differentially affected by specific assessment items, after conditioning on latent ability. Finally, the last study applies the linear logistic test model (LLTM) to investigate whether item features can help explain differences in item difficulties.
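The LLTM used in the third study explains Rasch item difficulties as a weighted sum of item features, beta_i = sum_k q_ik * eta_k. The feature names and weights below are illustrative values, not estimates from the LPS data.

```python
import math

def rasch_p(theta, beta):
    """Rasch probability that a person with ability theta answers an
    item of difficulty beta correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

# Hypothetical feature weights eta: how much each item feature adds
# to difficulty.
eta = {"argumentation": 0.8, "multi_step": 0.5}

# Q-matrix: which features each item carries.
items = {
    "item1": {"argumentation": 1, "multi_step": 0},
    "item2": {"argumentation": 1, "multi_step": 1},
}

# LLTM decomposition: beta_i = sum_k q_ik * eta_k
beta = {i: sum(q * eta[k] for k, q in feats.items())
        for i, feats in items.items()}

print(beta["item2"] > beta["item1"])                          # extra feature -> harder
print(rasch_p(1.0, beta["item1"]) > rasch_p(1.0, beta["item2"]))
```

Fitting the model means estimating the eta weights from response data; if the feature-based difficulties reproduce the freely estimated ones, the item features "explain" difficulty in the sense the dissertation tests.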

  20. Using Students' Interests as Algebraic Models

    ERIC Educational Resources Information Center

    Whaley, Kenneth A.

    2012-01-01

    Fostering algebraic thinking is an important goal for middle-grades mathematics teachers. Developing mathematical reasoning requires that teachers cultivate students' habits of mind. Teachers develop students' understanding of algebra by engaging them in tasks that involve modeling and representation. This study was designed to investigate how…

  1. Personality Traits Moderate the Effect of Workload Sources on Perceived Workload in Flying Column Police Officers

    PubMed Central

    Chiorri, Carlo; Garbarino, Sergio; Bracco, Fabrizio; Magnavita, Nicola

    2015-01-01

    Previous research has suggested that personality traits of the Five Factor Model play a role in worker's response to workload. The aim of this study was to investigate the association of personality traits of first responders with their perceived workload in real-life tasks. A flying column of 269 police officers completed a measure of subjective workload (NASA-Task Load Index) after intervention tasks in a major public event. Officers' scores on a measure of Five Factor Model personality traits were obtained from archival data. Linear Mixed Modeling was used to test the direct and interaction effects of personality traits on workload scores once controlling for background variables, task type and workload source (mental, temporal and physical demand of the task, perceived effort, dissatisfaction for the performance and frustration due to the task). All personality traits except extraversion significantly interacted at least with one workload source. Perceived workload in flying column police officers appears to be the result of their personality characteristics interacting with the workload source. The implications of these results for the development of support measures aimed at reducing the impact of workload in this category of workers are discussed. PMID:26640456

  2. Modeling shared resources with generalized synchronization within a Petri net bottom-up approach.

    PubMed

    Ferrarini, L; Trioni, M

    1996-01-01

    This paper proposes a simple and effective way to represent shared resources in manufacturing systems within a Petri net model previously developed. Such a model relies on a bottom-up and modular approach to synthesis and analysis. The designer may define elementary tasks and then connect them with one another using three kinds of connections: self-loops, inhibitor arcs and simple synchronizations. A theoretical framework has been established for the analysis of liveness and reversibility of such models. The generalized synchronization, formalized here, extends the simple synchronization by allowing the merging of suitable subnets among elementary tasks. It is proved that, under suitable but not restrictive hypotheses, the generalized synchronization may be substituted for a simple one, and thus remains compatible with the theoretical framework already developed.
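The basic mechanism — a shared resource modeled as a place whose single token enforces mutual exclusion between elementary tasks — can be sketched with a minimal place/transition net. This toy net illustrates the general idea only; it does not reproduce the paper's connection types or synchronization construct.

```python
class PetriNet:
    """Minimal place/transition net: fire() moves tokens if enabled."""
    def __init__(self, marking, transitions):
        self.m = dict(marking)          # place -> token count
        self.t = transitions            # name -> (input places, output places)

    def enabled(self, name):
        ins, _ = self.t[name]
        return all(self.m[p] >= 1 for p in ins)

    def fire(self, name):
        if not self.enabled(name):
            return False
        ins, outs = self.t[name]
        for p in ins:
            self.m[p] -= 1
        for p in outs:
            self.m[p] += 1
        return True

# Two elementary tasks sharing one machine: the single token in `res`
# enforces mutual exclusion.
net = PetriNet(
    {"idle1": 1, "busy1": 0, "idle2": 1, "busy2": 0, "res": 1},
    {
        "start1": (["idle1", "res"], ["busy1"]),
        "end1":   (["busy1"], ["idle1", "res"]),
        "start2": (["idle2", "res"], ["busy2"]),
        "end2":   (["busy2"], ["idle2", "res"]),
    },
)
net.fire("start1")
print(net.enabled("start2"))  # the shared resource is taken
net.fire("end1")
print(net.enabled("start2"))  # resource released, task 2 may start
```

Liveness and reversibility of such nets — whether every transition can always eventually fire again and the initial marking is always recoverable — are exactly the properties the paper's framework analyzes.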

  3. The role of production and teamwork practices in construction safety: a cognitive model and an empirical case study.

    PubMed

    Mitropoulos, Panagiotis Takis; Cupido, Gerardo

    2009-01-01

    In construction, the challenge for researchers and practitioners is to develop work systems (production processes and teams) that can achieve high productivity and high safety at the same time. However, construction accident causation models ignore the role of work practices and teamwork. This study investigates the mechanisms by which production and teamwork practices affect the likelihood of accidents. The paper synthesizes a new model for construction safety based on the cognitive perspective (Fuller's Task-Demand-Capability Interface model, 2005) and then presents an exploratory case study. The case study investigates and compares the work practices of two residential framing crews: a 'High Reliability Crew' (HRC)--that is, a crew with exceptional productivity and safety over several years, and an average performing crew from the same company. The model explains how the production and teamwork practices generate the work situations that workers face (the task demands) and affect the workers ability to cope (capabilities). The case study indicates that the work practices of the HRC directly influence the task demands and match them with the applied capabilities. These practices were guided by the 'principle' of avoiding errors and rework and included work planning and preparation, work distribution, managing the production pressures, and quality and behavior monitoring. The Task Demand-Capability model links construction research to a cognitive model of accident causation and provides a new way to conceptualize safety as an emergent property of the production practices and teamwork processes. The empirical evidence indicates that the crews' work practices and team processes strongly affect the task demands, the applied capabilities, and the match between demands and capabilities. The proposed model and the exploratory case study will guide further discovery of work practices and teamwork processes that can increase both productivity and safety in construction operations. Such understanding will enable training of construction foremen and crews in these practices to systematically develop high reliability crews.

  4. Evaluation of a low-cost, 3D-printed model for bronchoscopy training.

    PubMed

    Parotto, Matteo; Jiansen, Joshua Qua; AboTaiban, Ahmed; Ioukhova, Svetlana; Agzamov, Alisher; Cooper, Richard; O'Leary, Gerald; Meineri, Massimiliano

    2017-01-01

    Flexible bronchoscopy is a fundamental procedure in anaesthesia and critical care medicine. Although learning this procedure is a complex task, the use of simulation-based training provides significant advantages, such as enhanced patient safety. Access to bronchoscopy simulators may be limited in low-resource settings. We have developed a low-cost 3D-printed bronchoscopy training model. A parametric airway model was obtained from an online medical model repository and fabricated using a low-cost 3D printer. The participating physicians had no prior bronchoscopy experience. Participants received a 30-minute lecture on flexible bronchoscopy and were administered a 15-item pre-test questionnaire on bronchoscopy. Afterwards, participants were instructed to perform a series of predetermined bronchoscopy tasks on the 3D printed simulator on 4 consecutive occasions. The time needed to perform the tasks and the quality of task performance (identification of bronchial anatomy, technique, dexterity, lack of trauma) were recorded. Upon completion of the simulator tests, participants were administered the 15-item questionnaire (post-test) once again. Participant satisfaction data on the perceived usefulness and accuracy of the 3D model were collected. A statistical analysis was performed using the t-test. Data are reported as mean values (± standard deviation). The time needed to complete all tasks was 152.9 ± 71.5 sec on the 1st attempt vs. 98.7 ± 40.3 sec on the 4th attempt (P = 0.03). Likewise, the quality of performance score improved from 8.3 ± 6.7 to 18.2 ± 2.5 (P < 0.0001). The average number of correct answers in the questionnaire was 6.8 ± 1.9 pre-test and 13.3 ± 3.1 post-test (P < 0.0001). Participants reported a high level of satisfaction with the perceived usefulness and accuracy of the model. We developed a 3D-printed model for bronchoscopy training. This model improved trainee performance and may represent a valid, low-cost bronchoscopy training tool.

  5. Development of 3D electromagnetic modeling tools for airborne vehicles

    NASA Technical Reports Server (NTRS)

    Volakis, John L.

    1992-01-01

    The main goal of this project is to develop methodologies for scattering by airborne composite vehicles. Although our primary focus continues to be the development of a general purpose code for analyzing the entire structure as a single unit, a number of other tasks are also pursued in parallel with this effort. These tasks are important in testing the overall approach and in developing suitable models for materials coatings, junctions and, more generally, in assessing the effectiveness of the various parts comprising the final code. Here, we briefly discuss our progress on the five different tasks which were pursued during this period. Our progress on each of these tasks is described in the detailed reports (listed at the end of this report) and the memoranda included. The first task described below is, of course, the core of this project and deals with the development of the overall code. Undoubtedly, it is the outcome of the research which was funded by NASA-Ames and the Navy over the past three years. During this year we developed the first finite element code for scattering by structures of arbitrary shape and composition. The code employs a new absorbing boundary condition which allows termination of the finite element mesh only 0.3 lambda from the outer surface of the target. This leads to a remarkable reduction of the mesh size and is a unique feature of the code. Other unique features of this code include capabilities to model resistive sheets, impedance sheets and anisotropic materials. This last capability is the latest feature of the code and is still under development. The code has been extensively validated for a number of composite geometries and some examples are given. The validation of the code is still in progress for anisotropic and larger non-metallic geometries and cavities. The developed finite element code is based on a Galerkin's formulation and employs edge-based tetrahedral elements for discretizing the dielectric sections and the region between the target and the outer mesh termination boundary (ATB). This boundary is placed in conformity with the target's outer surface, thus resulting in additional reduction of the unknown count.

  6. Maximally Expressive Modeling

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth; Richardson, Lea

    2004-01-01

    Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.

  7. Model construction of “earning money by taking photos”

    NASA Astrophysics Data System (ADS)

    Yang, Jingmei

    2018-03-01

    In the information era, with increasingly developed mobile networks, "earning money by taking photos" is a self-service model built on the mobile Internet. A user downloads the APP, registers as a member, accepts a task that requires taking photographs at a given location, and earns the task's reward through the APP. This article uses the task data and membership data of a completed project, including each member's location and reputation value. Under reasonable assumptions, the data were processed with MATLAB, SPSS and Excel. The article studies the functional relationship between task completion, task position (GPS latitude and longitude) and task price; it analyzes the project's task pricing rules and the reasons why some tasks were not completed, applying multivariate regression and the GeoQ software to the data, and using charts to present the complex data clearly. A simulation of realistic conditions is also used to analyze why tasks went uncompleted. Finally, compared with the original scheme, a new task pricing scheme is designed for the project: confidence levels obtained with SPSS are used to estimate a reasonable range for task pricing, and a new pricing scheme is predicted and designed within that range.

  8. Reasoning about Probabilistic Security Using Task-PIOAs

    NASA Astrophysics Data System (ADS)

    Jaggard, Aaron D.; Meadows, Catherine; Mislove, Michael; Segala, Roberto

    Task-structured probabilistic input/output automata (Task-PIOAs) are concurrent probabilistic automata that, among other things, have been used to provide a formal framework for the universal composability paradigms of protocol security. One of their advantages is that they allow one to distinguish high-level nondeterminism that can affect the outcome of the protocol from low-level choices, which cannot. We present an alternative approach to analyzing the structure of Task-PIOAs that relies on ordered sets. We focus on two of the components that are required to define and apply Task-PIOAs: discrete probability theory and automata theory. We believe our development gives insight into the structure of Task-PIOAs and how they can be utilized to model crypto-protocols. We illustrate our approach with an example from anonymity, an area that has not previously been addressed using Task-PIOAs. We model Chaum's Dining Cryptographers Protocol at a level that does not require cryptographic primitives in the analysis. We show via this example how our approach can leverage a proof of security in the case a principal behaves deterministically to prove security when that principal behaves probabilistically.

  9. Automated Visual Cognitive Tasks for Recording Neural Activity Using a Floor Projection Maze

    PubMed Central

    Kent, Brendon W.; Yang, Fang-Chi; Burwell, Rebecca D.

    2014-01-01

    Neuropsychological tasks used in primates to investigate mechanisms of learning and memory are typically visually guided cognitive tasks. We have developed visual cognitive tasks for rats using the Floor Projection Maze [1,2] that are optimized for the visual abilities of rats, permitting stronger comparisons of experimental findings with other species. In order to investigate neural correlates of learning and memory, we have integrated electrophysiological recordings into fully automated cognitive tasks on the Floor Projection Maze [1,2]. Behavioral software interfaced with an animal tracking system allows monitoring of the animal's behavior with precise control of image presentation and reward contingencies in better-trained animals. Integration with an in vivo electrophysiological recording system enables examination of behavioral correlates of neural activity at selected epochs of a given cognitive task. We describe protocols for a model system that combines automated visual presentation of information to rodents and intracranial reward with electrophysiological approaches. Our model system offers a sophisticated set of tools as a framework for other cognitive tasks to better isolate and identify specific mechanisms contributing to particular cognitive processes. PMID:24638057

  10. An efficient approach to understanding and predicting the effects of multiple task characteristics on performance.

    PubMed

    Richardson, Miles

    2017-04-01

    In ergonomics there is often a need to identify and predict the separate effects of multiple factors on performance. A cost-effective fractional factorial approach to understanding the relationship between task characteristics and task performance is presented. The method has been shown to provide sufficient independent variability to reveal and predict the effects of task characteristics on performance in two domains. The five steps outlined are: selection of performance measure, task characteristic identification, task design for user trials, data collection, regression model development and task characteristic analysis. The approach can be used for furthering knowledge of task performance, theoretical understanding, experimental control and prediction of task performance. Practitioner Summary: A cost-effective method to identify and predict the separate effects of multiple factors on performance is presented. The five steps allow a better understanding of task factors during the design process.
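    The regression-model step of the five-step approach can be sketched with a half-fraction two-level design for three task characteristics, fit by ordinary least squares (the characteristics, performance scores, and NumPy workflow are illustrative assumptions, not the study's data):

```python
import numpy as np

# 2^(3-1) half-fraction design for three two-level task characteristics,
# coded -1/+1; the third column is aliased with the interaction (C = A*B).
design = np.array([
    [-1, -1,  1],
    [ 1, -1, -1],
    [-1,  1, -1],
    [ 1,  1,  1],
])

# Hypothetical task-performance scores, one per user-trial condition.
performance = np.array([12.0, 18.0, 9.0, 15.0])

# Fit performance = b0 + b1*A + b2*B + b3*C by least squares; with an
# orthogonal design the coefficients are independent effect estimates.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
# coef[1:] estimates the separate effect of each task characteristic.
```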

  11. Modeling driver behavior in a cognitive architecture.

    PubMed

    Salvucci, Dario D

    2006-01-01

    This paper explores the development of a rigorous computational model of driver behavior in a cognitive architecture--a computational framework with underlying psychological theories that incorporate basic properties and limitations of the human system. Computational modeling has emerged as a powerful tool for studying the complex task of driving, allowing researchers to simulate driver behavior and explore the parameters and constraints of this behavior. An integrated driver model developed in the ACT-R (Adaptive Control of Thought-Rational) cognitive architecture is described that focuses on the component processes of control, monitoring, and decision making in a multilane highway environment. This model accounts for the steering profiles, lateral position profiles, and gaze distributions of human drivers during lane keeping, curve negotiation, and lane changing. The model demonstrates how cognitive architectures facilitate understanding of driver behavior in the context of general human abilities and constraints and how the driving domain benefits cognitive architectures by pushing model development toward more complex, realistic tasks. The model can also serve as a core computational engine for practical applications that predict and recognize driver behavior and distraction.
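    The control component described above is commonly summarized by the two-point steering law of Salvucci and Gray, in which steering corrections are driven by the visual angles to a near and a far road point; the sketch below uses illustrative gains, not the model's published parameters:

```python
def steering_update(d_theta_far, d_theta_near, theta_near, dt,
                    k_far=16.0, k_near=3.0, k_i=5.0):
    """Change in steering angle (radians) for one control cycle.

    d_theta_far / d_theta_near: change in visual angle to the far and
    near road points since the last update; theta_near: current visual
    angle to the near point, acting as an integral (lane-position) term.
    The gain values are hypothetical stand-ins.
    """
    return k_far * d_theta_far + k_near * d_theta_near + k_i * theta_near * dt

# A drift of the far point to one side (nonzero angle change) produces a
# corrective steering change in the same direction; a centered, stable
# view produces no correction.
```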

  12. Catastrophe models for cognitive workload and fatigue in N-back tasks.

    PubMed

    Guastello, Stephen J; Reiter, Katherine; Malon, Matthew; Timm, Paul; Shircel, Anton; Shaline, James

    2015-04-01

    N-back tasks place a heavy load on working memory, and thus make good candidates for studying cognitive workload and fatigue (CWLF). This study extended previous work on CWLF which separated the two phenomena with two cusp catastrophe models. Participants were 113 undergraduates who completed 2-back and 3-back tasks with both auditory and visual stimuli simultaneously. Task data were complemented by several measures hypothesized to be related to cognitive elasticity and compensatory abilities and the NASA TLX ratings of subjective workload. The adjusted R2 was .980 for the workload model, which indicated a highly accurate prediction with six bifurcation (elasticity versus rigidity) effects: algebra flexibility, TLX performance, effort, and frustration; and psychosocial measures of inflexibility and monitoring. There were also two cognitive load effects (asymmetry): 2 vs. 3-back and TLX temporal demands. The adjusted R2 was .454 for the fatigue model, which contained two bifurcation variables indicating the amount of work done, and algebra flexibility as the compensatory ability variable. Both cusp models were stronger than the next best linear alternative model. The study makes an important step forward by uncovering an apparently complete model for workload, finding the role of subjective workload in the context of performance dynamics, and finding CWLF dynamics in yet another type of memory-intensive task. The results were also consistent with the developing notion that performance deficits induced by workload and deficits induced by fatigue result from the impact of the task on the workspace and executive functions of working memory respectively.
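    Cusp catastrophe models of this kind are typically estimated as a polynomial regression on difference scores, with separate bifurcation and asymmetry terms; the sketch below recovers known coefficients from synthetic, noise-free data (variable names and coefficient values are hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
z1 = rng.normal(size=n)   # performance at time 1 (standardized)
a = rng.normal(size=n)    # asymmetry variable, e.g. cognitive load
b = rng.normal(size=n)    # bifurcation variable, e.g. elasticity vs. rigidity

# Synthetic change scores generated from the cusp polynomial
# dz = beta0 + beta1*z1^3 + beta2*z1^2 + beta3*b*z1 + beta4*a
dz = 0.5 - 0.8 * z1**3 + 0.3 * z1**2 + 0.6 * b * z1 + 0.4 * a

# Regression recovers the generating coefficients exactly on noise-free data.
X = np.column_stack([np.ones(n), z1**3, z1**2, b * z1, a])
beta, *_ = np.linalg.lstsq(X, dz, rcond=None)
```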

  13. Twelfth Annual Conference on Manual Control

    NASA Technical Reports Server (NTRS)

    Wempe, T. E.

    1976-01-01

    Main topics discussed cover multi-task decision making, attention allocation and workload measurement, displays and controls, nonvisual displays, tracking and other psychomotor tasks, automobile driving, handling qualities and pilot ratings, remote manipulation, system identification, control models, and motion and visual cues. Sixty-five papers are included, with presentations on results of analytical studies to develop and evaluate human operator models for a range of control tasks, vehicle dynamics, and display situations; results of tests of physiological control systems and applications to medical problems; and results of simulator and flight tests to determine display, control, and dynamics effects on operator performance and workload for aircraft, automobile, and remote control systems.

  14. EMISSION AND SURFACE EXCHANGE PROCESS

    EPA Science Inventory

    This task supports the development, evaluation, and application of emission and dry deposition algorithms in air quality simulation models, such as the Models-3/Community Multiscale Air Quality (CMAQ) modeling system. Emission estimates influence greatly the accuracy of air qual...

  15. R&M (Reliability and Maintainability) Program Cost Drivers.

    DTIC Science & Technology

    1987-05-01

    Specific data points used to develop the models (i.e., labor hours and associated systems and task application characteristics) were obtained from three...study data base used to generate the CER’s can be expanded by adding project data points to the input data given in Appendix 13, adjusting the CER...FRACAS, worst-case/thermal analyses, stress screening and R-growth. However, the studies did not assign benefits to specific task areas. c. Task

  16. Real-time value-driven diagnosis

    NASA Technical Reports Server (NTRS)

    Dambrosio, Bruce

    1995-01-01

    Diagnosis is often thought of as an isolated task in theoretical reasoning (reasoning with the goal of updating our beliefs about the world). We present a decision-theoretic interpretation of diagnosis as a task in practical reasoning (reasoning with the goal of acting in the world), and sketch components of our approach to this task. These components include an abstract problem description, a decision-theoretic model of the basic task, a set of inference methods suitable for evaluating the decision representation in real-time, and a control architecture to provide the needed continuing coordination between the agent and its environment. A principal contribution of this work is the representation and inference methods we have developed, which extend previously available probabilistic inference methods and narrow, somewhat, the gap between probabilistic and logical models of diagnosis.
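    The decision-theoretic reading of diagnosis can be made concrete in a few lines: the agent picks the action maximizing expected utility over its fault beliefs, which need not target the most probable fault (the faults, probabilities, and utilities below are all hypothetical):

```python
# Belief over faults after observing symptoms.
p_fault = {"sensor": 0.7, "valve": 0.3}

# utility[action][fault]: payoff of taking the action given the true fault.
utility = {
    "replace_sensor": {"sensor": 10.0, "valve": -5.0},
    "replace_valve":  {"sensor": 2.0,  "valve": 20.0},
}

# Expected utility of each action under the current belief.
expected = {
    action: sum(p_fault[f] * u for f, u in payoffs.items())
    for action, payoffs in utility.items()
}
best_action = max(expected, key=expected.get)
# Here the sensor is the likelier fault, yet replacing the valve has the
# higher expected utility: acting well differs from believing accurately.
```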

  17. Using a generalized linear mixed model approach to explore the role of age, motor proficiency, and cognitive styles in children's reach estimation accuracy.

    PubMed

    Caçola, Priscila M; Pant, Mohan D

    2014-10-01

    The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.

  18. Pulsed Lidar Performance/Technical Maturity Assessment

    NASA Technical Reports Server (NTRS)

    Gimmestad, Gary G.; West, Leanne L.; Wood, Jack W.; Frehlich, Rod

    2004-01-01

    This report describes the results of investigations performed by the Georgia Tech Research Institute (GTRI) and the National Center for Atmospheric Research (NCAR) under a task entitled 'Pulsed Lidar Performance/Technical Maturity Assessment' funded by the Crew Systems Branch of the Airborne Systems Competency at the NASA Langley Research Center. The investigations included two tasks, 1.1(a) and 1.1(b). The tasks discussed in this report are in support of the NASA Virtual Airspace Modeling and Simulation (VAMS) program and are designed to evaluate a pulsed lidar that will be required for active wake vortex avoidance solutions. The Coherent Technologies, Inc. (CTI) WindTracer LIDAR is an eye-safe, 2-micron, coherent, pulsed Doppler lidar with wake tracking capability. The actual performance of the WindTracer system was to be quantified. In addition, the sensor performance has been assessed and modeled, and the models have been included in simulation efforts. The WindTracer LIDAR was purchased by the Federal Aviation Administration (FAA) for use in near-term field data collection efforts as part of a joint NASA/FAA wake vortex research program. In the joint research program, a minimum common wake and weather data collection platform will be defined. NASA Langley will use the field data to support wake model development and operational concept investigation in support of the VAMS project, where the ultimate goal is to improve airport capacity and safety. Task 1.1(a) was performed by NCAR in Boulder, Colorado, to analyze the lidar system and determine its performance and capabilities based on results from simulated lidar data with analytic wake vortex models provided by NASA; these results were then compared to the vendor's claims for the operational specifications of the lidar. Task 1.1(a) is described in Section 3, including the vortex model, lidar parameters and simulations, and results for both detection and tracking of wake vortices generated by Boeing 737s and 747s.
Task 1.1(b) was performed by GTRI in Atlanta, Georgia and is described in Section 4. Task 1.1(b) includes a description of the St. Louis Airport (STL) field test being conducted by the Volpe National Transportation Systems Center, and it also addresses the development of a test plan to validate simulation studies conducted as part of Task 1.1(a). Section 4.2 provides a description of the Volpe STL field tests, and Section 4.3 describes 3 possible ways to validate the WindTracer lidar simulations performed in Task 1.1(a).

  19. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (key components of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  20. Task-Specific Response Strategy Selection on the Basis of Recent Training Experience

    PubMed Central

    Fulvio, Jacqueline M.; Green, C. Shawn; Schrater, Paul R.

    2014-01-01

    The goal of training is to produce learning for a range of activities that are typically more general than the training task itself. Despite a century of research, predicting the scope of learning from the content of training has proven extremely difficult, with the same task producing narrowly focused learning strategies in some cases and broadly scoped learning strategies in others. Here we test the hypothesis that human subjects will prefer a decision strategy that maximizes performance and reduces uncertainty given the demands of the training task and that the strategy chosen will then predict the extent to which learning is transferable. To test this hypothesis, we trained subjects on a moving dot extrapolation task that makes distinct predictions for two types of learning strategy: a narrow model-free strategy that learns an input-output mapping for training stimuli, and a general model-based strategy that utilizes humans' default predictive model for a class of trajectories. When the number of distinct training trajectories is low, we predict better performance for the mapping strategy, but as the number increases, a predictive model is increasingly favored. Consonant with predictions, subject extrapolations for test trajectories were consistent with using a mapping strategy when trained on a small number of training trajectories and a predictive model when trained on a larger number. The general framework developed here can thus be useful both in interpreting previous patterns of task-specific versus task-general learning, as well as in building future training paradigms with certain desired outcomes. PMID:24391490
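    The contrast between the two strategy types can be caricatured in a few lines: a model-free mapping memorizes trained input-output pairs and fails on novel trajectories, while a model-based strategy fits a generative assumption and extrapolates (the constant-velocity trajectories here are a hypothetical simplification of the dot-extrapolation task):

```python
# Trajectories follow y = v * t; the task: given y at t=1 and t=2,
# extrapolate y at t=3. Only two speeds are seen during training.
train_speeds = [1.0, 2.0]

# Model-free strategy: a memorized input -> output mapping.
mapping = {(v * 1, v * 2): v * 3 for v in train_speeds}

def model_free(y1, y2):
    # Returns None for any trajectory not encountered during training.
    return mapping.get((y1, y2))

# Model-based strategy: assume the generative model y = v * t and
# estimate the single parameter v from the two observations.
def model_based(y1, y2):
    v = y2 - y1
    return v * 3
```

On a trained trajectory both strategies return the same extrapolation; on a novel speed only the model-based strategy produces an answer, mirroring the narrow-versus-general transfer the record describes.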

  1. Putative cognitive enhancers in preclinical models related to schizophrenia: the search for an elusive target.

    PubMed

    Barak, Segev; Weiner, Ina

    2011-08-01

    Several developments have converged to drive what may be called "the cognitive revolution" in drug discovery in schizophrenia (SCZ), including the emphasis on cognitive deficits as a core disabling aspect of SCZ, the increasing consensus that cognitive deficits are not treated satisfactorily by the available antipsychotic drugs (APDs), and the failure of animal models to predict drug efficacy for cognitive deficits in clinical trials. Consequently, in recent years, a paradigm shift has been encouraged in animal modeling, triggered by the NIMH sponsored Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) initiative, and intended to promote the development and use of behavioral measures in animals that can generate valid (clinically relevant) measures of cognition and thus promote the identification of cognition enhancers for SCZ. Here, we provide a non-exhaustive survey of the effects of putative cognition enhancers (PCEs) representing 10 pharmacological targets, as well as antipsychotic drugs (APDs), on SCZ-mimetic drugs (NMDA antagonists, the muscarinic antagonist scopolamine and the dopaminergic agonist amphetamine), in several tasks considered to measure cognitive processes/domains that are disrupted in SCZ (the five-choice serial reaction time task, sustained attention task, working and/or recognition memory (delayed (non)matching to sample, delayed alternation task, radial arm maze, novel object recognition), reversal learning, attentional set shifting, latent inhibition, and spatial learning and memory). We conclude that most of the available models have no capacity to distinguish between PCEs and APDs and that there is a need to establish models based on tasks whose perturbations lead to performance impairments that are resistant to APDs, and/or to accept APDs as a "weak gold standard". Several directions derived from the surveyed data are suggested. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task.

    PubMed

    Moënne-Loccoz, Cristóbal; Vergara, Rodrigo C; López, Vladimir; Mery, Domingo; Cosmelli, Diego

    2017-01-01

    Our daily interaction with the world is replete with situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM machine, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task.
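    The HMM machinery behind such an approach is compact; below is a minimal sketch of the forward algorithm with two hypothetical latent strategies ("explore" vs. "exploit") emitting the binary choices made at each level of the decision tree (the states, labels, and probabilities are illustrative assumptions, not the study's fitted model):

```python
import numpy as np

def forward(obs, pi, A, B):
    """Forward algorithm: likelihood of an observation sequence under an HMM.

    pi: initial state distribution; A[i, j]: transition probability from
    state i to j; B[i, o]: probability that state i emits observation o.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],    # "explore" tends to persist...
              [0.2, 0.8]])   # ...and so does "exploit"
B = np.array([[0.5, 0.5],    # explore: choices near chance
              [0.9, 0.1]])   # exploit: strongly prefers the learned branch
likelihood = forward([0, 0, 0, 1], pi, A, B)
```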

  3. Evaluation of image quality

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    This presentation outlines in viewgraph format a general approach to the evaluation of display system quality for aviation applications. This approach is based on the assumption that it is possible to develop a model of the display which captures most of the significant properties of the display. The display characteristics should include spatial and temporal resolution, intensity quantizing effects, spatial sampling, delays, etc. The model must be sufficiently well specified to permit generation of stimuli that simulate the output of the display system. The first step in the evaluation of display quality is an analysis of the tasks to be performed using the display. Thus, for example, if a display is used by a pilot during a final approach, the aesthetic aspects of the display may be less relevant than its dynamic characteristics. The opposite task requirements may apply to imaging systems used for displaying navigation charts. Thus, display quality is defined with regard to one or more tasks. Given a set of relevant tasks, there are many ways to approach display evaluation. The range of evaluation approaches includes visual inspection, rapid evaluation, part-task simulation, and full mission simulation. The work described is focused on two complementary approaches to rapid evaluation. The first approach is based on a model of the human visual system. A model of the human visual system is used to predict the performance of the selected tasks. The model-based evaluation approach permits very rapid and inexpensive evaluation of various design decisions. The second rapid evaluation approach employs specifically designed critical tests that embody many important characteristics of actual tasks. These are used in situations where a validated model is not available. These rapid evaluation tests are being implemented in a workstation environment.

  4. The effect of visual representation style in problem-solving: a perspective from cognitive processes.

    PubMed

    Nyamsuren, Enkhbold; Taatgen, Niels A

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving.

  5. The Effect of Visual Representation Style in Problem-Solving: A Perspective from Cognitive Processes

    PubMed Central

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving. PMID:24260415

  6. Cooperating Expert Systems For Space Station Power Distribution Management

    NASA Astrophysics Data System (ADS)

    Nguyen, T. A.; Chiou, W. C.

    1987-02-01

    In a complex system such as the manned Space Station, it is deemed necessary that many expert systems perform tasks in a concurrent and cooperative manner. An important question that arises is: what cooperative-task-performing models are appropriate for multiple expert systems to jointly perform tasks? The solution to this question will provide a crucial automation design criterion for the Space Station complex systems architecture. Based on a client/server model for performing tasks, we have developed a system that acts as a front-end to support loosely-coupled communications between expert systems running on multiple Symbolics machines. As an example, we use two ART*-based expert systems to demonstrate the concept of parallel symbolic manipulation for power distribution management and a dynamic load planner/scheduler in the simulated Space Station environment. This ongoing work will also explore other cooperative-task-performing models as alternatives for evaluating inter- and intra-expert-system communication mechanisms. It will serve as a testbed and a benchmarking tool for other Space Station expert subsystem communication and information exchange.

  7. Preliminary Analysis of Perfusionists’ Strategies for Managing Routine and Failure Mode Scenarios in Cardiopulmonary Bypass

    PubMed Central

    Power, Gerald; Miller, Anne

    2007-01-01

    Abstract: Cardiopulmonary bypass (CPB) is a complex task requiring high levels of practitioner expertise. Although some education standards exist, few are based on an analysis of perfusionists’ problem-solving needs. This study shows the efficacy of work domain analysis (WDA) as a framework for analyzing perfusionists’ conceptualization and problem-solving strategies. A WDA model of a CPB circuit was developed. A high-fidelity CPB simulator (Manbit) was used to present routine and oxygenator failure scenarios to six proficient perfusionists. The video-cued recall technique was used to elicit perfusionists’ conceptualization strategies. The resulting recall transcripts were coded using the WDA model and analyzed for associations between task completion times and patterns of conceptualization. The WDA model developed was able to account for and describe the thought process followed by each participant. It also showed that, although there was no correlation between experience with CPB and the ability to change an oxygenator, there was a link between specific thought patterns and efficiency in undertaking this task. Simulators are widely used in many fields of human endeavor, and in this research the attempt was made to use WDA to gain insights into the complexities of human thought when engaged in the complex task of conducting CPB. The assumption that experience equates with ability is challenged; rather, it is shown that thought process is a more significant determinant of success in complex tasks. WDA in combination with a CPB simulator may be used to elucidate successful strategies for completing complex tasks. PMID:17972450

  8. Bayesian Action–Perception Computational Model: Interaction of Production and Recognition of Cursive Letters

    PubMed Central

    Gilet, Estelle; Diard, Julien; Bessière, Pierre

    2011-01-01

    In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. Being a model of both perception and action processes, the purpose of this model is to study the interaction of these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments. PMID:21674043
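    The common engine behind all six tasks is Bayesian inversion; a toy version of the letter-recognition case shows the pattern (the letters, the single feature, and every probability are hypothetical stand-ins, not the BAP model's actual distributions):

```python
# Prior over candidate letters and likelihood of one observed
# "curvature" feature: P(feature | letter).
priors = {"a": 0.5, "o": 0.5}
likelihood = {"a": 0.2, "o": 0.7}

# Bayes' rule: posterior(letter) is proportional to prior * likelihood.
evidence = sum(priors[l] * likelihood[l] for l in priors)
posterior = {l: priors[l] * likelihood[l] / evidence for l in priors}
# The feature favors "o", so its posterior rises from 0.5 to about 0.78.
```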

  9. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

    PubMed

    Islam, R; Weir, C; Del Fiol, G

    2016-01-01

    Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
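    Cohen's kappa, the reliability statistic used in the coding step, is short enough to state in full (the two coders' labels below are hypothetical examples, not the study's transcripts):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed agreement: fraction of items with identical codes.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal code frequencies.
    p_chance = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

r1 = ["task", "task", "patient", "patient", "task", "patient"]
r2 = ["task", "task", "patient", "task", "task", "patient"]
kappa = cohens_kappa(r1, r2)  # 5/6 observed agreement, 0.5 by chance
```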

  10. The Bayesian Revolution Approaches Psychological Development

    ERIC Educational Resources Information Center

    Shultz, Thomas R.

    2007-01-01

    This commentary reviews five articles that apply Bayesian ideas to psychological development, some with psychology experiments, some with computational modeling, and some with both experiments and modeling. The reviewed work extends the current Bayesian revolution into tasks often studied in children, such as causal learning and word learning, and…

  11. A Constructive Neural-Network Approach to Modeling Psychological Development

    ERIC Educational Resources Information Center

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  12. Journalism as Model for Civic and Information Literacies

    ERIC Educational Resources Information Center

    Smirnov, Natalia; Saiyed, Gulnaz; Easterday, Matthew W.; Lam, Wan Shun Eva

    2018-01-01

    Journalism can serve as a generative disciplinary context for developing civic and information literacies needed to meaningfully participate in an increasingly networked and mediated public sphere. Using interviews with journalists, we developed a cognitive task analysis model, identifying an iterative sequence of production and domain-specific…

  13. EVALUATING THE REGIONAL PREDICTIVE CAPACITY OF A PROCESS-BASED MERCURY EXPOSURE MODEL (R-MCM) FOR LAKES ACROSS VERMONT AND NEW HAMPSHIRE, USA

    EPA Science Inventory

    Regulatory agencies are confronted with the daunting task of developing fish consumption advisories for a large number of lakes and rivers with limited resources. A feasible mechanism to develop region-wide fish advisories is by using a process-based mathematical model. One model of...

  14. Trading away what kind of jobs? Globalization, trade and tasks in the US economy

    PubMed Central

    Kemeny, Thomas; Rigby, David

    2015-01-01

    Economists and other social scientists are calling for a reassessment of the impact of international trade on labor markets in developed and developing countries. Classical models of globalization and trade, based upon the international exchange of finished goods, fail to capture the fragmentation of much commodity production and the geographical separation of individual production tasks. This fragmentation, captured in the growing volume of intra-industry trade, prompts investigation of the effects of trade within, rather than between, sectors of the economy. In this paper we examine the relationship between international trade and the task structure of US employment. We link disaggregate US trade data from 1972 to 2006, the NBER manufacturing database, the Decennial Census, and occupational and task data from the Dictionary of Occupational Titles. Within-industry shifts in task characteristics are linked to import competition and technological change. Our results suggest that trade has played a major role in the growth in relative demand for nonroutine tasks, particularly those requiring high levels of interpersonal interaction. PMID:26722134

  15. Trading away what kind of jobs? Globalization, trade and tasks in the US economy.

    PubMed

    Kemeny, Thomas; Rigby, David

    2012-04-01

    Economists and other social scientists are calling for a reassessment of the impact of international trade on labor markets in developed and developing countries. Classical models of globalization and trade, based upon the international exchange of finished goods, fail to capture the fragmentation of much commodity production and the geographical separation of individual production tasks. This fragmentation, captured in the growing volume of intra-industry trade, prompts investigation of the effects of trade within, rather than between, sectors of the economy. In this paper we examine the relationship between international trade and the task structure of US employment. We link disaggregate US trade data from 1972 to 2006, the NBER manufacturing database, the Decennial Census, and occupational and task data from the Dictionary of Occupational Titles. Within-industry shifts in task characteristics are linked to import competition and technological change. Our results suggest that trade has played a major role in the growth in relative demand for nonroutine tasks, particularly those requiring high levels of interpersonal interaction.

  16. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    PubMed

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  17. Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review.

    PubMed

    Xiao, Cao; Choi, Edward; Sun, Jimeng

    2018-06-08

    To conduct a systematic review of deep learning models for electronic health record (EHR) data, and illustrate various deep learning architectures for analyzing different data sources and their target applications. We also highlight ongoing research and identify open challenges in building deep learning models of EHRs. We searched PubMed and Google Scholar for papers on deep learning studies using EHR data published between January 1, 2010, and January 31, 2018. We summarize them according to these axes: types of analytics tasks, types of deep learning model architectures, special challenges arising from health data and tasks and their potential solutions, as well as evaluation strategies. We surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, concept embedding, data augmentation, and EHR data privacy. We then studied how deep architectures were applied to these tasks. We also discussed some special challenges arising from modeling EHR data and reviewed a few popular approaches. Finally, we summarized how performance evaluations were conducted for each task. Despite the early success in using deep learning for health analytics applications, there still exist a number of issues to be addressed. We discuss them in detail including data and label availability, the interpretability and transparency of the model, and ease of deployment.

  18. The Effects of Study Tasks in a Computer-Based Chemistry Learning Environment

    NASA Astrophysics Data System (ADS)

    Urhahne, Detlef; Nick, Sabine; Poepping, Anna Christin; Schulz, Sarah Jayne

    2013-12-01

    The present study examines the effects of different study tasks on the acquisition of knowledge about acids and bases in a computer-based learning environment. Three different task formats were selected to create three treatment conditions: learning with gap-fill and matching tasks, learning with multiple-choice tasks, and learning only from text and figures without any additional tasks. Participants were 196 ninth-grade students who learned with a self-developed multimedia program in a pretest-posttest control group design. Research results reveal that gap-fill and matching tasks were most effective in promoting knowledge acquisition, followed by multiple-choice tasks, and no tasks at all. The findings are in line with previous research on this topic. The effects can possibly be explained by the generation-recognition model, which predicts that gap-fill and matching tasks trigger more encompassing learning processes than multiple-choice tasks. It is concluded that instructional designers should incorporate more challenging study tasks for enhancing the effectiveness of computer-based learning environments.

  19. Development of a standardized job description for healthcare managers of metabolic syndrome management programs in Korean community health centers.

    PubMed

    Lee, Youngjin; Choo, Jina; Cho, Jeonghyun; Kim, So-Nam; Lee, Hye-Eun; Yoon, Seok-Jun; Seomun, GyeongAe

    2014-03-01

    This study aimed to develop a job description for healthcare managers of metabolic syndrome management programs using task analysis. Exploratory research was performed by using the Developing a Curriculum method, the Intervention Wheel model, and focus group discussions. Subsequently, we conducted a survey of 215 healthcare workers from 25 community health centers to verify that the job description we created was accurate. We defined the role of healthcare managers. Next, we elucidated the tasks of healthcare managers and performed needs analysis to examine the frequency, importance, and difficulty of each of their duties. Finally, we verified that our job description was accurate. Based on the 8 duties, 30 tasks, and 44 task elements assigned to healthcare managers, we found that the healthcare managers functioned both as team coordinators responsible for providing multidisciplinary health services and nurse specialists providing health promotion services. In terms of importance and difficulty of tasks performed by the healthcare managers, which were measured using a determinant coefficient, the highest-ranked task was planning social marketing (15.4), while the lowest-ranked task was managing human resources (9.9). A job description for healthcare managers may provide basic data essential for the development of a job training program for healthcare managers working in community health promotion programs. Copyright © 2014. Published by Elsevier B.V.

  20. MULTIMEDIA EXPOSURE MODELING

    EPA Science Inventory

    This task addresses a number of issues that arise in multimedia modeling with an emphasis on interactions among the atmosphere and multiple other environmental media. Approaches for working with multiple types of models and the data sets are being developed. Proper software tool...

  1. Parallel plan execution with self-processing networks

    NASA Technical Reports Server (NTRS)

    Dautrechy, C. Lynne; Reggia, James A.

    1989-01-01

    A critical issue for space operations is how to develop and apply advanced automation techniques to reduce the cost and complexity of working in space. In this context, it is important to examine how recent advances in self-processing networks can be applied for planning and scheduling tasks. For this reason, the feasibility of applying self-processing network models to a variety of planning and control problems relevant to spacecraft activities is being explored. Goals are to demonstrate that self-processing methods are applicable to these problems, and that MIRRORS/II, a general purpose software environment for implementing self-processing models, is sufficiently robust to support development of a wide range of application prototypes. Using MIRRORS/II and marker passing modelling techniques, a model of the execution of a Spaceworld plan was implemented. This is a simplified model of the Voyager spacecraft which photographed Jupiter, Saturn, and their satellites. It is shown that plan execution, a task usually solved using traditional artificial intelligence (AI) techniques, can be accomplished using a self-processing network. The fact that self-processing networks were applied to other space-related tasks, in addition to the one discussed here, demonstrates the general applicability of this approach to planning and control problems relevant to spacecraft activities. It is also demonstrated that MIRRORS/II is a powerful environment for the development and evaluation of self-processing systems.

  2. The NEWMEDS rodent touchscreen test battery for cognition relevant to schizophrenia.

    PubMed

    Hvoslef-Eide, M; Mar, A C; Nilsson, S R O; Alsiö, J; Heath, C J; Saksida, L M; Robbins, T W; Bussey, T J

    2015-11-01

    The NEWMEDS initiative (Novel Methods leading to New Medications in Depression and Schizophrenia, http://www.newmeds-europe.com ) is a large industrial-academic collaborative project aimed at developing new methods for drug discovery for schizophrenia. As part of this project, Work package 2 (WP02) has developed and validated a comprehensive battery of novel touchscreen tasks for rats and mice for assessing cognitive domains relevant to schizophrenia. This article provides a review of the touchscreen battery of tasks for rats and mice for assessing cognitive domains relevant to schizophrenia and highlights validation data presented in several primary articles in this issue and elsewhere. The battery consists of the five-choice serial reaction time task and a novel rodent continuous performance task for measuring attention, a three-stimulus visual reversal and the serial visual reversal task for measuring cognitive flexibility, novel non-matching to sample-based tasks for measuring spatial working memory and paired-associates learning for measuring long-term memory. The rodent (i.e. both rats and mice) touchscreen operant chamber and battery has high translational value across species due to its emphasis on construct as well as face validity. In addition, it offers cognitive profiling of models of diseases with cognitive symptoms (not limited to schizophrenia) through a battery approach, whereby multiple cognitive constructs can be measured using the same apparatus, enabling comparisons of performance across tasks. This battery of tests constitutes an extensive tool package for both model characterisation and pre-clinical drug discovery.

  3. From scenarios to domain models: processes and representations

    NASA Astrophysics Data System (ADS)

    Haddock, Gail; Harbison, Karan

    1994-03-01

    The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.

  4. Developing a 3-choice serial reaction time task for examining neural and cognitive function in an equine model.

    PubMed

    Roberts, Kirsty; Hemmings, Andrew J; McBride, Sebastian D; Parker, Matthew O

    2017-12-01

    Large animal models of human neurological disorders are advantageous compared to rodent models due to their neuroanatomical complexity, longevity and their ability to be maintained in naturalised environments. Some large animal models spontaneously develop behaviours that closely resemble the symptoms of neural and psychiatric disorders. The horse is an example of this; the domestic form of this species consistently develops spontaneous stereotypic behaviours akin to the compulsive and impulsive behaviours observed in human neurological disorders such as Tourette's syndrome. The ability to non-invasively probe normal and abnormal equine brain function through cognitive testing may provide an extremely useful methodological tool to assess brain changes associated with certain human neurological and psychiatric conditions. An automated operant system with the ability to present visual and auditory stimuli as well as dispense a salient food reward was developed. To validate the system, ten horses were trained and tested using a standard cognitive task, the three-choice serial reaction time task (3-CSRTT). All animals achieved total learning criterion and performed six probe sessions. Learning criterion was met within 16.30±0.79 sessions over a three-day period. During six probe sessions, level of performance was maintained at 80.67±0.57% (mean±SEM) accuracy. This is the first mobile fully automated system developed to examine cognitive function in the horse. A fully-automated operant system for mobile cognitive function of a large animal model has been designed and validated. Horses pose an interesting complementary model to rodents for the examination of human neurological dysfunction. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Examining the development of attention and executive functions in children with a novel paradigm.

    PubMed

    Klimkeit, Ester I; Mattingley, Jason B; Sheppard, Dianne M; Farrow, Maree; Bradshaw, John L

    2004-09-01

    The development of attention and executive functions in normal children (7-12 years) was investigated using a novel selective reaching task, which involved reaching as rapidly as possible towards a target, while at times having to ignore a distractor. The information processing paradigm allowed the measurement of various distinct dimensions of behaviour within a single task. The largest improvements in vigilance, set-shifting, response inhibition, selective attention, and impulsive responding were observed to occur between the ages of 8 and 10, with a plateau in performance between 10 and 12 years of age. These findings, consistent with a step-wise model of development, coincide with the observed developmental spurt in frontal brain functions between 7 and 10 years of age, and indicate that attention and executive functions develop in parallel. This task appears to be a useful research tool in the assessment of attention and executive functions, within a single task. Thus it may have a role in determining which cognitive functions are most affected in different childhood disorders.

  6. The development of a model to predict the effects of worker and task factors on foot placements in manual material handling tasks.

    PubMed

    Wagner, David W; Reed, Matthew P; Chaffin, Don B

    2010-11-01

    Accurate prediction of foot placements in relation to hand locations during manual materials handling tasks is critical for prospective biomechanical analysis. To address this need, the effects of lifting task conditions and anthropometric variables on foot placements were studied in a laboratory experiment. In total, 20 men and women performed two-handed object transfers that required them to walk to a shelf, lift an object from the shelf at waist height and carry the object to a variety of locations. Five different changes in the direction of progression following the object pickup were used, ranging from 45° to 180° relative to the approach direction. Object weights of 1.0 kg, 4.5 kg and 13.6 kg were used. Whole-body motions were recorded using a 3-D optical retro-reflective marker-based camera system. A new parametric system for describing foot placements, the Quantitative Transition Classification System, was developed to facilitate the parameterisation of foot placement data. Foot placements chosen by the subjects during the transfer tasks appeared to facilitate a change in the whole-body direction of progression, in addition to aiding in performing the lift. Further analysis revealed that five different stepping behaviours accounted for 71% of the stepping patterns observed. More specifically, the most frequently observed behaviour revealed that the orientation of the lead foot during the actual lifting task was primarily affected by the amount of turn angle required after the lift (R² = 0.53). One surprising result was that the object mass (scaled by participant body mass) was not found to significantly affect any of the individual step placement parameters. Regression models were developed to predict the most prevalent step placements and are included in this paper to facilitate more accurate human motion simulations and ergonomics analyses of manual material lifting tasks.
STATEMENT OF RELEVANCE: This study proposes a method for parameterising the steps (foot placements) associated with manual material handling tasks. The influence of task conditions and subject anthropometry on the foot placements of the most frequently observed stepping pattern during a laboratory study is discussed. For prospective postural analyses conducted using digital human models, accurate prediction of the foot placements is critical to realistic postural analyses and improved biomechanical job evaluations.
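    The kind of regression reported in this record (lead-foot orientation predicted from post-lift turn angle, R² = 0.53) can be illustrated with a minimal sketch. All data values and variable names below are hypothetical, chosen only to show the fitting procedure; the study's actual data and coefficients are not reproduced here.

    ```python
    import numpy as np

    # Hypothetical observations: post-lift turn angle (degrees) and the
    # lead-foot orientation chosen during the lift (degrees). Illustrative
    # values only, not taken from the study.
    turn_angle = np.array([45, 90, 135, 180, 45, 90, 135, 180], dtype=float)
    foot_orientation = np.array([20, 38, 55, 80, 25, 42, 60, 74], dtype=float)

    # Ordinary least squares fit: orientation ~ b0 + b1 * turn_angle
    b1, b0 = np.polyfit(turn_angle, foot_orientation, 1)

    # Coefficient of determination (R^2), the statistic the record reports
    pred = b0 + b1 * turn_angle
    ss_res = np.sum((foot_orientation - pred) ** 2)
    ss_tot = np.sum((foot_orientation - foot_orientation.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    print(round(b1, 3), round(r_squared, 2))
    ```

    A digital-human-model pipeline would use such a fitted equation to place the feet before running a postural or biomechanical analysis.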

  7. Development and Application of ANN Model for Worker Assignment into Virtual Cells of Large Sized Configurations

    NASA Astrophysics Data System (ADS)

    Murali, R. V.; Puri, A. B.; Fathi, Khalid

    2010-10-01

    This paper presents an extended version of an earlier study on the development of an artificial neural network (ANN) model for assigning workers to virtual cells in virtual cellular manufacturing systems (VCMS) environments. Previously, the same authors introduced this concept and applied it to virtual cells of a two-cell configuration, and the results demonstrated that ANNs could be a worthwhile tool for carrying out workforce assignments. In this attempt, three-cell configuration problems are considered for the worker assignment task. Virtual cells are formed in a dual resource constrained (DRC) context, in which the number of available workers is less than the total number of machines. Because worker assignment tasks are highly non-linear and dynamic under varying inputs and conditions, and because ANNs can model complex relationships between inputs and outputs and find similar patterns effectively, ANNs were previously employed for this task. In this paper, the multilayered perceptron with feed-forward (MLP-FF) neural network model is reused for worker assignment tasks of three-cell configurations under the DRC context, and its performance at different time periods is analyzed. The previously proposed worker assignment model has been reconfigured, and cell formation solutions available in the literature for the three-cell configuration are used in combination to generate datasets for training the ANN framework. Finally, the results of the study are presented and discussed.
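    The MLP-FF architecture class named in this record can be sketched as a one-hidden-layer feed-forward pass mapping a worker/task feature vector to scores over the three virtual cells. The layer sizes, weights, and feature vector below are illustrative assumptions, not the authors' trained model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative layer sizes: 6 worker/task features in, 3 cells out.
    n_inputs, n_hidden, n_cells = 6, 8, 3

    # Random (untrained) weights; a real model would learn these.
    W1 = rng.normal(size=(n_inputs, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(size=(n_hidden, n_cells))
    b2 = np.zeros(n_cells)

    def forward(x):
        """Feed-forward pass: sigmoid hidden layer, softmax output."""
        h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
        logits = h @ W2 + b2
        e = np.exp(logits - logits.max())     # stabilised softmax
        return e / e.sum()                    # scores over the three cells

    x = rng.normal(size=n_inputs)             # one worker/task feature vector
    scores = forward(x)
    assigned_cell = int(np.argmax(scores))    # assign worker to best-scoring cell
    print(assigned_cell)
    ```

    In the study's setting, such a network would be trained on datasets generated from published cell formation solutions before being used for assignment.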

  8. Autism-specific covariation in perceptual performances: "g" or "p" factor?

    PubMed

    Meilleur, Andrée-Anne S; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in autistic and non-autistic individuals.
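    The residual-covariation logic described in this record (covariation between tasks after regressing out intelligence) can be sketched as follows. The data are simulated: an IQ-like variable drives both task scores, plus a shared factor independent of IQ standing in for the "p" factor. All numbers are illustrative, not the study's.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 46  # matching the group sizes in the study

    # Simulated scores: IQ contributes to both tasks, and a shared factor
    # independent of IQ (a "p"-like factor) contributes to both as well.
    iq = rng.normal(100, 15, n)
    p_factor = rng.normal(0, 1, n)
    task_a = 0.5 * iq + 5 * p_factor + rng.normal(0, 2, n)
    task_b = 0.4 * iq + 5 * p_factor + rng.normal(0, 2, n)

    def residuals(y, x):
        """Residuals of y after an OLS regression on x (controls for IQ)."""
        slope, intercept = np.polyfit(x, y, 1)
        return y - (intercept + slope * x)

    # Correlation of the residuals: covariation not explained by IQ.
    r = np.corrcoef(residuals(task_a, iq), residuals(task_b, iq))[0, 1]
    print(round(r, 2))
    ```

    A substantial residual correlation, as here, is the signature of a common factor beyond general intelligence; if IQ alone drove both tasks, the residual correlation would be near zero.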

  9. Autism-Specific Covariation in Perceptual Performances: “g” or “p” Factor?

    PubMed Central

    Meilleur, Andrée-Anne S.; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Background Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. Methods We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. Results In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Conclusions Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or “g” factor). Instead, this residual covariation is accounted for by a common perceptual process (or “p” factor), which may drive perceptual abilities differently in autistic and non-autistic individuals. PMID:25117450

  10. A design space of visualization tasks.

    PubMed

    Schulz, Hans-Jörg; Nocke, Thomas; Heitzler, Magnus; Schumann, Heidrun

    2013-12-01

    Knowledge about visualization tasks plays an important role in choosing or building suitable visual representations to pursue them. Yet, tasks are a multi-faceted concept and it is thus not surprising that the many existing task taxonomies and models all describe different aspects of tasks, depending on what these task descriptions aim to capture. This results in a clear need to bring these different aspects together under the common hood of a general design space of visualization tasks, which we propose in this paper. Our design space consists of five design dimensions that characterize the main aspects of tasks and that have so far been distributed across different task descriptions. We exemplify its concrete use by applying our design space in the domain of climate impact research. To this end, we propose interfaces to our design space for different user roles (developers, authors, and end users) that allow users of different levels of expertise to work with it.

  11. Using a contextualized sensemaking model for interaction design: A case study of tumor contouring.

    PubMed

    Aselmaa, Anet; van Herk, Marcel; Laprie, Anne; Nestle, Ursula; Götz, Irina; Wiedenmann, Nicole; Schimek-Jasch, Tanja; Picaud, Francois; Syrykh, Charlotte; Cagetti, Leonel V; Jolnerovski, Maria; Song, Yu; Goossens, Richard H M

    2017-01-01

    Sensemaking theories help designers understand the cognitive processes of a user when he/she performs a complicated task. This paper introduces a two-step approach of incorporating sensemaking support within the design of health information systems by: (1) modeling the sensemaking process of physicians while performing a task, and (2) identifying software interaction design requirements that support sensemaking based on this model. The two-step approach is presented based on a case study of the tumor contouring clinical task for radiotherapy planning. In the first step of the approach, a contextualized sensemaking model was developed to describe the sensemaking process based on the goal, the workflow and the context of the task. In the second step, based on a research software prototype, an experiment was conducted in which eight physicians each performed three contouring tasks. Four types of navigation interactions and five types of interaction sequence patterns were identified by analyzing the gathered interaction log data from those twenty-four cases. Further in-depth study of each of the navigation interactions and interaction sequence patterns in relation to the contextualized sensemaking model revealed five main areas for design improvements to increase sensemaking support. Outcomes of the case study indicate that the proposed two-step approach was beneficial for gaining a deeper understanding of the sensemaking process during the task, as well as for identifying design requirements for better sensemaking support. Copyright © 2016. Published by Elsevier Inc.

  12. Effects of two types of intra-team feedback on developing a shared mental model in Command & Control teams.

    PubMed

    Rasker, P C; Post, W M; Schraagen, J M

    2000-08-01

    In two studies, the effect of two types of intra-team feedback on developing a shared mental model in Command & Control teams was investigated. A distinction is made between performance monitoring and team self-correction. Performance monitoring is the ability of team members to monitor each other's task execution and give feedback during task execution. Team self-correction is the process in which team members engage in evaluating their performance and in determining their strategies after task execution. In two experiments, the opportunity to engage in performance monitoring and in team self-correction, respectively, was varied systematically. Both performance monitoring and team self-correction appeared beneficial in improving team performance. Teams that had the opportunity to engage in performance monitoring, however, performed better than teams that had the opportunity to engage in team self-correction.

  13. The Categorisation of Non-Categorical Colours: A Novel Paradigm in Colour Perception

    PubMed Central

    Cropper, Simon J.; Kvansakul, Jessica G. S.; Little, Daniel R.

    2013-01-01

    In this paper, we investigate a new paradigm for studying the development of the colour ‘signal’ by having observers discriminate and categorize the same set of controlled and calibrated cardinal coloured stimuli. Notably, in both tasks, each observer was free to decide whether two pairs of colors were the same or belonged to the same category. The use of the same stimulus set for both tasks provides, we argue, an incremental behavioural measure of colour processing from detection through discrimination to categorisation. The measured data spaces are different for the two tasks, and furthermore the categorisation data is unique to each observer. In addition, we develop a model which assumes that the principal difference between the tasks is the degree of similarity between the stimuli which has different constraints for the categorisation task compared to the discrimination task. This approach not only makes sense of the current (and associated) data but links the processes of discrimination and categorisation in a novel way and, by implication, expands upon the previous research linking categorisation to other tasks not limited to colour perception. PMID:23536899

  15. Attention-Modulating Effects of Cognitive Enhancers

    PubMed Central

    Levin, Edward D.; Bushnell, Philip J.; Rezvani, Amir H.

    2011-01-01

    Attention can be readily measured in experimental animal models. Animal models of attention have been used to better understand the neural systems involved in attention, how attention is impaired, and how therapeutic treatments can ameliorate attentional deficits. This review focuses on the ways in which animal models are used to better understand the neuronal mechanism of attention and how to develop new therapeutic treatments for attentional impairment. Several behavioral test methods have been developed for experimental animal studies of attention, including a 5-choice serial reaction time task (5-CSRTT), a signal detection task (SDT), and a novel object recognition (NOR) test. These tasks can be used together with genetic, lesion, pharmacological and behavioral models of attentional impairment to test the efficacy of novel therapeutic treatments. The most prominent genetic model is the spontaneously hypertensive rat (SHR). Well-characterized lesion models include frontal cortical or hippocampal lesions. Pharmacological models include challenge with the NMDA glutamate antagonist dizocilpine (MK-801), the nicotinic cholinergic antagonist mecamylamine, and the muscarinic cholinergic antagonist scopolamine. Behavioral models include distracting stimuli and attenuated target stimuli. An important validation of these behavioral tests and models of attentional impairment for developing effective treatments for attentional dysfunction is that stimulant treatments effective for attention deficit hyperactivity disorder (ADHD), such as methylphenidate (Ritalin®), are also effective in the experimental animal models. Newer lines of treatment, including nicotinic agonists, α4β2 nicotinic receptor desensitizers, and histamine H3 antagonists, have also been found to be effective in improving attention in these animal models. Good carryover has also been seen for the attentional improvement of nicotine in experimental animal models and in human populations.
Animal models of attention can be effectively used for the development of new treatments for attentional impairment in ADHD and other syndromes in which attentional impairments occur, such as Alzheimer’s disease and schizophrenia. PMID:21334367

  16. Development of a Hand Held Thromboelastograph

    DTIC Science & Technology

    2013-01-01

    prototype model, and there was no indication of damage, and it was found to comply with IEC 61010-1. Currently, loss of calibration has not been evaluated...Task 4 - PCM Certification Testing Subtask 4a: IEC 60601-1 Subtask 4b: IEC 60601-1-2 Subtask 4c: ISO 10993 Subtask 4d: ISTA 2A These tasks

  17. Why I Believe I Achieve Determines Whether I Achieve

    ERIC Educational Resources Information Center

    Siegle, Del; McCoach, D. Betsy; Roberts, Anne

    2017-01-01

    The beliefs and values students hold toward themselves, given tasks, and achievement itself can influence what tasks students seek, and whether they are able to obtain them. On the basis of previous research on underachievement and motivation, we developed the Achievement Orientation Model (AOM) to explore the issue of student achievement. The…

  18. Managing the Organizational Culture of Rural Schools: Creating Environments for Human Development.

    ERIC Educational Resources Information Center

    Steinhoff, Carl R.; Owens, Robert G.

    The factors of people, technology, structure, and task provide a sociotechnical model for understanding the essential elements of schools as organizations. Schools can be understood as cultures and managed as such. Effective schools focus on a task-oriented organizational culture that meaningfully involves all participants in the key elements of…

  19. Adaptive Allocation of Decision Making Responsibility Between Human and Computer in Multi-Task Situations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chu, Y. Y.

    1978-01-01

    A unified formulation of computer-aided, multi-task decision making is presented. A strategy for the allocation of decision-making responsibility between human and computer is developed. The plans of a flight management system are studied. A model based on queueing theory was implemented.
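    As a rough illustration of the queueing-theoretic flavor of such a model, the sketch below computes standard M/M/1 metrics for a stream of decision-making tasks arriving at a single server (the human or the computer). The arrival and service rates are hypothetical and not taken from the thesis.

```python
# Minimal M/M/1 queueing sketch: tasks arrive at rate lam (per minute) and a
# single decision maker services them at rate mu. Standard results:
#   utilization        rho = lam / mu
#   mean number in system  L = rho / (1 - rho)
#   mean time in system    W = 1 / (mu - lam)   (Little's law: L = lam * W)
def mm1_metrics(lam: float, mu: float) -> dict:
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = lam / mu
    return {
        "utilization": rho,
        "mean_in_system": rho / (1 - rho),
        "mean_time_in_system": 1.0 / (mu - lam),
    }

# Example: a task every 2 minutes on average (lam=0.5), serviced in 1 minute (mu=1.0)
m = mm1_metrics(0.5, 1.0)
```

    Such closed-form metrics let an allocation strategy estimate, before assigning a task, how loaded each decision maker will become.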

  20. Combined Task and Physical Demands Analyses towards a Comprehensive Human Work Model

    DTIC Science & Technology

    2014-09-01

    new equipment or modifying tasks and providing training (van der Molen, Sluiter, Hulshof, Vink, & Frings-Dresen, 2005). List the Job Duties (the...00 1/SV, Defence Research and Development Canada. van der Molen, H. F., Sluiter, J. K., Hulshof, C. T. J., Vink, P., & Frings-Dresen, M. H. W

  1. An Approximation of an Instructional Model for Developing Home Living Skills in Severely Handicapped Students.

    ERIC Educational Resources Information Center

    Hamre, S.

    The author discusses the need for severely handicapped students to acquire basic home living skills, reviews task analysis principles, and provides sample instructional programs. Listed are basic grooming, dressing, domestic maintenance, and cooking skills. A sample task analysis procedure is demonstrated for the skill of brushing teeth. Reported…

  2. Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1990-01-01

    The primary tasks from January 1990 to June 1990 were the development and evaluation of various electron and electron-electronic energy equation models, the continued development of improved nonequilibrium radiation models for molecules and atoms, and the continued development and investigation of precursor models and their effects. In addition, work was initiated to develop a vibrational model for the viscous shock layer (VSL) nonequilibrium chemistry blunt body engineering code. Also, an effort was started on the effects of including carbon species (e.g., from an ablator) in the flowfield.

  3. Propel: Tools and Methods for Practical Source Code Model Checking

    NASA Technical Reports Server (NTRS)

    Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem

    2003-01-01

    The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA's mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset are called Propel. Propel builds upon the Java PathFinder (JPF) tool, an explicit state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaptation of existing best design practices that has the desired side-effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Through all these tasks we are testing Propel's capabilities on customer applications.
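    Propel itself extends JPF; purely as a language-neutral illustration of what an explicit-state model checker does, the hypothetical Python sketch below breadth-first-searches the reachable state space and returns a counterexample path when a safety property fails.

```python
from collections import deque

def check_safety(initial, successors, safe):
    """Toy explicit-state model checker: breadth-first search over the
    reachable states; returns (True, None) if every reachable state
    satisfies `safe`, else (False, shortest counterexample path)."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if not safe(state):
            return False, path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return True, None

# Example: a counter incremented modulo 5; the safety property "counter < 4"
# is violated, and the checker reports the path leading to the bad state.
ok, trace = check_safety(0, lambda s: [(s + 1) % 5], lambda s: s < 4)
```

    Real model checkers add state compression, partial-order reduction, and language front-ends, but the reachability core is this search.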

  4. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. 
In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system. This contrasts with practices in which code for distributing compute tasks is mixed with model-specific code, and results in a more maintainable model. For flexibility and efficiency, the algorithms are configurable at compile-time with respect to the following aspects: data type, value type, no-data handling, input value domain handling, and output value range handling. This makes the algorithms usable in very different contexts, without the need for making intrusive changes to existing models when using them. Applications that benefit from using the Fern library include the construction of forward simulation models in (global) hydrology (e.g. PCR-GLOBWB (Van Beek et al. 2011)), ecology, geomorphology, or land use change (e.g. PLUC (Verstegen et al. 2014)) and manipulation of hyper-resolution land surface data such as digital elevation models and remote sensing data. Using the Fern library, we have also created an add-on to the PCRaster Python Framework (Karssenberg et al. 2010) allowing its users to speed up their spatio-temporal models, sometimes by changing just a single line of Python code in their model. In our presentation we will give an overview of the design of the algorithms, providing examples of different contexts where they can be used to replace existing sequential algorithms, including the PCRaster environmental modeling software (www.pcraster.eu). We will show how the algorithms can be configured to behave differently when necessary. References Karssenberg, D., Schmitz, O., Salamon, P., De Jong, K. and Bierkens, M.F.P., 2010, A software framework for construction of process-based stochastic spatio-temporal models and data assimilation. Environmental Modelling & Software, 25, pp. 489-502. Van Beek, L. P. H., Y. Wada, and M. F. P. Bierkens.
2011. Global monthly water stress: 1. Water balance and water availability. Water Resources Research. 47. Verstegen, J. A., D. Karssenberg, F. van der Hilst, and A. P. C. Faaij. 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software 53:121-136.
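    Fern is a C++ library; purely as an illustration of the separation it advocates, here is a hypothetical Python sketch in which the model-specific operation (a 3x3 focal mean on a grid) is kept apart from the logic that distributes rows over a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def focal_mean_row(grid, r):
    """Model-specific logic: 3x3 focal (moving-window) mean for one row,
    with the window clipped at the grid edges."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for c in range(cols):
        window = [grid[rr][cc]
                  for rr in range(max(0, r - 1), min(rows, r + 2))
                  for cc in range(max(0, c - 1), min(cols, c + 2))]
        out.append(sum(window) / len(window))
    return out

def focal_mean(grid, workers=4):
    """Distribution logic, separated from the operation itself: each row is
    an independent task handed to a thread pool (a stand-in for the library's
    internal distribution of work over CPU cores)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda r: focal_mean_row(grid, r), range(len(grid))))

result = focal_mean([[1, 2], [3, 4]])
```

    The model developer only ever writes (or calls) `focal_mean_row`-style operations; how rows are mapped onto cores stays hidden behind the distributing wrapper.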

  5. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking

    PubMed Central

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking. PMID:27853440
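    As a toy illustration of the modeling family used here (not the authors' Bayesian mixed model, which adds latent task- and subject-specific parameters), the sketch below fits a plain logistic regression to binary predictions by gradient ascent; the invented data mimic choice accuracy improving over trials.

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Plain logistic regression by gradient ascent on the log-likelihood:
    p(y=1 | x) = sigmoid(b0 + b1*x). A mixed model, as in the paper, would add
    subject-specific terms with shrinkage; this sketch keeps only the
    population-level intercept and slope."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p            # gradient of the log-likelihood wrt b0
            g1 += (y - p) * x      # gradient wrt b1
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Invented binary predictions that become more accurate with trial number
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
```

    A positive fitted slope indicates performance rising across trials, the kind of fixed effect the mixed models in the paper estimate alongside their random effects.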

  6. ASTP ranging system mathematical model

    NASA Technical Reports Server (NTRS)

    Ellis, M. R.; Robinson, L. H.

    1973-01-01

    A mathematical model of the VHF ranging system is presented to analyze the performance of the Apollo-Soyuz Test Project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is also included. A procedure for implementing the math model is presented, along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices of the five study tasks are presented: early-late gate model development, unlock probability development, system error model development, probability of acquisition model development, and math model validation testing.

  7. Self-narrative reconstruction in emotion-focused therapy: A preliminary task analysis.

    PubMed

    Cunha, Carla; Mendes, Inês; Ribeiro, António P; Angus, Lynne; Greenberg, Leslie S; Gonçalves, Miguel M

    2017-11-01

    This research explored the consolidation phase of emotion-focused therapy (EFT) for depression and studies-through a task-analysis method-how client-therapist dyads evolved from the exploration of the problem to self-narrative reconstruction. Innovative moments (IMs) were used to situate the process of self-narrative reconstruction within sessions, particularly through reconceptualization and performing change IMs. We contrasted the observation of these occurrences with a rational model of self-narrative reconstruction, previously built. This study presents the rational model and the revised rational-empirical model of the self-narrative reconstruction task in three EFT dyads, suggesting nine steps necessary for task resolution: (1) Explicit recognition of differences in the present and steps in the path of change; (2) Development of a meta-perspective contrast between present self and past self; (3) Amplification of contrast in the self; (4) A positive appreciation of changes is conveyed; (5) Occurrence of feelings of empowerment, competence, and mastery; (6) Reference to difficulties still present; (7) Emphasis on the loss of centrality of the problem; (8) Perception of change as a gradual, developing process; and (9) Reference to projects, experiences of change, or elaboration of new plans. Central aspects of therapist activity in facilitating the client's progression along these nine steps are also elaborated.

  8. A Pilot Study of Biomedical Text Comprehension using an Attention-Based Deep Neural Reader: Design and Experimental Analysis.

    PubMed

    Kim, Seongsoon; Park, Donghyeon; Choi, Yonghwa; Lee, Kyubum; Kim, Byounggun; Jeon, Minji; Kim, Jihye; Tan, Aik Choon; Kang, Jaewoo

    2018-01-05

    With the development of artificial intelligence (AI) technology centered on deep-learning, the computer has evolved to a point where it can read a given text and answer a question based on the context of the text. Such a specific task is known as the task of machine comprehension. Existing machine comprehension tasks mostly use datasets of general texts, such as news articles or elementary school-level storybooks. However, no attempt has been made to determine whether an up-to-date deep learning-based machine comprehension model can also process scientific literature containing expert-level knowledge, especially in the biomedical domain. This study aims to investigate whether a machine comprehension model can process biomedical articles as well as general texts. Since there is no dataset for the biomedical literature comprehension task, our work includes generating a large-scale question answering dataset using PubMed and manually evaluating the generated dataset. We present an attention-based deep neural model tailored to the biomedical domain. To further enhance the performance of our model, we used a pretrained word vector and biomedical entity type embedding. We also developed an ensemble method of combining the results of several independent models to reduce the variance of the answers from the models. The experimental results showed that our proposed deep neural network model outperformed the baseline model by more than 7% on the new dataset. We also evaluated human performance on the new dataset. The human evaluation result showed that our deep neural model outperformed humans in comprehension by 22% on average. In this work, we introduced a new task of machine comprehension in the biomedical domain using a deep neural model. 
Since there was no large-scale dataset for training deep neural models in the biomedical domain, we created the new cloze-style datasets Biomedical Knowledge Comprehension Title (BMKC_T) and Biomedical Knowledge Comprehension Last Sentence (BMKC_LS) (together referred to as BioMedical Knowledge Comprehension) using the PubMed corpus. The experimental results showed that the performance of our model is much higher than that of humans. We observed that our model performed consistently better regardless of the degree of difficulty of a text, whereas humans have difficulty when performing biomedical literature comprehension tasks that require expert level knowledge. ©Seongsoon Kim, Donghyeon Park, Yonghwa Choi, Kyubum Lee, Byounggun Kim, Minji Jeon, Jihye Kim, Aik Choon Tan, Jaewoo Kang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 05.01.2018.
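    The model's attention mechanism can be illustrated in miniature. The sketch below (hypothetical vectors, not the paper's architecture) implements single-query dot-product attention: score each context vector against a query, softmax the scores, and return the weighted summary vector the reader would use to pick an answer.

```python
import math

def attention(query, context):
    """Single-query dot-product attention: dot each context vector with the
    query, softmax the scores into weights, and return the weights plus the
    attention-weighted sum of the context vectors."""
    scores = [sum(q * c for q, c in zip(query, ctx)) for ctx in context]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]      # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    summary = [sum(w * ctx[i] for w, ctx in zip(weights, context))
               for i in range(len(query))]
    return weights, summary

# The query aligns best with the second context vector, so it dominates.
weights, summary = attention([1.0, 0.0], [[0.0, 1.0], [5.0, 0.0], [1.0, 1.0]])
```

    In the full model these vectors are learned embeddings of question and passage tokens, but the weighting computation is the same.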

  9. Latent Patient Cluster Discovery for Robust Future Forecasting and New-Patient Generalization.

    PubMed

    Qian, Ting; Masino, Aaron J

    2016-01-01

    Commonly referred to as predictive modeling, the use of machine learning and statistical methods to improve healthcare outcomes has recently gained traction in biomedical informatics research. Given the vast opportunities enabled by large Electronic Health Records (EHR) data and powerful resources for conducting predictive modeling, we argue that it is nevertheless crucial to first carefully examine the prediction task and then choose predictive methods accordingly. Specifically, we argue that there are at least three distinct prediction tasks that are often conflated in biomedical research: 1) data imputation, where a model fills in the missing values in a dataset, 2) future forecasting, where a model projects the development of a medical condition for a known patient based on existing observations, and 3) new-patient generalization, where a model transfers the knowledge learned from previously observed patients to newly encountered ones. Importantly, the latter two tasks-future forecasting and new-patient generalization-tend to be more difficult than data imputation as they require predictions to be made on potentially out-of-sample data (i.e., data following a different predictable pattern from what has been learned by the model). Using hearing loss progression as an example, we investigate three regression models and show that the modeling of latent clusters is a robust method for addressing the more challenging prediction scenarios. Overall, our findings suggest that there exist significant differences between various kinds of prediction tasks and that it is important to evaluate the merits of a predictive model relative to the specific purpose of a prediction task.
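    The distinction among the three tasks is essentially a distinction among held-out sets. A toy sketch with invented records makes this concrete:

```python
# Toy records: (patient_id, visit_number, measurement). The three prediction
# tasks in the text differ mainly in how evaluation data are held out.
records = [("p1", 1, 10), ("p1", 2, 12), ("p1", 3, 14),
           ("p2", 1, 20), ("p2", 2, 19), ("p2", 3, 18)]

# Future forecasting: train on each known patient's early visits and
# test on their later visits (out-of-sample in time).
forecast_train = [r for r in records if r[1] <= 2]
forecast_test  = [r for r in records if r[1] > 2]

# New-patient generalization: train on some patients and test on patients
# never seen during training (out-of-sample in identity).
general_train = [r for r in records if r[0] != "p2"]
general_test  = [r for r in records if r[0] == "p2"]

# Data imputation, by contrast, would hold out individual cells scattered
# across the table rather than whole visits or whole patients.
```

    A model evaluated only on the imputation-style split can look deceptively strong on the two harder splits, which is the conflation the paper warns against.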

  10. An Investigation of Large Aircraft Handling Qualities

    NASA Astrophysics Data System (ADS)

    Joyce, Richard D.

    An analytical technique for investigating transport aircraft handling qualities is exercised in a study using models of two such vehicles, a Boeing 747 and a Lockheed C-5A. Two flight conditions are employed for climb and directional tasks, and a third is included for a flare task. The analysis technique is based upon a "structural model" of the human pilot developed by Hess. The associated analysis procedure has been discussed previously in the literature, but centered almost exclusively on the characteristics of high-performance fighter aircraft. The handling qualities rating level (HQRL) and pilot induced oscillation tendencies rating level (PIORL) are predicted for nominal configurations of the aircraft and for "damaged" configurations where actuator rate limits are introduced as nonlinearities. It is demonstrated that the analysis can accommodate nonlinear pilot/vehicle behavior and do so in the context of specific flight tasks, yielding estimates of handling qualities, pilot-induced oscillation tendencies and upper limits of task performance. A brief human-in-the-loop tracking study was performed to provide a limited validation of the pilot model employed.

  11. The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.

    PubMed

    Norris, Dennis

    2006-04-01

    This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
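    The core computation such a model performs can be sketched in a few lines: word frequency supplies the prior, perceptual evidence supplies the likelihood, and Bayes' rule yields the recognition posterior. The two-word lexicon below is invented for illustration.

```python
def word_posterior(prior, likelihood):
    """Posterior over a small lexicon. `prior` holds normalized word
    frequencies; `likelihood[word]` is P(perceptual input | word).
    Bayes' rule: P(word | input) is proportional to likelihood * prior."""
    unnorm = {w: prior[w] * likelihood[w] for w in prior}
    z = sum(unnorm.values())
    return {w: v / z for w, v in unnorm.items()}

# Hypothetical lexicon: 'cat' is far more frequent than 'cab'. When the
# perceptual input is equally consistent with both words (an ambiguous final
# letter), the frequency prior tips identification toward 'cat'.
prior = {"cat": 0.9, "cab": 0.1}
likelihood = {"cat": 0.5, "cab": 0.5}
post = word_posterior(prior, likelihood)
```

    This is how frequency effects fall out of optimal decision making in such a framework: no separate frequency mechanism is needed, only the prior.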

  12. Effective Team Support: From Modeling to Software Agents

    NASA Technical Reports Server (NTRS)

    Remington, Roger W. (Technical Monitor); John, Bonnie; Sycara, Katia

    2003-01-01

    The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and engineers and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in modeling infrastructure and task infrastructure. Work is continuing under a different contract to complete empirical data collection, cognitive modeling, and the building of software agents to support the team's task.

  13. Decoding memory features from hippocampal spiking activities using sparse classification models.

    PubMed

    Dong Song; Hampson, Robert E; Robinson, Brian S; Marmarelis, Vasilis Z; Deadwyler, Sam A; Berger, Theodore W

    2016-08-01

    To understand how memory information is encoded in the hippocampus, we build classification models to decode memory features from hippocampal CA3 and CA1 spatio-temporal patterns of spikes recorded from epilepsy patients performing a memory-dependent delayed match-to-sample task. The classification model consists of a set of B-spline basis functions for extracting memory features from the spike patterns, and a sparse logistic regression classifier for generating binary categorical output of memory features. Results show that classification models can extract a significant amount of memory information with respect to the types of memory tasks and the categories of sample images used in the task, despite the high level of variability in prediction accuracy due to the small sample size. These results support the hypothesis that memories are encoded in hippocampal activities and have important implications for the development of hippocampal memory prostheses.
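    A minimal stand-in for such a classifier (which in the paper additionally uses B-spline basis functions over spike patterns) is L1-regularized logistic regression fit by proximal gradient descent; the soft-thresholding step is what drives weights of uninformative features exactly to zero. The feature vectors below are invented.

```python
import math

def soft_threshold(w, t):
    """Shrink w toward zero by t; values inside [-t, t] become exactly 0."""
    return math.copysign(max(abs(w) - t, 0.0), w)

def sparse_logistic(X, y, lam=0.05, lr=0.5, steps=3000):
    """L1-regularized logistic regression via proximal gradient descent:
    take a gradient step on the logistic loss, then soft-threshold each
    weight, yielding the sparse solutions such decoders rely on."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        w = [soft_threshold(wj - lr * gj, lr * lam) for wj, gj in zip(w, grad)]
    return w

# Feature 0 predicts the label; feature 1 is noise and should be zeroed out.
X = [[1.0, 0.3], [2.0, -0.2], [-1.0, 0.25], [-2.0, -0.3]]
y = [1, 1, 0, 0]
w = sparse_logistic(X, y)
```

    With few trials and many candidate features, exact zeros make the fitted decoder both more stable and easier to interpret.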

  14. A unifying motor control framework for task-specific dystonia

    PubMed Central

    Rothwell, John C.; Edwards, Mark J.

    2018-01-01

    Task-specific dystonia is a movement disorder characterized by the development of a painless loss of dexterity specific to a particular motor skill. This disorder is prevalent among writers, musicians, dancers and athletes. No current treatment is predictably effective and the disorder generally ends the careers of affected individuals. There are a number of limitations with traditional dystonic disease models for task-specific dystonia. We therefore review emerging evidence that the disorder has its origins within normal compensatory mechanisms of a healthy motor system in which the representation and reproduction of motor skill is disrupted. We describe how risk factors for task-specific dystonia can be stratified and translated into mechanisms of dysfunctional motor control. The proposed model aims to define new directions for experimental research and stimulate therapeutic advances for this highly disabling disorder. PMID:29104291

  15. Study to design and develop remote manipulator system. [computer simulation of human performance

    NASA Technical Reports Server (NTRS)

    Hill, J. W.; Mcgovern, D. E.; Sword, A. J.

    1974-01-01

    Modeling of human performance in remote manipulation tasks is reported, using automated procedures in which computers analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and, in some cases, the trajectory and velocity of the hand itself. In this way the operator's strategies with different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Some progress is described in obtaining a set of standard tasks and difficulty measures for evaluating manipulator performance.

  16. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. 
A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
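    A minimal example of the kind of scheduling heuristic discussed here is greedy list scheduling of a dataflow graph onto a fixed set of workers, assigning each ready task to the earliest-free worker. The diamond-shaped graph below is hypothetical, not one of the NASA benchmark graphs.

```python
from collections import deque

def list_schedule(tasks, deps, workers=2):
    """Greedy list scheduling of a dataflow graph: repeatedly take a ready
    task (all predecessors finished) and place it on the earliest-free worker.
    tasks: {name: duration}; deps: {name: set of predecessor names}.
    Returns (makespan, start_times)."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succs = {t: [] for t in tasks}
    for t, ps in deps.items():
        for p in ps:
            succs[p].append(t)
    start, finish = {}, {}
    free_at = [0.0] * workers
    ready = deque(sorted(t for t in tasks if indeg[t] == 0))
    while ready:
        t = ready.popleft()
        w = min(range(workers), key=lambda i: free_at[i])
        est = max([free_at[w]] + [finish[p] for p in deps.get(t, ())])
        start[t] = est
        finish[t] = est + tasks[t]
        free_at[w] = finish[t]
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values()), start

# Diamond graph: a feeds b and c (which run in parallel), both feed d.
makespan, start = list_schedule({"a": 1, "b": 2, "c": 2, "d": 1},
                                {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}})
```

    Changing the ready-queue ordering or the worker-selection rule gives different heuristics, which is exactly the design space the scheduling studies above explore.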

  17. Developing a case-mix model for PPS.

    PubMed

    Goldberg, H B; Delargy, D

    2000-01-01

    Agencies are pinning hopes for success under PPS on an accurate case-mix adjustor. The Health Care Financing Administration (HCFA) tasked Abt Associates Inc. to develop a system to accurately predict the volume and type of home health services each patient requires, based on his or her characteristics (not the service actually received). HCFA wanted this system to be feasible, clinically logical, and valid and accurate. Authors Goldberg and Delargy explain how Abt approached this daunting task.

  18. TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM) development

    NASA Technical Reports Server (NTRS)

    Shimamoto, Mike S.

    1993-01-01

    The development of an anthropomorphic, undersea manipulator system, the TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM), is described. The TOPS system's design philosophy, which results from NRaD's experience in developing and operating undersea vehicles and manipulator systems, is presented, along with the design approach, task teams, manipulator and vision system development results, conclusions, and recommendations.

  19. Development and evaluation of nursing user interface screens using multiple methods.

    PubMed

    Hyun, Sookyung; Johnson, Stephen B; Stetson, Peter D; Bakken, Suzanne

    2009-12-01

    Building upon the foundation of the Structured Narrative Electronic Health Record (EHR) model, we applied theory-based (combined Technology Acceptance Model and Task-Technology Fit Model) and user-centered methods to explore nurses' perceptions of functional requirements for an electronic nursing documentation system, design user interface screens reflective of the nurses' perspectives, and assess nurses' perceptions of the usability of the prototype user interface screens. The methods resulted in user interface screens that were perceived to be easy to use, potentially useful, and well-matched to nursing documentation tasks associated with Nursing Admission Assessment, Blood Administration, and Nursing Discharge Summary. The methods applied in this research may serve as a guide for others wishing to implement user-centered processes to develop or extend EHR systems. In addition, some of the insights obtained in this study may be informative to the development of safe and efficient user interface screens for nursing document templates in EHRs.

  20. The Lexical Stroop Sort (LSS) picture-word task: a computerized task for assessing the relationship between language and executive functioning in school-aged children.

    PubMed

    Wilbourn, Makeba Parramore; Kurtz, Laura E; Kalia, Vrinda

    2012-03-01

    The relationship between language development and executive function (EF) in children is not well understood. The Lexical Stroop Sort (LSS) task is a computerized EF task created for the purpose of examining the relationship between school-aged children's oral language development and EF. To validate this new measure, a diverse sample of school-aged children completed standardized oral language assessments, the LSS task, and the widely used Dimensional Change Card Sort (DCCS; Zelazo, 2006) task. Both EF tasks require children to sort stimuli into categories based on predetermined rules. While the DCCS largely relies on visual stimuli, the LSS employs children's phonological loop to access their semantic knowledge base. Accuracy and reaction times were recorded for both tasks. Children's scores on the LSS task were correlated with their scores on the DCCS task, and a similar pattern of relationships emerged between children's vocabulary and the two EF tasks, thus providing convergent validity for the LSS. However, children's phonological awareness was associated with their scores on the LSS, but not with those on the DCCS. In addition, a mediation model was used to elucidate the predictive relationship between phonological awareness and children's performance on the LSS task, with children's vocabulary fully mediating this relationship. The use of this newly created and validated LSS task with different populations, such as preschoolers and bilinguals, is also discussed.

  1. Visual-search models for location-known detection tasks

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.

    2017-03-01

    Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.
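    For readers unfamiliar with linear model observers such as the nonprewhitening observer named above, a minimal two-alternative forced-choice sketch follows. It omits the eye filter and uses white rather than lumpy backgrounds, and all parameters are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 32
    y, x = np.mgrid[:N, :N]
    # Hypothetical Gaussian signal at a fixed, known location
    signal = 0.3 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 2.0 ** 2))

    def npw_2afc_trial():
        """One 2AFC trial: the nonprewhitening observer applies the
        signal itself as a matched template and chooses the alternative
        with the larger template response."""
        present = rng.normal(0.0, 1.0, (N, N)) + signal   # signal present
        absent = rng.normal(0.0, 1.0, (N, N))             # signal absent
        return float((present * signal).sum() > (absent * signal).sum())

    # Proportion correct over many trials; for white noise this
    # approaches Phi(||s|| / sqrt(2)), about 0.77 for these parameters.
    pc = np.mean([npw_2afc_trial() for _ in range(2000)])
    ```

    A search-capable observer like the VS model differs in that the template is applied at many candidate locations within a search radius rather than only at the known lesion position.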

  2. How African American English-Speaking First Graders Segment and Rhyme Words and Nonwords With Final Consonant Clusters.

    PubMed

    Shollenbarger, Amy J; Robinson, Gregory C; Taran, Valentina; Choi, Seo-Eun

    2017-10-05

    This study explored how typically developing 1st grade African American English (AAE) speakers differ from mainstream American English (MAE) speakers in the completion of 2 common phonological awareness tasks (rhyming and phoneme segmentation) when the stimulus items were consonant-vowel-consonant-consonant (CVCC) words and nonwords. Forty-nine 1st graders met criteria for 2 dialect groups: AAE and MAE. Three conditions were tested in each rhyme and segmentation task: Real Words No Model, Real Words With a Model, and Nonwords With a Model. The AAE group had significantly more responses that rhymed CVCC words with consonant-vowel-consonant words and segmented CVCC words as consonant-vowel-consonant than the MAE group across all experimental conditions. In the rhyming task, the presence of a model in the real word condition elicited more reduced final cluster responses for both groups. In the segmentation task, the MAE group was at ceiling, so only the AAE group changed across the different stimulus presentations and reduced the final cluster less often when given a model. Rhyming and phoneme segmentation performance can be influenced by a child's dialect when CVCC words are used.

  3. Overcoming catastrophic forgetting in neural networks

    PubMed Central

    Kirkpatrick, James; Pascanu, Razvan; Rabinowitz, Neil; Veness, Joel; Desjardins, Guillaume; Rusu, Andrei A.; Milan, Kieran; Quan, John; Ramalho, Tiago; Grabska-Barwinska, Agnieszka; Hassabis, Demis; Clopath, Claudia; Kumaran, Dharshan; Hadsell, Raia

    2017-01-01

    The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially. PMID:28292907
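    The method this abstract summarizes (published as elastic weight consolidation) slows learning on important weights by adding a quadratic penalty, weighted by each parameter's Fisher information, anchoring it to its old-task value. A toy sketch with a quadratic stand-in for the new-task loss (all numbers hypothetical):

    ```python
    import numpy as np

    def ewc_grad(theta, theta_old, fisher, lam):
        """Gradient of the EWC penalty (lam/2) * sum_i F_i (theta_i - theta_old_i)^2:
        high-Fisher (important) weights are pulled back toward their
        old-task values, so learning on them is selectively slowed."""
        return lam * fisher * (theta - theta_old)

    theta_old = np.array([1.0, 1.0])     # weights after the old task
    fisher = np.array([10.0, 0.01])      # importance of each weight
    target = np.array([0.0, 0.0])        # optimum of the (toy) new task

    theta = theta_old.copy()
    for _ in range(500):                 # gradient descent on new loss + penalty
        grad = (theta - target) + ewc_grad(theta, theta_old, fisher, lam=1.0)
        theta -= 0.1 * grad
    # The important weight barely moves (~0.91); the unimportant one
    # tracks the new task's optimum (~0.01).
    ```

    The fixed point is theta_i = (target_i + lam * F_i * theta_old_i) / (1 + lam * F_i) per coordinate, which makes the trade-off between retaining the old task and fitting the new one explicit.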

  4. Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Shapiro, Gerald

    1998-01-01

    This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.

  5. Exploring Dimensionality of Effortful Control Using Hot and Cool Tasks in a Sample of Preschool Children

    PubMed Central

    Allan, Nicholas P.; Lonigan, Christopher J.

    2015-01-01

    Effortful control (EC) is an important developmental construct associated with academic performance, socioemotional growth, and psychopathology. EC, defined as the ability to inhibit or delay a prepotent response typically in favor of a subdominant response, undergoes rapid development during children’s preschool years. Research involving EC in preschool children can be aided by ensuring that the measured model of EC matches the latent structure of EC. Extant research indicates that EC may be multidimensional, consisting of hot (affectively salient) and cool (affectively neutral) dimensions. However, there are several untested assumptions regarding the defining features of hot EC. Confirmatory factor analysis was used in a sample of 281 preschool children (Mage = 55.92 months, SD = 4.16; 46.6% male and 53.4% female) to compare a multidimensional model composed of hot and cool EC factors with a unidimensional model. Hot tasks were created by adding affective salience to cool tasks so that hot and cool tasks varied only by this aspect of the tasks. Tasks measuring EC were best described by a single factor and not distinct hot and cool factors, indicating that affective salience alone does not differentiate between hot and cool EC. EC shared gender-invariant associations with academic skills and externalizing behavior problems. PMID:24518050

  6. Exploring dimensionality of effortful control using hot and cool tasks in a sample of preschool children.

    PubMed

    Allan, Nicholas P; Lonigan, Christopher J

    2014-06-01

    Effortful control (EC) is an important developmental construct associated with academic performance, socioemotional growth, and psychopathology. EC, defined as the ability to inhibit or delay a prepotent response typically in favor of a subdominant response, undergoes rapid development during children's preschool years. Research involving EC in preschool children can be aided by ensuring that the measured model of EC matches the latent structure of EC. Extant research indicates that EC may be multidimensional, consisting of hot (affectively salient) and cool (affectively neutral) dimensions. However, there are several untested assumptions regarding the defining features of hot EC. Confirmatory factor analysis was used in a sample of 281 preschool children (Mage = 55.92 months, SD = 4.16; 46.6% male and 53.4% female) to compare a multidimensional model composed of hot and cool EC factors with a unidimensional model. Hot tasks were created by adding affective salience to cool tasks so that hot and cool tasks varied only by this aspect of the tasks. Tasks measuring EC were best described by a single factor and not distinct hot and cool factors, indicating that affective salience alone does not differentiate between hot and cool EC. EC shared gender-invariant associations with academic skills and externalizing behavior problems. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. A Developmental Learning Approach of Mobile Manipulator via Playing

    PubMed Central

    Wu, Ruiqi; Zhou, Changle; Chao, Fei; Zhu, Zuyuan; Lin, Chih-Min; Yang, Longzhi

    2017-01-01

    Inspired by infant development theories, a robotic developmental model combined with game elements is proposed in this paper. This model does not require the definition of specific developmental goals for the robot; instead, the developmental goals are implied in the goals of a series of game tasks. The games are organized into a sequence of game modes based on the complexity of the game tasks, from simple to complex, and the task complexity is determined by the application of developmental constraints. Given a current mode, the robot switches to play in a more complicated game mode when it cannot find any new salient stimuli in the current mode. By doing so, the robot gradually achieves its developmental goals by playing different modes of games. In the experiment, the game was instantiated on a mobile robot with the playing task of picking up toys, and the game is designed with a simple game mode and a complex game mode. A developmental algorithm, “Lift-Constraint, Act and Saturate,” is employed to drive the mobile robot from the simple mode to the complex one. The experimental results show that the mobile manipulator is able to successfully learn the mobile grasping ability after playing simple and complex games, which is promising for developing robotic abilities to solve complex tasks using games. PMID:29046632

  8. Opening the Black Box: Cognitive Strategies in Family Practice

    PubMed Central

    Christensen, Robert E.; Fetters, Michael D.; Green, Lee A.

    2005-01-01

    PURPOSE We wanted to describe the cognitive strategies used by family physicians when structuring the decision-making tasks of an outpatient visit. METHODS This qualitative study used cognitive task analysis, a structured interview method in which a trained interviewer works individually with expert decision makers to capture their stages and elements of information processing. RESULTS Eighteen family physicians of varying levels of experience participated. Three dominant themes emerged: time pressure, a high degree of variation in task structuring, and varying degrees of task automatization. Based on these data and previous research from the cognitive sciences, we developed a model of novice and expert approaches to decision making in primary care. The model illustrates differences in responses to unexpected opportunity in practice, particularly the expert’s use of attentional surplus (reserve capacity to handle problems) vs the novice’s choice between taking more time or displacing another task. CONCLUSIONS Family physicians have specific, highly individualized cognitive task-structuring approaches and show the decision behavior features typical of expert decision makers in other fields. This finding places constraints on and suggests useful approaches for improving practice. PMID:15798041

  9. Framing matters: Effects of framing on older adults’ exploratory decision-making

    PubMed Central

    Cooper, Jessica A.; Blanco, Nathaniel; Maddox, W. Todd

    2016-01-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults’ decision-making performance was preserved when maximizing gains, but declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains-maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subjects behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss-minimization from Experiment 1, and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults’ performance in exploratory decision-making is hindered when framed as loss-minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. PMID:27977218

  10. Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.

    1992-01-01

    Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the location of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM) developed to provide taskspace database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser radar based range imaging. Through the fusion of taskspace database information and image sensor data, a verifiable taskspace model is generated providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.

  11. Framing matters: Effects of framing on older adults' exploratory decision-making.

    PubMed

    Cooper, Jessica A; Blanco, Nathaniel J; Maddox, W Todd

    2017-02-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults' decision-making performance was preserved when maximizing gains, but it declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subject behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss minimization from Experiment 1 and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults' performance in exploratory decision-making is hindered when framed as loss minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Vocabulary skills are well developed in university students with dyslexia: Evidence from multiple case studies.

    PubMed

    Cavalli, Eddy; Casalis, Séverine; El Ahmadi, Abdessadek; Zira, Mélody; Poracchia-George, Florence; Colé, Pascale

    2016-01-01

    Most studies in adults with developmental dyslexia have focused on identifying the deficits responsible for their persistent reading difficulties, but little is known on how these readers manage the intensive exposure to written language required to obtain a university degree. The main objective of this study was to identify certain skills, and specifically vocabulary skills, that French university students with dyslexia have developed and that may contribute to their literacy skills. We tested 20 university students with dyslexia and 20 normal readers (matched on chronological age, gender, nonverbal IQ, and level of education) in reading, phonological, vocabulary breadth (number of known words), and vocabulary depth (accuracy and precision) tasks. In comparing vocabulary measures, we used both Rasch model and single case study methodologies. Results on reading and phonological tasks confirmed the persistence of deficits in written word recognition and phonological skills. However, using the Rasch model we found that the two groups performed at the same level in the vocabulary breadth task, whereas dyslexics systematically outperformed their chronological age controls in the vocabulary depth task. These results are supplemented by multiple case studies. The vocabulary skills of French university students with dyslexia are well developed. Possible interpretations of these results are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1993-01-01

    The period from Jan. 1993 through Aug. 1993 is covered. The primary tasks during this period were the development of a single and multi-vibrational temperature preferential vibration-dissociation coupling model, the development of a normal shock nonequilibrium radiation-gasdynamic coupling model based upon the blunt body model, and the comparison of results obtained with these models with experimental data. In addition, an extensive series of computations were conducted using the blunt body model to develop a set of reference results covering a wide range of vehicle sizes, altitudes, and entry velocities.

  14. Introduction to the IWA task group on biofilm modeling.

    PubMed

    Noguera, D R; Morgenroth, E

    2004-01-01

    An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
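    For the simplest case resembling BM1, the flux of substrate into a flat biofilm has a closed-form solution if the kinetics are taken in the first-order limit. This is a standard textbook simplification for illustration, not the benchmark's exact specification:

    ```python
    import math

    def first_order_flux(S_s, k1, D_f, L_f):
        """Steady-state substrate flux into a flat biofilm with
        first-order kinetics (D_f * s'' = k1 * s, concentration S_s at
        the biofilm surface, zero flux at the substratum):
            J = S_s * sqrt(k1 * D_f) * tanh(L_f * sqrt(k1 / D_f))
        """
        return S_s * math.sqrt(k1 * D_f) * math.tanh(L_f * math.sqrt(k1 / D_f))
    ```

    In the deep-biofilm limit the tanh term approaches 1 and the flux becomes S_s * sqrt(k1 * D_f), independent of thickness; for a very thin, fully penetrated film it reduces to k1 * L_f * S_s. Comparing a numerical model against limits like these is one way the benchmark problems expose differences between modeling approaches.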

  15. A chain-retrieval model for voluntary task switching.

    PubMed

    Vandierendonck, André; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick

    2012-09-01

    To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved information consists of acquired sequences (or chains) of tasks, that selection may be biased towards chains containing more task repetitions and that bottom-up triggered repetitions may overrule the intended task. To test this model, four experiments are reported. In Studies 1 and 2, sequences of task choices and the corresponding transition sequences (task repetitions or switches) were analyzed with the help of dependency statistics. The free parameters of the chain-retrieval model were estimated on the observed task sequences and these estimates were used to predict autocorrelations of tasks and transitions. In Studies 3 and 4, sequences of hand choices and their transitions were analyzed similarly. In all studies, the chain-retrieval model yielded better fits and predictions than statistical models of event choice. In applications to voluntary task switching (Studies 1 and 2), all three parameters of the model were needed to account for the data. When no task switching was required (Studies 3 and 4), the chain-retrieval model could account for the data with one or two parameters clamped to a neutral value. Implications for our understanding of voluntary task selection and broader theoretical implications are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.
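    The dependency statistics mentioned in Studies 1 and 2 operate on the sequence of transitions (repetition vs. switch). One simple statistic of that general kind, a lag-k autocorrelation of the transition sequence, can be sketched as follows (an illustration, not the authors' exact analysis):

    ```python
    import math

    def transition_autocorr(tasks, lag=1):
        """Code each trial as repetition (1) or switch (0) relative to
        the previous trial, then correlate the transition sequence with
        itself at the given lag.  Nonzero values indicate sequential
        dependency in how tasks are voluntarily chosen."""
        trans = [1.0 if a == b else 0.0 for a, b in zip(tasks, tasks[1:])]
        n = len(trans) - lag
        x, y = trans[:n], trans[lag:]
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
        vx = sum((a - mx) ** 2 for a in x) / n
        vy = sum((b - my) ** 2 for b in y) / n
        return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0
    ```

    For example, the strictly patterned choice sequence AABBAABB... yields transitions 1,0,1,0,..., whose lag-1 autocorrelation is -1, whereas truly random task selection would give values near zero. A chain-retrieval account predicts systematic nonzero dependencies of this sort.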

  16. Issues in Developing a Normative Descriptive Model for Dyadic Decision Making

    NASA Technical Reports Server (NTRS)

    Serfaty, D.; Kleinman, D. L.

    1984-01-01

    Most research in modelling human information processing and decision making has been devoted to the case of the single human operator. In the present effort, concepts from the fields of organizational behavior, engineering psychology, team theory and mathematical modelling are merged in an attempt to consider first the case of two cooperating decisionmakers (the Dyad) in a multi-task environment. Rooted in the well-known Dynamic Decision Model (DDM), the normative descriptive approach brings basic cognitive and psychophysical characteristics inherent to human behavior into a team theoretic analytic framework. An experimental paradigm, involving teams in dynamic decision making tasks, is designed to produce the data with which to build the theoretical model.

  17. Development of Mouse Models of Ovarian Cancer for Studying Tumor Biology and Testing Novel Molecularly Targeted Therapeutic Strategies

    DTIC Science & Technology

    2011-09-01

    Treatment of tumor-bearing mice with cisplatin accompanied by MRI and BLI (completed years 2 and 3, Rehemtulla and Cho laboratories). Task 10: Treatment of tumor-bearing mice with perifosine accompanied by MRI and BLI (completed, years 2 and 3, Rehemtulla and Cho laboratories). Task 11: Treatment of tumor-bearing mice with SC-560 accompanied by MRI and BLI (not performed). Task 12: Histological and immunohistochemical analysis of β-catenin

  18. Simulating the Role of Visual Selective Attention during the Development of Perceptual Completion

    ERIC Educational Resources Information Center

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2012-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of…

  19. COMMUNITY MULTISCALE AIR QUALITY MODELING SYSTEM (ONE ATMOSPHERE)

    EPA Science Inventory

    This task supports ORD's strategy by providing responsive technical support of EPA's mission and provides credible state of the art air quality models and guidance. This research effort is to develop and improve the Community Multiscale Air Quality (CMAQ) modeling system, a mu...

  20. Artificial Neural Networks for Modeling Knowing and Learning in Science.

    ERIC Educational Resources Information Center

    Roth, Wolff-Michael

    2000-01-01

    Advocates artificial neural networks as models for cognition and development. Provides an example of how such models work in the context of a well-known Piagetian developmental task and school science activity: balance beam problems. (Contains 59 references.) (Author/WRM)

  1. An efficient liner cooling scheme for advanced small gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Paskin, Marc D.; Mongia, Hukam C.; Acosta, Waldo A.

    1993-01-01

    A joint Army/NASA program was conducted to design, fabricate, and test an advanced, small gas turbine, reverse-flow combustor utilizing a compliant metal/ceramic (CMC) wall cooling concept. The objectives of this effort were to develop a design method (basic design data base and analysis) for the CMC cooling technique and then demonstrate its application to an advanced cycle, small, reverse-flow combustor with 3000 F burner outlet temperature. The CMC concept offers significant improvements in wall cooling effectiveness resulting in a large reduction in cooling air requirements. Therefore, more air is available for control of burner outlet temperature pattern in addition to the benefits of improved efficiency, reduced emissions, and lower smoke levels. The program was divided into four tasks. Task 1 defined component materials and localized design of the composite wall structure in conjunction with development of basic design models for the analysis of flow and heat transfer through the wall. Task 2 included implementation of the selected materials and validated design models during combustor preliminary design. Detail design of the selected combustor concept and its refinement with 3D aerothermal analysis were completed in Task 3. Task 4 covered detail drawings, process development and fabrication, and a series of burner rig tests. The purpose of this paper is to provide details of the investigation into the fundamental flow and heat transfer characteristics of the CMC wall structure as well as implementation of the fundamental analysis method for full-scale combustor design.

  2. Disentangling working memory processes during spatial span assessment: a modeling analysis of preferred eye movement strategies.

    PubMed

    Patt, Virginie M; Thomas, Michael L; Minassian, Arpi; Geyer, Mark A; Brown, Gregory G; Perry, William

    2014-01-01

    The neurocognitive processes involved during classic spatial working memory (SWM) assessment were investigated by examining naturally preferred eye movement strategies. Cognitively healthy adult volunteers were tested in a computerized version of the Corsi Block-Tapping Task--a spatial span task requiring the short term maintenance of a series of locations presented in a specific order--coupled with eye tracking. Modeling analysis was developed to characterize eye-tracking patterns across all task phases, including encoding, retention, and recall. Results revealed a natural preference for local gaze maintenance during both encoding and retention, with fewer than 40% fixated targets. These findings contrasted with the stimulus retracing pattern expected during recall as a result of task demands, with 80% fixated targets. Along with participants' self-reported strategies of mentally "making shapes," these results suggest the involvement of covert attention shifts and higher order cognitive Gestalt processes during spatial span tasks, challenging instrument validity as a single measure of SWM storage capacity.

  3. Examining depletion theories under conditions of within-task transfer.

    PubMed

    Brewer, Gene A; Lau, Kevin K H; Wingert, Kimberly M; Ball, B Hunter; Blais, Chris

    2017-07-01

    In everyday life, mental fatigue can be detrimental across many domains including driving, learning, and working. Given the importance of understanding and accounting for the deleterious effects of mental fatigue on behavior, a growing body of literature has studied the role of motivational and executive control processes in mental fatigue. In typical laboratory paradigms, participants complete a task that places demand on these self-control processes and are later given a subsequent task. Generally speaking, decrements to subsequent task performance are taken as evidence that the initial task created mental fatigue through the continued engagement of motivational and executive functions. Several models have been developed to account for negative transfer resulting from this "ego depletion." In the current study, we provide a brief literature review, specify current theoretical approaches to ego-depletion, and report an empirical test of current models of depletion. Across 4 experiments we found minimal evidence for executive control depletion along with strong evidence for motivation mediated ego depletion. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Toward a Cognitive Task Analysis for Biomedical Query Mediation

    PubMed Central

    Hruby, Gregory W.; Cimino, James J.; Patel, Vimla; Weng, Chunhua

    2014-01-01

    In many institutions, data analysts use a Biomedical Query Mediation (BQM) process to facilitate data access for medical researchers. However, understanding of the BQM process is limited in the literature. To bridge this gap, we performed the initial steps of a cognitive task analysis using 31 BQM instances conducted between one analyst and 22 researchers in one academic department. We identified five top-level tasks, i.e., clarify research statement, explain clinical process, identify related data elements, locate EHR data element, and end BQM with either a database query or unmet, infeasible information needs, and 10 sub-tasks. We evaluated the BQM task model with seven data analysts from different clinical research institutions. Evaluators found all the tasks completely or semi-valid. This study contributes initial knowledge towards the development of a generalizable cognitive task representation for BQM. PMID:25954589

  5. Toward a cognitive task analysis for biomedical query mediation.

    PubMed

    Hruby, Gregory W; Cimino, James J; Patel, Vimla; Weng, Chunhua

    2014-01-01

    In many institutions, data analysts use a Biomedical Query Mediation (BQM) process to facilitate data access for medical researchers. However, understanding of the BQM process is limited in the literature. To bridge this gap, we performed the initial steps of a cognitive task analysis using 31 BQM instances conducted between one analyst and 22 researchers in one academic department. We identified five top-level tasks, i.e., clarify research statement, explain clinical process, identify related data elements, locate EHR data element, and end BQM with either a database query or unmet, infeasible information needs, and 10 sub-tasks. We evaluated the BQM task model with seven data analysts from different clinical research institutions. Evaluators found all the tasks completely or semi-valid. This study contributes initial knowledge towards the development of a generalizable cognitive task representation for BQM.

  6. A dynamic model of stress and sustained attention

    NASA Technical Reports Server (NTRS)

    Hancock, P. A.; Warm, Joel S.

    1989-01-01

    Arguments are presented that an integrated view of stress and performance must consider the task demanding sustained attention as a primary source of cognitive stress. A dynamic model, based on the concept of adaptability in both physiological and psychological terms, is developed that addresses the effects of stress on vigilance and, potentially, on a wide variety of attention-demanding performance tasks. The model provides insight into the failure of an operator under the driving influences of stress and opens a number of potential avenues through which solutions to the complex challenge of stress and performance might be posed.

  7. Space Storable Rocket Technology (SSRT) basic program

    NASA Technical Reports Server (NTRS)

    Chazen, M. L.; Mueller, T.; Casillas, A. R.; Huang, D.

    1992-01-01

    The Space Storable Rocket Technology Program (SSRT) was conducted to establish the technology for a new class of high-performance, long-life bipropellant engines using space storable propellants. The results are described. Task 1 evaluated several characteristics for a number of fuels to determine the best space storable fuel for use with LO2. The results indicated that LO2-N2H4 is the best propellant combination, providing the maximum mission/system capability (maximum satellite payload into GEO). Task 2 developed two models, performance and thermal. The performance model indicated that the performance goal of a specific impulse of 340 seconds or greater (sigma = 204) could be achieved. The thermal model was developed and anchored to hot-fire test data. Task 3 consisted of design, fabrication, and testing of a 200 lbf thrust test engine operating at a chamber pressure of 200 psia using LO2-N2H4. A total of 76 hot-fire tests were conducted, demonstrating a specific impulse greater than 340 seconds (sigma = 204), a 25-second improvement over the existing highest-performance flight apogee-class engines.

  8. Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database

    NASA Technical Reports Server (NTRS)

    Levack, Daniel

    1993-01-01

    The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database which is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task is to extend across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, parametric propulsion database and propulsion system database, are described. The descriptions include a user's guide to each code, write-ups for models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA derived nuclear thermal rocket engine.

  9. Integrated Task and Data Parallel Programming

    NASA Technical Reports Server (NTRS)

    Grimshaw, A. S.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. A number of applications exhibit properties that would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or as managers of parallel objects of either type, thus allowing the development of single- and multi-paradigm parallel applications.

    1995 Research Accomplishments: In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset.

    1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.

    Additional 1995 Activities: During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++, edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.

  10. Integrated Task And Data Parallel Programming: Language Design

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. A number of applications exhibit properties that would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or as managers of parallel objects of either type, thus allowing the development of single- and multi-paradigm parallel applications.

    1995 Research Accomplishments: In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset.

    1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.

    Additional 1995 Activities: During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++, edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.

  11. A Problem-Solving Conceptual Framework and Its Implications in Designing Problem-Posing Tasks

    ERIC Educational Resources Information Center

    Singer, Florence Mihaela; Voica, Cristian

    2013-01-01

    The links between the mathematical and cognitive models that interact during problem solving are explored with the purpose of developing a reference framework for designing problem-posing tasks. When the process of solving is a successful one, a solver successively changes his/her cognitive stances related to the problem via transformations that…

  12. Effects of Pretask Modeling on Attention to Form and Question Development

    ERIC Educational Resources Information Center

    Kim, YouJin

    2013-01-01

    Over the last two decades, a growing body of research has shown positive impacts for task planning in task-based instruction (e.g., Ellis, 2005; Foster & Skehan, 1996). However, what learners plan during pretask planning, and whether any specific planning strategies are more beneficial in encouraging learners to attend to linguistic forms and…

  13. Relation between CBM-R and CBM-mR Slopes: An Application of Latent Growth Modeling

    ERIC Educational Resources Information Center

    Yeo, Seungsoo; Fearrington, Jamie Y.; Christ, Theodore J.

    2012-01-01

    Oral reading tasks and Maze reading tasks are often used interchangeably to assess the level and rate of reading skill development. This study examined the concurrent validity of growth estimates derived from "Curriculum-Based Measurement of Oral Reading" (CBM-R) and "Maze Reading" (CBM-mR). Participants were 1,528 students…

  14. The Identification and Comparison of the Tasks for the Occupational Role of Industrial Production Technologist.

    ERIC Educational Resources Information Center

    Nee, John G.

    This paper describes a project designed to: (1) develop a model for determining occupational activity components to be used in any vocational-technical program, (2) produce a list of occupational activity components (tasks) for the occupational roles identified, (3) determine scores, ranks and percentages for each component from each occupational…

  15. Maximally Expressive Task Modeling

    NASA Technical Reports Server (NTRS)

    Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)

    2002-01-01

    Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise: both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.

  16. Methods for Maximizing the Learning Process: A Theoretical and Experimental Analysis.

    ERIC Educational Resources Information Center

    Atkinson, Richard C.

    This research deals with optimizing the instructional process. The approach adopted was to limit consideration to simple learning tasks for which adequate mathematical models could be developed. Optimal or suitable suboptimal instructional strategies were developed for the models. The basic idea was to solve for strategies that either maximize the…

  17. An Integrative Social-Cognitive Developmental Model of Supervision for Substance Abuse Counselors-in-Training

    ERIC Educational Resources Information Center

    Sias, Shari M.; Lambie, Glenn W.

    2008-01-01

    Substance abuse counselors (SACs) at higher levels of social-cognitive maturity manage complex situations and perform counselor-related tasks more effectively than individuals at lower levels of development. This article presents an integrative clinical supervision model designed to promote the social-cognitive maturity (ego development;…

  18. A Planning Guide for Gifted Preschoolers.

    ERIC Educational Resources Information Center

    Malley-Crist, Justine; And Others

    Contained in the curriculum planning guide developed by the Chapel Hill Gifted-Handicapped Project are a model, a training sequence, and 17 instructional units for use with preschool gifted children. The model is explained to be based on the hierarchy of cognitive tasks developed by B. Bloom. A worksheet for teachers suggests activities to help…

  19. Auto Mechanics. Instructional System Development Model for Vermont Area Vocational Centers.

    ERIC Educational Resources Information Center

    The model curriculum guide was developed to teach automotive mechanics in secondary schools in Vermont. It is composed of a series of units related to tasks identified as skills, concepts, and values, which are stated in behavioral terms, supported by suggested learning activities, reinforced by teacher resource needs and suggested evaluation…

  20. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    PubMed

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-12-01

    The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.
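A minimal sketch may make the stay-probability analysis concrete. The transition probability (0.7), the epsilon-greedy learner, and the fixed second-step reward probabilities below are illustrative assumptions, not the paper's exact design; the sketch simulates a purely model-free agent on a two-step-like structure and computes the standard "compare successive trials" statistic.

```python
import random

# Illustrative two-step-style task with a model-free learner.
# All parameter values are assumptions for demonstration only.
COMMON = 0.7  # probability that a first-step action leads to its 'common' second-step state

def run_session(n_trials=1000, alpha=0.5, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]          # model-free values of the two first-step actions
    reward_p = [0.8, 0.2]   # reward probability in each second-step state
    records = []            # (action, common_transition, rewarded) per trial
    for _ in range(n_trials):
        # epsilon-greedy choice over first-step actions
        a = rng.randrange(2) if rng.random() < epsilon else (0 if q[0] >= q[1] else 1)
        common = rng.random() < COMMON
        state = a if common else 1 - a          # rare transitions swap the destination
        r = 1.0 if rng.random() < reward_p[state] else 0.0
        q[a] += alpha * (r - q[a])              # model-free update credits the chosen action only
        records.append((a, common, r))
    return records

def stay_probabilities(records):
    """P(repeat previous action), split by the previous trial's transition type and reward."""
    counts = {(c, r): [0, 0] for c in (True, False) for r in (0.0, 1.0)}
    for prev, cur in zip(records, records[1:]):
        stays, total = counts[(prev[1], prev[2])]
        counts[(prev[1], prev[2])] = [stays + int(cur[0] == prev[0]), total + 1]
    return {k: n / d for k, (n, d) in counts.items() if d}
```

In the textbook reading, a model-free agent's stay probability depends mainly on the previous reward, not on whether the transition was common or rare; the interactions the paper warns about arise when seemingly innocuous task modifications induce correlations between these factors.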

  1. Use of modeling to identify vulnerabilities to human error in laparoscopy.

    PubMed

    Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra

    2010-01-01

    This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
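The modified FMEA step described above can be sketched in a few lines: each potential error is scored for probability of occurrence, severity of consequences, and probability of going undetected, and the product (a risk priority number) ranks errors for intervention. The error names and all scores below are hypothetical, invented for illustration; they are not the study's data.

```python
# Modified FMEA prioritization: rank error vulnerabilities by
# risk priority number, RPN = occurrence x severity x non-detection.

def risk_priority(errors):
    """errors: list of (name, occurrence, severity, detection), each score 1-10.
    Returns (name, RPN) pairs sorted by descending RPN."""
    scored = [(name, o * s * d) for name, o, s, d in errors]
    return sorted(scored, key=lambda e: e[1], reverse=True)

candidate_errors = [
    # (error, occurrence, severity, non-detection) -- hypothetical values
    ("needle inserted at wrong angle",     4,  8, 5),
    ("insufficient abdominal wall lift",   6,  7, 3),
    ("needle advanced past safe depth",    2, 10, 7),
    ("valve left closed during insertion", 3,  4, 2),
]

for name, rpn in risk_priority(candidate_errors):
    print(f"{rpn:4d}  {name}")
```

The highest-RPN entries would be the "likely and consequential errors" prioritized for intervention in the study's terminology.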

  2. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task

    PubMed Central

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-01-01

    The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues. PMID:26657806

  3. A Chain-Retrieval Model for Voluntary Task Switching

    ERIC Educational Resources Information Center

    Vandierendonck, Andre; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick

    2012-01-01

    To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved…

  4. Children's motivation in elementary physical education: an expectancy-value model of achievement choice.

    PubMed

    Xiang, Ping; McBride, Ron; Guan, Jianmin; Solmon, Melinda

    2003-03-01

    This study examined children's motivation in elementary physical education within an expectancy-value model developed by Eccles and her colleagues. Four hundred fourteen students in second and fourth grades completed questionnaires assessing their expectancy-related beliefs, subjective task values, and intention for future participation in physical education. Results indicated that expectancy-related beliefs and subjective task values were clearly distinguishable from one another across physical education and throwing. The two constructs were related to each other positively. Children's intention for future participation in physical education was positively associated with their subjective task values and/or expectancy-related beliefs. Younger children had higher motivation for learning in physical education than older children. Gender differences emerged and the findings provided empirical evidence supporting the validity of the expectancy-value model in elementary physical education.

  5. Developing learning community model with soft skill integration for the building engineering apprenticeship programme in vocational high school

    NASA Astrophysics Data System (ADS)

    Sutrisno, Dardiri, Ahmad; Sugandi, R. Machmud

    2017-09-01

    This study aimed to address the procedure, effectiveness, and problems in the implementation of a learning model for the Building Engineering Apprenticeship Training Programme. The study was carried out through a survey method and an experiment. The data were collected using a questionnaire, a test, and assessment sheets, and were examined through description, t-tests, and covariance analysis. The results showed that (1) the model's procedure covered a preparation course, readiness assessment, assignment distribution, handing over students to apprenticeship instructors, task completion, assisting, field assessment, report writing, and a follow-up examination; (2) the Learning Community model could significantly improve students' active learning, but not their hard skills and soft skills; and (3) the problems emerging in the implementation of the model were (a) students' difficulties in finding apprenticeship places and qualified instructors and asking for relevant tasks, (b) teachers' difficulties in determining relevant tasks and monitoring students, and (c) apprenticeship instructors' difficulties in assigning, monitoring, and assessing students.

  6. Software Tools For Building Decision-support Models For Flood Emergency Situations

    NASA Astrophysics Data System (ADS)

    Garrote, L.; Molina, M.; Ruiz, J. M.; Mosquera, J. C.

    The SAIDA decision-support system was developed by the Spanish Ministry of the Environment to provide assistance to decision-makers during flood situations. SAIDA has been tentatively implemented in two test basins: Jucar and Guadalhorce, and the Ministry is currently planning to have it implemented in all major Spanish basins in a few years' time. During the development cycle of SAIDA, the need for providing assistance to end-users in model definition and calibration was clearly identified. System developers usually emphasise abstraction and generality with the goal of providing a versatile software environment. End users, on the other hand, require concretion and specificity to adapt the general model to their local basins. As decision-support models become more complex, the gap between model developers and users gets wider: who takes care of model definition, calibration and validation? Initially, model developers perform these tasks, but the scope is usually limited to a few small test basins. Before the model enters the operational stage, end users must get involved in model construction and calibration, in order to gain confidence in the model recommendations. However, getting the users involved in these activities is a difficult task. The goal of this research is to develop representation techniques for simulation and management models in order to define, develop and validate a mechanism, supported by a software environment, oriented to provide assistance to the end-user in building decision models for the prediction and management of river floods in real time. The system is based on three main building blocks: a library of simulators of the physical system, an editor to assist the user in building simulation models, and a machine learning method to calibrate decision models based on the simulation models provided by the user.

  7. Task conflict and team creativity: a question of how much and when.

    PubMed

    Farh, Jiing-Lih; Lee, Cynthia; Farh, Crystal I C

    2010-11-01

    Bridging the task conflict, team creativity, and project team development literatures, we present a contingency model in which the relationship between task conflict and team creativity depends on the level of conflict and when it occurs in the life cycle of a project team. In a study of 71 information technology project teams in the greater China region, we found that task conflict had a curvilinear effect on team creativity, such that creativity was highest at moderate levels of task conflict. Additionally, we found this relationship to be moderated by team phase, such that the curvilinear effect was strongest at an early phase. In contrast, at later phases of the team life cycle, task conflict was found to be unrelated to team creativity. (c) 2010 APA, all rights reserved.
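The contingency pattern above (an inverted-U at the early phase, no relationship later) can be written out as a simple quadratic, which makes "creativity was highest at moderate levels" precise. The 0-10 conflict scale and all coefficients below are invented purely for illustration; they are not the study's estimates.

```python
# Hypothetical-coefficient illustration of the contingency model:
# early-phase creativity is curvilinear in task conflict, later phases are flat.

def predicted_creativity(conflict, phase):
    """conflict: 0-10 scale; phase: 'early' or 'late' in the project life cycle."""
    if phase == "late":
        return 5.0                                      # conflict unrelated to creativity
    return 5.0 + 1.0 * conflict - 0.1 * conflict ** 2   # inverted-U, peak at moderate conflict

# the quadratic peaks at conflict = 5, a moderate level on the assumed scale
peak = max(range(11), key=lambda c: predicted_creativity(c, "early"))
```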

  8. Classification of a Driver's cognitive workload levels using artificial neural network on ECG signals.

    PubMed

    Tjolleng, Amir; Jung, Kihyo; Hong, Wongi; Lee, Wonsup; Lee, Baekhee; You, Heecheon; Son, Joonwoo; Park, Seikwon

    2017-03-01

    An artificial neural network (ANN) model was developed in the present study to classify the level of a driver's cognitive workload based on electrocardiography (ECG). ECG signals were measured on 15 male participants while they performed a simulated driving task as a primary task, with or without an N-back task as a secondary task. Three time-domain ECG measures (mean inter-beat interval (IBI), standard deviation of IBIs, and root mean squared difference of adjacent IBIs) and three frequency-domain ECG measures (power in low frequency, power in high frequency, and ratio of power in low and high frequencies) were calculated. To compensate for individual differences in heart response during the driving tasks, a three-step data processing procedure was applied to the ECG signals of each participant: (1) selection of the two most sensitive ECG measures, (2) definition of three (low, medium, and high) cognitive workload levels, and (3) normalization of the selected ECG measures. An ANN model was constructed using a feed-forward network and scaled conjugate gradient as a back-propagation learning rule. The accuracy of the ANN classification model was found satisfactory for learning data (95%) and testing data (82%). Copyright © 2016 Elsevier Ltd. All rights reserved.
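The per-participant normalization step is the part most easily sketched. Below, two selected measures are z-scored within one participant so that baseline differences in heart response do not dominate the classifier input; the measure names ("mean_ibi", "sdnn") and the trial values are illustrative assumptions, not the study's data, and the ANN itself is omitted.

```python
import statistics

# Step (3) of the procedure described above: z-normalise the two selected
# ECG measures within each participant before feeding them to the classifier.

def normalize_participant(trials, measures=("mean_ibi", "sdnn")):
    """trials: list of dicts mapping measure name -> value for one participant.
    Returns copies of the trials with each selected measure z-scored."""
    out = [dict(t) for t in trials]
    for m in measures:
        vals = [t[m] for t in trials]
        mu = statistics.fmean(vals)
        sd = statistics.stdev(vals)
        for t in out:
            t[m] = (t[m] - mu) / sd
    return out

participant = [
    {"mean_ibi": 820.0, "sdnn": 52.0},   # hypothetical low-workload trial
    {"mean_ibi": 780.0, "sdnn": 45.0},   # hypothetical medium-workload trial
    {"mean_ibi": 740.0, "sdnn": 38.0},   # hypothetical high-workload trial
]
normalized = normalize_participant(participant)
```

After normalization, each measure has zero mean and unit variance within the participant, so a single network can be trained across drivers with very different resting heart rates.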

  9. Task Models in the Digital Ocean

    ERIC Educational Resources Information Center

    DiCerbo, Kristen E.

    2014-01-01

    The Task Model is a description of each task in a workflow. It defines attributes associated with that task. The creation of task models becomes increasingly important as the assessment tasks become more complex. Explicitly delineating the impact of task variables on the ability to collect evidence and make inferences demands thoughtfulness from…

  10. Intelligence Reach for Expertise (IREx)

    NASA Astrophysics Data System (ADS)

    Hadley, Christina; Schoening, James R.; Schreiber, Yonatan

    2015-05-01

    IREx is a search engine for next-generation analysts to find collaborators. U.S. Army Field Manual 2.0 (Intelligence) calls for collaboration within and outside the area of operations, but finding the best collaborator for a given task can be challenging. IREx will be demonstrated as part of the Actionable Intelligence Technology Enabled Capability Demonstration (AI-TECD) at the E15 field exercises at Ft. Dix in July 2015. It includes a Task Model for describing a task and its prerequisite competencies, plus a User Model (i.e., a user profile) for individuals to assert their capabilities and other relevant data. These models are built on a canonical suite of ontologies, which enables robust queries and keeps the models logically consistent. IREx also supports learning validation, where a learner who has completed a course module can search and find a suitable task to practice and demonstrate that their new knowledge can be used in the real world for its intended purpose. The IREx models are in the initial phase of a process to develop them as an IEEE standard. This initiative is currently an approved IEEE Study Group, after which follows a standards working group, then a balloting group, and, if all goes well, an IEEE standard.

  11. Forecasting the Occurrence of Severe Haze Events in Asia using Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Walton, A. L.

    2016-12-01

    Particulate pollution has become a serious environmental issue in many Asian countries in recent decades, threatening human health and frequently causing low-visibility or haze days that disrupt everything from work, outdoor, and school activities to air, road, and sea transportation. Ultimately preventing such severe haze requires many difficult tasks to be accomplished, dealing with trade and negotiation, emission control, energy consumption, transportation, and land and plantation management, among others, by all countries or parties involved. Before these difficult measures can take effect, however, it is more practical to reduce economic losses by developing the skill to predict the occurrence of such events with reasonable accuracy, so that effective mitigation or adaptation measures can be implemented ahead of time. The "traditional" numerical models, built on fluid dynamics and explicit or parameterized representations of physicochemical processes, can certainly be used for this task. However, the significant and sophisticated spatiotemporal variability associated with these events, the propagation of numerical or parameterization errors through model integration, and the computational demand all pose serious challenges to using these models for this interdisciplinary task. On the other hand, large quantities of meteorological, hydrological, atmospheric aerosol and composition, and surface visibility data from in-situ observation, reanalysis, or satellite retrieval have become available to the community. These data may still not be sufficient for evaluating and improving certain important aspects of the "traditional" models. Nevertheless, they can likely already support the development of alternative "task-oriented" and computationally efficient forecasting skills that use deep machine learning techniques to avoid dealing directly with the sophisticated interplay across multiple process layers. I will present a case study of applying machine learning techniques to predict the occurrence of severe haze events in Asia.

  12. Forecasting the Occurrence of Severe Haze Events in Asia using Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Wang, C.

    2017-12-01

    Particulate pollution has become a serious environmental issue in many Asian countries in recent decades, threatening human health and frequently causing low-visibility or haze days that disrupt everything from work, outdoor, and school activities to air, road, and sea transportation. Ultimately preventing such severe haze requires many difficult tasks to be accomplished, dealing with trade and negotiation, emission control, energy consumption, transportation, and land and plantation management, among others, by all countries or parties involved. Before these difficult measures can take effect, however, it is more practical to reduce economic losses by developing the skill to predict the occurrence of such events with reasonable accuracy, so that effective mitigation or adaptation measures can be implemented ahead of time. The "traditional" numerical models, built on fluid dynamics and explicit or parameterized representations of physicochemical processes, can certainly be used for this task. However, the significant and sophisticated spatiotemporal variability associated with these events, the propagation of numerical or parameterization errors through model integration, and the computational demand all pose serious challenges to using these models for this interdisciplinary task. On the other hand, large quantities of meteorological, hydrological, atmospheric aerosol and composition, and surface visibility data from in-situ observation, reanalysis, or satellite retrieval have become available to the community. These data may still not be sufficient for evaluating and improving certain important aspects of the "traditional" models. Nevertheless, they can likely already support the development of alternative "task-oriented" and computationally efficient forecasting skills that use deep machine learning techniques to avoid dealing directly with the sophisticated interplay across multiple process layers. I will present a case study of applying machine learning techniques to predict the occurrence of severe haze events in Asia.
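
    The "task-oriented" forecasting idea can be illustrated with a minimal statistical classifier. The sketch below trains a from-scratch logistic regression on a hypothetical two-feature data set (a stagnation index and a humidity anomaly); it stands in for the deep learning techniques the abstract describes, which would use far richer inputs:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression: P(severe haze | features)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical training set: [stagnation index, humidity anomaly], 1 = severe haze day
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
```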

  13. Recommendations from the Investigational New Drug/Investigational Device Exemption Task Force of the clInical and Translational Science Award Consortium: developing and implementing a sponsor-investigators training program.

    PubMed

    Holbein, M E Blair; Berglund, Jelena Petrovic; O'Reilly, Erin K; Hartman, Karen; Speicher, Lisa A; Adamo, Joan E; O'Riordan, Gerri; Brown, Jennifer Swanton; Schuff, Kathryn G

    2014-06-01

    The objective of this study was to provide recommendations for the provision of training for sponsor-investigators at Academic Health Centers. A subgroup of the Investigational New Drug/Investigational Device Exemption (IND/IDE) Task Force of the Clinical and Translational Science Award (CTSA) program Regulatory Knowledge Key Function Committee was assembled to specifically address how clinical investigators who hold an IND/IDE, and thus assume the role of sponsor-investigators, are adequately trained to meet the additional regulatory requirements of this role. The participants who developed the recommendations were representatives of institutions with IND/IDE support programs. Through an informal survey, the task force determined that a mix of models is used to provide support for IND/IDE holders within CTSA institutions. In addition, a CTSA consortium-wide resources survey was used. The participants worked from the models and survey results to develop consensus recommendations addressing institutional support, training content, and implementation. The CTSA IND/IDE Task Force recommendations are as follows: (1) Institutions should assess the scope of Food and Drug Administration-regulated research, perform a needs analysis, and provide resources to implement a suitable training program; (2) The model of training program should be tailored to each institution; (3) The training should specifically address the unique role of sponsor-investigators, and the effectiveness of training should be evaluated regularly by methods that fit the model adopted by the institution; and (4) Institutional leadership should mandate sponsor-investigator training and effectively communicate the necessity and availability of training.

  14. Modeling and dynamic simulation of astronaut's upper limb motions considering counter torques generated by the space suit.

    PubMed

    Li, Jingwen; Ye, Qing; Ding, Li; Liao, Qianfang

    2017-07-01

    Extravehicular activity (EVA) is an inevitable task for astronauts to maintain proper functioning of both the spacecraft and the space station. Both experimental research in microgravity simulators (e.g., a neutral buoyancy tank, zero-g aircraft, or a drop tower/tube) and mathematical modeling have been used to study EVA and to provide guidance for training on Earth and task design in space. Modeling has become more and more promising because of its efficiency. Task analysis shows that almost 90% of EVA activity is accomplished through upper limb motions; therefore, focusing on upper limb models of the body and space suit is valuable to this effort. In previous modeling studies, multi-rigid-body systems were developed to simplify the human musculoskeletal system, and the space suit was mostly considered a part of the astronaut's body. With the aim of improving the realism of these models, we developed an astronaut upper limb model, including a torque model and a muscle-force model, with the counter torques from the space suit treated as a boundary condition. Inverse kinematics and the Maggi-Kane method were applied to calculate the joint angles, joint torques, and muscle forces, given that the terminal trajectory of the upper limb motion was known. We also validated the muscle-force model using electromyogram (EMG) data collected in a validation experiment. Muscle force calculated from our model showed a trend similar to the EMG data, supporting the effectiveness and feasibility of the muscle-force model we established and partially validating the kinematics of the joint model.
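
    A planar two-link arm is the simplest instance of the inverse kinematics step described above. The sketch below gives the standard elbow-down analytic solution for shoulder and elbow angles from a hand position (illustrative only; the paper's model has more degrees of freedom and includes the suit counter torques):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Elbow-down analytic IK for a planar 2-link arm with the shoulder at
    the origin: returns (shoulder_angle, elbow_angle) in radians."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    c2 = max(-1.0, min(1.0, c2))                      # clamp for round-off
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

A forward-kinematics round trip (compute a hand position from known angles, then invert) is the usual sanity check for such a solver.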

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savic, Vesna; Hector, Louis G.; Ezzat, Hesham

    This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, technical tasks in the ICME project will be discussed. Specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques, forming, toolset assembly, design optimization, integration and technical cost modeling. The integrated approach is initially illustrated using a 980 grade transformation induced plasticity (TRIP) steel, subject to a two-step quenching and partitioning (Q&P) heat treatment, as an example.

  16. Heat storage capability of a rolling cylinder using Glauber's salt

    NASA Technical Reports Server (NTRS)

    Herrick, C. S.; Zarnoch, K. P.

    1980-01-01

    The rolling cylinder phase change heat storage concept was developed to the point where a prototype design was completed and a cost analysis prepared. A series of experimental and analytical tasks were defined to establish the thermal, mechanical, and materials behavior of rolling cylinder devices. These tasks include: analyses of internal and external heat transfer; performance and lifetime testing of the phase change materials; corrosion evaluation; development of a mathematical model; and design of a prototype and associated test equipment.
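
    The heat stored in such a device can be approximated from sensible and latent terms. The sketch below uses rough literature values for Glauber's salt (melting point about 32.4 °C, latent heat about 250 kJ/kg) and assumed specific heats; none of the parameter values come from the report:

```python
def stored_heat_kj(mass_kg, t_low, t_high,
                   t_melt=32.4, latent=250.0, cp_solid=1.9, cp_liquid=3.3):
    """Sensible + latent heat stored between t_low and t_high (deg C), assuming
    t_low is below the melting point.  Property values are rough literature
    figures for Glauber's salt, not numbers from the report."""
    # sensible heat of the solid up to the melting point
    q = cp_solid * max(0.0, min(t_melt, t_high) - t_low) * mass_kg
    if t_high > t_melt:
        # latent heat of fusion plus sensible heat of the liquid
        q += (latent + cp_liquid * (t_high - t_melt)) * mass_kg
    return q
```

The latent term dominates, which is the point of a phase-change store: a 10 kg charge cycled from 25 °C to 40 °C stores roughly twenty times what the sensible terms alone would give.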

  17. A Pilot Model for the NASA Simplified Aid for EVA Rescue (SAFER) (Single-Axis Pitch Task)

    NASA Astrophysics Data System (ADS)

    Handley, Patrick Mark

    This thesis defines, tests, and validates a descriptive pilot model for a single-axis pitch control task of the Simplified Aid for EVA Rescue (SAFER). SAFER is a small propulsive jetpack used by astronauts for self-rescue. Pilot model research supports development of improved self-rescue strategies and technologies through insights into pilot behavior. This thesis defines a multi-loop pilot model: the innermost loop controls the hand controller, the middle loop controls pitch rate, and the outer loop controls pitch angle. A human-in-the-loop simulation was conducted to gather data from a human pilot. Quantitative and qualitative metrics both indicate that the model is an acceptable fit to the human data: fuel consumption was nearly identical, and time to task completion matched very well. There is some evidence that the model responds to initial pitch rates faster than the human does, artificially decreasing the model's time to task completion. This pilot model is descriptive, not predictive, of the human pilot. The research also yields insights into pilot behavior: symmetry implies that the human responds to positive and negative initial conditions with the same strategy, and the human pilot appears indifferent to pitch angles within 0.5 deg, coasts at a constant pitch rate of 1.09 deg/s, and has a reaction delay of 0.1 s.
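
    The multi-loop structure and two of the measured pilot parameters (the 0.5 deg deadband and the 0.1 s reaction delay) can be caricatured in a toy simulation. The gains and jet authority below are invented for illustration; this is not the thesis's identified model:

```python
def simulate_pitch(theta0, rate0, dt=0.01, t_end=60.0, deadband=0.5,
                   k_theta=0.5, k_rate=2.0, accel=1.0, delay=0.1):
    """Toy cascaded pilot loops: the outer loop commands a pitch rate from the
    angle error (ignoring errors inside the 0.5 deg deadband), the inner loop
    drives a saturating pitch jet, and commands pass through a 0.1 s
    reaction-delay buffer.  Gains and jet authority are invented values."""
    theta, rate, t = theta0, rate0, 0.0
    pending = [0.0] * max(1, int(round(delay / dt)))  # reaction-delay buffer
    while t < t_end:
        rate_cmd = -k_theta * theta if abs(theta) > deadband else 0.0
        u = max(-1.0, min(1.0, k_rate * (rate_cmd - rate)))  # saturating jet
        pending.append(u)
        rate += accel * pending.pop(0) * dt
        theta += rate * dt
        t += dt
        if abs(theta) <= deadband and abs(rate) < 0.05:
            break  # settled inside the pilot's indifference band
    return theta, rate, t
```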

  18. Task Analysis Assessment on Intrastate Bus Traffic Controllers

    NASA Astrophysics Data System (ADS)

    Yen Bin, Teo; Azlis-Sani, Jalil; Nur Annuar Mohd Yunos, Muhammad; Ismail, S. M. Sabri S. M.; Tajedi, Noor Aqilah Ahmad

    2016-11-01

    Public transportation supports social mobility and caters to the daily needs of society, allowing passengers to travel from one place to another. This is true for a country like Malaysia, where international trade has been growing significantly over the past few decades. A task analysis assessment was conducted with consideration of a cognitive ergonomics view of problems related to human factors. Research on the task analysis of bus traffic controllers allowed a better understanding of the nature of the work and the overall monitoring activities of the bus services. This paper studies the task analysis assessment of intrastate bus traffic controllers; the objective of this study is to conduct a task analysis assessment of the bus traffic controllers. The task analysis assessment was developed via Hierarchical Task Analysis (HTA). There are a total of five subsidiary tasks at level one, of which only two could be broken down further at level two. Development of the HTA allowed a better understanding of the work and could further ease the evaluation of the tasks conducted by the bus traffic controllers. Thus, human error could be reduced for the safety of all passengers, increasing the overall efficiency of the system. It could also assist in improving the operation of the bus traffic controllers by modelling or synthesizing the existing tasks where necessary.
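
    An HTA like the one described can be represented as a nested structure and flattened into hierarchically numbered tasks. The sketch below uses a hypothetical two-level fragment, not the paper's actual task hierarchy:

```python
def flatten_hta(task, prefix=""):
    """Walk a nested HTA (dict: task name -> dict of subtasks) and emit
    hierarchically numbered lines (1., 1.1., 1.2., 2., ...)."""
    lines = []
    for i, (name, subs) in enumerate(task.items(), 1):
        num = f"{prefix}{i}"
        lines.append(f"{num}. {name}")
        lines.extend(flatten_hta(subs, prefix=num + "."))
    return lines

# Hypothetical fragment for illustration, not the paper's HTA
hta = {
    "Monitor bus operations": {
        "Track bus locations": {},
        "Log service disruptions": {},
    },
    "Communicate with drivers": {},
}
```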

  19. Current practices in pavement performance modeling project 08-03 (C07) : task 4 report final summary of findings.

    DOT National Transportation Integrated Search

    2010-02-26

    In anticipation of developing pavement performance models as part of a proposed pavement management : system, the Pennsylvania Department of Transportation (PennDOT) initiated a study in 2009 to investigate : performance modeling activities and condi...

  20. Optimization of Land Use Suitability for Agriculture Using Integrated Geospatial Model and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Mansor, S. B.; Pormanafi, S.; Mahmud, A. R. B.; Pirasteh, S.

    2012-08-01

    In this study, a geospatial model for land use allocation was developed from the view of simulating the biological autonomous adaptability to the environment and the infrastructural preference. The model was developed based on a multi-agent genetic algorithm and was customized to accommodate the constraints set for the study area, namely resource saving and environmental friendliness. The model was then applied to solve practical multi-objective spatial optimization problems of land use allocation in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability of the land. The second task was to determine the fitness function for the genetic algorithm. The third was to optimize the land use map using economic benefits. The results indicate that the proposed model has much better performance for solving complex multi-objective spatial optimization allocation problems and is a promising method for generating land use alternatives for further consideration in spatial decision-making.
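
    The second and third tasks, defining a fitness function and evolving allocations against it, can be sketched with a minimal genetic algorithm. The suitability scores, operators, and parameters below are illustrative, not the study's multi-agent formulation:

```python
import random

# Hypothetical economic suitability of land use j on parcel i
SUITABILITY = [
    [3, 1, 2], [1, 3, 1], [2, 2, 3], [3, 2, 1],
]

def fitness(chrom):
    """Total economic benefit of an allocation (one use code per parcel)."""
    return sum(SUITABILITY[i][u] for i, u in enumerate(chrom))

def evolve(pop_size=30, gens=60, pm=0.1, seed=1):
    """Elitist GA: truncation selection, one-point crossover, point mutation."""
    rng = random.Random(seed)
    n, k = len(SUITABILITY), len(SUITABILITY[0])
    pop = [[rng.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pm:             # point mutation
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```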

  1. Case Study: Organotypic human in vitro models of embryonic ...

    EPA Pesticide Factsheets

    Morphogenetic fusion of tissues is a common event in embryonic development, and disruption of fusion is associated with birth defects of the eye, heart, neural tube, phallus, palate, and other organ systems. Embryonic tissue fusion requires precise regulation of cell-cell and cell-matrix interactions that drive proliferation, differentiation, and morphogenesis. Low-dose chemical exposures can disrupt morphogenesis across space and time by interfering with key embryonic fusion events. The Morphogenetic Fusion Task uses computer and in vitro models to elucidate the consequences of developmental exposures. It integrates multiple approaches to model responses to chemicals that lead to birth defects, including integrative mining of ToxCast DB, ToxRefDB, and chemical structures; advanced agent-based computer models; and human cell-based cultures that model disruption of cellular and molecular behaviors, including mechanisms predicted from the integrative data mining and agent-based models. The purpose of the poster is to indicate progress on the CSS 17.02 Virtual Tissue Models Morphogenesis Task 1 products for the Board of Scientific Counselors meeting on Nov 16-17.

  2. Numerical simulations for active tectonic processes: increasing interoperability and performance

    NASA Technical Reports Server (NTRS)

    Donnellan, A.; Fox, G.; Rundle, J.; McLeod, D.; Tullis, T.; Grant, L.

    2002-01-01

    The objective of this project is to produce a system to fully model earthquake-related data. This task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling.

  3. TOD to TTP calibration

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.

    2011-05-01

    The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected by military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters, such as blur, sampling, and spatial and temporal noise, were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
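
    One reading of the empirical 6.3 calibration factor is as a conversion from a measured TOD threshold to a predicted identification range: the task is performed at the range where the target's angular size equals 6.3 times the TOD test pattern's threshold angular size. The sketch below applies that interpretation (an assumption on our part, with hypothetical numbers):

```python
def identification_range_km(target_size_m, tod_threshold_mrad, ratio=6.3):
    """Range at which the target's angular size (mrad = m / km) equals
    `ratio` times the measured TOD threshold pattern size."""
    return target_size_m / (ratio * tod_threshold_mrad)
```

For a hypothetical 3 m target and a 0.1 mrad TOD threshold this gives a range of roughly 4.8 km.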

  4. Development of a rotorcraft. Propulsion dynamics interface analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Hull, R.

    1982-01-01

    A study was conducted to establish a coupled rotor/propulsion analysis applicable to a wide range of rotorcraft systems. The effort included the following tasks: (1) development of a model structure suitable for simulating a wide range of rotorcraft configurations; (2) definition of a methodology for parameterizing the model structure to represent a particular rotorcraft; (3) construction of a nonlinear coupled rotor/propulsion model as a test case for analyzing coupled system dynamics; and (4) an attempt to develop a mostly linear coupled model derived from the complete nonlinear simulations. Documentation of the computer models developed is presented.

  5. Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs

    PubMed Central

    Bass, Ellen J.

    2011-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient-controlled analgesia pump in a two-phase process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation, and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930

  6. Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

    PubMed Central

    Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C.; Gewaltig, Marc-Oliver

    2017-01-01

    Combined efforts in the fields of neuroscience, computer science, and biology have allowed the design of biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are, at the current stage, too complex to meet real-time constraints, it is not possible to embed them into a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of programming skill required, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 “Neurorobotics” of the Human Brain Project (HBP).1 At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as neuroscientific experiments. PMID:28179882

  7. Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform.

    PubMed

    Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C; Gewaltig, Marc-Oliver

    2017-01-01

    Combined efforts in the fields of neuroscience, computer science, and biology have allowed the design of biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are, at the current stage, too complex to meet real-time constraints, it is not possible to embed them into a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of programming skill required, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as neuroscientific experiments.
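
    The first example experiment, a Braitenberg task, reduces to a sensor-to-motor wiring rule. The sketch below implements the classic "aggression" vehicle (type 2b), in which each wheel is driven by the opposite-side sensor so the robot steers toward a stimulus; the gains are arbitrary and this is not the platform's actual controller code:

```python
def braitenberg_step(left_sensor, right_sensor, gain=1.0, base=0.2):
    """Braitenberg vehicle 2b ('aggression'): each wheel speed is driven by
    the opposite-side light sensor, turning the robot toward the stimulus."""
    left_motor = base + gain * right_sensor
    right_motor = base + gain * left_sensor
    return left_motor, right_motor
```

With a stimulus on the left (high left-sensor reading), the right wheel spins faster and the robot turns left, toward the light.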

  8. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning: Summary report

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with emphasis on fuel savings, is studied. This summary report discusses the results of each of the four major tasks of the study. Task 1 compared airline flight plans based on operational forecasts to plans based on the verifying analyses and found that average fuel savings of 1.2 to 2.5 percent are possible with improved forecasts. Task 2 consisted of similar comparisons but used a model developed for the FAA by SRI International that simulated the impact of ATC diversions on the flight plans. While parts of Task 2 confirm the Task 1 findings, inconsistency with other data and the known impact of ATC suggest that other Task 2 findings are the result of errors in the model. Task 3 compares segment weather data from operational flight plans with the weather actually observed by the aircraft and finds that the average error could result in fuel burn penalties (or savings) of up to 3.6 percent for the average B747 flight. In Task 4, an in-depth analysis of the weather forecasts for the 33 days included in the study finds that significant errors exist on 15 days, with wind speeds in the area of maximum winds underestimated by 20 to 50 kts., a finding confirmed in the other three tasks.

  9. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Anderson, M. R.; Schmidt, D. K.

    1986-01-01

    In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops; therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been in obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R.E. Smith are also presented.

  10. Psychophysical Models for Signal Detection with Time Varying Uncertainty. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gai, E.

    1975-01-01

    Psychophysical models for the behavior of the human operator in detection tasks that include changes in detectability, correlation between observations, and deferred decisions are developed. Classical Signal Detection Theory (SDT) is discussed, and its emphasis on the sensory processes is contrasted with decision strategies. The analysis of decision strategies utilizes detection tasks with time-varying signal strength. The classical theory is modified to include such tasks, and several optimal decision strategies are explored. Two methods of classifying strategies are suggested: the first is similar to the analysis of ROC curves, while the second is based on the relation between the criterion level (CL) and the detectability. Experiments to verify the analysis of tasks with changes in signal strength are designed. The results show that subjects are aware of changes in detectability and tend to use strategies that involve changes in the CLs.
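
    The equal-variance Gaussian SDT quantities underlying such an analysis, sensitivity (d') and the criterion (c), can be computed directly from hit and false-alarm rates. This is the standard textbook formulation, not the thesis's specific time-varying model:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: sensitivity d' = z(H) - z(FA) and
    criterion c = -(z(H) + z(FA)) / 2, with z the inverse normal CDF."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c
```

A symmetric observer (e.g., 84% hits, 16% false alarms) has d' near 2 and an unbiased criterion near 0; a time-varying criterion shows up as c shifting while d' tracks detectability.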

  11. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, it often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
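
    The task ratio itself is a simple quotient; a crude way to see why owner interference matters is to scale a task's demand by the CPU share the owner processes leave free. The second function is an illustration only, not the paper's analytical queueing model:

```python
def task_ratio(parallel_task_demand, mean_local_demand):
    """The paper's 'task ratio': per-task parallel service demand relative to
    the mean demand of nonparallel (owner) workstation processes."""
    return parallel_task_demand / mean_local_demand

def estimated_task_time(demand, owner_utilization):
    """Crude illustration (not the paper's model): with preemptive-priority
    owner processes, a parallel task effectively runs on the leftover
    CPU share (1 - owner_utilization)."""
    if not 0.0 <= owner_utilization < 1.0:
        raise ValueError("owner utilization must be in [0, 1)")
    return demand / (1.0 - owner_utilization)
```

Under this caricature, a task with 4 units of demand on a half-busy workstation takes 8 units of wall time; larger task ratios amortize that interference over more useful work.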

  12. Strategy selection as rational metareasoning.

    PubMed

    Lieder, Falk; Griffiths, Thomas L

    2017-11-01

    Many contemporary accounts of human reasoning assume that the mind is equipped with multiple heuristics that could be deployed to perform a given task. This raises the question of how the mind determines when to use which heuristic. To answer this question, we developed a rational model of strategy selection based on the theory of rational metareasoning developed in the artificial intelligence literature. According to our model, people learn to efficiently choose the strategy with the best cost-benefit tradeoff by learning a predictive model of each strategy's performance. We found that our model can provide a unifying explanation for classic findings from domains ranging from decision-making to arithmetic by capturing the variability of people's strategy choices, their dependence on task and context, and their development over time. Systematic model comparisons supported our theory, and 4 new experiments confirmed its distinctive predictions. Our findings suggest that people gradually learn to make increasingly rational use of fallible heuristics. This perspective reconciles the 2 poles of the debate about human rationality by integrating heuristics and biases with learning and rationality. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
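A minimal sketch of this kind of cost-benefit strategy selection follows. It is illustrative only; the strategy names, the time-cost weight, and the epsilon-greedy exploration are assumptions, not the authors' model.

```python
import random

class StrategySelector:
    """Pick the strategy with the best estimated net value (expected
    reward minus time cost), updating the estimates from experience."""

    def __init__(self, strategies, time_cost=1.0):
        self.time_cost = time_cost  # subjective value of a unit of time
        self.stats = {s: {"n": 0, "reward": 0.0, "time": 0.0}
                      for s in strategies}

    def _net_value(self, s):
        st = self.stats[s]
        if st["n"] == 0:
            return float("inf")  # try untested strategies first
        return st["reward"] - self.time_cost * st["time"]

    def choose(self, explore=0.1):
        if random.random() < explore:  # occasional exploration
            return random.choice(list(self.stats))
        return max(self.stats, key=self._net_value)

    def update(self, strategy, reward, elapsed):
        st = self.stats[strategy]
        st["n"] += 1
        st["reward"] += (reward - st["reward"]) / st["n"]  # running means
        st["time"] += (elapsed - st["time"]) / st["n"]

sel = StrategySelector(["heuristic", "exact"], time_cost=0.5)
sel.update("heuristic", reward=1.0, elapsed=0.2)  # fast, nearly as good
sel.update("exact", reward=1.2, elapsed=2.0)      # slow, slightly better
print(sel.choose(explore=0.0))  # → heuristic (net 0.9 vs 0.2)
```

The running-mean update is the simplest stand-in for "learning a predictive model of each strategy's performance"; the paper's model conditions those predictions on task features as well.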

  13. Design Specifications for the Advanced Instructional Design Advisor (AIDA). Volume 1

    DTIC Science & Technology

    1992-01-01

    research; (3) Describe the knowledge base sufficient to support the varieties of knowledge to be represented in the AIDA model; (4) Document the...feasibility of continuing the development of the AIDA model. 2.3 Background: In Phase I of the AIDA project (Task 0006), (1) the AIDA concept was defined...the AIDA Model. A paper-based demonstration of the AIDA instructional design model was performed by using the model to develop a minimal application

  14. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Cook, T. S.; Kim, K. S.

    1986-01-01

    This is the second annual report of the first 3-year phase of a 2-phase, 5-year program. The objectives of the first phase are to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consists of an air plasma sprayed ZrO2-Y2O3 top coat, a low pressure plasma sprayed NiCrAlY bond coat, and a Rene' 80 substrate. Task I was to evaluate TBC failure mechanisms. Both bond coat oxidation and bond coat creep have been identified as contributors to TBC failure. Key property determinations have also been made for the bond coat and the top coat, including tensile strength, Poisson's ratio, dynamic modulus, and coefficient of thermal expansion. Task II is to develop TBC life prediction models for the predominant failure modes. These models will be developed based on the results of thermomechanical experiments and finite element analysis. The thermomechanical experiments have been defined and testing initiated. Finite element models have also been developed to handle TBCs and are being utilized to evaluate different TBC failure regimes.

  15. A Correlational Study of Seven Projective Spatial Structures with Regard to the Phases of the Moon

    NASA Astrophysics Data System (ADS)

    Wellner, Karen Linette

    1995-01-01

    This study investigated the relationship between projective spatial structures and the ability to construct a scientific model. In addition, gender-related performance and the influence of prior astronomy experience on task success were evaluated. Sixty-one college science undergraduates were individually administered Piagetian tasks to assess projective spatial structures and the ability to set up a phases-of-the-moon model. The spatial tasks included: (a) Mountains task (coordination of perspectives); (b) Railroad task (size and intervals of objects with increasing distance); (c) Telephone Poles task (masking and ordering objects); and (d) Shadows task (spatial relationships between an object and its shadow, dependent upon the object's orientation). Cramer coefficient analyses indicated that significant relationships existed between Moon task and spatial task success. In particular, the Shadows task, requiring subjects to draw shadows of objects in different orientations, proved most difficult and was most strongly associated with a subject's understanding of lunar phases. Chi-square tests for two independent samples were used to analyze gender performance differences on each of the five tasks. Males performed significantly better at the .05 significance level on the Shadows task and the Moon task. Chi-square tests for two independent samples showed no significant difference in Moon task performance between subjects with astronomy or Earth science coursework and those without such science classroom experience. Overall, only six subjects passed all seven projective spatial structure tasks. Piaget (1967) contends that concrete-operational spatial structures must be established before an individual is able to develop formal-operational patterns of thinking. The results of this study indicate that 90% of the interviewed science majors were still operating at the concrete-operational level. 
Several educational implications were drawn from this study: (1) teaching spatially dependent content to students without the prerequisite spatial structures results in little understanding beyond what can be memorized; (2) assessment for projective spatial structures should precede science lessons dealing with time-space relationships; and (3) a student's level of spatial ability may directly affect the interpretation of three-dimensional models.
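The chi-square and Cramér coefficient analyses mentioned above can be sketched directly for a contingency table. The pass/fail counts below are hypothetical, not the study's data.

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table
    given as a list of rows of observed counts."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total
            chi2 += (obs - exp) ** 2 / exp
    return chi2

def cramers_v(table):
    """Cramér's V, an association measure in the same family as the
    Cramér coefficient used in the study."""
    total = sum(sum(r) for r in table)
    k = min(len(table), len(table[0])) - 1
    return (chi_square(table) / (total * k)) ** 0.5

# Hypothetical 2x2 pass/fail counts for two tasks (not the study's data)
table = [[20, 5],
         [8, 17]]
print(round(cramers_v(table), 3))  # → 0.483
```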

  16. National facilities study. Volume 5: Space research and development facilities task group

    NASA Technical Reports Server (NTRS)

    1994-01-01

    With the beginnings of the U.S. space program, there was a pressing need to develop facilities that could support the technology research and development, testing, and operations of evolving space systems. Redundancy in facilities that was once an advantage in providing flexibility and schedule accommodation is instead fast becoming a burden on scarce resources. As a result, there is a clear perception in many sectors that the U.S. has many space R&D facilities that are under-utilized and which are no longer cost-effective to maintain. At the same time, it is clear that the U.S. continues to possess many space R&D facilities which are the best -- or among the best -- in the world. In order to remain world class in key areas, careful assessment of current capabilities and planning for new facilities is needed. The National Facility Study (NFS) was initiated in 1992 to develop a comprehensive and integrated long-term plan for future aerospace facilities that meets current and projected government and commercial needs. In order to assess the nation's capability to support space research and development (R&D), a Space R&D Task Group was formed. The Task Group was co-chaired by NASA and DOD. The Task Group formed four major, technologically- and functionally-oriented working groups: Human and Machine Operations; Information and Communications; Propulsion and Power; and Materials, Structures, and Flight Dynamics. In addition to these groups, three supporting working groups were formed: Systems Engineering and Requirements; Strategy and Policy; and Costing Analysis. The Space R&D Task Group examined several hundred facilities against the template of a baseline mission and requirements model (developed in common with the Space Operations Task Group) and a set of excursions from the baseline. The model and excursions are described in Volume 3 of the NFS final report. 
In addition, as a part of the effort, the group examined key strategic issues associated with space R&D facilities planning for the U.S., and these are discussed in Section 4 of this volume.

  17. Combat Service Support Model Development: BRASS - TRANSLOG - Army 21

    DTIC Science & Technology

    1984-07-01

    Transitional problems may address specific hardware and related software, such as the Standard Army Ammunition System (SAAS)...Combat Service Support Model Development, BRASS -- TRANSLOG -- ARMY 21, Contract Number DAAK11-84-D-0004, Task Order #1, Draft Report, July 1984...Armament Systems, Inc., 211 West Bel Air Avenue, P.O. Box 158, Aberdeen, MD 21001

  18. Development of a structural optimization capability for the aeroelastic tailoring of composite rotor blades with straight and swept tips

    NASA Technical Reports Server (NTRS)

    Friedmann, P. P.; Venkatesan, C.; Yuan, K.

    1992-01-01

    This paper describes the development of a new structural optimization capability aimed at the aeroelastic tailoring of composite rotor blades with straight and swept tips. The primary objective is to reduce vibration levels in forward flight without diminishing the aeroelastic stability margins of the blade. In the course of this research activity a number of complicated tasks have been addressed: (1) development of a new aeroelastic stability and response analysis; (2) formulation of a new comprehensive sensitivity analysis, which facilitates the generation of the appropriate approximations for the objective and the constraints; (3) physical understanding of the new model and, in particular, determination of its potential for aeroelastic tailoring; and (4) combination of the newly developed analysis capability, the sensitivity derivatives, and the optimizer into a comprehensive optimization capability. The first three tasks have been completed and the fourth task is in progress.

  19. Study of Turbofan Engines Designed for Low Energy Consumption

    NASA Technical Reports Server (NTRS)

    Neitzel, R. E.; Hirschkron, R.; Johnston, R. P.

    1976-01-01

    Subsonic transport turbofan engine design and technology features which have promise of improving aircraft energy consumption are described. Task I addressed the selection and evaluation of features for the CF6 family of engines in current aircraft, and growth models of these aircraft. Task II involved cycle studies and the evaluation of technology features for advanced technology turbofans, consistent with initial service in 1985. Task III pursued the refined analysis of a specific design of an advanced technology turbofan engine selected as the result of Task II studies. In all of the above, the impact upon aircraft economics, as well as energy consumption, was evaluated. Task IV summarized recommendations for technology developments which would be necessary to achieve the improvements in energy consumption identified.

  20. Methodology For Evaluation Of Technology Impacts In Space Electric Power Systems

    NASA Technical Reports Server (NTRS)

    Holda, Julie

    2004-01-01

    The Analysis and Management branch of the Power and Propulsion Office at NASA Glenn Research Center is responsible for performing complex analyses of the space power and In-Space propulsion products developed by GRC. This work quantifies the benefits of the advanced technologies to support on-going advocacy efforts. The Power and Propulsion Office is committed to understanding how the advancement in space technologies could benefit future NASA missions. They support many diverse projects and missions throughout NASA as well as industry and academia. The area of work that we are concentrating on is space technology investment strategies. Our goal is to develop a Monte Carlo-based tool to investigate technology impacts in space electric power systems. The framework is being developed at this stage; it will be used to set up a computer simulation of a space electric power system (EPS). The outcome is expected to be a probabilistic assessment of critical technologies and potential development issues. We are developing methods for integrating existing spreadsheet-based tools into the simulation tool, and work is being done on defining interface protocols to enable rapid integration of future tools. The tool will use Monte Carlo-based simulation programs for statistical modeling of the EPS model. I decided to learn and evaluate Palisade's @Risk and Risk Optimizer software and utilize its capabilities for the Electric Power System (EPS) model. I also looked at similar software packages (JMP, SPSS, Crystal Ball, VenSim, Analytica) available from other suppliers and evaluated them. The second task was to develop the framework for the tool, in which we had to define technology characteristics using weighting factors and probability distributions. We also had to define the simulation space and add hard and soft constraints to the model. The third task is to incorporate preliminary cost factors into the model. A final task is developing a cross-platform solution of this framework.
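A minimal sketch of this kind of Monte Carlo framework follows. All distributions, parameter ranges, and the toy sizing model are assumptions for illustration; they are not GRC's tool or data.

```python
import random
import statistics

def eps_mass(specific_power_w_per_kg, efficiency, load_kw=10.0):
    """Toy EPS sizing model (illustrative only): the array must generate
    load/efficiency kilowatts; its mass follows from the specific power."""
    raw_power_w = load_kw * 1000.0 / efficiency
    return raw_power_w / specific_power_w_per_kg  # kg

def monte_carlo(n=10_000, seed=42):
    """Sample uncertain technology characteristics from assumed
    distributions and propagate them through the sizing model."""
    rng = random.Random(seed)
    masses = []
    for _ in range(n):
        sp = rng.triangular(80.0, 200.0, 120.0)            # W/kg (assumed)
        eff = min(max(rng.gauss(0.90, 0.03), 0.50), 0.99)  # (assumed)
        masses.append(eps_mass(sp, eff))
    return statistics.mean(masses), statistics.quantiles(masses, n=20)

mean_kg, q = monte_carlo()
print(f"mean mass {mean_kg:.0f} kg, 95th percentile {q[-1]:.0f} kg")
```

The output distribution, rather than a single point estimate, is what supports a probabilistic assessment of which technology characteristics drive system mass.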

  1. Supervisory manipulation based on the concepts of absolute vs relative and fixed vs moving tasks

    NASA Technical Reports Server (NTRS)

    Brooks, T. L.

    1980-01-01

    If a machine is to perform a given subtask autonomously, it will require an internal model which, combined with operator and environmental inputs, can be used to generate the manipulator functions necessary to complete the task. This paper will advance a technique based on linear transformations by which short, supervised periods of manipulation can be accomplished. To achieve this end, a distinction will be made between tasks which can be completely defined during the training period and tasks which can be only partially defined prior to the moment of execution. A further distinction will be made between tasks which have a fixed relationship to the manipulator base throughout the execution period and tasks which have a continuously changing task/base relationship during execution. Finally, through a rudimentary analysis of the methods developed in this paper, some of the practical aspects of implementing a supervisory system will be illustrated.

  2. EVALUATION TECHNIQUES AND TOOL DEVELOPMENT FOR FY 08 CMAQ RELEASE

    EPA Science Inventory

    In this task, research efforts are outlined that relate to the AMD Model Evaluation Program element and support CMAQ releases within the FY05-FY08 time period. Model evaluation serves dual purposes; evaluation is necessary to characterize the accuracy of model predictions, and e...

  3. An open-source Java-based Toolbox for environmental model evaluation: The MOUSE Software Application

    USDA-ARS?s Scientific Manuscript database

    A consequence of environmental model complexity is that the task of understanding how environmental models work and identifying their sensitivities/uncertainties, etc. becomes progressively more difficult. Comprehensive numerical and visual evaluation tools have been developed such as the Monte Carl...

  4. Sequence-sensitive exemplar and decision-bound accounts of speeded-classification performance in a modified Garner-tasks paradigm.

    PubMed

    Little, Daniel R; Wang, Tony; Nosofsky, Robert M

    2016-09-01

    Among the most fundamental results in the area of perceptual classification are the "correlated facilitation" and "filtering interference" effects observed in Garner's (1974) speeded categorization tasks: In the case of integral-dimension stimuli, relative to a control task, single-dimension classification is faster when there is correlated variation along a second dimension, but slower when there is orthogonal variation that cannot be filtered out (e.g., by attention). These fundamental effects may result from participants' use of a trial-by-trial bypass strategy in the control and correlated tasks: The observer changes the previous category response whenever the stimulus changes, and maintains responses if the stimulus repeats. Here we conduct modified versions of the Garner tasks that eliminate the availability of a pure bypass strategy. The fundamental facilitation and interference effects remain, but are still largely explainable in terms of pronounced sequential effects in all tasks. We develop sequence-sensitive versions of exemplar-retrieval and decision-bound models aimed at capturing the detailed, trial-by-trial response-time distribution data. The models combine assumptions involving: (i) strengthened perceptual/memory representations of stimuli that repeat across consecutive trials, and (ii) a bias to change category responses on trials in which the stimulus changes. These models can predict our observed effects and provide a more complete account of the underlying bases of performance in our modified Garner tasks. Copyright © 2016 Elsevier Inc. All rights reserved.
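The trial-by-trial bypass strategy described above can be sketched directly. The stimulus labels and category codes below are assumed for illustration; with two stimuli mapped to two categories, as in the control and correlated Garner tasks, the strategy is always correct.

```python
def bypass_responses(stimuli, category_of):
    """Trial-by-trial bypass strategy: keep the previous category
    response when the stimulus repeats, flip it when the stimulus
    changes. Only the first trial requires actual classification."""
    responses = []
    prev_stim = prev_resp = None
    for stim in stimuli:
        if prev_stim is None:
            resp = category_of(stim)   # first trial: classify normally
        elif stim == prev_stim:
            resp = prev_resp           # repeat -> keep response
        else:
            resp = 1 - prev_resp       # change -> flip response
        responses.append(resp)
        prev_stim, prev_resp = stim, resp
    return responses

# Control task with stimuli A and B mapped to categories 0 and 1
cat = {"A": 0, "B": 1}.get
seq = ["A", "A", "B", "A", "B", "B"]
print(bypass_responses(seq, cat))  # → [0, 0, 1, 0, 1, 1]
```

In the filtering task the orthogonal dimension breaks the one-to-one stimulus-to-category mapping, so a pure change/repeat rule like this no longer suffices; the modified tasks in the paper remove its availability in the other conditions too.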

  5. Reading Guided by Automated Graphical Representations: How Model-Based Text Visualizations Facilitate Learning in Reading Comprehension Tasks

    ERIC Educational Resources Information Center

    Pirnay-Dummer, Pablo; Ifenthaler, Dirk

    2011-01-01

    Our study integrates automated natural language-oriented assessment and analysis methodologies into feasible reading comprehension tasks. With the newly developed T-MITOCAR toolset, prose text can be automatically converted into an association net which has similarities to a concept map. The "text to graph" feature of the software is based on…

  6. A Theoretical Model for Designing an In-House Community College Department Chair Professional Development Program

    ERIC Educational Resources Information Center

    Sirkis, Jocelyn Eager

    2013-01-01

    Academic department chairs serve as front-line managers and leaders who perform a wide variety of tasks. These tasks may include mundane chores, such as ordering office supplies, or important ones, such as changing the department culture to one that embraces assessment. Too often, however, individuals take on the chair position with little to no…

  7. Pennsylvania Blue Shield's Job Linked Skills Program. A Basic Skills Education Program. Final Performance Report.

    ERIC Educational Resources Information Center

    Pennsylvania Blue Shield, Camp Hill.

    A project developed a model curriculum to be delivered by computer-based instruction to teach the required literacy skills for entry workers in the health insurance industry. Literacy task analyses were performed for the targeted jobs and then validated with focus groups. The job tasks and related basic skills were divided into modules. The job…

  8. Decentration Revisited: A Two-Factor Model for Role-Taking Development in Young Children.

    ERIC Educational Resources Information Center

    O'Connor, Margaret

    This study investigates spatial and conceptual role-taking at the preschool level to determine the components of and relationship between these two forms of role-taking. A total of 80 children between 3 and 5 years of age were tested individually on four spatial tasks and five conceptual tasks and rated on the levels of egocentrism employed.…

  9. Action video game play facilitates the development of better perceptual templates.

    PubMed

    Bejjanki, Vikranth R; Zhang, Ruyuan; Li, Renjie; Pouget, Alexandre; Green, C Shawn; Lu, Zhong-Lin; Bavelier, Daphne

    2014-11-25

    The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with nonvideo game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play.
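One commonly published form of the PTM can be sketched as follows to trace out a threshold-vs.-contrast curve. This is a simplified rendering with assumed parameter values, not the paper's fitted model.

```python
def ptm_dprime(c, n_ext, beta=1.5, gamma=2.0, n_mul=0.2, n_add=0.1):
    """Simplified perceptual template model: signal gain (beta*c)^gamma
    over the root sum of external noise, multiplicative internal noise,
    and additive internal noise (all parameter values assumed)."""
    signal = (beta * c) ** gamma
    noise = (n_ext ** (2 * gamma)
             + n_mul ** 2 * (signal ** 2 + n_ext ** (2 * gamma))
             + n_add ** 2) ** 0.5
    return signal / noise

def threshold_contrast(n_ext, target_dprime=1.5, **params):
    """Contrast needed to reach a target d' at one external-noise level,
    found by bisection (d' is monotone increasing in contrast)."""
    lo, hi = 1e-6, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if ptm_dprime(mid, n_ext, **params) < target_dprime:
            lo = mid
        else:
            hi = mid
    return hi

# Sweeping external noise traces out a threshold-vs.-contrast curve:
# thresholds stay flat until external noise dominates internal noise.
for n in (0.0, 0.1, 0.3):
    print(f"N_ext={n:.1f}  threshold={threshold_contrast(n):.3f}")
```

Fitting such curves at two criterion levels is what lets the PTM attribute performance changes to template retuning (effectively a larger beta) versus internal-noise reduction.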

  10. Action video game play facilitates the development of better perceptual templates

    PubMed Central

    Bejjanki, Vikranth R.; Zhang, Ruyuan; Li, Renjie; Pouget, Alexandre; Green, C. Shawn; Lu, Zhong-Lin; Bavelier, Daphne

    2014-01-01

    The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with nonvideo game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play. PMID:25385590

  11. In-Depth Analysis of the JACK Model.

    DOT National Transportation Integrated Search

    2009-04-30

    Recently, as part of a comprehensive analysis of budget and funding options, a TxDOT special task force has examined the agency's current financial forecasting methods and has developed a model designed to estimate future State Highway Fund rev...

  12. Toward an embedded training tool for Deep Space Network operations

    NASA Technical Reports Server (NTRS)

    Hill, Randall W., Jr.; Sturdevant, Kathryn F.; Johnson, W. L.

    1993-01-01

    There are three issues to consider when building an embedded training system for a task domain involving the operation of complex equipment: (1) how skill is acquired in the task domain; (2) how the training system should be designed to assist in the acquisition of the skill, and more specifically, how an intelligent tutor could aid in learning; and (3) whether it is feasible to incorporate the resulting training system into the operational environment. This paper describes how these issues have been addressed in a prototype training system that was developed for operations in NASA's Deep Space Network (DSN). The first two issues were addressed by building an executable cognitive model of problem solving and skill acquisition of the task domain and then using the model to design an intelligent tutor. The cognitive model was developed in Soar for the DSN's Link Monitor and Control (LMC) system; it led to several insights about learning in the task domain that were used to design an intelligent tutor called REACT that implements a method called 'impasse-driven tutoring'. REACT is one component of the LMC training system, which also includes a communications link simulator and a graphical user interface. A pilot study of the LMC training system indicates that REACT shows promise as an effective way for helping operators to quickly acquire expert skills.

  13. Electro-Optic Identification Research Program

    DTIC Science & Technology

    2002-04-01

    Electro-optic identification (EOID) sensors provide photographic-quality images that can be used to identify mine-like contacts provided by long...tasks such as validating existing electro-optic models, development of performance metrics, and development of computer-aided identification and

  14. Cognitive Task Analysis of En Route Air Traffic Control: Model Extension and Validation.

    ERIC Educational Resources Information Center

    Redding, Richard E.; And Others

    Phase II of a project extended data collection and analytic procedures to develop a model of expertise and skill development for en route air traffic control (ATC). New data were collected by recording the Dynamic Simulator (DYSIM) performance of five experts with a work overload problem. Expert controllers were interviewed in depth for mental…

  15. Designing Tasks for Math Modeling in College Algebra: A Critical Review

    ERIC Educational Resources Information Center

    Staats, Susan; Robertson, Douglas

    2014-01-01

    Over the last decade, the pedagogical approach known as mathematical modeling has received increased interest in college algebra classes in the United States. Math modeling assignments ask students to develop their own problem-solving tools to address non-routine, realistic scenarios. The open-ended quality of modeling activities creates dilemmas…

  16. Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model

    NASA Technical Reports Server (NTRS)

    Lusardi, Jeff A.; Blanken, Chris L.; Tischler, Mark B.

    2002-01-01

    A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement providing validation of the more complex blade element method of simulating turbulence.

  17. Ares I-X Flight Data Evaluation: Executive Overview

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Waits, David A.; Lewis, Donny L.; Richards, James S.; Coates, R. H., Jr.; Cruit, Wendy D.; Bolte, Elizabeth J.; Bangham, Michal E.; Askins, Bruce R.; Trausch, Ann N.

    2011-01-01

    NASA's Constellation Program (CxP) successfully launched the Ares I-X flight test vehicle on October 28, 2009. The Ares I-X flight was a developmental flight test to demonstrate that this very large, long, and slender vehicle could be controlled successfully. The flight offered a unique opportunity for early engineering data to influence the design and development of the Ares I crew launch vehicle. As the primary customer for flight data from the Ares I-X mission, the Ares Projects Office (APO) established a set of 33 flight evaluation tasks to correlate flight results with prospective design assumptions and models. The flight evaluation tasks used Ares I-X data to partially validate tools and methodologies in technical disciplines that will ultimately influence the design and development of Ares I and future launch vehicles. Included within these tasks were direct comparisons of flight data with preflight predictions and post-flight assessments utilizing models and processes being applied to design and develop Ares I. The benefits of early development flight testing were made evident by results from these flight evaluation tasks. This overview provides summary information from assessment of the Ares I-X flight test data and represents a small subset of the detailed technical results. The Ares Projects Office published a 1,600-plus-page detailed technical report that documents the full set of results. This detailed report is subject to the International Traffic in Arms Regulations (ITAR) and is available in the Ares Projects Office archives files.

  18. SABRINA - an interactive geometry modeler for MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, J.T.; Murphy, J.

    One of the most difficult tasks when analyzing a complex three-dimensional system with Monte Carlo is geometry model development. SABRINA attempts to make the modeling process more user-friendly and less of an obstacle. It accepts both combinatorial solid bodies and MCNP surfaces and produces MCNP cells. The model development process in SABRINA is highly interactive and gives the user immediate feedback on errors. Users can view their geometry from arbitrary perspectives while the model is under development and interactively find and correct modeling errors. An example of a SABRINA display is shown. It represents a complex three-dimensional shape.

  19. A Longitudinal Study of Lexical Development in Children Learning Vietnamese and English

    PubMed Central

    Pham, Giang; Kohnert, Kathryn

    2013-01-01

    This longitudinal study modeled lexical development among children who spoke Vietnamese as a first language (L1) and English as a second language (L2). Participants (n=33, initial mean age of 7.3 years) completed a total of eight tasks (four in each language) that measured vocabulary knowledge and lexical processing at four yearly time points. Multivariate hierarchical linear modeling was used to calculate L1 and L2 trajectories within the same model for each task. Main findings included (a) positive growth in each language, (b) greater gains in English resulting in shifts toward L2 dominance, and (c) different patterns for receptive and expressive domains. Timing of shifts to L2 dominance underscored L1 skills that are resilient and vulnerable to increases in L2 proficiency. PMID:23869741
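The per-language growth trajectories can be illustrated with a simple least-squares fit. The yearly scores below are hypothetical, and the study itself used multivariate hierarchical linear modeling that pools children and languages in one model rather than independent fits like this.

```python
def ols_slope_intercept(times, scores):
    """Ordinary least-squares fit of score ~ time for one child in one
    language; a full analysis would pool these in a hierarchical model."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(scores) / n
    sxx = sum((t - mt) ** 2 for t in times)
    sxy = sum((t - mt) * (s - ms) for t, s in zip(times, scores))
    slope = sxy / sxx
    return slope, ms - slope * mt

# Hypothetical vocabulary scores at four yearly time points
l1 = ols_slope_intercept([0, 1, 2, 3], [40, 44, 47, 49])   # Vietnamese (L1)
l2 = ols_slope_intercept([0, 1, 2, 3], [30, 42, 55, 66])   # English (L2)
print(f"L1 gain/yr {l1[0]:.1f}, L2 gain/yr {l2[0]:.1f}")  # → 3.0 vs 12.1
```

A steeper L2 slope crossing the L1 trajectory is the pattern behind the reported shift toward L2 dominance.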

  20. Wind Sensing, Analysis, and Modeling

    NASA Technical Reports Server (NTRS)

    Corvin, Michael A.

    1995-01-01

    The purpose of this task was to begin development of a unified approach to the sensing, analysis, and modeling of the wind environments in which launch systems operate. The initial activity was to examine the current usage and requirements for wind modeling for the Titan 4 vehicle. This was to be followed by joint technical efforts with NASA Langley Research Center to develop applicable analysis methods. This work was to be performed in, and demonstrate the use of, prototype tools implementing an environment in which to realize a unified system. At the convenience of the customer, due to resource limitations, the task was descoped. The survey of Titan 4 processes was accomplished and is reported in this document. A summary of general requirements is provided. Current versions of prototype Process Management Environment tools are being provided to the customer.

  2. Regression models for predicting peak and continuous three-dimensional spinal loads during symmetric and asymmetric lifting tasks.

    PubMed

    Fathallah, F A; Marras, W S; Parnianpour, M

    1999-09-01

    Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moments variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moments variables (combined models). The results showed that peak 3D spinal loads during symmetric and asymmetric lifting were predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.
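
    The abstract's "kinematic model" idea — an ordinary-least-squares regression from trunk-motion and load predictors to spinal load — can be sketched on synthetic data. The predictors, coefficients, and noise level below are illustrative stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictors: trunk sagittal velocity (deg/s),
# external load moment (Nm), asymmetry angle (deg)
X = np.column_stack([
    rng.uniform(10, 60, n),
    rng.uniform(20, 120, n),
    rng.uniform(0, 45, n),
])
# Synthetic "peak compression" outcome (N); coefficients are illustrative only
y = 800 + 6.0 * X[:, 0] + 18.0 * X[:, 1] + 4.0 * X[:, 2] + rng.normal(0, 150, n)

A = np.column_stack([np.ones(n), X])           # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

    The same machinery extends to continuous loading by fitting one regression per time sample or per load component, which is essentially what the three model classes in the study do with different predictor sets.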

  3. Vocational Training and Placement of the Severely Handicapped: Research and Development Recommendations.

    ERIC Educational Resources Information Center

    Cook, Paul F.; Dahl, Peter R.

    The monograph synthesizes findings of the Vocational Training and Placement of the Severely Handicapped Project (VOTAP) and offers recommendations for further research and development. Chapter 1 presents the theoretical model that describes the dimensions upon which the task of developing research and development recommendations could be…

  4. A contrast-sensitive channelized-Hotelling observer to predict human performance in a detection task using lumpy backgrounds and Gaussian signals

    NASA Astrophysics Data System (ADS)

    Park, Subok; Badano, Aldo; Gallas, Brandon D.; Myers, Kyle J.

    2007-03-01

    Previously, a non-prewhitening matched filter (NPWMF) incorporating a model for the contrast sensitivity of the human visual system was introduced by Badano et al. for modeling human performance in detection tasks with different viewing angles and white-noise backgrounds. But NPWMF observers do not perform well in detection tasks involving complex backgrounds since they do not account for random backgrounds. A channelized-Hotelling observer (CHO) using difference-of-Gaussians (DOG) channels has been shown to track human performance well in detection tasks using lumpy backgrounds. In this work, a CHO with DOG channels, incorporating the model of human contrast sensitivity, was developed similarly. We call this new observer a contrast-sensitive CHO (CS-CHO). The Barten model was the basis of our human contrast sensitivity model. The Barten model was multiplied by a scalar, which was varied to control the thresholding effect of the contrast sensitivity on luminance-valued images and hence the performance-prediction ability of the CS-CHO. The performance of the CS-CHO was compared to the average human performance from the psychophysical study by Park et al., where the task was to detect a known Gaussian signal in non-Gaussian distributed lumpy backgrounds. Six different signal-intensity values were used in this study. We chose the free parameter of our model to match the mean human performance in the detection experiment at the strongest signal intensity. Then we compared the model to the human observers at the five remaining signal-intensity values in order to see if the performance of the CS-CHO matched human performance. Our results indicate that the CS-CHO with the chosen scalar for the contrast sensitivity closely predicts human performance as a function of signal intensity.
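
    The base observer the CS-CHO builds on — a channelized-Hotelling observer with DOG channels — can be sketched in a few lines. For simplicity this sketch uses white-noise backgrounds and illustrative channel widths rather than the lumpy backgrounds and Barten front end of the study.

```python
import numpy as np

def dog_channel(size, sigma, ratio=1.66):
    """Spatial difference-of-Gaussians channel template (illustrative widths)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    g_wide = np.exp(-r2 / (2 * (ratio * sigma) ** 2)) / (2 * np.pi * (ratio * sigma) ** 2)
    g_narrow = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return (g_wide - g_narrow).ravel()

size = 32
T = np.stack([dog_channel(size, s) for s in (1.5, 3.0, 6.0)])  # channels x pixels

# Known Gaussian signal (amplitude and width are illustrative)
ax = np.arange(size) - size // 2
xx, yy = np.meshgrid(ax, ax)
signal = (3.0 * np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))).ravel()

rng = np.random.default_rng(0)
n = 400
g_absent = rng.normal(size=(n, size * size))            # white-noise stand-in
g_present = rng.normal(size=(n, size * size)) + signal

v_a = g_absent @ T.T                # channel outputs, signal absent
v_p = g_present @ T.T               # channel outputs, signal present
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))   # pooled channel covariance
dv = v_p.mean(axis=0) - v_a.mean(axis=0)
w = np.linalg.solve(S, dv)          # Hotelling template in channel space
snr = np.sqrt(dv @ w)               # channelized-Hotelling detectability
print(f"CHO detectability: {snr:.2f}")
```

    The CS-CHO of the abstract would insert a contrast-sensitivity filtering step on the images before channelization; everything downstream is the same Hotelling computation in the low-dimensional channel space.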

  5. Exploring the Processes of Generating LOD (0-2) Citygml Models in Greater Municipality of Istanbul

    NASA Astrophysics Data System (ADS)

    Buyuksalih, I.; Isikdag, U.; Zlatanova, S.

    2013-08-01

    3D models of cities, visualised and explored in 3D virtual environments, have been available for several years. Currently a large number of impressive, realistic 3D models are regularly presented at scientific, professional and commercial events. One of the most promising developments is the OGC standard CityGML. CityGML is an object-oriented model that supports 3D geometry and thematic semantics, attributes and relationships, and offers advanced options for realistic visualization. One of the very attractive characteristics of the model is its support of 5 levels of detail (LOD), starting from a less accurate 2.5D model (LOD0) and ending with a very detailed indoor model (LOD4). Different local government offices and municipalities have different needs when utilizing CityGML models, and the process of model generation depends on local and domain-specific needs. Although the processes (i.e. the tasks and activities) for generating the models differ depending on the utilization purpose, there are also some common tasks (i.e. common denominator processes) in the generation of CityGML models. This paper focuses on defining the common tasks in the generation of LOD (0-2) CityGML models and representing them in a formal way with process modeling diagrams.

  6. The development of a virtual reality training curriculum for colonoscopy.

    PubMed

    Sugden, Colin; Aggarwal, Rajesh; Banerjee, Amrita; Haycock, Adam; Thomas-Gibson, Siwan; Williams, Christopher B; Darzi, Ara

    2012-07-01

    The development of a structured virtual reality (VR) training curriculum for colonoscopy using high-fidelity simulation. Colonoscopy requires detailed knowledge and technical skill. Changes to working practices in recent times have reduced the availability of traditional training opportunities. Much might, therefore, be achieved by applying novel technologies such as VR simulation to colonoscopy. Scientifically developed device-specific curricula aim to maximize the yield of laboratory-based training by focusing on validated modules and linking progression to the attainment of benchmarked proficiency criteria. Fifty participants, comprising 30 novices (<10 colonoscopies), 10 intermediates (100 to 500 colonoscopies), and 10 experienced (>500 colonoscopies) colonoscopists, were recruited to participate. Surrogates of proficiency, such as number of procedures undertaken, determined prospective allocation to 1 of 3 groups (novice, intermediate, and experienced). Construct validity and learning value (comparison between groups and within groups respectively) for each task and metric on the chosen simulator model determined suitability for inclusion in the curriculum. Eight tasks in possession of construct validity and significant learning curves were included in the curriculum: 3 abstract tasks, 4 part-procedural tasks, and 1 procedural task. The whole-procedure task was valid for 11 metrics including the following: "time taken to complete the task" (1238, 343, and 293 s; P < 0.001) and "insertion length with embedded tip" (23.8, 3.6, and 4.9 cm; P = 0.005). Learning curves consistently plateaued at or beyond the ninth attempt. Valid metrics were used to define benchmarks, derived from the performance of the experienced cohort, for each included task. A comprehensive, stratified, benchmarked, whole-procedure curriculum has been developed for a modern high-fidelity VR colonoscopy simulator.

  7. The effects of interaction with the device described by procedural text on recall, true/false, and task performance.

    PubMed

    Diehl, V A; Mills, C B

    1995-11-01

    In two experiments, subjects interacted to different extents with relevant devices while reading two complex multistep procedural texts and were then tested with task performance time, true/false, and recall measures. While reading, subjects performed the task (read and do), saw the experimenter perform the task (read and see experimenter do), imagined doing the task (read and imagine), looked at the device while reading (read and see), or only read (read only). Van Dijk and Kintsch's (1983) text representation theory led to the prediction that exposure to the task device (in the read-and-do, read-and-see, and read-and-see-experimenter-do conditions) would lead to the development of a stronger situation model and therefore faster task performance, whereas the read-only and read-and-see conditions would lead to a better textbase, and therefore better performance on the true/false and recall tasks. Paivio's (1991) dual coding theory led to the opposite prediction for recall. The results supported the text representation theory with task performance and recall. The read-and-see condition produced consistently good performance on the true/false measure. Amount of text study time contributed to recall performance. These findings support the notion that information available while reading leads to differential development of representations in memory, which, in turn, causes differences in performance on various measures.

  8. Running Memory for Clinical Handoffs: A Look at Active and Passive Processing.

    PubMed

    Anderson-Montoya, Brittany L; Scerbo, Mark W; Ramirez, Dana E; Hubbard, Thomas W

    2017-05-01

    The goal of the present study was to examine the effects of domain-relevant expertise on running memory and the ability to process handoffs of information. In addition, the role of active or passive processing was examined. Currently, there is little research that addresses how individuals with different levels of expertise process information in running memory when the information is needed to perform a real-world task. Three groups of participants differing in their level of clinical expertise (novice, intermediate, and expert) performed an abstract running memory span task and two tasks resembling real-world activities, a clinical handoff task and an air traffic control (ATC) handoff task. For all tasks, list length and the amount of information to be recalled were manipulated. Regarding processing strategy, all participants used passive processing for the running memory span and ATC tasks. The novices also used passive processing for the clinical task. The experts, however, appeared to use more active processing, and the intermediates fell in between. Overall, the results indicated that individuals with clinical expertise and a developed mental model rely more on active processing of incoming information for the clinical task while individuals with little or no knowledge rely on passive processing. The results have implications for how training should be developed to help less experienced personnel identify what information should be included in a handoff and what should not.

  9. Plan recognition and generalization in command languages with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Yared, Wael I.; Sheridan, Thomas B.

    1991-01-01

    A method for pragmatic inference as a necessary accompaniment to command languages is proposed. The approach taken focuses on the modeling and recognition of the human operator's intent, which relates sequences of domain actions ('plans') to changes in some model of the task environment. The salient feature of this module is that it captures some of the physical and linguistic contextual aspects of an instruction. This provides a basis for generalization and reinterpretation of the instruction in different task environments. The theoretical development is founded on previous work in computational linguistics and some recent models in the theory of action and intention. To illustrate these ideas, an experimental command language to a telerobot is implemented. The program consists of three different components: a robot graphic simulation, the command language itself, and the domain-independent pragmatic inference module. Examples of task instruction processes are provided to demonstrate the benefits of this approach.

  10. Effective Team Support: From Task and Cognitive Modeling to Software Agents for Time-Critical Complex Work Environments

    NASA Technical Reports Server (NTRS)

    Remington, Roger W. (Technical Monitor); John, Bonnie E.; Sycara, Katia

    2005-01-01

    The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in completing a system for empirical data collection, cognitive modeling, and the building of software agents to support a team's tasks, and in running experiments for the collection of baseline data.

  11. Development of 3D electromagnetic modeling tools for airborne vehicles

    NASA Technical Reports Server (NTRS)

    Volakis, John L.

    1992-01-01

    The main goal of this report is to advance the development of methodologies for scattering by airborne composite vehicles. Although the primary focus continues to be the development of a general purpose computer code for analyzing the entire structure as a single unit, a number of other tasks are also being pursued in parallel with this effort. One of these tasks, discussed within, concerns new finite element formulations and mesh termination schemes. The goal here is to decrease computation time while retaining accuracy and geometric adaptability. The second task focuses on the application of wavelets to electromagnetics. Wavelet transformations are shown to be able to reduce a full matrix to a band matrix, thereby reducing the solution's memory requirements. Included within this document are two separate papers on finite element formulations and wavelets.

  12. A constructivist connectionist model of transitions on false-belief tasks.

    PubMed

    Berthiaume, Vincent G; Shultz, Thomas R; Onishi, Kristine H

    2013-03-01

    How do children come to understand that others have mental representations, e.g., of an object's location? Preschoolers go through two transitions on verbal false-belief tasks, in which they have to predict where an agent will search for an object that was moved in her absence. First, while three-and-a-half-year-olds usually fail at approach tasks, in which the agent wants to find the object, children just under four succeed. Second, only after four do children succeed at tasks in which the agent wants to avoid the object. We present a constructivist connectionist model that autonomously reproduces the two transitions and suggests that the transitions are due to increases in general processing abilities enabling children to (1) overcome a default true-belief attribution by distinguishing false- from true-belief situations, and to (2) predict search in avoidance situations, where there is often more than one correct, empty search location. Constructivist connectionist models are rigorous, flexible and powerful tools that can be analyzed before and after transitions to uncover novel and emergent mechanisms of cognitive development. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Modeling human decision making behavior in supervisory control

    NASA Technical Reports Server (NTRS)

    Tulga, M. K.; Sheridan, T. B.

    1977-01-01

    An optimal decision control model was developed, based primarily on a dynamic programming algorithm that considers all available task possibilities, charts an optimal trajectory, commits to the first step (i.e., follows the optimal trajectory during the next time period), and then iterates the calculation. A Bayesian estimator was included which estimates the tasks that might occur in the immediate future and provides this information to the dynamic programming routine. Preliminary trials comparing the human subject's performance to that of the optimal model show great similarity, but indicate that the human skips certain movements which require a quick change in strategy.
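
    The core of such a model — dynamic programming over candidate tasks under time pressure — can be illustrated with a simplified deadline-scheduling DP. The task list and rewards below are hypothetical; the abstract's model additionally re-plans after each committed step and feeds in Bayesian task forecasts.

```python
def best_schedule(tasks, horizon):
    """Maximum total reward from tasks given as (duration, deadline, reward),
    executed one at a time, each finishing by its deadline.
    Classic DP: process tasks in deadline order; dp[t] is the best reward
    achievable with exactly t time units of scheduled work."""
    dp = [0.0] * (horizon + 1)
    for dur, deadline, reward in sorted(tasks, key=lambda t: t[1]):
        # scan completion times backwards so each task is used at most once
        for t in range(min(deadline, horizon), dur - 1, -1):
            dp[t] = max(dp[t], dp[t - dur] + reward)
    return max(dp)

# Three queued tasks: (duration, deadline, reward)
tasks = [(2, 3, 5.0), (2, 3, 4.0), (1, 4, 3.0)]
print(best_schedule(tasks, horizon=4))  # → 8.0: the 5-point task, then the short one
```

    In the supervisory-control setting, the model would commit only to the first task of the optimal schedule, let time advance, and rerun the DP as new tasks arrive.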

  14. Model of depositing layer on cylindrical surface produced by induction-assisted laser cladding process

    NASA Astrophysics Data System (ADS)

    Kotlan, Václav; Hamar, Roman; Pánek, David; Doležel, Ivo

    2017-12-01

    A model of hybrid cladding on a cylindrical surface is built and numerically solved. Heating of both substrate and the powder material to be deposited on its surface is realized by laser beam and preheating inductor. The task represents a hard-coupled electromagnetic-thermal problem with time-varying geometry. Two specific algorithms are developed to incorporate this effect into the model, driven by local distribution of temperature and its gradients. The algorithms are implemented into the COMSOL Multiphysics 5.2 code that is used for numerical computations of the task. The methodology is illustrated with a typical example whose results are discussed.

  15. TREAT Modeling and Simulation Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark David

    2015-09-01

    This report summarizes a four-phase process that describes the strategy for developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation, and (4) quality assurance. The document describes the four phases, the relationships between these research phases, and anticipated needs within each phase.

  16. The Development and Testing of a Model for a Nationally Based Vehicle Dedicated to the Continuing Professional Growth of School Administrators. Final Report.

    ERIC Educational Resources Information Center

    Knezevich, Stephen J.

    The primary objectives of the study were to develop a model for a National Academy for School Executives (NASE), to determine the receptivity of school administrators to such a program, and to determine the feasibility of implementing the model within the near future. Four academic task forces studied the structural elements, fiscal requirements,…

  17. Thermosolutal convection and macrosegregation in dendritic alloys

    NASA Technical Reports Server (NTRS)

    Poirier, David R.; Heinrich, J. C.

    1993-01-01

    A mathematical model of solidification that simulates the formation of channel segregates or freckles is presented. The model simulates the entire solidification process, from the initial melt to the solidified cast, and the resulting segregation is predicted. Emphasis is given to the initial transient, when the dendritic zone begins to develop and the conditions for the possible nucleation of channels are established. The mechanisms that lead to the creation and eventual growth or termination of channels are explained in detail and illustrated by several numerical examples. A finite element model is used for the simulations. It uses a single system of equations to deal with the all-liquid region, the dendritic region, and the all-solid region. The dendritic region is treated as an anisotropic porous medium. The algorithm uses the bilinear isoparametric element, with a penalty function approximation and a Petrov-Galerkin formulation. The major task was to develop the solidification model. In addition, other tasks that were performed in conjunction with the modeling of dendritic solidification are briefly described.

  18. Multiloop Manual Control of Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Mcnally, B. D.

    1984-01-01

    Human interaction with a simple, multiloop dynamic system, in which the human's activity was systematically varied by changing the levels of automation, was studied. The control loop structure resulting from the task definition parallels that of any multiloop manual control system and is considered a stereotype. Simple models of the human in the task were developed, and a technique for describing the manner in which the human subjectively quantifies his opinion of task difficulty was extended. A man-in-the-loop simulation which provides data to support and direct the analytical effort is presented.

  19. Modeling Wettability Alteration using Chemical EOR Processes in Naturally Fractured Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2007-09-30

    The objective of our research is to develop a mechanistic simulation tool by adapting UTCHEM to model wettability alteration in both conventional and naturally fractured reservoirs. This will be a unique simulator that can model surfactant floods in naturally fractured reservoirs with coupling of wettability effects on relative permeabilities, capillary pressure, and capillary desaturation curves. The capability of wettability alteration will help us and others to better understand and predict the oil recovery mechanisms as a function of wettability in naturally fractured reservoirs. The lack of a reliable simulator for wettability alteration means that either the concept that has already been proven to be effective at the laboratory scale may never be applied commercially to increase oil production, or the process must be tested in the field by trial and error and at large expense in time and money. The objective of Task 1 is to perform a literature survey to compile published data on relative permeability, capillary pressure, dispersion, interfacial tension, and capillary desaturation curves as a function of wettability to aid in the development of petrophysical property models as a function of wettability. The new models and correlations will be tested against published data. The models will then be implemented in the compositional chemical flooding reservoir simulator, UTCHEM. The objective of Task 2 is to understand the mechanisms and develop a correlation for the degree of wettability alteration based on published data. The objective of Task 3 is to validate the models and implementation against published data and to perform 3-D field-scale simulations to evaluate the impact of uncertainties in the fracture and matrix properties on surfactant alkaline and hot water floods.

  20. Human-centric predictive model of task difficulty for human-in-the-loop control tasks

    PubMed Central

    Majewicz Fey, Ann

    2018-01-01

    Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is ill-defined. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, with a haptic device interface, using human-centric measurement data in terms of cognition, physical effort, and motion kinematics. Noninvasive sensors were used to record the multimodal response of the human user for 14 subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compare four possible models for predicting task difficulty to evaluate the roles of the various types of metrics, including: (I) a movement time model, (II) a fusion model using both physiological and kinematic metrics, (III) a model with only kinematic metrics, and (IV) a model with only physiological metrics. The results show significant correlation between task difficulty and the user's sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R2 = 0.927), followed by the model using only kinematic metrics (R2 = 0.921). Both models were better predictors of task difficulty than the movement time model (R2 = 0.847), derived from Fitts' law, a well-studied difficulty model for human psychomotor control. PMID:29621301
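
    The baseline movement-time model in the abstract comes from Fitts' law, MT = a + b * log2(2D/W), fit by least squares. A minimal sketch on hypothetical pointing data (the distances, widths, and times below are invented for illustration):

```python
import numpy as np

# Hypothetical pointing data: target distance D, target width W (same units),
# and observed movement time (s)
D = np.array([64, 128, 256, 256, 512, 512.0])
W = np.array([32, 32, 32, 16, 16, 8.0])
mt = np.array([0.52, 0.68, 0.85, 1.01, 1.18, 1.39])

ID = np.log2(2 * D / W)                     # Fitts index of difficulty (bits)
A = np.column_stack([np.ones_like(ID), ID]) # design matrix with intercept
(a, b), *_ = np.linalg.lstsq(A, mt, rcond=None)

pred = a + b * ID
r2 = 1 - np.sum((mt - pred) ** 2) / np.sum((mt - mt.mean()) ** 2)
print(f"MT ≈ {a:.3f} + {b:.3f} * ID,  R^2 = {r2:.3f}")
```

    The study's fusion model replaces the single predictor ID with physiological and kinematic metrics, but the fitting step is the same regression machinery.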

  1. Analyzing ROC curves using the effective set-size model

    NASA Astrophysics Data System (ADS)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical imaging tasks.
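
    The model's max-of-M* rating rule is easy to simulate: each image contributes the maximum of M* independent location responses, with one location shifted by d' on signal images. The sketch below estimates the resulting AUC by Monte Carlo (parameters illustrative; the paper fits M* and d' by maximum-likelihood ordinal regression rather than simulation).

```python
import numpy as np

def ess_auc(d_prime, m_star, n=2000, seed=0):
    """Monte Carlo AUC under the effective set-size model: an image behaves
    like m_star independent locations, the rating is the max location
    response, and signal images shift one location by d_prime."""
    rng = np.random.default_rng(seed)
    absent = rng.normal(size=(n, m_star)).max(axis=1)
    present = rng.normal(size=(n, m_star))
    present[:, 0] += d_prime                 # the one signal-bearing location
    present = present.max(axis=1)
    # AUC as the Mann-Whitney probability P(present rating > absent rating)
    return (present[:, None] > absent[None, :]).mean()

# More search locations (larger M*) lower performance at fixed d'
print(round(ess_auc(2.0, 1), 3), round(ess_auc(2.0, 8), 3))
```

    This monotone dependence on M* is what lets the fitted parameters separate location uncertainty (M*) from per-location diagnostic uncertainty (d').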

  2. Autonomous visual exploration creates developmental change in familiarity and novelty seeking behaviors

    PubMed Central

    Perone, Sammy; Spencer, John P.

    2013-01-01

    What motivates children to radically transform themselves during early development? We addressed this question in the domain of infant visual exploration. Over the first year, infants' exploration shifts from familiarity to novelty seeking. This shift is delayed in preterm relative to term infants and is stable within individuals over the course of the first year. Laboratory tasks have shed light on the nature of this familiarity-to-novelty shift, but it is not clear what motivates the infant to change her exploratory style. We probed this by letting a Dynamic Neural Field (DNF) model of visual exploration develop itself via accumulating experience in a virtual world. We then situated it in a canonical laboratory task. Much like infants, the model exhibited a familiarity-to-novelty shift. When we manipulated the initial conditions of the model, the model's performance was developmentally delayed much like preterm infants. This delay was overcome by enhancing the model's experience during development. We also found that the model's performance was stable at the level of the individual. Our simulations indicate that novelty seeking emerges with no explicit motivational source via the accumulation of visual experience within a complex, dynamical exploratory system. PMID:24065948

  3. International Reference Ionosphere (IRI): Task Force Activity 2000

    NASA Technical Reports Server (NTRS)

    Bilitza, D.

    2000-01-01

    The annual IRI Task Force Activity was held at the Abdus Salam International Center for Theoretical Physics in Trieste, Italy from July 10 to July 14. The participants included J. Adeniyi (University of Ilorin, Nigeria), D. Bilitza (NSSDC/RITSS, USA), D. Buresova (Institute of Atmospheric Physics, Czech Republic), B. Forte (ICTP, Italy), R. Leitinger (University of Graz, Austria), B. Nava (ICTP, Italy), M. Mosert (University National Tucuman, Argentina), S. Pulinets (IZMIRAN, Russia), S. Radicella (ICTP, Italy), and B. Reinisch (University of Mass. Lowell, USA). The main topic of this Task Force Activity was the modeling of the topside ionosphere and the development of strategies for modeling of ionospheric variability. Each day during the workshop week the team debated a specific modeling problem in the morning during informal presentations and round table discussions of all participants. Ways of resolving the specific modeling problem were devised and tested in the afternoon in front of the computers of the ICTP Aeronomy and Radiopropagation Laboratory using ICTP's computer networks and internet access.

  4. SCIENCE VERSION OF PM CHEMISTRY MODEL

    EPA Science Inventory

    PM chemistry models containing detailed treatments of key chemical processes controlling ambient concentrations of inorganic and organic compounds in PM2.5 are needed to develop strategies for reducing PM2.5 concentrations. This task, that builds on previous research conducted i...

  5. Detailed weather and terrain analysis for aircraft noise modeling

    DOT National Transportation Integrated Search

    2013-04-30

    A study has been conducted supporting refinement and development of FAA's airport environmental analysis tools. Tasks conducted in this study are: (1) updated analysis of the 1997 KDEN noise model validation study with newer versions of INM and rel...

  6. Educational Television: Brazil.

    ERIC Educational Resources Information Center

    Bretz, R.; Shinar, D.

    Based on evaluation of nine Brazilian educational television centers, an Instructional Television Training Model (ITV) was developed to aid in determining and designing training requirements for instructional television systems. Analysis based on this model would include these tasks: (1) determine instructional purpose of the television…

  7. Administrator Training and Development: Conceptual Model.

    ERIC Educational Resources Information Center

    Boardman, Gerald R.

    A conceptual model for an individualized training program for school administrators integrates processes, characteristics, and tasks through theory training and application. Based on an application of contingency theory, it provides a system matching up administrative candidates' needs in three areas (administrative process, administrative…

  8. An automatic experimental apparatus to study arm reaching in New World monkeys.

    PubMed

    Yin, Allen; An, Jehi; Lehew, Gary; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2016-05-01

    Several species of the New World monkeys have been used as experimental models in biomedical and neurophysiological research. However, a method for controlled arm reaching tasks has not been developed for these species. We have developed a fully automated, pneumatically driven, portable, and reconfigurable experimental apparatus for arm-reaching tasks suitable for these small primates. We have utilized the apparatus to train two owl monkeys in a visually-cued arm-reaching task. Analysis of neural recordings demonstrates directional tuning of the M1 neurons. Our apparatus allows automated control, freeing the experimenter from manual experiments. The presented apparatus provides a valuable tool for conducting neurophysiological research on New World monkeys. Copyright © 2016. Published by Elsevier B.V.

  9. The effects of different tasks on the comprehension and production of idioms in children.

    PubMed

    Levorato, M C; Cacciari, C

    1995-10-01

    The present study investigated the developmental processes that lead from a literal interpretation of idiomatic expressions to the ability to comprehend and produce them figuratively. A Model of the Development of Figurative Competence was presented, according to which the acquisition of idioms occurs as part of the general process of language and world-knowledge development. Three experiments were carried out with second- and fourth-grade children, employing three comprehension tasks (Recall, Multiple Choice, Paraphrase) and one production task (Completion). The results showed that younger children are more literally oriented than older children, who in turn are more idiomatically oriented, and that children of both age groups found it more difficult to produce idiomatic expressions than to comprehend them.

  10. Cognitive/Information Processing Psychology and Instruction: Reviewing Recent Theory and Practice.

    ERIC Educational Resources Information Center

    Gallagher, John P.

    1979-01-01

    Discusses recent developments in instructional psychology relative to cognitive task analysis, individual difference variables, and cognitive models of interactive instructional decision making, which use constructs developed within the field of cognitive/information processing psychology. (Author/WBC)

  11. Industrial Automation Mechanic Model Curriculum Project. Final Report.

    ERIC Educational Resources Information Center

    Toledo Public Schools, OH.

    This document describes a demonstration program that developed secondary level competency-based instructional materials for industrial automation mechanics. Program activities included task list compilation, instructional materials research, learning activity packet (LAP) development, construction of lab elements, system implementation,…

  12. A survey of Applied Psychological Services' models of the human operator

    NASA Technical Reports Server (NTRS)

    Siegel, A. I.; Wolf, J. J.

    1979-01-01

    A historical perspective is presented in terms of the major features and status of two families of computer simulation models in which the human operator plays the primary role. Both task-oriented and message-oriented models are included. Two other recent efforts, which deal with visual information processing, are also summarized; they involve not the development of whole models but families of subroutines customized to add human aspects to existing models. A global diagram of the generalized model development/validation process is presented and related to 15 criteria for model evaluation.

  13. A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making.

    PubMed

    Prezenski, Sabine; Brechmann, André; Wolff, Susann; Russwinkel, Nele

    2017-01-01

    Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling to a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach, which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance in the modeling data as compared to the human data, which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from behavioral, psychobiological and neurophysiological data may help to optimize future applications of this model so that it can be transferred to other domains of comparable dynamic decision tasks.
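    The strategy-switching behavior described in this record (try one-feature rules first, escalate to a conjunction rule after repeated negative feedback) can be sketched as a toy learner. This is an illustrative assumption-laden sketch, not the published ACT-R model: the stimulus cycle, the error threshold, and the rule set are all invented here for demonstration.

```python
import itertools

def run_learner(trials=40, error_limit=2):
    """Toy strategy-switching learner, loosely inspired by the ACT-R
    account summarized above (thresholds and representations here are
    illustrative assumptions, not taken from the published model).

    The target category is the conjunction f0 AND f1 of two binary
    features. The learner starts with one-feature rules and, after
    accumulating error_limit errors on a strategy, escalates to the
    next one, ending with the two-feature (conjunction) rule.
    """
    strategies = [
        lambda s: s[0],           # one-feature rule: attend to f0 only
        lambda s: s[1],           # one-feature rule: attend to f1 only
        lambda s: s[0] and s[1],  # two-feature (conjunction) rule
    ]
    # Deterministically cycle through all four stimulus combinations.
    stimuli = itertools.cycle(
        [(False, False), (False, True), (True, False), (True, True)])
    current, errors, history = 0, 0, []
    for _ in range(trials):
        stim = next(stimuli)
        target = stim[0] and stim[1]          # feedback-defining rule
        correct = strategies[current](stim) == target
        history.append(correct)
        if not correct:
            errors += 1
            if errors >= error_limit and current < len(strategies) - 1:
                current, errors = current + 1, 0   # abandon the strategy
    return current, history

final_strategy, history = run_learner()
# final_strategy == 2: the learner ends on the conjunction rule,
# and later trials are error-free once it gets there
```

Mirroring the abstract, early trials contain errors under the one-feature strategies, and performance becomes perfect only after the switch to the conjunction rule.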

  14. A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making

    PubMed Central

    Prezenski, Sabine; Brechmann, André; Wolff, Susann; Russwinkel, Nele

    2017-01-01

    Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling to a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach, which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance in the modeling data as compared to the human data, which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from behavioral, psychobiological and neurophysiological data may help to optimize future applications of this model so that it can be transferred to other domains of comparable dynamic decision tasks. PMID:28824512

  15. Temporal neural networks and transient analysis of complex engineering systems

    NASA Astrophysics Data System (ADS)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the representation of different time scales through the incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived; it is applicable to training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
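    The gamma memory at the heart of this record's neuron model is a cascade of identical leaky integrators (the de Vries-Principe gamma filter). A minimal sketch of that memory structure alone, with an arbitrary stage count and leak parameter (the specific architecture and parameters of the dissertation's LOGF network are not reproduced here):

```python
import numpy as np

def gamma_memory(u, K=3, mu=0.5):
    """Run a K-stage gamma memory (de Vries-Principe) over a 1-D signal.

    Each stage is a leaky integrator:
        x_k[n] = (1 - mu) * x_k[n-1] + mu * x_{k-1}[n-1],
    with x_0[n] = u[n]. For mu = 1 the memory degenerates to an
    ordinary tapped delay line; for 0 < mu < 1 each tap holds a
    progressively longer, smoother trace of the input's past.
    Returns an array of shape (len(u), K) with the K tap outputs.
    """
    x = np.zeros(K + 1)            # x[0] is the input tap
    taps = np.zeros((len(u), K))
    for n, sample in enumerate(u):
        x_prev = x.copy()          # values from time n-1
        x[0] = sample
        for k in range(1, K + 1):
            x[k] = (1 - mu) * x_prev[k] + mu * x_prev[k - 1]
        taps[n] = x[1:]
    return taps

# Sanity check: with mu = 1 an impulse marches through the taps,
# i.e. tap k at time n equals u[n - k], exactly a delay line.
signal = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
delayed = gamma_memory(signal, K=3, mu=1.0)
```

A LOGF-style neuron would feed such tap outputs through trainable spatial weights; here only the memory structure itself is shown.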

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swift, Ralph

    Idaho's Model Watershed Project was established as part of the Northwest Power Planning Council's plan for salmon recovery in the Columbia River Basin. The Council's charge was simply stated and came without strings. The tasks were to identify actions within the watershed that are planned or needed for salmon habitat, and establish a procedure for implementing habitat-improvement measures. The Council gave the responsibility of developing this project to the Idaho Soil Conservation Commission. This Model Watershed Plan is intended to be a dynamic plan that helps address these two tasks. It is not intended to be the final say on either. It is also not meant to establish laws, policies, or regulations for the agencies, groups, or individuals who participated in the plan development.

  17. A multi-country perspective on nurses' tasks below their skill level: reports from domestically trained nurses and foreign trained nurses from developing countries.

    PubMed

    Bruyneel, Luk; Li, Baoyue; Aiken, Linda; Lesaffre, Emmanuel; Van den Heede, Koen; Sermeus, Walter

    2013-02-01

    Several studies have concluded that the use of nurses' time and energy is often not optimized. Given widespread migration of nurses from developing to developed countries, it is important for human resource planning to know whether nursing education in developing countries is associated with more exaggerated patterns of inefficiency. The aims were, first, to describe nurses' reports on tasks below their skill level and, second, to examine the association between nurses' migratory status (domestically trained nurse or foreign trained nurse from a developing country) and reports on these tasks. The Registered Nurse Forecasting Study used a cross-sectional quantitative research design to gather data from 33,731 nurses (62% response rate) in 486 hospitals in Belgium, England, Finland, Germany, Greece, Ireland, the Netherlands, Norway, Poland, Spain, Sweden and Switzerland. For this analysis, nurse-reported information on migratory status and on tasks below their skill level performed during their last shift was used. Random effects models estimated the effect of nurses' migratory status on reports of these tasks. In total, 832 nurses were trained in a developing country (2.5% of the total sample). Across countries, a high proportion of both domestically trained and foreign trained nurses from developing countries reported having performed tasks below their skill level during their last shift. After adjusting for nurses' type of last shift worked, years of experience, and level of education, being a foreign trained nurse from a developing country remained associated with a pronounced increase in reports of tasks below skill level performed during the last shift. The findings suggest that there remains much room for improvement to optimize the use of nurses' time and energy. Special attention should be given to raising the professional level of practice of foreign trained nurses from developing countries. Further research is needed to understand the influence of professional practice standards, the skill levels of foreign trained nurses from developing countries, and the values attached to these tasks resulting from previous work experiences in their home countries. This will allow us to better understand the conditions under which foreign trained nurses from developing countries can optimally contribute to professional nursing practice in developed country contexts. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Automatic classification of animal vocalizations

    NASA Astrophysics Data System (ADS)

    Clemins, Patrick J.

    2005-11-01

    Bioacoustics, the study of animal vocalizations, has begun to use increasingly sophisticated analysis techniques in recent years. Some common tasks in bioacoustics are repertoire determination, call detection, individual identification, stress detection, and behavior correlation. Each research study, however, uses a wide variety of different measured variables, called features, and classification systems to accomplish these tasks. The well-established field of human speech processing has developed a number of different techniques to perform many of the aforementioned bioacoustics tasks. Mel-frequency cepstral coefficients (MFCCs) and perceptual linear prediction (PLP) coefficients are two popular feature sets. The hidden Markov model (HMM), a statistical model similar to a finite automaton, is the most commonly used supervised classification model and is capable of modeling both temporal and spectral variations. This research designs a framework that applies models from human speech processing to bioacoustic analysis tasks. The development of the generalized perceptual linear prediction (gPLP) feature extraction model is one of the more important novel contributions of the framework. Perceptual information from the species under study can be incorporated into the gPLP feature extraction model to represent the vocalizations as the animals might perceive them. By including this perceptual information and modifying parameters of the HMM classification system, this framework can be applied to a wide range of species. The effectiveness of the framework is shown by analyzing African elephant and beluga whale vocalizations. The features extracted from the African elephant data are used as input to a supervised classification system and compared to results from traditional statistical tests. The gPLP features extracted from the beluga whale data are used in an unsupervised classification system and the results are compared to labels assigned by experts. The development of a framework from which to build animal vocalization classifiers will provide bioacoustics researchers with a consistent platform to analyze and classify vocalizations. A common framework will also allow studies to compare results across species and institutions. In addition, the use of automated classification techniques can speed analysis and uncover behavioral correlations not readily apparent using traditional techniques.
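    The HMM-based classification this record relies on reduces, at its core, to scoring a feature sequence under several per-class models and picking the most likely one. A minimal sketch of that scoring step using the scaled forward algorithm over discrete symbols (the toy two-state models and symbol sequence below are invented for illustration; the actual framework uses gPLP feature vectors and trained per-call-type HMMs):

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (rescaled to avoid underflow).

    pi: (S,) initial state probabilities
    A:  (S, S) transitions, A[i, j] = P(next state j | state i)
    B:  (S, V) emissions, B[i, v] = P(symbol v | state i)
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then weight by emission
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()             # rescale; accumulate log of scale
    return log_lik

# Classify a "call" by comparing likelihoods under two class models.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_low = np.array([[0.8, 0.2], [0.6, 0.4]])    # model that favours symbol 0
B_high = np.array([[0.2, 0.8], [0.4, 0.6]])   # model that favours symbol 1
call = [0, 0, 1, 0, 0]
scores = {name: forward_log_likelihood(call, pi, A, B)
          for name, B in [("low", B_low), ("high", B_high)]}
label = max(scores, key=scores.get)
```

In a real system each class HMM would first be trained (e.g. via Baum-Welch) on labeled vocalizations; only the decision rule is sketched here.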

  19. Ionospheric Simulation System for Satellite Observations and Global Assimilative Model Experiments - ISOGAME

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga; Stephens, Philip; Iijima, Byron A.

    2013-01-01

    Modeling and imaging the Earth's ionosphere as well as understanding its structures, inhomogeneities, and disturbances is a key part of NASA's Heliophysics Directorate science roadmap. This invention provides a design tool for scientific missions focused on the ionosphere. It is a scientifically important and technologically challenging task to assess quantitatively the impact of a new observation system on our capability of imaging and modeling the ionosphere. This question is often raised whenever a new satellite system is proposed, a new type of data is emerging, or a new modeling technique is developed. The proposed constellation would be part of a new observation system with more low-Earth orbiters tracking more radio occultation signals broadcast by Global Navigation Satellite Systems (GNSS) than those offered by the current GPS and COSMIC observation system. A simulation system was developed to fulfill this task. The system is composed of a suite of software that combines the Global Assimilative Ionospheric Model (GAIM), including first-principles and empirical ionospheric models, a multiple-dipole geomagnetic field model, data assimilation modules, an observation simulator, visualization software, and orbit design, simulation, and optimization software.
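    The observing-system assessment described here (simulate observations, assimilate them, measure how much the analysis improves) can be illustrated in its simplest scalar form with sequential Kalman updates. Everything below is a toy analogue: the numbers, the scalar state, and the Gaussian observation model are illustrative assumptions, standing in for GAIM's assimilation of simulated GNSS occultation data.

```python
import random

def assimilation_error(n_obs, truth=100.0, prior=90.0,
                       prior_var=25.0, obs_var=4.0, seed=1):
    """Scalar analogue of an observing-system simulation experiment.

    Synthetic noisy observations of a known 'truth' are assimilated
    into a model prior with sequential Kalman updates. The remaining
    analysis error and variance measure the value of the observation
    set. (Toy numbers throughout, not from the cited system.)
    """
    rng = random.Random(seed)
    x, var = prior, prior_var
    for _ in range(n_obs):
        y = truth + rng.gauss(0.0, obs_var ** 0.5)  # simulated observation
        gain = var / (var + obs_var)                # Kalman gain
        x += gain * (y - x)                          # analysis update
        var *= (1.0 - gain)                          # variance shrinks
    return abs(x - truth), var

sparse_err, sparse_var = assimilation_error(n_obs=2)
dense_err, dense_var = assimilation_error(n_obs=50)
# The analysis-error variance shrinks as the simulated observing system
# grows, quantifying the impact of a denser constellation.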

  20. Open source integrated modeling environment Delta Shell

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.

    2012-04-01

    In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps to solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remain challenging tasks. The integrated modelling environment Delta Shell simplifies this task. The software components of Delta Shell are easy to reuse separately from each other as well as part of an integrated environment that can run in a command-line or a graphical user interface mode. Most components of Delta Shell are developed in the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models from the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff model, a river flow model and a run-time control model. The second example shows how a coastal morphology database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.
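    The essence of the model coupling described in this record is that one model's output becomes another model's input at every time step. A minimal sketch of such a coupling loop, with two toy components (a linear-reservoir catchment and a lag-and-attenuate river reach; both models, their parameters, and the rain series are invented here and have nothing to do with Delta Shell's actual components):

```python
class LinearReservoir:
    """Toy rainfall-runoff model: storage drains at fraction k per step."""
    def __init__(self, k=0.2):
        self.k, self.storage = k, 0.0
    def step(self, rainfall):
        self.storage += rainfall
        runoff = self.k * self.storage
        self.storage -= runoff
        return runoff

class RiverReach:
    """Toy river flow model: outflow relaxes toward inflow (lag a)."""
    def __init__(self, a=0.5):
        self.a, self.outflow = a, 0.0
    def step(self, inflow):
        self.outflow += self.a * (inflow - self.outflow)
        return self.outflow

def run_coupled(rain_series):
    """Per-timestep coupling: catchment output feeds the river reach."""
    catchment, reach = LinearReservoir(), RiverReach()
    discharge = []
    for rain in rain_series:
        runoff = catchment.step(rain)          # model 1 produces output...
        discharge.append(reach.step(runoff))   # ...consumed by model 2
    return discharge

# One rain pulse, then dry weather: discharge rises, peaks, and recedes.
q = run_coupled([10.0] + [0.0] * 9)
```

A coupling framework like the one described generalizes this loop: it manages the exchange of data structures between components and the ordering of their step calls, rather than hard-wiring them as above.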
